Forging The Graphs: A Low Rank and Positive Semidefinite Graph Learning Approach
Dijun Luo, Chris Ding, Heng Huang, Feiping Nie
Department of Computer Science and Engineering
The University of Texas at Arlington
[email protected], [email protected]
[email protected], [email protected]
Abstract
In many graph-based machine learning and data mining approaches, the quality of the graph is critical. However, in real-world applications, especially in semi-supervised and unsupervised learning, evaluating the quality of a graph is often expensive and sometimes even impossible, due to the cost or the unavailability of ground truth. In this paper, we propose a robust approach based on convex optimization to "forge" a graph: given an input graph, we learn a graph of higher quality. Our major concern is that an ideal graph should satisfy all of the following constraints: non-negative, symmetric, low rank, and positive semidefinite. We develop a graph learning algorithm by solving a convex optimization problem and further develop an efficient optimization procedure to obtain global optimal solutions with theoretical guarantees. With only one non-sensitive parameter, our method is shown by experimental results to be robust and to achieve higher accuracy in semi-supervised learning and clustering under various settings. As a preprocessing step for graphs, our method has a wide range of potential applications in machine learning and data mining.
1 Introduction
Many machine learning algorithms use graphs as input, such as clustering [16, 14], manifold-based dimensionality reduction [2, 15], and graph-based semi-supervised learning [23, 22]. In these approaches, we are particularly interested in the similarity among objects. However, observed similarity graphs often contain noise which can mislead the learning algorithm, especially in unsupervised and semi-supervised learning. Deriving graphs of high quality has therefore become an attractive topic in machine learning and data mining research.
A robust and stable graph learning algorithm is especially desirable in unsupervised and semi-supervised learning, because of the unavailability or high cost of ground truth in real-world applications. In this paper, we develop a novel graph learning algorithm based on convex optimization, which leads to robust and competitive results.
1.1 Motivation and Main Problem
In this section, the properties of the similarity matrix are revisited from the point of view of normalized cut clustering [19]. Given a symmetric similarity matrix $\mathbf{W} \in \mathbb{R}^{n \times n}$ on $n$ objects, normalized cut solves the following optimization problem [10],
$$\min_{\mathbf{H} \geq 0} \operatorname{tr} \mathbf{H}^\top (\mathbf{D} - \mathbf{W}) \mathbf{H} \quad \text{s.t.} \quad \mathbf{H}^\top \mathbf{D} \mathbf{H} = \mathbf{I}, \qquad (1)$$
where $\mathbf{H} \in \{0, 1\}^{n \times K}$ is the cluster indicator matrix, or equivalently,
$$\max_{\mathbf{F} \geq 0} \operatorname{tr} \mathbf{F}^\top \widetilde{\mathbf{W}} \mathbf{F} \quad \text{s.t.} \quad \mathbf{F}^\top \mathbf{F} = \mathbf{I}, \qquad (2)$$
where $\mathbf{F} = [\mathbf{f}_1, \mathbf{f}_2, \cdots, \mathbf{f}_K]$, $\mathbf{H} = [\mathbf{h}_1, \mathbf{h}_2, \cdots, \mathbf{h}_K]$, $\mathbf{f}_k = \mathbf{D}^{\frac{1}{2}} \mathbf{h}_k / \|\mathbf{D}^{\frac{1}{2}} \mathbf{h}_k\|$, $1 \leq k \leq K$, $\widetilde{\mathbf{W}} = \mathbf{D}^{-\frac{1}{2}} \mathbf{W} \mathbf{D}^{-\frac{1}{2}}$, $\mathbf{D} = \operatorname{diag}(d_1, d_2, \cdots, d_n)$, $d_i = \sum_{j=1}^n W_{ij}$, $\mathbf{I}$ is the identity matrix, and $K$ is the number of groups. Eq. (2) can be further rewritten as
$$\min_{\mathbf{F} \geq 0} \|\widetilde{\mathbf{W}} - \mathbf{F}\mathbf{F}^\top\|_F \quad \text{s.t.} \quad \mathbf{F}^\top \mathbf{F} = \mathbf{I}, \qquad (3)$$
where $\|\cdot\|_F$ denotes the Frobenius norm. We notice that
$$\|\widetilde{\mathbf{W}} - \mathbf{G} + \mathbf{G} - \mathbf{F}\mathbf{F}^\top\|_F \leq \|\widetilde{\mathbf{W}} - \mathbf{G}\|_F + \|\mathbf{G} - \mathbf{F}\mathbf{F}^\top\|_F, \qquad (4)$$
for any $\mathbf{G} \in \mathbb{R}^{n \times n}$. Our goal is to minimize the LHS (left-hand side); instead, we can minimize the RHS, which is an upper bound of the LHS.
Thus we need to find the intermediate matrix $\mathbf{G}$; i.e., we learn a surrogate graph which is close but not identical to $\widetilde{\mathbf{W}}$. Our upper-bounding approach offers flexibility which allows us to impose certain desirable properties. Note that the matrix $\mathbf{F}\mathbf{F}^\top$ has the following properties: (P1) symmetric, (P2) nonnegative, (P3) low rank, and (P4) positive semidefinite. This suggests a convex graph learning model,
$$\min_{\mathbf{G}} \|\mathbf{G} - \widetilde{\mathbf{W}}\|_F^2 \quad \text{s.t.} \quad \mathbf{G} \succcurlyeq 0, \ \|\mathbf{G}\|_* \leq c, \ \mathbf{G} = \mathbf{G}^\top, \ \mathbf{G} \geq 0, \qquad (5)$$
where $\mathbf{G} \succcurlyeq 0$ denotes the positive semidefinite constraint, $\|\cdot\|_*$ denotes the trace norm, i.e. the sum of the singular values [8], and $c$ is a model parameter which controls the rank of $\mathbf{G}$. The constraint $\mathbf{G} \geq 0$ forces the similarities to be naturally non-negative. By intuition, one might impose a low rank constraint of $\operatorname{rank}(\mathbf{G}) \leq c$. But this leads to a non-convex optimization problem, which is undesirable in unsupervised and semi-supervised learning. Following matrix completion methods [5], the trace norm constraint in Eq. (5) is a good surrogate for the low rank constraint. For notational convenience, the normalized similarity matrix $\widetilde{\mathbf{W}}$ is denoted as $\mathbf{W}$ in the rest of the paper.
By solving Eq. (5), we are actually seeking a similarity matrix which satisfies all the properties of a perfect similarity matrix (P1–P4) and which is close to the original input matrix $\mathbf{W}$. The rest of this paper is dedicated to solving Eq. (5) and to demonstrating the usefulness of its optimal solution in clustering and semi-supervised learning, using both theoretical and empirical evidence.
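To make the normalization above concrete, here is a minimal NumPy sketch (ours, not part of the paper) of the map $\mathbf{W} \mapsto \widetilde{\mathbf{W}} = \mathbf{D}^{-1/2}\mathbf{W}\mathbf{D}^{-1/2}$ that produces the input of Eq. (5):

```python
import numpy as np

def normalize_similarity(W):
    """Symmetric normalization W_tilde = D^{-1/2} W D^{-1/2},
    with D = diag(d_1, ..., d_n) and d_i = sum_j W_ij."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))  # guard for isolated nodes
    return W * np.outer(d_inv_sqrt, d_inv_sqrt)
```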
1.2 Related Work
Our method can be viewed as a preprocessing step for similarity matrices, and a large number of machine learning and data mining approaches require a similarity matrix (interpreted as a weighted graph) as input. For example, in unsupervised clustering, Normalized Cut [19], Ratio Cut [11], and Cheeger Cut [3] have been widely applied in various real-world applications. Graphical models for relational data, e.g. Mixed Membership Block models [1], can also be interpreted as generative models on the similarity matrices among objects. Thus a similarity matrix preprocessing model can be widely applied.
A large number of approaches have been developed to learn similarity matrices with different emphases. Local Linear Embedding (LLE) [17, 18] and Linear Label Propagation [21] can be viewed as obtaining a similarity matrix using sparse coding. Another way to perform similarity matrix preprocessing is to take a graph as input and to obtain a refined graph by learning, such as bi-stochastic graph learning [13]. Our method falls in this category. We will compare our method with these methods in the experimental section.
On optimization techniques for problems with multiple constraints, there also exists much related research. First, von Neumann provided a convergence proof for the successive projection method, which guarantees convergence to a feasible solution in convex optimization with multiple constraints; this was employed in the paper by Liu et al. [13]. In this paper, we develop a novel optimization algorithm to solve the optimization problem with multiple convex constraints (including the inequality constraints), which is guaranteed to find the global solution. More explicitly, we develop a variant of the inexact Augmented Lagrangian Multiplier method to handle inequality constraints. We also develop a useful Lemma to handle the subproblems with trace norm constraints in the main algorithm. Interestingly, one of the derived subproblems is the $\ell_1$ ball projection problem, which can be solved elegantly by simple thresholding.
[Figure 1: five panels (a), (b), (c1), (c2), and (d); panel (d) plots the sorted eigenvalues (y-axis: eigenvalues, from -1 to 2; x-axis: sorting index, 0 to 30) of the solutions of Eq. (5) and Eq. (6).]
Figure 1: A toy example of low rank and positive semidefinite graph learning. (a): A perfect similarity matrix. (b): Adding noise to (a). (c1): the optimal solution of Eq. (5) using the matrix in (b) as the input W. (c2): the optimal solution of Eq. (6) using the matrix in (b) as the input W. (d): sorted eigenvalues for the two solutions of Eq. (5) and Eq. (6).
2 A Toy Example
We first emphasize the usefulness of the positive semidefinite and low rank constraints in the problem of Eq. (5) using a toy example. In this toy example, we also solve the following problem for contrast,
$$\min_{\mathbf{G}} \|\mathbf{G} - \mathbf{W}\|_F^2 \quad \text{s.t.} \quad \mathbf{G} = \mathbf{G}^\top, \ \mathbf{G}\mathbf{e} = \mathbf{e}, \ \mathbf{G} \geq 0, \qquad (6)$$
where $\mathbf{e} = [1, 1, \cdots, 1]^\top$, the positive semidefinite and low rank constraints are removed from Eq. (5), and a bi-stochastic constraint ($\mathbf{G}\mathbf{e} = \mathbf{e}$) is applied instead. Notice that the model defined in Eq. (6) is the one used in bi-stochastic graph learning [13]. We solve Eqs. (5) and (6) for the same input $\mathbf{W}$ and compare the solutions to see the effect of the positive semidefinite and low rank constraints.
In the toy example, we first generate a perfect similarity matrix $\mathbf{W}$ in which $W_{ij} = 1$ if data points $i, j$ are in the same group and $W_{ij} = 0$ otherwise. Three groups of data points (10 data points in each group) are considered. $\mathbf{W}$ is shown in Figure 1 (a), with black denoting zero values. We then randomly add some positive noise to $\mathbf{W}$, which is shown in Figure 1 (b). Then we solve Eqs. (5) and (6), and the resulting $\mathbf{G}$ is shown in Figure 1 (c1) and (c2). The observation is that Eq. (5) recovers the perfect similarity much more accurately than Eq. (6). The reason is that in the model of Eq. (6), the low rank and positive semidefinite constraints are ignored and the result deviates from the ground truth.
We show the eigenvalue distributions of $\mathbf{G}$ in Figure 1 (d) for both methods in Eqs. (5) and (6). One can observe that the solution of Eq. (5) gives a low rank and positive semidefinite result, while the solution of Eq. (6) is full rank and has negative eigenvalues.
Since the solution of Eq. (5) is always non-negative, symmetric, low rank, and positive semidefinite, we call our solution the Non-negative Low-rank Kernel (NLK).
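For concreteness, the toy data can be generated along the following lines (our own sketch; the noise magnitude is an assumption, since the paper does not report it):

```python
import numpy as np

rng = np.random.default_rng(0)
K, m = 3, 10                      # three groups, 10 points each
n = K * m

# Perfect block-diagonal similarity matrix: W_ij = 1 within a group.
W = np.zeros((n, n))
for k in range(K):
    W[k*m:(k+1)*m, k*m:(k+1)*m] = 1.0

# Add symmetric positive noise (magnitude 0.5 is an arbitrary choice).
N = 0.5 * rng.random((n, n))
W_noisy = W + (N + N.T) / 2
```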
2.1 NLK for Semi-supervised Learning
Although NLK is mainly developed for unsupervised learning, it can be easily extended to incorporate the label information in semi-supervised learning [23]. Assume we are given a set of data $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_\ell, \mathbf{x}_{\ell+1}, \cdots, \mathbf{x}_n]$ where the first $\ell$ data points are labeled as $[y_1, y_2, \cdots, y_\ell]$. Then we have more information with which to learn a better similarity matrix. Here we add additional constraints to Eq. (5) by enforcing the similarity to be zero for those pairs of data points in different classes, i.e. $G_{ij} = 0$ if $y_i \neq y_j$, $1 \leq i, j \leq \ell$. By considering all the constraints, we optimize the following,
$$\min_{\mathbf{G}} \|\mathbf{G} - \mathbf{W}\|_F^2 \quad \text{s.t.} \quad \mathbf{G} \succcurlyeq 0, \ \|\mathbf{G}\|_* \leq c, \ \mathbf{G} = \mathbf{G}^\top, \ \mathbf{G} \geq 0, \ G_{ij} = 0 \ \forall y_i \neq y_j. \qquad (7)$$
We will demonstrate the advantage of these semi-supervision constraints in the experimental section. The computational algorithm is given in §3.3.
3 Optimization
The optimization problems in Eqs. (5) and (7) are non-trivial since there are multiple constraints, including both equality and inequality constraints. Our strategy is to introduce two extra copies of the optimization variable, $\mathbf{X}$ and $\mathbf{Y}$, to split the constraints into several directly solvable subproblems,
$$\min_{\mathbf{G}} \|\mathbf{G} - \mathbf{W}\|_F^2 \quad \text{s.t.} \quad \mathbf{G} \geq 0, \qquad (8a)$$
$$\min_{\mathbf{X}} \|\mathbf{X} - \mathbf{W}\|_F^2 \quad \text{s.t.} \quad \mathbf{X} \succcurlyeq 0, \text{ with } \mathbf{X} = \mathbf{G}, \qquad (8b)$$
$$\min_{\mathbf{Y}} \|\mathbf{Y} - \mathbf{W}\|_F^2 \quad \text{s.t.} \quad \|\mathbf{Y}\|_* \leq c, \text{ with } \mathbf{Y} = \mathbf{G}. \qquad (8c)$$
More formally, we solve the following problem,
$$\min_{\mathbf{G}} \ \|\mathbf{G} - \mathbf{W}\|_F^2 \qquad (9a)$$
$$\text{s.t.} \quad \mathbf{G} \geq 0, \qquad (9b)$$
$$\mathbf{X} = \mathbf{G}, \ \mathbf{X} \succcurlyeq 0, \qquad (9c)$$
$$\mathbf{Y} = \mathbf{G}, \ \|\mathbf{Y}\|_* \leq c. \qquad (9d)$$
One should notice that the problem in Eqs. (9a) – (9d) is equivalent to our main problem in Eq. (5). In the rest of this section, we will employ a variant of the Augmented Lagrangian Multiplier (ALM) method to solve Eqs. (9a) – (9d).
3.1 Seeking Global Solutions: A Variant of ALM
The augmented Lagrangian function of Eqs. (9a) – (9d) is
$$\Phi(\mathbf{G}, \mathbf{X}, \mathbf{Y}) = \|\mathbf{G} - \mathbf{W}\|_F^2 - \langle \mathbf{\Lambda}, \mathbf{X} - \mathbf{G} \rangle + \frac{\rho}{2}\|\mathbf{G} - \mathbf{X}\|_F^2 - \langle \mathbf{\Sigma}, \mathbf{Y} - \mathbf{G} \rangle + \frac{\rho}{2}\|\mathbf{G} - \mathbf{Y}\|_F^2, \qquad (10)$$
with constraints $\mathbf{G} \geq 0$, $\mathbf{X} \succcurlyeq 0$, and $\|\mathbf{Y}\|_* \leq c$, where $\mathbf{\Lambda}, \mathbf{\Sigma}$ are the Lagrangian multipliers. The ALM method then leads to the following updating steps,
$$\mathbf{G} \leftarrow \arg\min_{\mathbf{G} \geq 0} \Phi(\mathbf{G}, \mathbf{X}, \mathbf{Y}) \qquad (11a)$$
$$\mathbf{X} \leftarrow \arg\min_{\mathbf{X} \succcurlyeq 0} \Phi(\mathbf{G}, \mathbf{X}, \mathbf{Y}) \qquad (11b)$$
$$\mathbf{Y} \leftarrow \arg\min_{\|\mathbf{Y}\|_* \leq c} \Phi(\mathbf{G}, \mathbf{X}, \mathbf{Y}) \qquad (11c)$$
$$\mathbf{\Lambda} \leftarrow \mathbf{\Lambda} - \rho(\mathbf{X} - \mathbf{G}) \qquad (11d)$$
$$\mathbf{\Sigma} \leftarrow \mathbf{\Sigma} - \rho(\mathbf{Y} - \mathbf{G}) \qquad (11e)$$
$$\rho \leftarrow \mu\rho, \quad t \leftarrow t + 1. \qquad (11f)$$
Notice that the symmetric constraint is removed here. We will later show that, given a symmetric input $\mathbf{W}$, the output of our algorithm automatically satisfies the symmetric constraint.
3.2 Solving the Subproblems in ALM
The $\mathbf{X}$ and $\mathbf{Y}$ updates in Eqs. (11b) and (11c) contain eigenvalue constraints, which appear complicated. Fortunately, they have closed form solutions. To show this, we first introduce the following useful Lemma.
Lemma 3.1. Consider the following problem,
$$\min_{\mathbf{X}} \|\mathbf{X} - \mathbf{A}\|_F^2, \quad \text{s.t.} \quad \psi_i(\mathbf{X}) \leq c_i, \ 1 \leq i \leq m, \qquad (12)$$
where each $\psi_i(\mathbf{X}) \leq c_i$ is any constraint on the eigenvalues of $\mathbf{X}$, $i = 1, 2, \cdots, m$, and $m$ is the number of constraints. Then there exists a diagonal matrix $\mathbf{S}$ such that $\mathbf{U}\mathbf{S}\mathbf{U}^\top$ is an optimizer of Eq. (12), where $\mathbf{U}\mathbf{D}\mathbf{U}^\top = \mathbf{A}$ is the eigenvector decomposition of $\mathbf{A}$; $\mathbf{S}$ relates to the eigenvalues in $\mathbf{D} = \operatorname{diag}(d_1, \cdots, d_n)$ and satisfies the constraints.
Proof. Let $\mathbf{V}\mathbf{S}\mathbf{V}^\top = \mathbf{X}$ and $\mathbf{U}\mathbf{D}\mathbf{U}^\top = \mathbf{A}$ be the eigenvector decompositions of $\mathbf{X}$ and $\mathbf{A}$, respectively. By applying von Neumann's trace inequality, the following holds for any $\mathbf{X}$ and $\mathbf{A}$,
$$\operatorname{tr} \mathbf{X}^\top \mathbf{A} \leq \operatorname{tr} \mathbf{S}\mathbf{D}. \qquad (13)$$
Then
$$\operatorname{tr} \mathbf{V}\mathbf{S}\mathbf{V}^\top \mathbf{A} = \operatorname{tr} \mathbf{X}^\top \mathbf{A} \leq \operatorname{tr} \mathbf{S}\mathbf{D} = \operatorname{tr} (\mathbf{U}\mathbf{S}\mathbf{U}^\top)(\mathbf{U}\mathbf{D}\mathbf{U}^\top) = \operatorname{tr} (\mathbf{U}\mathbf{S}\mathbf{U}^\top)^\top \mathbf{A}, \qquad (14)$$
which leads to
$$\|\mathbf{U}\mathbf{S}\mathbf{U}^\top - \mathbf{A}\|_F^2 \leq \|\mathbf{V}\mathbf{S}\mathbf{V}^\top - \mathbf{A}\|_F^2. \qquad (15)$$
Now assume $\mathbf{X} = \mathbf{V}\mathbf{S}\mathbf{V}^\top$ is a minimizer of Eq. (12). Comparing the two solutions $\mathbf{X} = \mathbf{V}\mathbf{S}\mathbf{V}^\top$ and $\mathbf{Z} = \mathbf{U}\mathbf{S}\mathbf{U}^\top$, one should notice (a) that $\mathbf{Z}$ satisfies all the constraints of Eq. (12), since $\psi_i(\mathbf{Z}) = \psi_i(\mathbf{X}) \leq c_i$, $1 \leq i \leq m$, and (b) that $\mathbf{Z}$ gives an equal or smaller value of the objective; thus $\mathbf{Z} = \mathbf{U}\mathbf{S}\mathbf{U}^\top$ is also a minimizer of Eq. (12).
Lemma 3.1 shows an interesting property of matrix approximation with eigenvalue or singular value constraints: the optimal solution shares the same eigenvector subspace as the input matrix. This is useful, because once the subspace is determined, the whole optimization becomes much easier. Thus the lemma provides a powerful mathematical tool for the computation of optimization problems with eigenvalue and singular value constraints. Here, we apply Lemma 3.1 to solve the updates of $\mathbf{X}$ and $\mathbf{Y}$ in §3.2.2 – 3.2.3.
3.2.1 Updating G
By ignoring the terms irrelevant with respect to $\mathbf{G}$, we can rewrite Eq. (11a) as follows,
$$\mathbf{G} \leftarrow \arg\min_{\mathbf{G} \geq 0} \|(2 + 2\rho)\mathbf{G} - (2\mathbf{W} + \rho(\mathbf{X} + \mathbf{Y}) + \mathbf{\Lambda} + \mathbf{\Sigma})\|_F^2 + \text{const} \qquad (16)$$
$$= \max\left(\frac{2\mathbf{W} + \rho(\mathbf{X} + \mathbf{Y}) + \mathbf{\Lambda} + \mathbf{\Sigma}}{2 + 2\rho}, 0\right). \qquad (17)$$
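In code, this closed-form step is a single clipped affine combination; a minimal sketch (ours, with symbols as in Eq. (17)) is:

```python
import numpy as np

def update_G(W, X, Y, Lam, Sig, rho):
    """Closed-form G update of Eq. (17): a scaled combination of the
    current estimates, clipped at zero elementwise."""
    return np.maximum((2 * W + rho * (X + Y) + Lam + Sig) / (2 + 2 * rho), 0.0)
```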
3.2.2 Updating X
For Eq. (11b), we need to solve the following subproblem,
$$\min_{\mathbf{X}} \|\mathbf{X} - \mathbf{P}\|_F^2, \quad \mathbf{X} \succcurlyeq 0, \quad \text{where } \mathbf{P} = \mathbf{G} + \mathbf{\Lambda}/\rho. \qquad (18)$$
Notice that $\mathbf{X} \succcurlyeq 0$ is a constraint on the eigenvalues of $\mathbf{X}$. We can therefore directly apply Lemma 3.1: $\mathbf{X}$ can be written as $\mathbf{U}\mathbf{S}\mathbf{U}^\top$ and Eq. (18) becomes
$$\min_{\mathbf{S}} \|\mathbf{U}\mathbf{S}\mathbf{U}^\top - \mathbf{U}\mathbf{D}\mathbf{U}^\top\|_F^2, \quad \text{s.t.} \quad \mathbf{S} \geq 0, \qquad (19)$$
where $\mathbf{U}\mathbf{D}\mathbf{U}^\top = \mathbf{P}$ is the eigenvector decomposition of $\mathbf{P}$. Let $\mathbf{S} = \operatorname{diag}(s_1, s_2, \cdots, s_n)$ and $\mathbf{D} = \operatorname{diag}(d_1, d_2, \cdots, d_n)$. Then Eq. (19) can be further rewritten as
$$\min_{s_1, s_2, \cdots, s_n} \sum_{i=1}^n (s_i - d_i)^2, \quad \text{s.t.} \quad s_i \geq 0, \ i = 1, 2, \cdots, n. \qquad (20)$$
Eq. (20) has the simple closed form solution $s_i = \max(d_i, 0)$, $i = 1, 2, \cdots, n$.
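This $\mathbf{X}$ update is exactly the Euclidean projection onto the positive semidefinite cone; a minimal sketch (ours) is:

```python
import numpy as np

def update_X(G, Lam, rho):
    """PSD projection of P = G + Lam/rho (Eqs. (18)-(20)):
    eigendecompose and clip negative eigenvalues at zero."""
    P = G + Lam / rho
    d, U = np.linalg.eigh((P + P.T) / 2)  # symmetrize for numerical safety
    return (U * np.maximum(d, 0.0)) @ U.T
```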
3.2.3 Updating Y
Eq. (11c) can be rewritten as
$$\min_{\mathbf{Y}} \|\mathbf{Y} - \mathbf{Q}\|_F^2, \quad \|\mathbf{Y}\|_* \leq c, \quad \text{where } \mathbf{Q} = \mathbf{G} + \frac{1}{\rho}\mathbf{\Sigma}. \qquad (21)$$
The corresponding Lagrangian function is
$$\mathcal{L}(\mathbf{Y}, \eta) = \|\mathbf{Y} - \mathbf{Q}\|_F^2 + \eta(\|\mathbf{Y}\|_* - c). \qquad (22)$$
Since we do not know the true Lagrangian multiplier $\eta$, we cannot directly apply the singular value thresholding technique [4]. However, we find Lemma 3.1 useful again. We notice that $\mathbf{Y}$ is symmetric and that the constraint $\|\mathbf{Y}\|_* \leq c$ becomes a constraint on the eigenvalues of $\mathbf{Y}$. Let $\mathbf{Y} = \mathbf{U}\mathbf{S}\mathbf{U}^\top$; by directly applying Lemma 3.1, Eq. (21) can be further written as
$$\min_{\mathbf{S}} \|\mathbf{U}\mathbf{S}\mathbf{U}^\top - \mathbf{U}\mathbf{D}\mathbf{U}^\top\|_F^2, \quad \text{s.t.} \quad \sum_{i=1}^n |s_i| \leq c, \qquad (23)$$
or,
$$\min_{\mathbf{s}} \|\mathbf{s} - \mathbf{d}\|^2, \quad \text{s.t.} \quad \sum_{i=1}^n |s_i| \leq c, \qquad (24)$$
where $\mathbf{S} = \operatorname{diag}(\mathbf{s})$, $\mathbf{s} = [s_1, s_2, \cdots, s_n]^\top$, $\mathbf{D} = \operatorname{diag}(\mathbf{d})$, and $\mathbf{d} = [d_1, d_2, \cdots, d_n]^\top$.
Interestingly, the above problem is a standard $\ell_1$ ball projection problem, which has been studied for a long time; Duchi et al. recently provided a simple and elegant solution [7]. The final solution is to search for the least $\eta \geq 0$ such that $\sum_i \max(|d_i| - \eta, 0) \leq c$, i.e.
$$\eta = \arg\min_{\eta} \eta \quad \text{s.t.} \quad \sum_{i=1}^n \max(|d_i| - \eta, 0) \leq c. \qquad (25)$$
This can be easily done by sorting the $|d_i|$ and trying the $\eta$ values between two consecutive sorted $|d_i|$. The solution is then
$$s_i = \operatorname{sign}(d_i) \max(|d_i| - \eta, 0). \qquad (26)$$
Notice that each step of the algorithm has a closed form solution, and that the output $\mathbf{G}$ is always symmetric, which indicates that the constraint $\mathbf{G} = \mathbf{G}^\top$ is automatically satisfied at each step.
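The $\mathbf{Y}$ update therefore reduces to an $\ell_1$ ball projection of the eigenvalues of $\mathbf{Q}$. A minimal sketch (ours, following the sort-based scheme of Duchi et al. [7]) is:

```python
import numpy as np

def project_l1_ball(d, c):
    """Project vector d onto the l1 ball of radius c (Eqs. (24)-(26))."""
    if np.abs(d).sum() <= c:
        return d.copy()                      # already feasible
    u = np.sort(np.abs(d))[::-1]             # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(d) + 1) > css - c)[0][-1]
    eta = (css[k] - c) / (k + 1.0)           # threshold eta of Eq. (25)
    return np.sign(d) * np.maximum(np.abs(d) - eta, 0.0)

def update_Y(G, Sig, rho, c):
    """Eigen-projection of Q = G + Sig/rho onto the trace-norm ball."""
    Q = G + Sig / rho
    d, U = np.linalg.eigh((Q + Q.T) / 2)
    return (U * project_l1_ball(d, c)) @ U.T
```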
3.3 NLK Algorithm For Semi-supervised Learning
In many real world settings, we know some of the data class labels and hope to further utilize such information, as described in Eq. (7). Fortunately, the corresponding optimization problem remains convex. The augmented Lagrangian function is
$$\Phi(\mathbf{G}, \mathbf{X}, \mathbf{Y}) = \|\mathbf{G} - \mathbf{W}\|_F^2 - \langle \mathbf{\Lambda}, \mathbf{X} - \mathbf{G} \rangle + \frac{\rho}{2}\|\mathbf{X} - \mathbf{G}\|_F^2 - \langle \mathbf{\Sigma}, \mathbf{Y} - \mathbf{G} \rangle + \frac{\rho}{2}\|\mathbf{Y} - \mathbf{G}\|_F^2 + \sum_{(i,j) \in T} \left( \frac{\rho}{2} G_{ij}^2 - \Theta_{ij} G_{ij} \right). \qquad (27)$$
This is identical to Eq. (10), except that we have added $\mathbf{\Theta}$ as an additional Lagrangian multiplier for the semi-supervised constraints, i.e. the desired similarity $G_{ij} = 0$ for $(i, j)$ having different known class labels. Here $T = \{(i, j) : y_i \neq y_j, \ i, j = 1, 2, \cdots, \ell\}$.
We modify the algorithm of Eqs. (11a–11f) to solve this problem. The updates of $\mathbf{X}$ and $\mathbf{Y}$ remain the same as in the NLK algorithm described previously. To update $\mathbf{G}$, we set $\partial \Phi(\mathbf{G}, \mathbf{X}, \mathbf{Y}) / \partial \mathbf{G} = 0$ and obtain
$$G_{ij} \leftarrow \begin{cases} \max\left(\dfrac{2W_{ij} + \rho(X_{ij} + Y_{ij}) + \Lambda_{ij} + \Sigma_{ij} + \Theta_{ij}}{2 + 3\rho}, 0\right) & \text{if } y_i \neq y_j, \\[2ex] \max\left(\dfrac{2W_{ij} + \rho(X_{ij} + Y_{ij}) + \Lambda_{ij} + \Sigma_{ij}}{2 + 2\rho}, 0\right) & \text{otherwise.} \end{cases} \qquad (28)$$
For the Lagrangian multiplier $\mathbf{\Theta}$, the corresponding update is
$$\Theta_{ij} \leftarrow \Theta_{ij} - \rho G_{ij}, \quad \forall y_i \neq y_j. \qquad (29)$$
Thus the semi-supervised learning algorithm is nearly identical to the unsupervised learning algorithm, which is one strength of our unified NLK approach.
We summarize the NLK algorithms for unsupervised and semi-supervised learning in Algorithm 1. In the algorithm, Lines 4 and 9 apply only to semi-supervised learning, while the other lines are shared.
Algorithm 1 NLK Algorithm for Unsupervised Learning and Semi-supervised Learning
Require: Weighted graph $\mathbf{W}$, model parameter $c$, optimization parameter $\mu$, partial labels $\mathbf{y}$ for semi-supervised learning.
1: Initialization: $\mathbf{G} = \mathbf{W}$, $\mathbf{\Lambda} = 0$, $\mathbf{\Sigma} = 0$, $\mathbf{\Theta} = 0$, $\rho = 1$.
2: while not converged do
3:   For unsupervised learning, $\mathbf{G} \leftarrow \max\left(\frac{2\mathbf{W} + \rho(\mathbf{X} + \mathbf{Y}) + \mathbf{\Lambda} + \mathbf{\Sigma}}{2 + 2\rho}, 0\right)$.
4:   For semi-supervised learning, update $\mathbf{G}$ using Eq. (28).
5:   $\mathbf{X} \leftarrow \mathbf{U}\mathbf{D}_+\mathbf{U}^\top$, where $\mathbf{U}\mathbf{D}\mathbf{U}^\top = \mathbf{G} + \mathbf{\Lambda}/\rho$.
6:   $\mathbf{Y} \leftarrow \mathbf{U}\mathbf{S}\mathbf{U}^\top$, where $\mathbf{U}\mathbf{D}\mathbf{U}^\top = \mathbf{G} + \mathbf{\Sigma}/\rho$ and $\mathbf{S}$ is computed by Eq. (26).
7:   $\mathbf{\Lambda} \leftarrow \mathbf{\Lambda} - \rho(\mathbf{X} - \mathbf{G})$.
8:   $\mathbf{\Sigma} \leftarrow \mathbf{\Sigma} - \rho(\mathbf{Y} - \mathbf{G})$.
9:   For semi-supervised learning, $\Theta_{ij} \leftarrow \Theta_{ij} - \rho G_{ij}$, $\forall y_i \neq y_j$.
10:  $\rho \leftarrow \mu\rho$.
11: end while
12: return $\mathbf{G}$
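Putting the steps together, a compact NumPy sketch of Algorithm 1 (our own reading of the paper, reusing update_X, update_Y, and project_l1_ball from the sketches above; the stopping tolerance follows §4.2, and the step factor μ = 1.1 is an assumed value, since the paper does not report it) is:

```python
import numpy as np

def nlk(W, c, mu=1.1, tol=1e-10, mask=None, max_iter=500):
    """Algorithm 1: NLK via a variant of inexact ALM.
    mask (optional): boolean matrix, True where y_i != y_j, enabling
    the semi-supervised update of Eq. (28)."""
    n = W.shape[0]
    G = W.copy()
    X = np.zeros((n, n)); Y = np.zeros((n, n))
    Lam = np.zeros((n, n)); Sig = np.zeros((n, n)); The = np.zeros((n, n))
    rho = 1.0
    for _ in range(max_iter):
        G_old = G
        # Lines 3-4: closed-form G update, Eq. (17) or Eq. (28).
        num = 2 * W + rho * (X + Y) + Lam + Sig
        G = np.maximum(num / (2 + 2 * rho), 0.0)
        if mask is not None:
            G_semi = np.maximum((num + The) / (2 + 3 * rho), 0.0)
            G = np.where(mask, G_semi, G)
        # Line 5: PSD projection of G + Lam/rho.
        X = update_X(G, Lam, rho)
        # Line 6: trace-norm (eigenvalue l1 ball) projection of G + Sig/rho.
        Y = update_Y(G, Sig, rho, c)
        # Lines 7-9: multiplier updates.
        Lam -= rho * (X - G)
        Sig -= rho * (Y - G)
        if mask is not None:
            The = np.where(mask, The - rho * G, The)
        # Line 10 and the stopping rule of Section 4.2.
        rho *= mu
        if np.linalg.norm(G - G_old)**2 <= tol * np.linalg.norm(G_old)**2:
            break
    return G
```

In the semi-supervised variant, mask[i, j] = True encodes the pairs in $T$ with known different labels, which activates the Eq. (28) branch and the $\mathbf{\Theta}$ update.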
3.4 Theoretical Analysis of The Algorithm
Since the objective function and all the constraints are convex, we have the following result [12].
Theorem 3.2. Algorithm 1 converges to the global solution of Eq. (5) or Eq. (7).
Notice that this conclusion is stronger than the corresponding results for graph learning in related work [13].
4 Experimental Validation
As mentioned in the introduction, the optimization result of NLK (Eq. (5)) can be used as preprocessing for any graph based method. Here we evaluate NLK with several state-of-the-art graph based learning models, including Normalized Cut (Ncut) [19] for unsupervised learning, and Gaussian Fields and Harmonic Functions (GFHF) and Local and Global Consistency learning (LGC) for semi-supervised learning. We compare the clustering results in terms of both clustering accuracy and normalized mutual information (NMI). For the semi-supervised learning model (Eq. (7)), we evaluate our models on the GFHF and LGC models. For semi-supervised learning, we measure the classification accuracy. We verify the algorithms on four data sets: AT&T (n = 400, p = 644, K = 40), BinAlpha (n = 1404, p = 320, K = 36), Segment (n = 2310, p = 19, K = 7), and Vehicle (n = 946, p = 18, K = 4) from the UCI repository [9], where n, p, and K are the number of data points, features, and classes, respectively.
4.1 Experimental Settings
For clustering, we compare three similarity matrices: (1) the original Gaussian kernel matrix, $w_{ij} = \exp\left(-\|\mathbf{x}_i - \mathbf{x}_j\|^2 / 2\sigma^2\right)$, where $\sigma$ is set to the average pairwise distance among all the data points; (2) the BBS (Bregmanian Bi-Stochastication) matrix [20]; and (3) our method (NLK). The Normalized Cut clustering algorithm [19] is applied to the three similarity matrices. We thus have three clustering approaches in total: Normalized Cut (Ncut), BBS+Ncut, and NLK+Ncut. For each clustering method, we run 100 random trials with different clustering initializations. For semi-supervised learning, we test three basic graph-based semi-supervised learning models: Gaussian Fields and Harmonic Functions (GFHF) [23], Local and Global Consistency learning (LGC) [22], and Green's function learning (Green) [6]. We compare 4 types of similarity matrices: the original Gaussian kernel matrix, as discussed before, BBS, NLK, and NLK with semi-supervised constraints (the model in Eq. (7), denoted by NLK_Semi). In total, we thus have 3 × 4 methods. For each method, we randomly split the data 30%/70%, where the 30% is used as labeled data and the other 70% as test data. We repeat this over 100 random splits and report the averages and standard deviations.
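A minimal sketch of the Gaussian kernel similarity used here (ours; SciPy's pairwise distances are used for brevity) is:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def gaussian_similarity(X):
    """w_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)), with sigma set to the
    average pairwise distance among all data points, as in Section 4.1."""
    D = squareform(pdist(X))          # n x n Euclidean distance matrix
    sigma = D[np.triu_indices_from(D, k=1)].mean()
    return np.exp(-D**2 / (2 * sigma**2))
```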
4.2 Parameter Settings
For all the similarity learning approaches (BBS, NLK, and NLK_Semi), we set the convergence criterion as follows: we stop the algorithms when $\|\mathbf{G}_{t+1} - \mathbf{G}_t\|_F^2 / \|\mathbf{G}_t\|_F^2 < 10^{-10}$. For our methods (NLK and NLK_Semi), there is one model parameter $c$, which is always set to $c = 0.5\|\mathbf{W}\|_*$, where $\mathbf{W}$ is the input similarity matrix.

Table 1: Clustering accuracy and NMI comparison of the 3 methods, Normalized Cut (Ncut), BBS+Ncut, and NLK+Ncut, on the 4 data sets. The best results are highlighted in bold.

Accuracy:
Data set  | Ncut          | BBS+Ncut      | NLK+Ncut
AT&T      | 0.607 ± 0.022 | 0.686 ± 0.021 | 0.767 ± 0.006
BinAlpha  | 0.431 ± 0.018 | 0.444 ± 0.022 | 0.490 ± 0.009
Segment   | 0.613 ± 0.018 | 0.593 ± 0.009 | 0.616 ± 0.002
Vehicle   | 0.383 ± 0.001 | 0.383 ± 0.000 | 0.426 ± 0.000

NMI:
Data set  | Ncut          | BBS+Ncut      | NLK+Ncut
AT&T      | 0.785 ± 0.025 | 0.836 ± 0.026 | 0.873 ± 0.025
BinAlpha  | 0.618 ± 0.013 | 0.629 ± 0.015 | 0.673 ± 0.011
Segment   | 0.528 ± 0.016 | 0.579 ± 0.013 | 0.538 ± 0.002
Vehicle   | 0.121 ± 0.001 | 0.122 ± 0.000 | 0.184 ± 0.000
4.3 Experimental Results
We show the clustering results in Table 1, where we compare both measurements (accuracy, NMI) for the 3 methods on the 4 data sets. For each method, we report the average performance and the corresponding standard deviation. Out of the 4 data sets, our method outperforms all the other methods in both measurements on 3 data sets (AT&T, BinAlpha, and Vehicle).
We also test the semi-supervised learning performance of the 12 methods on the 4 data sets. For each method on each data set, we show the original performance values with dots. Also shown are the average accuracies and the corresponding standard deviations. On the 4 data sets, our methods (NLK and NLK_Semi) outperform the other methods.
[Figure 2: dot plots of per-split classification accuracies, with mean ± standard deviation annotations, for the methods Original, BBS, NLK, and NLK_Semi on the AT&T, BinAlpha, Segmentation, and Vehicle data sets; one panel column per semi-supervised model: (a) GFHF, (b) LGC, (c) Green.]
Figure 2: Semi-supervised learning performance of the 12 methods on the 4 data sets. The original accuracy value for each random split is plotted with dots. Also shown are the average accuracies and the corresponding standard deviations.
5 Conclusions and Discussion
In this paper, we derive a similarity learning model based on convex optimization. We demonstrate that the low rank and positive semidefinite constraints are natural for similarity matrices. Furthermore, we develop a new efficient algorithm to obtain the global solution with theoretical guarantees. We also develop further optimization techniques that are potentially useful in related optimization problems with eigenvalue or singular value constraints. The presented model is verified in extensive experiments, and the results show that our method enhances the quality of the similarity matrix significantly, in both clustering and semi-supervised learning.
Acknowledgement This research was partially supported by NSF-CCF 0830780, NSF-DMS 0915228, NSF-CCF 0917274, NSF-IIS 1117965.
References
[1] E. Airoldi, D. Blei, E. Xing, and S. Fienberg. A latent mixed membership model for relational data. In Proceedings of the 3rd International Workshop on Link Discovery, pages 82–89. ACM, 2005.
[2] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[3] T. Bühler and M. Hein. Spectral clustering based on the graph p-Laplacian. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 81–88. ACM, 2009.
[4] J. Cai, E. Candes, and Z. Shen. A singular value thresholding algorithm for matrix completion. IEEE Trans. Inform. Theory, 56(5):2053–2080, 2008.
[5] E. Candes and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
[6] C. Ding, R. Jin, T. Li, and H. Simon. A learning framework using Green's function and kernel regularization with application to recommender system. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 260–269. ACM, 2007.
[7] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 272–279. ACM, 2008.
[8] M. Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.
[9] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[10] M. Gu, H. Zha, C. Ding, X. He, H. Simon, and J. Xia. Spectral relaxation models and structure analysis for k-way graph clustering and bi-clustering. UC Berkeley Math Dept Tech Report, 2001.
[11] L. Hagen and A. Kahng. New spectral methods for ratio cut partitioning and clustering. Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, 11(9):1074–1085, 2002.
[12] R. Lewis, V. Torczon, and L. R. Center. A globally convergent augmented Lagrangian pattern search algorithm for optimization with general constraints and simple bounds. SIAM Journal on Optimization, 12(4):1075–1089, 2002.
[13] W. Liu and S. Chang. Robust multi-class transductive learning with graphs. 2009.
[14] D. Luo, C. Ding, and H. Huang. Graph evolution via social diffusion processes. Machine Learning and Knowledge Discovery in Databases, pages 390–404, 2011.
[15] D. Luo, C. Ding, F. Nie, and H. Huang. Cauchy graph embedding. ICML 2011, pages 553–560, 2011.
[16] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
[17] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323, 2000.
[18] H. Seung and D. Lee. The manifold ways of perception. Science, 290(5500):2268–2269, 2000.
[19] J. Shi and J. Malik. Normalized cuts and image segmentation. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 22(8):888–905, 2002.
[20] F. Wang, P. Li, and A. König. Learning a bi-stochastic data similarity matrix. In 2010 IEEE International Conference on Data Mining, pages 551–560. IEEE, 2010.
[21] F. Wang and C. Zhang. Label propagation through linear neighborhoods. IEEE Transactions on Knowledge and Data Engineering, pages 55–67, 2007.
[22] D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems 16, pages 595–602, 2004.
[23] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
Semi-Supervised Domain Adaptation with Non-Parametric Copulas
David Lopez-Paz
MPI for Intelligent Systems
[email protected]
José Miguel Hernández-Lobato
University of Cambridge
[email protected]
Bernhard Schölkopf
MPI for Intelligent Systems
[email protected]
Abstract
A new framework based on the theory of copulas is proposed to address semi-supervised domain adaptation problems. The presented method factorizes any multivariate density into a product of marginal distributions and bivariate copula functions. Therefore, changes in each of these factors can be detected and corrected to adapt a density model across different learning domains. Importantly, we introduce a novel vine copula model, which allows for this factorization in a non-parametric manner. Experimental results on regression problems with real-world data illustrate the efficacy of the proposed approach when compared to state-of-the-art techniques.
1 Introduction
When humans address a new learning problem, they often use knowledge acquired while learning different but related tasks in the past. For example, when learning a second language, people rely on grammar rules and word derivations from their mother tongue. This is called language transfer [19]. However, in machine learning, most traditional methods are not able to exploit similarities between different learning tasks. These techniques only achieve good performance when the data distribution is stable between training and test phases. When this is not the case, it is necessary to a) collect and label additional data and b) re-run the learning algorithm. However, these operations are not affordable in most practical scenarios.
Domain adaptation, transfer learning or multitask learning frameworks [17, 2, 5, 13] confront these issues by first building a notion of task relatedness and second providing mechanisms to transfer knowledge between similar tasks. Generally, we are interested in improving predictive performance on a target task by using knowledge obtained when solving another related source task. Domain adaptation methods are concerned with what knowledge we can share between different tasks, how we can transfer this knowledge, and when we should or should not do it to avoid additional damage [4].
In this work, we study semi-supervised domain adaptation for regression tasks. In these problems, the object of interest (the mechanism that maps a set of inputs to a set of outputs) can be stated as a conditional density function. The data available for solving each learning task is assumed to be sampled from modified versions of a common multivariate distribution. Therefore, we are interested in sharing the "common pieces" of this generative model between tasks, and in using the data from each individual task to detect, learn and adapt the varying parts of the model. To do so, we must find a decomposition of multivariate distributions into simpler building blocks that may be studied separately across different domains. The theory of copulas provides such representations [18].
Copulas are statistical tools that factorize multivariate distributions into the product of their marginals and a function that captures any possible form of dependence among them. This function is referred to as the copula, and it links the marginals together into the joint multivariate model. First introduced by Sklar [22], copulas have been successfully used in a wide range of applications, including finance, time series and natural phenomena modeling [12]. Recently, a new family of copulas named vines has gained interest in the statistics literature [1]. These are methods that factorize multivariate densities into a product of marginal distributions and bivariate copula functions. Each of these factors corresponds to one of the building blocks that we assume either constant or varying across different learning domains.
The contributions of this paper are two-fold. First, we propose a non-parametric vine copula model which can be used as a high-dimensional density estimator. Second, making use of this method, we present a new framework to address semi-supervised domain adaptation problems, whose performance is validated in a series of experiments with real-world data against competing state-of-the-art techniques.
The rest of the paper is organized as follows: Section 2 provides a brief introduction to copulas and describes a non-parametric estimator for the bivariate case. Section 3 introduces a novel non-parametric vine copula model, which is formed by the described bivariate non-parametric copulas. Section 4 describes a new framework to address semi-supervised domain adaptation problems using the proposed vine method. Finally, Section 5 describes a series of experiments that validate the proposed approach on regression problems with real-world data.
2 Copulas
When the components of $\mathbf{x} = (x_1, \ldots, x_d)$ are jointly independent, their density function $p(\mathbf{x})$ can be written as
$$p(\mathbf{x}) = \prod_{i=1}^d p(x_i). \qquad (1)$$
This equality does not hold when $x_1, \ldots, x_d$ are not independent. Nevertheless, the differences can be corrected if we multiply the right hand side of (1) by a specific function that fully describes any possible dependence between $x_1, \ldots, x_d$. This function is called the copula of $p(\mathbf{x})$ [18] and satisfies
$$p(\mathbf{x}) = \prod_{i=1}^d p(x_i)\, \underbrace{c(P(x_1), \ldots, P(x_d))}_{\text{copula}}. \qquad (2)$$
The copula $c$ is the joint density of $P(x_1), \ldots, P(x_d)$, where $P(x_i)$ is the marginal cdf of the random variable $x_i$. This density has uniform marginals, since $P(z) \sim \mathcal{U}[0, 1]$ for any random variable $z$. That is, when we apply the transformation $P(x_1), \ldots, P(x_d)$ to $x_1, \ldots, x_d$, we eliminate all information about the marginal distributions. Therefore, the copula captures any distributional pattern that does not depend on their specific form, or, in other words, all the information regarding the dependencies between $x_1, \ldots, x_d$. When $P(x_1), \ldots, P(x_d)$ are continuous, the copula $c$ is unique [22]. However, infinitely many multivariate models share the same underlying copula function, as illustrated in Figure 1. The main advantage of copulas is that they allow us to model separately the marginal distributions and the dependencies linking them together to produce the multivariate model subject of study.
Given a sample from (2), we can estimate $p(\mathbf{x})$ as follows. First, we construct estimates of the marginal pdfs, $\hat{p}(x_1), \ldots, \hat{p}(x_d)$, which also provide estimates of the corresponding marginal cdfs, $\hat{P}(x_1), \ldots, \hat{P}(x_d)$. These cdf estimates are used to map the data to the $d$-dimensional unit hypercube. The transformed data are then used to obtain an estimate $\hat{c}$ for the copula of $p(\mathbf{x})$. Finally, (2) is approximated as
$$\hat{p}(\mathbf{x}) = \prod_{i=1}^d \hat{p}(x_i)\, \hat{c}(\hat{P}(x_1), \ldots, \hat{P}(x_d)). \qquad (3)$$
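To illustrate this two-stage estimator, the following sketch (ours, not from the paper) pairs kernel density marginals with, purely for illustration, a parametric Gaussian copula; the method proposed in this paper instead makes the copula factor non-parametric:

```python
import numpy as np
from scipy import stats

def fit_copula_density(X):
    """Two-stage estimate of p(x) as in Eq. (3): KDE marginals plus,
    for illustration only, a Gaussian copula for the dependence."""
    n, d = X.shape
    kdes = [stats.gaussian_kde(X[:, i]) for i in range(d)]
    # Empirical cdf transform: map each column to ranks in (0, 1).
    U = (np.argsort(np.argsort(X, axis=0), axis=0) + 1.0) / (n + 1.0)
    # Gaussian copula: correlation of the normal scores.
    Z = stats.norm.ppf(U)
    R = np.corrcoef(Z, rowvar=False)

    def log_density(x):
        u = np.array([kde.integrate_box_1d(-np.inf, xi)
                      for kde, xi in zip(kdes, x)])
        u = np.clip(u, 1e-6, 1 - 1e-6)
        z = stats.norm.ppf(u)
        log_marg = sum(np.log(kde(xi)[0]) for kde, xi in zip(kdes, x))
        # Log Gaussian copula density: -0.5 (log|R| + z'(R^{-1} - I)z).
        _, logdet = np.linalg.slogdet(R)
        quad = z @ (np.linalg.solve(R, z) - z)
        return log_marg - 0.5 * (logdet + quad)

    return log_density
```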
The estimation of the marginal pdfs and cdfs can be implemented in a non-parametric manner by using unidimensional kernel density estimates. By contrast, it is common practice to assume a parametric model for the estimation of the copula function. Some examples of parametric copulas are the Gaussian, Gumbel, Frank, Clayton or Student copulas [18]. Nevertheless, real-world data often exhibit complex dependencies which cannot be correctly described by these parametric copula models. This lack of flexibility of parametric copulas is illustrated in Figure 2. As an alternative, we propose
Figure 1: Left, a sample from a Gaussian copula with correlation ρ = 0.8. Middle and right, two samples drawn from multivariate models with this same copula but different marginal distributions, depicted as rug plots.
??
?
?
?
?
??
?
?
?
?
?
?
?
?
?
?
??
?
?
?
?
?
?
?
?
?
??
?
?
?
?
?
??
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
??
?
?
?
?
?
?
?
?
?
??
?
?
?
?
?
?
?
??
??
?
??
?
?
?
?
?
?
?
??
?
?
?
?
?
?
?
?
?
?
?
??
?
?
?
?
?
?
?
?
?
?
??
?
?
?
?
?
?
??
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
??
??
?
?
?
?
?
??
??
?
?
?
?
?
?
?
?
?
?
?
??
?
??
??
?
?
?
?
?
????
?
?
?
?
?
??
?
?
?
?
??
?
?
?
?
?
?
?
?
??
??
?
?
?
?
?
???
?
??
?
?
?
?
?
?
?
?
?
??
?
?
?
?
?
?
?
?
??
???
?
?
?
??
??
?
?
?
????
?
???
?
?
?
?
?
?
?
? ??
?
?
?
?
?
????
?
?
??
?
?
???
??
?
?
?
?
?
?
?
?
?
?
?
??
?
?
????
?
?
??
?
???
??
??
?
?
?
?
?
?
???
?
?
? ???
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?????
?
?
?
?
?
?
?
?
?
?
?
?
?
??
?
?
?
?
?
?
?
??
?
?
?
? ??
?
????
??
?? ??
?? ?
??
?
?
?
?
?
?
?
?
?
?
??
?
? ?
??
?
?
?
?
??
?
?
?
??
?
??
?
??
?
?
?
???
?
??
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
? ? ??
?
?
??
??
?
?
? ?????
?
?
??
?
?
? ??? ??? ????
??
?
?
?
?
?
?
?
?
??
?
?
?
?
?
? ???? ??? ?
?
?
?
?
?
?
?
??
?
?
?
???
??
??
?
?
?
???
?
?
?
?
? ?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
?
??
?
?
?
?
??
?
??
?
?
?
?
?
??
?
?
?
?
?
?
?
?
?
?
?
?
?
??
?
?
?
??
?
?
?
?
??
?
?
??
??
??
?
?
?
?
?
?
?
?
?
?
??
?
?
??
?
?
?
?
?
??
?
?
?
?
?
?
?
??
?
?
?
??
??
?
??
?
?
??
?
?
?
?
?
?
?
??
?
?
?
??
?
?
?
?
?
?
?
??
???? ?
?
?
?
?
??
?
?
?
?
?
?
?
?
?
?
?
??
?
?
?
?
?
?
?
?
??
?
?
??
?
??
?
?
?
?
?
?
?
??
?
? ??
?
?
?
?
?
?
?
?
??
?
?
???
0.00
0.25
0.50
0.75
1.00
100
100
75
75
50
50
25
25
0
0
0
25
50
75
100
0
25
50
75
100
Figure 2: Left, sample from the copula linking variables 4 and 11 in the WIRELESS dataset. Middle,
density estimate generated by a Gaussian copula model when fitted to the data. This technique is
unable to capture the complex patterns present in the data. Right, copula density estimate generated
by the non-parametric method described in section 2.1.
to approximate the copula function in a non-parametric manner. Kernel density estimates can also
be used to generate non-parametric approximations of copulas, as described in [8]. The following
section reviews this method for the two-dimensional case.
2.1 Non-parametric Bivariate Copulas
We now elaborate on how to non-parametrically estimate the copula of a given bivariate density
p(x, y). Recall that this density can be factorized as the product of its marginals and its copula
p(x, y) = p(x) p(y) c(P(x), P(y)).   (4)
Additionally, given a sample {(x_i, y_i)}_{i=1}^{n} from p(x, y), we can obtain a pseudo-sample from its copula c by mapping each observation to the unit square using estimates of the marginal cdfs, namely
{(u_i, v_i)}_{i=1}^{n} := {(P̂(x_i), P̂(y_i))}_{i=1}^{n}.   (5)
These are approximate observations from the uniformly distributed random variables u = P (x) and
v = P (y), whose joint density is the copula function c(u, v). We could try to approximate this
density function by placing Gaussian kernels on each observation (u_i, v_i). However, the resulting density estimate would have support on R^2, while the support of c is the unit square. A solution is to perform the density estimation in a transformed space. For this, we select some continuous distribution with support on R, strictly positive density φ, cumulative distribution Φ and quantile function Φ^{-1}. Let z and w be two new random variables given by z = Φ^{-1}(u) and w = Φ^{-1}(v).
Then, the joint density of z and w is
p(z, w) = φ(z) φ(w) c(Φ(z), Φ(w)).   (6)
The copula of this new density is identical to the copula of (4), since the performed transformations are marginal-wise. The support of (6) is now R^2; therefore, we can now approximate it with Gaussian kernels. Let z_i = Φ^{-1}(u_i) and w_i = Φ^{-1}(v_i). Then,
p̂(z, w) = (1/n) Σ_{i=1}^{n} N(z, w | z_i, w_i, Σ),   (7)
where N(·, · | μ_1, μ_2, Σ) is a two-dimensional Gaussian density with mean (μ_1, μ_2) and covariance matrix Σ. For convenience, we select φ, Φ and Φ^{-1} to be the standard Gaussian pdf, cdf and quantile function, respectively. Finally, the copula density c(u, v) is approximated by combining (6) with (7):
ĉ(u, v) = p̂(Φ^{-1}(u), Φ^{-1}(v)) / [φ(Φ^{-1}(u)) φ(Φ^{-1}(v))]
        = (1/n) Σ_{i=1}^{n} N(Φ^{-1}(u), Φ^{-1}(v) | Φ^{-1}(u_i), Φ^{-1}(v_i), Σ) / [φ(Φ^{-1}(u)) φ(Φ^{-1}(v))].   (8)
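As a concrete illustration of (5) and (8), the following Python sketch builds the pseudo-sample with empirical cdfs and evaluates the resulting copula density estimate. It is a minimal sketch, not the paper's implementation: the diagonal bandwidth sigma is a placeholder assumption rather than a tuned bandwidth matrix.

import numpy as np
from scipy.stats import norm, multivariate_normal, rankdata

def copula_pseudo_sample(x, y):
    # Empirical-cdf transform of each margin onto (0, 1), as in (5).
    u = rankdata(x) / (len(x) + 1.0)
    v = rankdata(y) / (len(y) + 1.0)
    return u, v

def copula_density(u, v, ui, vi, sigma=0.1):
    # Non-parametric copula density estimate (8) evaluated at (u, v);
    # sigma is an assumed diagonal kernel bandwidth.
    z, w = norm.ppf(u), norm.ppf(v)              # Phi^{-1}(u), Phi^{-1}(v)
    zi, wi = norm.ppf(ui), norm.ppf(vi)
    cov = sigma ** 2 * np.eye(2)
    kernels = np.array([multivariate_normal.pdf([z, w], mean=[a, b], cov=cov)
                        for a, b in zip(zi, wi)])
    return kernels.mean() / (norm.pdf(z) * norm.pdf(w))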
3 Regular Vines
The method described above can be generalized to the estimation of copulas of more than two random variables. However, although kernel density estimates can be successful in spaces of one or two dimensions, as the number of variables increases, these methods become significantly affected by the curse of dimensionality and tend to overfit the training data. Additionally, for addressing domain adaptation problems, we are interested in factorizing these high-dimensional copulas into simpler building blocks transferable across learning domains. These two drawbacks can be addressed
by recent methods in copula modelling called vines [1]. Vines decompose any high-dimensional
copula density as a product of bivariate copula densities that can be approximated using the nonparametric model described above. These bivariate copulas (as well as the marginals) correspond to
the simple building blocks that we plan to transfer from one learning domain to another. Different
types of vines have been proposed in the literature. Some examples are canonical vines, D-vines
or regular vines [16, 1]. In this work we focus on regular vines (R-vines) since they are the most
general models.
An R-vine V for a probability density p(x_1, . . . , x_d) with variable set V = {1, . . . , d} is formed by a set of undirected trees T_1, . . . , T_{d-1}, each of them with corresponding set of nodes V_i and set of edges E_i, where V_i = E_{i-1} for i ∈ [2, d - 1]. Any edge e ∈ E_i has associated three sets C(e), D(e), N(e) ⊆ V called the conditioned, conditioning and constraint sets of e, respectively. Initially, T_1 is inferred from a complete graph with a node associated with each element of V; for any e ∈ T_1 joining nodes V_j and V_k, C(e) = N(e) = {V_j, V_k} and D(e) = ∅. The trees T_2, . . . , T_{d-1} are constructed so that each e ∈ E_i is formed by joining two edges e_1, e_2 ∈ E_{i-1} which share a common node, for i ≥ 2. The new edge e has conditioned, conditioning and constraint sets given by C(e) = N(e_1) Δ N(e_2), D(e) = N(e_1) ∩ N(e_2), and N(e) = N(e_1) ∪ N(e_2), where Δ is the symmetric difference operator. Figure 3 illustrates this procedure for an R-vine with 4 variables.
For any edge e(j, k) ∈ T_i, i = 1, . . . , d - 1, with conditioned set C(e) = {j, k} and conditioning set D(e), let c_{jk|D(e)} be the value of the copula density for the conditional distribution of x_j and x_k when conditioning on {x_i : i ∈ D(e)}, that is,

c_{jk|D(e)} := c(P_{j|D(e)}, P_{k|D(e)} | x_i : i ∈ D(e)),   (9)

where P_{j|D(e)} := P(x_j | x_i : i ∈ D(e)) is the conditional cdf of x_j when conditioning on {x_i : i ∈ D(e)}. Kurowicka and Cooke [16] indicate that any probability density function p(x_1, . . . , x_d) can then be factorized as

p(x) = Π_{i=1}^{d} p(x_i) Π_{i=1}^{d-1} Π_{e(j,k)∈E_i} c_{jk|D(e)},   (10)
where E_1, . . . , E_{d-1} are the edge sets of the R-vine V for p(x_1, . . . , x_d). In particular, each of the edges in the trees from V specifies a different conditional copula density in (10). For d variables, the density in (10) is formed by d(d - 1)/2 factors. Changes in each of these factors can be detected and independently transferred across different learning domains to improve the estimation of the target density function.
The definition of c_{jk|D(e)} in (9) requires the calculation of conditional marginal cdfs. For this, we use the following recursive identity introduced by Joe [14], that is,

P_{j|D(e)} = ∂C_{jk|D(e)\k} / ∂P_{k|D(e)\k},   (11)
[Figure 3 diagram: Trees 1–3 of an R-vine on four variables, with conditioned and conditioning sets shown on nodes and edges, and the resulting factorization p_{1234} = p_1 · p_2 · p_3 · p_4 · c_{12} · c_{13} · c_{34} · c_{23|1} · c_{14|3} · c_{24|13}, grouped into marginals, Tree 1, Tree 2 and Tree 3.]
Figure 3: Example of the hierarchical construction of an R-vine copula for a system of four variables.
The edges selected to form each tree are highlighted in bold. Conditioned and conditioning sets for
each node and edge are shown as C(e)|D(e). Later, each edge in bold will correspond to a different
bivariate copula function.
which holds for any k ∈ D(e), where D(e) \ k = {i : i ∈ D(e) and i ≠ k} and C_{jk|D(e)\k} is the cdf of c_{jk|D(e)\k}.
One major advantage of vines is that they can model high-dimensional data by estimating density
functions of only one or two random variables. For this reason, these techniques are significantly
less affected by the curse of dimensionality than regular density estimators based on kernels, as we
show in Section 5. So far, vines have been generally constructed using parametric models for the
estimation of bivariate copulas. In the following, we describe a novel method for the construction
of non-parametric regular vines.
3.1 Non-parametric Regular Vines
In this section, we introduce a vine distribution in which all participant bivariate copulas can be
estimated in a non-parametric manner. To do so, we model each of the copulas in (10) using the non-parametric method described in Section 2.1. Let {(u_i, v_i)}_{i=1}^{n} be a sample from the copula density c(u, v). The basic operation needed for the implementation of the proposed method is the evaluation of the conditional cdf P(u|v) using the recursive equation (11). Define w = Φ^{-1}(v), z_i = Φ^{-1}(u_i) and w_i = Φ^{-1}(v_i). Combining (8) and (11) we obtain
P̂(u|v) = ∫_0^u ĉ(x, v) dx
        = [1/(n φ(w))] Σ_{i=1}^{n} ∫_0^u N(Φ^{-1}(x), w | z_i, w_i, Σ) / φ(Φ^{-1}(x)) dx
        = [1/(n φ(w))] Σ_{i=1}^{n} N(w | w_i, σ_w²) Φ[(Φ^{-1}(u) − μ_{z_i|w_i}) / σ_{z_i|w_i}],   (12)

where N(· | μ, σ²) denotes a Gaussian density with mean μ and variance σ², Σ = (σ_z², ρ; ρ, σ_w²) is the kernel bandwidth matrix, μ_{z_i|w_i} = z_i + (σ_z/σ_w) ρ (w − w_i), and σ²_{z_i|w_i} = σ_z² (1 − ρ²).
Equation (12) can be used to approximate any conditional cdf P_{j|D(e)}. For this, we use the fact that P(x_j | x_i : i ∈ D(e)) = P(u_j | u_i : i ∈ D(e)), where u_i = P(x_i), for i = 1, . . . , d, and recursively apply rule (11) using equation (12) to compute P̂(u_j | u_i : i ∈ D(e)).
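For illustration, (12) transcribes directly into code. The sketch below assumes a bandwidth matrix parameterized by (sigma_z, sigma_w, rho); the default values are placeholders, not the paper's choices.

import numpy as np
from scipy.stats import norm

def conditional_cdf(u, v, ui, vi, sz=0.1, sw=0.1, rho=0.0):
    # P_hat(u | v) from (12).
    w = norm.ppf(v)
    zi, wi = norm.ppf(ui), norm.ppf(vi)
    mu = zi + (sz / sw) * rho * (w - wi)          # mu_{z_i|w_i}
    sd = np.sqrt(sz ** 2 * (1.0 - rho ** 2))      # sigma_{z_i|w_i}
    weights = norm.pdf(w, loc=wi, scale=sw)       # N(w | w_i, sigma_w^2)
    terms = weights * norm.cdf((norm.ppf(u) - mu) / sd)
    return terms.sum() / (len(ui) * norm.pdf(w))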
To complete the inference recipe for the non-parametric regular vine, we must specify how to construct the hierarchy of trees T_1, . . . , T_{d-1}. In other words, we must define a procedure to select the edges (bivariate copulas) that will form each tree. We have a total of d(d - 1)/2 bivariate copulas which should be distributed among the different trees. Ideally, we would like to include in the first trees of the hierarchy the copulas with the strongest dependence level. This will allow us to prune the model by assuming independence in the last k < d trees, since the density function for the independent copula is constant and equal to 1. To construct the trees T_1, . . . , T_{d-1}, we assign a weight to each edge e(j, k) (copula) according to the level of dependence between the random variables x_j and x_k. A common practice is to fix this weight to the empirical estimate of Kendall's τ for the two random variables under consideration [1].¹ Given these weights for each edge, we propose to solve the edge selection problem by obtaining d - 1 maximum spanning trees. Prim's Algorithm [20] can be used to solve this problem efficiently.
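The first tree of the hierarchy can then be obtained as a maximum spanning tree under Kendall's τ weights. Below is a minimal sketch using Prim's algorithm; taking the absolute value of τ is our assumption here, since edge selection cares about dependence strength rather than its sign.

import numpy as np
from scipy.stats import kendalltau

def first_tree(X):
    # Maximum spanning tree over the complete graph on the d variables,
    # with |Kendall's tau| edge weights (Prim's algorithm).  X: (n, d).
    d = X.shape[1]
    tau = np.zeros((d, d))
    for j in range(d):
        for k in range(j + 1, d):
            tau[j, k] = tau[k, j] = abs(kendalltau(X[:, j], X[:, k])[0])
    in_tree, edges = {0}, []
    while len(in_tree) < d:
        j, k = max(((j, k) for j in in_tree for k in range(d)
                    if k not in in_tree), key=lambda e: tau[e])
        edges.append((j, k))          # this edge receives copula c_{jk}
        in_tree.add(k)
    return edges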
4 Domain Adaptation with Regular Vines
In this section we describe how regular vines can be used to address domain adaptation problems
in the non-linear regression setting with continuous data. The proposed approach could be easily
extended to other problems such as density estimation or classification. In regression problems, we
are interested in inferring the mapping mechanism or conditional distribution with density p(y|x)
that maps a feature vector x = (x_1, . . . , x_d) ∈ R^d into a target scalar value y ∈ R. Rephrased
into the copula framework, this conditional density can be expressed as
p(y|x) ∝ p(y) Π_{i=1}^{d} Π_{e(j,k)∈E_i} c_{jk|D(e)},   (13)
where E_1, . . . , E_d are the edge sets of an R-vine for p(x, y). Note that the normalization of the right
part of (13) is relatively easy since y is scalar.
In the classic domain adaptation setup we usually have large amounts of data for solving a source
task characterized by the density function ps (x, y). However, only a partial or reduced sample is
available for solving a target task with density pt (x, y). Given the data available for both tasks, our
objective is to build a good estimate for the conditional density pt (y|x). To address this domain
adaptation problem, we assume that pt is a modified version of ps . In particular, we assume that
pt is obtained in two steps from ps . First, ps is expressed using an R-vine representation as in (10)
and second, some of the factors included in that representation (marginal distributions or pairwise
copulas) are modified to derive pt . All we need to address the adaptation across domains is to
reconstruct the R-vine representation of ps using data from the source task, and then identify which
of the factors have been modified to produce pt . These factors are corrected using data from the
target task. In the following, we describe how to identify and correct these modified factors.
Marginal distributions can change between source and target tasks (also known as covariate shift).
In this case, P_s(x_i) ≠ P_t(x_i), for i = 1, . . . , d, or P_s(y) ≠ P_t(y), and we need to re-generate
the estimates of the affected marginals using data from the target task. Additionally, some of the
bivariate copulas cjk|D(e) may differ from source to target tasks. In this case, we also re-estimate
the affected copulas using data from the target task. Simultaneous changes in both copulas and
marginals can occur. However, there is no limitation in updating each of the modified components
separately. Finally, if some of the factors remain constant across domains, we can use the available
data from the target task to improve the estimates obtained using only the data from the source
task. Note that we are addressing a more general problem than covariate shift. Besides identifying
and correcting changes in marginal distributions, we also consider changes in any possible form of
dependence (conditional distributions) between random variables.
For the implementation of the strategy mentioned above, we need to identify when two samples
come from the same distribution or not. For this, we propose to use the non-parametric two-sample
test Maximum Mean Discrepancy (MMD) [10]. MMD will return low p-values when two samples
are unlikely to have been drawn from the same distribution. Specifically, given samples from two
distributions P and Q, MMD will determine P ≠ Q if the distance between the embeddings of the
empirical distributions for these two samples in a RKHS is significantly large.
¹We have tried more general dependence measures such as the HSIC (Hilbert-Schmidt Independence Criterion) without observing gains that justify the increase in computational cost.
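For reference, a minimal sketch of the unbiased squared-MMD statistic underlying the test is given below; the RBF bandwidth gamma is an assumed value (often set by the median heuristic in practice), and obtaining a p-value additionally requires a permutation or asymptotic null distribution.

import numpy as np

def mmd2(X, Y, gamma=1.0):
    # Unbiased squared MMD with RBF kernel k(a, b) = exp(-gamma ||a - b||^2).
    def k(A, B):
        sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * sq)
    m, n = len(X), len(Y)
    kxx = (k(X, X).sum() - m) / (m * (m - 1))   # drop the diagonal (k = 1)
    kyy = (k(Y, Y).sum() - n) / (n * (n - 1))
    return kxx + kyy - 2 * k(X, Y).mean()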
Table 1: Average TLL obtained by NPRV, GRV and KDE on six different UCI datasets.

Dataset       No. of variables   KDE           GRV           NPRV
Auto          8                  1.32 ± 0.06   1.84 ± 0.08   2.07 ± 0.07
Cloud         10                 3.25 ± 0.10   5.00 ± 0.12   4.54 ± 0.13
Housing       14                 1.96 ± 0.17   1.68 ± 0.11   3.18 ± 0.17
Magic         11                 1.13 ± 0.11   2.09 ± 0.08   2.72 ± 0.17
Page-Blocks   10                 1.90 ± 0.13   4.69 ± 0.20   5.64 ± 0.14
Wireless      11                 0.98 ± 0.06   0.36 ± 0.08   2.17 ± 0.13
Semi-supervised and unsupervised domain adaptation: The proposed approach can be easily
extended to take advantage of additional unlabeled data to improve the estimation of our model.
Specifically, extra unlabeled target task data can be used to refine the factors in the R-Vine decomposition of pt which do not depend on y. This is still valid even in the limiting case of not having
access to labeled data from the target task at training time (unsupervised domain adaptation).
5 Experiments
To validate the proposed method, we run two series of experiments using real world data. The first
series illustrates the accuracy of the density estimates generated by the proposed non-parametric
vine method. The second series validates the effectiveness of the proposed framework for domain
adaptation problems in the non-linear regression setting. In all experiments, kernel bandwidth matrices are selected using Silverman's rule-of-thumb [21]. For comparative purposes, we include the
results of different state-of-the-art domain adaptation methods whose parameters are selected by a
10-fold cross validation process on the training data.
Approximations: A complete R-Vine requires the use of conditional copula functions, which are
challenging to learn. A common approximation is to ignore any dependence between the copula
functional form and its set of conditioning variables. Note that the copula function's arguments remain conditioned cdfs. Moreover, to avoid excessive computational costs, we consider only the first tree (d - 1 copulas) of the R-Vine, which is the one containing the largest amount of dependence between the distribution variables. Increasing the number of considered trees did not lead to
5.1 Accuracy of Non-parametric Regular Vines for Density Estimation
The density estimates generated by the new non-parametric R-vine method (NPRV) are evaluated on
data from six normalized UCI datasets [9]. We compare against a standard density estimator based
on Gaussian kernels (KDE), and a parametric vine method based on bivariate Gaussian copulas
(GRV). From each dataset, we extract 50 random samples of size 1000. Training is performed using
30% of each random sample. Average test log-likelihoods and corresponding standard deviations
on the remaining 70% of the random sample are summarized in Table 1 for each technique. In these
experiments, NPRV obtains the highest average test log-likelihood in all cases except one, where it
is outperformed by GRV. KDE shows the worst performance, due to its direct exposure to the curse
of dimensionality.
5.2 Comparison with other Domain Adaptation Methods
NPRV is analyzed in a series of experiments for domain adaptation on the non-linear regression
setting with real-world data. Detailed descriptions of the 6 UCI selected datasets and their domains
are available in the supplementary material. The proposed technique is compared with different
benchmark methods. The first two, GP-SOURCE and GP-ALL, are considered baselines. They are two Gaussian process (GP) methods, the first one trained only with data from the source task, and
the second one trained with the normalized union of data from both source and target problems.
The other five methods are considered state-of-the-art domain adaptation techniques. DAUME [7]
performs a feature augmentation such that the kernel function evaluated at two points from the same
Table 2: Average NMSE and standard deviation for all algorithms and UCI datasets.

Dataset        Wine          Sarcos        Rocks-Mines   Hill-Valleys  Axis-Slice    Isolet
No. of vars    12            21            60            100           386           617
GP-Source      0.86 ± 0.02   1.80 ± 0.04   0.90 ± 0.01   1.00 ± 0.00   1.52 ± 0.02   1.59 ± 0.02
GP-All         0.83 ± 0.03   1.69 ± 0.04   1.10 ± 0.08   0.87 ± 0.06   1.27 ± 0.07   1.58 ± 0.02
Daume          0.97 ± 0.03   0.88 ± 0.02   0.72 ± 0.09   0.99 ± 0.03   0.95 ± 0.02   0.99 ± 0.00
SSL-Daume      0.82 ± 0.05   0.74 ± 0.08   0.59 ± 0.07   0.82 ± 0.07   0.65 ± 0.04   0.64 ± 0.02
ATGP           0.86 ± 0.08   0.79 ± 0.07   0.56 ± 0.10   0.15 ± 0.07   1.00 ± 0.01   1.00 ± 0.00
KMM            1.03 ± 0.01   1.00 ± 0.00   1.00 ± 0.00   1.00 ± 0.00   1.00 ± 0.00   1.00 ± 0.00
KuLSIF         0.91 ± 0.08   1.67 ± 0.06   0.65 ± 0.10   0.80 ± 0.11   0.98 ± 0.07   0.58 ± 0.02
NPRV           0.73 ± 0.07   0.61 ± 0.10   0.72 ± 0.13   0.15 ± 0.07   0.38 ± 0.07   0.46 ± 0.09
UNPRV          0.76 ± 0.06   0.62 ± 0.13   0.72 ± 0.15   0.19 ± 0.09   0.37 ± 0.07   0.42 ± 0.04
Av. Ch. Mar.   10            1             38            100           226           89
Av. Ch. Cop.   5             8             49            34            155           474
domain is twice as large as when these two points come from different domains. SSL-DAUME [6] is an SSL extension of DAUME which takes into account unlabeled data from the target domain. ATGP
[4] models the source and target task data using a single GP, but learns additional kernel parameters
to correlate input vectors between domains. This method outperforms others like the one proposed
by Bonilla et al. [3]. KMM [11] minimizes the distance of marginal distributions in source and target domains by matching their means when mapped into a universal RKHS. Finally, KuLSIF [15] operates in a similar way as KMM. Besides NPRV, we also include in the experiments its fully
unsupervised variant, UNPRV, which ignores any labeled data from the target task.
For training, we randomly sample 1000 data points for both source and target tasks, where all the
data in the source task and 5% of the data in the target task are labeled. The test set contains 1000
points from the target task. Table 2 summarizes the average test normalized mean square error
(NMSE) and corresponding standard deviation for each method in each dataset across 30 random
repetitions of the experiment. The proposed methods obtain the best results in 5 out of 6 cases.
Notably, UNPRV (Unsupervised NPRV), which ignores labeled data from the target task, also
outperforms the other benchmark methods in most cases. Finally, the two bottom rows in Table
2 show the average number of marginals and bivariate copulas which are updated in each dataset
during the execution of NPRV, respectively.
Computational Costs: Running NPRV requires filling in a weight matrix of size O(d²) with the empirical estimates of Kendall's τ for any two random variables. The computation of each of these estimates can be done efficiently with cost O(n log n), where n is the number of available data points. Therefore, the final training cost of NPRV is O(d² n log n). In practice, we obtain
competitive training times. Training NPRV for the Isolet dataset took about 3 minutes on a regular
laptop computer. Predictions made by a single level NPRV have cost O(nd). Parametric copulas
may be used to reduce the computational demands.
6 Conclusions
We have proposed a novel non-parametric domain adaptation strategy based on copulas. The new
approach works by decomposing any multivariate density into a product of marginal densities and
bivariate copula functions. Changes in these factors across different domains can be detected using
two sample tests, and transferred across domains in order to adapt the target task density model.
A novel non-parametric vine method has been introduced for the practical implementation of this
method. This technique leads to better density estimates than standard parametric vines or KDE, and
is also able to outperform a large number of alternative domain adaptation methods in a collection
of regression problems with real-world data.
References
[1] K. Aas, C. Czado, A. Frigessi, and H. Bakken. Pair-copula constructions of multiple dependence. Insurance: Mathematics and Economics, 44(2):182–198, 2006.
[2] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman. A theory of learning from different domains. Machine Learning, 79(1):151–175, 2010.
[3] E. Bonilla, K. Chai, and C. Williams. Multi-task Gaussian process prediction. NIPS, 2008.
[4] B. Cao, S. Jialin, Y. Zhang, D. Yeung, and Q. Yang. Adaptive transfer learning. AAAI, 2010.
[5] C. Cortes and M. Mohri. Domain adaptation in regression. In Proceedings of the 22nd International Conference on Algorithmic Learning Theory, ALT'11, pages 308–323, Berlin, Heidelberg, 2011. Springer-Verlag.
[6] H. Daumé III, Abhishek Kumar, and Avishek Saha. Frustratingly easy semi-supervised domain adaptation. Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, pages 53–59, 2010.
[7] H. Daumé III. Frustratingly easy domain adaptation. Association of Computational Linguistics, pages 256–263, 2007.
[8] J. Fermanian and O. Scaillet. The estimation of copulas: Theory and practice. Copulas: From Theory to Application in Finance, pages 35–60, 2007.
[9] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[10] A. Gretton, K. Borgwardt, M. Rasch, B. Scholkopf, and A. Smola. A kernel method for the two-sample-problem. NIPS, pages 513–520, 2007.
[11] J. Huang, A. Smola, A. Gretton, K. Borgwardt, and B. Schoelkopf. Correcting sample selection bias by unlabeled data. NIPS, pages 601–608, 2007.
[12] P. Jaworski, F. Durante, W.K. Härdle, and T. Rychlik. Copula Theory and Its Applications. Lecture Notes in Statistics. Springer, 2010.
[13] S. Jialin-Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2010.
[14] H. Joe. Families of m-variate distributions with given margins and m(m − 1)/2 bivariate dependence parameters. Distributions with Fixed Marginals and Related Topics, 1996.
[15] T. Kanamori, T. Suzuki, and M. Sugiyama. Statistical analysis of kernel-based least-squares density-ratio estimation. Machine Learning, 86(3):335–367, 2012.
[16] D. Kurowicka and R. Cooke. Uncertainty Analysis with High Dimensional Dependence Modelling. Wiley Series in Probability and Statistics, 1st edition, 2006.
[17] Y. Mansour, M. Mohri, and A. Rostamizadeh. Domain adaptation: Learning bounds and algorithms. In COLT, 2009.
[18] R. Nelsen. An Introduction to Copulas. Springer Series in Statistics, 2nd edition, 2006.
[19] S. Nitschke, E. Kidd, and L. Serratrice. First language transfer and long-term structural priming in comprehension. Language and Cognitive Processes, 5(1):94–114, 2010.
[20] R. C. Prim. Shortest connection networks and some generalizations. Bell System Technology Journal, 36:1389–1401, 1957.
[21] B.W. Silverman. Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability. Chapman and Hall, 1986.
[22] A. Sklar. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris, 8(1):229–231, 1959.
4,202 | 4,803 | Cost-Sensitive Exploration in
Bayesian Reinforcement Learning
Dongho Kim
Department of Engineering
University of Cambridge, UK
Kee-Eung Kim
Dept of Computer Science
KAIST, Korea
Pascal Poupart
School of Computer Science
University of Waterloo, Canada
[email protected]
[email protected]
[email protected]
Abstract
In this paper, we consider Bayesian reinforcement learning (BRL) where actions
incur costs in addition to rewards, and thus exploration has to be constrained in
terms of the expected total cost while learning to maximize the expected longterm total reward. In order to formalize cost-sensitive exploration, we use the
constrained Markov decision process (CMDP) as the model of the environment, in
which we can naturally encode exploration requirements using the cost function.
We extend BEETLE, a model-based BRL method, for learning in the environment
with cost constraints. We demonstrate the cost-sensitive exploration behaviour in
a number of simulated problems.
1
Introduction
In reinforcement learning (RL), the agent interacts with a (partially) unknown environment, classically assumed to be a Markov decision process (MDP), with the goal of maximizing its expected
long-term total reward. The agent faces the exploration-exploitation dilemma: the agent must select actions that exploit its current knowledge about the environment to maximize reward, but it
also needs to select actions that explore for more information so that it can act better. Bayesian RL
(BRL) [1, 2, 3, 4] provides a principled framework to the exploration-exploitation dilemma.
However, exploratory actions may have serious consequences. For example, a robot exploring in an
unfamiliar terrain may reach a dangerous location and sustain heavy damage, or wander off from the
recharging station to the point where a costly rescue mission is required. In a less mission critical
scenario, a route recommendation system that learns actual travel times should be aware of toll fees
associated with different routes. Therefore, the agent needs to carefully (if not completely) avoid
critical situations while exploring to gain more information.
The constrained MDP (CMDP) extends the standard MDP to account for limited resources or multiple objectives [5]. The CMDP assumes that executing an action incurs a cost and a reward that should
be optimized separately. Assuming the expected total reward and cost criterion, the goal is to find
an optimal policy that maximizes the expected total reward while bounding the expected total cost.
Since we can naturally encode undesirable behaviors into the cost function, we formulate the cost-sensitive exploration problem as RL in the environment modeled as a CMDP.
Note that we can employ other criteria for the cost constraint in CMDPs. We can keep the actual total cost below the cost bound with probability one using the sample-path cost constraints [6, 7], or with probability 1 − δ using the percentile cost constraints [8]. In this paper, we restrict ourselves
to the expected total cost constraint mainly due to the computational efficiency in solving the constrained optimization problem. Extending our work to other cost criteria is left as future work. The
main argument we make is that the CMDP provides a natural framework for representing various
approaches to constrained exploration, such as safe exploration [9, 10].
In order to perform cost-sensitive exploration in the Bayesian RL (BRL) setting, we cast the problem
as a constrained partially observable MDP (CPOMDP) [11, 12] planning problem. Specifically, we
take a model-based BRL approach and extend BEETLE [4] to solve the CPOMDP which models
BRL with cost constraints.
2 Background
In this section, we review the background for cost-sensitive exploration in BRL. As we explained
in the previous section, we assume that the environment is modeled as a CMDP, and formulate
model-based BRL as a CPOMDP. We briefly review the CMDP and CPOMDP before summarizing
BEETLE, a model-based BRL method for environments without cost constraints.
2.1 Constrained MDPs (CMDPs) and Constrained POMDPs (CPOMDPs)
The standard (infinite-horizon discounted return) MDP is defined by a tuple ⟨S, A, T, R, γ, b_0⟩ where: S is the set of states s; A is the set of actions a; T(s, a, s′) is the transition function which denotes the probability Pr(s′|s, a) of changing to state s′ from s by executing action a; R(s, a) ∈ R is the reward function which denotes the immediate reward of executing action a in state s; γ ∈ [0, 1) is the discount factor; b_0(s) is the initial state probability for state s. b_0 is optional, since an optimal policy π* : S → A that maps from states to actions can be shown not to be dependent on b_0.
The constrained MDP (CMDP) is defined by a tuple ⟨S, A, T, R, C, ĉ, γ, b_0⟩ with the following additional components: C(s, a) ∈ R is the cost function which denotes the immediate cost incurred by executing action a in state s; ĉ is the bound on the expected total discounted cost.
An optimal policy of a CMDP maximizes the expected total discounted reward over the infinite horizon, while not incurring more than ĉ total discounted cost in expectation. We can formalize this constrained optimization problem as:
max_π V^π   s.t.   C^π ≤ ĉ,

where V^π = E_{π,b_0}[Σ_{t=0}^{∞} γ^t R(s_t, a_t)] is the expected total discounted reward, and C^π = E_{π,b_0}[Σ_{t=0}^{∞} γ^t C(s_t, a_t)] is the expected total discounted cost. We will also use C^π(s) to denote the expected total cost starting from the state s.
It has been shown that an optimal policy for a CMDP is generally a randomized stationary policy [5]. Hence, we define a policy π as a mapping of states to probability distributions over actions, where π(s, a) denotes the probability that an agent will execute action a in state s. We can find an optimal
policy by solving the following linear program (LP):
max_x  Σ_{s,a} R(s, a) x(s, a)   (1)
s.t.   Σ_a x(s′, a) − γ Σ_{s,a} x(s, a) T(s, a, s′) = b_0(s′)  ∀s′
       Σ_{s,a} C(s, a) x(s, a) ≤ ĉ   and   x(s, a) ≥ 0  ∀s, a
The variables x are related to the occupancy measure of an optimal policy, where x(s, a) is the expected discounted number of times action a is executed in state s. If the above LP yields a feasible solution, an optimal policy can be obtained by π(s, a) = x(s, a) / Σ_{a′} x(s, a′). Note that due to the introduction of cost constraints, the resulting optimal policy is contingent on the initial state distribution b_0, in contrast to the standard MDP, for which an optimal policy can be independent of the initial state distribution. Note also that the above LP may be infeasible if there is no policy that can satisfy the cost constraint.
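The LP (1) is small enough to hand to an off-the-shelf solver. The sketch below sets it up with scipy.optimize.linprog for dense T, R, and C arrays; it is a minimal illustration that makes no attempt to exploit sparsity.

import numpy as np
from scipy.optimize import linprog

def solve_cmdp(T, R, C, b0, c_hat, gamma):
    # T: (S, A, S) transitions; R, C: (S, A); b0: (S,).  Decision variables
    # are the occupancy measures x(s, a), flattened row-major as in (1).
    S, A = R.shape
    A_eq = np.zeros((S, S * A))
    for sp in range(S):
        A_eq[sp, sp * A:(sp + 1) * A] += 1.0          # sum_a x(s', a)
        A_eq[sp] -= gamma * T[:, :, sp].reshape(-1)   # -gamma sum x(s,a) T(s,a,s')
    res = linprog(c=-R.reshape(-1),                   # maximize total reward
                  A_ub=C.reshape(1, -1), b_ub=[c_hat],
                  A_eq=A_eq, b_eq=b0, bounds=(0, None))
    if not res.success:
        return None                                   # no feasible policy
    x = res.x.reshape(S, A)
    denom = x.sum(axis=1, keepdims=True)
    # pi(s, a) = x(s, a) / sum_a' x(s, a'); uniform where occupancy is zero.
    return np.divide(x, denom, out=np.full_like(x, 1.0 / A), where=denom > 1e-12)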
The constrained POMDP (CPOMDP) extends the standard POMDP in a similar manner. The standard POMDP is defined by a tuple ⟨S, A, Z, T, O, R, γ, b_0⟩ with the following additional components: the set Z of observations z, and the observation probability O(s′, a, z) representing the probability Pr(z|s′, a) of observing z when executing action a and changing to state s′. The states in the POMDP are hidden to the agent, and it has to act based on the observations instead.
Algorithm 1: Point-based backup of α-vector pairs with admissible cost
input : (b, d) with belief state b and admissible cost d; set Γ of α-vector pairs
output: set Γ̂_(b,d) of α-vector pairs (contains at most 2 pairs for a single cost function)
// regress
foreach a ∈ A do
    α_R^{a,*} = R(·, a),  α_C^{a,*} = C(·, a)
    foreach (α_{i,R}, α_{i,C}) ∈ Γ, z ∈ Z do
        α_{i,R}^{a,z}(s) = Σ_{s′} T(s, a, s′) O(s′, a, z) α_{i,R}(s′)
        α_{i,C}^{a,z}(s) = Σ_{s′} T(s, a, s′) O(s′, a, z) α_{i,C}(s′)
// backup for each action
foreach a ∈ A do
    Solve the following LP to obtain the best randomized action at the next time step:
        max_{ŵ_{iz}, d_z}  b · Σ_{i,z} ŵ_{iz} α_{i,R}^{a,z}
        subject to  b · Σ_i ŵ_{iz} α_{i,C}^{a,z} ≤ d_z  ∀z;  Σ_i ŵ_{iz} = 1  ∀z;  ŵ_{iz} ≥ 0  ∀i, z;  Σ_z d_z = (1/γ)(d − C(b, a))
    α_R^a = α_R^{a,*} + γ Σ_{i,z} ŵ_{iz} α_{i,R}^{a,z}
    α_C^a = α_C^{a,*} + γ Σ_{i,z} ŵ_{iz} α_{i,C}^{a,z}
// find the best randomized action for the current time step
Solve the following LP:
    max_{w_a}  b · Σ_a w_a α_R^a
    subject to  b · Σ_a w_a α_C^a ≤ d;  Σ_a w_a = 1;  w_a ≥ 0  ∀a
return Γ̂_(b,d) = {(α_R^a, α_C^a) | w_a > 0}
The CPOMDP is defined by adding the cost function C and the cost bound ĉ to the definition, as in the CMDP. Although the CPOMDP is intractable to solve, as is the case with the POMDP, there exists an efficient point-based algorithm [12].
The Bellman backup operator for the CPOMDP generates pairs of α-vectors (α_R, α_C), each vector corresponding to the expected total reward and cost, respectively. In order to facilitate defining the Bellman backup operator at a belief state, we augment the belief state with a scalar quantity called admissible cost [13], which represents the expected total cost that can be additionally incurred in future time steps without violating the cost constraint. Suppose that, at time step t, the agent has so far incurred a total cost of W_t, i.e., W_t = Σ_{τ=0}^{t} γ^τ C(s_τ, a_τ). The admissible cost at time step t + 1 is defined as d_{t+1} = (1/γ^{t+1})(ĉ − W_t). It can be computed recursively by the equation d_{t+1} = (1/γ)(d_t − C(s_t, a_t)), which can be derived from W_t = W_{t−1} + γ^t C(s_t, a_t), and d_0 = ĉ. Given a pair of belief state and admissible cost (b, d) and the set of α-vector pairs Γ = {(α_{i,R}, α_{i,C})}, the best (randomized) action is obtained by solving the following LP:
max_{w_i}  b · Σ_i w_i α_{i,R}
subject to  b · Σ_i w_i α_{i,C} ≤ d,  Σ_i w_i = 1,  w_i ≥ 0  ∀i,

where w_i corresponds to the probability of choosing the action associated with the pair (α_{i,R}, α_{i,C}).
The point-based backup for the CPOMDP leveraging the above LP formulation is shown in Algorithm 1.¹
¹Note that this algorithm is an improvement over the heuristic distribution of the admissible cost to each observation by ratio Pr(z|b, a) in [12]. Instead, we optimize the cost distribution by solving an LP.
2.2 BEETLE
BEETLE [4] is a model-based BRL algorithm, based on the idea that BRL can be formulated as a POMDP planning problem. Assuming that the environment is modeled as a discrete-state MDP P = ⟨S, A, T, R, γ⟩ where the transition function T is unknown, we treat each transition probability T(s, a, s′) as an unknown parameter θ_a^{s,s′} and formulate BRL as a hyperstate POMDP ⟨S_P, A_P, Z_P, T_P, O_P, R_P, γ, b_0⟩ where S_P = S × {θ_a^{s,s′}}, A_P = A, Z_P = S, T_P(s, θ, a, s′, θ′) = θ_a^{s,s′} δ_θ(θ′), O_P(s′, θ′, a, z) = δ_{s′}(z), and R_P(s, θ, a) = R(s, a). In summary, the hyperstate POMDP augments the original state space with the set of unknown parameters {θ_a^{s,s′}}, since the agent has to take actions without exact information on the unknown parameters.
The belief state b in the hyperstate POMDP yields the posterior of θ. Specifically, assuming a product of Dirichlets for the belief state such that

b(θ) = Π_{s,a} Dir(θ_a^{s,·}; n_a^{s,·}),

where θ_a^{s,·} is the parameter vector of the multinomial distribution defining the transition function for state s and action a, and n_a^{s,·} is the hyperparameter vector of the corresponding Dirichlet distribution. Since the hyperparameter n_a^{s,s′} can be viewed as a pseudocount, i.e., the number of times of observing transition (s, a, s′), the updated belief after observing transition (ŝ, â, ŝ′) is also a product of Dirichlets:

b_{ŝ,â}^{ŝ′}(θ) = Π_{s,a} Dir(θ_a^{s,·}; n_a^{s,·} + δ_{ŝ,â,ŝ′}(s, a, ·)).

Hence, belief states in the hyperstate POMDP can be represented by |S|²|A| variables, one for each hyperparameter, and the belief update is efficiently performed by incrementing the hyperparameter corresponding to the observed transition.
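The belief representation and its update then amount to a few lines; a minimal sketch follows, where the uniform prior of 1.0 is an assumed choice of uninformative hyperparameters.

import numpy as np

def init_belief(S, A, prior=1.0):
    # Dirichlet hyperparameters n[s, a, s'], one per unknown transition probability.
    return np.full((S, A, S), prior)

def update_belief(n, s, a, s_next):
    # Belief update = increment the pseudocount of the observed transition.
    n = n.copy()
    n[s, a, s_next] += 1.0
    return n

def expected_model(n):
    # Posterior mean transition model E[theta | n].
    return n / n.sum(axis=2, keepdims=True)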
Solving the hyperstate POMDP is performed by dynamic programming with the Bellman backup operator [2]. Specifically, the value function is represented as a set Γ of α-functions for each state s, so that the value of the optimal policy is obtained by V_s(b) = max_{α∈Γ} α_s(b), where α_s(b) = ∫_θ b(θ) α_s(θ) dθ. Using the fact that α-functions are multivariate polynomials of θ, we can obtain an exact solution to the Bellman backup.
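Since the α-functions are polynomials in θ and the belief is a product of Dirichlets, evaluating α(b) reduces to expectations of monomials under a Dirichlet. A minimal sketch for a single Dirichlet factor is shown below (for the full belief, the per-factor expectations multiply across the (s, a) factors):

import numpy as np
from scipy.special import gammaln

def monomial_expectation(alpha, powers):
    # E[prod_i theta_i^{k_i}] under Dirichlet(alpha), computed in log space.
    a0, k = alpha.sum(), powers.sum()
    return np.exp(gammaln(alpha + powers).sum() - gammaln(alpha).sum()
                  + gammaln(a0) - gammaln(a0 + k))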
There are two computational challenges with the hyperstate POMDP approach. First, being a
POMDP, the Bellman backup has to be performed on all possible belief states in the probability
simplex. BEETLE adopts Perseus [14], performing randomized point-based backups confined to
the set of sampled (s, b) pairs by simulating a default or random policy, and reducing the total
number of value backups by improving the value of many belief points through a single backup.
Second, the number of monomial terms in the α-function increases exponentially with the number of backups. BEETLE chooses a fixed set of basis functions and projects the α-function onto a linear
combination of these basis functions. The set of basis functions is chosen to be the set of monomials
extracted from the sampled belief states.
3 Constrained BEETLE (CBEETLE)
We take an approach similar to BEETLE for cost-sensitive exploration in BRL. Specifically, we formulate cost-sensitive BRL as a hyperstate CPOMDP ⟨S_P, A_P, Z_P, T_P, O_P, R_P, C_P, ĉ, γ, b_0⟩ where S_P = S × {θ_a^{s,s′}}, A_P = A, Z_P = S, T_P(s, θ, a, s′, θ′) = θ_a^{s,s′} δ_θ(θ′), O_P(s′, θ′, a, z) = δ_{s′}(z), R_P(s, θ, a) = R(s, a), and C_P(s, θ, a) = C(s, a).
Note that using the cost function C and cost bound ĉ to encode the constraints on the exploration behaviour allows us to enjoy the same flexibility as using the reward function to define the task objective in the standard MDP and POMDP. Although, for the sake of exposition, we use a single cost function and discount factor in our definition of CMDP and CPOMDP, we can generalize the model to have multiple cost functions that capture different aspects of exploration behaviour that cannot be put together on the same scale, and different discount factors for rewards and costs. In addition, we can even completely eliminate the possibility of executing action a in state s by setting the discount factor to 1 for the cost constraint and imposing a sufficiently low cost bound ĉ < C(s, a).
Algorithm 2: Point-based backup of α-function pairs for the hyperstate CPOMDP²
input : (s, n, d) with state s, Dirichlet hyperparameter n representing belief state b, and admissible cost d; set Γ_s of α-function pairs for each state s
output: set Γ̂_(s,n,d) of α-function pairs (contains at most 2 pairs for a single cost function)
// regress
foreach a ∈ A do
    α_R^{a,*} = R(s, a),  α_C^{a,*} = C(s, a)   // constant functions
    foreach s′ ∈ S, (α_{i,R}, α_{i,C}) ∈ Γ_{s′} do
        α_{i,R}^{a,s′} = θ_a^{s,s′} α_{i,R},  α_{i,C}^{a,s′} = θ_a^{s,s′} α_{i,C}   // multiplied by variable θ_a^{s,s′}
// backup for each action
foreach a ∈ A do
    Solve the following LP to obtain the best randomized action at the next time step:
        max_{ŵ_{is′}, d_{s′}}  Σ_{i,s′} ŵ_{is′} α_{i,R}^{a,s′}(b)
        subject to  Σ_i ŵ_{is′} α_{i,C}^{a,s′}(b) ≤ d_{s′}  ∀s′;  Σ_i ŵ_{is′} = 1  ∀s′;  ŵ_{is′} ≥ 0  ∀i, s′;  Σ_{s′} d_{s′} = (1/γ)(d − C(s, a))
    α_R^a = α_R^{a,*} + γ Σ_{i,s′} ŵ_{is′} α_{i,R}^{a,s′},   α_C^a = α_C^{a,*} + γ Σ_{i,s′} ŵ_{is′} α_{i,C}^{a,s′}
// find the best randomized action for the current time step
Solve the following LP:
    max_{w_a}  Σ_a w_a α_R^a(b)
    subject to  Σ_a w_a α_C^a(b) ≤ d;  Σ_a w_a = 1;  w_a ≥ 0  ∀a
return Γ̂_(s,n,d) = {(α_R^a, α_C^a) | w_a > 0}
We call our algorithm CBEETLE, which solves the hyperstate CPOMDP planning problem. As in BEETLE, α-vectors for the expected total reward and cost are represented as α-functions in terms of the unknown parameters. The point-based backup operator in Algorithm 1 naturally extends to α-functions without a significant increase in computational complexity: the size of the LP does not increase even though the belief states represent probability distributions over unknown parameters. Algorithm 2 shows the point-based backup of α-functions in the hyperstate CPOMDP. In addition, if we choose a fixed set of basis functions for representing α-functions, we can pre-compute the projections of the α-functions (T̃, R̃, and C̃) in the same way as BEETLE. This technique is used in the point-based backup, although not explicitly described in the pseudocode due to the page limit.
We also implemented the randomized point-based backup to further improve the performance. The key step in the randomized value update is to check whether a newly generated set of α-function pairs Γ = {(α_{i,R}, α_{i,C})} from a point-based backup yields an improved value at some other sampled belief state (s, n, d). We can obtain the value of Γ at the belief state by solving the following LP:

max_{w_i}  Σ_i w_i α_{i,R}(b)   (2)
subject to  Σ_i w_i α_{i,C}(b) ≤ d,  Σ_i w_i = 1,  w_i ≥ 0  ∀i.
If we can find an improved value, we skip the point-based backup at (s, n, d) in the current iteration.
Algorithm 3 shows the randomized point-based value update.
In summary, the point-based value iteration algorithm for the CPOMDP and BEETLE readily provide all the essential computational tools to implement the hyperstate CPOMDP planning for cost-sensitive BRL.
²The α-functions in the pseudocode are functions of θ, and α(b) is defined to be ∫_θ b(θ) α(θ) dθ, as explained in Sec. 2.2.
Algorithm 3: Randomized point-based value update for the hyperstate CPOMDP
input : set B of sampled belief points, and set Γ_s of α-function pairs for each state s
output: set Γ̂_s of α-function pairs (updated value function)
// initialize
B̃ = B   // belief points needing improvement
foreach s ∈ S do
    Γ̂_s = ∅
// randomized backup
while B̃ ≠ ∅ do
    Sample b̃ = (s̃, ñ, d̃) ∈ B̃
    Obtain Γ̂_b̃ by point-based backup at b̃ with {Γ_s | ∀s ∈ S} (Algorithm 2)
    Γ̂_s̃ = Γ̂_s̃ ∪ Γ̂_b̃
    foreach b ∈ B do
        Calculate V′(b) by solving the LP in Eqn. (2) with Γ̂_b̃
    B̃ = {b ∈ B : V′(b) < V(b)}
return {Γ̂_s | ∀s ∈ S}
Figure 1: (a) 5-state chain: each edge is labeled with the action, reward, and cost associated with the transition. (b) 6 × 7 maze: a 6 × 7 grid including the start location with recharging station (S), goal location (G), and 3 flags to capture.
4 Experiments
We used the constrained versions of two standard BRL problems to demonstrate the cost-sensitive
exploration. The first one is the 5-state chain [15, 16, 4], and the second one is the 6 × 7 maze [16].
4.1 Description of Problems
The 5-state chain problem is shown in Figure 1a, where the agent has two actions 1 and 2. The agent
receives a large reward of 10 by executing action 1 in state 5, or a small reward of 2 by executing
action 2 in any state. With probability 0.2, the agent slips and makes the transition corresponding
to the other action. We defined the constrained version of the problem by assigning a cost of 1 for
action 1 in every state, thus making the consecutive execution of action 1 potentially violate the cost
constraint.
The 6 × 7 maze problem is shown in Figure 1b, where the white cells are navigable locations and
gray cells are walls that block navigation. There are 5 actions available to the agent: move left, right,
up, down, or stay. Every ?move? action (except for the stay action) can fail with probability 0.1,
resulting in a slip to two nearby cells that are perpendicular to the intended direction. If the agent
bumps into a wall, the action will have no effect. The goal of this problem is to capture as many
flags as possible and reach the goal location. Upon reaching the goal, the agent obtains a reward
equal to the number of flags captured, and the agent gets warped back to the start location. Since
there are 33 reachable locations in the maze and 8 possible combinations for the status of captured
flags, there are a total of 264 states. We defined the constrained version of the problem by assuming
that the agent is equipped with a battery and every action consumes energy except the stay action at the recharging station. We modeled the power consumption by assigning a cost of 0 for executing the stay action at the recharging station, and a cost of 1 otherwise. Thus, battery recharging is done by executing the stay action at the recharging station, as the admissible cost increases by a factor of 1/γ.³
4.2 Results
Table 1 summarizes the experimental results for the constrained chain and maze problems.
In the chain problem, we used two structural prior models, "tied" and "semi", among the three priors
experimented in [4]. Both chain-tied and chain-semi assume that the transition dynamics are known
to the agent except for the slip probabilities. In chain-tied, the slip probability is assumed to be
independent of state and action, thus there is only one unknown parameter in transition dynamics.
In chain-semi, the slip probability is assumed to be action dependent, thus there are two unknown
parameters since there are two actions. We used uninformative Dirichlet priors in both settings.
We excluded experimenting with the "full" prior model (completely unknown transition dynamics)
since even BEETLE was not able to learn a near-optimal policy as reported in [4].
We report the average discounted total reward and cost as well as their 95% confidence intervals
for the first 1000 time steps using 200 simulated trials. We performed 60 Bellman iterations on 500
belief states, and used the first 50 belief states for choosing the set of basis functions. The discount
factor was set to 0.99.
When ĉ=100, which is the maximum expected total cost that can be incurred by any policy, CBEETLE found policies as good as the policy found by BEETLE, since the cost constraint has no effect. As we impose tighter cost constraints with ĉ=75, 50, and 25, the policies start to trade off reward in order to meet the cost constraint. Note also that, although we use approximations in various stages of the algorithm, ĉ is within the confidence intervals of the average total cost, meaning that the cost constraint is either met or violated by statistically insignificant amounts. Since chain-semi has more unknown parameters than chain-tied, it is natural that the performance of the CBEETLE policy is slightly degraded in chain-semi. Note also that as we impose tighter cost constraints, the running times generally increase. This is because the cost constraint in the LP tends to become active at more belief states, generating two α-function pairs instead of the single pair generated when the cost constraint in the LP is not active.
The results for the maze problem were calculated for the first 2000 time steps using 100 simulated trials. We performed 30 Bellman iterations on 2000 belief states, and used 50 basis functions. Due to the computational requirement for solving the large hyperstate CPOMDP, we only experimented with the 'tied' prior model, which assumes that the slip probability is shared by every state and action. Running CBEETLE with ĉ = 1/(1 − 0.95) = 20 is equivalent to running BEETLE without cost constraints, as verified in the table.
We further analyzed the cost-sensitive exploration behavior in the maze problem. Figure 2 compares the policy behaviors of BEETLE and CBEETLE (ĉ=18) in the maze problem. The BEETLE policy generally captures the top flag first (Figure 2a), then navigates straight to the goal (Figure 2b) or captures the right flag and navigates to the goal (Figure 2c). If it captures the right flag first, it then navigates to the goal (Figure 2d) or captures the top flag and navigates to the goal (Figure 2e). We suspect that the third flag on the left is not captured because of the relatively low discount rate, and is hence ignored due to numerical approximations. The CBEETLE policy shows a similar capture behavior, but it stays at the recharging station for a number of time steps between the first and second flag captures, which can be confirmed by the high state visitation frequency for cell S in Figures 2g and 2i. This is because the policy cannot navigate to the other flag position and move to the goal without recharging the battery in between. The agent also frequently visits the recharging station before the first flag capture (Figure 2f) because it actively explores for the first flag under high uncertainty in the dynamics.
³ It may seem odd that the battery recharges at an exponential rate. We can set γ = 1 and make the cost function assign, e.g., a cost of -1 for recharging and 1 for consuming, but our implementation currently assumes the same discount factor for the rewards and costs. Implementation for different discount factors is left as future work, but note that we can still obtain meaningful results with γ sufficiently close to 1.
Table 1: Experimental results for the chain and maze problems.

problem              | algorithm | ĉ   | utopic value | avg discounted total reward | avg discounted total cost | time (minutes)
---------------------|-----------|-----|--------------|-----------------------------|---------------------------|---------------
chain-tied           | BEETLE    | ∞   | 354.77       | 351.11±8.42                 | –                         | 1.0
(|S| = 5, |A| = 2)   | CBEETLE   | 100 | 354.77       | 354.68±8.57                 | 100.00±0                  | 2.4
                     |           | 75  | 325.75       | 287.70±8.17                 | 75.05±0.14                | 2.4
                     |           | 50  | 296.73       | 264.97±7.06                 | 49.96±0.09                | 44.3
                     |           | 25  | 238.95       | 212.19±4.98                 | 25.12±0.13                | 80.59
chain-semi           | BEETLE    | ∞   | 354.77       | 351.11±8.42                 | –                         | 1.6
(|S| = 5, |A| = 2)   | CBEETLE   | 100 | 354.77       | 354.68±8.57                 | 100.00±0                  | 3.7
                     |           | 75  | 325.75       | 287.64±8.16                 | 75.05±0.14                | 3.8
                     |           | 50  | 296.73       | 256.76±7.23                 | 50.09±0.14                | 70.7
                     |           | 25  | 238.95       | 204.84±4.51                 | 25.01±0.16                | 139.3
maze-tied            | BEETLE    | ∞   | 1.03         | 1.02±0.02                   | –                         | 159.8
(|S| = 264, |A| = 5) | CBEETLE   | 20  | 1.03         | 1.02±0.02                   | 19.04±0.02                | 242.5
                     |           | 18  | 0.97         | 0.93±0.04                   | 17.96±0.46                | 733.1
Figure 2: State visitation frequencies of each location in the maze problem over 100 runs. Brightness is proportional to the relative visitation frequency. (a-e) Behavior of BEETLE (a) before the first flag capture, (b) after the top flag captured first, (c) after the top flag captured first and the right flag second, (d) after the right flag captured first, and (e) after the right flag captured first and the top flag second. (f-j) Behavior of CBEETLE (ĉ = 18). The yellow star represents the current location of the agent.
5 Conclusion
In this paper, we proposed CBEETLE, a model-based BRL algorithm for cost-sensitive exploration, extending BEETLE to solve the hyperstate CPOMDP that models BRL with cost constraints. We showed that cost-sensitive BRL can be effectively solved by randomized point-based value iteration for CPOMDPs. Experimental results show that CBEETLE can learn reasonably good policies for the underlying CMDPs while exploring the unknown environment cost-sensitively.

While our experiments show that the policies generally satisfy the cost constraints, the constraints can still potentially be violated since we approximate the alpha functions using a finite number of basis functions. As future work, we plan to focus on making CBEETLE more robust to these approximation errors by performing a constrained optimization when approximating the alpha functions, to guarantee that the cost constraints are never violated.
Acknowledgments
This work was supported by National Research Foundation of Korea (Grant# 2012-007881), the
Defense Acquisition Program Administration and Agency for Defense Development of Korea (Contract# UD080042AD), and the SW Computing R&D Program of KEIT (2011-10041313) funded by
the Ministry of Knowledge Economy of Korea.
References
[1] R. Howard. Dynamic programming. MIT Press, 1960.
[2] M. Duff. Optimal learning: Computational procedures for Bayes-adaptive Markov decision processes. PhD thesis, University of Massachusetts, Amherst, 2002.
[3] S. Ross, J. Pineau, B. Chaib-draa, and P. Kreitmann. A Bayesian approach for learning and planning in partially observable Markov decision processes. Journal of Machine Learning Research, 12, 2011.
[4] P. Poupart, N. Vlassis, J. Hoey, and K. Regan. An analytic solution to discrete Bayesian reinforcement learning. In Proc. of ICML, 2006.
[5] E. Altman. Constrained Markov Decision Processes. Chapman & Hall/CRC, 1999.
[6] K. W. Ross and R. Varadarajan. Markov decision processes with sample path constraints - the communicating case. Operations Research, 37(5):780-790, 1989.
[7] K. W. Ross and R. Varadarajan. Multichain Markov decision processes with a sample path constraint - a decomposition approach. Mathematics of Operations Research, 16(1):195-207, 1991.
[8] E. Delage and S. Mannor. Percentile optimization for Markov decision processes with parameter uncertainty. Operations Research, 58(1), 2010.
[9] A. Hans, D. Schneegaß, A. M. Schäfer, and S. Udluft. Safe exploration for reinforcement learning. In Proc. of 16th European Symposium on Artificial Neural Networks, 2008.
[10] T. M. Moldovan and P. Abbeel. Safe exploration in Markov decision processes. In Proc. of NIPS Workshop on Bayesian Optimization, Experimental Design and Bandits, 2011.
[11] J. D. Isom, S. P. Meyn, and R. D. Braatz. Piecewise linear dynamic programming for constrained POMDPs. In Proc. of AAAI, 2008.
[12] D. Kim, J. Lee, K.-E. Kim, and P. Poupart. Point-based value iteration for constrained POMDPs. In Proc. of IJCAI, 2011.
[13] A. B. Piunovskiy and X. Mao. Constrained Markovian decision processes: the dynamic programming approach. Operations Research Letters, 27(3):119-126, 2000.
[14] M. T. J. Spaan and N. Vlassis. Perseus: Randomized point-based value iteration for POMDPs. Journal of Artificial Intelligence Research, 24, 2005.
[15] R. Dearden, N. Friedman, and D. Andre. Bayesian Q-learning. In Proc. of AAAI, 1998.
[16] M. Strens. A Bayesian framework for reinforcement learning. In Proc. of ICML, 2000.
A Unifying Probabilistic Model
Abram L. Friesen
University of Washington
[email protected]
Yanping Huang
University of Washington
[email protected]
Michael N. Shadlen
Columbia University
Howard Hughes Medical Institute
[email protected]
Timothy D. Hanks
Princeton University
[email protected]
Rajesh P. N. Rao
University of Washington
[email protected]
Abstract
How does the brain combine prior knowledge with sensory evidence when making
decisions under uncertainty? Two competing descriptive models have been proposed based on experimental data. The first posits an additive offset to a decision
variable, implying a static effect of the prior. However, this model is inconsistent
with recent data from a motion discrimination task involving temporal integration
of uncertain sensory evidence. To explain this data, a second model has been proposed which assumes a time-varying influence of the prior. Here we present a
normative model of decision making that incorporates prior knowledge in a principled way. We show that the additive offset model and the time-varying prior
model emerge naturally when decision making is viewed within the framework
of partially observable Markov decision processes (POMDPs). Decision making
in the model reduces to (1) computing beliefs given observations and prior information in a Bayesian manner, and (2) selecting actions based on these beliefs
to maximize the expected sum of future rewards. We show that the model can
explain both data previously explained using the additive offset model as well as
more recent data on the time-varying influence of prior knowledge on decision
making.
1
Introduction
A fundamental challenge faced by the brain is to combine noisy sensory information with prior
knowledge in order to perceive and act in the natural world. It has been suggested (e.g., [1, 2, 3, 4])
that the brain may solve this problem by implementing an approximate form of Bayesian inference.
These models however leave open the question of how actions are chosen given probabilistic representations of hidden state obtained through Bayesian inference. Daw and Dayan [5, 6] were among
the first to study decision theoretic and reinforcement learning models with the goal of interpreting
results from various neurobiological experiments. Bogacz and colleagues proposed a model that
combines a traditional decision making model with reinforcement learning [7] (see also [8, 9]).
In the decision making literature, two apparently contradictory models have been suggested to explain how the brain utilizes prior knowledge in decision making: (1) a model that adds an offset to a
1
decision variable, implying a static effect of changes to the prior probability [10, 11, 12], and (2) a
model that adds a time varying weight to the decision variable, representing the changing influence
of prior probability over time [13]. The LATER model (Linear Approach to Threshold with Ergodic
Rate), an instance of the additive offset model, incorporates prior probability as the starting point
of a linearly rising decision variable and successfully predicts changes to saccade latency when discriminating between two low contrast stimuli [10]. However, the LATER model fails to explain data
from the random dots motion discrimination task [14] in which the agent is presented with noisy,
time-varying stimuli and must continually process this data in order to make a correct choice and
receive reward. The drift diffusion model (DDM), which uses a random walk accumulation, instead
of a linear rise to a boundary, has been successful in explaining behavioral and neurophysiological
data in various perceptual discrimination tasks [14, 15, 16]. However, in order to explain behavioral
data from recent variants of random dots tasks in which the prior probability of motion direction is
manipulated [13], DDMs require the additional assumption of dynamic reweighting of the influence
of the prior over time.
Here, we present a normative framework for decision making that incorporates prior knowledge and
noisy observations under a reward maximization hypothesis. Our work is inspired by models which
cast human and animal decision making in a rational, or optimal, framework. Frazier & Yu [17]
used dynamic programming to derive an optimal strategy for two-alternative forced choice tasks
under a stochastic deadline. Rao [18] proposed a neural model for decision making based on the
framework of partially observable Markov decision processes (POMDPs) [19]; the model focuses
on network implementation and learning but assumes a fixed deadline to explain the collapsing
decision threshold seen in many decision making tasks. Drugowitsch et al. [9] sought to explain
the collapsing decision threshold by combining a traditional drift diffusion model with reward rate
maximization; their model also requires knowledge of decision time in hindsight. In this paper,
we derive a novel POMDP model from which we compute the optimal behavior for sequential
decision making tasks. We demonstrate our model?s explanatory power on two such tasks: the
random dots motion discrimination task [13] and Carpenter and Williams? saccadic eye movement
task [10]. We show that the urgency signal, hypothesized in previous models, emerges naturally as a
collapsing decision boundary with no assumption of a decision deadline. Furthermore, our POMDP
formulation enables incorporation of partial or incomplete prior knowledge about the environment.
By fitting model parameters to the psychometric function in the neutral prior condition (equal prior
probability of either direction), our model accurately predicts both the psychometric function and
the reaction times for the biased (unequal prior probability) case, without introducing additional free
parameters. Finally, the same model also accurately predicts the effect of prior probability changes
on the distribution of reaction times in the Carpenter and Williams task, data that was previously
interpreted in terms of the additive offset model.
2 Decision Making in a POMDP framework

2.1 Model Setup
We model a decision making task using a POMDP, which assumes that at any particular time step, t, the environment is in a particular hidden state, x ∈ X, that is not directly observable by the animal. The animal can make sensory measurements in order to observe noisy samples of this hidden state. At each time step, the animal receives an observation (stimulus), st, from the environment as determined by an emission distribution, Pr(st|x). The animal must maintain a belief over the set of possible true world states, given the observations it has made so far: bt(x) = Pr(x|s1:t), where s1:t represents the sequence of stimuli that the animal has received so far, and b0(x) represents the animal's prior knowledge about the environment. At each time step, the animal chooses an action, a ∈ A, and receives an observation and a reward, R(x, a), from the environment, depending on the current state and the action taken. The animal uses Bayes' rule to update its belief about the environment after each observation. Through these interactions, the animal learns a policy, π(b) ∈ A for all b, which dictates the action to take for each belief state. The goal is to find an optimal policy, π*(b), that maximizes the animal's total expected future reward in the task.
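This interaction can be summarized in a few lines of Python. The sketch below is schematic: sample_obs, bayes_update, and policy are placeholders for the emission model, belief update, and learned policy developed in the following sections.

def run_trial(b0, policy, sample_obs, bayes_update, reward):
    """Schematic POMDP episode: act from the belief, sample, update."""
    b, total = b0, 0.0
    while True:
        a = policy(b)              # action selected from the current belief
        total += reward(b, a)      # accrue the expected reward r(b, a)
        if a != "sample":          # a terminal action ends the trial
            return a, total
        s = sample_obs()           # next stimulus s_t ~ Pr(s | x)
        b = bayes_update(b, s)     # posterior update via Bayes' rule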
For example, in the random dots motion discrimination task, the hidden state, x, is composed of both the coherence of the random dots c ∈ [0, 1] and the direction d ∈ {−1, 1} (corresponding to leftward and rightward motion, respectively), neither of which are known to the animal. The animal is shown a movie of randomly moving dots, a fraction of which are moving in the same direction (this fraction is the coherence). The movie is modeled as a sequence of time varying stimuli s1:t. Each frame, st, is a snapshot of the changes in dot positions, sampled from the emission distribution st ∼ Pr(st|kc, d), where k > 0 is a free parameter that determines the scale of st. In order to discriminate the direction given the stimuli, the animal uses Bayes' rule to compute the posterior probability of the static joint hidden state, Pr(x = kdc|s1:t)¹. At each time step, the animal chooses one of three actions, a ∈ {AR, AL, AS}, denoting rightward eye movement, leftward eye movement, and sampling (i.e., waiting for one more observation), respectively. When the animal makes a correct choice (i.e., a rightward eye movement a = AR when x > 0 or a leftward eye movement a = AL when x < 0), the animal receives a positive reward RP > 0. The animal receives a negative reward (penalty) or no reward when an incorrect action is chosen, RN ≤ 0. We assume that the animal is motivated by hunger or thirst to make a decision as quickly as possible and model this with a unit penalty RS = −1, representing the cost the agent needs to pay when choosing the sampling action AS.
2.2 Bayesian Inference of Hidden State from Prior Information and Noisy Observations
In a POMDP, decisions are made based on the belief state bt(x) = Pr(x|s1:t), which is the posterior probability distribution over x given a sequence of observations s1:t. The initial belief b0(x) represents the animal's prior knowledge about x. In both Carpenter and Williams' task [10] and the random dots motion discrimination task [13], prior information about the probability of a specific direction (we use the rightward direction here, dR, without loss of generality) is learned by the subjects: Pr(dR) = Pr(d = 1) = Pr(x > 0) = 1 − Pr(dL). Consider the random dots motion discrimination task. Unlike the traditional case where a full prior distribution is given, this direction-only prior information provides only partial knowledge about the hidden state, which also includes coherence. In the least informative case, only Pr(dR) is known and we model the distribution over the remaining components of x as a uniform distribution. Combining this with the direction prior, which is Bernoulli distributed, gives a piecewise uniform distribution for the prior, b0(x). In the general case, we can express the distribution over coherence as a normal distribution parameterized by μ0 and σ0, resulting in a piecewise normal prior over x,

    b0(x) = Z0⁻¹ N(x | μ0, σ0) × { Pr(dR)  if x ≥ 0;  Pr(dL)  if x < 0 }        (1)

where Zt = Pr(dR)(1 − Φ(0 | μt, σt)) + Pr(dL) Φ(0 | μt, σt) is the normalization factor and Φ(x | μ, σ) = ∫₋∞ˣ N(x′ | μ, σ) dx′ is the cumulative distribution function (CDF) of the normal distribution. The piecewise uniform prior is then just a special case with μ0 = 0 and σ0 = ∞.
We assume the emission distribution is also normally distributed, Pr(st|x) = N(st | x, σe²), which, from Bayes' rule, results in a piecewise normal posterior distribution

    bt(x) = Zt⁻¹ N(x | μt, σt) × { Pr(dR)  if x ≥ 0;  Pr(dL)  if x < 0 }        (2)

where

    μt = (μ0/σ0² + t s̄t/σe²) / (1/σ0² + t/σe²),        (3)

    σt² = (1/σ0² + t/σe²)⁻¹,        (4)

and the running average s̄t = (1/t) Σ_{t′=1}^{t} st′. Consequently, the posterior distribution depends only on s̄ and t, the two sufficient statistics of the sequence s1:t. For the case of a piecewise uniform prior, the variance σt² = σe²/t, which decreases inversely in proportion to elapsed time. Unless otherwise mentioned, we fix σe = 1, σ0 = ∞ and μ0 = 0 for the rest of this paper because we can rescale the POMDP time step to compensate.
¹ In the decision making tasks that we model in this paper, the hidden state is fixed within a trial and thus there is no transition distribution to include in the belief update equation. However, the POMDP framework is entirely valid for time-varying states.
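Because the posterior is summarized by (s̄t, t), the belief update of Eqs. (2)-(4) reduces to tracking a running mean. A sketch of the update (variable names are ours; the second parameter of N(·|μ, σ) is treated as a standard deviation here, and scipy is assumed available):

from scipy.stats import norm

def posterior_params(s_bar, t, mu0=0.0, var0=float("inf"), var_e=1.0):
    """Posterior mean and variance of the piecewise normal belief, Eqs. (3)-(4)."""
    if var0 == float("inf"):                 # piecewise uniform prior
        return s_bar, var_e / t
    prec = 1.0 / var0 + t / var_e            # posterior precision
    mu_t = (mu0 / var0 + t * s_bar / var_e) / prec
    return mu_t, 1.0 / prec

def prob_right(s_bar, t, p_right=0.5, **kw):
    """Posterior probability that x >= 0 (rightward), from Eq. (2)."""
    mu_t, var_t = posterior_params(s_bar, t, **kw)
    phi0 = norm.cdf(0.0, loc=mu_t, scale=var_t ** 0.5)  # Phi(0 | mu_t, sigma_t)
    Z = p_right * (1 - phi0) + (1 - p_right) * phi0     # normalizer Z_t
    return p_right * (1 - phi0) / Z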
2.3 Finding the optimal policy by reward maximization
Within the POMDP framework, the animal's goal is to find an optimal policy π*(bt) that maximizes its expected reward, starting at bt. This is encapsulated in the value function

    v^π(bt) = E[ Σ_{k=1}^{∞} r(b_{t+k}, π(b_{t+k})) | bt, π ]        (5)

where the expectation is taken with respect to all future belief states (b_{t+1}, ..., b_{t+k}, ...) given that the animal is using π to make decisions, and r(b, a) is the reward function over belief states or, equivalently, the expected reward over hidden states, r(b, a) = ∫ₓ R(x, a) b(x) dx. Given the value function, the optimal policy is simply π*(b) = argmax_π v^π(b). In this model, the belief b is parameterized by s̄t and t, so the animal only needs to keep track of these instead of encoding the entire posterior distribution bt(x) explicitly.
In our model, the expected reward r(b, a) = ∫ₓ R(x, a) b(x) dx is

    r(b, a) = RS,                                                                  when a = AS
            = Zt⁻¹ [ RP Pr(dR)(1 − Φ(0 | μt, σt)) + RN Pr(dL) Φ(0 | μt, σt) ],      when a = AR
            = Zt⁻¹ [ RN Pr(dR)(1 − Φ(0 | μt, σt)) + RP Pr(dL) Φ(0 | μt, σt) ],      when a = AL        (6)

where μt and σt are given by (3) and (4), respectively. The above equations can be interpreted as follows. With probability Pr(dL) · Φ(0 | μt, σt), the hidden state x is less than 0, making AR an incorrect decision and resulting in a penalty RN if chosen. Similarly, action AR is correct with probability Pr(dR) · [1 − Φ(0 | μt, σt)] and earns a reward of RP. The inverse is true for AL. When AS is selected, the animal simply receives an observation at a cost of RS.
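Eq. (6) translates directly into code. In the sketch below the reward settings are free parameters; the defaults (RP = 1000, RN = 0, RS = −1) are one choice consistent with the ratio (RN − RP)/RS = 1,000 used later, not values mandated by the model.

from scipy.stats import norm

def expected_reward(action, mu_t, sigma_t, p_right, R_P=1000.0, R_N=0.0, R_S=-1.0):
    """Expected reward r(b, a) of Eq. (6) for a belief with parameters
    (mu_t, sigma_t) and direction prior p_right = Pr(d_R)."""
    if action == "sample":
        return R_S
    phi0 = norm.cdf(0.0, loc=mu_t, scale=sigma_t)       # Phi(0 | mu_t, sigma_t)
    Z = p_right * (1 - phi0) + (1 - p_right) * phi0     # normalizer Z_t
    if action == "right":                               # a = A_R
        return (R_P * p_right * (1 - phi0) + R_N * (1 - p_right) * phi0) / Z
    else:                                               # a = A_L
        return (R_N * p_right * (1 - phi0) + R_P * (1 - p_right) * phi0) / Z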
Computing the value function defined in (5) involves an expectation with respect to future beliefs. Therefore, we need to compute the transition probabilities over belief states, T(b_{t+1} | bt, a), for each action. When the animal chooses to sample, at = AS, the animal's belief distribution at the next time step is computed by marginalizing over all possible observations [19]

    T(b_{t+1} | bt, AS) = ∫ₛ Pr(b_{t+1} | s, bt, AS) Pr(s | bt, AS) ds        (7)

where

    Pr(b_{t+1} | s, bt, AS) = { 1  if b_{t+1}(x) = Pr(s|x) bt(x) / Pr(s | bt, AS), ∀x;  0  otherwise }        (8)

and

    Pr(s | bt, AS) = ∫ₓ Pr(s|x) Pr(x | b, a) dx = E_{x∼b}[Pr(s|x)]        (9)

When choosing AS, the agent does not affect the world state, so, given the current belief bt and an observation s, the updated belief b_{t+1} is deterministic and thus Pr(b_{t+1} | s, bt, AS) is a delta function, following Bayes' rule. The probability Pr(s | bt, AS) can be treated as a normalization factor and is independent of the hidden state². Thus, the transition probability function, T(b_{t+1} | bt, AS), is solely a function of the belief bt and is a stationary distribution over the belief space.
When the selected action is AL or AR, the animal stops sampling and makes an eye movement to the left or the right, respectively. To account for these cases, we include an absorbing terminal state with zero reward for all actions. Whenever the animal chooses AL or AR, the POMDP immediately transitions into this terminal state, indicating the end of a trial.

Given the transition probability between belief states T(b_{t+1} | bt, a) and the reward function, we can convert our POMDP model into a Markov Decision Process (MDP) over the belief state. Standard dynamic programming techniques (e.g., value iteration [20]) can then be applied to compute the value function in (5). A neurally plausible method for learning the optimal policy by trial and error using temporal difference (TD) learning was suggested in [18]. Here, we derive the optimal policy from first principles and focus on comparisons between our model's predictions and behavioral data.
² Explicitly, Pr(s | bt, AS) = Zt⁻¹ N(s | μt, σe² + σt²) [ Pr(dR) + (1 − 2Pr(dR)) Φ(0 | (μt/σt² + s/σe²)/(1/σt² + 1/σe²), (1/σt² + 1/σe²)⁻¹) ].
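Since belief is indexed by (s̄, t), the belief MDP can be solved on a grid by backward induction over t. The sketch below truncates at a horizon T and discretizes s̄; both are numerical devices of ours, not part of the model (which assumes no deadline). The callables reward and predictive stand in for Eq. (6) and the predictive distribution of footnote 2.

import numpy as np

def solve_policy(reward, predictive, T=200, grid=np.linspace(-3, 3, 301), n_mc=200):
    """Backward induction on a (s_bar, t) grid.

    reward(a, s_bar, t)     -> r(b, a) as in Eq. (6)
    predictive(s_bar, t, n) -> n draws of s ~ Pr(s | b_t, A_S)
    Returns policy[t][i] in {'right', 'left', 'sample'} per grid point i."""
    # at the truncation horizon the agent must stop
    V = np.array([max(reward("right", sb, T), reward("left", sb, T)) for sb in grid])
    policy = {}
    for t in range(T - 1, 0, -1):
        V_new = np.empty_like(V)
        acts = []
        for i, sb in enumerate(grid):
            r_R, r_L = reward("right", sb, t), reward("left", sb, t)
            s = predictive(sb, t, n_mc)                 # Monte Carlo observations
            sb_next = (t * sb + s) / (t + 1)            # running-mean update
            q_S = reward("sample", sb, t) + np.interp(sb_next, grid, V).mean()
            vals = [r_R, r_L, q_S]
            V_new[i] = max(vals)
            acts.append(("right", "left", "sample")[int(np.argmax(vals))])
        V, policy[t] = V_new, acts
    return policy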
3 Model Predictions

3.1 Optimal Policy
Figure 1: Optimal policy for Pr(dR) = 0.5 and 0.9. (a-b) Optimal policy as a joint function of s̄ and t. Every point in these figures represents a belief state determined by equations (2), (3) and (4). The color of each point represents the corresponding optimal action. The boundaries θR(t) and θL(t) divide the belief space into three areas: ΩS (center), ΩR (upper) and ΩL (lower), respectively. Model parameters: (RN − RP)/RS = 1,000.
Figure 1(a) shows the optimal policy π* as a joint function of s̄ and t for the unbiased case where the prior probability Pr(dR) = Pr(dL) = 0.5. π* partitions the belief space into three regions: ΩR, ΩL, and ΩS, representing the sets of belief states preferring actions AR, AL and AS, respectively. We define the boundary between AR and AS, and the boundary between AL and AS, as θR(t) and θL(t), respectively. Early in a trial, the model selects the sampling action AS regardless of the value of the observed evidence. This is because the variance of the running average s̄ is high for small t. Later in the trial, the model will choose AR or AL when s̄ is only slightly above 0 because this variance decreases as the model receives more observations. For this reason, the width of ΩS diminishes over time. This gradual decrease in the threshold for choosing one of the non-sampling actions AR or AL has been called a 'collapsing bound' in the decision making literature [21, 17, 22]. For this unbiased prior case, the expected reward function is symmetric, r(bt(x), AR) = r(Pr(x | s̄t, t), AR) = r(Pr(x | −s̄t, t), AL), and thus the decision boundaries are also symmetric around 0: θR(t) = −θL(t).
The optimal policy π* is entirely determined by the reward parameters {RP, RN, RS} and the prior probability (the standard deviation of the emission distribution σe only determines the temporal resolution of the POMDP). It applies to both Carpenter and Williams' task and the random dots task (these two tasks differ only in the interpretation of the hidden state x). The optimal action at a specific belief state is determined by the relative, not the absolute, value of the expected future reward. From (6), we have

    r(b, AL) − r(b, AR) ∝ RN − RP.        (10)

Moreover, if the unit of reward is specified by the sampling penalty, the optimal policy π* is entirely determined by the ratio (RN − RP)/RS and the prior.
As the prior probability becomes biased, the optimal policy becomes asymmetric. When the prior probability, Pr(dR), increases, the decision boundary for the more likely direction (θR(t)) shifts towards the center (the dashed line at s̄ = 0 in figure 1), while the decision boundary for the opposite direction (θL(t)) shifts away from the center, as illustrated in Figure 1(b) for prior Pr(dR) = 0.9. Early in a trial, ΩS has approximately the same width as in the neutral prior case, but it is shifted downwards to favor more sampling for dL (s̄ < 0). Later in a trial, even for some belief states with s̄ < 0, the optimal action is still AR, because the effect of the prior is stronger than that of the observed data.
3.2 Psychometric function and reaction times in the random dots task
We now construct a decision model from the learned policy for the reaction time version of the motion discrimination task [14], and compare the model's predictions to the psychometric and chronometric functions of a monkey performing the same task [13, 14]. Recall that the belief b is parametrized by s̄t and t, so the animal only needs to know the elapsed time and compute a running average s̄t of the observations in order to maintain the posterior belief bt(x). Given its current belief, the animal selects an action from the optimal policy π*(bt). When bt ∈ ΩS, the animal chooses the sampling action and gets a new observation st+1. Otherwise the animal terminates the trial by making an eye movement to the right or to the left, for s̄t > θR(t) or s̄t < θL(t), respectively.

Figure 2: Comparison of psychometric (upper panels) and chronometric (lower panels) functions between the model and experiments. (a) Human SK. (b) Human LH. (c) Monkey, Pr(dR) = 0.8. (d) Monkey, Pr(dR) = 0.7. The dots with error bars represent experimental data from human subjects SK and LH, and the combined results from four monkeys. Blue solid curves are model predictions in the neutral case while green dotted curves are model predictions in the biased case. The R² fits are shown in the plots. Model parameters: (a) (RN − RP)/RS = 1,000, k = 1.45. (b) (RN − RP)/RS = 1,000, k = 1.45. (c) Pr(dR) = 0.8, (RN − RP)/RS = 1,000, k = 1.4. (d) Pr(dR) = 0.7, (RN − RP)/RS = 1,000, k = 1.4.
The performance on the task using the optimal policy can be measured in terms of both the accuracy of direction discrimination (the so-called psychometric function) and the reaction time required to reach a decision (the chronometric function). The hidden variable x = kdc encapsulates the unknown direction and coherence, as well as the free parameter k that determines the scale of stimulus st. Without loss of generality, we fix d = 1 (rightward direction), and set the hidden direction dR as the biased direction. Given an optimal policy, we compute both the psychometric and chronometric functions by simulating a large number of trials (10000 trials per data point) and collecting the reaction time and chosen direction from each trial.
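This Monte Carlo procedure is straightforward to sketch: for each coherence, simulate many trials under the decision boundaries implied by the policy (passed in here as callables of our own naming) and record the choice and stopping time.

import numpy as np

def simulate_trials(x, theta_R, theta_L, n_trials=10_000, t_max=5_000,
                    sigma_e=1.0, seed=0):
    """Simulate decision trials for hidden state x = k*d*c.
    Returns (fraction of rightward choices, mean decision time in steps)."""
    rng = np.random.default_rng(seed)
    right, times = 0, []
    for _ in range(n_trials):
        total = 0.0
        for t in range(1, t_max + 1):
            total += rng.normal(x, sigma_e)     # stimulus s_t ~ N(x, sigma_e^2)
            s_bar = total / t                   # running average
            if s_bar > theta_R(t):              # rightward decision
                right += 1; times.append(t); break
            if s_bar < theta_L(t):              # leftward decision
                times.append(t); break
    return right / n_trials, float(np.mean(times))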
The upper panels of figure 2(a) and 2(b) (blue curves) show the performance accuracy as a function of coherence for both the model (blue solid curve) and the human subjects (blue dots) for the neutral prior Pr(dR) = 0.5. We fit our simulation results to the experimental data by adjusting the only two free parameters in our model: (RN − RP)/RS and k. The lower panels of figure 2(a) and 2(b) (blue solid curves) show the predicted mean reaction time for correct choices as a function of coherence c for our model (blue solid curve, with the same model parameters) and the data (blue dots). Note that our model's predicted reaction times represent the expected number of POMDP time steps before making a rightward eye movement AR, which we can directly compare to the monkey's experimental data in units of real time. A linear regression is used to determine the duration τ of a single time step and the onset of decision time tnd. This offset, tnd, can be naturally interpreted as the non-decision residual time. We applied the experimental mean reaction times reported in [13] with motion coherence c = 0.032, 0.064, 0.128, 0.256 and 0.512 to compute the slope and offset, τ and tnd. Linear regression gives the unit duration per POMDP step as τ = 5.74 ms and the offset tnd = 314.6 ms for human SK. For human LH, similar results are obtained with τ = 5.20 ms and tnd = 250.0 ms. Our predicted offsets compare well with the 300 ms average non-decision time reported in the literature [23, 24].
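This calibration is an ordinary least-squares fit of observed mean reaction times against model-predicted mean step counts across coherences, e.g. with numpy:

import numpy as np

def calibrate(steps, rt_ms):
    """steps[i]: predicted mean decision steps at coherence c_i;
    rt_ms[i]: observed mean reaction time (ms) at the same coherence.
    Fits RT ~ tau * steps + t_nd and returns (tau, t_nd)."""
    tau, t_nd = np.polyfit(steps, rt_ms, 1)
    return tau, t_nd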
6
When the human subject is verbally told that the prior probability is Pr(dR ) = Pr(d = 1) = 0.8,
the experimental data is inconsistent with the predictions of the classic drift diffusion model [14]
unless an additional assumption of a dynamic bias signal is introduced. In the POMDP model we
propose, we predict both the accuracy and reaction times in the biased setting (green curves in
figure 2) with the parameters learned in the neutral case, and achieve a good fit (with the coefficients
of determination shown in fig. 2) to the experimental data reported by Hanks et al. [13]. Our model
predictions for the biased cases are a direct result of the reward maximization component of our
framework and require no additional parameter fitting.
Combined behavioral data from four monkeys is shown by the dotted curves in figure 2(c). We
fit our model parameters to the psychometric function in the neutral case, with ? = 8.20ms and
tnd = 312.50ms, and predict both the psychometric function and the reaction times in the biased
case. However, our results do not match the monkey data as well as the human data when Pr(dR ) =
0.8. This may be due to the fact that the monkeys cannot receive verbal instructions from the
experimenters and must learn an estimate of the prior during training. As a result, the monkeys?
estimate of the prior probability might be inaccurate. To test this hypothesis, we simulated our
model with Pr(dR ) = 0.7 (see figure 2(d)) and these results fit the experimental data much more
accurately (even though the actual probability was 0.8).
3.3
Reaction times in the Carpenter and Williams? task
(a)
(b)
Figure 3: Model predictions of saccadic eye movement in Carpenter & Williams? experiments [10]. (a) Saccadic latency distributions from model simulations plotted in the form of probitscale cumulative mass function, as a function of reciprocal latency. For different values of Pr(dR ),
the simulated data are well fit by straight lines, indicating that the reciprocal of latency follows a
normal distribution. The solid lines are linear functions fit to the data with the constraint that all
lines must pass through the same intercept for infinite time (see [10]). (b) Median latency plotted as
a function of log prior probability. Black dots are from experimental data and blue dots are model
predictions. The two (overlapping) straight lines are the linear least squares fits to the experimental
data and model data. These lines do not differ noticeably in either slope or offset. Model parameters:
RN ?RP
= 1, 000, k = 0.3, ?e = 0.46.
RS
In Carpenter and Williams? task, the animal needs to decide on which side d ? {?1, 1} (denoting
left or right side) a target light appeared at a fixed distance from a central fixation light. After the
sudden appearance of the target light, a constant stimulus st = s is observed by the animal, where s
can be regarded as the perceived location of the target. Due to noise and uncertainty in the nervous
system, we assume that s varies from trial to trial, centered at the location of the target light and
with standard deviation ?e (i.e., s ? N (s | k, ?e2 )), where k is the distance between the target and
the fixation light. Inference over the direction d thus involves joint inference over (d, k) where the
emission probability follows Pr(s|d, k). Then the joint state (k, d) can be one-on-one-mapped to
kd = x, where x represents the actual location of the target light. Under the POMDP framework,
Carpenter and Williams? task and the random dots task differ in the interpretation of hidden state x
and stimulus s, but they follow the same optimal policy given the same reward parameters.
Without loss of generality, we set the hidden variable x > 0 and say that the animal makes a
correct choice at a hitting time tH when the animal?s belief state reaches the right boundary. The
7
?1
saccadic latency can be computed by inverting the boundary function ?R
(s) = tH . Since, for
small t, ?R (t) behaves like a simple reciprocal function of t, the reciprocal of the reaction time is
approximately proportional to a normal distribution with t1H ? N (1/tH | k, ?e2 ). In figure 3(a),
we plot the distribution of reciprocal reaction time with different values of Pr(dR ) on a probit scale
(similar to [10]). Note that we label the y-axis using the CDF of the corresponding probit value
and the x-axis in figure 3(a) has been reversed. If the reciprocal of reaction time (with the same
prior Pr(dR ))?follows a normal distribution, each point on the graph will fall on a straight line with
y-intercept k?e2 that is independent of Pr(dR ). We fit straight lines to the points on the graph,
with the constraint that all lines should pass through the same intercept for infinite time (see [10]).
We obtain an intercept of 6.19, consistent with the intercept 6.20 obtained from experimental data
in [10]. Figure 3(b) demonstrates that the median of our model?s reaction times is a linear function
of the log of the prior probability. Increasing the prior probability lowers the decision boundary
?R (t), effectively decreasing the latency. The slope and intercept of the best fit line are consistent
with experimental data (see fig. 3(b)).
4
Summary and Conclusion
Our results suggest that decision making in the primate brain may be governed by the dual principles
of Bayesian inference and reward maximization as implemented within the framework of partially
observable Markov decision processes (POMDPs). The model provides a unified explanation for
experimental data previously explained by two competing models, namely, the additive offset model
and the dynamic weighting model for incorporating prior knowledge. In particular, the model predicts psychometric and chronometric data for the random dots motion discrimination task [13] as
well as Carpenter and Williams' saccadic eye movement task [10].
Previous models of decision making, such as the LATER model [10] and the drift diffusion
model [25, 15], have provided descriptive accounts of reaction time and accuracy data but often
require assumptions such as a collapsing bound, urgency signal, or dynamic weighting to fully explain the data [26, 21, 22, 13]. Our model provides a normative account of the data, illustrating how
the subject's choices can be interpreted as being optimal under the framework of POMDPs.
Our model relies on the principle of reward maximization to explain how an animal's decisions are influenced by changes in prior probability. The same principle also allows us to predict how an animal's choice is influenced by changes in the reward function. Specifically, the model predicts that the optimal policy π* is determined by the ratio (RN − RP)/RS and the prior probability Pr(dR). Thus, a testable prediction of the model is that the speed-accuracy trade-off in tasks such as the random dots task is governed by the ratio (RN − RP)/RS: smaller penalties for sampling (RS) will increase accuracy and reaction time, as will larger rewards for correct choices (RP) or greater penalties for errors (RN). Since the reward parameters in our model represent internal reward, our model also provides a bridge to study the relationship between physical reward and subjective reward.
In our model of the random dots discrimination task, belief is expressed in terms of a piecewise normal distribution with the domain of the hidden variable x ∈ (−∞, ∞). A piecewise beta distribution with domain x ∈ [−1, 1] fits the experimental data equally well. However, the beta distribution's conjugate prior is the multinomial, which can limit the application of this model. For example, the observations in the Carpenter and Williams model cannot easily be described by a discrete value. The belief in our model can be expressed by any distribution, even a non-parametric one, as long as the observation model provides a faithful representation of the stimuli and captures the essential relationship between the stimuli and the hidden world state.
The POMDP model provides a unifying framework for a variety of perceptual decision making tasks. Our state variable x and action variable a work with arbitrary state and action spaces, ranging from multiple alternative choices to high-dimensional real-valued choices. The state variables can also be dynamic, with xt following a Markov chain. Currently, we have assumed that the stimuli are independent from one time step to the next, but most real-world stimuli are temporally correlated. Our model is suitable for decision tasks with time-varying state and observations that are time dependent within a trial (as long as they are conditionally independent given the time-varying hidden state sequence). We thus expect our model to be applicable to significantly more complicated tasks than the ones modeled here.
References
[1] D. Knill and W. Richards. Perception as Bayesian inference. Cambridge University Press, 1996.
[2] R. S. Zemel, P. Dayan, and A. Pouget. Probabilistic interpretation of population codes. Neural Computation, 10(2), 1998.
[3] R. P. N. Rao. Bayesian computation in recurrent neural circuits. Neural Computation, 16(1):1-38, 2004.
[4] W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nature Neuroscience, 9(11):1432-1438, 2006.
[5] N. D. Daw, A. C. Courville, and D. S. Touretzky. Representation and timing in theories of the dopamine system. Neural Computation, 18(7):1637-1677, 2006.
[6] P. Dayan and N. D. Daw. Decision theory, reinforcement learning, and the brain. Cognitive, Affective and Behavioral Neuroscience, 8:429-453, 2008.
[7] R. Bogacz and T. Larsen. Integration of reinforcement learning and optimal decision making theories of the basal ganglia. Neural Computation, 23:817-851, 2011.
[8] C. T. Law and J. I. Gold. Reinforcement learning can account for associative and perceptual learning on a visual-decision task. Nat. Neurosci., 12(5):655-663, 2009.
[9] J. Drugowitsch, R. Moreno-Bote, A. K. Churchland, M. N. Shadlen, and A. Pouget. The cost of accumulating evidence in perceptual decision making. J. Neurosci., 32(11):3612-3628, 2012.
[10] R. H. S. Carpenter and M. L. L. Williams. Neural computation of log likelihood in the control of saccadic eye movements. Nature, 377:59-62, 1995.
[11] M. C. Dorris and D. P. Munoz. Saccadic probability influences motor preparation signals and time to saccadic initiation. J. Neurosci., 18:7015-7026, 1998.
[12] J. I. Gold, C. T. Law, P. Connolly, and S. Bennur. The relative influences of priors and sensory evidence on an oculomotor decision variable during perceptual learning. J. Neurophysiol., 100(5):2653-2668, 2008.
[13] T. D. Hanks, M. E. Mazurek, R. Kiani, E. Hopp, and M. N. Shadlen. Elapsed decision time affects the weighting of prior probability in a perceptual decision task. Journal of Neuroscience, 31(17):6339-6352, 2011.
[14] J. D. Roitman and M. N. Shadlen. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. Journal of Neuroscience, 22, 2002.
[15] R. Bogacz, E. Brown, J. Moehlis, P. Hu, P. Holmes, and J. D. Cohen. The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced choice tasks. Psychological Review, 113:700-765, 2006.
[16] R. Ratcliff and G. McKoon. The diffusion decision model: Theory and data for two-choice decision tasks. Neural Computation, 20:127-140, 2008.
[17] P. L. Frazier and A. J. Yu. Sequential hypothesis testing under stochastic deadlines. In Advances in Neural Information Processing Systems, 20, 2007.
[18] R. P. N. Rao. Decision making under uncertainty: A neural model based on POMDPs. Frontiers in Computational Neuroscience, 4(146), 2010.
[19] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99-134, 1998.
[20] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. The MIT Press, 1998.
[21] P. E. Latham, Y. Roudi, M. Ahmadi, and A. Pouget. Deciding when to decide. Soc. Neurosci. Abstracts, 740(10), 2007.
[22] A. K. Churchland, R. Kiani, and M. N. Shadlen. Decision-making with multiple alternatives. Nat. Neurosci., 11(6), 2008.
[23] R. D. Luce. Response times: Their role in inferring elementary mental organization. Oxford University Press, 1986.
[24] M. E. Mazurek, J. D. Roitman, J. Ditterich, and M. N. Shadlen. A role for neural integrators in perceptual decision-making. Cerebral Cortex, 13:1257-1269, 2003.
[25] J. Palmer, A. C. Huk, and M. N. Shadlen. The effects of stimulus strength on the speed and accuracy of a perceptual decision. Journal of Vision, 5:376-404, 2005.
[26] J. Ditterich. Stochastic models and decisions about motion direction: Behavior and physiology. Neural Networks, 19:981-1012, 2006.
| 4804 |@word trial:14 illustrating:1 version:1 rising:1 proportion:1 stronger:1 open:1 instruction:1 hu:1 r:9 gradual:1 simulation:2 solid:5 initial:1 selecting:1 denoting:2 subjective:1 reaction:19 current:3 dx:4 must:4 additive:6 partition:1 informative:1 enables:1 motor:1 moreno:1 plot:2 update:2 discrimination:12 implying:2 stationary:1 selected:2 intelligence:1 nervous:1 reciprocal:6 sudden:1 mental:1 provides:6 location:3 direct:1 beta:2 incorrect:2 fixation:2 combine:3 fitting:2 behavioral:5 affective:1 manner:1 expected:8 behavior:3 planning:1 brain:6 terminal:1 integrator:1 inspired:1 ddms:1 decreasing:1 td:1 actual:2 increasing:1 becomes:2 provided:1 moreover:2 maximizes:2 panel:4 mass:1 circuit:1 bogacz:3 interpreted:4 monkey:9 unified:1 hindsight:1 st0:1 finding:1 temporal:3 every:1 collecting:1 act:1 demonstrates:1 control:1 unit:4 medical:1 normally:1 continually:1 positive:1 before:1 timing:1 limit:1 sutton:1 encoding:1 oxford:1 solely:1 approximately:2 kdc:2 might:1 black:1 palmer:1 faithful:1 testing:1 hughes:1 area:2 significantly:1 dictate:1 physiology:1 suggest:1 get:1 cannot:2 influence:7 intercept:6 accumulating:1 accumulation:1 deterministic:1 center:3 williams:10 regardless:1 starting:2 duration:2 ergodic:1 pomdp:16 resolution:1 hunger:1 immediately:1 thirst:1 perceive:1 pouget:4 rule:4 holmes:1 regarded:1 classic:1 population:2 updated:1 pt:1 target:6 programming:2 us:3 hypothesis:3 asymmetric:1 richards:1 predicts:5 observed:3 role:2 capture:1 region:1 movement:11 decrease:3 trade:1 principled:1 mentioned:1 environment:6 reward:30 littman:1 dynamic:7 churchland:2 neurophysiol:1 rightward:6 easily:1 joint:5 various:2 forced:2 artificial:1 zemel:1 choosing:3 larger:1 solve:1 plausible:1 say:1 otherwise:3 favor:1 statistic:1 noisy:5 associative:1 descriptive:2 sequence:5 propose:1 interaction:1 combining:2 achieve:1 gold:2 mazurek:2 leave:1 derive:3 depending:1 recurrent:1 measured:1 rescale:1 b0:4 received:1 soc:1 implemented:1 c:3 involves:2 predicted:3 differ:3 direction:19 posit:1 correct:6 stochastic:4 centered:1 human:9 mckoon:1 implementing:1 noticeably:1 require:3 fix:2 elementary:1 frontier:1 around:1 normal:8 deciding:1 predict:3 sought:1 early:2 perceived:1 encapsulated:1 diminishes:1 applicable:1 label:1 currently:1 bridge:1 successfully:1 mit:1 varying:9 barto:1 focus:2 emission:5 frazier:2 bernoulli:1 likelihood:1 ratcliff:1 contrast:1 inference:8 dayan:3 dependent:1 inaccurate:1 bt:40 entire:1 explanatory:1 hidden:19 kc:1 selects:2 arg:1 among:1 dual:1 animal:38 integration:2 special:1 equal:1 construct:1 washington:6 sampling:9 represents:6 yu:2 future:5 t2:3 stimulus:14 piecewise:7 randomly:1 manipulated:1 composed:1 beck:1 maintain:2 william:1 organization:1 light:6 chain:1 rajesh:1 partial:2 moehlis:1 lh:3 unless:2 incomplete:1 divide:1 walk:1 plotted:2 uncertain:1 psychological:1 instance:1 rao:5 ar:17 maximization:6 cost:3 introducing:1 deviation:2 neutral:6 kaelbling:1 uniform:4 successful:1 connolly:1 reported:3 varies:1 chooses:5 combined:3 thanks:1 st:14 fundamental:1 discriminating:1 preferring:1 probabilistic:4 told:1 off:1 physic:1 michael:1 quickly:1 central:1 huang:1 choose:1 collapsing:5 dr:28 cognitive:1 account:4 includes:1 coefficient:1 explicitly:2 depends:1 onset:1 later:5 apparently:1 bayes:4 complicated:1 slope:3 square:1 accuracy:7 variance:3 bayesian:8 accurately:3 pomdps:5 straight:4 explain:9 reach:2 influenced:2 touretzky:1 whenever:1 colleague:1 larsen:1 e2:6 naturally:3 static:3 rational:1 sampled:1 stop:1 adjusting:1 
experimenter:1 recall:1 knowledge:12 emerges:1 color:1 follow:1 friesen:1 response:2 formulation:1 though:1 hank:3 furthermore:1 generality:3 just:1 d:1 receives:6 reweighting:1 overlapping:1 mdp:1 effect:5 hypothesized:1 roitman:2 true:2 unbiased:2 brown:1 procession:1 symmetric:2 illustrated:1 during:3 width:2 m:7 bote:1 theoretic:1 demonstrate:1 latham:2 motion:12 interpreting:1 ranging:1 novel:1 urgency:2 absorbing:1 behaves:1 multinomial:1 physical:1 cohen:1 cerebral:1 interpretation:3 measurement:1 cambridge:1 munoz:1 similarly:1 dot:20 moving:2 cortex:1 add:2 posterior:6 recent:3 roudi:1 leftward:3 initiation:1 tnd:6 seen:1 additional:4 greater:1 determine:1 maximize:1 signal:4 dashed:1 full:1 neurally:1 multiple:2 reduces:1 match:1 determination:1 compensate:1 long:2 deadline:4 equally:1 prediction:10 involving:1 variant:1 regression:2 vision:1 expectation:2 dopamine:1 iteration:1 normalization:2 represent:3 receive:2 median:2 biased:7 abram:1 unlike:1 rest:1 subject:5 inconsistent:2 incorporates:3 variety:1 affect:2 fit:11 competing:2 earns:1 opposite:1 luce:1 shift:2 t0:2 motivated:1 ditterich:2 penalty:6 rnr:6 action:21 latency:7 ddm:1 kiani:2 shifted:1 dotted:2 delta:1 neuroscience:5 track:1 per:2 intraparietal:1 blue:8 discrete:1 waiting:1 express:1 basal:1 four:2 threshold:4 changing:1 neither:1 diffusion:5 graph:2 fraction:2 sum:1 convert:1 inverse:1 parameterized:2 uncertainty:3 decide:2 utilizes:1 decision:65 coherence:8 hopp:1 entirely:3 bound:2 pay:1 courville:1 strength:1 incorporation:1 constraint:2 speed:2 performing:1 kd:1 conjugate:1 terminates:1 slightly:1 smaller:1 making:30 s1:7 encapsulates:1 primate:1 explained:2 pr:59 taken:2 equation:3 previously:3 know:1 end:1 observe:1 away:1 simulating:1 alternative:4 ahmadi:1 rp:11 assumes:3 remaining:1 running:3 include:2 unifying:2 testable:1 question:1 strategy:1 saccadic:8 parametric:1 traditional:3 distance:2 reversed:1 mapped:1 simulated:2 lateral:1 parametrized:1 reason:1 code:2 modeled:2 relationship:2 ratio:3 equivalently:1 setup:1 yanping:1 negative:1 rise:1 implementation:1 zt:5 policy:20 unknown:1 upper:3 observation:18 snapshot:1 markov:5 neuron:1 howard:1 frame:1 rn:11 arbitrary:1 drift:4 introduced:1 inverting:1 cast:1 required:1 specified:1 namely:1 unequal:1 learned:3 elapsed:3 daw:3 suggested:3 bar:1 perception:1 appeared:1 challenge:1 oculomotor:1 max:1 green:2 explanation:1 belief:31 power:1 suitable:1 natural:1 treated:1 residual:1 representing:3 movie:2 eye:11 inversely:1 temporally:1 axis:2 columbia:2 faced:1 prior:55 literature:3 review:1 marginalizing:1 relative:2 law:2 loss:3 probit:2 fully:1 expect:1 proportional:1 agent:3 sufficient:1 consistent:2 shadlen:7 principle:4 summary:1 free:4 verbal:1 bias:1 side:2 formal:1 institute:1 explaining:1 fall:1 emerge:1 absolute:1 distributed:2 boundary:11 curve:8 world:5 cumulative:2 transition:5 drugowitsch:2 sensory:5 valid:1 made:2 reinforcement:6 far:2 approximate:1 observable:5 neurobiological:1 keep:1 state2:1 assumed:1 sk:3 learn:1 nature:2 correlated:1 huk:1 domain:3 linearly:1 neurosci:5 s2:1 noise:1 knill:1 carpenter:11 fig:2 psychometric:10 downwards:1 fails:1 position:1 inferring:1 governed:2 perceptual:8 weighting:3 learns:1 z0:1 specific:2 xt:1 normative:3 offset:12 r2:1 evidence:5 dl:9 incorporating:1 essential:1 sequential:2 effectively:1 te:2 nat:2 cassandra:1 timothy:1 simply:2 likely:1 appearance:1 ganglion:1 neurophysiological:1 visual:2 hitting:1 expressed:2 verbally:1 partially:4 saccade:1 applies:1 determines:3 relies:1 cdf:2 ma:1 
conditional:1 viewed:1 goal:3 consequently:1 towards:1 change:6 determined:6 infinite:2 specifically:1 acting:1 contradictory:1 total:1 called:2 discriminate:1 pas:2 experimental:14 indicating:2 internal:1 preparation:1 princeton:2 ex:1 |
4,204 | 4,805 | A Bayesian Approach for Policy Learning from
Trajectory Preference Queries
Aaron Wilson
School of EECS
Oregon State University

Alan Fern
School of EECS
Oregon State University

Prasad Tadepalli
School of EECS
Oregon State University
Abstract
We consider the problem of learning control policies via trajectory preference
queries to an expert. In particular, the agent presents an expert with short runs of
a pair of policies originating from the same state and the expert indicates which
trajectory is preferred. The agent's goal is to elicit a latent target policy from
the expert with as few queries as possible. To tackle this problem we propose
a novel Bayesian model of the querying process and introduce two methods that
exploit this model to actively select expert queries. Experimental results on four
benchmark problems indicate that our model can effectively learn policies from
trajectory preference queries and that active query selection can be substantially
more efficient than random selection.
1 Introduction
Directly specifying desired behaviors for automated agents is a difficult and time-consuming process. Successful implementation requires expert knowledge of the target system and a means of
communicating control knowledge to the agent. One way the expert can communicate the desired
behavior is to directly demonstrate it and have the agent learn from the demonstrations, e.g. via
imitation learning [15, 3, 13] or inverse reinforcement learning [12]. However, in some cases, like
the control of complex robots or simulation agents, it is difficult to generate demonstrations of the
desired behaviors. In these cases an expert may still recognize when an agent's behavior matches a
desired behavior, or is close to it, even if it is difficult to directly demonstrate it. In such cases an
expert may also be able to evaluate the relative quality, with respect to the desired behavior, of a pair of example
trajectories and express a preference for one or the other.
Given this motivation, we study the problem of learning expert policies via trajectory preference
queries to an expert. A trajectory preference query (TPQ) is a pair of short state trajectories originating from a common state. Given a TPQ the expert is asked to indicate which trajectory is most
similar to the target behavior. The goal of our learner is to infer the target trajectory using as few
TPQs as possible. Our first contribution (Section 3) is to introduce a Bayesian model of the querying process along with an inference approach for sampling policies from the posterior given a set
of TPQs and their expert responses. Our second contribution (Section 4) is to describe two active
query strategies that attempt to leverage the model in order to minimize the number of queries required. Finally, our third contribution (Section 5) is to empirically demonstrate the effectiveness of
the model and querying strategies on four benchmark problems.
We are not the first to examine preference learning for sequential decision making. In the work
of Cheng et al. [5] action preferences were introduced into the classification based policy iteration
?
[email protected]
[email protected]
?
[email protected]
?
1
framework. In this framework preferences explicitly rank state-action pairs according to their relative payoffs. There is no explicit interaction between the agent and domain expert. Further the
approach also relies on knowledge of the reward function, while our work derives all information
about the target policy by actively querying an expert. In work more closely related to ours, Akrour
et al. [1] consider the problem of learning a policy from expert queries. Similar to our proposal this
work suggests presenting trajectory data to an informed expert. However, their queries require the
expert to express preferences over approximate state visitation densities and to possess knowledge
of the expected performance of demonstrated policies. Necessarily the trajectories must be long
enough to adequately approximate the visitation density. We remove this requirement and only require short demonstrations; our expert assesses trajectory snippets, not whole solutions. We believe
this is valuable because pairs of short demonstrations are an intuitive and manageable object for
experts to assess.
2 Preliminaries
We explore policy learning from expert preferences in the framework of Markov Decision Processes
(MDP). An MDP is a tuple (S, A, T, P0 , R) with state space S, action space A, state transition
distribution T, which gives the probability T(s, a, s′) of transitioning to state s′ given that action a
is taken in state s. The initial state distribution P₀(s₀) gives a probability distribution over initial
states s₀. Finally, the reward function R(s) gives the reward for being in state s. Note that in this
work, the agent will not be able to observe rewards and rather must gather all information about
the quality of policies via interaction with an expert. We consider agents that select actions using
a policy π_θ parameterized by θ, which is a stochastic mapping from states to actions P_π(a|s, θ).
For example, in our experiments, we use a log-linear policy representation, where the parameters
correspond to coefficients of features defined over state-action pairs.
Agents acting in an MDP experience the world as a sequence of state-action pairs called a trajectory. We denote a K-length trajectory as τ = (s₀, a₀, ..., a_{K−1}, s_K), beginning in state s₀ and terminating after K steps. It follows from the definitions above that the probability of generating a K-length trajectory, given that the agent executes policy π starting from state s₀, is
$$P(\tau|\pi, s_0) = \prod_{t=1}^{K} T(s_{t-1}, a_{t-1}, s_t)\, P_\pi(a_{t-1}|s_{t-1}, \pi).$$
Trajectories are an important part of our
query process. They are an intuitive means of communicating policy information. Trajectories have
the advantage that the expert need not share a language with the agent. Instead the expert is only
required to recognize differences in physical performances presented by the agent. For purposes of
generating trajectories we assume that our learner is provided with a strong simulator (or generative
model) of the MDP dynamics, which takes as input a start state s, a policy π, and a value K, and outputs a sampled length-K trajectory of π starting in s.
In this work, we evaluate policies in an episodic setting where an episode starts by drawing an initial
state from P₀ and then executing the policy for a finite horizon T. A policy's value is the expected
total reward of an episode. The goal of the learner is to select a policy whose value is close to that
of an expert's policy. Note that our work is not limited to finite-horizon problems, but can also be
applied to infinite-horizon formulations.
In order to learn a policy, the agent presents trajectory preference queries (TPQs) to the expert and
receives responses back. A TPQ is a pair of length-K trajectories (τᵢ, τⱼ) that originate from a
common state s. Typically K will be much smaller than the horizon T, which is important from
the perspective of expert usability. Having been provided with a TPQ, the expert gives a response y
indicating which trajectory is preferred. Thus, each TPQ results in a training data tuple (τᵢ, τⱼ, y).
Intuitively, the preferred trajectory is the one that is most similar to what the expert's policy would
have produced from the same starting state. As detailed more in the next section, this is modeled
by assuming that the expert has a (noisy) evaluation function f(·) on trajectories and the response
is then given by y = I(f(τᵢ) > f(τⱼ)) (a binary indicator). We assume that the expert's evaluation
function is a function of the observed trajectories and a latent target policy π*.
3 Bayesian Model and Inference
In this section we first describe a Bayesian model of the expert response process, which will be
used to: 1) Infer policies based on expert responses to TPQs, and 2) Guide the action selection of
TPQs. Next, we describe a posterior sampling method for this model which is used for both policy
inference and TPQ selection.
3.1 Expert Response Model
The model for the expert response y given a TPQ (τᵢ, τⱼ) decomposes as
$$P(y|(\tau_i, \tau_j), \pi^*)\, P(\pi^*),$$
where P(π*) is a prior over the latent expert policy, and P(y|(τᵢ, τⱼ), π*) is a response distribution conditioned on the TPQ and expert policy. In our experiments we use a ridge prior in the form of a Gaussian over π* with diagonal covariance, which penalizes policies with large parameter values.
Response Distribution. The conditional response distribution is represented in terms of an expert evaluation function f*(τᵢ, τⱼ, π*), described in detail below, which translates a TPQ and a candidate expert policy π* into a measure of preference for trajectory τᵢ over τⱼ. Intuitively, f* measures the degree to which the policy π* agrees with τᵢ relative to τⱼ. To translate the evaluation into an expert response we borrow from previous work [6]. In particular, we assume the expert response is given by the indicator I(f*(τᵢ, τⱼ, π*) > ε), where ε ∼ N(0, σ_r²). The indicator simply returns 1 if the condition is true, indicating τᵢ is preferred, and zero otherwise. It follows that the conditional response distribution is given by:
$$P(y = 1 | (\tau_i, \tau_j), \pi^*) = \int_{-\infty}^{+\infty} I\big(f^*(\tau_i, \tau_j, \pi^*) > \epsilon\big)\, N(\epsilon\,|\,0, \sigma_r^2)\, d\epsilon = \Phi\!\left(\frac{f^*(\tau_i, \tau_j, \pi^*)}{\sigma_r}\right),$$
where Φ(·) denotes the cumulative distribution function of the normal distribution. This formulation allows the expert to err when demonstrated trajectories are difficult to distinguish, as measured by the magnitude of the evaluation function f*. We now describe the evaluation function in more detail.
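As a concrete illustration, the response likelihood above is simply a probit link on the evaluation score. A minimal sketch, assuming NumPy/SciPy are available (the names are ours, not the paper's):

```python
# Sketch of the probit response likelihood P(y = 1 | (tau_i, tau_j), pi*).
# `f_star` is the evaluation f*(tau_i, tau_j, pi*); `sigma_r` is the noise scale.
from scipy.stats import norm

def response_likelihood(f_star, sigma_r=1.0):
    # Phi(f* / sigma_r): near 0.5 when the trajectories are hard to tell apart,
    # near 1 when tau_i is clearly closer to the target behavior.
    return norm.cdf(f_star / sigma_r)

print(response_likelihood(0.1))  # ~0.54: an ambiguous, error-prone query
print(response_likelihood(2.0))  # ~0.98: a confident preference for tau_i
```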
Evaluation Function. Intuitively the evaluation function must combine distances between the query
trajectories and trajectories generated by the latent target policy. We say that a latent policy and
query trajectory are in agreement when they produce similar trajectories. The dissimilarity between
two trajectories τᵢ and τⱼ is measured by the trajectory dissimilarity function
$$f(\tau_i, \tau_j) = \sum_{t=0}^{K} k\big([s_{i,t}, a_{i,t}], [s_{j,t}, a_{j,t}]\big),$$
where the variables [s_{i,t}, a_{i,t}] represent the values of the state-action pair at time step t in trajectory i (similarly for [s_{j,t}, a_{j,t}]), and the function k computes distances between state-action pairs. In our experiments, states and actions are represented by real-valued vectors and we use a simple function of the form k([s, a], [s′, a′]) = ‖s − s′‖ + ‖a − a′‖, though other more sophisticated comparison functions could easily be used in the model.
Given the trajectory comparison function, we now encode a dissimilarity measure between the latent target policy and an observed trajectory τᵢ. To do this, let τ* be a random variable ranging over length-K trajectories generated by target policy π* starting in the start state of τᵢ. The dissimilarity measure is given by
$$d(\tau_i, \pi^*) = E\big[f(\tau_i, \tau^*)\big].$$
This function computes the expected dissimilarity between a query trajectory τᵢ and the K-length trajectories generated by the latent policy from the same initial state. Finally, the comparison function value f*(τᵢ, τⱼ, π*) = d(τⱼ, π*) − d(τᵢ, π*) is the difference in computed values between the ith and jth trajectory. Larger values of f* indicate stronger preferences for trajectory τᵢ.
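These definitions translate directly into code. The sketch below assumes trajectories are lists of (state, action) NumPy-vector pairs; the trajectory sampler `rollout` is a hypothetical helper standing in for the strong simulator:

```python
import numpy as np

def pair_dist(s1, a1, s2, a2):
    # k([s, a], [s', a']) = ||s - s'|| + ||a - a'||
    return np.linalg.norm(s1 - s2) + np.linalg.norm(a1 - a2)

def traj_dissimilarity(tau_i, tau_j):
    # f(tau_i, tau_j): summed state-action distances over aligned time steps.
    return sum(pair_dist(si, ai, sj, aj)
               for (si, ai), (sj, aj) in zip(tau_i, tau_j))

def expected_dissimilarity(tau, policy, s0, K, rollout, n_samples=20):
    # d(tau, pi*) = E[f(tau, tau*)], estimated by Monte Carlo over K-step
    # rollouts of `policy` from the start state of `tau`.
    return np.mean([traj_dissimilarity(tau, rollout(policy, s0, K))
                    for _ in range(n_samples)])

def evaluation(tau_i, tau_j, policy, s0, K, rollout):
    # f*(tau_i, tau_j, pi*) = d(tau_j, pi*) - d(tau_i, pi*): positive values
    # mean the policy agrees with tau_i more than with tau_j.
    return (expected_dissimilarity(tau_j, policy, s0, K, rollout)
            - expected_dissimilarity(tau_i, policy, s0, K, rollout))
```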
3.2 Posterior Inference
Given the definition of the response model, the prior distribution, and an observed data set D = {(τᵢ, τⱼ, y)} of TPQs and responses, the posterior distribution is
$$P(\pi^*|D) \propto P(\pi^*) \prod_{(\tau_i, \tau_j, y)\in D} \Phi(z)^y\,\big(1 - \Phi(z)\big)^{1-y},$$
where $z = \frac{d(\tau_j, \pi^*) - d(\tau_i, \pi^*)}{\sigma_r}$. This posterior distribution does not have a simple closed form and we must approximate it.
We approximate the posterior distribution using a set of posterior samples which we generate using
a stochastic simulation algorithm called Hybrid Monte Carlo (HMC) [8, 2]. The HMC algorithm
is an example of a Markov Chain Monte Carlo (MCMC) algorithm. MCMC algorithms output a
sequence of samples from the target distribution. HMC has an advantage in our setting because it
introduces auxiliary momentum variables proportional to the gradient of the posterior which guides
the sampling process toward the modes of the posterior distribution.
To apply the HMC algorithm we must derive the gradient of the energy function, ∇_{π*} log(P(D|π*)P(π*)), as follows:
$$\frac{\partial}{\partial \pi_i^*}\log\big[P(\pi^*|D)\big] = \frac{\partial}{\partial \pi_i^*}\log\big[P(\pi^*)\big] + \sum_{(\tau_i,\tau_j,y)\in D}\frac{\partial}{\partial \pi_i^*}\log\Big[\Phi(z)^y\big(1-\Phi(z)\big)^{1-y}\Big].$$
The energy function decomposes into prior and likelihood components. Using our assumption of a Gaussian prior with diagonal covariance on π*, the partial derivative of the prior component at πᵢ* is
$$\frac{\partial}{\partial \pi_i^*}\log\big[P(\pi^*)\big] = -\frac{\pi_i^* - \mu}{\sigma^2}.$$
Next, consider the gradient of the data log likelihood,
$$\sum_{(\tau_i,\tau_j,y)\in D}\frac{\partial}{\partial \pi_i^*}\log\Big[\Phi(z)^y\big(1-\Phi(z)\big)^{1-y}\Big],$$
which decomposes into |D| components, each of which has a value dependent on y.
In what follows we will assume that y = 1 (it is straightforward to derive the second case). Recall that Φ(·) is the cumulative distribution function of N(z; 0, σ_r²). Therefore, the gradient of log(Φ(z)) is
$$\frac{\partial}{\partial \pi_i^*}\log\big[\Phi(z)\big] = \frac{1}{\Phi(z)}\frac{\partial}{\partial \pi_i^*}\Phi(z) = \frac{1}{\Phi(z)}\,\frac{\partial z}{\partial \pi_i^*}\, N(z; 0, \sigma_r^2) = \frac{1}{\Phi(z)}\,\frac{1}{\sigma_r}\,\frac{\partial}{\partial \pi_i^*}\Big[d(\tau_j, \pi^*) - d(\tau_i, \pi^*)\Big]\, N(z; 0, \sigma_r^2).$$
Recall the definition of d(τ, π*) from above. After moving the derivative inside the integral, the gradient of this function is
$$\frac{\partial}{\partial \pi_i^*}\, d(\tau, \pi^*) = \frac{\partial}{\partial \pi_i^*}\int f(\tau, \tau^*)\, P(\tau^*|\pi^*)\, d\tau^* = \int f(\tau, \tau^*)\, P(\tau^*|\pi^*)\,\frac{\partial}{\partial \pi_i^*}\log\big(P(\tau^*|\pi^*)\big)\, d\tau^* = \int f(\tau, \tau^*)\, P(\tau^*|\pi^*)\sum_{k=1}^{K}\frac{\partial}{\partial \pi_i^*}\log\big(P_\pi(a_k|s_k, \pi^*)\big)\, d\tau^*.$$
The final step follows from the definition of the trajectory density, which decomposes under the log transformation. For purposes of approximating the gradient, this integral must be estimated. We do this by generating N sample trajectories from P(τ*|π*) and then computing the Monte-Carlo estimate
$$\approx \frac{1}{N}\sum_{l=1}^{N} f(\tau, \tau_l^*)\sum_{k=1}^{K}\frac{\partial}{\partial \pi_i^*}\log\big(P_\pi(a_k|s_k, \pi^*)\big).$$
We leave the definition of log(P_π(a_k|s_k, π*)) for the experimental results section, where we describe a specific kind of stochastic policy space.
Given this gradient calculation, we can apply HMC in order to sample policy parameter vectors from
the posterior distribution. This can be used for policy selection in a number of ways. For example, a
policy could be formed via Bayesian averaging. In our experiments, we select a policy by generating a large set of samples and then choosing the sample that maximizes the energy function.
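A sketch of the Monte-Carlo gradient estimate of ∂d(τ, π*)/∂πᵢ* described above, reusing `traj_dissimilarity` from the earlier sketch; `rollout` and `grad_log_policy` are hypothetical helpers whose definitions depend on the simulator and the policy parameterization:

```python
import numpy as np

def grad_d(tau, theta, s0, K, rollout, grad_log_policy, n_samples=20):
    # Score-function (likelihood-ratio) estimate:
    # (1/N) sum_l f(tau, tau_l*) * sum_k grad_theta log P(a_k | s_k, theta).
    grad = np.zeros_like(theta, dtype=float)
    for _ in range(n_samples):
        tau_star = rollout(theta, s0, K)
        score = sum(grad_log_policy(s, a, theta) for s, a in tau_star)
        grad += traj_dissimilarity(tau, tau_star) * score
    return grad / n_samples
```

The full energy gradient is then assembled from the prior term and one such term per query, and handed to an off-the-shelf HMC sampler.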
4 Active Query Selection
Given the ability to perform posterior inference, the question now is how to collect a data set of
TPQs and their responses. Unlike many learning problems, there is no natural distribution over
TPQs to draw from, and thus, active selection of TPQs is essential. In particular, we want the
learner to select TPQs for which the responses will be most useful toward the goal of learning the
target policy. This selection problem is difficult due to the high dimensional continuous space of
TPQs, where each TPQ is defined by an initial state and two trajectories originating from the state.
To help overcome this complexity, our algorithm assumes the availability of a distribution P̂₀ over candidate start states of TPQs. This distribution is intended to generate start states that are feasible
and potentially relevant to a target policy. The distribution may incorporate domain knowledge
to rule out unimportant parts of the space (e.g. avoiding states where the bicycle has crashed) or
simply specify bounds on each dimension of the state space and generate states uniformly within
the bounds. Given this distribution, we consider two approaches to actively generating TPQs for the
expert.
4.1 Query by Disagreement
Our first approach, Query by Disagreement (QBD), is similar to the well-known query-by-committee
approach to active learning of classifiers [17, 9]. The main idea behind the basic query-by-committee
approach is to generate a sequence of unlabeled examples from a given distribution and for each
example sample a pair of classifiers from the current posterior. If the sampled classifiers disagree
on the class of the example, then the algorithm queries the expert for the class label. This simple
approach is often effective and has theoretical guarantees on its efficiency.
We can apply this general idea to select TPQs in a straightforward way. In particular, we generate
a sequence of potential initial TPQ states from P̂₀ and for each draw two policies πᵢ and πⱼ from the current posterior distribution P(π*|D). If the policies 'disagree' on the state, then a query is
posed based on trajectories generated by the policies. Disagreement on an initial state s₀ is measured according to the expected difference between K-length trajectories generated by πᵢ and πⱼ starting at s₀. In particular, the disagreement measure is
$$g = \int_{(\tau_i, \tau_j)} P(\tau_i|\pi_i, s_0, K)\, P(\tau_j|\pi_j, s_0, K)\, f(\tau_i, \tau_j)\, d\tau_i\, d\tau_j,$$
which we estimate by sampling a set of K-length trajectories from each policy. If this measure exceeds a threshold t, then a TPQ is generated and given to the expert by running each policy for K steps from the initial state. Otherwise a new initial state is generated. If no query is posed after a specified number of initial states, then the state and policy pair that generated the most disagreement are used to generate the TPQ. We set the threshold t so that Φ(t/σ_r) = 0.95.
This query strategy has the benefit of generating TPQs such that πᵢ and πⱼ are significantly different.
This is important from a usability perspective, since making preference judgements between similar
trajectories can be difficult for an expert and error prone. In practice we observe that the QBD
strategy often generates TPQs based on policy pairs that are from different modes of the distribution,
which is an intuitively appealing property.
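The QBD loop can be sketched as follows. All names are ours: `sample_posterior` draws a policy from P(π*|D), `sample_start` draws from P̂₀, and `traj_dissimilarity` is the earlier sketch:

```python
import numpy as np

def qbd_query(sample_posterior, sample_start, rollout, K, t,
              max_tries=100, n_rollouts=10):
    # Returns a TPQ (s0, tau_i, tau_j) on which two posterior policies disagree.
    best = None
    for _ in range(max_tries):
        s0 = sample_start()
        pi_i, pi_j = sample_posterior(), sample_posterior()
        # Estimate g = E[f(tau_i, tau_j)] over K-step rollouts of each policy.
        g = np.mean([traj_dissimilarity(rollout(pi_i, s0, K),
                                        rollout(pi_j, s0, K))
                     for _ in range(n_rollouts)])
        if best is None or g > best[0]:
            best = (g, s0, pi_i, pi_j)
        if g > t:   # disagreement exceeds the threshold: pose this query
            break
    _, s0, pi_i, pi_j = best   # otherwise fall back to the most-disagreeing pair
    return s0, rollout(pi_i, s0, K), rollout(pi_j, s0, K)
```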
4.2 Expected Belief Change
Another class of active learning approaches for classifiers is more selective than traditional query-by-committee. In particular, they either generate or are given an unlabeled dataset and then use a
heuristic to select the most promising example to query from the entire set. Such approaches often
outperform less selective approaches such as the traditional query-by-committee. In this same way,
our second active learning approach for TPQs attempts to be more selective than the above QBD
approach by generating a set of candidate TPQs and heuristically selecting the best among those
candidates.
A set of candidate TPQs is generated by first drawing an initial state from P̂₀, sampling a pair
of policies from the posterior, and then running the policies for K steps from the initial state. It
remains to define the heuristic used to select the TPQ for presentation to the expert.
A truly Bayesian heuristic selection strategy should account for the overall change in belief about the
latent target policy after adding a new data point. To represent the difference in posterior beliefs we
use the variational distance between the posterior based on the current data D and the posterior based on the updated data D ∪ {(τᵢ, τⱼ, y)}:
$$V\big(P(\pi|D)\,\|\,P(\pi|D\cup\{(\tau_i, \tau_j, y)\})\big) = \int \big|P(\pi|D) - P(\pi|D\cup\{(\tau_i, \tau_j, y)\})\big|\, d\pi.$$
By integrating over the entire latent policy space it accounts for the total impact of the query on the
agent's beliefs.
The value of the variational distance depends on the response to the TPQ, which is unobserved at
query selection time. Therefore, the agent computes the expected variational distance,
$$H(d) = \sum_{y\in\{0,1\}} P(y|\tau_i, \tau_j, D)\; V\big(P(\pi|D)\,\|\,P(\pi|D\cup\{(\tau_i, \tau_j, y)\})\big),$$
where $P(y|\tau_i, \tau_j, D) = \int P(y|\tau_i, \tau_j, \pi^*)\, P(\pi^*|D)\, d\pi^*$ is the predictive distribution and is straightforwardly estimated using a set of posterior samples.
Finally, we specify a simple method of estimating the variational distance given a particular response. For this, we re-express the variational distance as an expectation with respect to P(π|D):
$$V\big(P(\pi|D)\,\|\,P(\pi|D\cup d)\big) = \int \big|P(\pi|D) - P(\pi|D\cup d)\big|\, d\pi = \int P(\pi|D)\,\Big|1 - \frac{P(\pi|D\cup d)}{P(\pi|D)}\Big|\, d\pi = \int P(\pi|D)\,\Big|1 - \frac{z_1}{z_2}\, P(d|\pi)\Big|\, d\pi,$$
where z₁ and z₂ are the normalizing constants of the two posterior distributions. The final expression is
a likelihood-weighted estimate of the variational distance. We can estimate this value by Monte Carlo over a set S of policies sampled from the posterior:
$$V\big(P(\pi|D)\,\|\,P(\pi|D\cup(\tau_i, \tau_j, y))\big) \approx \frac{1}{|S|}\sum_{\pi\in S}\Big|1 - \frac{z_1}{z_2}\, P(d|\pi)\Big|.$$
This leaves the computation of the ratio of normalizing constants z₁/z₂, which we estimate by Monte Carlo based on a sample set of policies from the prior distribution, hence avoiding further posterior sampling.
Our basic strategy of using an information theoretic selection heuristic is similar to early work using
Kullback-Leibler divergence [7] to measure the quality of experiments [11, 4]. Our approach differs in that we use a symmetric measure which directly computes differences in probability instead
of expected differences in code lengths. The key disadvantage of this form of look-ahead query
strategy (shared by other strategies of this kind) is the computational cost.
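A sketch of the EBC heuristic for scoring one candidate TPQ. For simplicity, instead of the paper's prior-sample estimate of z₁/z₂, we use the identity z₂/z₁ = P(y|d, D) (the predictive probability), which follows from the definitions of the two normalizers; `likelihood(d, y, pi)` is a hypothetical helper returning P(y|d, π):

```python
import numpy as np

def ebc_score(candidate, posterior_samples, likelihood):
    # candidate: one TPQ d = (tau_i, tau_j); posterior_samples: policies ~ P(pi|D).
    score = 0.0
    for y in (0, 1):
        lik = np.array([likelihood(candidate, y, pi) for pi in posterior_samples])
        p_y = lik.mean()                    # predictive P(y | d, D)
        # Likelihood-weighted variational-distance estimate, with z1/z2 = 1/p_y.
        v = np.abs(1.0 - lik / p_y).mean()
        score += p_y * v
    return score

# The learner scores every candidate TPQ and queries the arg-max.
```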
5 Empirical Results
Below we outline our experimental setup and present our empirical results on four standard RL
benchmark domains.
5.1 Setup
If the posterior distribution focuses mass on the expert policy parameters the expected value of the
MAP parameters will converge to the expected value of the expert's policy. Therefore, to examine
the speed of convergence to the desired expert policy we report the performance of the MAP policy
in the MDP task. We choose the MAP policy, maximizing P(D|π)P(π), from the sample generated
by our HMC routine. The expected return of the selected policy is estimated and reported. Note that
no reward information is given to the learner; rewards are used for evaluation only.
We produce an automated expert capable of responding to the queries produced by our agent. The
expert knows a target policy, and compares, as described above, the query trajectories generated
by the agent to the trajectories generated by the target policy. The expert stochastically produces
a response based on its evaluations. Target policies are hand designed and produce near optimal
performance in each domain.
a)
In all experiments the agent executes a simple parametric policy, P (a|s, ?) = P exp(?(s)??
.
b?A exp(?(s)??b )
The function ?(s) is a set of features derived from the current state s. The complete parameter
vector ? is decomposed into components ?a associated with each action a. The policy is executed
by sampling an action from this distribution. The gradient of this action selection policy can be
derived straightforwardly and substituted into the gradient of the energy function required by our
HMC procedure.
We use the following values for the unspecified model parameters: ?r2 = 1, ? 2 = 2, ? = 0. The
value of K used for TPQ trajectories was set to 10 for each domain except for Bicycle, for which we
used K = 300. The Bicycle simulator uses a fine time scale, so that even K = 300 only corresponds
to a few seconds of bike riding, which is quite reasonable for a TPQ.
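For concreteness, the softmax policy above and the gradient of its log (the quantity the energy gradient needs) can be sketched as follows; the shapes and names are ours:

```python
import numpy as np

def softmax_policy_probs(phi_s, theta):
    # theta: one weight vector per action, shape (n_actions, n_features).
    logits = theta @ phi_s
    logits -= logits.max()            # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def grad_log_policy(phi_s, a, theta):
    # grad_theta log P(a | s, theta): row b gets (1[b == a] - p(b|s)) * phi(s).
    p = softmax_policy_probs(phi_s, theta)
    grad = -np.outer(p, phi_s)
    grad[a] += phi_s
    return grad

theta = np.zeros((3, 4))                        # 3 actions, 4 features
print(softmax_policy_probs(np.ones(4), theta))  # uniform: [1/3 1/3 1/3]
```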
For purposes of comparison we implement a simple random TPQ selection strategy (denoted Random in the graphs below). The random strategy draws an initial TPQ state from P̂₀ and then generates a trajectory pair by executing two policies drawn i.i.d. from the prior distribution P(π). Thus,
this approach does not use information about past query responses when selecting TPQs.
Domains. We consider the following benchmark domains.
Acrobot. The acrobot task simulates a two-link under-actuated robot. One joint end, the 'hands' of the robot, rotates around a fixed point. The mid joint, associated with the 'hips', attaches the upper
and lower links of the robot. To change the joint angle between the upper and lower links the
agent applies torque at the hip joint. The lower link swings freely. Our expert knows a policy for
swinging the acrobot into a balanced handstand. The acrobot system is defined by four continuous
state variables (θ₁, θ₂, θ̇₁, θ̇₂) representing the arrangement of the acrobot's joints and the changing
velocities of the joint angles. The acrobot is controlled by a 12 dimensional softmax policy selecting
between positive, negative, and zero torque to be applied at the hip joint. The feature vector ?(s)
returns the vector of state variables. The acrobot receives a penalty on each step proportional to the
distance between the foot and the target position for the foot.
Mountain Car. The mountain car domain simulates an underpowered vehicle which the agent must
drive to the top of a steep hill. The state of the mountain car system is described by the location of
the car x, and its velocity v. The goal of the agent controlling the mountain car system is to utilize
the hills surrounding the car to generate sufficient energy to escape a basin. Our expert knows a
policy for performing this escape. The agent?s softmax control policy space has 16 dimensions and
selects between positive and negative accelerations of the car. The feature vector φ(s) returns a polynomial expansion (x, v, x², x³, xv, x²v, x³v, v²) of the state. The agent receives a penalty for
every step taken to reach the goal.
Cart Pole. In the cart-pole domain the agent attempts to balance a pole fixed to a movable cart
while maintaining the carts location in space. Episodes terminate if the pole falls or the cart leaves
its specified boundary. The state space is composed of the cart velocity v, change in cart velocity
v 0 , angle of the pole ?, and angular velocity of the pole ? 0 . The control policy has 12 dimensions
and selects the magnitude of the change in velocity (positive or negative) applied to the base of the
cart. The feature vector returns the state of the cart-pole. The agent is penalized for pole positions
deviating from upright and for movement away from the midpoint.
Bicycle Balancing. Agents in the bicycle balancing task must keep the bicycle balanced for 30000
steps. For our experiments we use the simulator originally introduced in [14]. The state of the
bicycle is defined by four variables (ω, ω̇, θ, θ̇). The variable ω is the angle of the bicycle with respect to vertical, and ω̇ is its angular velocity. The variable θ is the angle of the handlebars with respect to neutral, and θ̇ is the angular velocity. The goal of the agent is to keep the bicycle from falling. Falling occurs when |ω| > π/15. We borrow the same implementation used in [10],
including the discrete action set, the 20 dimensional feature space, and 100 dimensional policy. The
agent selects from a discrete set of five actions. Each discrete action has two components. The first
component is the torque applied to the handlebars, T ∈ {−1, 0, 1}, and the second component is the displacement of the rider in the saddle, p ∈ {−0.02, 0, 0.02}. From these components five action tuples are composed: a ∈ {(−1, 0), (1, 0), (0, −0.02), (0, 0.02), (0, 0)}. The agent is penalized proportionally to the magnitude of ω at each step and receives a fixed penalty for falling.
We report the results of our experiments in Figure 1. Each graph gives the results for the TPQ
selection strategies Random, Query-by-Disagreement (QBD), and Expected Belief Change (EBC).
The average reward versus number of queries is provided for each selection strategy, where curves
are averaged over 20 runs of learning.
5.2 Experiment Results
In all domains the learning algorithm successfully learns the target policy. This is true independent
of the query selection procedure used. As can be seen our algorithm can successfully learn even from
queries posed by Random. This demonstrates the effectiveness of our HMC inference approach.
Importantly, in some cases, the active query selection heuristics significantly improve the rate of
convergence compared to Random. The value of the query selection procedures is particularly high
in the Mountain Car and Cart Pole domains. In the Mountain Car domain more than 500 Random
queries were needed to match the performance of 50 EBC queries. In both of these domains examining the generated query trajectories shows that the Random strategy tended to produce difficult
to distinguish trajectory data and later queries tended to resemble earlier queries. This is due to
'plateaus' in the policy space which produce nearly identical behaviors. Intuitively, the information content of queries selected by Random decreases rapidly, leading to slower convergence. By
Figure 1: Results: We report the expected return of the MAP policy, sampled during Hybrid MCMC
simulation of the posterior, as a function of the number of expert queries. Results are averaged over
50 runs. Query trajectory lengths: Acrobot K = 10, Mountain-Car K = 10, Cart-Pole K = 20,
Bicycle Balancing K = 300.
comparison the selection heuristics ensure that selected queries have high impact on the posterior
distribution and exhibit high query diversity.
The benefits of the active selection procedure diminish in the Acrobot and Bicycle domains. In both
of these domains active selection performs only slightly better than Random. This is not the first time
active selection procedures have shown performance similar to passive methods [16]. In Acrobot all
of the query selection procedures quickly converge to the target policy (only 25 queries are needed
for Random to identify the target). Little improvement is possible over this result. Similarly, in
the bicycle domain the performance results are difficult to distinguish. We believe this is due to
the length of the query trajectories (300) and the importance of the initial state distribution. Most
bicycle configurations lead to out of control spirals from which no policy can return the bicycle
to balanced. In these configurations inputs from the agent result in small impact on the observed
state trajectory making policies difficult to distinguish. To avoid these cases in Bicycle the start state
distribution P̂₀ only generated initial states close to a balanced configuration. In these configurations
poor balancing policies are easily distinguished from better policies and the better policies are not
rare. These factors lead Random to be quite effective in this domain.
Finally, comparing the active learning strategies, we see that EBC has a slight advantage over QBD
in all domains other than Bicycle. This agrees with prior active learning work, where more selective
strategies tend to be superior in practice. The price that EBC pays for the improved performance is
in computation time, as it is about an order of magnitude slower.
6 Summary
We examined the problem of learning a target policy via trajectory preference queries. We formulated a Bayesian model for the problem and an algorithm for sampling from the posterior
over policies. Two query selection methods were introduced, which heuristically select queries with
an aim to efficiently identify the target. Experiments in four RL benchmarks indicate that our model
and inference approach is able to infer quality policies and that the query selection methods are
generally more effective than random selection.
Acknowledgments
We gratefully acknowledge the support of ONR under grant number N00014-11-1-0106.
References
[1] R. Akrour, M. Schoenauer, and M. Sebag. Preference-based policy learning. In Dimitrios Gunopulos, Thomas Hofmann, Donato Malerba, and Michalis Vazirgiannis, editors, Proc. ECML/PKDD'11, Part I, volume 6911 of Lecture Notes in Computer Science, pages 12-27. Springer, 2011.
[2] Christophe Andrieu, Nando de Freitas, Arnaud Doucet, and Michael I. Jordan. An introduction to MCMC for machine learning. Machine Learning, 50(1-2):5-43, 2003.
[3] Brenna D. Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and Autonomous Systems, 57(5):469-483, May 2009.
[4] J. M. Bernardo. Expected information as expected utility. Annals of Statistics, 7(3):686-690, 1979.
[5] Weiwei Cheng, Johannes Fürnkranz, Eyke Hüllermeier, and Sang-Hyeun Park. Preference-based policy iteration: Leveraging preference learning for reinforcement learning. In Proceedings of the 22nd European Conference on Machine Learning (ECML 2011), pages 312-327. Springer, 2011.
[6] Wei Chu and Zoubin Ghahramani. Preference learning with Gaussian processes. In Proceedings of the 22nd International Conference on Machine Learning, ICML '05, pages 137-144, New York, NY, USA, 2005. ACM.
[7] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley-Interscience, New York, NY, USA, 1991.
[8] Simon Duane, A. D. Kennedy, Brian J. Pendleton, and Duncan Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216-222, 1987.
[9] Yoav Freund, H. Sebastian Seung, Eli Shamir, and Naftali Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28(2-3):133-168, 1997.
[10] Michail G. Lagoudakis and Ronald Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4, 2003.
[11] D. V. Lindley. On a measure of the information provided by an experiment. The Annals of Mathematical Statistics, 27(4):986-1005, 1956.
[12] Andrew Y. Ng and Stuart J. Russell. Algorithms for inverse reinforcement learning. In ICML, pages 663-670, 2000.
[13] Bob Price and Craig Boutilier. Accelerating reinforcement learning through implicit imitation. Journal of Artificial Intelligence Research (JAIR), 19:569-629, 2003.
[14] Jette Randløv and Preben Alstrøm. Learning to drive a bicycle using reinforcement learning and shaping. In ICML, pages 463-471, 1998.
[15] Stefan Schaal. Learning from demonstration. In NIPS, pages 1040-1046, 1996.
[16] Andrew I. Schein and Lyle H. Ungar. Active learning for logistic regression: an evaluation. Machine Learning, 68(3):235-265, October 2007.
[17] H. S. Seung, M. Opper, and H. Sompolinsky. Query by committee. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, COLT '92, pages 287-294, New York, NY, USA, 1992. ACM.
4,205 | 4,806 | The Perturbed Variation
Maayan Harel
Department of Electrical Engineering
Technion, Haifa, Israel
[email protected]
Shie Mannor
Department of Electrical Engineering
Technion, Haifa, Israel
[email protected]
Abstract
We introduce a new discrepancy score between two distributions that gives an indication on their similarity. While much research has been done to determine if two
samples come from exactly the same distribution, much less research considered
the problem of determining if two finite samples come from similar distributions.
The new score gives an intuitive interpretation of similarity; it optimally perturbs
the distributions so that they best fit each other. The score is defined between
distributions, and can be efficiently estimated from samples. We provide convergence bounds of the estimated score, and develop hypothesis testing procedures
that test if two data sets come from similar distributions. The statistical power of
these procedures is presented in simulations. We also compare the score's capacity
to detect similarity with that of other known measures on real data.
1 Introduction
The question of similarity between two sets of examples is common to many fields, including statistics, data mining, machine learning and computer vision. For example, in machine learning, a
standard assumption is that the training and test data are generated from the same distribution. However, in some scenarios, such as Domain Adaptation (DA), this is not the case and the distributions
are only assumed similar. It is quite intuitive to denote when two inputs are similar in nature, yet the
following question remains open: given two sets of examples, how do we test whether or not they
were generated by similar distributions? The main focus of this work is providing a similarity score
and a corresponding statistical procedure that gives one possible answer to this question.
Discrepancy between distributions has been studied for decades, and a wide variety of distance
scores have been proposed. However, not all proposed scores can be used for testing similarity.
The main difficulty is that most scores have not been designed for statistical testing of similarity
but equality, known as the Two-Sample Problem (TSP). Formally, let P and Q be the generating
distributions of the data; the TSP tests the null hypothesis H0 : P = Q against the general alternative
H₁ : P ≠ Q. This is one of the classical problems in statistics. However, sometimes, like in DA,
the interesting question is with regard to similarity rather than equality. By design, most equality
tests may not be transformed to test similarity; see Section 3 for a review of representative works.
In this work, we quantify similarity using a new score, the Perturbed Variation (PV). We propose
that similarity is related to some predefined value of permitted variations. Consider the gait of two
male subjects as an example. If their physical characteristics are similar, we expect their walk to
be similar, and thus assume the examples representing the two are from similar distributions. This
intuition applies when the distribution of our measurements only endures small changes for people
with similar characteristics. Put more generally, similarity depends on what 'small changes' are in a given application, and implies that similarity is domain specific. The PV, as hinted by its name,
measures the discrepancy between two distributions while allowing for some perturbation of each
distribution; that is, it allows small differences between the distributions. What accounts for small
differences is a parameter of the PV, and may be defined by the user with regard to a specific domain.
Figure 1: X and O identify samples from two distributions; dotted circles denote allowed perturbations.
Samples marked in red are matched with neighbors, while the unmatched samples indicate the PV discrepancy.
Figure 1 illustrates the PV. Note that, like perceptual similarity, the PV turns a blind eye to variations
of some rate.
2 The Perturbed Variation
The PV on continuous distributions is defined as follows:
Definition 1. Let P and Q be two distributions on a Banach space X, and let M(P, Q) be the set of all joint distributions on X × X with marginals P and Q. The PV, with respect to a distance function d : X × X → ℝ and ε, is defined by
$$\mathrm{PV}(P, Q, \varepsilon, d) \doteq \inf_{\mu\in M(P,Q)} \mathbb{P}_\mu\big[d(X, Y) > \varepsilon\big], \qquad (1)$$
over all pairs (X, Y) ∼ μ, such that the marginal of X is P and the marginal of Y is Q.
Put into words, Equation (1) defines the joint distribution μ that couples the two distributions such that the probability of the event of a pair (X, Y) ∼ μ being within a distance greater than ε is minimized.
The solution to (1) is a special case of the classical mass transport problem of Monge [1] and its version by Kantorovich: $\inf_{\mu\in M(P,Q)}\int_{X\times X} c(x, y)\, d\mu(x, y)$, where $c : X\times X\to\mathbb{R}$ is a measurable cost function. When c is a metric, the problem describes the 1st Wasserstein metric. Problem (1) may be rephrased as the optimal mass transport problem with the cost function $c(x, y) = \mathbf{1}_{[d(x,y)>\varepsilon]}$, and may be rewritten as $\inf_\mu \int\!\!\int \mathbf{1}_{[d(x,y)>\varepsilon]}\,\mu(y|x)\, dy\, P(x)\, dx$. The probability μ(y|x) defines the transportation plan of x to y. The PV optimal transportation plan is obtained by perturbing the mass of each point x in its ε-neighborhood so that it redistributes to the distribution of Q. These small perturbations do not add any cost, while transportation of mass to further areas is equally costly.
Note that when P = Q the PV is zero as the optimal plan is simply the identity mapping. Due to
its cost function, the PV is not a metric: it is symmetric but does not comply with the triangle inequality, and may be zero for distributions P ≠ Q. Despite this limitation, this cost function fully quantifies the intuition that small variations should not be penalized when similarity is considered. In this sense, similarity is not unique by definition, as more than one distribution can be similar to a
reference distribution.
The PV is also closely related to the Total Variation distance (TV), which may be written, using a coupling characterization, as $TV(P, Q) = \inf_{\mu\in M(P,Q)} \mathbb{P}_\mu[X \neq Y]$ [2]. This formulation argues that any transportation plan, even to a close neighbor, is costly. Due to this property, the TV is known to be an overly sensitive measure that overestimates the distance between distributions. For example, consider two distributions defined by the Dirac delta functions δ(a) and δ(a + ρ). For any ρ, the TV between the two distributions is 1, while they are intuitively similar. The PV resolves this problem by adding perturbations, and therefore is a natural extension of the TV. Notice, however, that the ε used to compute the PV need not be infinitesimal, and is defined by the user.
The PV can be seen as a compromise between the Wasserstein distance and the TV. As explained, it relaxes the sensitivity of the TV; however, it does not 'over-optimize' the transportation plan. Specifically, distances larger than the allowed perturbation are discarded. This aspect also contributes to the efficiency of estimating the PV from samples; see Section 2.2.
An optimal plan for a four-point example: PV(μ₁, μ₂, ε) = 1/2, with support points a₁ = 0, a₂ = 1, a₃ = 2, a₄ = 2.1; leftovers w₁ = w₂ = 1/4, w₃ = w₄ = 0 and v₄ = 1/2, v₁ = v₂ = v₃ = 0; and transportation plan
$$Z = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & \tfrac{1}{4} & 0 & 0 \\ 0 & 0 & 0 & \tfrac{1}{4} \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$
Figure 2.1: Illustration of the PV score between discrete distributions (bar plots of μ₁ and μ₂ on this support).
2.1 The Perturbed Variation on Discrete Distributions
It can be shown that for two discrete distributions Problem (1) is equivalent to the following problem.
Definition 2. Let μ₁ and μ₂ be two discrete distributions on the unified support {a₁, ..., a_N}. Define the neighborhood of aᵢ as ng(aᵢ, ε) = {z : d(z, aᵢ) ≤ ε}. The PV(μ₁, μ₂, ε, d) between the two distributions is:
$$\min_{w_i\ge 0,\; v_j\ge 0,\; Z_{ij}\ge 0}\;\; \frac{1}{2}\sum_{i=1}^{N} w_i + \frac{1}{2}\sum_{j=1}^{N} v_j \qquad (2)$$
$$\text{s.t.}\quad \sum_{a_j\in ng(a_i,\varepsilon)} Z_{ij} + w_i = \mu_1(a_i)\;\;\forall i, \qquad \sum_{a_i\in ng(a_j,\varepsilon)} Z_{ij} + v_j = \mu_2(a_j)\;\;\forall j, \qquad Z_{ij} = 0\;\;\forall (i, j)\notin ng(a_i, \varepsilon).$$
Each row in the matrix Z ∈ ℝ^{N×N} corresponds to a point mass in μ₁, and each column to a point mass in μ₂. For each i, Z(i,:) is zero in columns corresponding to non-neighboring elements, and non-zero only for columns j for which transportation between μ₂(aⱼ) and μ₁(aᵢ) is performed. The discrepancies between the distributions are depicted by the scalars wᵢ and vⱼ that count the 'leftover' mass in μ₁(aᵢ) and μ₂(aⱼ). The objective is to minimize these discrepancies; therefore matrix Z describes the optimal transportation plan constrained to ε-perturbations. An example of an optimal plan is presented in Figure 2.1.
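Problem (2) is small enough to hand directly to a generic LP solver. A sketch assuming SciPy, with variables ordered as the flattened Z followed by w and v; the marginals in the usage example are the ones we read off Figure 2.1, so treat them as an assumption:

```python
import numpy as np
from scipy.optimize import linprog

def discrete_pv(mu1, mu2, a, eps):
    N = len(a)
    ng = np.abs(a[:, None] - a[None, :]) <= eps      # neighborhood mask
    nz = N * N
    c = np.concatenate([np.zeros(nz), np.full(2 * N, 0.5)])
    A_eq = np.zeros((2 * N, nz + 2 * N))
    for i in range(N):                     # row sums: sum_j Z[i, j] + w_i
        A_eq[i, i * N:(i + 1) * N] = 1.0
        A_eq[i, nz + i] = 1.0
    for j in range(N):                     # column sums: sum_i Z[i, j] + v_j
        for i in range(N):
            A_eq[N + j, i * N + j] = 1.0
        A_eq[N + j, nz + N + j] = 1.0
    b_eq = np.concatenate([mu1, mu2])
    # Entries of Z outside the eps-neighborhood are pinned to zero by bounds.
    bounds = [(0.0, 0.0) if not ng.flat[k] else (0.0, None) for k in range(nz)]
    bounds += [(0.0, None)] * (2 * N)      # w and v are only non-negative
    return linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun

mu1 = np.array([0.25, 0.50, 0.25, 0.00])   # assumed marginals of Figure 2.1
mu2 = np.array([0.00, 0.25, 0.00, 0.75])
print(discrete_pv(mu1, mu2, np.array([0.0, 1.0, 2.0, 2.1]), eps=0.1))  # 0.5
```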
2.2 Estimation of the Perturbed Variation
Typically, we are given samples from which we would like to estimate the PV. Given two samples S₁ = {x₁, ..., x_n} and S₂ = {y₁, ..., y_m}, generated by distributions P and Q respectively, PV̂(S₁, S₂, ε, d) is:
$$\min_{w_i\ge 0,\; v_j\ge 0,\; Z_{ij}\ge 0}\;\; \frac{1}{2n}\sum_{i=1}^{n} w_i + \frac{1}{2m}\sum_{j=1}^{m} v_j \qquad (3)$$
$$\text{s.t.}\quad \sum_{y_j\in ng(x_i,\varepsilon)} Z_{ij} + w_i = 1\;\;\forall i, \qquad \sum_{x_i\in ng(y_j,\varepsilon)} Z_{ij} + v_j = 1\;\;\forall j, \qquad Z_{ij} = 0\;\;\forall (i, j)\notin ng(x_i, \varepsilon),$$
where Z ∈ ℝ^{n×m}. When n = m, the optimization in (3) is identical to (2), as in this case the samples define a discrete distribution. However, when n ≠ m, Problem (3) also accounts for the difference in the size of the two samples.
Problem (3) is a linear program with constraints that may be written as a totally unimodular matrix.
It follows that one of the optimal solutions of (3) is integral [3]; that is, the mass of each sample
is transferred as a whole. This solution may be found by solving the optimal assignment on an
appropriate bipartite graph [3]. Let G = (V = (A, B), E) define this graph, with A = {xᵢ, wᵢ ; i = 1, ..., n} and B = {yⱼ, vⱼ ; j = 1, ..., m} as its bipartite partition. The vertices xᵢ ∈ A are linked
Algorithm 1 Compute PV̂(S₁, S₂, ε, d)
Input: S₁ = {x₁, ..., x_n} and S₂ = {y₁, ..., y_m}, rate ε, and distance measure d.
1. Define Ĝ = (V̂ = (Â, B̂), Ê) with Â = {xᵢ ∈ S₁} and B̂ = {yⱼ ∈ S₂}; connect an edge eᵢⱼ ∈ Ê if d(xᵢ, yⱼ) ≤ ε.
2. Compute the maximum matching on Ĝ.
3. Define S_w and S_v as the number of unmatched samples in S₁ and S₂ respectively.
Output: PV̂(S₁, S₂, ε, d) = (1/2)(S_w/n + S_v/m).
with edge weight zero to yⱼ ∈ ng(xᵢ) and with weight ∞ to yⱼ ∉ ng(xᵢ). In addition, every vertex xᵢ (yⱼ) is linked with weight 1 to wᵢ (vⱼ). To make the graph complete, assign zero-cost edges between all vertices xᵢ and w_k for k ≠ i (and vertices yⱼ and v_k for k ≠ j).
We note that the Earth Mover Distance (EMD) [4], a sampled version of the transportation problem, is also formulated by a linear program that may be solved by optimal assignment. For the EMD and other typical assignment problems, the computational complexity is more demanding; for example, using the Hungarian algorithm it has an O(N³) complexity, where N = n + m is the number of vertices [5]. Contrarily, graph Ĝ, which describes PV̂, is a simple bipartite graph for which maximum cardinality matching, a much simpler problem, can be applied to find the optimal assignment. To find the optimal assignment, first solve the maximum matching on the partial graph between vertices xᵢ, yⱼ that have zero-weight edges (corresponding to neighboring vertices). Then, assign vertices xᵢ and yⱼ for whom a match was not found with wᵢ and vⱼ respectively; see Algorithm 1 and Figure 1 for an illustration of a matching. It is easy to see that the solution obtained solves the assignment problem associated with PV̂.
The complexity of Algorithm 1 amounts to the complexity of the maximal matching step and of setting up the graph, i.e., an additional O(nm) complexity of computing distances between all points. Let k be the average number of neighbors of a sample; then the average number of edges in the bipartite graph Ĝ is |Ê| = n·k. The maximal cardinality matching of this graph is obtained in O(kn·√(n + m)) steps in the worst case [5].
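A sketch of Algorithm 1, assuming SciPy's built-in bipartite matcher; the Euclidean metric here is our choice, and any d can be substituted via cdist's metric argument:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching
from scipy.spatial.distance import cdist

def pv_hat(S1, S2, eps):
    # S1: (n, d) array, S2: (m, d) array.
    n, m = len(S1), len(S2)
    adj = csr_matrix(cdist(S1, S2) <= eps)   # edge iff d(x_i, y_j) <= eps
    match = maximum_bipartite_matching(adj, perm_type='column')
    n_matched = int((match >= 0).sum())      # size of the maximum matching
    S_w, S_v = n - n_matched, m - n_matched  # unmatched samples per side
    return 0.5 * (S_w / n + S_v / m)

rng = np.random.default_rng(0)
S1 = rng.normal(0.0, 1.0, size=(200, 2))
S2 = rng.normal(0.2, 1.0, size=(250, 2))
print(pv_hat(S1, S2, eps=0.5))
```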
3 Related Work
Many scores have been defined for testing discrepancy between distributions. We focus on representative works for nonparametric tests that are most related to our work. First, we consider statistics for
the Two Sample Problem (TSP), i.e., equality testing, that are based on the asymptotic distribution of
the statistic conditioned on the equality. Among these tests is the well known Kolmogorov-Smirnov
test (for one dimensional distributions), and its generalization to higher dimensions by minimal
spanning trees [6]. A different statistic is de?ned by the portion of k-nearest neighbors of each sample that belongs to different distributions; larger portions mean the distributions are closer [7]. These
scores are well known in the statistical literature but cannot be easily changed to test similarity, as
their analysis relies on testing equality.
As discussed earlier, the 1st Wasserstein metric and the TV metric have some relation to the PV. The
EMD and histogram based L1 distance are the sample based estimates of these metrics respectively.
In both cases, the distance is not estimated directly on the samples, but on a higher level partition
of the space: histogram bins or signatures (cluster centers). It is impractical to use the EMD to
estimate the Wasserstein metric between the continuous distributions, as convergence would require
the number of bins to be exponentially dependent on the dimension. As a result, it is commonly
used to rate distances and not for statistical testing. Contrarily, the PV is estimated directly on the
samples and converges to its value between the underlying continuous distributions. We note that
after a good choice of signatures, the EMD captures perceptual similarity, similar to that of the PV. It
is possible to consider the PV as a refinement of the EMD notion of similarity; instead of clustering the data to signatures and moving the signatures, it perturbs each sample. In this manner, it captures a finer notion of similarity better suited for statistical testing.
[Figure 2 panels: (a) PV(ε = 0.1) = 0; (b) PV(ε = 0.1) = 0; (c) PV(ε = 0.1) = 1.]
Figure 2: Two distributions on ℝ: the PV captures the perceptual similarity of (a), (b) against the dissimilarity in (c). The L₁ distance on the partition I₁ = {(0, 0.1), (0.1, 0.2), ...} is L₁¹ = 1 for all cases; on I₂ = {(0, 0.2), (0.2, 0.4), ...} it is L₁²(P_a, Q_a) = 0, L₁²(P_b, Q_b) = 1, L₁²(P_c, Q_c) = 1; and on I₃ = {(0, 0.3), (0.3, 0.6), ...} it is L₁³(P_a, Q_a) = 0, L₁³(P_b, Q_b) = 0, L₁³(P_c, Q_c) = 0.
The partition of the support to bins allows some relaxation of the TV notion. Therefore, instead
of the TV, it may be interesting to consider the L1 as a similarity distance on the measures after
discretization. The example in Figure (2) shows that this relaxation is quite rigid and that there is no
single partition that captures the perceptual similarity. In general, the problem would remain even
if bins with varying width were permitted. Namely, the problem is the choice of a single partition
to measure similarity of a reference distribution to multiple distributions, while choosing multiple
partitions would make the distances incomparable. Also note that defining a 'good' partition is a difficult task, which is exacerbated in higher dimensions.
The last group of statistics are scores established in machine learning: the $d_A$ distance presented by Kifer et al., which is based on the maximum discrepancy on a chosen subset of the support [8], and the Maximum Mean Discrepancy (MMD) of Gretton et al., which defines discrepancy after embedding the distributions into a Reproducing Kernel Hilbert Space (RKHS) [9]. These scores have corresponding statistical tests for the TSP; however, since their analysis is based on finite convergence bounds, in principle they may be modified to test similarity. The $d_A$ captures some intuitive notion of similarity; however, to our knowledge, it is not known how to compute it for a general subset class.¹ The MMD captures the distance between the samples in some RKHS. The MMD may be used to define a similarity test, yet this would require defining two parameters, σ and the similarity rate, whose dependency is not intuitive. Namely, for any similarity rate the result of the test is highly dependent on the choice of σ, but it is not clear how this choice should be made. Contrarily, the PV's parameter ε is related to the data's input domain and may be chosen accordingly.
4 Analysis
We present sample rate convergence analysis of the PV. The proofs of the theorems are provided in
the supplementary material. When no clarity is lost, we omit d from the notation. Our main theorem
is stated as follows:
Theorem 3. Suppose we are given two i.i.d. samples $S_1 = \{x_1, \dots, x_n\} \subset \mathbb{R}^d$ and $S_2 = \{y_1, \dots, y_m\} \subset \mathbb{R}^d$ generated by distributions P and Q, respectively. Let the ground distance be $d = \|\cdot\|_\infty$ and let $N(\epsilon)$ be the cardinality of a disjoint cover of the distributions' support. Then, for any $\delta \in (0, 1)$, $N = \min(n, m)$, and $\lambda = \sqrt{2\big(\log(2(2^{N(\epsilon)} - 2)) + \log(1/\delta)\big)/N}$, we have that
$$P\Big(\big|\widehat{PV}(S_1, S_2, \epsilon) - PV(P, Q, \epsilon)\big| \le \lambda\Big) \ge 1 - \delta.$$
The theorem is stated using $\|\cdot\|_\infty$, but can be rewritten for other metrics (with a slight change of constants). The proof of the theorem exploits the form of the optimization Problem 3. We use the bound of Theorem 3 to construct hypothesis tests. A weakness of this bound is its strong dependency on the dimension. Specifically, it depends on $N(\epsilon)$, which for $\|\cdot\|_\infty$ is $O((1/\epsilon)^d)$: the number of disjoint boxes of volume $\epsilon^d$ that cover the support. Unfortunately, this convergence rate is inherent; namely, without making any further assumptions on the distribution, this rate is unavoidable and is an instance of the "curse of dimensionality". In the following theorem, we present a lower bound on the convergence rate.
¹Most work with the $d_A$ has been with the subset of characteristic functions, approximated by the error of a classifier.
Theorem 4. Let P = Q be the uniform distribution on $S^{d-1}$, the unit (d−1)-dimensional hypersphere. Let $S_1 = \{x_1, \dots, x_N\} \sim P$ and $S_2 = \{y_1, \dots, y_N\} \sim Q$ be two i.i.d. samples. For any $\epsilon, \epsilon', \delta \in (0, 1)$, $0 \le \gamma < 2/3$, and sample size
$$\frac{\log(1/\delta)}{2(1 - 3\gamma/2)^2} \;\le\; N \;\le\; \frac{\gamma}{2}\, e^{d(1-\epsilon^2)/2},$$
we have $PV(P, Q, \epsilon') = 0$ and
$$P\big(\widehat{PV}(S_1, S_2, \epsilon) > \gamma\big) \ge 1 - \delta. \qquad (4)$$
For example, for $\delta = 0.01$ and $\gamma = 0.5$, for any $37 \le N \le 0.25\, e^{d(1-\epsilon^2)/2}$ we have that $\widehat{PV} > 0.5$ with probability at least 0.99. The theorem shows that, for this choice of distributions, for a sample size that is smaller than $O(e^d)$, there is a high probability that the value of $\widehat{PV}$ is far from PV.
It can be observed that the empirical estimate $\widehat{PV}$ is stable; that is, it is almost identical for two data sets differing in one sample. Due to this stability, applying McDiarmid's inequality yields the following.
Theorem 5. Let $S_1 = \{x_1, \dots, x_n\} \sim P$ and $S_2 = \{y_1, \dots, y_m\} \sim Q$ be two i.i.d. samples with $n \ge m$. Then for any $\lambda > 0$,
$$P\Big(\big|\widehat{PV}(S_1, S_2, \epsilon) - E[\widehat{PV}(n, m, \epsilon)]\big| \ge \lambda\Big) \le e^{-\lambda^2 m^2 / 4n},$$
where $E[\widehat{PV}(n, m, \epsilon)]$ is the expectation of $\widehat{PV}$ for a given sample size.
This theorem shows that the sample estimate of the PV converges to its expectation without dependence on the dimension. By combining this result with Theorem 3, it may be deduced that only the convergence of the bias, the difference $|E[\widehat{PV}(n, m, \epsilon)] - PV(P, Q, \epsilon)|$, may be exponential in the dimension. This convergence is distribution dependent. However, intuitively, slow convergence is not always the case, for example when the support of the distributions lies in a lower dimensional manifold of the space. To remedy this dependency we propose a bootstrapping bias correcting technique, presented in Section 5. A different possibility is to project the data to one dimension; due to space limitations, this extension of the PV is left out of the scope of this paper and presented in Appendix A.2 in the supplementary material.
5 Statistical Inference
We construct two types of complementary procedures for hypothesis testing of similarity and dissimilarity.² In the first type of procedures, given $0 \le \tau < 1$, we distinguish between the null hypothesis $H_0^{(1)}: PV(P, Q, \epsilon, d) \le \tau$, which implies similarity, and the alternative hypothesis $H_1^{(1)}: PV(P, Q, \epsilon, d) > \tau$. Notice that when $\tau = 0$, this test is a relaxed version of the TSP. Using $PV(P, Q) = 0$ instead of $P = Q$ as the null allows for some distinction between the distributions, which gives the needed relaxation to capture similarity. In the second type of procedures, we test whether two distributions are similar. To do so, we flip the roles of the null and the alternative. Note that there isn't an equivalent of this form for the TSP; therefore we cannot infer similarity using the TSP test, but only reject equality. Our hypothesis tests are based on the finite sample analysis presented in Section 4; see Appendix A.1 in the supplementary material for the procedures.
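To make the first type of procedure concrete, here is a sketch of a bound-based test built from Theorem 3 as reconstructed above. The threshold symbol τ, the one-sided use of the two-sided bound, and the log-space simplification of log(2(2^{N(ε)} − 2)) (an upper bound on it, hence conservative) are our assumptions; the paper's actual procedures live in its Appendix A.1.

```python
import numpy as np

def theorem3_lambda(n, m, n_eps, delta):
    """Deviation bound lambda of Theorem 3 (as reconstructed above):
    lambda = sqrt(2 * (log(2 * (2**n_eps - 2)) + log(1/delta)) / N),
    with N = min(n, m).  log(2 * 2**n_eps) is used in place of the exact
    log term to avoid overflow for large covers; it is conservative."""
    N = min(n, m)
    log_term = np.log(2.0) + n_eps * np.log(2.0)
    return float(np.sqrt(2.0 * (log_term + np.log(1.0 / delta)) / N))

def similarity_test(pv_hat, tau, lam):
    """Type-1 procedure: reject H0: PV(P, Q, eps) <= tau at level delta
    when the empirical PV exceeds tau by more than lambda."""
    return pv_hat > tau + lam  # True => similarity rejected
```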
To provide further inference on the PV, we apply bootstrapping to approximate Confidence Intervals (CI). The idea of bootstrapping for estimating CIs is a two step procedure: approximation of the sampling distribution of the statistic by resampling with replacement from the initial sample (the bootstrap stage), followed by a computation of the CI based on the resulting distribution. We propose to estimate the CI by the Bootstrap Bias-Corrected accelerated (BCa) interval, which adjusts the simple percentile method to correct for bias and skewness [10]. The BCa is known for its high accuracy; in particular, it can be shown that the BCa interval converges to the theoretical CI at rate $O(N^{-1})$, where N is the sample size. Using the CI, a hypothesis test may be formed: the null $H_0^{(1)}$ is rejected with significance α if $[0, \tau] \cap [\underline{CI}, \overline{CI}] = \emptyset$. Also, for the second test, we apply the principle of CI inclusion [11], which states that if $[\underline{CI}, \overline{CI}] \subseteq [0, \tau]$, dissimilarity is rejected and similarity deduced.
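A simplified stand-in for the CI-based procedures is sketched below. It uses the plain percentile bootstrap, not the BCa interval (the bias and acceleration corrections are omitted), and reuses the hypothetical empirical_pv from the earlier sketch.

```python
import numpy as np

def bootstrap_ci(S1, S2, eps, alpha=0.05, n_boot=1000, seed=None):
    """Percentile-bootstrap CI for PV(P, Q, eps).  The paper's BCa interval
    additionally corrects for bias and skewness; that correction is
    omitted here, so this is only a simplified stand-in."""
    rng = np.random.default_rng(seed)
    S1, S2 = np.asarray(S1), np.asarray(S2)
    n, m = len(S1), len(S2)
    stats = np.array([
        empirical_pv(S1[rng.integers(0, n, n)],   # resample with replacement
                     S2[rng.integers(0, m, m)], eps)
        for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2.0, 1.0 - alpha / 2.0])
    return lo, hi

def ci_decision(lo, hi, tau):
    """Reject similarity if [0, tau] and the CI are disjoint; deduce
    similarity (CI inclusion principle) if the CI lies inside [0, tau]."""
    if lo > tau:
        return "similarity rejected"
    if hi <= tau:
        return "similarity deduced"
    return "inconclusive"
```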
²The two procedures are distinct, as, in general, lacking evidence to reject similarity is not sufficient to infer dissimilarity, and vice versa.
[Figure 3 plots omitted: (a) the Type-2 error for varying perturbation sizes and ε values; (b) Precision-Recall on the Gait data; (c) Precision-Recall on the Video clips. Panels (b) and (c) compare PV, MMD, FR, and KNN.]
6 Experiments
6.1 Synthetic Simulations
In our first experiment, we examine the effect of the choice of ε on the statistical power of the test. For this purpose, we apply significance testing for similarity on two univariate uniform distributions: $P \sim U[0, 1]$ and $Q \sim U[\Delta(\epsilon), 1 + \Delta(\epsilon)]$, where $\Delta(\epsilon)$ is a varying size of perturbation. We considered values of $\epsilon = [0.1, 0.2, 0.3, 0.4, 0.5]$ and sample sizes up to 5000 samples from each distribution. For each value $\epsilon^*$, we test the null hypothesis $H_0^{(1)}: PV(P, Q, \epsilon^*) = 0$ for ten equally spaced values of $\Delta(\epsilon^*)$ in the range $[0, 2\epsilon^*]$. In this manner, we test the ability of the PV to detect similarity for different sizes of perturbations. The percentage of times the null hypothesis was falsely rejected, i.e. the type-1 error, was kept at a significance level $\alpha = 0.05$. The percentage of times the null hypothesis was correctly rejected, the power of the test, was estimated as a function of the sample size and averaged over 500 repetitions. We repeated the simulation using the tests based on the bounds as well as using BCa confidence intervals.
The results in Figure 3(a) show the type-2 error of the bound-based simulations. As expected, the power of the test increases as the sample size grows. Also, when finer perturbations need to be detected, more samples are needed to gain statistical power. For the BCa CI we obtained type-1 and type-2 errors smaller than 0.05 for all the sample sizes. This shows that the convergence of the estimated PV to its value is clearly faster than the bounds suggest. Note that, given a sufficient sample size, any statistic for the TSP would have rejected similarity for any Δ > 0.
6.2 Comparing Distance Measures
Next, we test the ability of the PV to measure similarity on real data. To this end, we test the ranking performance of the PV score against other known distributional distances. We compare the PV to the multivariate extension of the Wald-Wolfowitz score of Friedman & Rafsky (FR) [6], Schilling's nearest neighbors score (KNN) [7], and the Maximum Mean Discrepancy score of Gretton et al. (MMD) [9].³ We rank similarity for the applications of video retrieval and gait recognition.
The ranking performance of the methods was measured by precision-recall curves and the Mean Average Precision (MAP). Let r be the number of samples similar to a query sample. For each $1 \le i \le r$ of these observations, define $r_i \in [1, T-1]$ as its similarity rank, where T is the total number of observations. The Average Precision is $AP = \frac{1}{r}\sum_i i/r_i$, and the MAP is the average of the AP over the queries. The tuning parameters for the methods (k for the KNN, σ for the MMD with an RBF kernel, and ε for the PV) were chosen by cross-validation. The Euclidean distance was used in all methods.
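For reference, the AP and MAP described above reduce to a few lines; the helper names are ours, and ranks are assumed to be 1-indexed positions among the T − 1 ranked observations.

```python
import numpy as np

def average_precision(ranks):
    """AP = (1/r) * sum_i i / r_i, where r_1 < ... < r_r are the ranks of
    the r relevant observations among all ranked observations."""
    ranks = np.sort(np.asarray(ranks, dtype=float))
    i = np.arange(1, ranks.size + 1)
    return float(np.mean(i / ranks))

def mean_average_precision(ranks_per_query):
    """MAP: average of the per-query APs."""
    return float(np.mean([average_precision(r) for r in ranks_per_query]))
```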
In our first experiment, we tested ranking for video-clip retrieval. The data we used was collected and generated by [12], and includes 1,083 videos of commercials, each of about 1,500 frames (25 fps). Twenty unique videos were selected as query videos, each of which has one similar clip in the collection, to which 8 more similar clips were generated by different transformations: brightness increased/decreased, saturation increased/decreased, borders cropped, logo inserted, randomly dropped frames, and added noise frames. Lastly, each frame of a video was transformed to a 32-RGB representation. We computed the similarity rate for each query video to all videos in the set, and ranked the position of each video. The results show that the PV and the KNN score are invariant to most of the transformations, and outperform the FR and MMD methods (Table 1 and Figure 3(c)). We found that brightness changes were most problematic for the PV. For this type of distortion, the simple RGB representation is not sufficient to capture the similarity.

Table 1: MAP for Auslan, Video, and Gait data sets. Average MAP (± standard deviation) computed on a random selection of 75% of the queries, repeated 100 times.

DATA SET   PV             KNN            MMD            FR
VIDEO      0.758 ± 0.009  0.741 ± 0.014  0.689 ± 0.008  0.563 ± 0.019
GAIT       0.792 ± 0.021  0.736 ± 0.014  0.722 ± 0.017  0.698 ± 0.017
GAIT-F     0.844 ± 0.017  0.750 ± 0.015  0.729 ± 0.017  0.666 ± 0.016
GAIT-M     0.679 ± 0.024  0.712 ± 0.017  0.716 ± 0.031  0.799 ± 0.016

³Note that the statistical tests of these measures test equality while the PV tests similarity; therefore our experiments are not of statistical power but of ranking similarity. Even in the case of the distances that may be transformed for similarity, like the MMD, there is no known function between the PV similarity and other forms of similarity. As a result, there is no basis on which to compare which similarity test has better performance.
We also tested gait similarity of female and male subjects; same-gender samples are assumed similar. We used gait data that was recorded by a mobile phone, available at [13]. The data consists of two sets of 15-minute walks of 20 individuals, 10 women and 10 men. As features we used the magnitude of the triaxial accelerometer. We cut the raw data into intervals of approximately 0.5 seconds, without identification of gait cycles. In this manner, each walk is represented by a collection of about 1500 intervals. An initial scaling to [0, 1] was performed once for the whole set. The comparison was done by ranking by gender the 39 samples with respect to a reference walk.
The precision-recall curves in Figure 3(b) show that the PV retrieves with higher precision in the mid-recall range. For the early recall points the PV did not show optimal performance; interestingly, we found that with a smaller ε, the PV had better performance on early recall points. This behavior reflects the flexibility of the PV: a smaller ε should be chosen when the goal is to find very similar instances, and a larger one when the goal is to find higher level similarity. The MAP results presented in Table 1 show that the PV had better performance on the female subjects. From examination of the subject information sheet we found that the range of weight and height within the female group is 50-77 kg and 1.6-1.8 m, while within the male group it is 47-100 kg and 1.65-1.93 m; that is, there is much more variability in the male group. This information provides a reasonable explanation of the PV results, as it appears that a subject from the male group may have a gait that is as dissimilar to the gait of a female subject as it is to that of a different male. In the female group the subjects are more similar and therefore the precision is higher.
7 Discussion
We proposed a new score that measures the similarity between two multivariate distributions and assigns to it a value in [0, 1]. The sensitivity of the score, reflected by the parameter ε, allows for flexibility that is essential for quantifying the notion of similarity. The PV is efficiently estimated from samples. Its low computational complexity relies on its simple binary classification of points as neighbor or non-neighbor points, so that optimization of distances of faraway points is not needed. In this manner, the PV captures only the essential information needed to describe similarity. Although it is not a metric, our experiments show that it captures the distance between similar distributions as well as well known distributional distances. Our work also includes convergence analysis of the PV. Based on this analysis we provide hypothesis tests that give statistical significance to the resulting score. While our bounds are dependent on the dimension, when the intrinsic dimension of the data is smaller than the domain's dimension, statistical power can be gained by bootstrapping. In addition, the PV has an intuitive interpretation that makes it an attractive score for meaningful statistical testing of similarity. Lastly, an added value of the PV is that its computation also gives insight into the areas of discrepancy, namely, the areas of the unmatched samples. In future work we plan to further explore this information, which may be valuable on its own merits.
Acknowledgements
This research was supported in part by the Israel Science Foundation (grant No. 920/12).
References
[1] G. Monge. Mémoire sur la théorie des déblais et des remblais. Histoire de l'Académie Royale des Sciences de Paris, avec les Mémoires de Mathématique et de Physique pour la même année, 1781.
[2] L. Rüschendorf. Monge-Kantorovich transportation problem and optimal couplings. Jahresbericht der DMV, 3:113-137, 2007.
[3] A. Schrijver. Theory of Linear and Integer Programming. John Wiley & Sons Inc, 1998.
[4] Y. Rubner, C. Tomasi, and L.J. Guibas. A metric for distributions with applications to image databases. In Sixth International Conference on Computer Vision, pages 59-66. IEEE, 1998.
[5] R.K. Ahuja, T.L. Magnanti, and J.B. Orlin. Network Flows: Theory, Algorithms, and Applications, chapter 12, pages 469-473. Prentice Hall, 1993.
[6] J.H. Friedman and L.C. Rafsky. Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests. Annals of Statistics, 7:697-717, 1979.
[7] M.F. Schilling. Multivariate two-sample tests based on nearest neighbors. Journal of the American Statistical Association, pages 799-806, 1986.
[8] D. Kifer, S. Ben-David, and J. Gehrke. Detecting change in data streams. In Proceedings of the Thirtieth International Conference on Very Large Data Bases, pages 180-191. VLDB Endowment, 2004.
[9] A. Gretton, K. Borgwardt, B. Schölkopf, M. Rasch, and A. Smola. A kernel method for the two sample problem. In Advances in Neural Information Processing Systems 19, 2007.
[10] B. Efron and R. Tibshirani. An Introduction to the Bootstrap, chapter 14, pages 178-188. Chapman & Hall/CRC, 1993.
[11] S. Wellek. Testing Statistical Hypotheses of Equivalence and Noninferiority, 2nd edition. Chapman and Hall/CRC, 2010.
[12] J. Shao, Z. Huang, H. Shen, J. Shen, and X. Zhou. Distribution-based similarity measures for multi-dimensional point set retrieval applications. In Proceedings of the 16th ACM International Conference on Multimedia (MM '08), 2008.
[13] J. Frank, S. Mannor, and D. Precup. Data sets: Mobile phone gait recognition data, 2010.
[14] S. Boyd and L. Vandenberghe. Convex Optimization, chapter 5, pages 258-261. Cambridge University Press, New York, NY, USA, 2004.
[15] T. Weissman, E. Ordentlich, G. Seroussi, S. Verdu, and M.J. Weinberger. Inequalities for the L1 deviation of the empirical distribution. Hewlett-Packard Labs, Tech. Rep., 2003.
| 4806 |@word version:3 smirnov:2 nd:5 open:1 vldb:1 simulation:4 rgb:2 brightness:2 eld:1 euclidian:1 initial:2 score:24 zij:8 rkhs:2 interestingly:1 discretization:1 comparing:1 yet:2 dx:1 written:2 nitesimal:1 john:1 partition:7 ideo:1 designed:1 resampling:1 selected:1 accordingly:1 cult:1 hypersphere:1 characterization:1 mannor:2 provides:1 detecting:1 mcdiarmid:1 simpler:1 ect:1 fps:1 consists:1 manner:4 magnanti:1 introduce:1 falsely:1 expected:1 behavior:1 pour:1 examine:1 multi:1 resolve:1 curse:1 cardinality:3 totally:1 provided:1 project:1 matched:1 underlying:1 notation:1 mass:8 estimating:1 null:8 israel:3 what:2 kg:2 skewness:1 differing:1 transformation:2 bootstrapping:4 impractical:1 every:1 unimodular:1 exactly:1 unit:1 grant:1 omit:1 yn:1 overestimate:1 tices:1 ner:2 engineering:2 dropped:1 sd:1 despite:1 ap:2 approximately:1 logo:1 studied:1 verdu:1 equivalence:1 dif:2 range:5 averaged:1 unique:2 testing:12 yj:11 lost:1 bootstrap:3 procedure:9 nite:3 area:3 w4:1 empirical:2 reject:2 matching:6 boyd:1 word:1 hight:1 cannot:1 close:1 selection:1 sheet:1 put:2 prentice:1 applying:1 optimize:1 measurable:1 equivalent:2 map:5 transportation:9 center:1 convex:1 shen:2 qc:2 assigns:1 correcting:1 m2:1 adjusts:1 insight:1 vandenberghe:1 retrieve:1 stability:1 notion:5 variation:9 annals:1 suppose:1 commercial:1 user:2 programming:1 maayan:1 academie:1 hypothesis:13 pa:2 element:1 approximated:1 particularly:1 recognition:2 cut:1 distributional:2 database:1 observed:1 role:1 inserted:1 electrical:2 solved:1 worst:1 capture:10 cycle:1 valuable:1 intuition:2 cance:4 meme:1 complexity:6 signature:4 solving:1 bipartite:4 basis:1 triangle:1 shao:1 easily:1 joint:2 represented:1 tx:1 kolmogorov:1 eorie:1 chapter:3 distinct:1 describe:1 detected:1 query:5 neighborhood:2 h0:4 choosing:1 quite:2 whose:1 larger:3 solve:1 supplementary:3 distortion:1 ability:2 statistic:9 redistributes:1 knn:6 tsp:8 ip:1 indication:1 propose:3 gait:10 maximal:2 adaptation:1 fr:5 neighboring:2 combining:1 culty:1 intuitive:5 dirac:1 olkopf:1 rst:4 convergence:11 cluster:1 generating:1 converges:3 ben:1 coupling:2 develop:1 ac:2 measured:1 seroussi:1 nearest:3 strong:1 solves:1 hungarian:1 signi:4 come:3 implies:2 quantify:1 indicate:1 rasch:1 ning:2 closely:1 correct:1 prede:1 material:3 bin:4 crc:2 require:2 mathematique:1 assign:2 generalization:2 avec:1 hinted:1 extension:3 mm:1 considered:3 ground:1 guibas:1 hall:3 mapping:1 scope:1 early:2 a2:1 earth:1 purpose:1 estimation:2 sensitive:1 leftover:1 vice:1 repetition:1 gehrke:1 smv:1 clearly:1 always:1 i3:1 rather:1 zhou:1 varying:3 mobile:2 thirtieth:1 focus:2 vk:1 rank:2 tech:1 detect:2 sense:1 inference:2 dependent:5 rigid:1 typically:1 relation:1 transformed:3 i1:1 among:1 plan:8 constrained:1 special:1 marginal:2 construct:2 once:1 ng:9 emd:6 sampling:1 identical:2 chapman:2 discrepancy:12 minimized:1 future:1 inherent:1 randomly:1 modi:1 harel:1 mover:1 individual:1 replacement:1 friedman:2 mining:1 highly:1 possibility:1 weakness:1 male:6 physique:1 pc:2 hewlett:1 integral:1 edge:6 partial:1 closer:1 tree:1 walk:4 haifa:2 circle:1 re:3 theoretical:1 minimal:1 increased:2 column:3 earlier:1 instance:2 cover:2 assignment:6 cost:6 vertex:7 subset:3 deviation:2 uniform:2 technion:4 optimally:1 dependency:3 answer:1 perturbed:5 connect:1 sv:1 kn:1 synthetic:1 st:2 deduced:2 international:3 sensitivity:2 borgwardt:1 v4:1 ym:4 precup:1 w1:1 nm:1 unavoidable:1 recorded:1 huang:1 woman:1 unmatched:3 american:1 account:2 de:31 accelerometer:1 sec:1 wk:1 
includes:2 inc:1 ranking:4 depends:1 blind:1 vi:3 performed:2 h1:2 stream:1 lab:1 linked:2 red:1 portion:2 orlin:1 minimize:1 formed:1 il:2 accuracy:1 characteristic:3 yield:1 identify:1 spaced:1 raw:1 cation:2 l21:3 ed:4 sixth:1 against:3 associated:1 proof:2 con:2 couple:1 sampled:1 gain:1 recall:9 knowledge:1 efron:1 dimensionality:1 hilbert:1 appears:1 higher:6 permitted:2 formulation:1 done:2 box:1 rejected:5 stage:1 l31:3 lastly:2 smola:1 transport:2 aj:5 grows:1 name:1 effect:1 usa:1 remedy:1 equality:8 symmetric:1 i2:1 attractive:1 width:1 percentile:1 complete:1 argues:1 l1:3 image:1 ef:3 common:1 physical:1 perturbing:1 exponentially:1 volume:1 banach:1 discussed:1 interpretation:2 slight:1 association:1 marginals:1 measurement:1 versa:1 cambridge:1 ai:9 rd:2 tuning:1 inclusion:1 had:2 moving:1 stable:1 similarity:56 add:1 base:1 multivariate:4 own:1 female:5 inf:4 belongs:1 phone:2 scenario:1 inequality:3 binary:1 rep:1 der:1 nition:3 seen:1 wasserstein:4 additional:1 relaxed:1 speci:3 determine:1 v3:1 wolfowitz:2 multiple:2 gretton:3 infer:2 match:1 faster:1 cross:1 retrieval:3 equally:2 weissman:1 a1:2 wald:2 vision:2 metric:10 expectation:2 histogram:2 sometimes:1 kernel:3 mmd:10 addition:2 cropped:1 interval:6 decreased:2 sch:1 w2:1 contrarily:3 subject:7 shie:2 flow:1 integer:1 ciently:2 ee:1 easy:1 relaxes:1 embeddings:1 variety:1 w3:1 incomparable:1 idea:1 schilling:2 whether:2 nement:1 york:1 generally:1 clear:1 amount:1 nonparametric:1 mid:1 ten:1 clip:4 outperform:1 percentage:2 problematic:1 notice:2 estimated:7 overly:1 delta:1 disjoint:2 correctly:1 tibshirani:1 discrete:5 rephrased:1 group:6 pb:2 clarity:1 kept:1 v1:1 graph:9 relaxation:3 almost:1 reasonable:1 dy:1 appendix:2 scaling:1 bound:9 distinguish:1 constraint:1 ri:2 dence:2 aspect:1 ected:1 min:4 qb:2 ned:10 transferred:1 department:2 tv:9 describes:3 remain:1 smaller:5 son:1 wi:10 making:1 s1:11 intuitively:2 explained:1 invariant:1 equation:1 remains:1 turn:1 count:1 royale:1 needed:3 merit:1 end:1 kifer:2 available:1 rewritten:2 apply:3 v2:1 appropriate:1 alternative:3 weinberger:1 clustering:1 a4:1 sw:1 cally:1 exploit:1 classical:2 objective:1 question:4 added:2 costly:2 dependence:1 kantorovich:2 perturbs:2 distance:25 capacity:1 whom:1 manifold:1 collected:1 spanning:1 sur:1 illustration:2 providing:1 unfortunately:1 frank:1 stated:1 design:1 twenty:1 rafsky:2 allowing:1 l11:1 observation:2 discarded:1 variability:1 y1:5 rn:1 perturbation:10 reproducing:1 frame:4 triaxial:1 david:1 pair:2 namely:4 paris:1 tomasi:1 identi:1 distinction:1 established:1 qa:2 able:1 program:2 saturation:1 including:1 packard:1 video:11 explanation:1 power:7 event:1 demanding:1 natural:1 ranked:1 examination:1 representing:1 eye:1 ne:10 isn:1 review:1 comply:1 literature:1 bca:5 acknowledgement:1 determining:1 asymptotic:1 lacking:1 fully:1 expect:1 interesting:2 limitation:2 suf:3 men:1 monge:3 validation:1 foundation:1 rubner:1 principle:2 endowment:1 row:1 penalized:1 changed:1 supported:1 last:1 bias:4 wide:1 neighbor:8 regard:2 curve:2 dimension:10 xn:5 ordentlich:1 commonly:1 made:1 collection:2 grater:1 far:1 uni:1 ver:1 assumed:2 xi:13 continuous:3 decade:1 table:3 nature:1 contributes:1 domain:5 da:5 quanti:1 vj:7 did:1 main:3 s2:13 whole:2 border:1 noise:1 edition:1 ait:3 allowed:2 complementary:1 repeated:2 x1:5 representative:2 cient:3 ahuja:1 slow:1 wiley:1 ny:1 precision:10 position:1 pv:77 ciency:1 exponential:1 lie:1 perceptual:4 theorem:12 er:1 a3:1 evidence:1 essential:2 intrinsic:1 adding:1 gained:1 
ci:12 dissimilarity:2 magnitude:1 illustrates:1 conditioned:1 suited:1 depicted:1 simply:1 eij:1 univariate:1 faraway:1 explore:1 scalar:1 applies:1 gender:2 corresponds:1 relies:2 acm:1 emoire:1 marked:1 identity:1 formulated:1 goal:2 rbf:1 quantifying:1 change:5 specifically:1 typical:1 corrected:1 classi:2 total:2 multimedia:1 e:1 la:2 schrijver:1 meaningful:1 formally:1 people:1 support:6 dissimilar:1 accelerated:1 exibility:2 tested:2 |
4,206 | 4,807 | Multi-task Vector Field Learning
Binbin Lin¹, Sen Yang², Chiyuan Zhang¹, Jieping Ye², Xiaofei He¹
¹State Key Lab of CAD&CG, Zhejiang University, Hangzhou 310058, China
{binbinlinzju, chiyuan.zhang.zju, xiaofeihe}@gmail.com
²The Biodesign Institute, Arizona State University, Tempe, AZ, 85287
{senyang, jieping.ye}@asu.edu
Abstract
Multi-task learning (MTL) aims to improve generalization performance by learning multiple related tasks simultaneously and identifying the shared information
among tasks. Most of existing MTL methods focus on learning linear models
under the supervised setting. We propose a novel semi-supervised and nonlinear
approach for MTL using vector fields. A vector field is a smooth mapping from
the manifold to the tangent spaces which can be viewed as a directional derivative
of functions on the manifold. We argue that vector fields provide a natural way to
exploit the geometric structure of data as well as the shared differential structure
of tasks, both of which are crucial for semi-supervised multi-task learning. In this
paper, we develop multi-task vector field learning (MTVFL) which learns the predictor functions and the vector fields simultaneously. MTVFL has the following
key properties. (1) The vector fields MTVFL learns are close to the gradient fields
of the predictor functions. (2) Within each task, the vector field is required to be as
parallel as possible which is expected to span a low dimensional subspace. (3) The
vector fields from all tasks share a low dimensional subspace. We formalize our
idea in a regularization framework and also provide a convex relaxation method
to solve the original non-convex problem. The experimental results on synthetic
and real data demonstrate the effectiveness of our proposed approach.
1 Introduction
In many applications, labeled data are expensive and time consuming to obtain while unlabeled data
are abundant. The problem of using unlabeled data to improve the generalization performance is
often referred to as semi-supervised learning (SSL). It is well known that in order to make semisupervised learning work, some assumptions on the dependency between the predictor function and
the marginal distribution of data are needed. The manifold assumption [15, 5], which has been
widely adopted in the last decade, states that the predictor function lives in a low dimensional manifold of the marginal distribution.
Multi-task learning was proposed to enhance the generalization performance by learning multiple
related tasks simultaneously. The abundant literature on multi-task learning demonstrates that the
learning performance indeed improves when the tasks are related [4, 6, 7]. The key step in MTL
is to find the shared information among tasks. Evgeniou et al. [12] proposed a regularization MTL
framework which assumes all tasks are related and close to each other. Ando and Zhang [2] proposed a structural learning framework, which assumed multiple predictors for different tasks shared
a common structure on the underlying predictor space. An alternating structure optimization (ASO)
method was proposed for linear predictors, which assumed the task parameters share a low dimensional subspace. Agarwal et al. [1] generalized the idea of sharing a subspace by assuming that all task parameters lie on a manifold.
(a) A parallel field on $\mathbb{R}^2$   (b) A parallel field on the Swiss roll
Figure 1: Examples of parallel fields. The parallel field on $\mathbb{R}^2$ spans a one-dimensional subspace and the parallel field on the Swiss roll spans a two-dimensional subspace.
In this paper, we consider semi-supervised multi-task learning (SSMTL). Although many SSL methods have been proposed in the literature [10], these methods are often not directly amenable to MTL
extensions [18]. Liu et al. [18] proposed an SSMTL framework which encouraged related models
to have similar parameters. However they require that related tasks share similar representations [9].
Wang et al. [19] proposed another SSMTL method under the assumption that the tasks are clustered [4, 14]. The cluster structure is characterized by task parameters of linear predictor functions.
For linear predictors, the task parameters they used are actually the constant gradient of the predictor
functions which form a first order differential structure. For general nonlinear predictor functions,
we show it is more natural to capture the shared differential structure using vector fields.
In this paper, we propose a novel SSMTL formulation using vector fields. A vector field is a smooth
mapping from the manifold to the tangent spaces which can be viewed as a directional derivative
of functions on the manifold. In this way, a vector field naturally characterizes the differential
structure of functions while also providing a natural way to exploit the geometric structure of data;
these are the two most important aspects for SSMTL. Based on this idea, we develop the multi-task
vector field learning (MTVFL) method which learns the prediction functions and the vector fields
simultaneously. The vector fields we learned are forced to be close to the gradient fields of predictor
functions. In each task, the vector field is required to be as parallel as possible. We say that a
vector field is parallel if the vectors are parallel along the geodesics on the manifold. In extreme
cases, when the manifold is a linear (or an affine) space, then the geodesics of such manifold are
straight lines. In such cases, the space spanned by these parallel vectors is a simply one-dimensional
subspace. Thus when the manifold is flat (i.e., with zero curvature) or the curvature is small, it is
expected that these parallel vectors concentrate on a low dimensional subspace. As an example, we
can see from Fig. 1 that the parallel field on the plane spans a one-dimensional subspace and the
parallel field on the Swiss roll spans a two-dimensional subspace. For the multi-task case, these
vector fields share a low dimensional subspace. In addition, we assume these vector fields share a
low dimensional subspace among all tasks. In essence, we use a first-order differential structure to
characterize the shared structure of tasks and use a second-order differential structure to characterize
the specific parts of tasks. We formalize our idea in a regularization framework and provide a convex
relaxation method to solve the original non-convex problem. We have performed experiments using
both synthetic and real data; results demonstrate the effectiveness of our proposed approach.
2 Multi-task Learning: A Vector Field Approach
In this section, we first introduce vector fields and then present multi-task learning via exploring
shared structure using vector fields.
2.1 Multi-task Learning Setting and Vector Fields
We first introduce notation and symbols. We are given m tasks, with $n_l$ samples $x_i^l$, $i = 1, \dots, n_l$, for the l-th task. The total number of samples is $n = \sum_l n_l$. For the l-th task, we assume the data $x_i^l$ lie on a $d_l$-dimensional manifold $\mathcal{M}_l$. All of these data manifolds are embedded in the same D-dimensional ambient space $\mathbb{R}^D$. It is worth noting that the dimensions of different data manifolds are not required to be the same. Without loss of generality, we assume the first $n_l^0$ ($n_l^0 < n_l$) samples are labeled, with $y_j^l \in \mathbb{R}$ for regression and $y_j^l \in \{-1, 1\}$ for classification, $j = 1, \dots, n_l^0$. The total number of labeled samples is $n^0 = \sum_l n_l^0$. For the l-th task, we denote the regression function or classification function by $f_l^*$. The goal of semi-supervised multi-task learning is to learn the function values on the unlabeled data, i.e., $f_l^*(x_i^l)$, $n_l^0 + 1 \le i \le n_l$.
Given the l-th task, we first construct a nearest neighbor graph by either ε-neighborhood or k nearest neighbors. Let $x_i^l \sim x_j^l$ denote that $x_i^l$ and $x_j^l$ are neighbors. Let $w_{ij}^l$ denote the weight which measures the similarity between $x_i^l$ and $x_j^l$; it can be approximated by the heat kernel weight or the simple 0-1 weight. For each point $x_i^l$, we estimate its tangent space $T_{x_i^l}\mathcal{M}$ by performing PCA on its neighborhood. We choose the largest $d_l$ eigenvectors as the bases, since the tangent space $T_{x_i^l}\mathcal{M}$ has the same dimension as the manifold $\mathcal{M}_l$. Let $T_i^l \in \mathbb{R}^{D \times d_l}$ be the matrix whose columns constitute an orthonormal basis for $T_{x_i^l}\mathcal{M}$. It is easy to show that $P_i^l = T_i^l T_i^{l\,T}$ is the unique orthogonal projection from $\mathbb{R}^D$ onto the tangent space $T_{x_i^l}\mathcal{M}$ [13]. That is, for any vector $a \in \mathbb{R}^D$, we have $P_i^l a \in T_{x_i^l}\mathcal{M}$ and $(a - P_i^l a) \perp P_i^l a$.
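A minimal sketch of this tangent-space estimation step, assuming NumPy. Centering the neighborhood at its mean is our choice; the text only specifies that PCA is performed on the neighborhood.

```python
import numpy as np

def tangent_basis(X, nbr_idx, d_l):
    """PCA estimate of an orthonormal basis T_i (D x d_l) of the tangent
    space at x_i, from its neighborhood X[nbr_idx] (requires len(nbr_idx)
    >= d_l).  The orthogonal projection onto the tangent space is then
    P_i = T_i @ T_i.T."""
    local = X[nbr_idx] - X[nbr_idx].mean(axis=0)   # centered neighborhood
    _, _, Vt = np.linalg.svd(local, full_matrices=False)
    return Vt[:d_l].T   # columns: top d_l principal directions
```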
We now formally define the vector field and show how to represent it in the discrete case.
Definition 2.1 ([16]). A vector field X on the manifold $\mathcal{M}$ is a continuous map $X : \mathcal{M} \to T\mathcal{M}$, where $T\mathcal{M}$ is the set of tangent spaces, written as $p \mapsto X_p$, with the property that for each $p \in \mathcal{M}$, $X_p$ is an element of $T_p\mathcal{M}$.
We can think of a vector field on the manifold as an arrow, in the same way as we think of a vector field in Euclidean space, with a given magnitude and direction attached to each point on the manifold, and chosen to be tangent to the manifold. A vector field V on the manifold is called a gradient field if there exists a function f on the manifold such that $\nabla f = V$, where $\nabla$ is the covariant derivative on the manifold. Therefore, gradient fields are one kind of vector field; they play a critical role in connecting vector fields and functions.
Let $V_l$ be a vector field on the manifold $\mathcal{M}_l$. For each point $x_i^l$, let $V_{x_i^l}$ denote the value of the vector field $V_l$ at $x_i^l$. By the definition of a vector field, $V_{x_i^l}$ should be a vector in the tangent space $T_{x_i^l}\mathcal{M}_l$. Therefore, we can represent it in the coordinates of the tangent space as $V_{x_i^l} = T_i^l v_i^l$, where $v_i^l \in \mathbb{R}^{d_l}$ is the local representation of $V_{x_i^l}$ with respect to $T_i^l$. Let $f_l$ be a function on the manifold $\mathcal{M}_l$. Abusing notation slightly, we also use $f_l$ to denote the vector $f_l = (f_l(x_1^l), \dots, f_l(x_{n_l}^l))^T$ and use $V_l$ to denote the vector $V_l = (v_1^{l\,T}, \dots, v_{n_l}^{l\,T})^T \in \mathbb{R}^{d_l n_l}$. That is, $V_l$ is a $d_l n_l$-dimensional column vector which concatenates all the $v_i^l$'s for a fixed l. Then, for each task, we aim to compute the vector $f_l$ and the vector $V_l$.
2.2 Multi-task Vector Field Learning
In this section, we introduce multi-task vector field learning (MTVFL).
Many existing MTL methods capture the task relatedness by sharing task parameters. For linear predictors, the task parameters they use are actually the constant gradient vectors of the predictor functions. For general nonlinear predictor functions, we show it is natural to capture the shared differential structure using vector fields. Let f denote the vector $(f_1^T, \dots, f_m^T)^T$ and V denote the vector $(V_1^T, \dots, V_m^T)^T = (v_1^{1\,T}, \dots, v_{n_m}^{m\,T})^T$. We propose to learn f and V simultaneously:
• The vector field $V_l$ should be close to the gradient field $\nabla f_l$ of $f_l$, which can be formalized as follows:
$$\min_{f, V} R_1(f, V) = \sum_{l=1}^m R_1(f_l, V_l) := \sum_{l=1}^m \int_{\mathcal{M}_l} \|\nabla f_l - V_l\|^2. \qquad (1)$$
• The vector field $V_l$ should be as parallel as possible:
$$\min_V R_2(V) = \sum_{l=1}^m R_2(V_l) := \sum_{l=1}^m \int_{\mathcal{M}_l} \|\nabla V_l\|_{HS}^2, \qquad (2)$$
where $\nabla$ is the covariant derivative on the manifold and $\|\cdot\|_{HS}$ denotes the Hilbert-Schmidt tensor norm [11]. $\nabla V_l$ measures the change of the vector field; therefore minimizing $\int_{\mathcal{M}_l} \|\nabla V_l\|_{HS}^2$ enforces the vector field $V_l$ to be parallel.
• All vector fields share an h-dimensional subspace, where h is a predefined parameter:
$$T_i^l v_i^l = u_i^l + \Theta^T w_i^l, \quad \text{s.t. } \Theta\Theta^T = I_{h \times h}. \qquad (3)$$
Since these vector fields are assumed to share a low dimensional space, it is expected that the residual vector $u_i^l$ is small. We define another term $R_3$ to control the complexity as follows:
$$R_3(v_i^l, w_i^l, \Theta) = \sum_{l=1}^m \sum_{i=1}^{n_l} \alpha\|u_i^l\|^2 + \beta\|T_i^l v_i^l\|^2 \qquad (4)$$
$$= \sum_{l=1}^m \sum_{i=1}^{n_l} \alpha\|T_i^l v_i^l - \Theta^T w_i^l\|^2 + \beta\|T_i^l v_i^l\|^2. \qquad (5)$$
Note that α and β are pre-specified coefficients, indicating the importance of the corresponding regularization components. Since we would like the vector field to be parallel, the vector norm is not expected to be too small. Besides, as we assume the vector fields share a low dimensional subspace, the residual vector $u_i^l$ is expected to be small. In practice we suggest using a small β and a large α. By setting β = 0, $R_3$ reduces to the regularization term proposed in ASO if we also replace the tangent vectors by the task parameters. Therefore, this formulation is a generalization of ASO.
It can be verified that $w_i^{l*} = \Theta T_i^l v_i^l = \arg\min_{w_i^l} R_3(v_i^l, w_i^l, \Theta)$. Thus we have $u_i^l = T_i^l v_i^l - \Theta^T w_i^l = (I - \Theta^T\Theta) T_i^l v_i^l$. Therefore, we can rewrite $R_3$ as follows:
$$R_3(V, \Theta) = \sum_{l=1}^m \sum_{i=1}^{n_l} \alpha\|u_i^l\|^2 + \beta\|T_i^l v_i^l\|^2 = \sum_{l=1}^m \sum_{i=1}^{n_l} \alpha\|(I - \Theta^T\Theta)T_i^l v_i^l\|^2 + \beta\|T_i^l v_i^l\|^2 = \alpha V^T A_\Theta V + \beta V^T H V, \qquad (6)$$
where H is a block diagonal matrix with diagonal blocks $T_i^{l\,T} T_i^l$, and $A_\Theta$ is another block diagonal matrix with diagonal blocks $T_i^{l\,T}(I - \Theta^T\Theta)^T(I - \Theta^T\Theta)T_i^l = T_i^{l\,T}(I - \Theta^T\Theta)T_i^l$.
Therefore, the proposed formulation solves the following optimization problem:
$$\arg\min_{f, V, \Theta} E(f, V, \Theta) = R_0(f) + \lambda_1 R_1(f, V) + \lambda_2 R_2(V) + \lambda_3 R_3(V, \Theta) \quad \text{s.t. } \Theta\Theta^T = I_{h \times h}, \qquad (7)$$
where $R_0(f)$ is the loss function. For simplicity, we use the quadratic loss $R_0(f) = \frac{1}{n^0}\sum_{l=1}^m \sum_{i=1}^{n_l^0} (f_l(x_i^l) - y_i^l)^2$.
l=1
2.3
Objective Function in the Matrix Form
To simplify Eq. (7), in this section we rewrite our objective function in the matrix form.
Using the discrete methods in [17], we have the following discrete forms:
$$R_1(f_l, V_l) = \sum_{i \sim j} w_{ij}^l \Big((x_j^l - x_i^l)^T T_i^l v_i^l - f_j^l + f_i^l\Big)^2, \qquad (8)$$
$$R_2(V_l) = \sum_{i \sim j} w_{ij}^l \big\|P_i^l T_j^l v_j^l - T_i^l v_i^l\big\|^2. \qquad (9)$$
Interestingly, with some algebraic transformations, we obtain the following matrix forms for our objective functions:
$$R_1(f_l, V_l) = 2f_l^T L_l f_l + V_l^T G_l V_l - 2V_l^T C_l f_l, \qquad (10)$$
where $L_l$ is the graph Laplacian matrix, $G_l$ is a $d_l n_l \times d_l n_l$ block diagonal matrix, and $C_l = [C_1^{l\,T}, \dots, C_{n_l}^{l\,T}]^T$ is a $d_l n_l \times n_l$ block matrix. Denoting the i-th $d_l \times d_l$ diagonal block of $G_l$ by $G_{ii}^l$ and the i-th $d_l \times n_l$ block of $C_l$ by $C_i^l$, we have
$$G_{ii}^l = \sum_{j \sim i} w_{ij}^l\, T_i^{l\,T}(x_j^l - x_i^l)(x_j^l - x_i^l)^T T_i^l, \quad C_i^l = \sum_{j \sim i} w_{ij}^l\, T_i^{l\,T}(x_j^l - x_i^l)\, s_{ij}^{l\,T}, \qquad (11)$$
where $s_{ij}^l \in \mathbb{R}^{n_l}$ is a selection vector of all zero elements except for the i-th element being −1 and the j-th element being 1.
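The following dense sketch assembles $L_l$, $G_l$ and $C_l$ of Eqs. (10)-(11) for a single task, assuming NumPy. It is written for clarity rather than efficiency, and the function name and data layout are ours.

```python
import numpy as np

def assemble_task_matrices(X, W, T):
    """Assemble L_l, G_l, C_l of Eqs. (10)-(11) for one task (dense, for
    exposition).  X: (n, D) samples; W: (n, n) symmetric weights with zero
    diagonal; T: list of n tangent bases, each (D, d)."""
    n = X.shape[0]
    d = T[0].shape[1]
    L = np.diag(W.sum(axis=1)) - W                     # graph Laplacian
    G = np.zeros((n * d, n * d))
    C = np.zeros((n * d, n))
    for i in range(n):
        Ti = T[i]
        for j in np.flatnonzero(W[i]):
            e = X[j] - X[i]                            # x_j - x_i
            te = Ti.T @ e                              # tangent coordinates
            G[i*d:(i+1)*d, i*d:(i+1)*d] += W[i, j] * np.outer(te, te)
            s = np.zeros(n)                            # selection vector s_ij
            s[i], s[j] = -1.0, 1.0
            C[i*d:(i+1)*d, :] += W[i, j] * np.outer(te, s)
    return L, G, C
```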
And $R_2$ becomes
$$R_2(V_l) = V_l^T B_l V_l, \qquad (12)$$
where $B_l$ is a $d_l n_l \times d_l n_l$ sparse block matrix. Indexing each $d_l \times d_l$ block by $B_{ij}^l$, we have
$$B_{ii}^l = \sum_{j \sim i} w_{ij}^l \big(Q_{ij}^l Q_{ij}^{l\,T} + I\big), \qquad (13)$$
$$B_{ij}^l = \begin{cases} -2 w_{ij}^l Q_{ij}^l, & \text{if } x_i \sim x_j \\ 0, & \text{otherwise,} \end{cases} \qquad (14)$$
where $Q_{ij}^l = T_i^{l\,T} T_j^l$. It is worth noting that both $R_1$ and $R_2$ depend on the tangent spaces $T_i^l$.
Thus we can further write $R_1(f, V)$ and $R_2(V)$ as
$$R_1(f, V) = \sum_{l=1}^m R_1(f_l, V_l) = 2f^T L f + V^T G V - 2V^T C f, \qquad (15)$$
$$R_2(V) = \sum_{l=1}^m R_2(V_l) = V^T B V, \qquad (16)$$
where L, G and B are block diagonal matrices whose l-th diagonal blocks are $L_l$, $G_l$ and $B_l$, respectively, and C is a column block matrix whose l-th block is $C_l$.
Let $\mathcal{I}$ denote an $n \times n$ diagonal matrix where $\mathcal{I}_{ii} = 1$ if the i-th data point is labeled and $\mathcal{I}_{ii} = 0$ otherwise, and let $y \in \mathbb{R}^n$ be a column vector whose i-th element is the label of the i-th data point if it is labeled and 0 otherwise. Then $R_0(f) = \frac{1}{n^0}(f - y)^T \mathcal{I} (f - y)$. Finally, we obtain the following matrix form of the objective function in Eq. (7) with the constraint $\Theta\Theta^T = I_{h \times h}$:
following matrix form for our objective function in Eq. (7) with the constraint ??T = Ih?h as:
E(f, V, ?) = R0 (f ) + ?1 R1 (f, V ) + ?2 R2 (V ) + ?3 R3 (V, ?)
1
=
(f ? y)T I(f ? y) + ?1 (2f T Lf + V T GV ? 2V T Cf ) + ?2 V T BV + ?3 V T (?A? + ?H)V
n0
1
(f ? y)T I(f ? y) + 2?1 f T Lf + V T (?1 G + ?2 B + ?3 (?A? + ?H))V ? 2?1 V T Cf.
=
n0
It is worth noting that matrices L, G, B, C depend on data, and only the matrix A? is related to ?.
3 Optimization
In this section, we discuss how to solve the following optimization problem:
$$\arg\min_{f, V, \Theta} E(f, V, \Theta), \quad \text{s.t. } \Theta\Theta^T = I_{h \times h}. \qquad (17)$$
We use alternating optimization to solve this problem (a minimal driver for the loop is sketched after the two sub-problems below).
• Optimization of f and V. For a fixed Θ, the optimal f and V can be obtained by solving
$$\arg\min_{f, V} E(f, V, \Theta). \qquad (18)$$
• Optimization of Θ. For a fixed V, the optimal Θ can be obtained by solving
$$\arg\min_\Theta R_3(V, \Theta), \quad \text{s.t. } \Theta\Theta^T = I_{h \times h}. \qquad (19)$$
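In outline, the alternating scheme is just the following loop; the fixed iteration count (no convergence test) is our simplification, and the two solver callbacks stand for the sub-routines derived next.

```python
def mtvfl_alternate(solve_fv, solve_theta, theta0, n_iters=20):
    """Alternating minimization of Eq. (17): with Theta fixed, solve the
    linear system of Section 3.1 for (f, V); with V fixed, update Theta
    by the SVD step of Section 3.2."""
    theta = theta0
    for _ in range(n_iters):
        f, V = solve_fv(theta)     # sub-problem (18)
        theta = solve_theta(V)     # sub-problem (19)
    return f, V, theta
```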
3.1 Optimization of f and V for a Given Θ
When ? is fixed, the objective function is similar to that of the single task case. However, there are
some differences we would like to mention. Firstly, when constructing the nearest neighbor graph,
data points from different tasks are disconnected. Therefore when estimating tangent spaces, data
points from different tasks are independent. Secondly, we do not require the dimension of tangent
spaces from each task to be the same.
We note that
$$\frac{\partial E}{\partial f} = 2\Big(\frac{1}{n^0}\mathcal{I} + 2\lambda_1 L\Big) f - 2\lambda_1 C^T V - \frac{2}{n^0} y, \qquad (20)$$
$$\frac{\partial E}{\partial V} = -2\lambda_1 C f + 2\big(\lambda_1 G + \lambda_2 B + \lambda_3(\alpha A_\Theta + \beta H)\big)V. \qquad (21)$$
Requiring the derivatives to vanish, we obtain the following linear system:
$$\begin{pmatrix} \frac{1}{n^0}\mathcal{I} + 2\lambda_1 L & -\lambda_1 C^T \\ -\lambda_1 C & \lambda_1 G + \lambda_2 B + \lambda_3(\alpha A_\Theta + \beta H) \end{pmatrix} \begin{pmatrix} f \\ V \end{pmatrix} = \begin{pmatrix} \frac{1}{n^0} y \\ 0 \end{pmatrix}. \qquad (22)$$
Except for the matrix $A_\Theta$, all other matrices can be computed in advance and do not change during the iterative process.
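A dense-solver sketch of this step, assuming NumPy and the matrices assembled as above; in practice one would exploit sparsity. Argument names are ours.

```python
import numpy as np

def solve_f_v(L, G, B, C, A_theta, H, I_lab, y, lams, alpha, beta, n0):
    """Solve the linear system (22) for (f, V) with Theta fixed.
    lams = (lam1, lam2, lam3); I_lab is the diagonal label-indicator
    matrix and n0 the number of labeled points."""
    lam1, lam2, lam3 = lams
    n = L.shape[0]
    top = np.hstack([I_lab / n0 + 2 * lam1 * L, -lam1 * C.T])
    bot = np.hstack([-lam1 * C,
                     lam1 * G + lam2 * B + lam3 * (alpha * A_theta + beta * H)])
    A = np.vstack([top, bot])
    rhs = np.concatenate([y / n0, np.zeros(C.shape[0])])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]   # f, V
```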
3.2 Optimization of Θ for a Given V
Since $R_0(f)$, $R_1(f, V)$ and $R_2(V)$ do not involve the variable Θ, we only need to optimize $R_3(V, \Theta)$ subject to $\Theta\Theta^T = I_{h \times h}$.
Recalling Eq. (6), we rewrite $R_3(V, \Theta)$ and solve:
$$\Theta^* = \arg\min_\Theta \sum_{l=1}^m \sum_{i=1}^{n_l} \Big(\alpha\,\|(I - \Theta^T\Theta)T_i^l v_i^l\|^2 + \beta\,\|T_i^l v_i^l\|^2\Big) = \arg\min_\Theta \alpha\,\mathrm{tr}\Big(\mathbf{V}^T\big((1 + \beta/\alpha)I - \Theta^T\Theta\big)\mathbf{V}\Big) = \arg\max_\Theta \mathrm{tr}\big(\Theta\,\mathbf{V}\mathbf{V}^T\Theta^T\big), \qquad (23)$$
where $\mathbf{V} = (T_1^1 v_1^1, \dots, T_{n_m}^m v_{n_m}^m)$ is a $D \times n$ matrix with each column being a tangent vector. The optimal $\Theta^*$ can be obtained using the singular value decomposition (SVD). Let $\mathbf{V} = Z_1 \Sigma Z_2^T$ be the SVD of $\mathbf{V}$, where we assume the singular values are in decreasing order in $\Sigma$. Then the rows of $\Theta^*$ are given by the first h columns of $Z_1$.
3.3 Convex Relaxation
The orthogonality constraint in Eq. (23) is non-convex. Next, we convert Eq. (23) into a convex formulation by relaxing its feasible domain into a convex set.
Let $\eta = \beta/\alpha$. It can be verified that the following equality holds: $(1 + \eta)I - \Theta^T\Theta = \eta(1 + \eta)(\eta I + \Theta^T\Theta)^{-1}$. Then we can rewrite $R_3(V, \Theta)$ as $R_3(V, \Theta) = \alpha\eta(1 + \eta)\,\mathrm{tr}\big(\mathbf{V}^T(\eta I + \Theta^T\Theta)^{-1}\mathbf{V}\big)$.
Let $\mathcal{M}_e$ be defined as $\mathcal{M}_e = \{M : M = \Theta^T\Theta,\ \Theta\Theta^T = I,\ \Theta \in \mathbb{R}^{h \times D}\}$. The convex hull [8] of $\mathcal{M}_e$ can be expressed as the convex set $\mathcal{M}_c$ given by $\mathcal{M}_c = \{M : \mathrm{tr}(M) = h,\ M \preceq I,\ M \in \mathbb{S}_+^D\}$, and each element in $\mathcal{M}_e$ is referred to as an extreme point of $\mathcal{M}_c$.
To convert the non-convex problem in Eq. (23) into a convex formulation, we replace $\Theta^T\Theta$ with M and naturally relax its feasible domain into the convex set, based on the relationship between $\mathcal{M}_e$ and $\mathcal{M}_c$ presented above; this results in the optimization problem
$$\arg\min_M R_3(V, M), \quad \text{s.t. } \mathrm{tr}(M) = h,\ M \preceq I,\ M \in \mathbb{S}_+^D, \qquad (24)$$
where $R_3(V, M)$ is defined as $R_3(V, M) = \alpha\eta(1 + \eta)\,\mathrm{tr}\big(\mathbf{V}^T(\eta I + M)^{-1}\mathbf{V}\big)$. It follows from [3, Theorem 3.1] that the relaxed $R_3$ is jointly convex in V and M. After we obtain the optimal M, the optimal Θ can be approximated using the first h eigenvectors (corresponding to the largest h eigenvalues) of the optimal M.
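Recovering an approximate Θ from the relaxed optimum M is a small eigendecomposition, sketched here under the same assumptions:

```python
import numpy as np

def theta_from_M(M, h):
    """Approximate Theta from the relaxed optimum M of Eq. (24): the h
    eigenvectors of M with the largest eigenvalues, stacked as rows."""
    evals, evecs = np.linalg.eigh(M)        # ascending eigenvalues
    return evecs[:, ::-1][:, :h].T          # top-h eigenvectors as rows
```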
6
4 Experiments
In this section, we evaluate our method on one synthetic data and one real data set. We compare
the proposed Multi-Task Vector Field Learning (MTVFL) algorithm against the following methods:
(a) Single Task Vector Field Learning (STVFL, or PFR), (b) Alternating Structure Optimization
(ASO) and (c) its nonlinear version - Kernelized Alternating Structure Optimization (KASO). The
kernel constructed in KASO uses both labeled data and unlabeled data. Thus it can be viewed as a
semi-supervised MTL method.
4.1 Synthetic Data
[Figure 2 plots omitted: (a) MSE of MTVFL and STVFL as the number of labeled data grows; (b) singular values of the leading principal components.]
Figure 2: (a) Performance of MTVFL and STVFL; (b) The singular value distribution.
We first construct synthetic data to evaluate our method in comparison with the semi-supervised single task learning method (STVFL). We generate two data sets, a Swiss roll and a Swiss roll with a hole, embedded in 3-dimensional Euclidean space. The Swiss roll is generated by the equations $x = t_1 \cos t_1$, $y = t_2$, $z = t_1 \sin t_1$, where $t_1 \in [3\pi/2, 9\pi/2]$ and $t_2 \in [0, 21]$. The Swiss roll with a hole excludes points with $t_1 \in [9, 12]$ and $t_2 \in [9, 14]$. The ground truth function is $f(x, y, z) = t_1$. This test is a semi-supervised multi-task regression problem. We randomly select a number of labeled data in each task and try to predict the value on the other, unlabeled data.
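For reproducibility, the stated equations suggest a generator along the following lines; the rejection sampling for the hole and the RNG handling are our choices.

```python
import numpy as np

def swiss_roll(n=400, hole=False, seed=None):
    """Sample the Swiss roll of Section 4.1: x = t1*cos(t1), y = t2,
    z = t1*sin(t1), with t1 in [3*pi/2, 9*pi/2] and t2 in [0, 21];
    the ground truth is f = t1.  With hole=True, points with
    t1 in [9, 12] and t2 in [9, 14] are excluded."""
    rng = np.random.default_rng(seed)
    pts, t = [], []
    while len(pts) < n:
        t1 = rng.uniform(3 * np.pi / 2, 9 * np.pi / 2)
        t2 = rng.uniform(0.0, 21.0)
        if hole and 9 <= t1 <= 12 and 9 <= t2 <= 14:
            continue                      # rejection sampling for the hole
        pts.append((t1 * np.cos(t1), t2, t1 * np.sin(t1)))
        t.append(t1)
    return np.array(pts), np.array(t)     # samples and ground truth f
```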
Each data set has 400 points. We construct a nearest neighbor graph for each task. The number of nearest neighbors is set to 5 and the manifold dimension is set to 2, as both are 2-dimensional manifolds. The shared subspace dimension is set to 2. The regularization parameters are chosen via cross-validation. We perform 100 independent trials with randomly selected labeled sets. The performance is measured by the mean squared error (MSE). We also tried ASO and KASO; however, they perform poorly since the data is highly nonlinear. The averaged MSE over the two tasks is presented in Fig. 2. We can observe that MTVFL consistently outperforms STVFL, which demonstrates the effectiveness of SSMTL.
We also show the singular value distribution of the ground truth gradient fields. Given the ground truth f, we can compute the gradient field V by taking the derivative of $R_1(f, V)$ with respect to V. Requiring the derivative to vanish, we get the equation $GV = Cf$. After obtaining V, the gradient vector $V_{x_i^l}$ at each point can be obtained as $V_{x_i^l} = T_i^l v_i^l$. We then perform PCA on these vectors, and the singular values of the covariance matrix of the $V_{x_i^l}$ are shown in Fig. 2(b). As can be seen from Fig. 2(b), the number of dominant singular values is 2, which indicates that the ground truth gradient fields concentrate on a 2-dimensional subspace.
4.2 Landmine Detection
We use the landmine data set studied in [20].¹ There are 29 data sets in total, collected from various real landmine fields. Each data example is represented by a 9-dimensional vector with a binary label, which is either 1 for landmine or 0 for clutter. The problem of landmine detection
¹The data set is available at http://www.ee.duke.edu/~lcarin/LandmineData.zip.
7
0.8
1200
MTVFL
STVFL
KASO
ASO
1000
800
Sigular Value
Average AUC on 19 Tasks
0.85
0.75
0.7
600
400
0.65
20
200
30
40
50
60
Number of Labeled Data
70
0
80
(a) Averaged AUC
2
4
6
Principal Component
8
(b) Singular value distribution
Figure 3: (a) Performance of various MTL algorithms; (b) The singular value distribution.
is to predict the labels of unlabeled objects. Among the 29 data sets, 1-15 correspond to relatively
highly foliated regions and 16-29 correspond to bare earth or desert regions. Following [20], we
choose the data sets 1-10 and 16-24 to form 19 tasks.
The basic setup of all the algorithms is as follows. First, we construct a nearest neighbor graph
for each task. The number of nearest neighbors is set to 10 and the manifold dimension is set to 4
empirically. These two parameters are the same for all 19 tasks. The shared subspace dimension
is set to 5 for both MTVFL and ASO, and the shared subspace dimension of KASO is set to 10. All the regularization parameters for the four algorithms are chosen via cross-validation. Note that KASO needs to construct a kernel matrix. We use a Gaussian kernel in KASO, and the Gaussian width is set to the optimal value found by searching within [0.01, 10].
We perform 100 independent trials with randomly selected labeled sets. We measure the performance by the AUC, the area under the Receiver Operating Characteristic (ROC) curve. A large AUC value indicates good classification performance. Since the data have severely unbalanced labels, following [20], we use a special setting that assures there is at least one "1" and one "0" labeled sample in the training set of each task. The AUC averaged over the 19 tasks is presented in Fig. 3(a). As can be seen, MTVFL consistently outperforms the other three algorithms. When the number of labeled data increases, KASO outperforms STVFL. ASO does not improve much when the amount of labeled data increases, which is probably because the data have severely unbalanced labels and the ground truth predictor function is nonlinear. We also show the singular value distribution of the ground truth gradient fields in Fig. 3(b). The computation of the singular values is the same as in Section 4.1. As can be seen from Fig. 3(b), the number of dominant singular values is 5. The percentage of the sum of the first 5 singular values over the total sum is 91.34%, which indicates that the ground truth gradient fields concentrate on a 5-dimensional subspace.
5 Conclusion
In this paper, we propose a new semi-supervised multi-task learning formulation using vector fields.
We show that vector fields can naturally capture the shared differential structure among tasks as well
as the structure of the data manifolds. Our experimental results on synthetic and real data demonstrate the effectiveness of the proposed method. There are several interesting directions suggested
in this work. One is the relation between learning on task parameters and learning on vector fields.
Ultimately, both of them are learning functions. Another one is to apply other assumptions made in
the multi-task learning community into vector field learning, e.g., the cluster assumption.
Acknowledgments
This work was supported by the National Natural Science Foundation of China under Grants
61125203, 61233011 and 90920303, the National Basic Research Program of China (973 Program)
under Grant 2012CB316404, the Fundamental Research Funds for the Central Universities under
grant 2011FZA5022, NIH (R01 LM010730) and NSF (IIS-0953662, CCF-1025177).
References
[1] A. Agarwal, H. Daumé III, and S. Gerber. Learning multiple tasks using manifold regularization. In Advances in Neural Information Processing Systems 23, pages 46-54. 2010.
[2] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817-1853, 2005.
[3] A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning. In Advances in Neural Information Processing Systems 20, pages 25-32. 2008.
[4] B. Bakker and T. Heskes. Task clustering and gating for bayesian multitask learning. Journal of Machine Learning Research, 4:83-99, 2003.
[5] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399-2434, December 2006.
[6] S. Ben-David, J. Gehrke, and R. Schuller. A theoretical framework for learning from a pool of disparate data sources. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 443-449, 2002.
[7] S. Ben-David and R. Schuller. Exploiting task relatedness for multiple task learning. In Conference on Learning Theory, pages 567-580, 2003.
[8] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[9] A. Carlson, J. Betteridge, R. C. Wang, E. R. Hruschka, Jr., and T. M. Mitchell. Coupled semi-supervised learning for information extraction. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, pages 101-110, 2010.
[10] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, 2006.
[11] A. Defant and K. Floret. Tensor Norms and Operator Ideals. North-Holland Mathematics Studies, North-Holland, Amsterdam, 1993.
[12] T. Evgeniou, C. A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615-637, 2005.
[13] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, 3rd edition, 1996.
[14] L. Jacob, F. Bach, and J.-P. Vert. Clustered multi-task learning: A convex formulation. In Advances in Neural Information Processing Systems 21, pages 745-752. 2009.
[15] J. Lafferty and L. Wasserman. Statistical analysis of semi-supervised regression. In Advances in Neural Information Processing Systems 20, pages 801-808, 2007.
[16] J. M. Lee. Introduction to Smooth Manifolds. Springer Verlag, New York, 2nd edition, 2003.
[17] B. Lin, C. Zhang, and X. He. Semi-supervised regression via parallel field regularization. In Advances in Neural Information Processing Systems 24, pages 433-441. 2011.
[18] Q. Liu, X. Liao, and L. Carin. Semi-supervised multitask learning. In Advances in Neural Information Processing Systems 20, pages 937-944. 2008.
[19] F. Wang, X. Wang, and T. Li. Semi-supervised multi-task learning with task regularizations. In Proceedings of the 2009 Ninth IEEE International Conference on Data Mining, pages 562-568. IEEE Computer Society, 2009.
[20] Y. Xue, X. Liao, L. Carin, and B. Krishnapuram. Multi-task learning for classification with dirichlet process priors. Journal of Machine Learning Research, 8:35-63, 2007.
| 4807 |@word multitask:2 trial:2 version:1 norm:3 nd:1 decomposition:1 covariance:1 jacob:1 mention:1 tr:6 liu:2 interestingly:1 outperforms:3 existing:2 com:1 cad:1 gmail:1 written:1 john:1 gv:3 fund:1 n0:5 asu:1 selected:2 plane:1 firstly:1 zhang:5 along:1 x1l:1 constructed:1 differential:8 introduce:3 indeed:1 expected:5 multi:23 decreasing:1 totally:1 becomes:1 estimating:1 underlying:1 notation:2 kind:1 bakker:1 transformation:1 ti:1 demonstrates:2 rm:1 k2:9 control:1 grant:3 t1:8 local:1 sd:2 severely:2 tempe:1 f1t:1 china:3 studied:1 relaxing:1 co:1 zhejiang:1 averaged:3 unique:1 acknowledgment:1 enforces:1 yj:1 practice:1 block:14 lf:3 swiss:6 lcarin:1 pontil:2 area:1 vert:1 projection:1 boyd:1 pre:1 suggest:1 krishnapuram:1 get:3 onto:1 close:4 selection:1 operator:1 unlabeled:8 optimize:1 www:1 map:1 jieping:2 convex:16 landminedata:1 simplicity:1 identifying:1 wasserman:1 spanned:1 orthonormal:1 vandenberghe:1 searching:1 coordinate:1 play:1 duke:1 us:1 element:6 expensive:1 approximated:2 labeled:15 role:1 wang:4 capture:4 hv:1 region:2 wil:6 complexity:1 geodesic:2 ultimately:1 depend:2 rewrite:4 solving:2 predictive:1 c1l:1 basis:1 glii:2 various:2 represented:1 forced:1 heat:1 vlt:3 neighborhood:2 whose:2 widely:1 solve:4 cnl:1 say:1 relax:1 otherwise:3 niyogi:1 think:2 jointly:1 eigenvalue:1 sen:1 propose:5 poorly:1 yjl:1 az:1 olkopf:1 exploiting:1 cluster:2 r1:12 ben:2 object:1 develop:2 measured:1 nearest:7 eq:6 solves:1 concentrate:3 direction:2 hull:1 require:2 generalization:4 clustered:2 secondly:1 extension:1 exploring:1 hold:1 fil:1 ground:7 mapping:2 predict:2 chiyuan:2 earth:1 label:5 largest:2 gehrke:1 aso:8 mit:1 gaussian:2 aim:2 focus:1 consistently:2 zju:1 indicates:3 uli:4 sigkdd:1 cg:1 hangzhou:1 vl:23 kernelized:1 relation:1 wij:7 arg:9 among:5 classification:4 tjl:1 ssl:2 special:1 marginal:2 field:70 construct:5 evgeniou:2 extraction:1 encouraged:1 vvt:1 carin:2 t2:3 simplify:1 belkin:1 randomly:3 simultaneously:5 national:2 mulitple:1 ando:2 detection:2 highly:2 mining:3 golub:1 extreme:2 nl:22 tj:1 amenable:1 predefined:1 ambient:1 orthogonal:1 euclidean:2 gerber:1 abundant:2 theoretical:1 column:6 tp:1 predictor:16 too:1 characterize:2 dependency:1 xue:1 synthetic:6 fundamental:1 international:3 lee:1 vm:1 pool:1 enhance:1 connecting:1 hopkins:1 squared:1 central:1 choose:2 derivative:7 til:21 li:1 north:2 coefficient:1 vi:2 performed:1 try:2 lab:1 characterizes:1 parallel:17 pil:4 roll:7 characteristic:1 correspond:2 directional:2 landmine:5 xli:11 bayesian:1 mc:4 vil:18 worth:3 straight:1 n10:1 sharing:2 definition:2 against:1 naturally:3 mitchell:1 recall:2 knowledge:1 improves:1 formalize:2 actually:2 supervised:15 mtl:9 formulation:7 generality:1 web:1 nonlinear:6 abusing:1 semisupervised:2 fjl:1 ye:2 requiring:2 ccf:1 regularization:12 equality:1 alternating:4 ll:3 during:1 sin:1 width:1 auc:5 essence:1 generalized:1 demonstrate:3 confusion:1 novel:2 nih:1 common:1 mt:1 empirically:1 attached:1 he:2 cambridge:1 rd:4 pm:1 heskes:1 mathematics:1 n0l:3 chapelle:1 similarity:1 base:1 dominant:2 curvature:2 verlag:1 binary:1 life:1 vt:3 yi:1 seen:3 relaxed:1 zip:1 r0:6 semi:14 zien:1 multiple:6 ii:1 smooth:3 ing:1 characterized:1 arvind:1 cross:2 lin:2 bach:1 vnl:2 laplacian:1 prediction:1 regression:5 basic:2 liao:2 kernel:5 represent:2 agarwal:1 addition:1 singular:13 source:1 crucial:1 sch:1 probably:1 subject:1 december:1 lafferty:1 effectiveness:4 structural:1 ee:1 yang:1 noting:2 ideal:1 iii:3 easy:1 xj:2 fm:1 reduce:1 idea:4 pca:2 and0:1 
algebraic:1 york:1 constitute:1 foliated:1 eigenvectors:2 amount:1 clutter:1 generate:1 http:1 percentage:1 nsf:1 discrete:3 write:1 key:3 four:1 verified:2 v1:3 graph:5 relaxation:3 excludes:1 convert:2 sum:2 k2hs:2 fl:19 quadratic:1 arizona:1 bv:2 hilbertschmidt:1 constraint:2 vnm:1 orthogonality:1 flat:1 aspect:1 span:5 min:9 performing:1 relatively:1 slij:2 disconnected:1 jr:1 xlj:6 lm010730:1 equation:3 assures:1 discus:1 r3:17 needed:1 adopted:1 available:1 operation:1 apply:1 observe:1 spectral:1 hruschka:1 bii:1 original:2 assumes:1 denotes:2 cf:5 clustering:1 dirichlet:1 carlson:1 exploit:2 rdl:2 society:1 r01:1 bl:3 tensor:2 objective:5 micchelli:2 diagonal:8 biodesign:1 gradient:13 subspace:19 me:5 manifold:32 argue:1 collected:1 assuming:1 besides:1 index:1 relationship:1 providing:1 ying:1 setup:1 disparate:1 perform:4 xiaofei:1 rn:1 ninth:1 community:1 david:2 required:3 specified:1 z1:2 learned:1 suggested:1 eighth:1 program:2 max:1 including:1 critical:1 natural:5 residual:2 schuller:2 improve:3 coupled:1 bare:1 prior:1 geometric:3 literature:2 tangent:14 discovery:1 embedded:2 loss:3 interesting:1 validation:2 foundation:1 kti:1 affine:1 xp:2 editor:1 share:7 pi:1 row:1 gl:4 last:1 supported:1 institute:1 neighbor:8 taking:1 sparse:1 curve:1 dimension:8 made:1 relatedness:2 ml:9 receiver:1 assumed:3 consuming:1 xi:4 continuous:1 iterative:1 search:1 decade:1 learn:2 concatenates:1 obtaining:1 mse:4 cl:4 constructing:1 domain:2 vj:1 arrow:1 big:1 rh:1 edition:2 nothing:1 fig:7 referred:2 roc:1 cil:2 lie:1 vanish:2 third:1 learns:3 bij:2 theorem:1 specific:1 tnm:1 pfr:1 gating:1 symbol:1 r2:14 flt:1 betteridge:1 dl:14 exists:1 ih:6 importance:1 magnitude:1 hole:2 simply:1 expressed:1 amsterdam:1 sindhwani:1 holland:2 springer:1 covariant:2 khs:1 truth:7 acm:2 viewed:3 goal:1 shared:13 replace:2 feasible:2 change:2 loan:1 except:2 principal:2 total:3 called:1 experimental:2 svd:2 indicating:1 formally:1 select:1 desert:1 unbalanced:2 evaluate:2 argyriou:1 |
Hamming Distance Metric Learning
Mohammad Norouzi
David J. Fleet
Ruslan Salakhutdinov
Departments of Computer Science and Statistics
University of Toronto
[norouzi,fleet,rsalakhu]@cs.toronto.edu
Abstract
Motivated by large-scale multimedia applications we propose to learn mappings
from high-dimensional data to binary codes that preserve semantic similarity.
Binary codes are well suited to large-scale applications as they are storage efficient and permit exact sub-linear kNN search. The framework is applicable
to broad families of mappings, and uses a flexible form of triplet ranking loss.
We overcome discontinuous optimization of the discrete mappings by minimizing
a piecewise-smooth upper bound on empirical loss, inspired by latent structural
SVMs. We develop a new loss-augmented inference algorithm that is quadratic in
the code length. We show strong retrieval performance on CIFAR-10 and MNIST,
with promising classification results using no more than kNN on the binary codes.
1 Introduction
Many machine learning algorithms presuppose the existence of a pairwise similarity measure on
the input space. Examples include semi-supervised clustering, nearest neighbor classification, and
kernel-based methods. When similarity measures are not given a priori, one could adopt a generic
function such as Euclidean distance, but this often produces unsatisfactory results. The goal of
metric learning techniques is to improve matters by incorporating side information, and optimizing
parametric distance functions such as the Mahalanobis distance [7, 12, 30, 34, 36].
Motivated by large-scale multimedia applications, this paper advocates the use of discrete mappings,
from input features to binary codes. Compact binary codes are remarkably storage efficient, allowing one to store massive datasets in memory. The Hamming distance, a natural similarity measure
on binary codes, can be computed with just a few machine instructions per comparison. Further, it
has been shown that one can perform exact nearest neighbor search in Hamming space significantly
faster than linear search, with sublinear run-times [15, 25]. By contrast, retrieval based on Mahalanobis distance requires approximate nearest neighbor (ANN) search, for which state-of-the-art
methods (e.g., see [18, 23]) do not always perform well, especially with massive, high-dimensional
datasets when storage overheads and distance computations become prohibitive.
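To make this speed argument concrete, here is a minimal illustration (ours, not the paper's code) of how the Hamming distance between two packed binary codes reduces to an XOR followed by a population count:

```python
def hamming_distance(code_a: int, code_b: int) -> int:
    """Hamming distance between two binary codes packed into Python ints:
    XOR marks the differing bits, then we count them (a popcount)."""
    return bin(code_a ^ code_b).count("1")

# Two 6-bit codes that differ in exactly one bit position:
a = 0b101101
b = 0b100101
assert hamming_distance(a, b) == 1
```

On modern CPUs the same computation compiles to roughly one XOR and one popcount instruction per machine word, which is what makes exhaustive and multi-index search over binary codes so fast.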
Most approaches to discrete (binary) embeddings have focused on preserving the metric (e.g. Euclidean) structure of the input data, the canonical example being locality-sensitive hashing (LSH)
[4, 17]. Based on random projections, LSH and its variants (e.g., [26]) provide guarantees that metric
similarity is preserved for sufficiently long codes. To find compact codes, recent research has turned
to machine learning techniques that optimize mappings for specific datasets (e.g., [20, 28, 29, 32, 3]).
However, most such methods aim to preserve Euclidean structure (e.g. [13, 20, 35]).
In metric learning, by comparison, the goal is to preserve semantic structure based on labeled attributes or parameters associated with training exemplars. There are papers on learning binary hash
functions that preserve semantic similarity [29, 28, 32, 24], but most have only considered ad hoc
datasets and uninformative performance measures, for which it is difficult to judge performance with
anything but the qualitative appearance of retrieval results. The question of whether or not it is possible to learn hash functions capable of preserving complex semantic structure, with high fidelity,
has remained unanswered.
To address this issue, we introduce a framework for learning a broad class of binary hash functions
based on a triplet ranking loss designed to preserve relative similarity (c.f. [11, 5]). While certainly
useful for preserving metric structure, this loss function is very well suited to the preservation of
semantic similarity. Notably, it can be viewed as a form of local ranking loss. It is more flexible
than the pairwise hinge loss of [24], and is shown below to produce superior hash functions.
Our formulation is inspired by latent SVM [10] and latent structural SVM [37] models, and it generalizes the minimal loss hashing (MLH) algorithm of [24]. Accordingly, to optimize hash function
parameters we formulate a continuous upper-bound on empirical loss, with a new form of loss-augmented inference designed for efficient optimization with the proposed triplet loss on the Hamming space. To our knowledge, this is one of the most general frameworks for learning a broad class
of hash functions. In particular, many previous loss-based techniques [20, 24] are not capable of
optimizing mappings that involve non-linear projections, e.g., by neural nets.
Our experiments indicate that the framework is capable of preserving semantic structure on challenging datasets, namely, MNIST [1] and CIFAR-10 [19]. We show that k-nearest neighbor (kNN)
search on the resulting binary codes retrieves items that bear remarkable similarity to a given query
item. To show that the binary representation is rich enough to capture salient semantic structure,
as is common in metric learning, we also report classification performance on the binary codes.
Surprisingly, on these datasets, simple kNN classifiers in Hamming space are competitive with sophisticated discriminative classifiers, including SVMs and neural networks. An important appeal of
our approach is the scalability of kNN search on binary codes to billions of data points, and of kNN
classification to millions of class labels.
2 Formulation
The task is to learn a mapping b(x) that projects p-dimensional real-valued inputs x ∈ R^p onto q-dimensional binary codes h ∈ H ≡ {−1, 1}^q, while preserving some notion of similarity. This mapping, referred to as a hash function, is parameterized by a real-valued vector w as
b(x; w) = sign(f(x; w)) ,   (1)
where sign(·) denotes the element-wise sign function, and f(x; w): R^p → R^q is a real-valued transformation. Different forms of f give rise to different families of hash functions:
1. A linear transform f(x) = Wx, where W ∈ R^{q×p} and w ≡ vec(W), is the simplest and most well-studied case [4, 13, 24, 33]. Under this mapping the k-th bit is determined by a hyperplane in the input space whose normal is given by the k-th row of W.¹
2. In [35], linear projections are followed by an element-wise cosine transform, i.e. f(x) = cos(Wx). For such mappings the bits correspond to stripes of +1 and −1 regions, oriented parallel to the corresponding hyperplanes, in the input space.
3. Kernelized hash functions [20, 21].
4. More complex hash functions are obtained with multilayer neural networks [28, 32]. For example, a two-layer network with a p′-dimensional hidden layer and weight matrices W1 ∈ R^{p′×p} and W2 ∈ R^{q×p′} can be expressed as f(x) = tanh(W2 tanh(W1 x)), where tanh(·) is the element-wise hyperbolic tangent function.
Our Hamming distance metric learning framework applies to all of the above families of hash functions. The only restriction is that f must be differentiable with respect to its parameters, so that one
is able to compute the Jacobian of f (x; w) with respect to w.
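As a concrete illustration of the first and fourth families above, the following numpy sketch (our own; the dimensions are arbitrary placeholders, not values from the paper) implements b(x; w) = sign(f(x; w)):

```python
import numpy as np

def b_linear(x, W):
    """Family 1: f(x) = Wx, so each bit is the side of a hyperplane."""
    return np.sign(W @ x)

def b_two_layer(x, W1, W2):
    """Family 4: f(x) = tanh(W2 tanh(W1 x)), a two-layer network."""
    return np.sign(np.tanh(W2 @ np.tanh(W1 @ x)))

rng = np.random.default_rng(0)
p, p_hidden, q = 784, 512, 64        # input dim, hidden dim, code length
x = rng.standard_normal(p)
W1 = rng.standard_normal((p_hidden, p))
W2 = rng.standard_normal((q, p_hidden))
h = b_two_layer(x, W1, W2)           # a code in {-1, +1}^q
# (np.sign maps exact zeros to 0, a measure-zero event ignored in this sketch.)
```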
2.1 Loss functions
The choice of loss function is crucial for learning good similarity measures. To this end, most existing supervised binary hashing techniques [13, 22, 24] formulate learning objectives in terms of
pairwise similarity, where pairs of inputs are labelled as either similar or dissimilar. Similaritypreserving hashing aims to ensure that Hamming distances between binary codes for similar (dissimilar) items are small (large). For example, MLH [24] uses a pairwise hinge loss function. For
two binary codes h, g ∈ H with Hamming distance² ‖h−g‖_H, and a similarity label s ∈ {0, 1}, the pairwise hinge loss is defined as:
ℓ_pair(h, g, s) = [ ‖h−g‖_H − ρ + 1 ]_+   for s = 1 (similar)
              = [ ρ − ‖h−g‖_H + 1 ]_+   for s = 0 (dissimilar) ,   (2)
where [α]_+ ≡ max(α, 0), and ρ is a Hamming distance threshold that separates similar from dissimilar codes. This loss incurs zero cost when a pair of similar inputs map to codes that differ by less than ρ bits. The loss is zero for dissimilar items whose Hamming distance is more than ρ bits.
[Footnote 1: For presentation clarity, in linear and nonlinear cases, we omit bias terms. They are incorporated by adding one dimension to the input vectors, and to the hidden layers of neural networks, with a fixed value of one.]
One problem with such loss functions is that finding a suitable threshold ρ with cross-validation is slow. Furthermore, for many problems one cares more about the relative magnitudes of pairwise distances than their precise numerical values. So, constraining pairwise Hamming distances over all pairs of codes with a single threshold is overly restrictive. More importantly, not all datasets are amenable to labeling input pairs as similar or dissimilar. One way to avoid some of these problems is to define loss in terms of relative similarity. Such loss functions have been used in metric learning [5, 11], and, as shown below, they are also naturally suited to Hamming distance metric learning.
To define relative similarity, we assume that the training data includes triplets of items (x, x⁺, x⁻), such that the pair (x, x⁺) is more similar than the pair (x, x⁻). Our goal is to learn a hash function b such that b(x) is closer to b(x⁺) than to b(x⁻) in Hamming distance. Accordingly, we propose a ranking loss on the triplet of binary codes (h, h⁺, h⁻), obtained from b applied to (x, x⁺, x⁻):
ℓ_triplet(h, h⁺, h⁻) = [ ‖h−h⁺‖_H − ‖h−h⁻‖_H + 1 ]_+ .   (3)
This loss is zero when the Hamming distance between the more-similar pair, ‖h−h⁺‖_H, is at least one bit smaller than the Hamming distance between the less-similar pair, ‖h−h⁻‖_H. This loss function is more flexible than the pairwise loss function ℓ_pair, as it can be used to preserve
rankings among similar items, for example based on Euclidean distance, or perhaps using path
distance between category labels within a phylogenetic tree.
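A small sketch (ours) makes the two losses of Eqs. 2 and 3 concrete; rho is the threshold hyperparameter of the pairwise loss:

```python
import numpy as np

def hamming(h, g):
    """Hamming distance between two +/-1 code vectors."""
    return int(np.sum(h != g))

def pairwise_hinge_loss(h, g, s, rho):
    """Eq. (2): hinge on the Hamming distance around threshold rho."""
    d = hamming(h, g)
    return max(d - rho + 1, 0) if s == 1 else max(rho - d + 1, 0)

def triplet_ranking_loss(h, h_pos, h_neg):
    """Eq. (3): zero iff the positive pair is at least one bit closer."""
    return max(hamming(h, h_pos) - hamming(h, h_neg) + 1, 0)
```

Note that the triplet loss has no threshold to tune, which is one of the practical advantages discussed above.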
3 Optimization
Given a training set of triplets, D = {(x_i, x_i⁺, x_i⁻)}_{i=1}^n, our objective is the sum of the empirical loss and a simple regularizer on the vector of unknown parameters w:
L(w) = ∑_{(x,x⁺,x⁻)∈D} ℓ_triplet(b(x; w), b(x⁺; w), b(x⁻; w)) + (λ/2) ‖w‖₂² .   (4)
This objective is discontinuous and non-convex. The hash function is a discrete mapping and empirical loss is piecewise constant. Hence optimization is very challenging. We cannot overcome the
non-convexity, but the problems owing to the discontinuity can be mitigated through the construction
of a continuous upper bound on the loss.
The upper bound on loss that we adopt is inspired by previous work on latent structural SVMs [37].
The key observation that relates our Hamming distance metric learning framework to structured
prediction is as follows,
b(x; w) = sign(f(x; w)) = argmax_{h∈H} hᵀ f(x; w) ,   (5)
where H ≡ {−1, +1}^q. The argmax on the RHS effectively means that for dimensions of f(x; w) with positive values, the optimal code should take on values +1, and when elements of f(x; w) are negative the corresponding bits of the code should be −1. This is identical to the sign function.
3.1 Upper bound on empirical loss
The upper bound on loss that we exploit for learning hash functions takes the following form:
ℓ_triplet(b(x; w), b(x⁺; w), b(x⁻; w)) ≤
  max_{g,g⁺,g⁻} { ℓ_triplet(g, g⁺, g⁻) + gᵀ f(x; w) + g⁺ᵀ f(x⁺; w) + g⁻ᵀ f(x⁻; w) }
  − max_h { hᵀ f(x; w) } − max_{h⁺} { h⁺ᵀ f(x⁺; w) } − max_{h⁻} { h⁻ᵀ f(x⁻; w) } ,   (6)
[Footnote 2: The Hamming norm ‖v‖_H is defined as the number of non-zero entries of vector v.]
where g, g⁺, g⁻, h, h⁺, and h⁻ are constrained to be q-dimensional binary vectors. To prove the inequality in Eq. 6, note that if the first term on the RHS were maximized³ by (g, g⁺, g⁻) = (b(x), b(x⁺), b(x⁻)), then using Eq. 5, it is straightforward to show that Eq. 6 would become an equality. In all other cases of (g, g⁺, g⁻) which maximize the first term, the RHS can only be as large or larger than when (g, g⁺, g⁻) = (b(x), b(x⁺), b(x⁻)), hence the inequality holds.
Summing the upper bound instead of the loss in Eq. 4 yields an upper bound on the regularized empirical loss in Eq. 4. Importantly, the resulting bound is easily shown to be continuous and piecewise smooth in w as long as f is continuous in w. The upper bound of Eq. 6 is a generalization of a bound introduced in [24] for the linear case, f(x) = Wx. In particular, when f is linear in w, the bound on regularized empirical loss becomes piecewise linear and convex-concave. While the bound in Eq. 6 is more challenging to optimize than the bound in [24], it allows us to learn hash functions based on non-linear functions f, e.g. neural networks. While the bound in [24] was defined for ℓ_pair-type loss functions and pairwise similarity labels, the bound here applies to the more flexible class of triplet loss functions.
3.2 Loss-augmented inference
To use the upper bound in Eq. 6 for optimization, we must be able to find the binary codes given by
(ĝ, ĝ⁺, ĝ⁻) = argmax_{(g,g⁺,g⁻)} { ℓ_triplet(g, g⁺, g⁻) + gᵀ f(x) + g⁺ᵀ f(x⁺) + g⁻ᵀ f(x⁻) } .   (7)
In the structured prediction literature this maximization is called loss-augmented inference. The challenge stems from the 2^{3q} possible binary codes over which one has to maximize the RHS. Fortunately, we can show that this loss-augmented inference problem can be solved efficiently for the class of triplet loss functions that depend only on the value of
d(g, g⁺, g⁻) ≡ ‖g−g⁺‖_H − ‖g−g⁻‖_H .
Importantly, such loss functions do not depend on the specific binary codes, but rather just the differences. Further, note that d(g, g⁺, g⁻) can take on only 2q+1 possible values, since it is an integer between −q and +q. Clearly the triplet ranking loss only depends on d since
ℓ_triplet(g, g⁺, g⁻) = ℓ′(d(g, g⁺, g⁻)) , where ℓ′(α) = [ α + 1 ]_+ .   (8)
performed in time O(q 2 ). To prove this, first consider the case d(g, g+ , g? ) = m, where m is an
integer between ?q and q. In this case we can replace the loss augmented inference problem with
n
o
T
+T
+
?T
?
` 0 (m) + max
g
f
(x)
+
g
f
(x
)
+
g
f
(x
)
s.t. d(g, g+ , g? ) = m .
(9)
+
?
g,g ,g
One can solve Eq. 9 for each possible value of m. It is straightforward to see that the largest of those
2q + 1 maxima is the solution to Eq. 7. Then, what remains for us is to solve Eq. 9.
To solve Eq. 9, consider the i-th bit for each of the three codes, i.e. a = g[i], b = g⁺[i], and c = g⁻[i], where v[i] denotes the i-th element of vector v. There are 8 ways to select a, b and c, but no matter what values they take on, they can only change the value of d(g, g⁺, g⁻) by −1, 0, or +1. Accordingly, let e_i ∈ {−1, 0, +1} denote the effect of the i-th bits on d(g, g⁺, g⁻). For each value of e_i, we can easily compute the maximal contribution of (a, b, c) to Eq. 9 by:
cont(i, e_i) = max_{a,b,c} { a f(x)[i] + b f(x⁺)[i] + c f(x⁻)[i] }   (10)
such that a, b, c ∈ {−1, +1} and ‖a−b‖_H − ‖a−c‖_H = e_i .
Therefore, to solve Eq. 9, we aim to select values for e_i, for all i, such that ∑_{i=1}^q e_i = m and ∑_{i=1}^q cont(i, e_i) is maximized. This can be solved for any m using a dynamic programming algorithm, similar to knapsack, in O(q²). Finally, we choose the m that maximizes Eq. 9 and set the bits to the configurations that maximized cont(i, e_i).
[Footnote 3: For presentation clarity we will sometimes drop the dependence of f and b on w, and write b(x) and f(x).]
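The dynamic program just described can be sketched as follows (our own illustration, not the authors' released code; it returns one maximizer of Eq. 7 for the triplet ranking loss):

```python
import numpy as np

def loss_augmented_inference(fx, fxp, fxn):
    """O(q^2) loss-augmented inference (Eqs. 7-10). fx, fxp, fxn are the
    real-valued projections f(x), f(x+), f(x-), each of length q."""
    q = len(fx)
    # Per-bit best contribution cont(i, e) of Eq. (10), with its argmax.
    cont = []
    for i in range(q):
        best = {}
        for a in (-1, 1):
            for b in (-1, 1):
                for c in (-1, 1):
                    e = int(a != b) - int(a != c)   # effect on d(g, g+, g-)
                    val = a * fx[i] + b * fxp[i] + c * fxn[i]
                    if e not in best or val > best[e][0]:
                        best[e] = (val, (a, b, c))
        cont.append(best)
    # Knapsack-style DP: value[m] = best total contribution with effect sum m.
    value, back = {0: 0.0}, []
    for i in range(q):
        nvalue, nback = {}, {}
        for m, v in value.items():
            for e, (val, abc) in cont[i].items():
                if m + e not in nvalue or v + val > nvalue[m + e]:
                    nvalue[m + e] = v + val
                    nback[m + e] = (m, abc)
        value = nvalue
        back.append(nback)
    # Add the loss term l'(m) = [m + 1]_+ and pick the best m (Eq. 9).
    m = max(value, key=lambda mm: value[mm] + max(mm + 1, 0))
    g, gp, gn = np.empty(q), np.empty(q), np.empty(q)
    for i in reversed(range(q)):                    # decode via backpointers
        m, (a, b, c) = back[i][m]
        g[i], gp[i], gn[i] = a, b, c
    return g, gp, gn
```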
3.3 Perceptron-like learning
Our learning algorithm is a form of stochastic gradient descent, where in the t-th iteration we sample a triplet (x, x⁺, x⁻) from the dataset, and then take a step in the direction that decreases the upper bound on the triplet's loss in Eq. 6. To this end, we randomly initialize w^(0). Then, at each iteration t+1, given w^(t), we use the following procedure to update the parameters, w^(t+1):
1. Select a random triplet (x, x⁺, x⁻) from dataset D.
2. Compute (ĥ, ĥ⁺, ĥ⁻) = (b(x; w^(t)), b(x⁺; w^(t)), b(x⁻; w^(t))) using Eq. 5.
3. Compute (ĝ, ĝ⁺, ĝ⁻), the solution to the loss-augmented inference problem in Eq. 7.
4. Update model parameters using
w^(t+1) = w^(t) + η [ (∂f(x)/∂w)(ĥ − ĝ) + (∂f(x⁺)/∂w)(ĥ⁺ − ĝ⁺) + (∂f(x⁻)/∂w)(ĥ⁻ − ĝ⁻) − λ w^(t) ] ,
where η is the learning rate, and ∂f(x)/∂w ≡ ∂f(x; w)/∂w |_{w=w^(t)} ∈ R^{|w|×q} is the transpose of the Jacobian matrix, where |w| is the number of parameters.
This update rule can be seen as gradient descent in the upper bound of the regularized empirical loss.
Although the upper bound in Eq. 6 is not differentiable at isolated points (owing to the max terms),
in our experiments we find that this update rule consistently decreases both the upper bound and the
actual regularized empirical loss L(w).
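For the linear family f(x) = Wx, the Jacobian contraction in step 4 reduces to outer products, so a single update can be written in a few lines (our sketch; `loss_augmented_inference` is the hypothetical helper from the previous sketch, and the step size and regularizer are placeholder values):

```python
import numpy as np

def sgd_step(W, x, xp, xn, lr=1e-3, lam=1e-4):
    """One perceptron-like update of Sec. 3.3 for f(x) = Wx, where
    (df(x)/dw)(h - g) becomes the outer product (h - g) x^T."""
    fx, fxp, fxn = W @ x, W @ xp, W @ xn
    h, hp, hn = np.sign(fx), np.sign(fxp), np.sign(fxn)     # Eq. (5)
    g, gp, gn = loss_augmented_inference(fx, fxp, fxn)      # Eq. (7)
    grad = (np.outer(h - g, x) + np.outer(hp - gp, xp)
            + np.outer(hn - gn, xn) - lam * W)
    return W + lr * grad
```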
4 Asymmetric Hamming distance
When Hamming distance is used to score and retrieve the nearest neighbors to a given query, there is
a high probability of a tie, where multiple items are equidistant from the query in Hamming space.
To break ties and improve the similarity measure, previous work suggests the use of an asymmetric
Hamming (AH) distance [9, 14]. With an AH distance, one stores dataset entries as binary codes (for
storage efficiency) but the queries are not binarized. An asymmetric distance function is therefore
defined on a real-valued query vector, v ∈ R^q, and a database binary code, h ∈ H. Computing AH
distance is slightly less efficient than Hamming distance, and efficient retrieval algorithms, such as
[25], are not directly applicable. Nevertheless, the AH distance can also be used to re-rank items
retrieved using Hamming distance, with a negligible increase in run-time. To improve efficiency
further when there are many codes to be re-ranked, AH distance from the query to binary codes can
be pre-computed for each 8 or 16 consecutive bits, and stored in a query-specific lookup table.
In this work, we use the following asymmetric Hamming distance function
AH(h, v; s) = (1/4) ‖ h − tanh(Diag(s) v) ‖₂² ,   (11)
where s ∈ R^q is a vector of scaling parameters that control the slope of the hyperbolic tangent applied to different bits; Diag(s) is a diagonal matrix with the elements of s on its diagonal. As the scaling
factors in s approach infinity, AH and Hamming distances become identical. Here we use the AH
distance between a database code b(x′) and the real-valued projection for the query f(x). Based
on our validation sets, the AH distance of Eq. 11 is relatively insensitive to values in s. For the
experiments we simply use s to scale the average absolute values of the elements of f (x) to be 0.25.
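A minimal sketch (ours) of Eq. 11, e.g. for re-ranking a short list of Hamming-retrieved candidates:

```python
import numpy as np

def asym_hamming(h, v, s):
    """Asymmetric Hamming distance of Eq. (11).
    h: +/-1 database code; v = f(x): real-valued query projection;
    s: per-bit scaling vector, so s * v computes Diag(s) v elementwise."""
    return 0.25 * np.sum((h - np.tanh(s * v)) ** 2)
```

As s grows, tanh(s·v) approaches sign(v), so each differing bit contributes (1/4)·4 = 1 and AH reduces to the ordinary Hamming distance, matching the limiting behavior described above.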
5 Implementation details
In practice, the basic learning algorithm described in Sec. 3 is implemented with several modifications. First, instead of using a single training triplet to estimate the gradients, we use mini-batches
comprising 100 triplets and average the gradient. Second, for each triplet (x, x⁺, x⁻), we replace x⁻ with a 'hard' example by selecting an item among all negative examples in the mini-batch that is
closest in the current Hamming distance to b(x). By harvesting hard negative examples, we ensure
that the Hamming constraints for the triplets are not too easily satisfied. Third, to find good binary
codes, we encourage each bit, averaged over the training data, to be mean-zero before quantization
(motivated in [35]). This is accomplished by adding the following penalty to the objective function:
(1/2) ‖ mean_x f(x; w) ‖₂² ,   (12)
[Figure 1 shows two precision@k plots for MNIST. Left-panel legend: two-layer net with triplet loss; two-layer net with pairwise loss; linear with triplet loss; linear with pairwise loss [24]. Right-panel legend: 128-, 64-, and 32-bit linear triplet codes, and Euclidean distance.]
Figure 1: MNIST precision@k: (left) four methods (with 32-bit codes); (right) three code lengths.
where mean(f (x; w)) denotes the mean of f (x; w) across the training data. In our implementation,
for efficiency, the stochastic gradient of Eq. 12 is computed per mini-batch using the Jacobian matrix
in the update rule (see Sec. 3.3). Empirically, we observe that including this term in the objective
improves the quality of binary codes, especially with the triplet ranking loss.
We use a heuristic to adapt learning rates, known as bold driver [2]. For each mini-batch we evaluate
the learning objective before the parameters are updated. As long as the objective is decreasing we
slowly increase the learning rate η, but when the objective increases, η is halved. In particular, after every 25 epochs, if the objective, averaged over the last 25 epochs, decreased, we increase η by 5%, otherwise we decrease η by 50%. We also used a momentum term; i.e. the previous gradient update
is scaled by 0.9 and then added to the current gradient.
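The schedule can be summarized in one function (a sketch of our reading of the text above; the 25-epoch window and the 5%/50% factors come from the preceding paragraph):

```python
def bold_driver(lr, prev_mean_obj, curr_mean_obj, up=1.05, down=0.5):
    """Adapt the learning rate every 25 epochs from averaged objectives."""
    return lr * up if curr_mean_obj < prev_mean_obj else lr * down
```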
All experiments are run on a GPU for 2, 000 passes through the datasets. The training time for
our current implementation is under 4 hours of GPU time for most of our experiments. The two
exceptions involve CIFAR-10 with 6400-D inputs and relatively long code-lengths of 256 and 512
bits, for which the training times are approximately 8 and 16 hours, respectively.
6 Experiments
Our experiments evaluate Hamming distance metric learning using two families of hash functions,
namely, linear transforms and multilayer neural networks (see Sec. 2). For each, we examine two
loss functions, the pairwise hinge loss (Eq. 2) and the triplet ranking loss (Eq. 3).
Experiments are conducted on two well-known image corpora, MNIST [1] and CIFAR-10 [19].
Ground-truth similarity labels are derived from class labels; items from the same class are deemed
similar4 . This definition of similarity ignores intra-class variations and the existence of subcategories, e.g. styles of handwritten fours, or types of airplanes. Nevertheless, we use these coarse
similarity labels to evaluate our framework. To that end, using items from the test set as queries,
we report precision@k, i.e. the fraction of k closest items in Hamming distance that are same-class
neighbors. We also show kNN retrieval results for qualitative inspection. Finally, we report Hamming (H) and asymmetric Hamming (AH) kNN classification rates on the test sets.
Datasets. The MNIST [1] digit dataset contains 60,000 training and 10,000 test images (28×28 pixels) of ten handwritten digits (0 to 9). Of the 60,000 training images, we set aside 5,000 for validation. CIFAR-10 [19] comprises 50,000 training and 10,000 test color images (32×32 pixels).
Each image belongs to one of 10 classes, namely airplane, automobile, bird, cat, deer, dog, frog,
horse, ship, and truck. The large variability in scale, viewpoint, illumination, and background clutter
poses a significant challenge for classification. Instead of using raw pixel values, we borrow a bagof-words representation from Coates et al [6]. Its 6400-D feature vector comprises one 1600-bin
histogram per image quadrant, the codewords of which are learned from 6×6 image patches. Such high-dimensional inputs are challenging for learning similarity-preserving hash functions. Of the 50,000 training images, we set aside 5,000 for validation.
MNIST: We optimize binary hash functions, mapping raw MNIST images to 32, 64, and 128-bit
codes. For each test code we find the k closest training codes using Hamming distance, and report
precision@k in Fig. 1. As one might expect, the non-linear mappings⁵ significantly outperform linear mappings. We also find that the triplet loss (Eq. 3) yields better performance than the pairwise
[Footnote 4: Training triplets are created by taking two items from the same class, and one item from a different class.]
[Footnote 5: The two-layer neural nets for Fig. 1 and Table 1 had 1 hidden layer with 512 units. Weights were initialized randomly, and the Jacobian with respect to the parameters was computed with the backprop algorithm [27].]
Distance      | Hash function, Loss            | kNN   | 32 bits | 64 bits | 128 bits
Hamming       | Linear, pairwise hinge [24]    | 2 NN  | 4.66    | 3.16    | 2.61
Hamming       | Linear, triplet ranking        | 2 NN  | 4.44    | 3.06    | 2.44
Hamming       | Two-layer Net, pairwise hinge  | 30 NN | 1.50    | 1.45    | 1.44
Hamming       | Two-layer Net, triplet ranking | 30 NN | 1.45    | 1.38    | 1.27
Asym. Hamming | Linear, pairwise hinge         | 3 NN  | 4.30    | 2.78    | 2.46
Asym. Hamming | Linear, triplet ranking        | 3 NN  | 3.88    | 2.90    | 2.51
Asym. Hamming | Two-layer Net, pairwise hinge  | 30 NN | 1.50    | 1.36    | 1.35
Asym. Hamming | Two-layer Net, triplet ranking | 30 NN | 1.45    | 1.29    | 1.20

Baseline                                | Error
Deep neural nets with pre-training [16] | 1.2
Large margin nearest neighbor [34]      | 1.3
RBF-kernel SVM [8]                      | 1.4
Neural network [31]                     | 1.6
Euclidean 3NN                           | 2.89

Table 1: Classification error rates on MNIST test set.
loss (Eq. 2). The sharp drop in precision at k = 6000 is a consequence of the fact that each digit in
MNIST has approximately 6000 same-class neighbors. Fig. 1 (right) shows how precision improves
as a function of the binary code length. Notably, kNN retrieval, for k > 10 and all code lengths,
yields higher precision than Euclidean NN on the 784-D input space. Further, note that these Euclidean results effectively provide an upper bound on the performance one would expect with existing hashing methods that preserve Euclidean distances (e.g., [13, 17, 20, 35]).
One can also evaluate the fidelity of the Hamming space representation in terms of classification
performance from the Hamming codes. To focus on the quality of the hash functions, and the speed
of retrieval for large-scale multimedia datasets, we use a kNN classifier; i.e. we just use the retrieved
neighbors to predict class labels for each test code. Table 1 reports classification error rates using
kNN based on Hamming and asymmetric Hamming distance. Non-linear mappings, even with
only 32-bit codes, significantly outperform linear mappings (e.g. with 128 bits). The ranking hinge
loss also improves upon the pairwise hinge loss, even though the former has no hyperparameters.
Table 1 also indicates that AH distance provides a modest boost in performance. For each method
the parameter k in the kNN classifier is chosen based on the validation set.
For baseline comparison, Table 1 reports state-of-the-art performance on MNIST with sophisticated
discriminative classifiers (excluding those using examplar deformations and convolutional nets).
Despite the simplicity of a kNN classifier, our model achieves error rates of 1.29% and 1.20% using
64- and 128-bit codes. This is compared to 1.4% with RBF-SVM [8], and to 1.6%, the best published
neural net result for this version of the task [31]. Our model also outperforms the metric learning
approach of [34], and is competitive with the best known Deep Belief Network [16]; although they
used unsupervised pre-training while we do not.
The above results show that our Hamming distance metric learning framework can preserve sufficient semantic similarity, to the extent that Hamming kNN classification becomes competitive with
state-of-the-art discriminative methods. Nevertheless, our method is not solely a classifier, and it
can be used within many other machine learning algorithms.
In comparison, another hashing technique called iterative quantization (ITQ) [13] achieves 8.5%
error on MNIST and 78% accuracy on CIFAR-10. Our method compares favorably, especially on
MNIST. However, ITQ [13] inherently binarizes the outcome of a supervised classifier (Canonical
Correlation Analysis with labels), and does not explicitly learn a similarity measure on the input
features based on pairs or triplets.
CIFAR-10: On CIFAR-10 we optimize hash functions for 64, 128, 256, and 512-bit codes. The
supplementary material includes precision@k curves, showing superior quality of hash functions
learned by the ranking loss compared to the pairwise loss. Here, in Fig. 2, we depict the quality
of retrieval results for two queries, showing the 16 nearest neighbors using 256-bit codes, 64-bit
codes (both learned with the triplet ranking loss), and Euclidean distance in the original 6400-D
feature space. The number of class-based retrieval errors is much smaller in Hamming space, and
the similarity in visual appearance is also superior. More such results, including failure modes, are
shown in the supplementary material.
[Figure 2 panels, top to bottom: Hamming on 256-bit codes; Hamming on 64-bit codes; Euclidean distance.]
Figure 2: Retrieval results for two CIFAR-10 test images using Hamming distance on 256-bit and
64-bit codes, and Euclidean distance on bag-of-words features. Red rectangles indicate mistakes.
Hashing, Loss               | Distance | kNN  | 64 bits | 128 bits | 256 bits | 512 bits
Linear, pairwise hinge [24] | H        | 7 NN | 72.2    | 72.8     | 73.8     | 74.6
Linear, pairwise hinge      | AH       | 8 NN | 72.3    | 73.5     | 74.3     | 74.9
Linear, triplet ranking     | H        | 2 NN | 75.1    | 75.9     | 77.1     | 77.9
Linear, triplet ranking     | AH       | 2 NN | 75.7    | 76.8     | 77.5     | 78.0

Baseline                  | Accuracy
One-vs-all linear SVM [6] | 77.9
Euclidean 3NN             | 59.3

Table 2: Recognition accuracy on the CIFAR-10 test set (H = Hamming, AH = Asym. Hamming).
Table 2 reports classification performance (showing accuracy instead of error rates for consistency
with previous papers). Euclidean NN on the 6400-D input features yields under 60% accuracy,
while kNN with the binary codes obtains 76–78%. As with MNIST data, this level of performance is comparable to one-vs-all SVMs applied to the same features [6]. Not surprisingly, training fully-connected neural nets on 6400-dimensional features with only 50,000 training examples is
challenging and susceptible to over-fitting, hence the results of neural nets on CIFAR-10 were not
competitive. Previous work [19] had some success training convolutional neural nets on this dataset.
Note that our framework can easily incorporate convolutional neural nets, which are intuitively better suited to the intrinsic spatial structure of natural images.
7 Conclusion
We present a framework for Hamming distance metric learning, which entails learning a discrete
mapping from the input space onto binary codes. This framework accommodates different families
of hash functions, including quantized linear transforms, and multilayer neural nets. By using a
piecewise-smooth upper bound on a triplet ranking loss, we optimize hash functions that are shown
to preserve semantic similarity on complex datasets. In particular, our experiments show that a
simple kNN classifier on the learned binary codes is competitive with sophisticated discriminative
classifiers. While other hashing papers have used CIFAR or MNIST, none report kNN classification
performance, often because it has been thought that the bar established by state-of-the-art classifiers
is too high. On the contrary our kNN classification performance suggests that Hamming space can
be used to represent complex semantic structures with high fidelity. One appeal of this approach is
the scalability of kNN search on binary codes to billions of data points, and of kNN classification to
millions of class labels.
References
[1] http://yann.lecun.com/exdb/mnist/.
[2] R. Battiti. Accelerated backpropagation learning: Two optimization methods. Complex Systems, 1989.
[3] A. Bergamo, L. Torresani, and A. Fitzgibbon. Picodes: Learning a compact code for novel-category
recognition. NIPS, 2011.
[4] M. Charikar. Similarity estimation techniques from rounding algorithms. STOC, 2002.
[5] G. Chechik, V. Sharma, U. Shalit, and S. Bengio. Large scale online learning of image similarity through
ranking. JMLR, 2010.
[6] A. Coates, H. Lee, and A. Ng. An analysis of single-layer networks in unsupervised feature learning.
AISTATS, 2011.
[7] J. Davis, B. Kulis, P. Jain, S. Sra, and I. Dhillon. Information-theoretic metric learning. ICML, 2007.
[8] D. Decoste and B. Schölkopf. Training invariant support vector machines. Machine Learning, 2002.
[9] W. Dong, M. Charikar, and K. Li. Asymmetric distance estimation with sketches for similarity search in
high-dimensional spaces. SIGIR, 2008.
[10] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively
trained part-based models. IEEE Trans. PAMI, 2010.
[11] A. Frome, Y. Singer, F. Sha, and J. Malik. Learning globally-consistent local distance functions for
shape-based image retrieval and classification. ICCV, 2007.
[12] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. NIPS,
2004.
[13] Y. Gong and S. Lazebnik. Iterative quantization: A procrustean approach to learning binary codes. CVPR,
2011.
[14] A. Gordo and F. Perronnin. Asymmetric distances for binary embeddings. CVPR, 2011.
[15] D. Greene, M. Parnas, and F. Yao. Multi-index hashing for information retrieval. FOCS, 1994.
[16] G. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science,
2006.
[17] P. Indyk and R. Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality.
STOC, 1998.
[18] H. Jégou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. IEEE Trans.
PAMI, 2011.
[19] A. Krizhevsky. Learning multiple layers of features from tiny images. MSc. thesis, Univ. Toronto, 2009.
[20] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. NIPS, 2009.
[21] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing for scalable image search. ICCV, 2009.
[22] W. Liu, J. Wang, R. Ji, Y. Jiang, and S. Chang. Supervised hashing with kernels. CVPR, 2012.
[23] M. Muja and D. Lowe. Fast approximate nearest neighbors with automatic algorithm configuration.
VISSAPP, 2009.
[24] M. Norouzi and D. J. Fleet. Minimal Loss Hashing for Compact Binary Codes. ICML, 2011.
[25] M. Norouzi, A. Punjani, and D. Fleet. Fast search in hamming space with multi-index hashing. CVPR,
2012.
[26] M. Raginsky and S. Lazebnik. Locality-sensitive binary codes from shift-invariant kernels. NIPS, 2009.
[27] D. Rumelhart, G. Hinton, and R. Williams. Learning internal representations by error propagation. MIT
Press, 1986.
[28] R. Salakhutdinov and G. Hinton. Semantic hashing. Int. J. Approximate Reasoning, 2009.
[29] G. Shakhnarovich, P. A. Viola, and T. Darrell. Fast pose estimation with parameter-sensitive hashing.
ICCV, 2003.
[30] S. Shalev-Shwartz, Y. Singer, and A. Ng. Online and batch learning of pseudo-metrics. ICML, 2004.
[31] P. Simard, D. Steinkraus, and J. Platt. Best practice for convolutional neural networks applied to visual
document analysis. ICDR, 2003.
[32] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. CVPR,
2008.
[33] J. Wang, S. Kumar, and S. Chang. Sequential Projection Learning for Hashing with Compact Codes.
ICML, 2010.
[34] K. Weinberger, J. Blitzer, and L. Saul. Distance metric learning for large margin nearest neighbor classification. NIPS, 2006.
[35] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. NIPS, 2008.
[36] E. Xing, A. Ng, M. Jordan, and S. Russell. Distance metric learning, with application to clustering with
side-information. NIPS, 2002.
[37] C. N. J. Yu and T. Joachims. Learning structural SVMs with latent variables. ICML, 2009.
Semiparametric Principal Component Analysis
Han Liu
Department of Operations Research
and Financial Engineering
Princeton University, NJ 08544
[email protected]
Fang Han
Department of Biostatistics
Johns Hopkins University
Baltimore, MD 21210
[email protected]
Abstract
We propose two new principal component analysis methods in this paper utilizing
a semiparametric model. The according methods are named Copula Component
Analysis (COCA) and Copula PCA. The semiparametric model assumes that, after unspecified marginally monotone transformations, the distributions are multivariate Gaussian. The COCA and Copula PCA accordingly estimate the leading
eigenvectors of the correlation and covariance matrices of the latent Gaussian distribution. The robust nonparametric rank-based correlation coefficient estimator,
Spearman?s rho, is exploited in estimation. We prove that, under suitable conditions, although the marginal distributions can be arbitrarily continuous, the COCA
and Copula PCA estimators obtain fast estimation rates and are feature selection
consistent in the setting where the dimension is nearly exponentially large relative
to the sample size. Careful numerical experiments on the synthetic and real data
are conducted to back up the theoretical results. We also discuss the relationship
with the transelliptical component analysis proposed by Han and Liu (2012).
1 Introduction
The Principal Component Analysis (PCA) is introduced as follows. Given a random vector X ∈ R^d with covariance matrix Σ and n independent observations of X, the PCA reduces the dimension of the data by projecting the data onto a linear subspace spanned by the k leading eigenvectors of Σ, such that the principal modes of variation are preserved. In practice, Σ is unknown and replaced by the sample covariance S. By spectral decomposition, Σ = ∑_{j=1}^d λ_j u_j u_jᵀ with eigenvalues λ_1 ≥ … ≥ λ_d and the corresponding orthonormal eigenvectors u_1, …, u_d. PCA aims at recovering the first k eigenvectors u_1, …, u_k.
Although the PCA method as a procedure is model free, its theoretical and empirical performances
rely on the distributions. With regard to the empirical concern, the PCA's geometric intuition is coming from the major axes of the contours of constant probability of the Gaussian [10]. [5] show that if X is multivariate Gaussian, then the distribution is centered about the principal component axes and is therefore 'self-consistent' [8]. We refer to [10] for more good properties that the PCA
enjoys under the Gaussian model, which we wish to preserve while designing its generalization.
With regard to the theoretical concern, firstly, the PCA generally fails to be consistent in the high dimensional setting. Given û_1 the dominant eigenvector of S, [9] show that the angle between û_1 and u_1 will not converge to 0, i.e. lim inf_{n→∞} E∠(û_1, u_1) > 0, where we denote by ∠(û_1, u_1) the angle between the estimated and the true leading eigenvectors. This key observation motivates regularizing Σ, resulting in a series of methods with different formulations and algorithms. The statistical model is generally further specified such that u_1 is sparse, namely supp(u_1) := {j : u_{1j} ≠ 0} and card(supp(u_1)) = s < n. The resulting estimator ũ_1 is:
ũ_1 = argmax_{v∈R^d} vᵀSv   subject to ‖v‖_2 = 1, card(supp(v)) ≤ s.   (1.1)
To solve Equation (1.1), a variety of algorithms have been proposed: greedy algorithms [3], lasso-type methods including SCoTLASS [11], SPCA [25] and sPCA-rSVD [19], a number of power methods [12, 23, 16], the biconvex algorithm PMD [21] and the semidefinite relaxation DSPCA [4]. Secondly, it is realized that the distribution where the data are drawn from needs to be specified, such that the estimator ũ_1 converges to u_1 at a fast rate. [9, 1, 16, 18, 20] all establish their results under
a strong Gaussian or sub-Gaussian assumption in order to obtain a fast rate under certain conditions.
In this paper, we first explore the use of the PCA conducted on the correlation matrix Σ⁰ instead of the covariance matrix Σ, and then propose a high dimensional semiparametric scale-invariant principal component analysis method, named the Copula Component Analysis (COCA). In this paper, the population version of the scale-invariant PCA is built as the estimator of the leading eigenvector of the population correlation matrix Σ⁰. Secondly, to handle non-Gaussian data, we generalize the distribution family from the Gaussian to the larger Nonparanormal family [15]. A random variable X = (X_1, …, X_d)ᵀ belongs to a Nonparanormal family if and only if there exists a set of univariate monotone functions {f_j⁰}_{j=1}^d such that (f_1⁰(X_1), …, f_d⁰(X_d))ᵀ is multivariate Gaussian. The Nonparanormal can have arbitrary continuous marginal distributions and can be far away from the sub-Gaussian family. Thirdly, to estimate Σ⁰ robustly and efficiently, instead of estimating the normal score transformation functions {f̂_j⁰}_{j=1}^d as [15] did, realizing that {f_j⁰}_{j=1}^d preserve the ranks of the data, we utilize the nonparametric correlation coefficient estimator, Spearman's rho, to estimate Σ⁰. [14, 22] prove that the corresponding estimators converge to Σ⁰ at a parametric rate.
In theory, we analyze the general case where X follows the Nonparanormal and θ_1 is weakly sparse; here θ_1 is the leading eigenvector of Σ⁰. We obtain the estimation consistency of the COCA estimator to θ_1 using the Spearman's rho correlation coefficient matrix. We prove that the estimation consistency rates are close to the parametric rate under the Gaussian assumption and that feature selection consistency can be achieved when d is nearly exponential to the sample size. In this paper, we also propose a scale-variant PCA procedure, named the Copula PCA. The Copula PCA estimates the leading eigenvector of the latent covariance matrix Σ. To estimate the leading eigenvectors of Σ, instead of Σ⁰, at a fast rate, we prove that extra conditions are required on the transformation functions.
2 Background
We start with notation. Let M = [M_{jk}] ∈ R^{d×d} and v = (v_1, …, v_d)ᵀ ∈ R^d. Let v's subvector with entries indexed by I be denoted by v_I, and M's submatrix with rows indexed by I and columns indexed by J be denoted by M_{IJ}. Let M_{I·} and M_{·J} be the submatrix of M with rows in I and all columns, and the submatrix of M with columns in J and all rows. For 0 < q ≤ ∞, we define the ℓ_q and ℓ_∞ vector norms as ‖v‖_q := (∑_{i=1}^d |v_i|^q)^{1/q} and ‖v‖_∞ := max_{1≤i≤d} |v_i|, and ‖v‖_0 := card(supp(v)). We define the matrix ℓ_max norm as the elementwise maximum value, ‖M‖_max := max{|M_{ij}|}, and the ℓ_∞ norm as ‖M‖_∞ := max_{1≤i≤m} ∑_{j=1}^n |M_{ij}|. Let Λ_j(M) be the j-th largest eigenvalue of M. In particular, Λ_min(M) := Λ_d(M) and Λ_max(M) := Λ_1(M) are the smallest and largest eigenvalues of M. The vectorized matrix of M, denoted by vec(M), is defined as vec(M) := (M_{·1}ᵀ, …, M_{·d}ᵀ)ᵀ. Let S^{d−1} := {v ∈ R^d : ‖v‖_2 = 1} be the d-dimensional ℓ_2 sphere. For any two vectors a, b ∈ R^d and any two square matrices A, B ∈ R^{d×d}, denote the inner products of a and b, and of A and B, by ⟨a, b⟩ := aᵀb and ⟨A, B⟩ := Tr(AᵀB).
inner product of a and b, A and B by ha, bi := aT b and hA, Bi := Tr(AT B).
2.1 The Models of the PCA and Scale-invariant PCA
Let Σ⁰ be the correlation matrix of Σ; by spectral decomposition, Σ = ∑_{j=1}^d λ_j u_j u_jᵀ and Σ⁰ = ∑_{j=1}^d ω_j θ_j θ_jᵀ. Here λ_1 ≥ λ_2 ≥ … ≥ λ_d > 0 and ω_1 ≥ ω_2 ≥ … ≥ ω_d > 0 are the eigenvalues of Σ and Σ⁰, with u_1, …, u_d and θ_1, …, θ_d the corresponding orthonormal eigenvectors. The next proposition claims that the estimators {û_1, …, û_d} and {θ̂_1, …, θ̂_d}, the eigenvectors of the sample covariance and correlation matrices S and S⁰, are the MLEs of {u_1, …, u_d} and {θ_1, …, θ_d}:
Proposition 2.1. Let x_1, …, x_n ∼ N(μ, Σ) and Σ⁰ be the correlation matrix of Σ. Then the estimators of PCA, {û_1, …, û_d}, and the estimators of the scale-invariant PCA, {θ̂_1, …, θ̂_d}, are the MLEs of {u_1, …, u_d} and {θ_1, …, θ_d}.
Proof. Use Theorem 11.3.1 in [2] and the functional invariance property of the MLE.
Proposition 2.2. For any 1 ≤ i ≤ d, we have supp(u_i) = supp(θ_i) and sign(u_{ij}) = sign(θ_{ij}), ∀ 1 ≤ j ≤ d.
Proof. For 1 ≤ i ≤ d, u_i = (θ_{i1}/σ_1, θ_{i2}/σ_2, …, θ_{id}/σ_d)ᵀ, where (σ_1², …, σ_d²)ᵀ := diag(Σ).
It is easy to observe that the scale-invariant PCA is a safe procedure for dimension reduction when
variables are measured in different scales. Although there seems to be no theoretical advantage of scale-invariant PCA over the PCA under the Gaussian model, in this paper we will show that under a more general Nonparanormal (or Gaussian Copula) model, the scale-invariant PCA poses far weaker conditions for the estimator to achieve good theoretical performance.
2.2 The Nonparanormal
We first introduce two definitions of the Nonparanormal separately defined in [15] and [14].
Definition 2.1 [15]. A random variable X = (X_1, …, X_d)ᵀ with population marginal means and standard deviations μ = (μ_1, …, μ_d)ᵀ and σ = (σ_1, …, σ_d)ᵀ is said to follow a Nonparanormal distribution NPN_d(μ, Σ, f) if and only if there exists a set of univariate monotone transformations f = {f_j}_{j=1}^d such that f(X) = (f_1(X_1), …, f_d(X_d))ᵀ ∼ N(μ, Σ), and σ_j² = Σ_{jj}, j = 1, …, d.
Definition 2.2 [14]. Let f⁰ = {f_j⁰}_{j=1}^d be a set of monotone univariate functions and Σ⁰ ∈ R^{d×d} be a positive definite correlation matrix with diag(Σ⁰) = 1. We say that a d-dimensional random variable X = (X_1, …, X_d)ᵀ follows a Nonparanormal distribution, i.e. X ∼ NPN_d(Σ⁰, f⁰), if f⁰(X) := (f_1⁰(X_1), …, f_d⁰(X_d))ᵀ ∼ N(0, Σ⁰).
The following lemma proves that the two definitions of the Nonparanormal are equivalent.
Lemma 2.1. A random variable X ∼ NPN_d(Σ⁰, f⁰) if and only if there exist μ = (μ_1, …, μ_d)ᵀ and Σ = [Σ_{jk}] ∈ R^{d×d} such that for any 1 ≤ j, k ≤ d, E(X_j) = μ_j, Var(X_j) = Σ_{jj} and Σ⁰_{jk} = Σ_{jk} / √(Σ_{jj} Σ_{kk}), and a set of monotone univariate functions f = {f_j}_{j=1}^d such that X ∼ NPN_d(μ, Σ, f).
Proof. Use the connection that f_j(x) = μ_j + σ_j f_j⁰(x), for j ∈ {1, 2, …, d}.
Lemma 2.1 guarantees that the Nonparanormal is defined properly. Definition 2.2 is more appealing
because it emphasizes the correlation and hence matches the spirit of the Copula. However, Definition 2.1 enjoys notational simplicity in analyzing the Copula-based LDA and PCA approaches.
2.3 Spearman's rho Correlation and Covariance Matrices
Given n data points x_1, …, x_n ∈ R^d, where x_i = (x_{i1}, …, x_{id})ᵀ, we denote by μ̂_j := (1/n) ∑_{i=1}^n x_{ij} and σ̂_j := √( (1/n) ∑_{i=1}^n (x_{ij} − μ̂_j)² ) the marginal sample means and standard deviations. Because the Nonparanormal distribution preserves the rank of the data, it is natural to use the nonparametric rank-based correlation coefficient estimator, Spearman's rho, to estimate the latent correlation. In detail, let r_{ij} be the rank of x_{ij} among x_{1j}, …, x_{nj} and r̄_j := (1/n) ∑_{i=1}^n r_{ij} = (n+1)/2. We consider the following statistic:
ρ̂_{jk} = ∑_{i=1}^n (r_{ij} − r̄_j)(r_{ik} − r̄_k) / √( ∑_{i=1}^n (r_{ij} − r̄_j)² · ∑_{i=1}^n (r_{ik} − r̄_k)² ),
and the correlation matrix estimator R̂_{jk} = 2 sin( (π/6) ρ̂_{jk} ). Lemma 2.2, coming from [14], claims that the estimation can reach the parametric rate.
Lemma 2.2 ([14]). When x_1, …, x_n ∼ i.i.d. NPN_d(Σ⁰, f⁰), for any n ≥ 21 / log d + 2,
P( ‖R̂ − Σ⁰‖_max ≤ 8π √(log d / n) ) ≥ 1 − 2/d² .   (2.1)
We denote by R̂ := [R̂_{jk}] the Spearman's rho correlation coefficient matrix. In the following, let Ŝ := [Ŝ_{jk}] = [σ̂_j σ̂_k R̂_{jk}] be the Spearman's rho covariance matrix.
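A minimal sketch of this plug-in estimator (ours; it relies on scipy's average-rank convention in `rankdata`):

```python
import numpy as np
from scipy.stats import rankdata

def spearman_rho_matrix(X):
    """Spearman's rho correlation matrix estimator of Sec. 2.3.
    X: n x d data matrix; returns Rhat with Rhat_jk = 2 sin(pi/6 rho_jk)."""
    ranks = np.apply_along_axis(rankdata, 0, X)  # r_ij: column-wise ranks
    rho = np.corrcoef(ranks, rowvar=False)       # Pearson correlation of ranks
    Rhat = 2.0 * np.sin(np.pi / 6.0 * rho)
    np.fill_diagonal(Rhat, 1.0)                  # keep an exact unit diagonal
    return Rhat
```

Because only the ranks of the data enter, the estimator is invariant to the unknown monotone transformations f⁰, which is exactly why it recovers the latent correlation Σ⁰.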
3 Methods
In Figure 1, we randomly generate 10,000 samples from three different types of Nonparanormal distributions. We suppose that X ∼ NPN₂(Σ⁰, f⁰). Here we set Σ⁰ = [1, 0.5; 0.5, 1] and the transformation functions as follows: (A) f_1⁰(x) = x³ and f_2⁰(x) = x^{1/3}; (B) f_1⁰(x) = sign(x)x² and f_2⁰(x) = x³; (C) f_1⁰(x) = f_2⁰(x) = Φ⁻¹(x). It can be observed that there no longer exists a nice geometric explanation. For example, researchers might wish to conduct PCA separately on different clusters in (A) and (B). For (C), the data look very noisy and a nice major axis might be considered not to exist.
However, under the Nonparanormal model and realizing that there is a latent Gaussian distribution
behind, the geometric intuition of the PCA naturally comes back. In the next section, we will present
the model of the COCA and Copula PCA motivated from this observation.
3.1 COCA Model
We first present the model of the Copula Component Analysis (COCA) method, where the idea of scale-invariant PCA is exploited and we wish to estimate the leading eigenvector of the latent correlation matrix. In particular, the following model M_0(q, R_q, Σ⁰, f⁰) is considered:

M_0(q, R_q, Σ⁰, f⁰):  x_1, ..., x_n ~i.i.d. NPN_d(Σ⁰, f⁰),  θ_1 ∈ S^{d−1} ∩ B_q(R_q),   (3.1)
[Figure 1 here: three scatter plot panels (A), (B), (C), described in the caption below.]
Figure 1: Scatter plots of three Nonparanormals, X ~ NPN_2(Σ⁰, f⁰). Here Σ⁰_12 = 0.5 and the transformation functions have the form: (A) f⁰_1(x) = x³ and f⁰_2(x) = x^{1/3}; (B) f⁰_1(x) = sign(x)x² and f⁰_2(x) = x³; (C) f⁰_1(x) = f⁰_2(x) = Φ⁻¹(x).
where θ_1 is the leading eigenvector of the latent correlation matrix Σ⁰ that we are interested in estimating, 0 ≤ q ≤ 1, and the ℓ_q ball B_q(R_q) is defined as follows. When q = 0,

B_0(R_0) := {v ∈ R^d : card(supp(v)) ≤ R_0};   (3.2)

when 0 < q ≤ 1,

B_q(R_q) := {v ∈ R^d : ‖v‖_q^q ≤ R_q}.   (3.3)
Inspired by the model M_0(q, R_q, Σ⁰, f⁰), we consider the following COCA estimator θ̃_1, which maximizes the following objective under the constraint that θ̃_1 ∈ B_q(R_q) for some 0 ≤ q ≤ 1:

θ̃_1 = arg max_{v ∈ R^d} vᵀR̂v,  subject to v ∈ S^{d−1} ∩ B_q(R_q).   (3.4)

Here R̂ is the estimated Spearman's rho correlation coefficient matrix. The corresponding COCA estimator θ̃_1 can be considered a nonlinear dimension reduction procedure and has the potential to gain more flexibility than classical PCA. In Section 4 we establish theoretical results on the COCA estimator and show that it can estimate the latent true dominant eigenvector θ_1 at a fast rate and can achieve feature selection consistency.
3.1.1 Copula PCA Model
In contrast, we provide another model, inspired by the classical PCA method, where we wish to estimate the leading eigenvector of the latent covariance matrix. In particular, the following model M(q, R_q, Σ, f) is considered:

M(q, R_q, Σ, f):  x_1, ..., x_n ~i.i.d. NPN_d(0, Σ, f),  u_1 ∈ S^{d−1} ∩ B_q(R_q),   (3.5)

where u_1 is the leading eigenvector of the covariance matrix Σ, which is what we are interested in estimating. The corresponding Copula PCA estimator is

ũ_1 = arg max_{v ∈ R^d} vᵀŜv,  subject to v ∈ S^{d−1} ∩ B_q(R_q),   (3.6)

where Ŝ is the Spearman's rho covariance coefficient matrix. This procedure is named Copula PCA. In Section 4, we will show that Copula PCA requires a much stronger condition than COCA for ũ_1 to converge to u_1 at a fast rate.
3.2 Algorithms
In this section we provide three sparse PCA algorithms, into which the Spearman's rho correlation and covariance matrices R̂ and Ŝ can be directly plugged to obtain sparse estimators.

Penalized Matrix Decomposition (PMD) was proposed by [21]. The main idea of the PMD is a bi-convex optimization algorithm for the problem arg max_{u,v} uᵀΣ̂v, subject to ‖u‖_2² ≤ 1, ‖v‖_2² ≤ 1, ‖u‖_1 ≤ λ, ‖v‖_1 ≤ λ. The COCA with PMD and Copula PCA with PMD are as follows: (1) Input: a symmetric matrix Σ̂; initialize v ∈ S^{d−1}. (2) Iterate until convergence: (a) u ← arg max_{u ∈ R^d} uᵀΣ̂v subject to ‖u‖_1 ≤ λ and ‖u‖_2² ≤ 1; (b) v ← arg max_{v ∈ R^d} uᵀΣ̂v subject to ‖v‖_1 ≤ λ and ‖v‖_2² ≤ 1. (3) Output: v. Here Σ̂ is either R̂ or Ŝ, corresponding to COCA with PMD and Copula PCA with PMD, respectively, and λ is the tuning parameter. [21] suggest using the first leading eigenvector of Σ̂ as the initial value of v. The PMD can be considered a solver for Equations (3.4) and (3.6) with q = 1.
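For concreteness, one possible implementation of the PMD iterations is sketched below. The ℓ_1-constrained maximization of uᵀΣ̂v over the unit ball has the closed form u ∝ soft-threshold(Σ̂v, δ), with δ ≥ 0 chosen (here by bisection) so that ‖u‖_1 ≤ λ, following [21]. This is a sketch under those assumptions, not a reference implementation:

import numpy as np

def _soft(a, delta):
    return np.sign(a) * np.maximum(np.abs(a) - delta, 0.0)

def _l1_constrained_unit(a, lam, tol=1e-8):
    """argmax_u u'a  s.t. ||u||_2 <= 1, ||u||_1 <= lam (Witten et al., 2009)."""
    u = a / np.linalg.norm(a)
    if np.abs(u).sum() <= lam:
        return u
    lo, hi = 0.0, np.abs(a).max()
    while hi - lo > tol:               # bisection on the threshold delta
        mid = 0.5 * (lo + hi)
        u = _soft(a, mid)
        u /= np.linalg.norm(u)
        if np.abs(u).sum() > lam:
            lo = mid
        else:
            hi = mid
    return u

def pmd_leading_vector(Sigma_hat, lam, max_iter=100):
    """PMD applied to a symmetric matrix (R_hat or S_hat)."""
    v = np.linalg.eigh(Sigma_hat)[1][:, -1]    # leading eigenvector init, as in [21]
    for _ in range(max_iter):
        u = _l1_constrained_unit(Sigma_hat @ v, lam)
        v_new = _l1_constrained_unit(Sigma_hat @ u, lam)
        if np.linalg.norm(v_new - v) < 1e-6:
            return v_new
        v = v_new
    return v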
The SPCA algorithm was proposed by [25]. Its main idea is to exploit a regression approach to PCA and then utilize the lasso and elastic net [24] to compute a sparse estimator of the leading eigenvector. The COCA with SPCA and Copula PCA with SPCA are as follows: (1) Input: a symmetric matrix Σ̂; initialize u ∈ S^{d−1}. (2) Iterate until convergence: (a) v ← arg min_{v ∈ R^d} (u − v)ᵀΣ̂(u − v) + λ_1‖v‖_2² + λ_2‖v‖_1; (b) u ← Σ̂v/‖Σ̂v‖_2. (3) Output: v/‖v‖_2. Here Σ̂ is either R̂ or Ŝ, corresponding to COCA with SPCA and Copula PCA with SPCA, and λ_1 ∈ R and λ_2 ∈ R are two tuning parameters. [25] suggest using the first leading eigenvector of Σ̂ as the initial value of v. The SPCA can be considered a solver for Equations (3.4) and (3.6) with q = 1.
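Step (a) reduces to a standard penalized regression: writing Σ̂ = AᵀA (e.g., via an eigendecomposition), (u − v)ᵀΣ̂(u − v) = ‖Au − Av‖_2². A minimal sketch of one update using scikit-learn's ElasticNet (whose penalty parameterization we map to λ_1, λ_2 as in the comment; the clipping of small negative eigenvalues is our own numerical guard):

import numpy as np
from sklearn.linear_model import ElasticNet

def spca_step(Sigma_hat, u, lam1, lam2):
    """One SPCA update: v-step via elastic net regression, then u-step."""
    w, V = np.linalg.eigh(Sigma_hat)
    w = np.clip(w, 0.0, None)                  # guard tiny negative eigenvalues
    A = np.sqrt(w)[:, None] * V.T              # Sigma_hat = A' A
    y = A @ u
    d = len(u)
    # Match ||y - Av||^2 + lam1||v||_2^2 + lam2||v||_1 to sklearn's objective
    # (1/2d)||y - Av||^2 + alpha*l1_ratio*||v||_1 + (alpha/2)(1-l1_ratio)||v||_2^2.
    l1_ratio = lam2 / (lam2 + 2.0 * lam1)
    alpha = (lam2 + 2.0 * lam1) / (2.0 * d)
    v = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                   fit_intercept=False).fit(A, y).coef_
    u_new = Sigma_hat @ v
    u_new /= max(np.linalg.norm(u_new), 1e-12)
    return u_new, v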
The Truncated Power method (TPower) was proposed by [23]. The main idea is to use the power method but truncate the vector onto an ℓ_0 ball in each iteration. TPower can actually be generalized to a family of algorithms that solve Equation (3.4) for 0 ≤ q ≤ 1. We name this family the ℓ_q Constrained Truncated Power Method (qTPM). In particular, when q = 0, qTPM coincides with the method of [23]. TPower can be considered a general solver for Equations (3.4) and (3.6) with q ∈ [0, 1]. In detail, we use the classical power method, but in each iteration t we project the intermediate vector x_t onto the intersection of the d-dimensional sphere S^{d−1} and the ℓ_q ball of radius R_q^{1/q}. Detailed algorithms are presented in the long version of this paper [6].
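For q = 0 the projection simply keeps the R_0 largest-magnitude coordinates and renormalizes; a minimal sketch of the resulting iteration (our own naming):

import numpy as np

def truncate_project(x, R0):
    """Project x onto S^{d-1} intersected with B_0(R0): keep the R0 largest
    entries in magnitude, zero out the rest, renormalize to unit l2 norm."""
    y = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-R0:]
    y[idx] = x[idx]
    return y / np.linalg.norm(y)

def tpower(Sigma_hat, R0, max_iter=200, tol=1e-8):
    """Truncated power method (qTPM with q = 0) applied to R_hat or S_hat."""
    v = truncate_project(np.linalg.eigh(Sigma_hat)[1][:, -1], R0)
    for _ in range(max_iter):
        v_new = truncate_project(Sigma_hat @ v, R0)
        if np.linalg.norm(v_new - v) < tol:
            return v_new
        v = v_new
    return v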
4 Theoretical Properties
In this section we provide the theoretical properties of the COCA and Copula PCA methods. In particular, we are interested in the high dimensional case where d > n.
4.1 Rank-based Correlation and Covariance Matrices Estimation
This section is devoted to quantifying the convergence rate of R̂ to Σ⁰ and of Ŝ to Σ. In particular, we establish ℓ_max convergence rates of the Spearman's rho correlation and covariance matrices to Σ⁰ and Σ. For COCA, Lemma 2.2 is enough. For Copula PCA, however, we still need to quantify the convergence rate of Ŝ to Σ.

Definition 4.1 (Subgaussian Transformation Function Class). Let Z ∈ R be a random variable following the standard Gaussian distribution. The subgaussian transformation function class TF(K) is defined as the set of functions {g_0 : R → R} satisfying E|g_0(Z)|^m ≤ (m!/2) K^m for all m ∈ Z⁺.

Here it is easy to see that, for any function g_0 : R → R, if there exists a constant L < ∞ such that |g_0(z)| ≤ L or |g_0′(z)| ≤ L or |g_0″(z)| ≤ L for all z ∈ R, then g_0 ∈ TF(K) for some constant K.
Then we have the following result, which states that Σ can also be recovered at the parametric rate.

Lemma 4.1. When x_1, ..., x_n ~i.i.d. NPN_d(μ, Σ, f), 0 < 1/c_0 < min_j{σ_j} < max_j{σ_j} < c_0 < ∞ for some constant c_0, and g := {g_j = f_j^{-1}}_{j=1}^d satisfies g_j² ∈ TF(K) for all j = 1, ..., d, where K < ∞ is some constant, then for any 1 ≤ j, k ≤ d and any n ≥ 21/log d + 2,

P( |Ŝ_jk − Σ_jk| > t ) ≤ 2 exp(−c_1 n t²),   (4.1)

where c_1 is a constant depending only on the choice of K.

Remark 4.1. Lemma 4.1 claims that, under certain constraints on the transformation functions, the latent covariance matrix Σ can be recovered using the Spearman's rho covariance matrix. However, in this case the marginal distributions of the Nonparanormal are required to be sub-Gaussian and cannot be arbitrary continuous distributions. This makes Copula PCA a less favored method.
4.2 COCA and Copula PCA

This section is devoted to our main results: upper bounds on the estimation error of the COCA estimator and the Copula PCA estimator.

Theorem 4.1 (Upper bound for COCA). Let θ̃_1 be the global solution to Equation (3.4) and suppose the model M_0(q, R_q, Σ⁰, f⁰) holds. For any two vectors v_1 ∈ S^{d−1} and v_2 ∈ S^{d−1}, let |sin ∠(v_1, v_2)| = (1 − (v_1ᵀv_2)²)^{1/2}. Then, for any n ≥ 21/log d + 2,

P( sin² ∠(θ̃_1, θ_1) ≤ γ_q R_q² · 64π²/(λ_1 − λ_2)² · (log d / n)^{(2−q)/2} ) ≥ 1 − 1/d²,   (4.2)

where γ_q = 2·I(q = 1) + 4·I(q = 0) + (1 + √3)²·I(0 < q < 1).
Proof. The key idea of the proof is to utilize the ℓ_max norm convergence of R̂ to Σ⁰. Detailed proofs are presented in the long version of this paper [6].

Generally, when R_q and λ_1, λ_2 do not scale with (n, d), the rate is O_P((log d/n)^{1−q/2}), which is the parametric rate obtained by [16, 20, 18]. When (n, d) goes to infinity, the two dominant eigenvalues λ_1 and λ_2 will typically go to infinity and will at least stay away from zero. Hence, the rate shown in Equation (4.2) is better than the seemingly more state-of-the-art rate γ_q R_q² · 64π²λ_1²/(λ_1 − λ_2)² · (log d/n)^{(2−q)/2}.

The COCA is significantly different from the results of [20] and [18] in the following senses: (1) in theory, the Nonparanormal family can have arbitrary continuous marginal distributions, for which a fast rate cannot be obtained using techniques built for Gaussian or sub-Gaussian distributions; (2) in methodology, we utilize the Spearman's rho correlation coefficient matrix R̂ to estimate Σ⁰, instead of the sample correlation matrix S⁰. This procedure has been shown to lose little in rate while being much more robust under the Nonparanormal model. Given Theorem 4.1, we can immediately obtain a feature selection consistency result.
Corollary 4.1 (Feature Selection Consistency of the COCA). Let θ̃_1 be the global solution to Equation (3.4) and suppose the model M_0(0, R_0, Σ⁰, f⁰) holds. Let Θ⁰ := supp(θ_1) and Θ̂⁰ := supp(θ̃_1). If we further have min_{j ∈ Θ⁰} |θ_1j| ≥ (16π√(2R_0)/(λ_1 − λ_2)) √(log d/n), then for any n ≥ 21/log d + 2, P(Θ̂⁰ = Θ⁰) ≥ 1 − 1/d².
Similarly, we can give an upper bound on the rate at which Copula PCA estimates the true leading eigenvector u_1 of the latent covariance matrix Σ. The next theorem provides the detailed result.

Theorem 4.2 (Upper bound for Copula PCA). Let ũ_1 be the global solution to Equation (3.6) and suppose the model M(q, R_q, Σ, f) holds. If g := {g_j = f_j^{-1}}_{j=1}^d satisfies g_j² ∈ TF(K) for all 1 ≤ j ≤ d, and 0 < 1/c_0 < min_j{σ_j} < max_j{σ_j} < c_0 < ∞, then, for any n ≥ 21/log d + 2,

P( sin² ∠(ũ_1, u_1) ≤ γ_q R_q² · 4/(c_1(λ_1 − λ_2)²) · (log d / n)^{(2−q)/2} ) ≥ 1 − 1/d²,

where γ_q = 2·I(q = 1) + 4·I(q = 0) + (1 + √3)²·I(0 < q < 1) and c_1 is the constant defined in Equation (4.1), depending only on K.
Corollary 4.2 (Feature Selection Consistency of the Copula PCA). Let ũ_1 be the global solution to Equation (3.6) and suppose the model M(0, R_0, Σ, f) holds. Let Θ := supp(u_1) and Θ̂ := supp(ũ_1). If g := {g_j = f_j^{-1}}_{j=1}^d satisfies g_j² ∈ TF(K) for all 1 ≤ j ≤ d, 0 < 1/c_0 < min_j{σ_j} < max_j{σ_j} < c_0 < ∞, and we further have min_{j ∈ Θ} |u_1j| ≥ (4√(2R_0)/(√c_1 (λ_1 − λ_2))) √(log d/n), then for any n ≥ 21/log d + 2, P(Θ̂ = Θ) ≥ 1 − 1/d².

5 Experiments
In this section we investigate the empirical usefulness of the COCA method. Three sparse PCA algorithms are considered: PMD, proposed by [21]; SPCA, proposed by [25]; and the Truncated Power method (TPower), proposed by [23]. The following three methods are compared: (1) Pearson: the classic high dimensional PCA using the Pearson sample correlation matrix; (2) Spearman: the COCA using the Spearman's rho correlation coefficient matrix; (3) Oracle: the classic high dimensional PCA using the Pearson sample correlation matrix of the data from the latent Gaussian (perfect data without contamination).
5.1 Numerical Simulations
In the simulation study we randomly sample n data points x_1, ..., x_n from the Nonparanormal distribution X ~ NPN_d(Σ⁰, f⁰). Here we consider the setup d = 100. We follow the same generating scheme as in [19, 23] and [7]. A covariance matrix Σ is first synthesized through an eigenvalue decomposition, where the first two eigenvalues are given and the corresponding eigenvectors are pre-specified to be sparse. In detail, we suppose that the first two dominant eigenvectors of Σ, u_1 and u_2, are sparse in the sense that only the first s = 10 entries of u_1 and the second s = 10 entries of u_2 are nonzero, all set to 1/√10. We take λ_1 = 5, λ_2 = 2, λ_3 = ... = λ_d = 1; the remaining eigenvectors are chosen arbitrarily. The correlation matrix Σ⁰ is accordingly generated from Σ, with λ_1 = 4, λ_2 = 2.5, λ_3, ..., λ_d ≤ 1 and the two dominant eigenvectors sparse. To sample data from the Nonparanormal, we also need the transformation functions f⁰ = {f⁰_j}_{j=1}^d. Two types are considered: (1) linear transformation (or no transformation): f⁰_linear = {h_0, h_0, ..., h_0}, where h_0(x) := x; (2) nonlinear transformation: there exist five univariate monotone functions h_1, ..., h_5 : R → R and f⁰_nonlinear = {h_1, h_2, h_3, h_4, h_5, h_1, h_2, h_3, h_4, h_5, ...}, where

h_1^{-1}(x) := x,
h_2^{-1}(x) := sign(x)|x|^{1/2} / (∫ |t| φ(t) dt)^{1/2},
h_3^{-1}(x) := x³ / (∫ t⁶ φ(t) dt)^{1/2},
h_4^{-1}(x) := (Φ(x) − ∫ Φ(t) φ(t) dt) / (∫ (Φ(y) − ∫ Φ(t) φ(t) dt)² φ(y) dy)^{1/2},
h_5^{-1}(x) := (exp(x) − ∫ exp(t) φ(t) dt) / (∫ (exp(y) − ∫ exp(t) φ(t) dt)² φ(y) dy)^{1/2}.

Here φ and Φ are the probability density and cumulative distribution functions of the standard Gaussian. h_1, ..., h_5 are defined such that for any Z ~ N(0, 1), E(h_j^{-1}(Z)) = 0 and Var(h_j^{-1}(Z)) = 1 for all j ∈ {1, ..., 5}. We then generate n = 100, 200 or 500 data points from:

[Scheme 1] X ~ NPN_d(Σ⁰, f⁰_linear), where f⁰_linear = {h_0, h_0, ..., h_0} and Σ⁰ is defined as above.
[Scheme 2] X ~ NPN_d(Σ⁰, f⁰_nonlinear), where f⁰_nonlinear = {h_1, h_2, h_3, h_4, h_5, ...}.
To evaluate the robustness of the different methods, we adopt a data contamination procedure similar to that in [14]. Let r ∈ [0, 1) represent the proportion of samples being contaminated. For each dimension, we randomly select ⌊nr⌋ entries and replace them with either 5 or −5 with equal probability. The final data matrix obtained is X ∈ R^{n×d}. The PMD, SPCA and TPower algorithms are then employed on X to compute the estimated leading eigenvector θ̃_1.
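A sketch of the sampling and contamination steps follows (the normalizing constants of h_2^{-1}, ..., h_5^{-1} are computed here by numerical quadrature rather than in closed form; names are ours):

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def _standardize(g):
    """Return x -> (g(x) - E g(Z)) / sd(g(Z)) for Z ~ N(0,1), via quadrature."""
    m = quad(lambda t: g(t) * norm.pdf(t), -np.inf, np.inf)[0]
    v = quad(lambda t: (g(t) - m) ** 2 * norm.pdf(t), -np.inf, np.inf)[0]
    return lambda x: (g(x) - m) / np.sqrt(v)

# h_j^{-1}: standardized monotone distortions of a standard Gaussian.
h_inv = [lambda x: x,
         _standardize(lambda x: np.sign(x) * np.abs(x) ** 0.5),
         _standardize(lambda x: x ** 3),
         _standardize(norm.cdf),
         _standardize(np.exp)]

def sample_npn(n, Sigma0, scheme=2, seed=0):
    """Draw X ~ NPN_d(Sigma0, f0): Z ~ N(0, Sigma0), then X_j = h^{-1}(Z_j)."""
    rng = np.random.default_rng(seed)
    d = Sigma0.shape[0]
    Z = rng.multivariate_normal(np.zeros(d), Sigma0, size=n)
    if scheme == 1:
        return Z
    return np.column_stack([h_inv[j % 5](Z[:, j]) for j in range(d)])

def contaminate(X, r, seed=1):
    """Replace floor(n*r) random entries per column with +5 or -5."""
    rng = np.random.default_rng(seed)
    X = X.copy()
    n = X.shape[0]
    for j in range(X.shape[1]):
        idx = rng.choice(n, size=int(np.floor(n * r)), replace=False)
        X[idx, j] = rng.choice([-5.0, 5.0], size=idx.size)
    return X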
Under Scheme 1 and Scheme 2, with different levels of contamination (r = 0 or 0.05), we repeatedly generate the data matrix X 1,000 times and compute the averaged false positive and false negative rates along a path of tuning parameters λ. The feature selection performances of the different methods are then evaluated. The corresponding ROC curves are presented in Figure 2. More quantitative results are provided in the long version of this paper [6]. It can be observed that when r = 0 and X is exactly Gaussian, Pearson, Spearman and Oracle can all recover the sparsity pattern perfectly. However, when r > 0, the performance of Pearson decreases significantly, while Spearman remains very close to the Oracle. In Scheme 2, even when r = 0, Pearson cannot recover the support set of θ_1, while Spearman can still recover the sparsity pattern almost perfectly. When r > 0, the performance of Spearman is still very close to the Oracle.
[Figure 2 here: a grid of ROC curves (TPR against FPR) comparing Pearson, Spearman, and Oracle, described in the caption below.]
Figure 2: ROC curves for the PMD, SPCA and Truncated Power method (the left two, the middle two, and the right two columns) with linear (no) and nonlinear transformations (top, bottom) and data contamination at different levels (r = 0, 0.05). Here n = 100 and d = 100.
5.2 Large-scale Genomic Data Analysis
In this section we investigate the performance of Spearman compared with Pearson using one of the largest microarray datasets [17]. In summary, we collect in all 13,182 publicly available microarray samples from the Affymetrix HGU133a platform. The raw data contain 20,248 probes and 13,182 samples belonging to 2,711 tissue types (e.g., lung cancers, prostate cancer, brain tumor, etc.). There are at most 1,599 samples and at least 1 sample belonging to each tissue type. We merge the probes corresponding to the same gene, leaving 12,713 genes and 13,182 samples. This dataset is non-Gaussian (see the long version of this paper [6]). The main purpose of this experiment is to compare the performance of the COCA with classical high dimensional PCA. We utilize the Truncated Power method proposed by [23] to obtain the sparse estimated dominant eigenvectors.
We adopt the same data-preprocessing idea as in [14]. In particular, we first remove the batch effect by applying the surrogate variable analysis proposed by [13]. We then extract the top 2,000 genes with the highest marginal standard deviations. Accordingly, 2,000 genes are left and the data matrix we focus on is 2,000 × 13,182. We then explore several tissue types with the largest sample sizes: (1) breast tumor, 1,599 samples; (2) B cell lymphoma, 213 samples; (3) prostate tumor, 148 samples; (4) Wilms tumor, 143 samples.
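The gene screening step is simple to reproduce (batch correction via surrogate variable analysis is done with external tools, e.g., the R package sva, and is not sketched here):

import numpy as np

def top_variance_genes(X, k=2000):
    """Keep the k genes (columns) with the largest marginal standard
    deviations; X is samples x genes. Returns the screened matrix and indices."""
    idx = np.argsort(X.std(axis=0))[-k:]
    return X[:, idx], idx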
[Figure 3 here: eight scatter plots of the first two principal components, described in the caption below.]
Figure 3: Scatter plots of the first two principal components of the dataset. Spearman and Pearson are compared (top versus bottom), and b cell lymphoma, breast tumor, prostate tumor and Wilms tumor are explored (from left to right). Each black point represents a sample, and each red point represents a sample belonging to the corresponding tissue type.
For each tissue type listed above, we apply the COCA (Spearman) and the classic high dimensional PCA (Pearson) to the data belonging to that tissue type and obtain the first two dominant sparse eigenvectors. Here we set R_0 = 100 for both eigenvectors. For COCA, we first apply a normal score transformation to the original dataset. We subsequently project the whole dataset onto the first two principal components using the obtained eigenvectors. The corresponding 2-dimensional visualization is illustrated in Figure 3, where each black point represents a sample and each red point represents a sample belonging to the corresponding tissue type. It can be observed that, in the 2D plots learned by the COCA, the red points are on average denser and closer to the border of the sample cluster. The first phenomenon indicates that the COCA has the potential to preserve more common information shared by samples from the same tissue type. The second phenomenon indicates that the COCA has the potential to differentiate samples from different tissue types more efficiently.
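A sketch of this pipeline follows. The normal score transform maps each gene's values to Gaussian quantiles of their ranks; tpower is the truncated power routine sketched in Section 3.2, and the deflation used to obtain the second eigenvector is one simple choice we make here, since the exact scheme is not specified above:

import numpy as np
from scipy.stats import norm, rankdata

def normal_score(X):
    """Column-wise normal score (Gaussian quantile of rank) transform."""
    n = X.shape[0]
    return norm.ppf(np.apply_along_axis(rankdata, 0, X) / (n + 1.0))

def project_2d(X, Sigma_hat, R0=100):
    """First two sparse PCs via tpower with deflation, then project samples."""
    v1 = tpower(Sigma_hat, R0)
    S1 = Sigma_hat - (v1 @ Sigma_hat @ v1) * np.outer(v1, v1)   # deflate
    v2 = tpower(S1, R0)
    return X @ np.column_stack([v1, v2])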
6 Discussion and Comparison with Related Work
A similar principal component analysis procedure is proposed in [7], which advocates the use of the transformed Kendall's tau correlation matrix (instead of the Spearman's rho correlation matrix used in the current paper) for estimating the sparse leading eigenvectors. Though both papers work on principal component analysis, the core ideas are quite different. Firstly, the analysis in [7] is based on a different distribution family called the transelliptical, while COCA and Copula PCA are based on the Nonparanormal family. Secondly, as a price of the improved modeling flexibility, no scale-variant counterpart exists in [7], since the transformation functions are hard to quantify there. In contrast, by introducing the subgaussian transformation function family, the current paper provides sufficient conditions for Copula PCA to achieve parametric rates. Thirdly, the method in [7] cannot explicitly conduct data visualization, because the latent elliptical distribution is unspecified and the marginal transformations therefore cannot be accurately estimated. For Copula PCA, we are able to provide projection visualizations such as those in the experimental part of this paper. Moreover, by quantifying a sharp convergence rate for estimating the marginal transformations, we can provide convergence rates for estimating the principal components. Due to space limits, we refer to the longer version of this paper [6] for more details. Finally, we recommend using Spearman's rho instead of Kendall's tau for estimating the correlation coefficients, provided that the Nonparanormal model holds, because Spearman's rho is statistically more efficient than Kendall's tau within the Nonparanormal family.
This research was supported by NSF award IIS-1116730.
References
[1] A.A. Amini and M.J. Wainwright. High-dimensional analysis of semidefinite relaxations for
sparse principal components. In Information Theory, 2008. ISIT 2008. IEEE International
Symposium on, pages 2454?2458. IEEE, 2008.
[2] T.W. Anderson. An introduction to multivariate statistical analysis, volume 2. Wiley New
York, 1958.
[3] A. d'Aspremont, F. Bach, and L. El Ghaoui. Optimal solutions for sparse principal component
analysis. The Journal of Machine Learning Research, 9:1269-1294, 2008.
[4] A. d'Aspremont, L. El Ghaoui, M.I. Jordan, and G.R.G. Lanckriet. A direct formulation for
sparse PCA using semidefinite programming. Computer Science Division, University of California, 2004.
[5] B. Flury. A first course in multivariate statistics. Springer Verlag, 1997.
[6] F. Han and H. Liu. High dimensional semiparametric scale-invariant principal component
analysis. Technical Report, 2012.
[7] F. Han and H. Liu. Tca: Transelliptical principal component analysis for high dimensional
non-gaussian data. Technical Report, 2012.
[8] T. Hastie and W. Stuetzle. Principal curves. Journal of the American Statistical Association,
pages 502?516, 1989.
[9] I.M. Johnstone and A.Y. Lu. On consistency and sparsity for principal components analysis in
high dimensions. Journal of the American Statistical Association, 104(486):682?693, 2009.
[10] I.T. Jolliffe. Principal component analysis, volume 2. Wiley Online Library, 2002.
[11] I.T. Jolliffe, N.T. Trendafilov, and M. Uddin. A modified principal component technique based
on the lasso. Journal of Computational and Graphical Statistics, 12(3):531?547, 2003.
[12] M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre. Generalized power method for sparse
principal component analysis. The Journal of Machine Learning Research, 11:517-553, 2010.
[13] J.T. Leek and J.D. Storey. Capturing heterogeneity in gene expression studies by surrogate
variable analysis. PLoS Genetics, 3(9):e161, 2007.
[14] H. Liu, F. Han, M. Yuan, J. Lafferty, and L. Wasserman. High dimensional semiparametric
gaussian copula graphical models. Annals of Statistics, 2012.
[15] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high
dimensional undirected graphs. The Journal of Machine Learning Research, 10:2295?2328,
2009.
[16] Z. Ma. Sparse principal component analysis and iterative thresholding. Arxiv preprint
arXiv:1112.2432, 2011.
[17] Matthew McCall, Benjamin Bolstad, and Rafael Irizarry. Frozen robust multiarray analysis
(frma). Biostatistics, 11:242?253, 2010.
[18] D. Paul and I.M. Johnstone. Augmented sparse principal component analysis for high dimensional data. Arxiv preprint arXiv:1202.1242, 2012.
[19] H. Shen and J.Z. Huang. Sparse principal component analysis via regularized low rank matrix
approximation. Journal of multivariate analysis, 99(6):1015?1034, 2008.
[20] V.Q. Vu and J. Lei. Minimax rates of estimation for sparse pca in high dimensions. Arxiv
preprint arXiv:1202.0786, 2012.
[21] D.M. Witten, R. Tibshirani, and T. Hastie. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics,
10(3):515?534, 2009.
[22] L. Xue and H. Zou. Regularized rank-based estimation of high-dimensional nonparanormal
graphical models. Annals of Statistics, 2012.
[23] X.T. Yuan and T. Zhang. Truncated power method for sparse eigenvalue problems. Arxiv
preprint arXiv:1112.2679, 2011.
[24] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the
Royal Statistical Society: Series B (Statistical Methodology), 67(2):301?320, 2005.
[25] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of computational and graphical statistics, 15(2):265?286, 2006.
4,209 | 481 | The VC-Dimension versus the Statistical
Capacity of Multilayer Networks
Chuanyi Ji* and Demetri Psaltis
Department of Electrical Engineering
California Institute of Technology
Pasadena, CA 91125
Abstract
A general relationship is developed between the VC-dimension and the
statistical lower epsilon-capacity which shows that the VC-dimension can
be lower bounded (in order) by the statistical lower epsilon-capacity of a
network trained with random samples. This relationship explains quantitatively how generalization takes place after memorization, and relates
the concept of generalization (consistency) with the capacity of the optimal
classifier over a class of classifiers with the same structure and the capacity
of the Bayesian classifier. Furthermore, it provides a general methodology
to evaluate a lower bound for the VC-dimension of feedforward multilayer
neural networks.
This general methodology is applied to two types of networks which are important for hardware implementations: two-layer (N - 2L - 1) networks with binary weights, integer thresholds for the hidden units and a zero threshold for the output unit, and a single neuron ((N - 1) networks) with binary weights and a zero threshold. Specifically, we obtain O(NL/ln L) ≤ d_2 ≤ O(W), and d_1 ~ O(N). Here W is the total number of weights of the (N - 2L - 1) networks; d_1 and d_2 represent the VC-dimensions of the (N - 1) and (N - 2L - 1) networks respectively.
1 Introduction
The information capacity and the VC-dimension are two important quantities that
characterize multilayer feedforward neural networks. The former characterizes their
"Present Address: Department of Electrical Computer and System Engineering, Rensselaer Poly tech Institute, Troy, NY 12180.
memorization capability, while the latter represents the sample complexity needed
for generalization. Discovering their relationships is of importance for obtaining
a better understanding of the fundamental properties of multilayer networks in
learning and generalization.
In this work we show that the VC-dimension of feedforward multilayer neural networks, which is a distribution- and network-parameter-independent quantity, can be lower bounded (in order) by the statistical lower epsilon-capacity C⁻_ε (McEliece et al., 1987), which is a distribution- and network-dependent quantity, when the samples are drawn from two classes: O_1(+1) and O_2(-1). The only requirement on the distribution from which samples are drawn is that the optimal classification error achievable, the Bayes error P_be, is greater than zero. Then we will show that the VC-dimension d and the statistical lower epsilon-capacity C⁻_ε are related by

C⁻_ε ≤ A d,   (1)

where ε = P_eo − ε' for 0 < ε' ≤ P_eo, or ε = P_be − ε' for 0 < ε' ≤ P_be. Here ε' is the error tolerance, and P_eo represents the optimal error rate achievable on the class of classifiers considered. It is obvious that P_eo ≥ P_be. The relation given in Equation (1) is non-trivial if P_be > 0 and P_eo ≥ ε' (or P_be ≥ ε'), so that ε is a nonnegative quantity. The quantity A d is called the universal sample bound for generalization, where A is a positive constant depending only on ε' (specified in Theorem 1). When the sample complexity exceeds A d, all the networks of the same architecture, for all distributions of the samples, can generalize with almost probability 1 for d large. A special case of interest, in which P_be = 1/2, corresponds to random assignments of samples. Then C⁻_ε represents the random storage capacity, which characterizes the memorizing capability of networks.
Although the VC-dimension is a key parameter in generalization, there exists no systematic way of finding it. The relationship we have obtained, however, brings with it a constructive method for finding a lower bound on the VC-dimension of multilayer networks. That is, if the weights of a network are properly constructed using random samples drawn from a chosen distribution, the statistical lower epsilon-capacity can be evaluated and then used as a bound on the VC-dimension. In this paper we show how this constructive approach contributes to finding lower bounds on the VC-dimension of multilayer networks with binary weights.
2 A Relationship Between the VC-Dimension and the Statistical Capacity

2.1 Definition of the Statistical Capacity

Consider a network s whose weights are constructed from M random samples belonging to two classes. Let r(s) = Z/M, where Z is the total number of samples classified incorrectly by the network s. Then the random variable r(s) is the training error rate. Let

P_f(M) := Pr{ r(s) ≤ ε },   (2)

where 0 < ε ≤ 1. Then the statistical lower epsilon-capacity (statistical capacity in short) C⁻_ε is the maximum M such that P_f(M) ≥ 1 − η, where η can be made arbitrarily small for sufficiently large N.
Roughly speaking, the statistical lower epsilon-capacity defined here can be regarded
as a sharp transition point on the curve Pf(M) shown in Fig.1. When the number
of samples used is below this sharp transition, the network can memorize them
perfectly.
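This definition lends itself to direct numerical exploration: given any routine P_f (for instance a Monte Carlo estimate such as the one sketched after equation (11) in Section 3.3), the capacity is the largest M on a grid for which P_f(M) stays near 1. A minimal sketch, with names of our own choosing:

def estimate_capacity(Pf, eta=0.05, M_grid=range(10, 2001, 10)):
    """Largest M on the grid with Pf(M) >= 1 - eta, per the definition of the
    statistical lower epsilon-capacity."""
    capacity = 0
    for M in M_grid:
        if Pf(M) >= 1.0 - eta:
            capacity = M
    return capacity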
2.2 The Universal Sample Bound for Generalization

Let P_e(x|s) be the true probability of error for the network s. Then the generalization error ΔE(s) satisfies ΔE(s) = |r(s) − P_e(x|s)|. We can show that the probability of the generalization error exceeding a given small quantity ε' satisfies the following relation.

Theorem 1

Pr( max_{s∈S} ΔE(s) > ε' ) ≤ h(2M; d, ε'),   (3)

where

h(2M; d, ε') = 1, if either 2M ≤ d, or 6 (2M)^d / d! · e^{−ε'²M/8} ≥ 1 and 2M > d;
h(2M; d, ε') = 6 (2M)^d / d! · e^{−ε'²M/8}, otherwise.

Here S is a class of networks with the same architecture. The function h(2M; d, ε') has one sharp transition, occurring at A d as shown in Fig. 1, where A is a constant satisfying the equation ln(2A) + 1 − ε'²A/8 = 0.
2.3 A Relationship between the VC-Dimension and C⁻_ε

Roughly speaking, since both the statistical capacity and the VC-dimension represent sharp transition points, it is natural to ask whether they are related. The relationship can actually be given through the theorem below.

Theorem 2  Let samples belonging to two classes O_1(+1) and O_2(-1) be drawn independently from some distribution. The only requirement on the distributions considered is that the Bayes error P_be satisfies 0 < P_be ≤ 1/2. Let S be a class of feedforward multilayer networks with a fixed structure consisting of threshold elements, and let s_1 be one network in S whose weights are constructed from M (training) samples drawn from one distribution as specified above. For a given distribution, let P_eo be the optimal error rate achievable on S and P_be be the Bayes error rate. Then

Pr( r(s_1) < P_eo − ε' ) ≤ h(2M; d, ε'),   (4)

and

C⁻_{P_eo − ε'} ≤ A d,   (5)
[Figure 1 here: the curve h(2M; d, ε') plotted against M.]
Figure 1: Two sharp transition points for the capacity and the universal sample
bound for generalization.
where r(s_1) is equal to the training error rate of s_1. (It is also called the resubstitution error estimator in the pattern recognition literature.) These relations are nontrivial if P_eo > ε', P_be > ε' and ε' > 0 is small.

The key idea of this result is illustrated in Fig. 1: the sharp transition which stands for the lower epsilon-capacity is below the sharp transition for the universal sample bound for generalization.

To interpret this relation, let us compare Equation (2) and Equation (5) and examine the ranges of ε and ε' respectively. Since ε', which is initially given in Inequality (3), represents a bound on the generalization error, it is usually quite small. For most practical problems, P_be is small also. If the structure of the class of networks is properly chosen so that P_eo ≈ P_be, then ε = P_eo − ε' will be a small quantity. Although the epsilon-capacity is a valid quantity depending on M for any network in the class, for M sufficiently large the meaningful networks to be considered through this relation form only a small subset of the class, namely those whose true probability of error is close to P_eo. That is, this small subset contains only those networks which can approximate the best classifier contained in this class.
Corollary 1 Let samples be drawn independently from some distribution and then
assigned randomly to two classes fh(+I) and O2 (-1) with equal probability. This
is equivalent to the case that the two class conditional distributions have complete
overlap with one another. That is, Pr(x 101) = Pr(x I O 2 ). Then the Bayes error
is
Using the same notation as in the above theorem, we have
!.
C"l2 - ( < Ad.
I
(6)
Although the distributions specified here give an uninteresting case for classification
purposes, we will see later that the random statistical epsilon-capacity in Inequality (6) can be used to characterize the memorizing capability of networks, and to
formulate a constructive approach to find a lower bound for the VC-dimension.
3 Bounds for the VC-Dimension of Two Networks with Binary Weights
3.1 A Constructive Methodology

One application of this relation is that it provides a general constructive approach to finding a lower bound on the VC-dimension of a class of networks. Specifically, using the relationship given in Inequality (6), the procedure can be described as follows.
1) Select a distribution.
2) Draw samples independently from the chosen distribution, and then assign them randomly to two classes.
3) Evaluate the lower epsilon-capacity and then use it as a lower bound for the VC-dimension.
Two examples are given below to demonstrate how this general approach can be applied to find lower bounds for the VC-dimension.
3.2 Bounds for Two-Layer Networks with Binary Weights

Two-layer (N - 2L - 1) networks with binary weights and integer thresholds are considered in this section.

3.2.1 A Lower Bound

The construction of the network we consider is motivated by the one used by Baum (Baum, 1988) in finding the capacity of two-layer networks with real weights.
Although this particular network will fail if the accuracy of the weights and thresholds is reduced, the idea of using a grandmother-cell type of network will be adopted to construct our network.

We consider a two-layer binary network with 2L hidden threshold units and one output threshold unit, shown in Fig. 2 a). The weights at the second layer are fixed and equal to +1 and -1 alternately. The hidden units are allowed to have integer thresholds in [-N, N], and the threshold for the output unit is zero.

Let x_l^(m) = (x_l1^(m), ..., x_lN^(m))ᵀ be an N-dimensional random vector, where the x_li^(m) are independent random variables taking +1 and -1 with equal probability 1/2, 0 ≤ l ≤ L and 0 ≤ m ≤ M. Consider the l-th pair of hidden units. The weights at the first layer for this pair of hidden units are equal. Let w_li denote the weight from the i-th input to these two hidden units; then we have
The VC-Dimension versus the Statistical Capacity of Multilayer Networks
1
+
+
+
2L
+
+
+
+
N
ith
(b)
(a)
Figure 2: a) The two-layer network with binary weights. b) Illustration on how a
pair of hidden units separates samples.
w_li = sgn( α_l Σ_{m=1}^M x_li^(m) ),   (7)

where sgn(x) = 1 if x > 0, and -1 otherwise. The α_l, 1 ≤ l ≤ L, are independent random variables taking the two values +1 and -1 with equal probability; they represent the random assignments of the LM samples into the two classes O_1(+1) and O_2(-1).
The thresholds for these two units are different and are given as

t_l^+ = …,  t_l^− = …,   (8)

where 0 < k < 1, and t_l^± correspond to the thresholds of the units with weights +1 and -1 at the second layer, respectively.
Fig. 2 b) illustrates how this network works. Each pair of hidden units forms two parallel hyperplanes separated by the two thresholds. The pair generates a presynaptic input of either +2 or -2 to the output unit, according to whether α_l equals +1 or -1, but only for the samples stored in this pair which fall in between the planes, and a presynaptic input of 0 for the samples falling outside. When the samples as well as the parallel hyperplanes are random, the samples fall either between a pair of parallel hyperplanes or outside with certain probabilities. Therefore, statistical analysis is needed to obtain the lower epsilon-capacity.
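A sketch of this construction is below. Since the exact expression for the thresholds in equation (8) did not survive extraction, the band placement here (a symmetric window of half-width k·c_l around the stored samples' mean presynaptic input c_l) is our illustrative stand-in; the weight rule follows equation (7):

import numpy as np

def build_pairs(samples, alphas):
    """samples: (L, M, N) array of +-1 patterns stored in pair l;
    alphas: (L,) labels in {-1, +1}. Returns first-layer weights W per
    eq. (7) and the mean presynaptic input c_l of each pair's own samples."""
    W = np.sign(alphas[:, None] * samples.sum(axis=1))      # eq. (7)
    W[W == 0] = 1.0
    s = np.einsum('lmn,ln->lm', samples, W)                 # (L, M)
    c = (alphas[:, None] * s).mean(axis=1)                  # positive on average
    return W, c

def classify(x, W, c, alphas, k=0.5):
    """Output of the two-layer network on input x: pair l contributes
    2*alpha_l when its presynaptic input lies between the two parallel
    hyperplanes (1-k)c_l and (1+k)c_l, and 0 otherwise."""
    s = alphas * (W @ x)
    inside = ((1.0 - k) * c <= s) & (s <= (1.0 + k) * c)
    return np.sign(np.sum(2.0 * alphas * inside))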
Theorem 3  A lower bound C̲_{1/2−ε'} for the lower epsilon-capacity C⁻_{1/2−ε'} of this network is

C̲_{1/2−ε'} ~ (1 − k)² NL / ln L.   (9)
3.2.2 An Upper Bound

Since the total number of possible mappings of two-layer (N - 2L - 1) networks with binary weights and integer thresholds ranging in [-N, N] is bounded by 2^{W + L log 2N}, the VC-dimension d_2 is upper bounded by W + L log 2N, which is of the order of W. Then d_2 ≤ O(W). Combining the upper and lower bounds, we have

O(NL/ln L) ≤ d_2 ≤ O(W).   (10)
3.3 Bounds for One-Layer Networks with Binary Weights

The one-layer network we consider here is equivalent to one hidden unit in the above (N - 2L - 1) network. Specifically, the weight from the i-th input unit to the neuron is

w_i = sgn( Σ_{m=1}^M α_m x_i^(m) ),   (11)

where 1 ≤ i ≤ N, and the x_i^(m) and α_m are independent and equally probable binary (±1) random variables, representing the elements of the N-dimensional sample vectors and their random assignments to two classes, respectively.
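The transition curve P_f(M) of this neuron is easy to estimate by Monte Carlo; the following sketch (our own naming, usable with the capacity scan of Section 2.1) trains the sign-Hebbian weights of equation (11) on random ±1 samples and labels and measures the training error:

import numpy as np

def empirical_Pf(N, M, eps, trials=200, seed=0):
    """Estimate Pf(M) = Pr(training error rate <= eps) for the sign neuron
    w_i = sgn(sum_m alpha_m x_i^(m)) on M random +-1 samples."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        X = rng.choice([-1.0, 1.0], size=(M, N))   # M samples, N inputs
        alpha = rng.choice([-1.0, 1.0], size=M)    # random class assignments
        w = np.sign(X.T @ alpha)                   # eq. (11)
        w[w == 0] = 1.0
        err = np.mean(np.sign(X @ w) != alpha)
        hits += (err <= eps)
    return hits / trials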
Theorem 4  The lower epsilon-capacity C¹_{1/2−ε'} of this network satisfies

C¹_{1/2−ε'} ~ 2N / (π ε'²).   (12)

Then by Corollary 1 we have O(N) ≤ O(d_1), where d_1 is the VC-dimension of one-layer (N - 1) networks. Using a similar counting argument, an upper bound can be obtained as d_1 ≤ N. Combining the lower and upper bounds, we have d_1 ~ O(N).
4 Discussions
The general relationship we have drawn between the VC-dimension and the statistical lower epsilon-capacity provides a new view of the sample complexity for generalization. Specifically, it has two implications for learning and generalization.
1) For random assignments of the samples (P_be = 1/2), the relationship confirms that generalization occurs after memorization, since the statistical lower epsilon-capacity in this case is the random storage capacity, which characterizes the memorizing capability of networks, and it is upper bounded by the universal sample bound for generalization.
2) For cases where the Bayes error is smaller than 1/2, the relationship indicates that an appropriate choice of network structure is very important. If a network structure is properly chosen so that the optimal achievable error rate P_eo is close to the Bayes error P_be, then the optimal network in this class is the one with the largest lower epsilon-capacity. Since a suitable structure can hardly be chosen a priori, due to the lack of knowledge about the underlying distribution, searching over network structures as well as weight values becomes necessary. A similar idea has been addressed by Devroye (Devroye, 1988) and by Vapnik (Vapnik, 1982) for structural minimization.
We have applied this relation as a general constructive approach to obtain lower bounds on the VC-dimension of two-layer and one-layer networks with binary interconnections. For the one-layer networks, the lower bound is tight and matches the upper bound. For the two-layer networks, the lower bound is smaller than the upper bound (in order) by a ln factor. In independent work by Littlestone (Littlestone, 1988), the VC-dimension of so-called DNF expressions was obtained. Since any DNF expression can be implemented by a two-layer network of threshold units with binary weights and integer thresholds, this result is equivalent to showing that the VC-dimension of such networks is O(W). We believe that the ln factor in our lower bound is due to the limitations of the grandmother-cell type of network used in our construction.
Acknowledgement
The authors would like to thank Yaser Abu-Mostafa and David Haussler for helpful
discussions. The support of AFOSR and DARPA is gratefully acknowledged.
References
E. Baum. (1988) On the Capacity of Multilayer Perceptron. J. of Complexity,
4:193-215.
L. Devroye. (1988) Automatic Pattern Recognition: A Study of Probability of
Error. IEEE Trans. on Pattern Recognition and Machine Intelligence, Vol. 10,
No.4: 530-543.
N. Littlestone. (1988) Learning Quickly When Irrelevant Attributes Abound: A
New Linear-Threshold Algorithm. Machine Learning 2: 285-318.
R.J. McEliece, E.C. Posner, E.R. Rodemich, S.S. Venkatesh. (1987) The Capacity of the Hopfield Associative Memory. IEEE Trans. Inform. Theory, Vol. IT-33, No. 4, 461-482.
V.N. Vapnik. (1982) Estimation of Dependences Based on Empirical Data, New York: Springer-Verlag.
4,210 | 4,810 | Smooth-projected Neighborhood Pursuit for
High-dimensional Nonparanormal Graph Estimation
Kathryn Roeder
Department of Statistics
Carnegie Mellon University
Tuo Zhao
Department of Computer Science
Johns Hopkins University
Han Liu
Department of Operations Research and Financial Engineering
Princeton University
Abstract
We introduce a new learning algorithm, named smooth-projected neighborhood
pursuit, for estimating high dimensional undirected graphs. In particular, we
focus on the nonparanormal graphical model and provide theoretical guarantees
for graph estimation consistency. In addition to new computational and theoretical
analysis, we also provide an alternative view to analyze the tradeoff between computational efficiency and statistical error under a smoothing optimization framework. Numerical results on both synthetic and real datasets are provided to support
our theory.
1
Introduction
We consider the undirected graph estimation problem for a d-dimensional random vector X =
(X1 , ..., Xd )T (Lauritzen, 1996; Wille et al., 2004; Blei and Lafferty, 2007; Honorio et al., 2009).
More specifically, let V be the set that contains nodes representing the d variables in X, and E be the
set that contains edges representing the conditional independence relationship among X1 , ..., Xd , we
say that the distribution of X is Markov to G = (V, E) if Xi is independent of Xj given X\{i,j} for
all (i, j) ?
/ E, where X\{i,j} = {Xk : k 6= i, j}. Our goal is to recover G based on n independent
observations of X.
Most existing methods for high dimensional graph estimation assume that the random vector X
follows a Gaussian distribution, i.e., X ~ N(μ, Σ). Under this parametric assumption, the graph estimation problem can be solved by estimating the sparsity pattern of the precision matrix Ω = Σ^{-1}, i.e., the nodes i and j are connected if and only if Ω_{ij} ≠ 0. The problem of estimating the sparsity pattern of Ω is also called covariance selection in Dempster (1972). There are two major
approaches for learning high dimensional Gaussian graphical models: (i) graphical lasso (Yuan and
Lin, 2007; Friedman et al., 2007; Banerjee et al., 2008) and (ii) neighborhood pursuit (Meinshausen
and Bühlmann, 2006). The graphical lasso maximizes the ℓ1-penalized Gaussian likelihood and
simultaneously estimates the precision matrix ? and graph G. In contrast, the neighborhood pursuit
method maximizes the ℓ1-penalized pseudo-likelihood and can only estimate the graph G. Scalable
software packages such as glasso and huge have been developed to implement these algorithms
(Friedman et al., 2007; Zhao et al., 2012). Theoretically, both methods are consistent in graph
recovery for Gaussian models under certain regularity conditions. However, Ravikumar et al. (2011)
suspect that the neighborhood pursuit approach has a better sample complexity in graph recovery
than the graphical lasso. Moreover, these two methods are often observed to behave differently on
real datasets in practical applications.
In Liu et al. (2009), a semiparametric nonparanormal model is proposed to relax the restrictive normality assumption. More specifically, they assume that there exists a set of strictly monotone transformations f = (f_j)_{j=1}^d, such that the transformed random vector f(X) = (f_1(X_1), . . . , f_d(X_d))^T follows a Gaussian distribution, i.e., f(X) ~ N(0, Ω^{-1}). Liu et al. (2009) show that for the nonparanormal distribution, the graph G can also be estimated by examining the sparsity pattern of Ω.
Different methods have been proposed to infer the nonparanormal model in high dimensions. In Liu
et al. (2012), a rank-based estimator named nonparanormal SKEPTIC is proposed to directly estimate Σ. Their main idea is to calculate a rank-correlation matrix (either based on the Spearman's rho or Kendall's tau correlation) and plug the estimated correlation matrix into the graphical lasso to estimate Ω and graph G. Such a procedure has been proven to be robust and achieve the same parametric rates of convergence as the graphical lasso (Liu et al., 2012). However, how to combine the
nonparanormal SKEPTIC estimator with the neighborhood pursuit approach is still an open problem.
The main challenge is that the possible indefiniteness of the rank-based correlation matrix estimates
could lead to a non-convex computational formulation. Such potential non-convexity challenges
both computational and theoretical analysis.
In this paper, we bridge this gap by proposing a novel smooth-projected neighborhood pursuit
method. The main idea is to project the possibly indefinite nonparanormal SKEPTIC correlation
matrix estimator into the cone of all positive semi-definite matrices with respect to a smoothed elementwise ℓ∞-norm. Such a projection step is closely related to the dual smoothing approach in Nesterov (2005). We provide both computational and theoretical analysis of the derived algorithm. Computationally, our proposed smoothed elementwise ℓ∞-norm has nice structure so that we can develop an efficient fast proximal gradient solver with a provable convergence rate O(1/√ε) (ε is the desired accuracy of the objective value, Nesterov (1988)). Theoretically, we provide sufficient
conditions to guarantee that the proposed smooth-projected neighborhood pursuit approach is graph
estimation consistent.
In addition to new computational and statistical analysis, we further provide an alternative view
to analyze the fundamental tradeoff between computational efficiency and statistical error under the
smoothing optimization framework. Existing literature (Nesterov, 2005; Chen et al., 2012) considers
the dual smoothing approach as a tradeoff between computational efficiency and approximation
error. To avoid a large approximation error, they need to restrict the smoothness and obtain a slower rate (O(1/ε) vs. O(1/√ε)). However, we directly consider the statistical error introduced by the
smoothing approach, and show that the obtained estimator preserves the good statistical properties
without losing the computational efficiency. Thus we get the good sides of both worlds.
The rest of this paper is organized as follows: The next section reviews the nonparanormal SKEPTIC
in Liu et al. (2012); Section 3 introduces the smooth-projected neighborhood pursuit and derives
the fast proximal gradient algorithm; Section 4 explores the statistical properties of the procedure;
Sections 5 and 6 present results on both simulated and real datasets. Due to the space limit, most of the technical details are put in a significantly extended version of this paper (Zhao et al., 2013).
In addition, Zhao et al. (2013) also contains more thorough numerical experiments and detailed
comparison with other competitors.
2
Background
We first introduce some notation. Let v = (v_1, . . . , v_d)^T ∈ R^d. We define the vector norms ||v||_1 = ∑_j |v_j|, ||v||_2 = (∑_j v_j^2)^{1/2}, and ||v||_∞ = max_j |v_j|. Let A = [A_{jk}] ∈ R^{d×d} and B = [B_{jk}] ∈ R^{d×d} be two symmetric matrices. We define the matrix operator norms ||A||_1 = max_k ∑_j |A_{jk}|, ||A||_∞ = max_j ∑_k |A_{jk}|, ||A||_2 = max_{||v||_2 = 1} ||Av||_2, and the elementwise norms |||A|||_1 = ∑_{j,k} |A_{jk}|, |||A|||_∞ = max_{j,k} |A_{jk}|, ||A||_F^2 = ∑_{j,k} A_{jk}^2. We denote λ_min(A) and λ_max(A) as the smallest and largest eigenvalues of A. The inner product of A and B is denoted by ⟨A, B⟩ = tr(A^T B), where tr(·) is the trace operator.
We denote the subvector of v with the j-th entry removed by v_{\j} = (v_1, . . . , v_{j-1}, v_{j+1}, . . . , v_d)^T ∈ R^{d-1}. In a similar notation, we denote the i-th row of A with its j-th entry removed by A_{i,\j}. If I is a set of indices, then the sub-matrix of A with both column and row indices in I is denoted by A_{II}.
We then introduce the nonparanormal graphical model. The nonparanormal (nonparametric normal)
distribution was initially motivated by the sparse additive models (Ravikumar et al., 2009). It aims
at separately modeling the marginal distribution and conditional independence structure. The formal
definition is as follows,
Definition 2.1 (Nonparanormal Distribution, Liu et al. (2009)). Let f = {f_1, ..., f_d} be a collection of non-decreasing univariate functions and Σ* ∈ R^{d×d} be a correlation matrix with diag(Σ*) = 1. We say a d-dimensional random variable X = (X_1, ..., X_d)^T follows a nonparanormal distribution, denoted by X ~ NPN_d(f, Σ*), if

    f(X) = (f_1(X_1), ..., f_d(X_d))^T ~ N(0, Σ*).    (2.1)
The nonparanormal family is equivalent to the Gaussian copula family for continuous distributions
(Klaassen and Wellner, 1997; Tsukahara, 2005; Liu et al., 2009). Similar to the Gaussian graphical
model, the nonparanormal graphical model also encodes the conditional independence graph by the
sparsity pattern of the precision matrix Ω* = (Σ*)^{-1}. More details can be found in (Liu et al.,
2009).
Recently, Liu et al. (2012) propose a rank-based procedure, named nonparanormal SKEPTIC, for
learning nonparanormal graphical models. More specifically, let x_1, ..., x_n with x_i = (x_{i1}, ..., x_{id})^T be n independent observations of X. We define the Spearman's rho and Kendall's tau correlation coefficients as

    Spearman's rho:  ρ̂_{jk} = ∑_{i=1}^n (r_j^i − r̄_j)(r_k^i − r̄_k) / sqrt( ∑_{i=1}^n (r_j^i − r̄_j)^2 · ∑_{i=1}^n (r_k^i − r̄_k)^2 ),    (2.2)

    Kendall's tau:   τ̂_{jk} = 2/(n(n−1)) · ∑_{i<i'} sign( (x_{ij} − x_{i'j})(x_{ik} − x_{i'k}) ),    (2.3)

where r_j^i denotes the rank of x_{ij} among x_{1j}, . . . , x_{nj} and r̄_j = (1/n) ∑_{i=1}^n r_j^i = (n + 1)/2. Both the
Spearman's rho and Kendall's tau correlations are rank-based and invariant to univariate monotone transformations. The nonparanormal SKEPTIC estimators are defined as Ŝ^ρ = [Ŝ^ρ_{jk}] ∈ R^{d×d} and Ŝ^τ = [Ŝ^τ_{jk}] ∈ R^{d×d}, calculated from

    Ŝ^ρ_{jk} = 2 sin( (π/6) ρ̂_{jk} )  and  Ŝ^τ_{jk} = sin( (π/2) τ̂_{jk} ).    (2.4)

Here the sin(·) transformations correct the population bias. Ŝ^ρ and Ŝ^τ avoid explicitly calculating the marginal transformation functions {f_j}_{j=1}^d and have been shown to achieve the optimal parametric rates of convergence (Liu et al., 2012). Since Liu et al. (2012) suggest that Ŝ^ρ and Ŝ^τ have very similar performance, for notational simplicity, we simply omit the superscripts (ρ and τ) and use Ŝ instead. Theoretically, Liu et al. (2012) establish the following concentration bound of the nonparanormal SKEPTIC estimator, which is a sufficient condition to achieve graph estimation consistency in high dimensions.
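As a rough illustration, the following Python sketch (our own, not the authors' implementation; scipy's rank-correlation routines are assumed) computes the SKEPTIC estimate of Eq. (2.4) from a data matrix:

import numpy as np
from scipy.stats import kendalltau, spearmanr

def skeptic_correlation(X, stat="kendall"):
    """X: (n, d) data matrix; returns the d x d SKEPTIC estimate."""
    n, d = X.shape
    S = np.eye(d)
    for j in range(d):
        for k in range(j + 1, d):
            if stat == "kendall":
                tau, _ = kendalltau(X[:, j], X[:, k])
                S[j, k] = np.sin(np.pi / 2 * tau)      # tau version of Eq. (2.4)
            else:
                rho, _ = spearmanr(X[:, j], X[:, k])
                S[j, k] = 2 * np.sin(np.pi / 6 * rho)  # rho version of Eq. (2.4)
            S[k, j] = S[j, k]
    return S  # symmetric with unit diagonal, but possibly indefinite

The returned matrix is exactly the object whose possible indefiniteness motivates the projection developed in the next section.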
Lemma 2.2 (Nonparanormal SKEPTIC, Liu et al. (2012)). Given the nonparanormal SKEPTIC estimator Ŝ, for large enough n, we have Ŝ satisfying

    P( |||Ŝ − Σ*|||_∞ ≤ 8πκ ) ≥ 1 − d^2 exp(−nκ^2).    (2.5)
In the next section we will introduce our new smooth-projected neighborhood pursuit method and
show that it also admits a similar concentration bound.
3
Smooth-Projected Neighborhood Pursuit
Similar to the neighborhood pursuit, our smooth-projected neighborhood pursuit also solves a collection of ℓ1-penalized least squares problems as follows,

    B̂_{\j,j} = argmin_{B_{j,j}=0}  B_{\j,j}^T S̃_{\j,\j} B_{\j,j} − 2 S̃_{\j,j}^T B_{\j,j} + λ ||B_{\j,j}||_1   for all j = 1, ..., d,    (3.1)

where S̃ is a positive semi-definite replacement of the nonparanormal SKEPTIC estimator Ŝ. (3.1) can be efficiently solved by existing solvers such as the coordinate descent algorithm (Friedman et al., 2007). Let I_j denote the set of vertices that are the neighbors of node j, and J_j denote the set of vertices that are not; then we obtain Î_j = {k : B̂_{jk} ≠ 0} and Ĵ_j = {k : B̂_{jk} = 0}. Thus we can eventually get the graph estimator Ĝ by combining all Î_j's.
3.1
Smoothed Elementwise ℓ∞-norm
Our proposed method starts with the following projection problem,

    S̄ = argmin_S |||Ŝ − S|||_∞   s.t.  S ⪰ 0.    (3.2)

From the triangle inequality and the fact that Σ* is a feasible solution to (3.2), we have

    |||Σ* − S̄|||_∞ = |||Σ* − Ŝ + Ŝ − S̄|||_∞ ≤ |||Ŝ − S̄|||_∞ + |||Ŝ − Σ*|||_∞ ≤ 2 |||Ŝ − Σ*|||_∞.    (3.3)
Then by combining Lemma 2.2 and (3.3), we can show that S̄ concentrates to Σ* with a rate similar to Lemma 2.2. However, (3.2) is computationally expensive due to the non-smooth elementwise ℓ∞-norm. To overcome this challenge, we apply the dual smoothing approach in Nesterov (2005) to efficiently solve (3.2) with a controllable loss in accuracy. More specifically, for any matrix A ∈ R^{d×d}, we exploit Fenchel's dual representation of the elementwise ℓ∞-norm to obtain its smooth surrogate as follows,

    |||A|||_∞^μ = max_{|||U|||_1 ≤ 1}  ⟨U, A⟩ − (μ/2) ||U||_F^2,    (3.4)

where μ > 0 is the smoothing parameter, and the second term is the proximity function of U. We call |||A|||_∞^μ the smoothed elementwise ℓ∞-norm. A closed form solution to (3.4) is characterized in the following lemma.
Lemma 3.1. Equation (3.4) has a closed form solution Ũ, with

    Ũ_{jk} = sign(A_{jk}) · max( |A_{jk}|/μ − λ, 0 ),    (3.5)

where λ is the minimum non-negative constant such that |||Ũ|||_1 ≤ 1.
By utilizing a suitable pivotal quantity, we can efficiently obtain λ with the expected computational complexity O(d^2). More details of the algorithm can be found in Zhao et al. (2013). The smoothed elementwise ℓ∞-norm is a smooth convex function. Let A = Ŝ − S; we can evaluate its gradient using (3.5) as follows,

    ∂|||Ŝ − S|||_∞^μ / ∂S = ( ∂|||Ŝ − S|||_∞^μ / ∂(Ŝ − S) ) · ( ∂(Ŝ − S) / ∂S ) = −Ũ.    (3.6)

Since Ũ is essentially a soft thresholding function, it is continuous in S with the Lipschitz constant μ^{-1}. In the next section, we will show that by considering the following alternative optimization problem

    S̃ = argmin_S |||Ŝ − S|||_∞^μ   s.t.  S ⪰ 0,    (3.7)
we can also obtain a good correlation estimator without losing computational efficiency.
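For concreteness, here is a small Python sketch (ours; names are illustrative) of Lemma 3.1 and the gradient in (3.6): the maximizer Ũ is the Euclidean projection of A/μ onto the elementwise ℓ1 ball, computed by soft-thresholding. The paper obtains λ in expected O(d^2) time via a pivot; the sort-based scheme below is the simpler O(d^2 log d) variant.

import numpy as np

def l1_ball_projection(V):
    # Project a matrix V onto {U : |||U|||_1 <= 1} by soft-thresholding.
    v = np.abs(V).ravel()
    if v.sum() <= 1.0:
        return V.copy()
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.nonzero(u - (css - 1.0) / np.arange(1, u.size + 1) > 0)[0][-1]
    lam = (css[idx] - 1.0) / (idx + 1.0)   # the constant lambda of Lemma 3.1
    return np.sign(V) * np.maximum(np.abs(V) - lam, 0.0)

def smoothed_norm_and_grad(A, mu):
    # Value of |||A|||_inf^mu in (3.4) and the maximizer U_tilde of (3.5);
    # by (3.6), the gradient of |||S_hat - S|||_inf^mu w.r.t. S is -U_tilde.
    U = l1_ball_projection(A / mu)
    value = np.sum(U * A) - mu / 2 * np.sum(U ** 2)
    return value, U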
3.2
Fast Proximal Gradient Algorithm
Equation (3.7) has a minimum eigenvalue constraint, for which we exploit Nesterov (1988) and derive the following fast proximal gradient algorithm. The main idea is to utilize the gradients in previous iterations to help find the descent direction for the current iteration, which eventually achieves a faster convergence rate than the ordinary projected gradient algorithm. In this algorithm, we need two sequences of auxiliary variables M^(t) and W^(t) with M^(0) = W^(0) = S^(0), and a sequence of weights θ_t = 2/(1 + t), where t = 0, 1, 2, . . . .

Before we proceed with the proposed algorithm, we describe Lemma 3.2, which solves the following projection problem

    Π_+(A) = argmin_{B ⪰ 0} ||B − A||_F^2,    (3.8)

where A ∈ R^{d×d} is a symmetric matrix.
Lemma 3.2. Suppose A has the eigenvalue decomposition A = ∑_{j=1}^d λ_j v_j v_j^T, where the λ_j's are the eigenvalues and the v_j's are the corresponding eigenvectors. Let λ̃_j = max{λ_j, 0} for j = 1, ..., d; then we have Π_+(A) = ∑_{j=1}^d λ̃_j v_j v_j^T.
Now we start with the t-th iteration. We first calculate the auxiliary variable M^(t) as

    M^(t) = (1 − θ_t) S^(t−1) + θ_t W^(t−1).    (3.9)

We then evaluate the gradient according to (3.6),

    G^(t) = ∂|||Ŝ − M^(t)|||_∞^μ / ∂M^(t).    (3.10)

We consider the following quadratic approximation

    Q(W, W^(t−1), μ) = |||Ŝ − W^(t−1)|||_∞^μ + ⟨G^(t), W − W^(t−1)⟩ + (θ_t/(2μ)) ||W − W^(t−1)||_F^2.    (3.11)

By simple manipulations and Lemma 3.2, the fast proximal gradient algorithm takes

    W^(t) = argmin_{W ⪰ 0} Q(W, W^(t−1), μ) = Π_+( W^(t−1) − (μ/θ_t) G^(t) ),    (3.12)

where μ works as a step-size here. We further calculate S^(t) for the t-th iteration as follows,

    S^(t) = (1 − θ_t) S^(t−1) + θ_t W^(t).    (3.13)
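A compact Python sketch (ours) of this scheme, reusing smoothed_norm_and_grad from the Section 3.1 sketch; the step-size reading of (3.12) follows our reconstruction of the garbled display above:

import numpy as np

def pi_plus(A):
    # PSD projection by eigenvalue clipping (Lemma 3.2).
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.maximum(w, 0.0)) @ V.T

def smooth_projected_psd(S_hat, mu=1.0, n_iter=50):
    S = pi_plus(S_hat)                       # S^(0)
    W = S.copy()                             # W^(0)
    for t in range(1, n_iter + 1):
        theta = 2.0 / (1.0 + t)
        M = (1 - theta) * S + theta * W      # Eq. (3.9)
        _, U = smoothed_norm_and_grad(S_hat - M, mu)
        G = -U                               # Eq. (3.10) via Eq. (3.6)
        W = pi_plus(W - (mu / theta) * G)    # Eq. (3.12)
        S = (1 - theta) * S + theta * W      # Eq. (3.13)
    return S                                 # approximate solution of (3.7)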
Theorem 3.3. Given the desired accuracy ε such that |||Ŝ − S^(t)|||_∞^μ − |||Ŝ − S̃|||_∞^μ ≤ ε, we need the number of iterations to be at most

    t = sqrt( 2 ||S^(0) − S̃||_F^2 / (με) ) − 1 = O( sqrt( 1/(με) ) ).    (3.14)
Due to the space limit, the detailed proof can be found in the extended draft Zhao et al. (2013). Theorem 3.3 guarantees that our derived algorithm achieves the optimal rate of convergence for minimizing (3.7) over the class of all gradient-based computational algorithms. In the next section, by directly analyzing the tradeoff between computational efficiency and statistical error, we will show that choosing a suitable smoothing parameter μ allows S̃ to concentrate to Σ* with a rate similar to Lemma 2.2 in high dimensions, though (3.7) is not the same as the original projection problem (3.2).
4
Statistical Properties
In this section we present the statistical properties of the proposed method. Due to the space limit, all the proofs of the following theorems can be found in the extended draft Zhao et al. (2013). The next theorem establishes the concentration property of S̃ under the elementwise ℓ∞-norm. This result will be useful to prove the later main theorem.
Theorem 4.1. Given the nonparanormal SKEPTIC estimator Ŝ, for any large enough n, under the conditions that μ ≤ 4πκ and κ > 0, we have the optimum to (3.7), denoted as S̃, satisfying

    P( |||S̃ − Σ*|||_∞ ≤ 18πκ ) ≥ 1 − d^2 exp(−nκ^2).    (4.1)
Theorem 4.1 is non-asymptotic. It implies that we can gain computational efficiency without losing the statistical rate in terms of the elementwise sup-norm as long as μ is reasonably large. We now show that our proposed smooth-projected neighborhood approach recovers the true neighborhood for each node with high probability under the following irrepresentable condition (Zhao and Yu, 2006; Zou, 2006; Wainwright, 2009).
Assumption 1 (Irrepresentable Condition). Recall that I_j and J_j denote the true neighborhood and non-neighborhood of node j respectively. There exist α ∈ (0, 1), θ > 0 and θ^{-1} ≤ ψ < ∞ such that for all j = 1, .., d, the following conditions hold,

    (C.1)  ||Σ*_{J_j I_j} (Σ*_{I_j I_j})^{-1}||_∞ ≤ α;    (4.2)

    (C.2)  λ_min(Σ*_{I_j I_j}) ≥ θ,   ||(Σ*_{I_j I_j})^{-1}||_∞ ≤ ψ.    (4.3)
The proposed projection approach can also be combined with other graph estimation methods such as that of Zhou et al. (2009), in which the conditions above can be relaxed. Here we use this condition for an illustrative purpose to show that the proposed method has a theoretical guarantee.
Theorem 4.2 (Graph Recovery Performance). Let δ = min |B*_{jk}| over all (j, k) such that G_{jk} ≠ 0, where B* ∈ R^{d×d} with B*_{\j,j} = (Σ*_{\j,\j})^{-1} Σ*_{\j,j} and B*_{j,j} = 0. We assume that Σ* satisfies Conditions C.1 and C.2. Let s_j = |I_j| < n and choose λ such that λ ≤ min{δ/ψ, 2}; then there exist positive universal constants c_0 and c_1 such that

    P( Ĵ_j = J_j, Î_j = I_j ) ≥ 1 − s_j^2 exp( −c_1 n δ^2 / (4 s_j^2) ) − s_j^2 exp( −c_1 n δ^2 / s_j^2 ) − (d − s_j) s_j exp( −c_1 n δ^2 / s_j^2 ) − d exp( −c_1 n δ^2 ),    (4.4)

where δ satisfies c_0 sqrt(log d / n) ≤ δ ≤ min{ 1, λ/(2ψ), λ(1−α)/(26ψ), λ(1−α)/(26(ψ+1)), λ/(14ψ), λ/(14ψ^2) }.
Theorem 4.2 is also non-asymptotic. It guarantees that for each individual node, we can correctly recover its neighborhood with high probability. Consequently, the following corollary shows that we can asymptotically recover the underlying graph structure under the given conditions.

Corollary 4.3. Let s = max_{1≤j≤d} s_j; then under the same conditions as in Theorem 4.2, we have P(Ĝ = G) → 1 if the following conditions hold:

    (C.3) α, ψ and θ are constants, which do not scale with the triplet (n, d, s);
    (C.4) The triplet (n, d, s) scales as s^2 (log d + log s)/n → 0 and s^2 log d/(δ^2 n) → 0;
    (C.5) μ scales with (n, d, s) as μ/δ → 0 and s^2 log d/(μ^2 n) → 0.
5
Numerical Simulations
Liu et al. (2012) recommend using Kendall's tau for nonparanormal graph estimation because of its superior robustness compared to Spearman's rho. In this section, we use Kendall's tau in our smooth-projected neighborhood pursuit method. For synthetic data, we use the following four different graphs with 200 nodes (d = 200): (i) Erdős-Rényi graph; (ii) Cluster graph; (iii) Chain graph; (iv) Scale-free graph. We simulate data from Gaussian distributions that are Markov to the above graphs. We adopt the power function g(t) = sign(t)|t|^4 to convert the Gaussian data to nonparanormal data. More details about the data simulation can be found in Zhao et al.
(2013). We use the ROC curve to evaluate the graph estimation performance. Since d > n, the full
solution paths cannot be obtained, therefore we restrict the range of false positive edge discovery
rates to be from 0 to 0.3 for computational convenience.
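As an illustration, the data-generation step reads as the following Python sketch (ours; the exponent in g and the sampler interface are our reading of the setup above):

import numpy as np

def sample_nonparanormal(Omega, n, seed=None):
    # Omega: d x d precision matrix of the latent Gaussian, Markov to a graph.
    rng = np.random.default_rng(seed)
    Sigma = np.linalg.inv(Omega)
    Z = rng.multivariate_normal(np.zeros(Omega.shape[0]), Sigma, size=n)
    return np.sign(Z) * np.abs(Z) ** 4   # marginal transform g(t) = sign(t)|t|^4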
5.1
Our proposed method vs. Nonparanormal SKEPTIC Estimator
We first evaluate the proposed smoothed elementwise ℓ∞-norm projection algorithm. For this, we sampled 100 data points from a 200-dimensional standard normal distribution N(0, I_200). We study the empirical performance of the proposed fast proximal gradient algorithm using different smoothing parameters (μ = 1, 0.5, 0.25, 0.1). The optimization and statistical error curves for different smoothing parameters (averaged over 50 replications) are presented in Figure 1. Figure 1(a) shows the original objective value |||Ŝ − S^(t)|||_∞ vs. the number of iterations. Compared with smaller μ's, we see that choosing μ = 1 reduces the computational burden but increases the approximation error w.r.t. the problem in (3.2). However, Figure 1(b) shows that, in terms of the statistical error |||Σ* − S^(t)|||_∞, μ = 1 performs similarly to the other smaller μ's. Therefore, significant computational efficiency can be gained with little loss of statistical error.
We further compare the graph recovery performance of our proposed method with the naive indefinite nonparanormal SKEPTIC estimator as in Liu et al. (2012). The averaged ROC curves over
100 replications are presented in Figure 2. We see that directly plugging the indefinite nonparanormal SKEPTIC estimator into the neighborhood pursuit results in the worst performance. The
ROC performance drops dramatically due to the non-convexity of the objective function. While
6
0.442
0.440
Statistical Error
0.444
?=1
? = 0.5
? = 0.25
? = 0.1
0.436
0.438
0.035
0.025
0.020
0.015
0.010
Objective Values
0.030
?=1
? = 0.5
? = 0.25
? = 0.1
0
10
20
30
40
50
0
10
20
# of Iterations
30
40
50
# of Iterations
(b) |||?? ? S(t) |||?
b ? S(t) |||?
(a) |||S
Figure 1: The empirical performance using different smoothing parameters. ? = 1 has a similar
performance to the smaller ??s in terms of the estimation error.
0.0
0.1
0.2
0.3
0.4
0.0
0.1
False Positive Rate
0.2
0.3
0.8
0.6
0.5
0.4
True Positive Rate
0.3
0.2
SKEPTIC
Projection
0.3
0.4
SKEPTIC
Projection
0.4
0.0
0.1
False Positive Rate
(a) Erd?os-R?enyi
SKEPTIC
Projection
0.2
0.4
0.5
0.4
0.6
True Positive Rate
0.7
0.6
0.5
True Positive Rate
0.8
0.7
0.6
True Positive Rate
0.8
0.7
1.0
0.9
0.8
0.9
1.0
our smoothed-projected neighborhood pursuit method significantly outperforms the naive indefinite
nonparanormal SKEPTIC estimator.
0.2
0.3
0.4
0.0
0.1
False Positive Rate
(b) Cluster
0.2
0.3
0.4
False Positive Rate
(c) Chain
(d) Scale-free
Figure 2: The averaged ROC curves of the neighborhood pursuit when combined with different
correlation estimators. ?SKEPTIC? represents the indefinite nonparanormal SKEPTIC estimator,
and ?Projection? represents our proposed projection approach.
5.2
Our Proposed Method vs. Naive Neighborhood Pursuit
In this subsection, we conduct similar numerical studies as in Liu et al. (2012) to compare our proposed method with the naive neighborhood pursuit method. The naive neighborhood pursuit directly
exploits the Pearson correlation estimator under the neighborhood pursuit framework. Choosing
n = 100 and d = 200, we use the same experimental setup as in the previous subsection. The
averaged ROC curves over 100 replications are presented in Figure 3. As can be seen, our proposed
projection method outperforms the naive neighborhood pursuit throughout all four different graphs.
[Figure 3: averaged ROC curves (true positive rate vs. false positive rate, restricted to false positive rates from 0 to 0.30) for SNP and NNP on the four graphs: (a) Erdős-Rényi, (b) Cluster, (c) Chain, (d) Scale-free.]
Figure 3: The averaged ROC curves of the neighborhood pursuit when combined with different correlation estimators. "SNP" represents our proposed estimator and "NNP" represents the Pearson estimator. The SNP uniformly outperforms the NNP for all four graphs.
6
Real Data Analysis
In this section we present a real data experiment to compare the nonparanormal graphical model to the Gaussian graphical model. For model selection, we use the stability graph procedure (Meinshausen and Bühlmann, 2010; Liu et al., 2010), which has the following steps: (1) Calculate the solution
path using all the samples, and choose the regularization parameter at the sparsity level 4%; (2)
Randomly choose 10% of all the samples without replacement using the regularization parameter
chosen in (1); (3) Repeat step (2) 500 times and retain the edges that appear with frequencies no
less than 95%.
The topic graph was first used in Blei and Lafferty (2007) to illustrate the idea of correlated topic modeling. The correlated topic model is a hierarchical Bayesian model for abstracting K "topics" that occur in a collection of documents (corpus). By applying the variational EM algorithm, we can estimate the topic proportion for each document and represent it in a K-dimensional simplex (mixed membership). Blei and Lafferty (2007) assume that the topic proportions approximately follow a normal distribution after a logarithmic transformation. Here we are interested in visualizing the
relationship among the topics using an undirected graph: the nodes represent individual topics,
and edges connecting different nodes represent highly related topics. The corpus used in Blei and
Lafferty (2007) contains 16,351 documents with 19,088 unique terms. Similar to Blei and Lafferty
(2007), we choose K = 100 and fit a topic model to the articles published in Science from 1990 to
1999.
Evaluated by the Kolmogorov-Smirnov test, we find that the data for many topics strongly violate the normality assumption (more details can be found in Zhao et al. (2013)). This motivates our choice of the smooth-projected neighborhood pursuit approach. The estimated topic graphs are provided in Figure 4. The smooth-projected neighborhood pursuit generates 6 mid-size modules and 6 small modules, while the naive neighborhood pursuit generates 1 large module, 2 mid-size modules and 6 small modules. The nonparanormal approach discovers more refined structures and improves the interpretability of the obtained graph. For example: (1) Topics closely related to climate change in Antarctica, such as "ice-68", "ozone-23" and "carbon-64", are clustered in the same module; (2) Topics closely related to environmental ecology, such as "monkey-21", "science-4", "environmental-67", "species-86", etc., are clustered in the same module; (3) Topics closely related to modern physics, such as "quantum-29", "magnetic-55", "pressure-92", "solar-62", etc., are clustered in the same module. In contrast, the naive neighborhood pursuit mixes all these different topics in one large module.
[Figure 4: two estimated topic graphs, (a) our proposed method and (b) the naive neighborhood pursuit.]
Figure 4: Two topic graphs illustrating the difference between the estimated topic graphs. The smooth-projected neighborhood pursuit (subfigure (a)) generates 6 mid-size modules and 6 small modules, while the naive neighborhood pursuit (subfigure (b)) generates 1 large module, 2 mid-size modules and 6 small modules.
7
Conclusion and Acknowledgement
In this paper, we study how to estimate the nonparanormal graph using the neighborhood pursuit in conjunction with the possibly indefinite nonparanormal SKEPTIC estimator. Using our proposed smoothed projection approach, the resulting estimator can be used as a positive semi-definite refinement of the nonparanormal SKEPTIC estimator. Our estimator has better graph estimation performance with a theoretical guarantee. Our results suggest that it is possible to gain estimation robustness and
modeling flexibility without losing two important computational structures: convexity and smoothness. The topic modeling experiment demonstrates that our proposed method may lead to more
refined scientific discovery. Han Liu and Tuo Zhao are supported by NSF award IIS-11167308, and
Kathryn Roeder is supported by National Institute of Mental Health grant MH057881.
References
Banerjee, O., Ghaoui, L. E. and d'Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation. Journal of Machine Learning Research 9 485-516.
Blei, D. and Lafferty, J. (2007). A correlated topic model of science. Annals of Applied Statistics 1 17-35.
Chen, X., Lin, Q., Kim, S., Carbonell, J. and Xing, E. (2012). A smoothing proximal gradient method for general structured sparse regression. Annals of Applied Statistics, to appear.
Dempster, A. (1972). Covariance selection. Biometrics 28 157-175.
Friedman, J., Hastie, T., Höfling, H. and Tibshirani, R. (2007). Pathwise coordinate optimization. Annals of Applied Statistics 1 302-332.
Honorio, J., Ortiz, L., Samaras, D., Paragios, N. and Goldstein, R. (2009). Sparse and locally constant Gaussian graphical models. Advances in Neural Information Processing Systems 745-753.
Klaassen, C. and Wellner, J. (1997). Efficient estimation in the bivariate normal copula model: Normal margins are least-favorable. Bernoulli 3 55-77.
Lauritzen, S. (1996). Graphical Models, vol. 17. Oxford University Press, USA.
Liu, H., Han, F., Yuan, M., Lafferty, J. and Wasserman, L. (2012). High dimensional semiparametric Gaussian copula graphical models. Annals of Statistics, to appear.
Liu, H., Lafferty, J. and Wasserman, L. (2009). The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research 10 2295-2328.
Liu, H., Roeder, K. and Wasserman, L. (2010). Stability approach to regularization selection for high dimensional graphical models. Advances in Neural Information Processing Systems.
Meinshausen, N. and Bühlmann, P. (2006). High dimensional graphs and variable selection with the lasso. Annals of Statistics 34 1436-1462.
Meinshausen, N. and Bühlmann, P. (2010). Stability selection. Journal of the Royal Statistical Society, Series B 72 417-473.
Nesterov, Y. (1988). On an approach to the construction of optimal methods of smooth convex functions. Ékonom. i Mat. Metody 24 509-517.
Nesterov, Y. (2005). Smooth minimization of non-smooth functions. Mathematical Programming 103 127-152.
Ravikumar, P., Lafferty, J., Liu, H. and Wasserman, L. (2009). Sparse additive models. Journal of the Royal Statistical Society, Series B 71 1009-1030.
Ravikumar, P., Wainwright, M., Raskutti, G. and Yu, B. (2011). High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electronic Journal of Statistics 5 935-980.
Tsukahara, H. (2005). Semiparametric estimation in copula models. Canadian Journal of Statistics 33 357-375.
Wainwright, M. (2009). Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming. IEEE Transactions on Information Theory 55 2183-2201.
Wille, A., Zimmermann, P., Vranová, E., Fürholz, A., Laule, O., Bleuler, S., Hennig, L., Prelić, A., von Rohr, P., Thiele, L., Zitzler, E., Gruissem, W. and Bühlmann, P. (2004). Sparse graphical Gaussian modeling of the isoprenoid gene network in Arabidopsis thaliana. Genome Biology 5 R92.
Yuan, M. and Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. Biometrika 94 19-35.
Zhao, P. and Yu, B. (2006). On model selection consistency of lasso. Journal of Machine Learning Research 7 2541-2563.
Zhao, T., Liu, H., Roeder, K., Lafferty, J. and Wasserman, L. (2012). The huge package for high-dimensional undirected graph estimation in R. Journal of Machine Learning Research, to appear.
Zhao, T., Roeder, K. and Liu, H. (2013). A smoothing projection approach for high dimensional nonparanormal graph estimation. Tech. rep., Johns Hopkins University.
Zhou, S., van de Geer, S. and Bühlmann, P. (2009). Adaptive lasso for high dimensional regression and Gaussian graphical modeling. Tech. rep., ETH Zurich.
Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101 1418-1429.
4,211 | 4,811 | Label Ranking with Partial Abstention based on
Thresholded Probabilistic Models
Eyke Hüllermeier
Mathematics and Computer Science
Philipps-Universit?at Marburg
Marburg, Germany
[email protected]
Weiwei Cheng
Mathematics and Computer Science
Philipps-Universit?at Marburg
Marburg, Germany
[email protected]
Willem Waegeman
Mathematical Modeling, Statistics and
Bioinformatics, Ghent University
Ghent, Belgium
[email protected]
Volkmar Welker
Mathematics and Computer Science
Philipps-Universit?at Marburg
Marburg, Germany
[email protected]
Abstract
Several machine learning methods allow for abstaining from uncertain predictions. While being common for settings like conventional classification, abstention has been studied much less in learning to rank. We address abstention for the
label ranking setting, allowing the learner to declare certain pairs of labels as being
incomparable and, thus, to predict partial instead of total orders. In our method,
such predictions are produced via thresholding the probabilities of pairwise preferences between labels, as induced by a predicted probability distribution on the
set of all rankings. We formally analyze this approach for the Mallows and the
Plackett-Luce model, showing that it produces proper partial orders as predictions
and characterizing the expressiveness of the induced class of partial orders. These
theoretical results are complemented by experiments demonstrating the practical
usefulness of the approach.
1
Introduction
In machine learning, the notion of ?abstention? commonly refers to the possibility of refusing a
prediction in cases of uncertainty. In classification with a reject option, for example, a classifier
may abstain from a class prediction if making no decision is considered less harmful than making an
unreliable and hence potentially false decision [7, 1]. The same idea could be used in the context of
ranking, too, where a reject option appears to be even more interesting than in classification. While
a conventional classifier has only two choices, namely to predict a class or to abstain, a ranker can
abstain to some degree: The order relation predicted can be more or less complete, ranging from a
total order to the empty relation in which all alternatives are declared incomparable.
Our focus is on so-called label ranking problems [16, 10], to be introduced more formally in Section 2 below. Label ranking has a strong relationship with the standard setting of multi-class classification, but each instance is now associated with a complete ranking of all labels instead of a single
label. Typical examples, which also highlight the need for abstention, include the ranking of candidates for a given job and the ranking of products for a given customer. In such applications, it is
desirable to avoid the expression of unreliable or unwarranted preferences. Thus, if a ranker cannot
reliably decide whether a first label should precede a second one or the other way around, it should
abstain from this decision and instead declare these alternatives as being incomparable. Abstaining
in a consistent way, the relation thus produced should form a partial order [6].
In Section 4, we propose and analyze a new approach for abstention in label ranking that builds
on existing work on partial orders in areas like decision theory, probability theory and discrete
mathematics. We predict partial orders by thresholding parameterized probability distributions on
rankings, using the Plackett-Luce and the Mallows model. Roughly speaking, this approach is able
to avoid certain inconsistencies of a previous approach to label ranking with abstention [6], to be
discussed in Section 3. By making stronger model assumptions, our approach simplifies the construction of consistent partial order relations. In fact, it enjoys a number of appealing theoretical
properties. Apart from assuring proper partial orders as predictions, it allows for an exact characterization of the expressivity of a class of thresholded probability distributions in terms of the number
of partial orders that can be produced. The proposal and formal analysis of this approach constitute
our main contributions.
Last but not least, as will be shown in Section 5, the theoretical advantages of our approach in
comparison with [6] are also reflected in practical improvements.
2
Label Ranking with Abstention
In the setting of label ranking, each instance x from an instance space X is associated with a total order ≻ of a fixed set of class labels Y = {y_1, . . . , y_M}, that is, a complete, transitive, and antisymmetric relation ≻ on Y, where y_i ≻ y_j indicates that y_i precedes y_j in the order. Since a ranking can be considered as a special type of preference relation, we shall also say that y_i ≻ y_j indicates a preference for y_i over y_j (given the instance x).

Formally, a total order ≻ can be identified with a permutation π of the set [M] = {1, . . . , M}, such that π(i) is the position of y_i in the order. We denote the class of permutations of [M] (the symmetric group of order M) by Ω. Moreover, we identify ≻ with the mapping (relation) R : Y^2 → {0, 1} such that R(i, j) = 1 if y_i ≻ y_j and R(i, j) = 0 otherwise.
The goal in label ranking is to learn a "label ranker" in the form of an X → Ω mapping. As training data, a label ranker uses a set of instances x_n (n ∈ [N]), together with preference information in the form of pairwise comparisons y_i ≻ y_j of some (but not necessarily all) labels in Y, suggesting that instance x_n prefers label y_i to y_j.
The prediction accuracy of a label ranker is assessed by comparing the true ranking π with the prediction π̂, using a distance measure D on rankings. Among the most commonly used measures is the Kendall distance, which is defined by the number of inversions, that is, pairs {i, j} ⊂ [M] such that sign(π(i) − π(j)) ≠ sign(π̂(i) − π̂(j)). Besides, the sum of squared rank distances, ∑_{i=1}^M (π(i) − π̂(i))^2, is often used as an alternative distance; it is closely connected to Spearman's rank correlation (Spearman's rho), which is an affine transformation of this number to the interval [−1, +1].
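For illustration, a small Python sketch (ours) of the Kendall distance used throughout:

def kendall_distance(pi, sigma):
    # pi[i] and sigma[i] are the rank positions of label i in the two rankings.
    M = len(pi)
    return sum(1 for i in range(M) for j in range(i + 1, M)
               if (pi[i] - pi[j]) * (sigma[i] - sigma[j]) < 0)

# identical rankings have distance 0; reversed rankings, M(M-1)/2
assert kendall_distance([1, 2, 3], [1, 2, 3]) == 0
assert kendall_distance([1, 2, 3], [3, 2, 1]) == 3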
Motivated by the idea of a reject option in classification, Cheng et al. [6] introduced a variant of the above setting in which the label ranker is allowed to partially abstain from a prediction. More specifically, it is allowed to make predictions in the form of a partial order Q instead of a total order R: If Q(i, j) = Q(j, i) = 0, the ranker abstains on the label pair (y_i, y_j) and instead declares these labels as being incomparable. Abstaining in a consistent way, Q should still be antisymmetric and transitive, hence a partial order relation. Note that a prediction Q can be associated with a confidence set, i.e., a subset of Ω supposed to cover the true ranking π, namely the set of all linear extensions of this partial order: C(Q) = { π ∈ Ω | Q(i, j) = 1 ⇒ (π(i) < π(j)) for all i, j ∈ [M] }.
3
Previous Work
Despite a considerable amount of work on ranking in general and learning to rank in particular, the
literature on partial rankings is relatively sparse. Worth mentioning is work on a specific type of
partial orders, namely linear orders of unsorted or tied subsets (partitions, bucket orders) [13, 17].
However, apart from the restriction to this type of order relation, the problems addressed in these
works are quite different from our goals. The authors in [17] specifically address computational
aspects that arise when working with distributions on partially ranked data, while [13] seeks to
discover an underlying bucket order from pairwise precedence information between the items.
More concretely, in the context of the label ranking problem, the aforementioned work [6] is the
only one so far that addresses the more general problem of learning to predict partial orders. This
method consists of two main steps and can be considered as a pairwise approach in the sense that,
as a point of departure for a prediction, a valued preference relation P : Y^2 → [0, 1] is produced, where P(i, j) is interpreted as a measure of support of the pairwise preference y_i ≻ y_j. Support is commonly interpreted in terms of probability, hence P is assumed to be reciprocal: P(i, j) = 1 − P(j, i) for all i, j ∈ [M].
Then, in a second step, a partial order Q is derived from P via thresholding:

    Q(i, j) = ⟦P(i, j) > q⟧ = { 1, if P(i, j) > q;  0, otherwise },    (1)

where 1/2 ≤ q < 1 is a threshold. Thus, the idea is to predict only those pairwise preferences that are sufficiently likely, while abstaining on pairs (i, j) for which P(i, j) ≤ 1/2.
The first step of deriving the relation P is realized by means of an ensemble learning technique:
Training an ensemble of standard label rankers, each of which provides a prediction in the form of
a total order, P(i, j) is defined by the fraction of ensemble members voting for y_i ≻ y_j. Other
possibilities are of course conceivable, and indeed, the only important point to notice here is that the
preference degrees P (i, j) are essentially independent of each other. Or, stated differently, they do
not guarantee any specific properties of the relation P except being reciprocal. In particular, P does
not necessarily obey any type of transitivity property.
For the relation Q derived from P via thresholding, this has two important consequences: First, if
the threshold q is not large enough, then Q may have cycles. Thus, not all thresholds in [1/2, 1) are
actually feasible. In particular, if q = 1/2 cannot be chosen, this also implies that the method may
not be able to predict a total order as a special case. Second, even if Q does not have cycles, it is not
guaranteed to be transitive.
To overcome these problems, the authors devise an algorithm (of complexity O(|Y|^3)) that finds the smallest feasible threshold q_min, namely the threshold that guarantees Q(i, j) = ⟦P(i, j) > q⟧ to be cycle-free for each threshold q ∈ [q_min, 1). Then, since Q may still be non-transitive, it is "repaired" in a second step by replacing it with its transitive closure [23].
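A minimal Python sketch (ours, not the implementation of [6]) of this threshold-then-repair step; the cycle test via the closure's diagonal is our shortcut:

import numpy as np

def threshold_and_close(P, q):
    M = P.shape[0]
    Q = (P > q).astype(int)                 # thresholding as in Eq. (1)
    np.fill_diagonal(Q, 0)
    for k in range(M):                      # Warshall transitive closure
        for i in range(M):
            for j in range(M):
                Q[i, j] |= Q[i, k] & Q[k, j]
    # a cycle in the thresholded relation shows up on the closure's diagonal
    if np.trace(Q) > 0:
        raise ValueError("q below q_min: thresholded relation has cycles")
    return Q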
4
Predicting Partial Orders based on Probabilistic Models
In order to tackle the problems of the approach in [6], our idea is to restrict the relation P in (1) so
as to exclude the possibility of cycles and intransitivity from the very beginning, thereby avoiding
the need for a post-reparation of a prediction. To this end, we take advantage of methods for label
ranking that produce (parameterized) probability distributions over ? as predictions. Our main theoretical result is to show that thresholding pairwise preferences induced by such distributions, apart
from being semantically meaningful due to their clear probabilistic interpretation, yields preference
relations with the desired properties, that is, partial order relations Q.
4.1
Probabilistic Models
Different types of probability models for rank data have been studied in the statistical literature
[11, 20], including the Mallows model and the Plackett-Luce (PL) model as the most popular representatives of the class of distance-based and stagewise models, respectively. Both models have
recently attracted attention in machine learning [14, 15, 22, 21, 18] and, in particular, have been
used in the context of label ranking.
A label ranking method that produces predictions expressed in terms of the Mallows model is proposed in [5]. The standard Mallows model
    P(π | θ, π_0) = exp( −θ D(π, π_0) ) / φ(θ)    (2)

is determined by two parameters: The ranking π_0 ∈ Ω is the location parameter (mode, center ranking) and θ ≥ 0 is a spread parameter. Moreover, D is a distance measure on rankings, and φ = φ(θ) is a normalization factor that depends on the spread (but, provided the right-invariance of D, not on π_0). Obviously, the Mallows model assigns the maximum probability to the center ranking π_0. The larger the distance D(π, π_0), the smaller the probability of π becomes. The spread parameter θ determines how quickly the probability decreases, i.e., how peaked the distribution is around π_0. For θ = 0, the uniform distribution is obtained, while for θ → ∞, the distribution converges to the one-point distribution that assigns probability 1 to π_0 and 0 to all other rankings.
Alternatively, the Plackett-Luce (PL) model was used in [4]. This model is specified by a parameter vector v = (v_1, v_2, . . . , v_M) ∈ R_+^M:

    P(π | v) = ∏_{i=1}^M  v_{π^{-1}(i)} / ( v_{π^{-1}(i)} + v_{π^{-1}(i+1)} + . . . + v_{π^{-1}(M)} ).    (3)
It is a generalization of the well-known Bradley-Terry model for the pairwise comparison of alternatives, which specifies the probability that "a wins against b" in terms of P(a ≻ b) = v_a / (v_a + v_b). Obviously, the larger v_a in comparison to v_b, the higher the probability that a is chosen. Likewise, the larger the parameter v_i in (3) in comparison to the parameters v_j, j ≠ i, the higher the probability that y_i appears on a top rank.
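A small Python sketch (ours) of the PL model: Eq. (3) for a complete ranking, and the pairwise marginal, which under PL coincides with the Bradley-Terry probability:

import numpy as np

def pl_probability(pi, v):
    # pi[i] = rank position (1-based) of label i; v = PL parameters.
    order = np.argsort(pi)                  # labels from best to worst rank
    w = np.asarray(v, dtype=float)[order]
    suffix = np.cumsum(w[::-1])[::-1]       # suffix[i] = w[i] + ... + w[M-1]
    return float(np.prod(w / suffix))       # Eq. (3)

def pl_pairwise(v, i, j):
    return v[i] / (v[i] + v[j])             # P(y_i before y_j) under PL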
4.2
Thresholded Relations are Partial Orders
Given a probability distribution P on the set of rankings Ω, the probability of a pairwise preference y_i ≻ y_j (and hence the corresponding entry in the preference relation P) is obtained through marginalization:

    P(i, j) = P(y_i ≻ y_j) = ∑_{π ∈ E(i,j)} P(π),    (4)

where E(i, j) denotes the set of linear extensions of the incomplete ranking y_i ≻ y_j, i.e., the set of all rankings π ∈ Ω with π(i) < π(j). We start by stating a necessary and sufficient condition on P(i, j) for the threshold relation (1) to result in a (strict) partial order, i.e., an antisymmetric, irreflexive and transitive relation.
Lemma 1. Let P be a reciprocal relation and let Q be given by (1). Then Q defines a strict partial order relation for all q ∈ [1/2, 1) if and only if P satisfies partial stochastic transitivity, i.e., P(i, j) > 1/2 and P(j, k) > 1/2 implies P(i, k) ≥ min(P(i, j), P(j, k)) for each triple (i, j, k) ∈ [M]^3.
This lemma was first proven by Fishburn [12], together with a number of other characterizations of
subclasses of strict partial orders by means of transitivity properties on P (i, j). For example, replacing partial stochastic transitivity by interval stochastic transitivity (now a condition on quadruples
instead of triplets) leads to a characterization of interval orders, a subclass of strict partial orders; a
partial order Q on [M]^2 is called an interval order if each i ∈ [M] can be associated with an interval (l_i, u_i) ⊂ R such that Q(i, j) = 1 ⇔ u_i ≤ l_j.
Our main theoretical results below state that thresholding (4) yields a strict partial order relation Q,
both for the PL and the Mallows model. Thus, we can guarantee that a strict partial order relation
can be predicted by simple thresholding, and without the need for any further reparation. Moreover,
the whole spectrum of threshold parameters q ∈ [1/2, 1) can be used.

Theorem 1. Let P in (4) be the PL model (3). Moreover, let Q be given by the threshold relation (1). Then Q defines a strict partial order relation for all q ∈ [1/2, 1).

Theorem 2. Let P in (4) be the Mallows model (2), with a distance D having the so-called transposition property. Moreover, let Q be given by the threshold relation (1). Then Q defines a strict partial order relation for all q ∈ [1/2, 1).
Theorem 1 directly follows from the strong stochastic transitivity of the PL model [19]. The proof
of Theorem 2 is slightly more complicated and given below. Moreover, the result for Mallows is less
general in the sense that D must obey the transposition property. Actually, however, this property is
not very restrictive and indeed satisfied by most of the commonly used distance measures, including
the Kendall distance (see, e.g., [9]). In the following, we always assume that the distance D in the
Mallows model (2) satisfies this property.
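As a sanity check on Theorem 2, the following brute-force Python sketch (ours) computes the pairwise marginals (4) under a Mallows model with Kendall distance on a small label set, thresholds them, and verifies that the result is a strict partial order:

import itertools, math
import numpy as np

def mallows_pairwise(M, theta, pi0):
    perms = list(itertools.permutations(range(M)))  # sigma[i] = rank of label i
    probs = []
    for sigma in perms:
        d = sum((pi0[i] - pi0[j]) * (sigma[i] - sigma[j]) < 0
                for i in range(M) for j in range(i + 1, M))  # Kendall distance
        probs.append(math.exp(-theta * d))                   # Eq. (2), unnormalized
    probs = np.array(probs) / sum(probs)
    P = np.zeros((M, M))
    for p, sigma in zip(probs, perms):                       # Eq. (4)
        for i in range(M):
            for j in range(M):
                if sigma[i] < sigma[j]:
                    P[i, j] += p
    return P

P = mallows_pairwise(4, theta=0.5, pi0=(0, 1, 2, 3))
Q = (P > 0.6).astype(int)
transitive = all(not (Q[i, j] and Q[j, k]) or Q[i, k]
                 for i in range(4) for j in range(4) for k in range(4))
assert transitive and not np.any(Q & Q.T)   # strict partial order, as claimed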
Definition 1. A distance D on Ω is said to have the transposition property if the following holds: Let π and π' be rankings and let (i, j) be an inversion (i.e., i < j and (π(i) − π(j))(π'(i) − π'(j)) < 0). Let π'' ∈ Ω be constructed from π' by swapping y_i and y_j, that is, π''(i) = π'(j), π''(j) = π'(i) and π''(m) = π'(m) for all m ∈ [M] \ {i, j}. Then, D(π, π'') ≤ D(π, π').
Lemma 2. If y_i precedes y_j in the center ranking π_0 in (2), then P(y_i ≻ y_j) ≥ 1/2. Moreover, if P(y_i ≻ y_j) > q ≥ 1/2, then y_i precedes y_j in the center ranking π_0.
Proof. For every ranking π ∈ Ω, let b(π) = π if y_i precedes y_j in π; otherwise, b(π) is defined by swapping y_i and y_j in π. Obviously, b(π) defines a bijection between E(i, j) and E(j, i). Moreover, since D has the transposition property, D(b(π), π_0) ≤ D(π, π_0) for all π ∈ Ω. Therefore, according to the Mallows model, P(b(π)) ≥ P(π), and hence

    P(y_i ≻ y_j) = ∑_{π ∈ E(i,j)} P(π) ≥ ∑_{π ∈ E(i,j)} P(b^{-1}(π)) = ∑_{π ∈ E(j,i)} P(π) = P(y_j ≻ y_i).

Since, moreover, P(y_i ≻ y_j) = 1 − P(y_j ≻ y_i), it follows that P(y_i ≻ y_j) ≥ 1/2. The second part immediately follows from the first one by way of contradiction: If y_j would precede y_i, then P(y_j ≻ y_i) ≥ 1/2, and therefore P(y_i ≻ y_j) = 1 − P(y_j ≻ y_i) ≤ 1/2 ≤ q.
Lemma 3. If y_i precedes y_j and y_j precedes y_k in the center ranking π_0 in (2), then P(y_i ≻ y_k) ≥ max( P(y_i ≻ y_j), P(y_j ≻ y_k) ).
Proof. We show that P(y_i ≻ y_k) ≥ P(y_i ≻ y_j); the second inequality P(y_i ≻ y_k) ≥ P(y_j ≻ y_k) is shown analogously. Let E(i, j, k) denote the set of linear extensions of y_i ≻ y_j ≻ y_k, i.e., the set of rankings π ∈ Ω in which y_i precedes y_j and y_j precedes y_k. Now, for every π ∈ E(k, j, i), define b(π) by first swapping y_k and y_j and then y_k and y_i in π. Obviously, b(π) defines a bijection between E(k, j, i) and E(j, i, k). Moreover, due to the transposition property, D(b(π), π_0) ≤ D(π, π_0), and therefore P(b(π)) ≥ P(π) under the Mallows model. Consequently, since E(i, j) = E(i, j, k) ∪ E(i, k, j) ∪ E(k, i, j) and E(i, k) = E(i, k, j) ∪ E(i, j, k) ∪ E(j, i, k), it follows that

    P(y_i ≻ y_k) − P(y_i ≻ y_j) = ∑_{π ∈ E(i,k)\E(i,j)} P(π) − ∑_{π ∈ E(i,j)\E(i,k)} P(π) = ∑_{π ∈ E(j,i,k)} P(π) − ∑_{π ∈ E(k,j,i)} P(π) = ∑_{π ∈ E(k,j,i)} ( P(b(π)) − P(π) ) ≥ 0.
Lemmas 2 and 3 immediately imply the following lemma.
Lemma 4. The relation P derived via P(i, j) = P(y_i ≻ y_j) from the Mallows model satisfies the following property (closely related to strong stochastic transitivity): If P(i, j) > q and P(j, k) > q, then P(i, k) ≥ max( P(i, j), P(j, k) ), for all q ≥ 1/2 and all i, j, k ∈ [M].
Proof of Theorem 2. Since P(yi ≻ yj) = 1 − P(yj ≻ yi), it obviously follows that Q(yi, yj) = 1
implies Q(yj, yi) = 0. Moreover, Lemma 4 implies that Q is transitive. Consequently, Q defines a
proper partial order relation.
The above statements guarantee that a strict partial order relation can be predicted by simple thresholding, without the need for any further repair. Moreover, the whole spectrum of threshold
parameters q ∈ [1/2, 1) can be used. As an aside, we mention that strict partial orders can also be
produced by thresholding other probabilistic preference learning models. All pairwise preference
models based on utility scores satisfy strong stochastic transitivity. This includes traditional statistical models such as the Thurstone Case 5 model [25] and the Bradley-Terry model [3], as well
as modern learning models such as [8, 2]. These models are usually not applied in label ranking
settings, however.
4.3 Expressivity of the Model Classes
So far, we have shown that predictions produced by thresholding probability distributions on rankings are proper partial orders. Roughly speaking, this is accomplished by restricting P in (1) to
specific valued preference relations (namely marginals (4) of the Mallows or the PL model), in contrast to the approach of [6], where P can be any (reciprocal) relation. From a learning point of
view, one may wonder to what extent this restriction limits the expressivity of the underlying model
class. This expressivity is naturally defined in terms of the number of different partial orders (up to
isomorphism) that can be represented in the form of a threshold relation (1). Interestingly, we can
show that, in this sense, the approach based on PL is much more expressive than the one based on
the Mallows model.
Theorem 3. Let Q_M denote the set of different partial orders (up to isomorphism) that can be
represented as a threshold relation Q defined by (1), where P is derived according to (4) from the
Mallows model (2) with D the Kendall distance. Then |Q_M| = M.
Proof. By the right invariance of D, different choices of π0 lead to the same set of isomorphism
classes Q_M. Hence we may assume that π0 is the identity. By Theorem 6.3 in [20], the (M × M)-matrix with entries P(i, j) is a Toeplitz matrix, i.e., P(i, j) = P(i + 1, j + 1) for all i, j ∈ [M − 1],
with entries strictly increasing along rows, i.e., P(i, j) < P(i, j + 1) for 1 ≤ i < j < M. Thus, by
Theorem 2, thresholding leads to M different partial orders.
More specifically, the partial orders in Q_M have a very simple structure that is purely rank-dependent: the first structure is the total order induced by π0. The second structure is obtained by
removing all preferences between direct neighbors, i.e., labels yi and yj with adjacent ranks
(|π(i) − π(j)| = 1). The third structure is obtained from the second one by removing all preferences between 2-neighbors, i.e., labels yi and yj with |π(i) − π(j)| = 2, and so forth (see the sketch below).
The cardinality of Q_M increases for distance measures D other than Kendall (like Spearman's rho
or footrule), mainly since in general the matrix with entries P(i, j) is no longer Toeplitz. However,
for some measures, including the two just mentioned, the matrix will still be symmetric with respect
to the antidiagonal, i.e., P(i, j) = P(M + 1 − i, M + 1 − j) for j > i, and have entries increasing
along rows. While the exact counting of Q_M appears to be very difficult in such cases, an argument
similar to the one used in the proof of the next result shows that |Q_M| is bounded by the number of
symmetric Dyck paths, and hence |Q_M| ≤ (M choose ⌊M/2⌋) (see Ch. 7 [24]). It is a simple
consequence of Theorem 4 below that exponentially more orders can be produced based on the PL
model.
Lemma 5. For fixed q ∈ (1/2, 1) and a set A of subsets of [M], the following are equivalent:
(i) The set A is the set of maximal antichains of a partial order induced by (4) on [M] for some
v1 > ... > vM > 0.
(ii) The set A is a set of mutually incomparable intervals that cover [M].
Proof. The fact that (i) implies (ii) is a simple calculation. Now assume (ii). For any interval
{a, a + 1, ..., b} ∈ A we must have v_c/(v_c + v_d) ≤ q for any c, d ∈ {a, a + 1, ..., b} for which
c < d. From v_a ≥ v_c > v_d ≥ v_b it follows that

v_a/(v_a + v_b) = 1/(1 + v_b/v_a) ≥ 1/(1 + v_d/v_c) = v_c/(v_c + v_d).

Thus, it suffices to show that there are real numbers v1 > ... > vM > 0 such that v_a/(v_a + v_b) ≤ q
for any {a, a + 1, ..., b} ∈ A and v_c/(v_c + v_d) > q for any c < d which are not contained in an
antichain from A. We proceed by induction on M.

The induction base M = 1 is trivial. Assume M ≥ 2. Since all elements of A are intervals and
any two intervals are mutually incomparable, it follows that M is contained in exactly one set from
A (possibly the singleton {M}). Let A′ be the set A without the unique interval {a, a + 1, ..., M}
containing M. Then A′ is a set of intervals that cover a proper subset [M′] of [M] and fulfill the
assumptions of (ii) for [M′]. Hence by induction there is a choice of real numbers v1 > ... >
v_{M′} > 0 such that the set of maximal antichains of the order on [M′] induced by (4) is exactly A′.

We consider two cases: (i) a = M′ + 1. Then, by the considerations above, we need to choose
numbers v_{M′} > v_a > v_{a+1} > ... > v_M > 0 such that v_a/(v_a + v_M) ≤ q and
v_{M′}/(v_{M′} + v_a) > q. The latter implies

v_c/(v_c + v_d) = 1/(1 + v_d/v_c) ≥ v_{M′}/(v_{M′} + v_a) > q

for c ≤ M′ < a = M′ + 1 ≤ d ≤ M. But such numbers are easily checked to exist. (ii) a ≤ M′.
Since M′ is contained in at least one set from A′ and since this set is not contained in
{a, a + 1, ..., M}, it follows that q ≥ v_{a−1}/(v_{a−1} + v_{M′}) > v_a/(v_a + v_{M′}).
In particular, (1 − q)v_a < q·v_{M′}. Now choose v_{M′+1} > v_{M′+2} > ... > v_M > 0 such that
q·v_{M′} > q·v_{M′+1} > q·v_M > v_a(1 − q). Note that here q > 1/2 is essential. Then one checks that all
desired inequalities are fulfilled.
Theorem 4. Let Q_PL denote the set of different partial orders (up to isomorphism) that can be
represented as a threshold relation Q defined by (1), where P is derived according to (4) from the
PL model (3). For any given threshold q ∈ [1/2, 1), the cardinality of this set is given by the M-th
Catalan number:

|Q_PL| = (1/(M + 1)) · (2M choose M).
Sketch of Proof. Without loss of generality, we can assume the parameters of the PL model to satisfy

v1 > ... > vM > 0.   (5)
Consider the (M × M)-matrix with entries P(i, j). By (5), the entries of this matrix are strictly
increasing along rows and strictly decreasing along columns. From the set
{(i, j) | 0 ≤ i ≤ M + 1, 0 ≤ i − 1 ≤ j ≤ M}, we remove those (i, j), 1 ≤ i < j ≤ M, for which
P(i, j) is above the given threshold. As a picture in the plane, this yields a shape whose upper right
boundary can be identified with a Dyck path: a path on integer points consisting of 2M moves
(1, 0), (0, 1) from position (1, 0) to (M + 1, M) and staying weakly above the (i + 1, i)-diagonal.
It is immediate that each path uniquely determines its partial order. Moreover, it is well known
that these Dyck paths are counted by the M-th Catalan number.
In order to verify that any Dyck path is induced by a suitable choice of parameters, one establishes a
bijection between Dyck paths from (1, 0) to (M + 1, M) and maximal sets of mutually incomparable
intervals (in the natural order) in [M]. To this end, consider for a Dyck path a peak at position (i, j),
i.e., a point on the path where a (1, 0) move is followed by a (0, 1) move. Then we must have j ≥ i,
and we identify this peak with the interval {i, i + 1, ..., j}. It is a simple yet tedious task to check
that assigning to a Dyck path the set of intervals associated to its peaks is indeed a bijection to the
set of maximal sets of mutually incomparable intervals in [M]. Again, it is easy to verify that the set
of intervals associated to a Dyck path is the set of maximal antichains of the partial order determined
by the Dyck path. Now, the assertion follows from Lemma 5.
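The count in Theorem 4 can be probed empirically for small M: sample utility vectors, threshold the pairwise marginals v_i/(v_i + v_j), and count the distinct pair sets that occur. A sketch is given below; the sampling budget and utility range are arbitrary, and random search of this kind typically, though not provably, recovers all Catalan(M) orders.

```python
import random
from math import comb

def catalan(M):
    return comb(2 * M, M) // (M + 1)

def pl_threshold_order(v, q):
    """Pair set {(i, j) : v_i / (v_i + v_j) > q} for utilities v1 > ... > vM."""
    M = len(v)
    return frozenset((i, j) for i in range(M) for j in range(M)
                     if i != j and v[i] / (v[i] + v[j]) > q)

M, q = 4, 0.6
random.seed(0)
seen = set()
for _ in range(20000):           # random PL utilities, sorted decreasingly
    v = sorted((random.uniform(0.01, 1.0) for _ in range(M)), reverse=True)
    seen.add(pl_threshold_order(v, q))
print(len(seen), "distinct orders found; Catalan(%d) = %d" % (M, catalan(M)))
```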
Again, using Lemma 5, one checks that (5) implies that partial orders induced by (4) in the PL
model have a unique labeling up to poset automorphism. Hence our count is a count of isomorphism
classes.
We note that, from the above proof, it follows that the partial orders in Q_PL are the so-called
semiorders. We refer the reader to Ch. 8, §2 [26] for more details. Indeed, the first part of the proof
of Theorem 4 resembles the proof of Ch. 8 (2.11) [26]. Moreover, we remark that Q_M is not only
smaller in size than Q_PL, but the former is indeed strictly included in the latter: Q_M ⊂ Q_PL. This
can easily be seen by defining the weights v_i of the PL model as v_i = 2^(M−i) (i ∈ [M]), in which
case the matrix with entries P(i, j) = 2^(j−i)/(1 + 2^(j−i)) is Toeplitz.
Finally, given that we have been able to derive explicit (combinatorial) expressions for |Q_M| and
|Q_PL|, it might be interesting to note that, somewhat surprisingly at first sight, no such expression
exists for the total number of partial orders on M elements.
5 Experiments
We complement our theoretical results by an empirical study, in which we compare the different
approaches on the SUSHI data set,1 a standard benchmark for preference learning. Based on a
food-chain survey, this data set contains complete rankings of 10 types of sushi provided by 5000
customers, where each customer is characterized by 11 numeric features.
Our evaluation is done by measuring the tradeoff between correctness and completeness achieved
by varying the threshold q in (1). More concretely, we use the measures that were proposed in [6]:
correctness is measured by the gamma rank correlation between the true ranking and the predicted
partial order (with values in [−1, +1]), and completeness is defined by one minus the fraction of
pairwise comparisons on which the model abstains. Obviously, the two criteria are conflicting:
increasing completeness typically comes along with reducing correctness and vice versa, at least if
the learner is effective in the sense of abstaining from those decisions that are indeed most uncertain.
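These two measures are straightforward to compute. The sketch below follows the definitions just given: the gamma rank correlation is evaluated on the pairs the model compares, and completeness is one minus the abstention fraction; the true ranking and predicted relation in the example are hypothetical.

```python
def correctness_completeness(true_ranking, Q):
    """Gamma rank correlation and completeness of a predicted partial order Q
    (a set of pairs) against a true ranking (list of labels, best first)."""
    M = len(true_ranking)
    pos = {y: r for r, y in enumerate(true_ranking)}
    concordant = discordant = abstained = 0
    for i in range(M):
        for j in range(i + 1, M):
            if (i, j) in Q or (j, i) in Q:       # the model compares yi and yj
                pred_i_first = (i, j) in Q
                true_i_first = pos[i] < pos[j]
                if pred_i_first == true_i_first:
                    concordant += 1
                else:
                    discordant += 1
            else:                                # the model abstains
                abstained += 1
    total = M * (M - 1) // 2
    gamma = (concordant - discordant) / max(concordant + discordant, 1)
    return gamma, 1.0 - abstained / total

print(correctness_completeness([0, 1, 2, 3], {(0, 1), (0, 2), (0, 3)}))  # (1.0, 0.5)
```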
¹ Available online at http://www.kamishima.net/sushi
[Figure 1 plot: two panels showing correctness (y-axis) versus completeness (x-axis, 0 to 1); the left panel has curves for derived-MS, derived-MK, derived-PL, and direct, the right panel has curves for derived-PL and direct.]
Figure 1: Tradeoff between completeness and correctness for the SUSHI label ranking data set:
existing pairwise method (direct) versus the probabilistic approach based on the PL model and
the Mallows model with Spearman's rho (MS) and Kendall (MK) as distance measure. The figure on
the right corresponds to the original data set with rankings of size 10, while the figure on the left
shows results for rankings of size 6.
We compare the original method of [6] with our new proposal, calling the former direct, because
the pairwise preference degrees on which we threshold are estimated directly, and the latter derived,
because these degrees are derived from probability distributions on Ω. As a label ranker, we used
the instance-based approach of [5] with a neighborhood size of 50. We conducted a 10-fold cross
validation and averaged the completeness/correctness curves over all test instances. For computational reasons, we restricted the experiments with the Mallows model to a reduced data set with
only six labels, namely the first six of the ten sushis. (The aforementioned label ranker is based
on an instance-wise maximum likelihood estimation of the probability distribution P on Ω; in the
case of the Mallows model, this involves the estimation of the center ranking π0, which is done by
searching the discrete space Ω, that is, a space of size M!.)
The experimental results are summarized in Figure 1. The main conclusion that can be drawn from
these results is that, as expected, our probabilistic approach does indeed achieve a better tradeoff
between completeness and correctness than the original one, especially in the sense that it spans
a wider range of values for the former. Indeed, with the direct approach, it is not possible to go
beyond a completeness of around 0.4, whereas our probabilistic methods always allow for predicting
complete rankings (i.e., to achieve a completeness of 1). Besides, we observe that the tradeoff curves
of our new methods are even lifted toward a higher level of correctness. Among the probabilistic
models, the PL model performs particularly well, although the differences are rather small.
Similar results are obtained on a number of other benchmark data sets for label ranking. These
results can be found in the supplementary material.
6 Summary and Conclusions
The idea of producing predictions in the form of a partial order by thresholding a (valued) pairwise
preference relation is meaningful in the sense that a learner abstains on the most unreliable comparisons. While this idea was first realized in [6] in an ad hoc manner, we have put it on a firm
mathematical grounding that guarantees consistency and, via variation of the threshold, allows for
exploiting the whole spectrum between a complete ranking and an empty relation.
Both variants of our probabilistic approach, the one based on the Mallows model and the other based
on the PL model, are theoretically sound, semantically meaningful, and show strong performance in
first experimental studies. The PL model appears especially appealing due to its expressivity,
and is also advantageous from a computational perspective.
An interesting question to be addressed in future work concerns the possibility of improving this
model further, namely by increasing its expressivity while still assuring consistency. In fact, the
transitivity properties guaranteed by PL seem to be stronger than what is strictly needed. In this
regard, we plan to study models based on the notion of Luce-decomposability [20], which include
PL as a special case.
References
[1] P.L. Bartlett and M.H. Wegkamp. Classification with a reject option using a hinge loss. Journal of Machine Learning Research, 9:1823–1840, 2008.
[2] E.V. Bonilla, S. Guo, and S. Sanner. Gaussian process preference elicitation. In Proc. NIPS 2010, pages 262–270, Vancouver, Canada, 2010. MIT Press.
[3] R. Bradley and M. Terry. Rank analysis of incomplete block designs. I: The method of paired comparisons. Biometrika, 39:324–345, 1952.
[4] W. Cheng, K. Dembczyński, and E. Hüllermeier. Label ranking based on the Plackett-Luce model. In Proc. ICML 2010, pages 215–222, Haifa, Israel, 2010. Omnipress.
[5] W. Cheng, J. Hühn, and E. Hüllermeier. Decision tree and instance-based learning for label ranking. In Proc. ICML 2009, pages 161–168, Montreal, Canada, 2009. Omnipress.
[6] W. Cheng, M. Rademaker, B. De Baets, and E. Hüllermeier. Predicting partial orders: Ranking with abstention. In Proc. ECML/PKDD 2010, pages 215–230, Barcelona, Spain, 2010. Springer.
[7] C. Chow. On optimum recognition error and reject tradeoff. IEEE Transactions on Information Theory, 16(1):41–46, 1970.
[8] W. Chu and Z. Ghahramani. Preference learning with Gaussian processes. In Proc. ICML 2005, pages 137–144, Bonn, Germany, 2005. ACM.
[9] D. Critchlow, M. Fligner, and J. Verducci. Probability models on rankings. Journal of Mathematical Psychology, 35:294–318, 1991.
[10] O. Dekel, C.D. Manning, and Y. Singer. Log-linear models for label ranking. In Proc. NIPS 2003, Vancouver, Canada, 2003. MIT Press.
[11] P. Diaconis. Group representations in probability and statistics, volume 11 of Lecture Notes–Monograph Series. Institute of Mathematical Statistics, Hayward, CA, 1988.
[12] P.C. Fishburn. Binary choice probabilities: on the varieties of stochastic transitivity. Journal of Mathematical Psychology, 10:321–352, 1973.
[13] A. Gionis, H. Mannila, K. Puolamäki, and A. Ukkonen. Algorithms for discovering bucket orders from data. In Proc. KDD 2006, pages 561–566, Philadelphia, US, 2006. ACM.
[14] I.C. Gormley and T.B. Murphy. A latent space model for rank data. In Proc. ICML 2006, pages 90–102, Pittsburgh, USA, 2006. Springer.
[15] J. Guiver and E. Snelson. Bayesian inference for Plackett-Luce ranking models. In Proc. ICML 2009, pages 377–384, Montreal, Canada, 2009. Omnipress.
[16] S. Har-Peled, D. Roth, and D. Zimak. Constraint classification: a new approach to multiclass classification. In Proc. ALT 2002, pages 365–379, Lübeck, Germany, 2002. Springer.
[17] G. Lebanon and Y. Mao. Nonparametric modeling of partially ranked data. Journal of Machine Learning Research, 9:2401–2429, 2008.
[18] T. Lu and C. Boutilier. Learning Mallows models with pairwise preferences. In Proc. ICML 2011, pages 145–152, Bellevue, USA, 2011. Omnipress.
[19] R. Luce and P. Suppes. Handbook of Mathematical Psychology, chapter Preference, Utility and Subjective Probability, pages 249–410. Wiley, 1965.
[20] J. Marden. Analyzing and Modeling Rank Data. Chapman and Hall, 1995.
[21] M. Meila and H. Chen. Dirichlet process mixtures of generalized Mallows models. In Proc. UAI 2010, pages 358–367, Catalina Island, USA, 2010. AUAI Press.
[22] T. Qin, X. Geng, and T.Y. Liu. A new probabilistic model for rank aggregation. In Proc. NIPS 2010, pages 1948–1956, Vancouver, Canada, 2010. Curran Associates.
[23] M. Rademaker and B. De Baets. A threshold for majority in the context of aggregating partial order relations. In Proc. WCCI 2010, pages 1–4, Barcelona, Spain, 2010. IEEE.
[24] R.P. Stanley. Enumerative Combinatorics, Vol. 2. Cambridge University Press, 1999.
[25] L. Thurstone. A law of comparative judgment. Psychological Review, 79:281–299, 1927.
[26] W.T. Trotter. Combinatorics and partially ordered sets: dimension theory. The Johns Hopkins University Press, 1992.
Action-Model Based Multi-agent Plan Recognition
Hankz Hankui Zhuo
Department of Computer Science
Sun Yat-sen University, Guangzhou, China 510006
[email protected]
Qiang Yang
Huawei Noah?s Ark Research Lab
Core Building 2, Hong Kong Science Park, Shatin, Hong Kong
[email protected]
Subbarao Kambhampati
Department of Computer Science and Engineering
Arizona State University, Tempe, Arizona, US 85287-5406
[email protected]
Abstract
Multi-Agent Plan Recognition (MAPR) aims to recognize dynamic
team structures and team behaviors from the observed team traces (activity sequences) of a set of intelligent agents. Previous MAPR approaches required a library of team activity sequences (team plans) be
given as input. However, collecting a library of team plans to ensure
adequate coverage is often difficult and costly. In this paper, we relax
this constraint, so that team plans are not required to be provided beforehand. We assume instead that a set of action models are available.
Such models are often already created to describe domain physics; i.e.,
the preconditions and effects of effects actions. We propose a novel approach for recognizing multi-agent team plans based on such action
models rather than libraries of team plans. We encode the resulting
MAPR problem as a satisfiability problem and solve the problem using
a state-of-the-art weighted MAX-SAT solver. Our approach also allows
for incompleteness in the observed plan traces. Our empirical studies
demonstrate that our algorithm is both effective and efficient in comparison to state-of-the-art MAPR methods based on plan libraries.
1 Introduction
Multi-Agent Plan Recognition (MAPR) seeks an explanation of observed team-action traces. From
the activity sequences of a set of agents, MAPR aims to identify the dynamic team structures and
team behaviors of agents. The MAPR problem has important applications in analyzing data from
automated monitoring, situation awareness, intelligence surveillance and analysis [4]. Many approaches have been proposed in the past to automatically recognize team plans given an observed
team trace as input. For instance, Banerjee et al. [4, 3] proposed to formalize MAPR with a new
model. They solved MAPR problems using a first-cut approach, provided that a fully observed team
trace and a library of full team plans were given as input. To relax the full observability constraint,
Zhuo and Li [19] proposed a MARS system to recognize team plans based on partially observed team
traces and libraries of partial team plans.
Despite the success of these previous approaches, they all assume that a library of team plans has
been collected beforehand and provided as input. However, there are many applications where collecting and maintaining a library of team plans is difficult and costly. For example, in military operations, it is difficult and expensive to collect team plans, since activities of team-mates may consume
lots of resources such as ammunition and human labor. Collecting a smaller library is not an option,
since it is infeasible to recognize team plans if they are not covered by the library. It is thus useful to
design approaches for solving the MAPR problem where we do not require libraries of team plans
to be known.

In this paper, we advocate replacing the plan library with a compact action model of the domain. In
contrast to plan libraries, action models are easier to specify (in terms of preconditions and effects
of each type of activity). Moreover, in principle action models provide full coverage to recognize
any team plans. The specific algorithmic framework we develop is called DARE, which stands for
Domain-model based multi-Agent REcognition. DARE does not
require plan libraries to be given as input. Instead, DARE takes as input a team trace and a set of
action models. DARE also allows the observed traces to be incomplete, i.e., there can be missing
activities in a trace. To fill these gaps, DARE leverages all possible constraints both from the plan
traces and from its knowledge of how a plan works in terms of its causal structure. To do this, DARE
first builds a set of hard constraints that encode the correctness property of the team plans, and a set
of soft constraints that encode the optimal utility property of team plans based on the input team
trace and action models. After that, it solves all these constraints using a state-of-the-art weighted
MAX-SAT solver, such as MaxSatz [10], and converts the solution to a set of team plans as output.
We organize the rest of the paper as follows. In the next section, we first introduce the related
work including single agent plan recognition and multi-agent plan recognition, and then give our
formulation of the MAPR problem. After that, we present DARE and discuss its properties. Finally,
we evaluate DARE in the experimental section and present our conclusions.
2 Related work
The plan recognition problem has been addressed by many researchers. Kautz and Allen proposed
an approach to recognize plans based on parsing observed actions as sequences of subactions,
essentially modeling this knowledge as context-free rules in an "action grammar" [9]. Bui et al. presented approaches to probabilistic plan recognition problems [5, 7]. Instead of using a library of
plans, Ramírez and Geffner [12] proposed an approach to solving the plan recognition problem using slightly modified planning algorithms, assuming the action models were given as input. Note
that action models can be created by experts or learnt by previous systems, such as ARMS [18] and
LAMP [20]. Singla and Mooney proposed an approach to abductive reasoning using a first-order
probabilistic logic to recognize plans [15]. Amir and Gal addressed a plan recognition approach to
recognizing student behaviors using virtual science laboratories [1]. Ramírez and Geffner exploited
off-the-shelf classical planners to recognize probabilistic plans [13]. Despite the success of these
systems, a limitation is that they all focus only on single agent plans.
For multi-agent plan recognition, Sukthankar and Sycara presented an approach that leveraged several types of agent resource dependencies and temporal ordering constraints in the plan library to
prune the size of the plan library considered for each observation trace [16]. Avrahami-Zilberbrand
and Kaminka preferred a library of single agent plans to team plans, but identified dynamic teams
based on the assumption that all agents in a team execute the same plan under the temporal constraints of that plan [2]. The constraint that only agents executing a common activity can form a
team can be severely limiting when team-mates execute coordinated but different behaviors.

Instead of using the assumption that agents in the same team should execute a common activity,
besides the approaches introduced in the introduction section [4, 3, 19], Sadilek and Kautz provided
a unified framework to model and recognize activities that involve multiple related individuals
playing a variety of roles [14]; Masato et al. proposed a probabilistic model based on conditional
random fields to automatically recognize the composition of teams and team activities in relation to
a plan [11]. In these systems, although coordinated activities can be recognized, they either assume
there is a set of real-world GPS data available, or assume that team traces and team plans can be
fully observed. In this paper, we allow that: (1) agents can execute coordinated but different activities
in a team, (2) team traces can be partial, and (3) neither GPS data nor team plans are needed.
3 Problem Definition
We first define a team trace. Let Φ = {φ1, φ2, ..., φn} be a set of agents, and O = [o_tj] be an
observed team trace, where o_tj is the observed activity executed by agent φj at time step t, with
0 < t ≤ T and 0 < j ≤ n. A team trace O is partial if some elements in O are empty (denoted by
null), i.e., there are missing values in O.
We then define an action model. In the STRIPS language [6], an action model is a tuple
⟨a, Pre(a), Add(a), Del(a)⟩, where a is an action name with zero or more parameters, Pre(a) is
a list of preconditions of a, Add(a) is a list of adding effects, and Del(a) is a list of deleting effects.
A set of action models is denoted by A. An action name with zero or more parameters is called an
activity. An observed activity o_tj in a partial team trace O is either an instantiated action of A or
noop or null, where noop is an empty activity that does nothing.
An initial state s0 is a set of propositions that describes a closed world state from which the team
trace O starts to be observed. In other words, activities at time step t = 0 can be applied in the initial
state s0 . When we say an activity can be applied in a state, we mean the activity?s preconditions are
satisfied by the state. A set of goals G, each of which is a set of propositions, describes the probable
targets of the team trace. We assume s0 and G can both be observed by sensing devices.
A team is composed of a subset of agents Φ′ = {φ_j1, φ_j2, ..., φ_jm}. A team plan is defined as
p = [a_tk] with 0 < t ≤ T and 0 < k ≤ m, where m ≤ n and a_tk is an activity or noop. A set of
correct team plans P is required to have properties P1-P5:
P1: P is a partition of the team trace O, i.e., each element of O should be in exactly one p of P and
each activity of p should be an element of O;

P2: P should cover all the observed activities, i.e., for each p ∈ P and 0 < t ≤ T and 0 < k ≤ m,
if o_t,jk ≠ null, then a_tk = o_t,jk, where a_tk ∈ p and o_t,jk ∈ O;

P3: P is executable starting from s0 and achieves some goal g ∈ G, i.e., a_t• is executable in state
s_{t−1} for all 0 < t ≤ T, and achieves g after step T, where a_t• = ⟨a_t1, a_t2, ..., a_tm⟩;

P4: Each team plan p ∈ P is associated with a likelihood λ(p): P → R⁺. λ(p) specifies the
likelihood of recognizing team plan p, and can be affected by many factors, including the
number of agents in the team, the cost of executing p, etc. The value of λ(p) is composed of
two parts, λ1(N_activity(p)) and λ2(N_agent(p)):

λ(p) = 1 / (λ1(N_activity(p)) + λ2(N_agent(p))),

where λ1(N_activity(p)) depends on N_activity(p), the number of activities of p, and
λ2(N_agent(p)) depends on N_agent(p), the number of agents (i.e., team-mates) of p.
Generally, λ1(N_activity(p)) (or λ2(N_agent(p))) becomes larger when N_activity(p) (or
N_agent(p)) increases. Note that more agents lead to a smaller likelihood (or a larger cost),
since it is harder to coordinate more agents to successfully execute p. Thus, we require that
λ2 satisfy the condition λ2(n1 + n2) > λ2(n1) + λ2(n2). For each goal g ∈ G, the output
plan set P should have the largest likelihood, i.e.,

P = arg max_{P′} Σ_{p∈P′} λ(p),

where P′ ranges over team-plan sets that achieve g (see the sketch after this list). Note that
we presume that teams are (usually) organized with the largest likelihood.

P5: Any pair of interacting agents must belong to the same team plan. In other words, if an agent
φi interacts with another agent φj, i.e., φi provides or deletes some conditions of φj, then
φi and φj should be in the same team, and the activities of agents in the same team compose a
team plan. Agents exist in exactly one team plan, i.e., team plans do not share any common
agents.
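The following small sketch computes λ under the concrete choices λ1(k) = k and λ2(k) = k² used in the experiments, on the two team plans of Figure 1(II). The dictionary representation of a team plan is hypothetical, and counting noop cells in N_activity is an assumption about the convention.

```python
def team_plan_likelihood(plan, lam1=lambda k: k, lam2=lambda k: k * k):
    """lambda(p) = 1 / (lam1(N_activity) + lam2(N_agent)); here a plan is a
    dict {agent: [activity per time step]} and noop cells are counted as
    activities (an assumed convention)."""
    n_agents = len(plan)
    n_activities = sum(len(steps) for steps in plan.values())
    return 1.0 / (lam1(n_activities) + lam2(n_agents))

def total_likelihood(plans):
    """The objective maximized over candidate partitions P (property P4)."""
    return sum(team_plan_likelihood(p) for p in plans)

p1 = {"h1": ["a1", "noop", "noop", "a4", "noop"],
      "h3": ["noop", "a8", "a2", "noop", "noop"]}
p2 = {"h2": ["a3", "a10", "a11", "a5", "noop"],
      "h4": ["noop", "a9", "a6", "noop", "noop"],
      "h5": ["a7", "noop", "noop", "noop", "a12"]}
print(total_likelihood([p1, p2]))   # 1/14 + 1/24
```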
Our multi-agent plan recognition problem can be stated as follows: given a partially observed team
trace O, a set of action models A, an initial state s0, and a set of goals G, the recognition algorithm
must output a set of team plans P with the maximal likelihood to achieve some goal g ∈ G, where P
satisfies the properties P1-P5.
(I). Inputs.

(a). team trace (rows are time steps T = 1, ..., 5; columns are the hoist agents h1-h5):

    T   h1    h2    h3    h4    h5
    1   a1    a3    noop  null  a7
    2   noop  null  null  null  noop
    3   null  null  a2    a6    null
    4   a4    a5    null  noop  noop
    5   noop  null  noop  noop  null

    a1: unstack(C B); a2: stack(B A); a3: unstack(E D); a4: stack(C B);
    a5: stack(F D); a6: putdown(D); a7: pickup(G);

(b). initial state (drawn as block stacks in the figure): A on the table; C on B;
E on D on F; G on the table.

(c). goals {g} (drawn as block stacks): C on B on A; G on F on D; E on the table.

(d). action models:

    pickup(?x - block)
      precondition: (handempty) (clear ?x) (ontable ?x)
      effect: (holding ?x) (not (handempty)) (not (clear ?x)) (not (ontable ?x))

    putdown(?x - block)
      precondition: (holding ?x)
      effect: (clear ?x) (ontable ?x) (handempty) (not (holding ?x))

    unstack(?x - block ?y - block)
      precondition: (on ?x ?y) (clear ?x) (handempty)
      effect: (holding ?x) (clear ?y) (not (clear ?x)) (not (handempty)) (not (on ?x ?y))

    stack(?x - block ?y - block)
      precondition: (holding ?x) (clear ?y)
      effect: (not (holding ?x)) (not (clear ?y)) (clear ?x) (handempty) (on ?x ?y)
(II). Outputs.

(a). team plan p1 (agents h1 and h3):

    T   h1    h3
    1   a1    noop
    2   noop  a8
    3   noop  a2
    4   a4    noop
    5   noop  noop

(b). team plan p2 (agents h2, h4, and h5):

    T   h2    h4    h5
    1   a3    noop  a7
    2   a10   a9    noop
    3   a11   a6    noop
    4   a5    noop  noop
    5   noop  noop  a12

    a1: unstack(C B); a2: stack(B A); a3: unstack(E D); a4: stack(C B);
    a5: stack(F D); a6: putdown(D); a7: pickup(G); a8: pickup(B);
    a9: unstack(D F); a10: putdown(E); a11: pickup(F); a12: stack(G F);
Figure 1: An example of the input and output of our problem from the blocks domain. (I) is an input
example, where "(b) the initial state s0" is the set of propositions {(ontable A) (ontable B) (ontable
F) (ontable G) (on C B) (on D F) (on E D) (clear A) (clear C) (clear E) (clear G) (handempty)};
"(c) goals {g}" is a goal set composed of one goal g, where g is composed of the propositions
{(ontable A) (ontable D) (ontable E) (on B A) (on C B) (on F D) (on G F) (clear C) (clear G)
(clear E)}. (II) is an output example, which is the set of team plans {p1, p2}.
Figure 1 shows an example multi-agent plan recognition problem from the blocks world¹. In part (a) of
Figure 1(I), the first column indicates the time steps from 1 to 5, and h1, ..., h5 are five hoist agents.
The value null indicates a missing observation, and noop indicates the empty activity. We assume
λ1 and λ2 are defined by λ1(k) = k and λ2(k) = k². Based on λ1 and λ2, the corresponding
output is shown in Figure 1(II), which is the set of two team plans {p1, p2}.
4 DARE Algorithm Framework
Algorithm 1 below describes the plan recognition process in DARE. In the subsequent subsections,
we describe each step of this algorithm in detail.
Algorithm 1 An overview of our algorithm framework
input: a partial team trace O, an initial state s0, a set of goals G, and a set of action models A;
output: a set of team plans P;
1: max = 0;
2: for each g ∈ G do
3:   build a set of candidate activities Ā;
4:   build a set of hard constraints based on Ā;
5:   build a set of soft constraints based on the likelihood λ;
6:   solve all the constraints using a weighted MAX-SAT solver, with ⟨max′, sol⟩ as output;
7:   if max′ > max then
8:     max = max′;
9:     convert the solution sol to a set of team plans P′, and let P = P′;
10:  end if
11: end for
12: return P;
4.1 Candidate activities
In Step 3 of Algorithm 1, we build a set of candidate activities Ā by instantiating each parameter
of the action models in A with all objects in the initial state s0, team trace O, and goal g. We perform
the following phases. We first scan each parameter of the propositions (or activities) in s0, O, and g,
and collect sets of different objects (note that each set of objects corresponds to a type; e.g., there
is a type "block" in the blocks domain). Second, we substitute each parameter of each action model
in A with its corresponding objects (the correspondence relationship is reflected by type, i.e., the
parameters of action models and objects should belong to the same type), which results in a set of
different activities, called candidate activities Ā. Note that we also add a noop activity to Ā.

For example, there are seven objects {A, B, C, D, E, F, G} of type "block" in Figure 1(I). The set
of candidate activities Ā is {noop, pickup(A), pickup(B), pickup(C), pickup(D), pickup(E),
pickup(F), pickup(G), ...}, where the dots indicate the other activities that are generated
by instantiating the parameters of the actions putdown, stack, and unstack.

¹ http://www.cs.toronto.edu/aips2000/
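Step 3 amounts to typed grounding of the action schemas. The sketch below illustrates this for the blocks example; the schema table and the string encoding of activities are illustrative assumptions rather than DARE's actual data structures.

```python
from itertools import product

# Hypothetical action schemas for blocks: name -> list of parameter types
SCHEMAS = {"pickup": ["block"], "putdown": ["block"],
           "stack": ["block", "block"], "unstack": ["block", "block"]}

def candidate_activities(objects_by_type):
    """Ground every schema with every type-consistent tuple of distinct
    objects, and add the empty activity noop (Step 3 of Algorithm 1)."""
    cands = ["noop"]
    for name, types in SCHEMAS.items():
        for args in product(*(objects_by_type[t] for t in types)):
            if len(set(args)) == len(args):      # no repeated parameter
                cands.append("%s(%s)" % (name, " ".join(args)))
    return cands

blocks = {"block": ["A", "B", "C", "D", "E", "F", "G"]}
acts = candidate_activities(blocks)
print(len(acts), acts[:5])   # 1 + 7 + 7 + 42 + 42 = 99 candidates
```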
4.2 Hard constraints
With the set of candidate activities Ā, we build in Step 4 of Algorithm 1 a set of hard constraints to
ensure properties P1 to P3. We associate each element o_tj ∈ O with a variable v_tj, i.e., we have a
set of variables V = [v_tj] with 0 < t ≤ T and 0 < j ≤ n, which is also called a variable matrix.
Each variable in the variable matrix will be assigned a specific activity from the candidate activities
Ā, and we will partition these variables to attain a set of team plans that have the properties P1-P5
based on the assignments.

According to properties P2 and P3, we build two kinds of hard constraints: observation constraints
and causal-link constraints. Note that P1 is guaranteed since the set of team plans that is output is
a partition of the team trace.
Observation constraints. For P2, i.e., given a team plan p = [a_tk] composed of agents
Φ′ = {φ_j1, φ_j2, ..., φ_jm}, if o_t,jk ≠ null then a_tk = o_t,jk; this means v_t,jk should take
the same activity as o_t,jk whenever o_t,jk ≠ null, since the team plan p is part of a partition of V
and a_tk is an element of p. Thus, we build hard constraints as follows. For each 0 < t ≤ T and
0 < j ≤ n, we have

(o_tj ≠ null) → (v_tj = o_tj).

We call this kind of hard constraints observation constraints, since they are built based
on the partially observed activities of O.
Causal-link constraints. For P3, i.e., each team plan p should be executable starting from the initial
state s0, each row of variables ⟨v_t1, v_t2, ..., v_tn⟩ should be executable, where 0 < t ≤ T. Note
that "executable" means that the preconditions of each v_tj are satisfied. That is, for each
0 < t ≤ T and 0 < j ≤ n, the following constraints should be satisfied (a simplified clause-level
sketch is given below):

• each precondition of v_tj either exists in the initial state s0 or is added by some v_t′j′, and is
not deleted by any activity between t′ and t, where t′ < t and 0 < j′ ≤ n;
• likewise, each proposition in goal g either exists in the initial state s0 or is added
by some v_t′j′, and is not deleted by any activity between t′ and T, where t′ < T and
0 < j′ ≤ n.

We call this kind of hard constraints causal-link constraints, since they are created according to the
causal-link requirement of executable plans.
4.3 Soft constraints
In Step 5 of Algorithm 1, we build a set of soft constraints based on the likelihood function λ. Each
variable in V can be assigned any element of the candidate activities Ā. We require that all
variables in V be assigned exactly one activity from Ā. For each ⟨a1, a2, ..., a_|V|⟩ ∈ Ā × ... × Ā,
we have

⋀_{0<i≤|V|} (v_i = a_i).
We calculate the weights of these constraints in the following phases. First, we partition the variable
matrix V, based on property P5, into a set of team plans P: if agent φi provides or deletes some
conditions of φj, then φi and φj should be in the same team, and the activities of agents in the same
team compose a team plan. Second, over all team plans, we calculate the total likelihood λ(P), i.e.,

λ(P) = Σ_{p∈P} λ(p) = Σ_{p∈P} 1 / (λ1(N_activity(p)) + λ2(N_agent(p))),
and let λ(P) be the weight of the soft constraints. Note that we aim to maximize the total likelihood
when solving these constraints (together with the hard constraints) with a weighted MAX-SAT solver.
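The team partition required by property P5 (and reused when converting the MAX-SAT solution in Step 9) is a standard union-find over agents. A sketch follows; how the interaction pairs are extracted from the assigned activities' preconditions and effects is left abstract here.

```python
def partition_by_interaction(agents, interacts):
    """Union-find partition of agents into teams (property P5): agents that
    provide or delete each other's conditions end up in the same team."""
    parent = {a: a for a in agents}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]     # path halving
            a = parent[a]
        return a
    for a, b in interacts:
        parent[find(a)] = find(b)
    teams = {}
    for a in agents:
        teams.setdefault(find(a), []).append(a)
    return list(teams.values())

print(partition_by_interaction(
    ["h1", "h2", "h3", "h4", "h5"],
    {("h1", "h3"), ("h2", "h4"), ("h4", "h5")}))
# -> [['h1', 'h3'], ['h2', 'h4', 'h5']], as in Figure 1
```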
4.4 Solving the constraints
In Step 6 of Algorithm 1, we put both hard and soft constraints together and solve them
using MaxSatz [10], a MAX-SAT solver. The solution sol is an assignment for all variables in V,
and max′ is the total weight of the satisfied constraints corresponding to the solution sol. In Step 9
of Algorithm 1, we convert the solution by partitioning V into a set of team plans P based on P5.
As an example, in (a) of Figure 1(I), each element of the team trace corresponds to a variable in V
that is assigned an activity; that is, the null values in (a) of Figure 1(I) are replaced with the
corresponding assigned activities in V. According to property P5, we can then partition the team
trace into two team plans, as shown in Figure 1(II), by checking the preconditions and effects of
the activities in the team trace.
4.5 Properties of DARE
DARE can be shown to have the following properties.

Theorem 1 (Conditional Soundness). If the weighted MAX-SAT solver is powerful enough to
optimally solve all solvable SAT problems, DARE is sound.

Theorem 2 (Conditional Completeness). If the weighted MAX-SAT solver we exploit in DARE is
complete, DARE is also complete.

For Theorem 1, we only need to check that the solutions output by DARE satisfy P1-P5. P2 and P3
are guaranteed by the observation constraints and causal-link constraints; P4 is guaranteed by the soft
constraints built in Section 4.3 and the MAX-SAT solver; P1 and P5 are both guaranteed by the
partition step in Section 4.4, i.e., partitioning the variable matrix into a set of team plans. That is to
say, the conditional soundness property holds.

For Theorem 2, since all steps in Algorithm 1, except Step 6 which calls a weighted MAX-SAT solver,
can be executed in finite time, the completeness property only depends on the weighted MAX-SAT
solver, which means the conditional completeness property holds.
5 Experiments
5.1 Dataset and Evaluation Criterion
We evaluate DARE in three planning domains: blocks, driverlog², and rovers². We modified the three
domains for the multi-agent setting. In blocks, there are multiple hoists, which are viewed as agents
that perform the actions pickup, putdown, stack, and unstack. In driverlog, there are multiple trucks,
drivers, and hoists, which are agents that can group together to form different teams (trucks and
drivers can be in the same team, and likewise for hoists). In rovers, there are multiple rovers that can
group together to form different teams. For each domain, we set T = 50 and generate 50 team traces
of size T × n for each n ∈ {20, 40, 60, 80, 100}. For each team trace, we have a set of
optimal team plans (which is viewed as the ground truth), denoted by P_true, and its corresponding
goal g_true, which best explains the team trace according to the likelihood function λ. We define the
likelihood function by λ1(k) = k and λ2(k) = k², so that λ(p) = 1/(λ1(N_activity(p)) + λ2(N_agent(p))),
as presented at the end of the problem definition section.

² http://planning.cis.strath.ac.uk/competition/

We randomly delete a subset of activities from each team trace with respect to a specific percentage
δ. We test different δ values: 0%, 10%, 20%, 30%, 40%, 50%. As an example, δ = 10% means
there are 10 activities deleted from a team trace with 100 activities. We also randomly
add 10 additional goals, together with g_true, to form the goal set G, as presented in the problem
definition section. We define the accuracy by

accuracy = (the number of correctly recognized team-plan sets) / (the total number of team traces),
where "correctly recognized team-plan sets" means that the recognized team-plan sets and goals are
the same as the expected team-plan sets {P_true} and goals G.

We generate 100 team plans as the library, as described by MARS [19], and compare the recognition
results with MARS as a baseline.
5.2 Experimental Results

We evaluate DARE in the following aspects: (1) accuracy with respect to different numbers of agents;
(2) accuracy with respect to different percentages of null values; and (3) the running time.
5.2.1 Varying the number of agents
[Figure 2 plots: panels (a) blocks, (b) driverlog, and (c) rovers, each showing accuracy (y-axis, about 0.6-0.95) versus the number of agents (x-axis, 20-100), with one curve for MARS and one for DARE.]
Figure 2: Accuracies with respect to different numbers of agents.
We would like to evaluate the change of accuracies when the number of agents increases. We set the
percentage of null values to 30%, and ran DARE five times to calculate an average of accuracies.
The result is shown in Figure 2. From the figure, we found that the accuracies of both DARE
and MARS generally decreased when the number of agents increased. This is because the problem
space is enlarged when the number of agents increases, which makes the available information
scarce relative to the large problem space, and not enough to attain high accuracies.

We also found that the accuracy of DARE was lower than that of MARS at the beginning, and then
became better than MARS as the number of agents became larger. This indicates that DARE, based
on action models, handles large numbers of agents better. This is because DARE builds
the MAX-SAT problem space (described as proposition variables and constraints) based on model
inferences (i.e., action models), while MARS is based on instances (i.e., a plan library). When the
number of agents is small, the problem space built by MARS is smaller than that built by DARE; when
the number of agents becomes larger, the problem space built by MARS becomes larger than that built
by DARE. The larger the problem space is, the more difficult it is for MAX-SAT to solve the problem;
thus, DARE performs worse than MARS with fewer agents, but better with more agents.
[Figure 3 plots: panels (a) blocks, (b) driverlog, and (c) rovers, each showing accuracy (y-axis, about 0.65-1.05) versus the percentage of null values (x-axis, 0-50), with one curve for MARS and one for DARE.]
Figure 3: Accuracies with respect to different percentages of null values.
5.2.2 Varying the percentage of null values
We set the number of agents to 60, and ran DARE five times to calculate an average of accuracies
for each percentage δ of null values. We found that the accuracies of both DARE and MARS decreased
when the percentage δ increased, due to less information being provided as the percentage increases.
When the percentage is 0%, both DARE and MARS can recognize all the team traces successfully.
By observing all three domains in Figure 3, we find that DARE does not function as well as MARS
when the percentage of incompleteness is large. This relative advantage of the library-based approach
is due in large part to the fact that all team plans to be recognized are covered by the small
library in the experiment, and the library of team plans helps reduce the recognition problem
space compared to DARE. We conjecture that if the team plans to be recognized are not covered by
the library (because of the size restrictions on the library), DARE will perform better than MARS. In
this case, MARS cannot successfully recognize some team plans.
5.2.3 The running time
[Figure 4 plots: panels (a) blocks, (b) driverlog, and (c) rovers, each showing CPU time in seconds (y-axis, 0-1200) versus the number of agents (x-axis, 20-100).]
Figure 4: The CPU time of DARE.
We show the average CPU time of DARE over 50 team traces with respect to different numbers of
agents in Figure 4. As can be seen from the figure, the running time increases polynomially with the
number of input agents. This can be verified by fitting the relationship between the number of agents
and the running time to a performance curve with a polynomial of order 2 or 3. For example, the
fitted polynomial for blocks is 0.0821x² + 20.1171x − 359.8.
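Such a fit can be reproduced with an ordinary least-squares polynomial fit; the data points below are approximate values read off Figure 4(a), not the actual measurements.

```python
import numpy as np

# Agent counts and approximate mean CPU times (seconds) from Figure 4(a)
agents = np.array([20, 40, 60, 80, 100])
seconds = np.array([60.0, 180.0, 420.0, 700.0, 1050.0])   # assumed readings

coeffs = np.polyfit(agents, seconds, deg=2)   # least-squares quadratic fit
print("fit: %.4f x^2 + %.4f x + %.1f" % tuple(coeffs))
```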
6 Final Remark
In this paper, we presented a system called DARE for recognizing multi-agent team plans from
incomplete observed plan traces based on action models. This approach has a significant advantage
over previous approaches that make use of a library of predefined team plans; such plan libraries
are difficult to obtain in many applications. With the action-model based approach, we first build a
set of candidate activities, and then build sets of hard and soft constraints to finally recognize team
plans. Our experiments show that DARE is effective in three benchmark domains compared to the
state-of-the-art multi-agent plan recognition system MARS, which relies on a library of team plans. Our
approach is thus well suited for scenarios where collecting a library of team plans is infeasible before
performing team plan recognition tasks.

In the current work, we assume that the action models are complete. A more realistic assumption
is to allow the models to be incomplete [8, 17]. In future work, we plan to extend DARE to work with
incomplete action models. Another assumption in the current model is that it expects as input the
alternative sets of goals, one of which the observed plan is expected to be targeting. We plan to relax
this so DARE can take as input a set of potential goals, with the understanding that the observed plan
is achieving a bounded subset of these goals. We believe that both these extensions can be easily
accommodated into the MAX-SAT framework of DARE.
Acknowledgments
Hankz Hankui Zhuo thanks Natural Science Foundation of Guangdong Province of China (No.
S2011040001869) and Research Fund for the Doctoral Program of Higher Education of China
(No. 20110171120054) for the support of this research. Qiang Yang thanks Hong Kong RGC GRF
Projects 621010 and 621211 for the support of this research. Kambhampati?s research is supported
in part by the NSF grant IIS201330813 and ONR grants N00014-09-1-0017, N00014-07-1-1049,
and N000140610058.
References
[1] Ofra Amir and Yaakov (Kobi) Gal. Plan recognition in virtual laboratories. In Proceedings of IJCAI, 2011.
[2] Dorit Avrahami-Zilberbrand and Gal A. Kaminka. Towards dynamic tracking of multi-agent teams: An initial report. In Proceedings of the AAAI Workshop on Plan, Activity, and Intent Recognition (PAIR 2007), 2007.
[3] Bikramjit Banerjee and Landon Kraemer. Branch and price for multi-agent plan recognition. In Proceedings of AAAI, 2011.
[4] Bikramjit Banerjee, Landon Kraemer, and Jeremy Lyle. Multi-agent plan recognition: formalization and algorithms. In Proceedings of AAAI, 2010.
[5] Hung H. Bui. A general model for online probabilistic plan recognition. In Proceedings of IJCAI, 2003.
[6] R. Fikes and N. J. Nilsson. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence Journal, pages 189–208, 1971.
[7] Christopher W. Geib and Robert P. Goldman. A probabilistic plan recognition algorithm based on plan tree grammars. Artificial Intelligence, 173(11):1101–1132, 2009.
[8] Subbarao Kambhampati. Model-lite planning for the web age masses: The challenges of planning with incomplete and evolving domain models. In AAAI, 2007.
[9] Henry A. Kautz and James F. Allen. Generalized plan recognition. In Proceedings of AAAI, 1986.
[10] Chu Min Li, Felip Manyà, Nouredine Mohamedou, and Jordi Planes. Exploiting cycle structures in Max-SAT. In Proceedings of the 12th International Conference on Theory and Applications of Satisfiability Testing (SAT-09), pages 467–480, 2009.
[11] Daniele Masato, Timothy J. Norman, Wamberto W. Vasconcelos, and Katia Sycara. Agent-oriented incremental team and activity recognition. In Proceedings of IJCAI, 2011.
[12] Miquel Ramírez and Hector Geffner. Plan recognition as planning. In Proceedings of IJCAI, 2009.
[13] Miquel Ramírez and Hector Geffner. Probabilistic plan recognition using off-the-shelf classical planners. In Proceedings of AAAI, 2010.
[14] Adam Sadilek and Henry Kautz. Recognizing multi-agent activities from gps data. In Proceedings of AAAI, 2010.
[15] Parag Singla and Raymond Mooney. Abductive markov logic for plan recognition. In Proceedings of AAAI, 2011.
[16] Gita Sukthankar and Katia Sycara. Hypothesis pruning and ranking for large plan recognition problems. In Proceedings of AAAI, 2008.
[17] Tuan Nguyen, Subbarao Kambhampati, and Minh Do. Synthesizing robust plans under incomplete domain models. In Proc. AAAI Workshop on Generalized Planning, 2011.
[18] Qiang Yang, Kangheng Wu, and Yunfei Jiang. Learning action models from plan examples using weighted MAX-SAT. Artificial Intelligence, 171:107–143, February 2007.
[19] Hankz Hankui Zhuo and Lei Li. Multi-agent plan recognition with partial team traces and plan libraries. In Proceedings of IJCAI, 2011.
[20] Hankz Hankui Zhuo, Qiang Yang, Derek Hao Hu, and Lei Li. Learning complex action models with quantifiers and implications. Artificial Intelligence, 174(18):1540–1569, 2010.
Neurally Plausible Reinforcement Learning of
Working Memory Tasks
Jaldert O. Rombouts, Sander M. Bohte
CWI, Life Sciences
Amsterdam, The Netherlands
{j.o.rombouts, s.m.bohte}@cwi.nl
Pieter R. Roelfsema
Netherlands Institute for Neuroscience
Amsterdam, The Netherlands
[email protected]
Abstract
A key function of brains is undoubtedly the abstraction and maintenance of information from the environment for later use. Neurons in association cortex play
an important role in this process: by learning these neurons become tuned to relevant features and represent the information that is required later as a persistent
elevation of their activity [1]. It is however not well known how such neurons
acquire these task-relevant working memories. Here we introduce a biologically
plausible learning scheme grounded in Reinforcement Learning (RL) theory [2]
that explains how neurons become selective for relevant information by trial and
error learning. The model has memory units which learn useful internal state representations to solve working memory tasks by transforming partially observable
Markov decision problems (POMDP) into MDPs. We propose that synaptic plasticity is guided by a combination of attentional feedback signals from the action
selection stage to earlier processing levels and a globally released neuromodulatory signal. Feedback signals interact with feedforward signals to form synaptic
tags at those connections that are responsible for the stimulus-response mapping.
The neuromodulatory signal interacts with tagged synapses to determine the sign
and strength of plasticity. The learning scheme is generic because it can train
networks in different tasks, simply by varying inputs and rewards. It explains
how neurons in association cortex learn to 1) temporarily store task-relevant information in non-linear stimulus-response mapping tasks [1, 3, 4] and 2) learn to
optimally integrate probabilistic evidence for perceptual decision making [5, 6].
1
Introduction
By giving reward at the right times, animals like monkeys can be trained to perform complex tasks
that require the mapping of sensory stimuli onto responses, the storage of information in working
memory and the integration of uncertain sensory evidence. While significant progress has been
made in reinforcement learning theory [2, 7, 8, 9], a generic learning rule for neural networks that is
biologically plausible and also accounts for the versatility of animal learning has yet to be described.
We propose a simple biologically plausible neural network model that can solve a variety of working
memory tasks. The network predicts action-values (Q-values) for different possible actions [2],
and it learns to minimize SARSA [10, 2] temporal difference (TD) prediction errors by stochastic
gradient descent. The model has memory units inspired by neurons in lateral intraparietal (LIP)
cortex and prefrontal cortex. Such neurons exhibit persistent activations for task related cues in
visual working memory tasks [1, 11, 4]. Memory units learn to represent an internal state that
allows the network to solve working memory tasks by transforming POMDPs into MDPs [25]. The
updates for synaptic weights have two components. The first is a synaptic tag [12] that arises from
an interaction between feedforward and feedback activations. Tags form on those synapses that are
responsible for the chosen actions by an attentional feedback process [13]. The second factor is a
[Figure 1: network schematic: sensory layer (instantaneous and transient on/off units), association layer, Q-value layer, and action selection, with feedforward and feedback connections.]
Figure 1: Model and learning (see section 2). Pentagons represent synaptic tags.
global neuromodulatory signal $\delta$ that reflects the TD error, and this signal interacts with the tags to
yield synaptic plasticity. TD-errors are represented by dopamine neurons in the ventral tegmental
area and substantia nigra [9, 14]. The persistence of tags permits learning if time passes between
synaptic activity and the animal's choice, for example if information is stored in working memory
or evidence accumulates before a decision is made. The learning rules are biologically plausible
because the information required for computing the synaptic updates is available at the synapse. We
call the new learning scheme AuGMEnT (Attention-Gated MEmory Tagging).
We first discuss the model and then show that it explains how neurons in association cortex learn to 1)
temporarily store task-relevant information in non-linear stimulus-response mapping tasks [1, 3, 4]
and 2) learn to optimally integrate probabilistic evidence for perceptual decision making [5, 6].
2
Model
AuGMEnT is modeled as a three layer neural network (Fig. 1). Units in the motor (output) layer
predict Q-values [2] for their associated actions. Predictions are learned by stochastic gradient
descent on prediction errors.
The sensory layer contains two types of units: instantaneous and transient on(+)/off(−) units. Instantaneous units $x_i$ encode sensory inputs $s_i(t)$, and + and − units encode positive and negative changes in sensory inputs with respect to the previous time step $t-1$:
$$x^-_i(t) = [s_i(t-1) - s_i(t)]_+ , \qquad x^+_i(t) = [s_i(t) - s_i(t-1)]_+ ; \qquad (1)$$
where $[\cdot]_+$ is a threshold operator that returns 0 for all negative inputs but leaves positive inputs unchanged. Each sensory variable $s_i$ is thus represented by three units $x_i$, $x^+_i$, $x^-_i$ (we only explicitly write the time dependence if it is ambiguous). We denote the set of differentiating units as $x'$. The hidden layer models the association cortex and it contains regular units and memory units. The regular units $j$ (Fig. 1, circles) are fully connected to the instantaneous units $i$ in the sensory layer by connections $v^R_{ij}$; $v^R_{0j}$ is a bias weight. Regular unit activations $y^R_j$ are computed as:
$$y^R_j = \sigma(a^R_j) = \frac{1}{1 + \exp(\theta - a^R_j)} \qquad \text{with} \qquad a^R_j = \sum_i v^R_{ij} x_i . \qquad (2)$$
Memory units $m$ (Fig. 1, diamonds) are fully connected to the +/− units in the sensory layer by connections $v^M_{lm}$ and they derive their activations $y^M_m(t)$ by integrating their inputs:
$$y^M_m = \sigma(a^M_m) \qquad \text{with} \qquad a^M_m = a^M_m(t-1) + \sum_l v^M_{lm} x'_l , \qquad (3)$$
with $\sigma$ as defined in eqn. (2). Output layer units $k$ are fully connected to the hidden layer by connections $w^R_{jk}$ (for regular hiddens, $w^R_{0k}$ is a bias weight) and $w^M_{mk}$ (for memory hiddens). Activations are computed as:
$$q_k = \sum_j y^R_j w^R_{jk} + \sum_m y^M_m w^M_{mk} . \qquad (4)$$
A Winner-Takes-All (WTA) competition now selects an action based on the estimated Q-values. We used a max-Boltzmann [15] controller which executes the action with the highest estimated Q-value with probability $1 - \epsilon$ and otherwise it chooses an action with probabilities according to the Boltzmann distribution:
$$Pr(z_k = 1) = \frac{\exp q_k}{\sum_{k'} \exp q_{k'}} . \qquad (5)$$
The WTA mechanism then sets the activation of the winning unit to 1 and the activation of all other units to 0; $z_k = \delta_{kK}$ where $\delta_{kK}$ is the Kronecker delta function. The winning unit sends feedback signals to the earlier processing layers, informing the rest of the network about the action that was taken. This feedback signal interacts with the feedforward activations to give rise to synaptic tags on those synapses that were involved in taking the decision. The tags then interact with a neuromodulatory signal $\delta$, which codes a TD error, to modify synaptic strengths.
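To make the computations in eqns. (1)-(5) concrete, the following is a minimal NumPy sketch of one feedforward step and max-Boltzmann action selection. The class layout and all names are our own illustration, not part of the original specification; the default $\theta$ matches the value used in the experiments below.

```python
import numpy as np

def sigma(a, theta=2.5):
    # Shifted sigmoid of eqn. (2); theta > 0 keeps weakly driven units silent.
    return 1.0 / (1.0 + np.exp(theta - a))

class AugmentNet:
    def __init__(self, n_sensory, n_reg, n_mem, n_actions, rng):
        u = lambda shape: rng.uniform(-0.25, 0.25, shape)
        self.vR = u((n_reg, n_sensory + 1))      # instantaneous inputs + bias
        self.vM = u((n_mem, 2 * n_sensory))      # on(+)/off(-) inputs
        self.wR = u((n_actions, n_reg + 1))      # regular hiddens + bias
        self.wM = u((n_actions, n_mem))          # memory hiddens
        self.aM = np.zeros(n_mem)                # integrated memory input
        self.s_prev = np.zeros(n_sensory)

    def step(self, s):
        x_on = np.maximum(s - self.s_prev, 0.0)  # transient units, eqn. (1)
        x_off = np.maximum(self.s_prev - s, 0.0)
        self.s_prev = s.copy()
        x_inst = np.append(s, 1.0)
        yR = sigma(self.vR @ x_inst)             # regular units, eqn. (2)
        self.aM += self.vM @ np.concatenate([x_on, x_off])
        yM = sigma(self.aM)                      # memory units, eqn. (3)
        q = self.wR @ np.append(yR, 1.0) + self.wM @ yM   # Q-values, eqn. (4)
        return q, yR, yM, x_inst, np.concatenate([x_on, x_off])

def select_action(q, eps, rng):
    # Max-Boltzmann controller: greedy with probability 1 - eps,
    # otherwise sample from the Boltzmann distribution of eqn. (5).
    if rng.random() > eps:
        return int(np.argmax(q))
    p = np.exp(q - q.max())
    return int(rng.choice(len(q), p=p / p.sum()))
```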
2.1
Learning
After executing an action, the environment returns a new observation $s'$, a scalar reward $r$, and possibly a signal indicating the end of a trial. The network computes a SARSA TD error [10, 2]:
$$\delta = r + \gamma q_{K'} - q_K , \qquad (6)$$
where $q_{K'}$ is the predicted value of the winning action for the new observation, and $\gamma \in [0, 1]$ is the temporal discount parameter [2]. AuGMEnT learns by minimizing the squared prediction error $E$:
$$E = \frac{1}{2}\delta^2 = \frac{1}{2}(r + \gamma q_{K'} - q_K)^2 . \qquad (7)$$
The synaptic updates have two factors. The first is a synaptic tag (Fig. 2, pentagons; equivalent to an eligibility trace in RL [2]) that arises from an interaction between feedforward and feedback activations. The second is a global neuromodulatory signal $\delta$ which interacts with these tags to yield synaptic plasticity. The updates can be derived by the chain rule for derivatives [16].
The update for synapses $w^R_{jk}$ is:
$$\Delta w^R_{jk} = -\beta \frac{\partial E}{\partial q_K} \, Tag^R_{jk} = \beta\delta(t) \, Tag^R_{jk} , \qquad (8)$$
$$\Delta Tag^R_{jk} = (\lambda\gamma - 1) Tag^R_{jk} + \frac{\partial q_K}{\partial w^R_{jk}} = (\lambda\gamma - 1) Tag^R_{jk} + y^R_j z_k , \qquad (9)$$
where $\beta$ is a learning rate, $Tag^R_{jk}$ are the synaptic tags on synapses between regular hidden units and the motor layer, and $\lambda$ is a decay parameter [2]. Note that $\Delta w^R_{jk} \propto -\beta \frac{\partial E}{\partial q_K}\frac{\partial q_K}{\partial w^R_{jk}} = -\beta \frac{\partial E}{\partial w^R_{jk}}$, holding with equality if $\lambda\gamma = 0$. If $\lambda\gamma > 0$, tags decay exponentially so that synapses that were responsible for previous actions are also assigned credit for the currently observed error.
Equivalently, updates for synapses between memory units and motor units are:
$$\Delta w^M_{mk} = \beta\delta(t) \, Tag^M_{mk} , \qquad (10)$$
$$\Delta Tag^M_{mk} = (\lambda\gamma - 1) Tag^M_{mk} + y^M_m z_k . \qquad (11)$$
The updates for synapses between instantaneous sensory units and regular association units are:
$$\Delta v^R_{ij} = -\beta \frac{\partial E}{\partial q_K} \, Tag^R_{ij} = \beta\delta \, Tag^R_{ij} , \qquad (12)$$
$$\Delta Tag^R_{ij} = (\lambda\gamma - 1) Tag^R_{ij} + \frac{\partial q_K}{\partial y^R_j}\frac{\partial y^R_j}{\partial a^R_j}\frac{\partial a^R_j}{\partial v^R_{ij}} \qquad (13)$$
$$= (\lambda\gamma - 1) Tag^R_{ij} + w'^R_{Kj} \, y^R_j (1 - y^R_j) x_i , \qquad (14)$$
where $w'^R_{Kj}$ are feedback weights from the motor layer back to the association layer. The intuition for the last equation is that the winning output unit $K$ provides feedback to the units in the association layer that were responsible for its activation. Association units with a strong feedforward connection also have a strong feedback connection. As a result, synapses onto association units that provide strong input to the winning unit will have the strongest plasticity. This "attentional feedback" mechanism was introduced in [13]. For convenience, we have assumed that feedforward and feedback weights are symmetrical, but they can also be trained as in [13].
For the updates for the synapses between +/− sensory units and memory units we first approximate the activation $a^M_m$ (see eqn. (3)) as:
$$a^M_m = a^M_m(t-1) + \sum_l v^M_{lm} x'_l \approx \sum_l v^M_{lm} \sum_{t'=0}^{t} x'_l(t') , \qquad (15)$$
which is a good approximation if the synapses $v^M_{lm}$ change slowly. We can then write the updates as:
$$\Delta v^M_{lm} = -\beta \frac{\partial E}{\partial q_K} \, Tag^M_{lm} = \beta\delta \, Tag^M_{lm} , \qquad (16)$$
$$\Delta Tag^M_{lm} = -Tag^M_{lm} + \frac{\partial q_K}{\partial y^M_m}\frac{\partial y^M_m}{\partial a^M_m}\frac{\partial a^M_m}{\partial v^M_{lm}} \qquad (17)$$
$$= -Tag^M_{lm} + w'^M_{Km} \, y^M_m(t)(1 - y^M_m(t)) \left[ \sum_{t'=0}^{t} x'_l(t') \right] . \qquad (18)$$
Note that one can interpret a memory unit as a regular one that receives all sensory input in a trial simultaneously. For synapses onto memory units, we set $\lambda = 0$ to arrive at the last equation. The intuition behind the last equation is that because the activity of a memory unit does not decay, the influence of its inputs $x'_l$ on the activity in the motor layer does not decay either ($\lambda\gamma = 0$).
A special condition occurs when the environment returns the end-trial signal. In this case, the estimate $q_{K'}$ in eqn. (6) is set to 0 (see [2]) and after the synaptic updates we reset the memory units and synaptic tags, so that there is no confounding between different trials.
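Putting eqns. (6)-(18) together, one learning step might look as follows. The tag arrays mirror the weight arrays of the sketch above; keeping them in a dictionary is our own bookkeeping choice. At the end of a trial one passes q_new = 0 and afterwards resets the tags and the memory activations, as described above.

```python
def learn_step(net, tags, q_prev, k_prev, q_new, r, yR, yM, x_inst, x_cum,
               beta=0.15, lam=0.20, gamma=0.90):
    """One AuGMEnT update for the action k_prev that produced reward r.

    q_prev, yR, yM, x_inst come from the step on which k_prev was chosen;
    x_cum is the running sum of on/off inputs over the trial (eqn. 15).
    """
    delta = r + gamma * q_new - q_prev[k_prev]              # eqn. (6)

    z = np.zeros(len(q_prev)); z[k_prev] = 1.0              # winning unit
    decay = lam * gamma                                     # tag persistence
    tags['wR'] = decay * tags['wR'] + np.outer(z, np.append(yR, 1.0))  # (9)
    tags['wM'] = decay * tags['wM'] + np.outer(z, yM)                  # (11)
    fbR = net.wR[k_prev, :-1] * yR * (1.0 - yR)             # feedback, (14)
    tags['vR'] = decay * tags['vR'] + np.outer(fbR, x_inst)
    fbM = net.wM[k_prev] * yM * (1.0 - yM)                  # lam = 0 here
    tags['vM'] = np.outer(fbM, x_cum)                       # (18)

    for name in ('wR', 'wM', 'vR', 'vM'):                   # (8, 10, 12, 16)
        w = getattr(net, name)
        w += beta * delta * tags[name]                      # in-place update
    return delta
```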
AuGMEnT is biologically plausible because the information required for the synaptic updates is
locally available by the interaction of feedforward and feedback signals and a globally released
neuromodulator coding TD errors. As we will show, this mechanism is powerful enough to learn
non-linear transformations and to create relevant working memories.
3
Experiments
We tested AuGMEnT on a set of memory tasks that have been used to investigate the effects of
training on neuronal activity in area LIP. Across all of our simulations, we fixed the configuration
of the association layer (three regular units, four memory units) and Q-layer (three output units,
for directing gaze to the left, center or right of a virtual screen). The input layer was tailored to
the specific task (see below). In all tasks, we trained the network by trial and error to fixate on a
fixation mark and to respond to task-related cues. As is usual in training animals for complex tasks,
we used a small shaping reward $r_{fix}$ (arbitrary units) to facilitate learning to fixate [17]. At the end of trials the model had to make an eye-movement to the left or right. The full task reward $r_{fin}$ was given if this saccade was accurate, while we aborted trials and gave no reward if the model made the wrong eye-movement or broke fixation before the go signal. We used a single set of parameters for the network: $\beta = 0.15$, $\lambda = 0.20$, $\gamma = 0.90$, $\epsilon = 0.025$ and $\theta = 2.5$, which shifts the sigmoidal activation function for association units so that units with little input have almost zero output. Initial synaptic weights were drawn from a uniform distribution $U \in [-0.25, 0.25]$. For all tasks we used $r_{fix} = 0.2$ and $r_{fin} = 1.5$.
3.1
Saccade/Antisaccade
The memory saccade/anti-saccade task (Fig. 2A) is based on [3]. This task requires a non-linear
transformation and cannot be solved by a direct mapping from sensory units to Q-value units. Trials
started with an empty screen, shown for one time step. Then either a black or white fixation mark
was shown indicating a pro-saccade or anti-saccade trial, respectively. The model had to fixate on
the fixation mark within ten time-steps, or the trial was terminated. After fixating for two time-steps, a cue was presented on the left or right and a small shaping reward $r_{fix}$ was given. The
[Figure 2 panels: A task timeline (fixation, cue, delay, go) for pro- and anti-saccade trials with left and right cues; B network schematic; C unit activation traces; D selectivity-index scatter plots.]
Figure 2: A Memory saccade/antisaccade task. B Model network. In the association layer, a regular unit and two memory units are color coded gray, green and orange, respectively. Output units L, F, R are colored green, blue and red, respectively. C Unit activation traces for a sample trained network. Symbols in bottom graph indicate highest valued action. F, fixation onset; C, cue onset; D, delay; G, fixation offset ("Go" signal). Thick blue: fixate, dashed green: left, red: right. D Selectivity indices of memory units in saccade/antisaccade task (black) and in pro-saccade only task (red).
cue was shown for one time-step, and then only the fixation mark was visible for two time-steps
before turning off. In the pro-saccade condition, the offset of the fixation mark indicated that the
model should make an eye-movement towards the cue location to collect $r_{fin}$. In the anti-saccade
condition, the model had to make an eye-movement away from the cue location. The model had to
make the correct eye-movement within eight time steps. The input to the model (Fig. 2B) consisted
of four binary variables representing the information on the virtual screen; two for the fixation marks
and two for the cue location. Due to the +/− cells, the input layer thus had 12 binary units.
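As an illustration, the stimulus sequence for one trial can be written down directly from the description above; the exact timing vector and the names used here are our own simplification (for instance, we assume immediate fixation rather than modelling the ten-step fixation window).

```python
import numpy as np

def make_trial(pro, cue_left, delay=2):
    """Sensory inputs for one memory saccade/antisaccade trial.

    Four binary screen variables: [black fix, white fix, left cue, right cue];
    the on/off channels of eqn. (1) expand these to 12 units in the network.
    """
    fix = np.array([1., 0., 0., 0.]) if pro else np.array([0., 1., 0., 0.])
    cue = fix + (np.array([0., 0., 1., 0.]) if cue_left
                 else np.array([0., 0., 0., 1.]))
    blank = np.zeros(4)
    steps = [blank, fix, fix, cue] + [fix] * delay + [blank]  # go = fix offset
    target = 'left' if (pro == cue_left) else 'right'         # anti: look away
    return steps, target
```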
We trained the models for at most 25,000 trials, or until convergence. We measured convergence as
the proportion of correct trials for the last 50 examples of all trial-types (N = 4). When this proportion reached 0.9 or higher for all trial-types, learning in the network was stopped and we evaluated
accuracy on all trial types without stochastic exploration of actions. We considered learning successful if the model performed all trial-types accurately.
We trained 10,000 randomly initialized networks with and without a shaping reward ($r_{fix} = 0$). Of the networks that received fixation rewards, 9,945 learned the task versus 7,641 that did not receive fixation rewards; $\chi^2(1, N = 10{,}000) = 2{,}498$, $P < 10^{-6}$. The 10,000 models trained with shaping learned the complete task in a median of 4,117 trials. This is at least an order of magnitude faster than monkeys that typically learn such a task after months of training with more than 1,000 trials per day, e.g. [6].
The activity of a trained network is illustrated in Fig. 2C. The Q-unit for fixating at the center had
strongest activity at fixation onset and throughout the fixation and memory delays, whereas the Q-unit for the appropriate eye movement became more active after the go-signal. Interestingly, the activity of the Q-cells also depended on cue-location during the memory delay, as is observed, for example, in the frontal eye fields [18]. This activity derives from memory units in the association layer that maintain a trace of the cue as persistent elevation of their activity and are also tuned to the difference between pro- and antisaccade trials. To illustrate this, we defined selectivity indices (SIs) to characterize the tuning of memory units to the difference between pro- or antisaccade trials and to the difference in cue location. The sensitivity of units to differences in trial types, $SI_{type}$, was $|0.5((R_{PL} + R_{PR}) - (R_{AL} + R_{AR}))|$, with $R$ representing a unit's activation level (at "Go" time) in pro (P) and anti-saccade trials (A) with a left (L) or right (R) cue. A unit has an SI of 0 if it does not distinguish between pro- and antisaccade trials, and an SI of 1 if it is fully active for one trial type and inactive for the other. The sensitivity to cue location, $SI_{cue}$, was defined as $|0.5((R_{PL} + R_{AL}) - (R_{PR} + R_{AR}))|$. We trained 100 networks and found that units tuned to cue-location also tended to be selective for trial-type (black data points in Fig. 2D; SI correlation 0.79, $N = 400$, $P < 10^{-6}$). To show that the association layer only learns to represent relevant features, we trained the same 100 networks using the same stimuli, but now only required pro-
[Figure 3 panels: A stimulus timeline and the ten weighted shapes; B network schematic; C response versus symbols presented, by LogLR quintile, for LIP data and the model; D assigned versus true symbol weights for monkey and model; E histogram of weight correlations.]
Figure 3: A Probabilistic classification task (redrawn from [6]). B Model network. C Population averages, conditional on LogLR-quintile (inset) for LIP neurons (redrawn from [6]) (top) and model memory units over 100,000 trials after learning had converged (bottom). D Subjective weights inferred for a trained monkey (redrawn from [6]) (left) and average synaptic weights to an example memory unit (right) versus true symbol weights (A, right). E Histogram of weight correlations for 400 memory units from 100 trained networks.
saccades, rendering the color of the fixation point irrelevant. Memory units in the 97 converged
networks now became tuned to cue-location but not to fixation point color (Fig. 2D, red data points; SI correlation 0.04, $N = 388$, $P > 0.48$), indicating that the association layer indeed only learns to represent relevant features.
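The two selectivity indices reduce to simple arithmetic on the four per-condition activation levels; a sketch follows (the array layout is our convention):

```python
import numpy as np

def selectivity_indices(R):
    """R[trial_type, cue_side]: activation at 'Go' time, with trial_type
    0 = pro (P), 1 = anti (A) and cue_side 0 = left (L), 1 = right (R)."""
    si_type = abs(0.5 * ((R[0, 0] + R[0, 1]) - (R[1, 0] + R[1, 1])))
    si_cue = abs(0.5 * ((R[0, 0] + R[1, 0]) - (R[0, 1] + R[1, 1])))
    return si_type, si_cue
```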
3.2
Probabilistic Classification
Neurons in area LIP also play a role in perceptual decision making [5]. We hypothesized that
memory units could learn to integrate probabilistic evidence for a decision. Yang and Shadlen [6]
investigated how monkeys learn to combine information about four briefly presented symbols, which
provided probabilistic cues whether a red or green eye movement target was baited with reward
(Fig. 3A). A previous model with only one layer of modifiable synapses could learn a simplified,
linear version of this task [19]. We tested if AuGMEnT could train the network to adapt to the full
complexity of the task that demands a non-linear combination of information about the four symbols
with the position of the red and green eye-movement targets. Trials followed the same structure as
described in section 3.1, but now four cues were subsequently added to the display. Cues were
drawn with replacement from a set of ten (Fig. 3A, right), each with a different associated weight.
The sum of these weights, $W$, determined the probability that $r_{fin}$ was assigned to the red target (R) as follows: $P(R|W) = 10^W/(1 + 10^W)$. For the green target G, $P(G|W) = 1 - P(R|W)$. At fixation mark offset, the model had to make a saccade to the target with the highest reward probability. The sensory layer of the model (Fig. 3B) had four retinotopic fields with binary units for all possible symbols, a binary unit for the fixation mark and four binary units coding the locations of the colored targets on the virtual screen. Due to the +/− units, this made $45 \times 3$ units in total.
As in [6], we increased the difficulty of the task gradually (i. e. we used a shaping strategy) by
increasing the set of input symbols (2, 4, ..., 10) and sequence length (1–4) in eight steps. Training started with the "trump" shapes which guarantee reward for the correct decision (Fig. 3A, right; see [6]) and then added the symbols with the next absolute highest weights. We determined that the task had been learned when the proportion of trials on which the correct decision was taken over the last $n$ trials reached 0.85, where $n$ was increased with the difficulty level $l$ of the task. For the first 5 levels, $n(l) = 500 + 500l$ and for $l = 6, 7, 8$, $n$ was 10,000; 10,000 and 20,000, respectively. Networks were trained for at most 500,000 trials.
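A sketch of the trial generator for this task follows. The ordering of the weight table (pairs of decreasing absolute weight, with the "trump" shapes approximated by large finite values) and the mapping from difficulty level to symbol-set size and sequence length are our assumptions about one reasonable encoding of the shaping schedule:

```python
import numpy as np

# Symbol weights ordered by decreasing |weight|; +-10 stands in for the
# 'trump' shapes, which effectively guarantee the outcome.
WEIGHTS = np.array([10., -10., .9, -.9, .7, -.7, .5, -.5, .3, -.3])

def sample_trial(level, rng):
    """Draw a symbol sequence and decide which target is baited."""
    n_symbols = min(2 * level, 10)          # symbol set grows with difficulty
    seq_len = min(level, 4)                 # so does the sequence length
    symbols = rng.choice(n_symbols, size=seq_len)
    W = WEIGHTS[symbols].sum()
    p_red = 10.0 ** W / (1.0 + 10.0 ** W)   # P(R|W) as in the text
    baited = 'red' if rng.random() < p_red else 'green'
    return symbols, baited
```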
The behavior of a trained network is shown in figure 3C (bottom). Memory units integrated information for one of the choices over the symbol sequence and maintained information about the value of
this choice as persistent activity during the memory delay. Their activation was correlated to the log
likelihood that the targets were baited, just like LIP neurons [6] (Fig. 3C). The graphs show average
activations of populations of real and model neurons in the four cue presentation epochs. Each pos-
[Figure 4 panels A and B: convergence rate (top) and median convergence speed (bottom) versus association-layer size, from 3+4 up to 384+512 regular + memory units.]
Figure 4: Association layer scaling behavior for A default learning parameters and B optimized learning parameters. Error bars are 95% confidence intervals. Parameters used are indicated by shading (see inset).
sible sub-sequence of cues was assigned to a log-likelihood ratio (logLR) quintile, which correlates
with the probability that the neurons' preferred eye-movement is rewarded. Note that sub-sequences
from the same trial might be assigned to different quintiles. We computed LogLR quintiles by
enumerating all combinations of four symbols and then computing the probabilities of reward for
saccades to red and green targets. Given these probabilities, we computed reward probability for
all sub-sequences by marginalizing over the unknown symbols, i. e. to compute the probability that
the red target was baited given only a first symbol $s_i$, $P(R|s_i)$, we summed the probabilities for full sequences starting with $s_i$ and divided by the number of such full sequences. We then computed the
logLR for the sub-sequences and divided those into quintiles. For model units we rearranged the
quintiles so that they were aligned in the last epoch to compute the population average.
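This marginalization is easy to reproduce; the following sketch scores a partial sequence of observed symbol indices against the weight table of the earlier sketch, averaging the reward probability over all uniform completions. The clipping guard is ours, to keep the log ratio finite for trump-dominated sequences:

```python
from itertools import product
import numpy as np

def logLR_partial(observed, n_total=4, weights=WEIGHTS):
    """log10 likelihood ratio P(red)/P(green) for a partial symbol sequence,
    marginalizing uniformly over the symbols not yet shown."""
    base = weights[list(observed)].sum()
    n_free = n_total - len(observed)
    p_reds = []
    for rest in product(range(len(weights)), repeat=n_free):
        W = base + weights[list(rest)].sum()
        p_reds.append(10.0 ** W / (1.0 + 10.0 ** W))
    p_red = np.clip(np.mean(p_reds), 1e-12, 1.0 - 1e-12)
    return np.log10(p_red / (1.0 - p_red))
```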
Synaptic weights from input neurons to memory cells became strongly correlated to the true weights
of the symbols (Fig. 3D, right; Spearman correlation, $\rho = 1$, $P < 10^{-6}$). Thus, the training of
synaptic weights to memory neurons in parietal cortex can explain how the monkeys valuate the
symbols [19]. We trained 100 networks on the same task and computed Spearman correlations for
the memory unit weights with the true weights and found that in general they learn to represent
the symbols (Fig. 3E). The learning scheme thus offers a biologically realistic explanation of how
neurons in LIP learn to integrate relevant information in a probabilistic classification task.
3.3
Scaling behavior
To show that the learning scheme scales well, we ran a series of simulations with increasing numbers
of association units. We scaled the number of association units by powers of two, from $2^1 = 2$ (yielding 6 regular units and 8 memory units) to $2^7 = 128$ (yielding 384 regular and 512 memory units). For each scale, we trained 100 networks on the saccade/antisaccade task, as described in section 3.1. We first evaluated these scaled networks with the standard set of learning parameters and found that these yielded stable results within a wide range but that performance deteriorated for the largest networks (from $2^6 = 64$; 192 regular units and 256 memory units) (Fig. 4A). In a second experiment (Fig. 4B), we also varied the learning rate ($\beta$) and trace decay ($\lambda$) parameters. We jointly scaled these parameters by 1/2, 1/4 and 1/8 and selected the parameter combination which
resulted in the highest convergence rate and the fastest median convergence speed. It can be seen
that the performance of the larger networks was at least as good as that of the default network,
provided the learning parameters were scaled. Furthermore, we ran extensive grid-searches over the
$\beta, \lambda$ parameter space using default networks (not shown) and found that the model robustly learns
both tasks with a wide range of parameters.
4
Discussion
We have shown that AuGMEnT can train networks to solve working memory tasks that require nonlinear stimulus-response mappings and the integration of sensory evidence in a biologically plausible
way. All the information required for the synaptic updates is available locally, at the synapses. The
network is trained by a form of SARSA($\lambda$) [10, 2], and synaptic updates minimize TD errors by stochastic gradient descent. Although there is an ongoing debate whether SARSA-like or Q-learning-like [20] algorithms are used by the brain [21, 22], we used SARSA because this has stronger convergence guarantees than Q-learning when used to train neural networks [23]. Although stability
is considered a problem for neural networks implementing reinforcement learning methods [24],
AuGMEnT robustly trained networks on our tasks for a wide range of model parameters.
Technically, working memory tasks are Partially Observable Markov Decision Processes
(POMDPs), because current observations do not contain the information to make optimal decisions
[25]. Although AuGMEnT is not a solution for all POMDPs, as these are in general intractable [25],
its simple learning mechanism is well able to learn challenging working memory tasks.
The problem of learning new working memory representations by reinforcement learning is not
well-studied. Some early work used the biologically implausible backpropagation-through-time
algorithm to learn memory representations [26, 27]. Most other work pre-wires some aspects of
working memory and only has a single layer of plastic weights (e. g. [19]), so that the learning
mechanism is not general. To our knowledge, the model by O'Reilly and Frank [7] is most closely
related to AuGMEnT. This model is able to learn a variety of working memory tasks, but it requires
a teaching signal that provides the correct actions on each time-step and the architecture and learning
rules are elaborate. AuGMEnT only requires scalar rewards and the learning rules are simple and
well-grounded in RL theory [2].
AuGMEnT explains how neurons become tuned to relevant sensory stimuli in sequential decision
tasks that animals learn by trial and error. The scheme uses units with properties that resemble cortical and subcortical neurons: transient and sustained neurons in sensory cortices [28], action-value
coding neurons in frontal cortex and basal ganglia [29, 30] and neurons which integrate input and
therefore carry traces of previously presented stimuli in association cortex. The persistent activity of
these memory cells could derive from intracellular processes, local circuit reverberations or recurrent
activity in larger networks spanning cortex, thalamus and basal ganglia [31]. The learning scheme
adopts previously proposed ideas that globally released neuromodulatory signals code deviations
from reward expectancy and gate synaptic plasticity [8, 9, 14]. In addition to this neuromodulatory signal, plasticity in AuGMEnT is gated by an attentional feedback signal that tags synapses
responsible for the chosen action. Such a feedback signal exists in the brain because neurons at
the motor stage that code a selected action enhance the activity of upstream neurons that provided
input for this action [32], a signal that explains a corresponding shift of visual attention [33]. AuGMEnT trains networks to direct feedback (i.e. selective attention) to features that are critical for the
stimulus-response mapping and are associated with reward. Although the hypothesis that attentional
feedback controls the formation of tags is new, there is ample evidence for the existence of synaptic
tags [34, 12]. Recent studies have started to elucidate the identity of the tags [35, 36] and future
work could investigate how they are influenced by attention. Interestingly, neuromodulatory signals
influence synaptic plasticity even if released seconds or minutes later than the plasticity-inducing
event [12, 35], which supports that they interact with a trace of the stimulus, i.e. some form of tag.
Here we have shown how interactions between synaptic tags and neuromodulatory signals explain
how neurons in association areas acquire working memory representations for apparently disparate
tasks that rely on working memory or decision making. These tasks now fit in a single, unified
reinforcement learning framework.
References
[1] Gnadt, J. and Andersen, R. A. Memory related motor planning activity in posterior parietal cortex of macaque. Experimental Brain Research, 70(1):216–220, 1988.
[2] Sutton, R. S. and Barto, A. G. Reinforcement Learning. MIT Press, Cambridge, MA, 1998.
[3] Gottlieb, J. and Goldberg, M. E. Activity of neurons in the lateral intraparietal area of the monkey during an antisaccade task. Nature Neuroscience, 2(10):906–12, 1999.
[4] Bisley, J. W. and Goldberg, M. E. Attention, intention, and priority in the parietal lobe. Annual Review of Neuroscience, 33:1–21, 2010.
[5] Gold, J. I. and Shadlen, M. N. The neural basis of decision making. Annual Review of Neuroscience, 30:535–74, 2007.
[6] Yang, T. and Shadlen, M. N. Probabilistic reasoning by neurons. Nature, 447(7148):1075–80, 2007.
[7] O'Reilly, R. C. and Frank, M. J. Making working memory work: a computational model of learning in the prefrontal cortex and basal ganglia. Neural Computation, 18(2):283–328, 2006.
[8] Izhikevich, E. M. Solving the distal reward problem through linkage of STDP and dopamine signaling. Cerebral Cortex, 17(10):2443–52, 2007.
[9] Montague, P. R., Hyman, S. E., et al. Computational roles for dopamine in behavioural control. Nature, 431(7010):760–7, 2004.
[10] Rummery, G. A. and Niranjan, M. Online Q-learning using connectionist systems. Technical report, Cambridge University Engineering Department, 1994.
[11] Funahashi, S., Bruce, C. J., et al. Mnemonic coding of visual space in the monkey's dorsolateral prefrontal cortex. Journal of Neurophysiology, 6(2):331–349, 1989.
[12] Cassenaer, S. and Laurent, G. Conditional modulation of spike-timing-dependent plasticity for olfactory learning. Nature, 482(7383):47–52, 2012.
[13] Roelfsema, P. R. and van Ooyen, A. Attention-gated reinforcement learning of internal representations for classification. Neural Computation, 17:2176–2214, 2005.
[14] Schultz, W. Multiple dopamine functions at different time courses. Annual Review of Neuroscience, 30:259–88, 2007.
[15] Wiering, M. and Schmidhuber, J. HQ-Learning. Adaptive Behavior, 6(2):219–246, 1997.
[16] Rumelhart, D. E., Hinton, G. E., et al. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
[17] Krueger, K. A. and Dayan, P. Flexible shaping: how learning in small steps helps. Cognition, 110(3):380–94, 2009.
[18] Sommer, M. A. and Wurtz, R. H. Frontal eye field sends delay activity related to movement, memory, and vision to the superior colliculus. Journal of Neurophysiology, 85(4):1673–1685, 2001.
[19] Soltani, A. and Wang, X.-J. Synaptic computation underlying probabilistic inference. Nature Neuroscience, 13(1):112–119, 2009.
[20] Watkins, C. J. and Dayan, P. Q-learning. Machine Learning, 292:279–292, 1992.
[21] Morris, G., Nevet, A., et al. Midbrain dopamine neurons encode decisions for future action. Nature Neuroscience, 9(8):1057–63, 2006.
[22] Roesch, M. R., Calu, D. J., et al. Dopamine neurons encode the better option in rats deciding between differently delayed or sized rewards. Nature Neuroscience, 10(12):1615–24, 2007.
[23] van Seijen, H., van Hasselt, H., et al. A theoretical and empirical analysis of Expected Sarsa. 2009 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, pages 177–184, 2009.
[24] Baird, L. Residual algorithms: Reinforcement learning with function approximation. In Proceedings of the 12th International Conference on Machine Learning (ICML), pages 30–37, 1995.
[25] Todd, M. T., Niv, Y., et al. Learning to use working memory in partially observable environments through dopaminergic reinforcement. In NIPS, volume 21, pages 1689–1696, 2009.
[26] Zipser, D. Recurrent network model of the neural mechanism of short-term active memory. Neural Computation, 3(2):179–193, 1991.
[27] Moody, S. L., Wise, S. P., et al. A model that accounts for activity in primate frontal cortex during a delayed matching-to-sample task. The Journal of Neuroscience, 18(1):399–410, 1998.
[28] Nassi, J. J. and Callaway, E. M. Parallel processing strategies of the primate visual system. Nature Reviews Neuroscience, 10(5):360–72, 2009.
[29] Hikosaka, O., Nakamura, K., et al. Basal ganglia orient eyes to reward. Journal of Neurophysiology, 95(2):567–84, 2006.
[30] Samejima, K., Ueda, Y., et al. Representation of action-specific reward values in the striatum. Science, 310(5752):1337–40, 2005.
[31] Wang, X.-J. Synaptic reverberation underlying mnemonic persistent activity. Trends in Neurosciences, 24(8):455–63, 2001.
[32] Roelfsema, P. R., van Ooyen, A., et al. Perceptual learning rules based on reinforcers and attention. Trends in Cognitive Sciences, 14(2):64–71, 2010.
[33] Deubel, H. and Schneider, W. Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research, 36(12):1827–1837, 1996.
[34] Frey, U. and Morris, R. Synaptic tagging and long-term potentiation. Nature, 385(6616):533–536, 1997.
[35] Moncada, D., Ballarini, F., et al. Identification of transmitter systems and learning tag molecules involved in behavioral tagging during memory formation. PNAS, 108(31):12931–6, 2011.
[36] Sajikumar, S. and Korte, M. Metaplasticity governs compartmentalization of synaptic tagging and capture through brain-derived neurotrophic factor (BDNF) and protein kinase Mzeta (PKMzeta). PNAS, 108(6):2551–6, 2011.
Learning visual motion in recurrent neural networks
Marius Pachitariu, Maneesh Sahani
Gatsby Computational Neuroscience Unit
University College London, UK
{marius, maneesh}@gatsby.ucl.ac.uk
Abstract
We present a dynamic nonlinear generative model for visual motion based on a
latent representation of binary-gated Gaussian variables. Trained on sequences of
images, the model learns to represent different movement directions in different
variables. We use an online approximate inference scheme that can be mapped
to the dynamics of networks of neurons. Probed with drifting grating stimuli and
moving bars of light, neurons in the model show patterns of responses analogous
to those of direction-selective simple cells in primary visual cortex. Most model
neurons also show speed tuning and respond equally well to a range of motion
directions and speeds aligned to the constraint line of their respective preferred
speed. We show how these computations are enabled by a specific pattern of
recurrent connections learned by the model.
1
Introduction
Perhaps the most striking property of biological visual systems is their ability to efficiently cope with
the high bandwidth data streams received from the eyes. Continuous sequences of images represent
complex trajectories through the high-dimensional nonlinear space of two dimensional images. The
survival of animal species depends on their ability to represent these trajectories efficiently and to
distinguish visual motion on a fast time scale. Neurophysiological experiments have revealed complicated neural machinery dedicated to the computation of motion [1]. In primates, the classical
picture of the visual system distinguishes between an object-recognition-focused ventral pathway
and an equally large dorsal pathway for object localization and visual motion. In this paper we propose a model for the very first cortical computation in the dorsal pathway: that of direction-selective
simple cells in primary visual cortex [2]. We continue a line of models which treats visual motion
as a general sequence learning problem and proposes asymmetric Hebbian rules for learning such
sequences [3, 4]. We reformulate these earlier models in a generative probabilistic framework which
allows us to train them on sequences of natural images. For inference we use an online approximate
filtering method which resembles the dynamics of recurrently-connected neural networks.
Previous low-level generative models of image sequences have mostly treated time as a third dimension in a sparse coding problem [5]. These approaches have thus far been difficult to map to neural
architecture as they have been implemented with noncausal inference algorithms. Furthermore, the
spatiotemporal sensitivity of each learned variable is determined by a separate three-dimensional basis function, requiring many variables to encode all possible orientations, directions of motion and
speeds. Cortical architecture points to a more distributed formation of motion representation, with
temporal sensitivity determined by the interaction of neurons with different spatial receptive fields.
Another major line of generative models of video analyzes the slowly changing features of visual
input and proposes complex cells as such slow feature learners [6], [7]. However, these models are
not expressive enough to encode visual motion and are more specifically designed to discover image
dimensions invariant in time.
A recent hierarchical generative model for mid-level visual motion separates the phases and amplitudes of complex coefficients applied to complex spatial basis functions [8]. This separation makes
it possible to build a second layer of variables that specifies a distribution on the phase coefficients
alone. This second layer learns to pool together first layer neurons with similar preferred directions.
The introduction of real and imaginary parts in the basis functions is inspired by older energy-based
approaches where pairs of neurons with receptive fields in quadrature phase feed their outputs with
different time delays to a higher-order neuron which thus acquires direction selectivity. The model
of [8], and models based on motion energy in general, do not however reproduce direction-selective
simple cells. In this paper we propose a network in which local motion is computed in a more
distributed fashion than is postulated by feedforward implementations of energy models.
1.1
Recurrent Network Models for Neural Sequence Learning.
Another view of the development of visual motion processing sees it as a special case of the general
problem of sequence learning [4]. Many structures in the brain seem to show various forms of sequence learning, and recurrent networks of neurons can naturally produce learned sequences through
their dynamics [9, 10]. Indeed, it has been suggested that the reproduction of remembered sequences
within the hippocampus has an important navigational role. Similarly, motor systems must be able
to generate sequences of control signals that drive appropriate muscle activity. Thus many neural
sequence models are fundamentally generative. By contrast, it is not evident that V1 should need
to reproduce the learnt sequences of retinal input that represent visual motion. Although generative
modelling provides a powerful mathematical device for the construction of inferential sensory representations, the role of actual generation has been debated. Is there really a potential connection
then, to the generative sequence reproduction models developed for other areas?
One possible role for explicit sequence generation in a sensory system is for prediction. Predictive
coding has indeed been proposed as a central mechanism to visual processing [11] and even as
a more general theory of cortical responses [12]. More specifically as a visual motion learning
mechanism, sequence learning forms the basis of an earlier simple toy but biophysically realistic
model based on STDP at the lateral synapses of a recurrently connected network [4]. In another
biophysically realistic model, recurrent connections are set by hand rather than learned but they
produce direction selectivity and speed tuning in simulations of cat primary visual cortex [13]. Thus,
the recurrent mechanisms of sequence learning may indeed be important. In the following section
we define mathematically a probabilistic sequence modelling network which can learn patterns of
visual motion in an unsupervised manner from 16 by 16 patches with 512 latent variables connected
densely to each other in a nonlinear dynamical system.
Figure 1: a. Toy sequence learning model with biophysically realistic neurons from [4]. Neurons N1 and N2 have the same RF, as indicated by the dotted line, but after STDP learning of the recurrent connections with other neurons in the chain, N1 and N2 learn to fire only for rightward and leftward motion, respectively. b. Graphical model representation of the bgG-RNN. The square box represents that the variable z^t is not random, but is given by z^t = x^t ⊙ h^t.
2 Probabilistic Recurrent Neural Networks
In this section we introduce the binary-gated Gaussian recurrent neural network as a generative
model of sequences of images. The model belongs to the class of nonlinear dynamical systems.
Inference methods in such models typically require expensive variational [14] or sampling based
approximations [15], but we found that a low cost online filtering method works sufficiently well to
learn an interesting model. We begin with a description of binary-gated Gaussian sparse coding for
still images and then describe how to define the dependencies in time between variables.
2.1 Binary Gated Gaussian Sparse Coding (bgG-SC).
Binary-gated Gaussian sparse coding ([16], also called spike-and-slab sparse coding [17]¹) may be seen as a limit of sparse coding with a mixture-of-Gaussians prior [18] where one mixture component has zero variance. Mathematically, the data y^t is obtained by multiplying a matrix W of basis filters with a vector h^t ⊙ x^t, where ⊙ denotes the Hadamard or element-wise product, x^t ∈ R^N is Gaussian and spherically distributed with standard deviation σ_x, and h^t ∈ {0, 1}^N is a vector of independent Bernoulli-distributed elements with success probabilities p. Finally, small amounts of isotropic Gaussian noise with standard deviation σ_y are added to produce y^t. For notational consistency with the proposed dynamic version of this model, the t superscript indexes time. The joint log-likelihood is
L^t_{SC} = −‖y^t − W(h^t ⊙ x^t)‖²/2σ_y² − ‖x^t‖²/2σ_x² + Σ_{j=1}^N [ h^t_j log p_j + (1 − h^t_j) log(1 − p_j) ] + const,    (1)
where N is the number of basis filters in the model. By using appropriately small activation probabilities p, the effective prior on h^t ⊙ x^t can be made arbitrarily sparse. Probabilistic inference in sparse coding is intractable, but efficient variational approximation methods exist. We use a very fast approximation to MAP inference, the matching pursuit algorithm (MP) [19]. Instead of using MP to extract a fixed number of coefficients per patch as usual, we extract coefficients for as long as the joint log-likelihood increases. Patches with more complicated structure will naturally require more coefficients to code. Once values for x^t and h^t are filled in, the gradient of the joint log-likelihood with respect to the parameters is easy to derive. Note that the x^t_k for which h^t_k = 0 can be integrated out in the likelihood, as they receive no contribution from the data term in (1). Due to the MAP approximation, only W can be learned. Therefore, we set σ_x², σ_y² to reasonable values, both on the order of the data variance. We also adapted p_k during learning so that each filter was selected by the MP process a roughly equal number of times. This helped to stabilise learning, avoiding a tendency to very unequal convergence rates otherwise encountered.
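The stopping rule can be made concrete with a short sketch. The following NumPy code is ours, not the authors'; it assumes unit-norm filter columns and, for simplicity, uses the raw projection as the coefficient rather than the shrunken MAP value. It runs matching pursuit until the joint log-likelihood (1) stops increasing:

```python
import numpy as np

def bgg_sc_matching_pursuit(y, W, sigma_y, sigma_x, p):
    """Greedy MAP inference for bgG-SC by matching pursuit.

    Keeps adding (gate, coefficient) pairs while the joint log-likelihood
    (1) increases. y: (D,) whitened patch; W: (D, N) filters with unit-norm
    columns; p: (N,) prior gate probabilities.
    """
    N = W.shape[1]
    h = np.zeros(N)
    x = np.zeros(N)
    resid = y.astype(float).copy()

    def loglik(h, x, resid):
        return (-resid @ resid / (2 * sigma_y ** 2)
                - x @ x / (2 * sigma_x ** 2)
                + np.sum(h * np.log(p) + (1 - h) * np.log(1 - p)))

    best = loglik(h, x, resid)
    while True:
        proj = W.T @ resid                 # correlation with each filter
        proj[h == 1] = 0.0                 # each gate opens at most once
        k = int(np.argmax(np.abs(proj)))
        h_new, x_new = h.copy(), x.copy()
        h_new[k], x_new[k] = 1.0, proj[k]
        resid_new = resid - proj[k] * W[:, k]
        cand = loglik(h_new, x_new, resid_new)
        if cand <= best:                   # likelihood stopped increasing
            return h, x
        h, x, resid, best = h_new, x_new, resid_new, cand
```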
When applied to whitened small patches from images, the algorithm produced localized Gabor-like
receptive fields as usual for sparse coding, with a range of frequencies, phases, widths and aspect
ratios. The same model was shown in [16] to reproduce the diverse shapes of V1 receptive fields,
unlike standard sparse coding [20]. We found that when we varied the average number of coefficients
recruited per image, the receptive fields of the learned filters varied in size. For example with only
one coefficient per image, a large number of filters represented edges extending from one end of the
patch to the other. With a large number of coefficients, the filters concentrated their mass around
just a few pixels. With even more coefficients, the learned filters gradually became Fourier-like.
During learning, we gradually adapted the average activation of each variable htk by changing the
prior activation probabilities pk . For 16x16 patches in a twice overcomplete SC model (number
of filters = twice the number of pixels), we found that learning with 10-50 coefficients on average
prevented the filters from becoming too much or too little localized in space.
¹ Although less evocative, we prefer the term "binary-gated Gaussian" to "spike-and-slab", partly because our slab is really more of a hump, and partly because the spike refers to a feature seen only in the density of the product h_i x_i, rather than in either the values or distributions of the component variables.
2.2 Binary-Gated Gaussian Recurrent Neural Network (bgG-RNN).
To obtain a dynamic hidden model for sequences of images {y^t} we specify the following conditional probabilities between hidden chains of variables h^t, x^t:

P(x^{t+1}, h^{t+1} | x^t, h^t) = P(x^{t+1}) P(h^{t+1} | h^t ⊙ x^t),
P(x^{t+1}) = N(0, σ_x² I),
P(h^{t+1} | h^t ⊙ x^t) = σ(R(h^t ⊙ x^t) + b),    (2)
where R is a matrix of recurrent connections, b is a vector of biases, and σ is the standard sigmoid function σ(a) = 1/(1 + exp(−a)). Note how the x^t are always drawn independently, while the conditional probability for h^{t+1} depends only on h^t ⊙ x^t. We arrived at these designs based on a few observations. First, as in inference for bgG-SC, the conditional dependence on h^t ⊙ x^t allows us to integrate out the variables x^t, x^{t+1} for which the respective gates in h^t, h^{t+1} are 0. Second, we observed that adding Gaussian linear dependencies between x^{t+1} and x^t ⊙ h^t did not qualitatively modify the results reported here. However, dropping P(h^{t+1} | h^t ⊙ x^t) in favor of P(x^{t+1} | h^t ⊙ x^t) resulted in a model which could no longer learn a direction-selective representation. For simplicity we chose the minimal model specified by (2). The full log-likelihood for the bgG-RNN is L_{bgG-RNN} = Σ_t L^t_{bgG-RNN}, where

L^t_{bgG-RNN} = const − ‖y^t − W(x^t ⊙ h^t)‖²/2σ_y² − ‖x^t‖²/2σ_x²
  + Σ_{j=1}^N h^t_j log σ(R(h^{t−1} ⊙ x^{t−1}) + b)_j
  + Σ_{j=1}^N (1 − h^t_j) log(1 − σ(R(h^{t−1} ⊙ x^{t−1}) + b)_j),    (3)
where x0 = 0 and h0 = 0 are both defined to be vectors of zeros.
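For concreteness, here is a small sketch of ancestral sampling from this generative model: a minimal NumPy implementation of Eq. (2) plus the observation model, with function and argument names of our choosing.

```python
import numpy as np

def sample_bgg_rnn(W, R, b, sigma_x, sigma_y, T, rng=None):
    """Ancestral sampling from the bgG-RNN, Eq. (2) plus the observation model.

    W: (D, N) filters; R: (N, N) recurrent weights; b: (N,) biases.
    Returns a (T, D) image sequence; x^0 = h^0 = 0 by convention.
    """
    rng = np.random.default_rng() if rng is None else rng
    D, N = W.shape
    z = np.zeros(N)                                   # z^0 = h^0 * x^0
    Y = np.zeros((T, D))
    for t in range(T):
        p_h = 1.0 / (1.0 + np.exp(-(R @ z + b)))      # sigmoid gate priors
        h = (rng.random(N) < p_h).astype(float)       # Bernoulli gates
        x = sigma_x * rng.standard_normal(N)          # iid Gaussian coefficients
        z = h * x                                     # Hadamard product h ⊙ x
        Y[t] = W @ z + sigma_y * rng.standard_normal(D)
    return Y
```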
2.3 Inference and learning of bgG-RNN.
The goal of inference is to set the values of x̂^t, ĥ^t for all t in such a way as to minimize the objective set by (3). Assuming we have already set x̂^t, ĥ^t for t = 1 to T, we propose to obtain x̂^{T+1}, ĥ^{T+1} exclusively from x̂^T, ĥ^T. This scheme might be called greedy filtering. In greedy filtering, inference is causal and Markov with respect to time. At step T+1 we only need to solve a simple SC problem given by the slice L^{T+1}_{bgG-RNN} of the likelihood (3), where x^T, h^T have been replaced with the estimates x̂^T, ĥ^T. The greedy filtering algorithm proposed here scales linearly with the number of time steps considered and is well suited for online inference. The algorithm might not produce very accurate estimates of the global MAP settings of the hidden variables, but we found it was sufficient for learning a complex bgG-RNN model. In addition, its simplicity, coupled with the fast MP algorithm in each L^t_{bgG-RNN} slice, resulted in very fast inference and consequently fast learning. We usually learned models in under one hour on a quad-core workstation.
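A minimal sketch of greedy filtering follows, reusing the matching-pursuit routine from the bgG-SC sketch above; all names are ours, and the clipping of the predicted gate probabilities is a numerical convenience, not part of the model.

```python
import numpy as np

def greedy_filter(Y, W, R, b, sigma_x, sigma_y):
    """One causal sweep of greedy filtering over a sequence Y of shape (T, D).

    Each time slice of likelihood (3) is a bgG-SC problem whose gate prior is
    the recurrent prediction sigma(R(h^{t-1} ⊙ x^{t-1}) + b); it is solved
    with the matching-pursuit sketch bgg_sc_matching_pursuit defined earlier.
    """
    T, _ = Y.shape
    N = W.shape[1]
    z_prev = np.zeros(N)                              # x̂^0 ⊙ ĥ^0 = 0
    H, X = np.zeros((T, N)), np.zeros((T, N))
    for t in range(T):
        p_t = 1.0 / (1.0 + np.exp(-(R @ z_prev + b))) # predicted gate priors
        p_t = np.clip(p_t, 1e-6, 1.0 - 1e-6)          # numerical safety only
        h, x = bgg_sc_matching_pursuit(Y[t], W, sigma_y, sigma_x, p_t)
        H[t], X[t] = h, x
        z_prev = h * x
    return H, X
```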
Due to our approximate inference scheme, some parameters in the model had to be set manually. These are σ_x² and σ_y², which control the relative strengths in the likelihood of three terms: the data likelihood, the smallness prior on the Gaussian variables, and the interaction between sets of x^t, h^t consecutive in time. In our experiments we set σ_y² equal to the data variance and σ_x² = 2σ_y². We found that such large levels of expected observation noise were necessary to drive robust learning in R.
For learning, we initialized parameters randomly to small values and first learned W exclusively. Once the filters converge, we turn on learning for R. W does not change very much beyond this point. We found learning of R was sensitive to the learning rate. We set the learning rate to 0.05 per batch, used a momentum term of 0.75, and batches of 30 sets of 100-frame sequences. We stabilized the mean activation probability of each neuron individually by actively and quickly tuning the biases b during learning. We whitened images with a center-surround filter and standardized the whitened pixel values.
Gradients required for learning R show similarities to the STDP learning rule used in [3] and [4].
∂L^t_{bgG-RNN} / ∂R_{jk} = h^{t−1}_k x^{t−1}_k ( h^t_j − σ(R(h^{t−1} ⊙ x^{t−1}) + b)_j ).    (4)
We will assume for neural interpretation that the positive and negative values of x^t ⊙ h^t are encoded by different neurons. If for a given neuron x^{t−1}_k is always positive, then the gradient (4) is only strictly positive when h^{t−1}_k = 1 and h^t_j = 1, and strictly negative when h^{t−1}_k = 1 and h^t_j = 0. In other words, the connection R_{jk} is strengthened when neuron k appears to cause neuron j to activate and inhibited if neuron k fails to activate neuron j. A similar effect can be observed for the negative part of x^{t−1}_k. This kind of Hebbian rule is widespread in cortex for long-term learning and is used in previous computational models of neural sequence learning that partly motivated our work [4].
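To make the update explicit, the following sketch accumulates the gradient (4) over one sequence of inferred gates and coefficients and applies the learning-rate and momentum settings quoted above; it is a hypothetical helper, not the authors' code.

```python
import numpy as np

def recurrent_gradient_step(H, X, R, b, vel=None, lr=0.05, momentum=0.75):
    """One gradient-ascent step on R and b from Eq. (4) for one sequence.

    H, X: (T, N) inferred gates and coefficients. The gradient for R_{jk} is
    z^{t-1}_k (h^t_j - sigma_j): a Hebbian pre * (post - prediction) rule.
    Learning rate 0.05 and momentum 0.75 follow the text.
    """
    T, _ = H.shape
    Z = H * X                                          # z^t = h^t ⊙ x^t
    gR, gb = np.zeros_like(R), np.zeros_like(b)
    for t in range(1, T):
        pred = 1.0 / (1.0 + np.exp(-(R @ Z[t - 1] + b)))
        err = H[t] - pred                              # h^t_j - sigma_j
        gR += np.outer(err, Z[t - 1])                  # err_j * z^{t-1}_k
        gb += err
    steps = max(T - 1, 1)
    if vel is None:
        vel = [np.zeros_like(R), np.zeros_like(b)]
    vel[0] = momentum * vel[0] + lr * gR / steps
    vel[1] = momentum * vel[1] + lr * gb / steps
    return R + vel[0], b + vel[1], vel
```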
3 Results
For data, we selected about 100 short clips, each 100 frames long, from a high-resolution BBC wildlife documentary. Clips were chosen only if they seemed on visual inspection to have sufficient motion
energy over the 100 frames. The clips chosen ended up being mostly panning shots and close-ups
of animals in their natural habitats (the closer the camera is to a moving object, the faster it appears
to move).
The results presented below measure the ability of the model to produce responses similar to those of
neurons recorded in primate experiments. The stimuli used in these experiments are typically of two
kinds: drifting gratings presented inside circular or square apertures or translating bars of various
lengths. These two kinds of stimuli produce very clear motion signals, unlike motion produced
by natural movies. In fact, most patches we used in training contained a wide range of spatial
orientations, most of which were not orthogonal to the direction of local translation. After comparing
model responses to neural data, we finish with an analysis of the network connectivity pattern that
underlies the responses of model neurons.
3.1 Measuring responses in the model.
We needed to deal with the potential negativity of the variables in the model, since neural responses
are always positive quantities. We decided to separate the positive and negative parts of the Gaussian
variables into two distinct sets of responses. This interpretation is relatively common for sparse
coding models, and we also found that in many units direction selectivity was enhanced when the positive and negative parts of x^t were separated (as opposed to taking h^t as the neural response).
The enhancement was supported by a particular pattern of network connectivity which we describe
in a later subsection.
Since our inference procedure is deterministic it will produce the exact same response to the same
stimulus every time. We added Gaussian noise to the spatially whitened test image sequences, partly
to capture the noisy environments in cortex and partly to show robustness of direction selectivity to
noise. The amount of noise added was about half the expected variance of the stimulus.
3.2 Direction selectivity and speed tuning.
Direction selectivity is measured with the following index: DI = 1 − R_opp/R_max. Here R_max represents the response of a neuron in its preferred direction, while R_opp is the response in the direction opposite to that preferred. This selectivity index is commonly used to characterize neural data. To define a neuron's preferred direction, we inferred latent coefficients over many repetitions of square gratings drifting in 24 directions, at speeds ranging from 0 to 3 pixels/frame in steps of 0.25. The periodicity of the stimulus was twice the patch size, so that motion locally appeared as an advancing long edge. The neuron's preferred direction was defined as the direction in which it responded most strongly, averaged over all speeds. Once a preferred direction was established, we defined the neuron's preferred speed as the speed at which it responded most strongly in its preferred direction. Finally, at this preferred speed and direction, we calculated the DI of the neuron. Similar results were obtained if we averaged over all speed conditions.
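A sketch of this measurement protocol, assuming a precomputed direction × speed response table for one unit (names are ours):

```python
import numpy as np

def direction_index(responses, directions):
    """DI = 1 - R_opp / R_max from a (n_dir, n_speed) response table.

    directions: (n_dir,) stimulus directions in degrees. The preferred
    direction maximizes the speed-averaged response; the preferred speed
    maximizes the response in that direction; DI is evaluated at that pair.
    """
    pref_dir = int(np.argmax(responses.mean(axis=1)))
    pref_speed = int(np.argmax(responses[pref_dir]))
    # direction closest to 180 degrees away from the preferred one
    delta = (directions - directions[pref_dir]) % 360
    opp_dir = int(np.argmin(np.abs(delta - 180)))
    r_max = responses[pref_dir, pref_speed]
    r_opp = responses[opp_dir, pref_speed]
    return 1.0 - r_opp / r_max
```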
[Figure 2: (a) response (a.u.) versus speed (pix/frame) for preferred and non-preferred directions; (b) histograms of direction index and preferred speed (pix/frame) over units; (c) orientation-selectivity scatter; see caption below.]
Figure 2: a. Speed tuning of 16 randomly chosen neurons. Note that some neurons only respond
weakly without motion, some are inhibited in the non-preferred direction compared to static responses and most have a clear peak in the preferred direction at specific speeds. b. top: Histogram
of direction selectivity indices. bottom: Histogram of preferred speeds. c. For each of the 10
strongest excitatory connections per neuron we plot a dot indicating the orientation selectivity of
pre- and post-synaptic units. Note that most of the points are within π/4 of the diagonal, an area
marked by the black lines. Notice also the relatively increased frequency of horizontal and vertical
edges.
We found that most neurons in the model had sharp tuning curves and direction-selective responses. We cross-validated the value of the direction index with a new set of responses (fixing the preferred direction) to obtain an averaged DI of 0.65, with many neurons having a DI close to 1 (see figure 2(b)). This distribution is similar to those shown in [21] for real V1 neurons. 714 of 1024 neurons were classified as direction-selective, on the basis of having DI > 0.5. Distributions of direction indices and optimal speeds are shown in figure 2(b). A neuron's preferred direction was always close to orthogonal to the axis of its Gabor receptive field, except for a few degenerate cases around the edges of the patch. We defined the population tuning curve as the average of the tuning curves of individual neurons, each aligned by their preferred direction of motion. The DI of the population was 0.66. Neurons were also speed tuned, in that responses could vary greatly and systematically as a function of speed, and DI was non-constant as a function of speed (see figure 2(a)). Usually at low and high speeds the DI was 0, but in between a variety of responses were observed. Speed tuning is also present in recorded V1 neurons [22], and could form the basis for global motion computation based on the intersection of constraints method [23].
3.3 Vector velocity tuning.
To get a more detailed description of single-neuron tuning, we investigated responses to different stimulus velocities. Since drifting gratings only contain motion orthogonal to their orientation, we switched to small (1.25 pix × 2 pix) drifting Gabors for these experiments. We tested the network's behavior with a full set of 24 Gabor orientations, drifting in a full set of 24 directions with speeds ranging from 0.25 pixels/frame to 3 pixels/frame, for a total of 6912 = 24 × 24 × 12 conditions with hundreds of repetitions of each condition. For each neuron we isolated its responses to drifting Gabors of the same orientation travelling at the 12 different speeds in the 24 different directions. We present these for several neurons in polar plots in figure 3(b). Note that responses tend to be high to vector velocities lying on a particular line.
3.4 Connectomics in silico
We had anticipated that the network would learn direction selectivity via specific patterns of recurrent connection, in a fashion similar to the toy model studied in [4]. We now show that the pattern
of connectivity indeed supports this computation.
The most obvious connectivity pattern, clearly visible for single neurons in figure 3(a), shows that
neurons in the model excite other neurons in their preferred direction and inhibit neurons in the
opposite direction. This asymmetric wiring naturally supports direction selectivity.
Asymmetry is not sufficient for direction selectivity to emerge. In addition, strong excitatory projections have to connect together neurons with similar preferred orientations and similar preferred directions. Only then will direction information propagate in the network in the identities of the active variables (and the signs of their respective coefficients x^t). We considered for each neuron its 10 strongest excitatory outputs and calculated the expected deviation between the orientation of these outputs and the orientation of the root neuron. The average deviation was 23°, half the expected deviation if connections were random. Figure 2(c) shows a raster plot of the pairs of orientations. The same pattern held when we considered the strongest excitatory inputs to a given neuron, with an expected deviation of orientations of 24°. We could not directly measure whether neurons connected together according to direction selectivity because of the sign ambiguity of the x^t variables. One can visually assess in figure 3(a) that neurons connected asymmetrically with respect to their RF axis, but did they also respond to motion primarily in that direction? As can be seen in figure 3(b), which shows the same neurons as figure 3(a), they did indeed. Direction tuning is a measure of the incoming connections to a neuron, while figure 3(a) shows the outgoing connections. We can thus qualitatively assess that recurrence primarily connected together neurons with similar direction preferences.
Figure 3: a. Each plot is based on the outgoing connections of a random set of direction-selective
neurons. The centers of the Gabor fits to the neurons' receptive fields are shown as circles on a square
representing the 16 by 16 image patch. The root neurons are shown as filled black circles. Filled
red/blue circles show neurons to which the root neurons have strong positive/negative connections,
with a cutoff at one fourth of the maximal absolute connection. The width of the connecting lines
and the area of the filled circles are proportional to the strength of the connection. A dynamic version
of this plot during learning is shown as a movie in the supplementary material. b. The polar plots
show the responses of neurons presented in a to small, drifting Gabors that match their respective
orientations. Neurons are aligned in exactly the same manner on the 4 by 4 grid. Every small
disc in every polar plot represents one combination of speed and direction and the color of the disc
represents the magnitude of the response, with intense red being maximal and dark blue minimal.
The vector from the center of the polar plot to the center of each disc is proportional to the vector
displacement of each consecutive frame in the stimulus sequence. Increasing disc sizes at faster
speeds are used for display purposes. The very last polar plot shows the average of the responses of
the entire population, when all neurons are aligned by their preferred direction.
We also observed that neurons mostly projected strong excitatory outputs to other neurons that were
aligned parallel to the root neuron?s main axis (visible in figures 3(a)). We think this is related to the
fact that locally all edges appear to translate parallel to themselves. A neuron X with a preferred
direction v and preferred speed s has a so-called constraint line (CL), parallel to the Gabor's axis.
When the neuron is activated by an edge E, the constraint line is formed by all possible future
locations of edge E that are consistent with global motion in the direction v with speed s. Due to
the presence of long contours in natural scenes, the activation of X can predict at the next time step
the activations of other neurons with RFs aligned on the CL. Our likelihood function encourages
the model to learn to make such predictions as well as it can. To quantify the degree to which
connections were made along a CL, for each neuron we fit a 2D Gaussian to the distribution of
RF positions of the 20 most strongly connected neurons (the filled red circles in figure 3(a)), each
further weighted by its strength. The major axes of the Gaussians represent the constraint lines of the root neurons and are, in 862 out of 1024 neurons, less than 15° away from perfectly parallel to the root neurons' axes. The distance of each neuron to their constraint line was on average 1.68 pixels.
Yet perhaps the strongest manifestation of the CL tuning property of neurons in the model can be
seen in their responses to small stimuli drifting with different vector velocities. Many of the neurons
in figure 3(b) respond best when the velocity vector ends on the constraint line and a similar trend
holds for the aligned population average.
It is already known from experiments of axon mappings simultaneous with dye-sensitive imaging
that neurons in V1 are more likely to connect with neurons of similar orientations situated as far away
as 4 mm / 4-8 minicolumns away [24]. The model presented here makes three further predictions:
that neurons connect more strongly to neurons in their preferred direction, that connected neurons
lie on the constraint line and that they have similar preferred directions to the root neuron.
4 Discussion
We have shown that a network of recurrently-connected neurons can learn to discriminate motion
direction at the level of individual neurons. Online greedy filtering in the model is a sufficient
approximate-inference method to produce direction-selective responses. Fast, causal and online
inference is a necessary requirement for practical vision systems (such as the brain) but previous
visual-motion models did not provide such an implementation of their inference algorithms. Another
shortcoming of these previous models is that they obtain direction selectivity by having variables
with different RFs at different time lags, effectively treating time as a third spatial dimension. A
dynamic generative model may be more suited for online inference with methods such as particle
filtering, assumed density filtering, or the far cheaper method employed here of greedy filtering.
The model neurons can be interpreted as predicting the motion of the stimulus. The lateral inputs
they receive are however not sufficient in themselves to produce a response, the prediction also has to
be consistent with the bottom-up input. When the two sources of information disagree, the network
compromises but not always in favor of the bottom-up input, as this source of information might be
noisy. This is reflected by the decrease in reconstruction accuracy from 80% to 60% after learning
the recurrent connections. It is tempting to think of V1 direction selective neurons as not only edge
detectors and contour predictors (through the nonclassical RF) but also predictors of future edge
locations, through their specific patterns of connectivity.
The source of direction selectivity in cortex is still an unresolved question, but note that in the
retina of non-primate mammals it is known with some certainty that recurrent inhibition in the
non preferred direction is largely responsible for the direction selectivity of retinal ganglion cells
[25]. It is also known that unlike orientation and ocular dominance, direction selectivity requires
visual experience to develop [26], perhaps because direction selectivity depends on a specific pattern
of lateral connectivity unlike the largely feedforward orientation and binocular tuning. Another
experiment showed that after many exposures to the same moving stimulus, the sequence of spikes
triggered in different neurons along the motion trajectory was also triggered in the complete absence
of motion, again indicating that motion signals in cortex may be generated internally from lateral
connections [27].
Thus, we see a number of reasons to propose that direction selectivity in the cortex may indeed
develop and be computed through a mechanism analogous to the one we have developed here. If so,
then experimental tests of the various predictions developed above should prove to be revealing.
References
[1] A Mikami, WT Newsome, and RH Wurtz. Motion selectivity in macaque visual cortex. II. Spatiotemporal range of directional interactions in MT and V1. Journal of Neurophysiology, 55(6):1328–1339, 1986.
[2] MS Livingstone. Mechanisms of direction selectivity in macaque V1. Neuron, 20:509–526, 1998.
[3] LF Abbott and KI Blum. Functional significance of long-term potentiation for sequence learning and prediction. Cerebral Cortex, 6:406–416, 1996.
[4] RPN Rao and TJ Sejnowski. Predictive sequence learning in recurrent neocortical circuits. Advances in Neural Information Processing, 12:164–170, 2000.
[5] B Olshausen. Learning sparse, overcomplete representations of time-varying natural images. IEEE International Conference on Image Processing, 2003.
[6] P Berkes, RE Turner, and M Sahani. A structured model of video produces primary visual cortical organisation. PLoS Computational Biology, 5, 2009.
[7] L Wiskott and TJ Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002.
[8] C Cadieu and B Olshausen. Learning transformational invariants from natural movies. Advances in Neural Information Processing, 21:209–216, 2009.
[9] D Barber. Learning in spiking neural assemblies. Advances in Neural Information Processing, 15, 2002.
[10] J Brea, W Senn, and JP Pfister. Sequence learning with hidden units in spiking neural networks. Advances in Neural Information Processing, 24, 2011.
[11] RP Rao and DH Ballard. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1):79–87, 1999.
[12] K Friston. A theory of cortical responses. Phil. Trans. R. Soc. B, 360(1456):815–836, 2005.
[13] RJ Douglas, C Koch, M Mahowald, KA Martin, and HH Suarez. Recurrent excitation in neocortical circuits. Science, 269(5226):981–985, 1995.
[14] TP Minka. Expectation propagation for approximate Bayesian inference. UAI'01: Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 362–369, 2001.
[15] A Doucet, N de Freitas, K Murphy, and S Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. UAI'00: Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 176–183, 2000.
[16] M Rehn and FT Sommer. A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields. Journal of Computational Neuroscience, 22:135–146, 2007.
[17] IJ Goodfellow, A Courville, and Y Bengio. Spike-and-slab sparse coding for unsupervised feature discovery. arXiv:1201.3382v2, 2012.
[18] BA Olshausen and KJ Millman. Learning sparse codes with a mixture-of-Gaussians prior. Advances in Neural Information Processing, 12, 2000.
[19] SG Mallat and Z Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993.
[20] BA Olshausen and DJ Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[21] MR Peterson, B Li, and RD Freeman. The derivation of direction selectivity in the striate cortex. Journal of Neuroscience, 24:3583–3591, 2004.
[22] GA Orban, H Kennedy, and J Bullier. Velocity sensitivity and direction selectivity of neurons in areas V1 and V2 of the monkey: influence of eccentricity. Journal of Neurophysiology, 56(2):462–480, 1986.
[23] EP Simoncelli and DJ Heeger. A model of neuronal responses in visual area MT. Vision Research, 38(5):743–761, 1998.
[24] WH Bosking, Y Zhang, B Schofield, and D Fitzpatrick. Orientation selectivity and the arrangement of horizontal connections in tree shrew striate cortex. Journal of Neuroscience, 17(6):2112–2127, 1997.
[25] SI Fried, TA Munch, and FS Werblin. Directional selectivity is formed at multiple levels by laterally offset inhibition in the rabbit retina. Neuron, 46(1):117–127, 2005.
[26] Y Li, D Fitzpatrick, and LE White. The development of direction selectivity in ferret visual cortex requires early visual experience. Nature Neuroscience, 9(5):676–681, 2006.
[27] S Xu, W Jiang, M Poo, and Y Dan. Activity recall in a visual cortical ensemble. Nature Neuroscience, 15:449–455, 2012.
Weighted Likelihood Policy Search
with Model Selection
Tsuyoshi Ueno*
Japan Science and Technology Agency
[email protected]
Takashi Washio
Osaka University
[email protected]
Kohei Hayashi
University of Tokyo
[email protected]
Yoshinobu Kawahara
Osaka University
[email protected]
Abstract
Reinforcement learning (RL) methods based on direct policy search (DPS) have
been actively discussed to achieve an efficient approach to complicated Markov
decision processes (MDPs). Although they have brought much progress in practical applications of RL, there still remains an unsolved problem in DPS related
to model selection for the policy. In this paper, we propose a novel DPS method,
weighted likelihood policy search (WLPS), where a policy is efficiently learned
through the weighted likelihood estimation. WLPS naturally connects DPS to the
statistical inference problem and thus various sophisticated techniques in statistics can be applied to DPS problems directly. Hence, by following the idea of the
information criterion, we develop a new measurement for model comparison in
DPS based on the weighted log-likelihood.
1 Introduction
In the last decade, several direct policy search (DPS) methods have been developed in the field of
reinforcement learning (RL) [1, 2, 3, 4, 5, 6, 7, 8, 9] and have been successfully applied to practical
decision making applications [5, 7, 9]. Unlike classical approaches [10], DPS characterizes a policy
as a parametric model and explores parameters such that the expected reward is maximized in a
given model space. Hence, if one employs a model with a reasonable number of DoF (degrees of
freedom), DPS could find a reasonable policy efficiently even when the target decision making task
has a huge number of DoF. Therefore, the development of an efficient model selection methodology
for the policy is crucial in RL research.
In this paper, we propose weighted likelihood policy search (WLPS): an efficient iterative policy
search algorithm that allows us to select an appropriate model automatically from a set of candidate
models. To this end, we first introduce a log-likelihood function weighted by the discounted sum of
future rewards as the cost function for DPS. In WLPS, the policy parameters are updated by iteratively maximizing the weighted log-likelihood for the obtained sample sequence. A key property of
WLPS is that the maximization of weighted log-likelihood corresponds to that of the lower bound of
the expected reward and thus, WLPS is guaranteed to increase the expected reward monotonically at
each iteration. This can be shown to converge to the same solution as the expectation-maximization
policy search (EMPS) [1, 4, 9]. In this way, our framework gives a natural connection between
DPS and the statistical inference problem for maximum likelihood estimation. One benefit of this
approach is that we can directly apply the information criterion scheme [11, 12], which is a familiar
theory in statistics, to the weighted log-likelihood. This enables us to construct a model selection
strategy for the policy by comparing the information criterion based on the weighted log-likelihood.
The contribution of this paper is summarized as follows:
* https://sites.google.com/site/tsuyoshiueno/
1. We prove that each update to the policy parameters resulting from the maximization of the
weighted log-likelihood has consistency and asymptotic normality, which have not yet been elucidated in DPS, and converges to the same solution as EMPS.
2. We introduce a prior distribution on the policy parameter and analyze the asymptotic behavior of the marginal weighted likelihood given by marginalizing out the policy parameter. We then propose a measure of the goodness of the policy model based on the posterior probability of the model, in a similar way to the Bayesian information criterion [12].
The rest of the paper is organized as follows. We first give a formulation of the DPS problem in RL,
and a short overview of EMPS (Section 2). Next, we present our new DPS framework, WLPS, and
investigate the theoretical aspects thereof (Section 3). In addition, we construct the model selection
strategy for the policy (Section 4). Finally, we present a statistical interpretation of WLPS and
discuss future directions of study in this regard (Section 5).
Related Works Several approaches have been proposed for the model selection problem in the
estimation of a state-action value function [13, 14]. [14] derived the PAC-Bayesian bounds for
estimating state-action value functions. [13] developed a complexity regularization based model
selection algorithm from the viewpoint of the minimization of the Bellman error, and investigated its
theoretical aspects. Although these studies allow us to select a good model for a state-value function
with theoretical supports, they cannot be applied to model selection for DPS directly. [15] developed
a model selection strategy for DPS by reusing the past observed sample sequences through the
importance weighted cross-validation (IWCV). However, IWCV requires heavy computational costs
and includes computational instability when estimating the importance sampling weights.
Recently, a number of studies have reformulated stochastic optimal control and RL as the minimization of a Kullback–Leibler (KL) divergence [16, 17, 18]. Our approach is closely
related to these works; in fact, WLPS can also be interpreted as the minimization problem of the
reverse form of KL divergence compared with that used in [16, 17, 18].
2 Preliminary: EMPS
We consider discrete-time, infinite-horizon Markov decision processes (MDPs), defined as the quadruple (X, U, p, r): X ⊆ R^{d_x} is a state space; U ⊆ R^{d_u} is an action space; p(x′|x, u) is a stationary transition distribution to the next state x′ when taking action u at state x; and r : X × U → R_+ is a positive reward received with the state transition. Let π_θ(u|x) := p(u|x, θ) be the stochastic parametrized policy followed by the agent, where the m-dimensional vector θ ∈ Θ, Θ ⊆ R^m, is an adjustable parameter. Given initial state x_1 and parameter vector θ, the joint distribution of the sample sequence {x_{2:n}, u_{1:n}} of the MDP is described as

p_θ(x_{2:n}, u_{1:n} | x_1) = π_θ(u_1|x_1) ∏_{i=2}^n p(x_i | x_{i−1}, u_{i−1}) π_θ(u_i | x_i).    (1)
We further impose the following assumptions on MDPs.
Assumption 1. For any θ ∈ Θ, the MDP given by Eq. (1) is aperiodic and Harris recurrent. Hence, MDP (1) is ergodic and has a unique invariant stationary distribution μ_θ(x) for any θ ∈ Θ [19].
Assumption 2. For any x ∈ X and u ∈ U, the reward r(x, u) is uniformly bounded.
Assumption 3. The policy π_θ(u|x) is thrice continuously differentiable with respect to parameter θ.
The general goal of DPS is to find an optimal policy parameter θ* ∈ Θ that maximizes the expected reward defined by

η(θ) := lim_{n→∞} ∫∫ p_θ(x_{2:n}, u_{1:n} | x_1) R_n dx_{2:n} du_{1:n},    (2)

where R_n := R_n(x_{1:n}, u_{1:n}) = (1/n) Σ_{i=1}^n r(x_i, u_i). In general, the direct maximization of objective function (2) forces us to solve a non-convex optimization problem with high non-linearity. Thus, instead of maximizing Eq. (2), many DPS methods maximize a lower bound on Eq. (2), which may be much more tractable than the original objective function.
Lemma 1 shows that there is a tight lower bound on objective function (2).
Lemma 1. [1, 4, 9] The following inequality holds for any distribution q(x_{2:n}, u_{1:n} | x_1):

ln η_n(θ) ≥ F_n(q, θ) := ∫∫ q(x_{2:n}, u_{1:n} | x_1) ln { p_θ(x_{2:n}, u_{1:n} | x_1) R_n / q(x_{2:n}, u_{1:n} | x_1) } dx_{2:n} du_{1:n}, ∀n,    (3)

where η_n(θ) = ∫∫ p_θ(x_{2:n}, u_{1:n} | x_1) R_n dx_{2:n} du_{1:n}. The equality holds if q(x_{2:n}, u_{1:n} | x_1) is a maximizer of F_n(q, θ) for some θ, i.e., q*(x_{2:n}, u_{1:n} | x_1) = argmax_q F_n(q, θ), which is satisfied when q*(x_{2:n}, u_{1:n} | x_1) ∝ p_θ(x_{2:n}, u_{1:n} | x_1) R_n.
The proof is given in Section 1 in the supporting material. Lemma 1 leads to an effective iterative algorithm, the so-called EMPS, which breaks down the potentially difficult maximization problem for the expected reward into two stages: 1) evaluation of the path distribution q*_{θ̄}(x_{2:n}, u_{1:n} | x_1) ∝ p_{θ̄}(x_{2:n}, u_{1:n} | x_1) R_n at the current policy parameter θ̄, and 2) maximization of F_n(q*_{θ̄}, θ) with respect to parameter θ. This EMPS cycle is guaranteed to increase the value of the expected reward unless the parameters already correspond to a local maximum [1, 4, 9].
Taking the partial derivative of the policy with respect to parameter θ, a new parameter vector θ̃ that maximizes F_n(q*_{θ̄}, θ) is found by solving the following equation:
∫∫ p_{θ̄}(x_{2:n}, u_{1:n} | x_1) ( Σ_{i=1}^n ψ_{θ̃}(x_i, u_i) ) R_n dx_{2:n} du_{1:n} = 0,    (4)

where ψ : X × U → Θ denotes the partial derivative of the logarithm of the policy with respect to parameter θ, i.e., ψ_θ(x, u) := (∂/∂θ) ln π_θ(u|x).
Note that if the state transition distribution p(x′|x, u) is known, we can easily derive parameter θ̃ analytically or numerically. However, p(x′|x, u) is generally unknown, and it is a non-trivial problem to identify this distribution in large-scale applications. Thus, it is desirable to find parameter θ̃ in a model-free way, i.e., the parameter is estimated from the sample sequences alone, instead of using p(x′|x, u). Although many variants of model-free EMPS algorithms [4, 6, 9, 15] have hitherto been developed, their fundamental theoretical properties, such as consistency and asymptotic normality at each iteration, have not yet been elucidated. Moreover, it is not even obvious whether they have such desirable statistical properties.
3 Proposed framework: WLPS
In this section, we newly introduce a weighted likelihood as the objective function for DPS (Definition 1), and derive the WLPS algorithm, executed by iterating two steps: evaluation and maximization of the weighted log-likelihood function (Algorithm 1). Then, in Section 3.2, we show that
WLPS is guaranteed to increase the expected reward at each iteration, and to converge to the same
solution as EMPS (Theorem 1).
3.1 Overview of WLPS
In this study, we consider the following weighted likelihood function.
Definition 1. Suppose that given initial state x_1, a random sequence {x_{2:n}, u_{1:n}} is generated from model p_{θ̄}(x_{2:n}, u_{1:n} | x_1) of the MDP. Then, we define a weighted likelihood function p̃_{θ̄,γ}(x_{2:n}, u_{1:n} | x_1) and a weighted log-likelihood function L̃_n(θ), respectively, as

p̃_{θ̄,γ}(x_{2:n}, u_{1:n} | x_1) := π_θ(u_1|x_1)^{Q^γ_1} ∏_{i=2}^n π_θ(u_i|x_i)^{Q^γ_i} p(x_i | x_{i−1}, u_{i−1}),    (5)

L̃_n(θ) := ln p̃_{θ̄,γ}(x_{2:n}, u_{1:n} | x_1) := Σ_{i=1}^n Q^γ_i ln π_θ(u_i|x_i) + Σ_{i=2}^n ln p(x_i | x_{i−1}, u_{i−1}),    (6)

where Q^γ_i is the discounted sum of the future rewards, Q^γ_i := Σ_{j=i}^n γ^{j−i} r(x_j, u_j), and γ is a discount factor such that γ ∈ [0, 1).
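The weights Q^γ_i are the only quantities in (6) that depend on future data; they can be computed in a single backward pass, as in this sketch (ours, not the paper's notation):

```python
import numpy as np

def discounted_future_rewards(r, gamma):
    """Weights Q^gamma_i = sum_{j >= i} gamma^(j - i) r_j from Eqs. (5)-(6).

    r: (n,) rewards r(x_i, u_i) along one trajectory. Uses the backward
    recursion Q_i = r_i + gamma * Q_{i+1}, which costs O(n).
    """
    Q = np.zeros(len(r))
    acc = 0.0
    for i in range(len(r) - 1, -1, -1):
        acc = r[i] + gamma * acc
        Q[i] = acc
    return Q
```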
Now, let us consider the maximization of weighted log-likelihood function (6). Taking the partial derivative of weighted log-likelihood (6) with respect to parameter θ, we can obtain the maximum weighted log-likelihood estimator θ̂_n := θ̂(x_{1:n}, u_{1:n}) as a solution of the following estimating equation:
G^γ_n(θ̂_n) := Σ_{i=1}^n ψ_{θ̂_n}(x_i, u_i) Q^γ_i = Σ_{i=1}^n Σ_{j=i}^n γ^{j−i} ψ_{θ̂_n}(x_i, u_i) r(x_j, u_j) = 0.    (7)
Note that when policy π_θ is modeled by an exponential family, estimating equation (7) can easily be solved analytically or numerically using convex optimization techniques. In WLPS, the update of the policy parameter is performed by evaluating estimating equation (7) and finding estimator θ̂_n iteratively from this equation. Algorithm 1 gives an outline of the WLPS procedure.
Algorithm 1 (WLPS).
1. Generate a sample sequence {x_{1:n}, u_{1:n}} by employing the current policy parameter θ, and evaluate estimating equation (7).
2. Find a new estimator by solving estimating equation (7) and check for convergence. If convergence is not satisfied, return to step 1.
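A sketch of this loop is given below; the two callables (trajectory sampler and weighted-MLE solver) are assumed interfaces for illustration, and discounted_future_rewards is the backward-recursion helper sketched after Definition 1.

```python
import numpy as np

def wlps(sample_trajectory, solve_weighted_mle, theta0,
         gamma=0.99, n_steps=1000, max_iter=50, tol=1e-4):
    """Sketch of Algorithm 1 (WLPS).

    sample_trajectory(theta, n) -> (X, U, r) rolls out the current policy;
    solve_weighted_mle(X, U, Q) returns a root of estimating equation (7),
    i.e. the maximizer of the weighted log-likelihood (6). Both callables
    are assumed interfaces for illustration, not the paper's notation.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        X, U, r = sample_trajectory(theta, n_steps)     # step 1: roll out
        Q = discounted_future_rewards(r, gamma)         # weights Q^gamma_i
        theta_new = np.asarray(solve_weighted_mle(X, U, Q), dtype=float)
        if np.linalg.norm(theta_new - theta) < tol:     # convergence check
            return theta_new                            # step 2 satisfied
        theta = theta_new                               # otherwise iterate
    return theta
```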
It should be noted that WLPS is guaranteed to monotonically increase the expected reward η(θ) and to converge asymptotically, under certain conditions, to the same solution as EMPS, given by Eq. (4). In the next subsection, we discuss the reason why WLPS satisfies such desirable statistical properties.
3.2 Convergence of WLPS
To begin with, we show consistency and asymptotic normality of estimator θ̂_n given by Eq. (7) when γ is any constant between 0 and 1. To this end, we first introduce the notion of uniform mixing, which plays an important role when discussing statistical properties of stochastic processes [19]. The definition of uniform mixing is given below.
Definition 2. Let {Y_i : i ∈ {…, −1, 0, 1, …}} be a strictly stationary process on a probability space (Ω, F, P), and let F_k^m be the σ-algebra generated by {Y_k, …, Y_m}. Then, the process {Y_i} is said to be uniform mixing (φ-mixing) if φ(s) → 0 as s → ∞, where

φ(s) := sup_k sup_{A ∈ F^k_{−∞}, B ∈ F^∞_{k+s}} |P(B|A) − P(B)|, P(A) ≠ 0.
The function φ(s) is called the mixing coefficient, and if the mixing coefficient converges to zero exponentially fast, i.e., there exist constants D > 0 and ρ ∈ [0, 1) such that φ(s) < Dρ^s, then the stochastic process is called geometrically uniform mixing. Note that if a stochastic process is a strictly stationary finite-state Markov process and ergodic, the process satisfies the geometrically uniform mixing conditions [19].
Now, we impose certain conditions for proving the consistency and asymptotic normality of estimator θ̂_n, summarized as follows.
Assumption 4. For any θ ∈ Θ, the MDP p_θ(x_{2:n}, u_{1:n} | x_1) is geometrically uniform mixing.
Assumption 5. For any x ∈ X, u ∈ U, and θ ∈ Θ, the function ψ_θ(x, u) is uniformly bounded.
Assumption 6. For any θ̄ ∈ Θ, there exists a parameter value θ̃ ∈ Θ such that

E^∞_{x_1∼μ_{θ̄}}[ ψ_{θ̃}(x_1, u_1) Σ_{j=1}^∞ γ^{j−1} r(x_j, u_j) ] = 0,    (8)

where E^∞_{x_1∼μ_{θ̄}}[·] denotes the expectation over {x_{2:∞}, u_{1:∞}} with respect to the distribution lim_{n→∞} μ_{θ̄}(x_1) π_{θ̄}(u_1|x_1) ∏_{i=2}^n p(x_i | x_{i−1}, u_{i−1}) π_{θ̄}(u_i|x_i).
Assumption 7. For any θ̄ ∈ Θ and ε > 0,

sup_{θ : |θ − θ̃| > ε} ‖ E^∞_{x_1∼μ_{θ̄}}[ ψ_θ(x_1, u_1) Σ_{j=1}^∞ γ^{j−1} r(x_j, u_j) ] ‖ > 0.

Assumption 8. For any θ̄ ∈ Θ, the matrix A := A(θ̃) = E^∞_{x_1∼μ_{θ̄}}[ K_{θ̃}(x_1, u_1) Σ_{j=1}^∞ γ^{j−1} r(x_j, u_j) ] is invertible, where K_θ(x, u) := ∂_θ ψ_θ(x, u) = ∂²/(∂θ ∂θ^⊤) ln π_θ(u|x).
Under the conditions given in Assumptions 1-7, estimator θ̂_n converges to θ̃ in probability, as shown in the following lemma.
Lemma 2. Suppose that given initial state x_1, a random sequence {x_{2:n}, u_{1:n}} is generated from model {p_θ(x_{2:n}, u_{1:n} | x_1) | θ ∈ Θ} of the MDP. If Assumptions 1-7 are satisfied, then estimator θ̂_n given by estimating equation (7) shows consistency, i.e., estimator θ̂_n converges to parameter θ̃ in probability.
The proof is given in Section 2 in the supporting material. Note that if the policy is characterized as
an exponential family, we can replace Assumption 7 with Assumption 8 to prove the result in Lemma
3. Next, we show the asymptotic convergence rate of the estimator given a consistent estimator. Lemma 3 shows that the estimator converges at the rate O_p(n^{−1/2}).
Lemma 3. Suppose that given initial state x_1, a random sequence {x_{2:n}, u_{1:n}} is generated from model p_{θ̄}(x_{2:n}, u_{1:n} | x_1), and Assumptions 1-6 and 8 are satisfied. If estimator θ̂_n, given by estimating equation (7), converges to θ̃ in probability, then we have

√n (θ̂_n − θ̃) = −(1/√n) A^{−1} Σ_{i=1}^n Σ_{j=i}^n γ^{j−i} ψ_{θ̃}(x_i, u_i) r(x_j, u_j) + o_p(1).    (9)
Furthermore, the right-hand side of Eq. (9) converges to a Gaussian distribution whose mean and covariance are, respectively, zero and A^{−1} Σ (A^{−1})^⊤, where Σ := Σ(θ̃) = Λ_1(θ̃) + Σ_{i=2}^∞ [ Λ_i(θ̃) + Λ_i(θ̃)^⊤ ]. Here,

Λ_i(θ̃) := E^∞_{x_1∼μ_{θ̄}}[ ( Σ_{j=1}^∞ γ^{j−1} r(x_j, u_j) ) ( Σ_{j′=1+i}^∞ γ^{j′−i} r(x_{j′}, u_{j′}) ) ψ_{θ̃}(x_1, u_1) ψ_{θ̃}(x_i, u_i)^⊤ ].
The proof is given in Section 3 in the supporting material.
Now we consider the relation between WLPS and EMPS. The following theorem shows that the estimator θ̂_n given by Eq. (7) converges asymptotically to the same solution as that of EMPS when taking the limit of γ to 1.
Theorem 1. Suppose that Assumptions 1-7 are satisfied. If γ approaches 1 from below, WLPS leads to the same solution as EMPS, given by Eq. (4), as n → ∞¹.
Proof. We introduce the following support lemma.
Lemma 4. Suppose that Assumptions 1-6 are satisfied. Then, the partial derivative of the lower bound with q*_{θ̄} satisfies

lim_{n→∞} (∂/∂θ) F_n(q*_{θ̄}, θ) = lim_{γ→1⁻} E^∞_{x_1∼μ_{θ̄}}[ ψ_θ(x_1, u_1) Σ_{j=1}^∞ γ^{j−1} r(x_j, u_j) ],

where γ → 1⁻ denotes that γ converges to 1 from below.
The proof is given in Section 4 in the supporting material. From the results in Lemmas 2 and 4, it is obvious that the estimator θ̂_n given by Eq. (7) converges to the same solution as that of EMPS as γ → 1 from below.
Theorem 1 implies that WLPS monotonically increases the expected reward. It should be emphasized that WLPS provides us with an important insight into DPS, i.e., the parameter update of EMPS can be interpreted as a well-studied maximum (weighted) likelihood estimation problem. This allows us to naturally apply various sophisticated techniques for model selection, which are well established in statistics, to DPS. In the next section, we discuss model selection for policy π_θ(u|x).
4 Model selection with WLPS
Common model selection strategies are carried out by comparing candidate models, which are specified in advance, based on a criterion that evaluates the goodness of fit of the model estimated from
the obtained samples. Since the motivation for RL is to maximize the expected reward given in (2),
it would be natural to seek an appropriate model for the policy through the computation of some
reasonable measure to evaluate the expected reward from the sample sequences. However, since different policy models give different generative models for sample sequences, we need to obtain new
sample sequences to evaluate the measure each time the model is changed. Therefore, employing a
strategy of model selection based directly on the expected reward would be hopelessly inefficient.
¹ In practice, the constant $\gamma$ is set to an arbitrary value close to one. If we can analyze the finite-sample behavior of the expected reward with the WLPS estimator, we may obtain a better estimator by finding an optimal $\gamma$ in the sense of maximizing the expected reward. Some recent studies have tackled the finite-sample analysis of RL based on statistical learning theory [20, 21]. These works might provide us with some insights into the finite-sample analysis of WLPS.
Instead, to develop a tractable model selection, we focus on the weighted likelihood given by Eq. (5).
As mentioned before, the policy with the maximum weighted log-likelihood attains the maximum of the lower bound of the expected reward asymptotically. Moreover, since the weighted
likelihood is defined under a certain fixed generative process for the sample sequences, unlike the
expected reward case, the weighted likelihood can be evaluated using a single set of sample sequences
even when the model has been changed. These observations lead to the fact that if it were possible
to choose a good model from the candidate models in the sense of the weighted likelihood at each
iteration in WLPS, we could realize an efficient DPS algorithm with model selection that achieves a
monotonic increase in the expected reward.
In this study, we develop a criterion for choosing a suitable model by following the analogy of the
Bayesian information criterion (BIC) [12], designed through asymptotic analysis of the posterior
probability of the models given the data. Let $M_1, M_2, \cdots, M_k$ be $k$ candidate policy models, and assume that each model $M_j$ is characterized by a parametric policy $\pi_{\theta_j}(u|x)$ and the prior distribution $p(\theta_j|M_j)$ of the policy parameter. Also, define the marginal weighted likelihood of the $j$-th candidate model $\bar p_{\gamma,j}(x_{2:n}, u_{1:n}|x_1)$ as
$$\bar p_{\gamma,j}(x_{2:n}, u_{1:n}|x_1) := \int \pi_{\theta_j}(u_1|x_1)^{Q_1} \prod_{i=2}^{n} \pi_{\theta_j}(u_i|x_i)^{Q_i}\, p(x_i|x_{i-1}, u_{i-1})\, p(\theta_j|M_j)\, d\theta_j. \qquad (10)$$
In a similar manner to the BIC, we now consider the posterior probability of the $j$-th model given the sample sequence by introducing the prior probability of the $j$-th model $p(M_j)$. From the generalized Bayes' rule, the posterior distribution of the $j$-th model is given by
$$p(M_j|x_{1:n}, u_{1:n}) := \frac{\bar p_{\gamma,j}(x_{2:n}, u_{1:n}|x_1)\, p(M_j)}{\sum_{j'=1}^{k} \bar p_{\gamma,j'}(x_{2:n}, u_{1:n}|x_1)\, p(M_{j'})}, \qquad (11)$$
and in our model selection strategy, we adopt the model with the largest posterior probability.
For notational simplicity, in the following discussion we omit the subscript indicating the model index. Assuming that the prior probability is uniform over all models, the model with the maximum posterior probability corresponds to the model with the maximum marginal weighted likelihood. The behavior of the marginal weighted likelihood can be evaluated when the integrand of the marginal weighted likelihood (10) is concentrated in a neighborhood of the weighted log-likelihood estimator given by estimating equation (7), as described in the following theorem.
Theorem 2. Suppose that, given an initial state $x_1$, a random sequence $\{x_{2:n}, u_{1:n}\}$ is generated from the model $p_{\theta^*}(x_{2:n}, u_{1:n}|x_1)$ of the MDP. Suppose that Assumptions 1-3 and 5 are satisfied. If the following conditions are satisfied:
(a) The estimator $\hat\theta_n$ given by Eq. (7) converges to $\theta^*$ at the rate of $O_p(n^{-1/2})$.
(b) The prior distribution $p(\theta|M)$ satisfies $p(\hat\theta_n|M) = O_p(1)$.
(c) The matrix $A(\theta) := \mathbb{E}_{x_1 \sim \nu_\theta}[K_\theta(x_1, u_1) \sum_{j=1}^{\infty} \gamma^{j-1} r(x_j, u_j)]$ is invertible.
(d) For any $x \in \mathcal{X}$, $u \in \mathcal{U}$ and $\theta \in \Theta$, $K_\theta(x, u)$ is uniformly bounded.
then the log marginal weighted likelihood can be calculated as
$$\ln \bar p_\gamma(x_{2:n}, u_{1:n}|x_1) = \bar L_n(\hat\theta_n) - \frac{1}{2}\, m \ln n + O_p(1),$$
where, recall, $m$ denotes the dimensionality of the model (policy parameter).
The proof is given in Section 5 in the supporting material.
Note that the term $\sum_{i=2}^{n} \ln p(x_i|x_{i-1}, u_{i-1})$ in $\bar L_n(\hat\theta_n)$ does not depend on the model. Therefore, when evaluating the posterior probability of the model, it is sufficient to compute the following model selection criterion:
$$\mathrm{IC} = \sum_{i=1}^{n} Q_i \ln \pi_{\hat\theta_n}(u_i|x_i) - \frac{1}{2}\, m \ln n. \qquad (12)$$
As can be seen, this model selection criterion consists of two terms, where the first term is the weighted log-likelihood of the policy and the second is a penalty term that penalizes highly complex models. Also, since the first term grows linearly in $n$ while the penalty term grows only logarithmically, this criterion asymptotically selects the model with the maximum weighted log-likelihood. Algorithm 2 describes the algorithm flow of WLPS including the model selection strategy.
Algorithm 2 (WLPS with model selection).
1. Generate a sample sequence $\{x_{1:n}, u_{1:n}\}$ by employing the current policy parameter $\theta$.
2. For all models, find the estimator $\hat\theta_n$ by solving estimating equation (7) and evaluate the model selection criterion (12).
3. Choose the best model based on the model selection criterion (12) and check for convergence. If convergence is not satisfied, return to 1.
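As a concrete illustration of criterion (12) inside step 2 of Algorithm 2, the sketch below scores candidate Gaussian policies whose means are polynomials of increasing order: each candidate is fit by maximizing the weighted log-likelihood (weighted least squares under a fixed exploration variance, an assumption made here to obtain a closed form) and then penalized by (m/2) ln n. The weights Q_i and the data are placeholders standing in for the reward-derived weights of WLPS.

```python
import numpy as np

def ic_score(xs, us, Q, k):
    """Criterion (12) for a Gaussian policy whose mean is a degree-k
    polynomial in x: IC = sum_i Q_i ln pi(u_i|x_i) - (m/2) ln n. With a
    fixed policy variance, the weighted MLE of the coefficients is just
    weighted least squares."""
    n = len(xs)
    Phi = np.vander(xs, k + 1)                # features [x^k, ..., x, 1]
    W = np.diag(Q)
    coef = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ us)
    resid = us - Phi @ coef
    sigma2 = 0.25                             # fixed exploration variance
    loglik = -0.5 * np.sum(Q * (np.log(2 * np.pi * sigma2) + resid**2 / sigma2))
    m = k + 1                                 # number of policy parameters
    return loglik - 0.5 * m * np.log(n)

rng = np.random.default_rng(1)
n = 200
xs = rng.normal(size=n)
us = 0.8 * xs + rng.normal(scale=0.5, size=n)    # true order is k = 1
Q = rng.uniform(0.5, 1.5, size=n)                # placeholder reward weights

best = max(range(1, 6), key=lambda k: ic_score(xs, us, Q, k))
print("selected polynomial order:", best)
```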
Empirical Example. We evaluated the performance of the proposed model-selection method using a simple one-dimensional linear quadratic Gaussian (LQG) problem. This problem is known to be sufficiently difficult for an empirical evaluation, while it is analytically solvable. In this problem, we characterized the state transition distribution $p(x_i|x_{i-1}, u_{i-1})$ as a Gaussian distribution $\mathcal{N}(x_i|\bar x_i, \sigma)$ with mean $\bar x_i = x_{i-1} + u_{i-1}$ and variance $\sigma = 0.5^2$. The reward function was set to a quadratic function $r(x_i, u_i) = -x_i^2 - u_i^2 + c$, where $c$ is a positive scalar value that prevents the reward $r(x, u)$ from being negative. The control signal $u_i$ was generated from a Gaussian distribution $\mathcal{N}(u_i|\bar u_i, \sigma')$ with mean $\bar u_i$ and variance $\sigma' = 0.5$. We used a linear model with polynomial basis functions, defined as $\bar u_i = \sum_{j=1}^{k} \theta_j x^j + \theta_0$, where $k$ is the order of the polynomial. Note that, in this LQG setting, the optimal controller can be represented as a linear model, i.e., the optimal policy can be obtained when the order of the polynomial is selected as $k = 1$.
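The LQG setup above is easy to reproduce; the following sketch simulates one episode under a polynomial-mean policy (the parameter values, episode length, and offset c are arbitrary choices for illustration).

```python
import numpy as np

def rollout(theta, n_steps=200, c=50.0, seed=0):
    """Simulate the 1-D LQG problem of the empirical example:
    x' ~ N(x + u, 0.5^2), u ~ N(ubar(x), 0.5), r(x, u) = -x^2 - u^2 + c,
    with a polynomial mean ubar(x) = theta_0 + sum_j theta_j * x^j."""
    rng = np.random.default_rng(seed)
    x, total = rng.normal(), 0.0
    for _ in range(n_steps):
        ubar = theta[0] + sum(t * x**j for j, t in enumerate(theta[1:], 1))
        u = ubar + np.sqrt(0.5) * rng.normal()   # control noise, variance 0.5
        total += -x**2 - u**2 + c
        x = x + u + 0.5 * rng.normal()           # state noise, variance 0.5^2
    return total

# A first-order (linear) controller, the model class containing the optimum.
print("return:", rollout(theta=[0.0, -0.6]))
```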
In this experiment, we validated whether the proposed model selection method can detect the true order of the polynomial. To illustrate how our proposed model selection criterion works, we compared the performance of the proposed model selection method with a naïve method based on the weighted log-likelihood (6). The weighted-log-likelihood-based selection, similarly to the proposed method, was performed by computing the weighted log-likelihood scores (6) over all candidate models and selecting the model with the maximum score among the candidates.

[Figure 1: Distribution of the order k selected by our model selection criterion (left bar) and by the weighted likelihood (right bar), for basis-function orders 1 to 5.]

Figure 1 shows the distribution of the scores of the selected polynomial orders k in the learned policies, from first to fifth order, obtained by using the weighted log-likelihood and our model selection criterion. The distributions of the scores were obtained by repeating 1000 random trials. A
learning process was performed with 200 iterations of WLPS, each of which contained 200 samples generated by the current policy. The discount factor γ was set to 0.99. As shown in Figure 1, in the proposed method, the peak of the selected orders was located at the true order k = 1. On the other hand, in the weighted log-likelihood method, the distribution of the orders converged to one with two peaks, at k = 1 and k = 4. This result seems to show that the penalty term in our model selection criterion worked well.
5 Discussion
In this study, we have discussed a DPS problem in the framework of weighted likelihood estimation.
We introduced a weighted likelihood function as the objective function of DPS, and proposed an
incremental algorithm, WLPS, based on the iteration of maximum weighted log-likelihood estimation. WLPS shows desirable theoretical properties, namely, consistency, asymptotic normality, and
a monotonic increase in the expected reward at each iteration. Furthermore, we have constructed a
model selection strategy based on the posterior probability of the model given a sample sequence
through asymptotic analysis of the marginal weighted likelihood.
The WLPS framework has the potential to bring new theoretical insights to DPS and to derive more efficient algorithms based on theoretical considerations. In the rest of this paper, we summarize some key issues that need to be addressed in future research.
5.1 Statistical interpretation of model-free and model-based WLPS
One of the important open issues in RL is how to combine model-free and model-based approaches
with theoretical support. To this end, it is necessary to clarify the difference between model-based
and model-free approaches in the theoretical sense. WLPS provides us with an interesting insight
into the relation between model-free and model-based DPS from the viewpoint of statistics.
We begin by introducing the model-based WLPS method. Let us specify the state transition distribution $p(x'|x, u)$ as a parametric model $p_\zeta(x'|x, u) := p(x'|x, u, \zeta)$, where $\zeta$ is an $m_\zeta$-dimensional parameter vector. Assuming $p_\zeta(x'|x, u)$ is differentiable with respect to the parameter $\zeta$, and taking the partial derivative of the log weighted likelihood (6), we obtain the estimating equation for the parameter $\zeta$:
$$\sum_{i=2}^{n} \psi_{\hat\zeta_n}(x_{i-1}, u_{i-1}, x_i) = 0, \qquad (13)$$
where $\psi_\zeta(x, u, x')$ is the partial derivative of the state transition distribution $p_\zeta(x'|x, u)$ with respect to $\zeta$. As can be seen, estimating equation (13) corresponds to the likelihood equation, i.e., the estimator $\hat\zeta_n = \hat\zeta_n(x_{1:n}, u_{1:n-1})$ given by (13) is the maximum likelihood estimator. This observation indicates that the weighted likelihood integrates two different objective functions: one for learning the policy $\pi_\theta(u|x)$, and the other for the state predictor $p_\zeta(x'|x, u)$. Having obtained the estimator $\hat\zeta_n$ from estimating equation (13), the model-based WLPS estimates the policy parameter by finding the solution, $\hat\theta_n := \hat\theta(x_{1:n}, u_{1:n})$, of the following estimating equation:
$$\int \bar p_{\gamma,\hat\zeta_n}(x_{2:n}, u_{1:n}|x_1) \left\{ \sum_{i=1}^{n} \sum_{j=i}^{n} \gamma^{j-i}\, \partial\phi_{\hat\theta_n}(x_i, u_i)\, r(x_j, u_j) \right\} dx_{2:n}\, du_{1:n} = 0. \qquad (14)$$
Note that estimating equation (14) is derived by taking the integral of Eq. (7) over the sample sequence $\{x_{2:n}, u_{1:n}\}$ based on the currently estimated model $\bar p_{\gamma,\hat\zeta_n}(x_{2:n}, u_{1:n}|x_1)$. Thus, the model-based WLPS converges to the same parameter as the model-free WLPS, if the model $p_\zeta(x'|x, u)$ is well specified².
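The point that estimating equation (13) is simply the likelihood equation for the transition model can be seen in code: for a linear-Gaussian transition model (an assumed form, used here because the MLE has a closed form), setting the score to zero reduces to least squares.

```python
import numpy as np

def fit_transition_mle(xs, us, xs_next):
    """Solve estimating equation (13) for a linear-Gaussian transition model
    p_zeta(x'|x,u) = N(x' | zeta_1*x + zeta_2*u, sigma^2). Setting the score
    sum_i psi_zeta(x_{i-1}, u_{i-1}, x_i) = 0 gives ordinary least squares,
    and the noise-variance MLE is the mean squared residual."""
    A = np.column_stack([xs, us])
    zeta, *_ = np.linalg.lstsq(A, xs_next, rcond=None)
    resid = xs_next - A @ zeta
    sigma2 = np.mean(resid**2)
    return zeta, sigma2

rng = np.random.default_rng(2)
n = 500
xs, us = rng.normal(size=n), rng.normal(size=n)
xs_next = 1.0 * xs + 1.0 * us + 0.5 * rng.normal(size=n)
print(fit_transition_mle(xs, us, xs_next))
```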
We now consider the general treatment for model-free and model-based WLPS from a statistical
viewpoint. Model-based WLPS fully specifies the weighted likelihood by using the parametric
policy and parametric state transition models, and estimates all the parameters that appear in the
parametric weighted likelihood. Hence, model-based WLPS can be framed as a parametric statistical inference problem. Meanwhile, model-free WLPS partially specifies the weighted likelihood
by only using the parametric policy model. This can be seen as a semiparametric statistical model
[22, 23], which includes not only parameters of interest, but also additional nuisance parameters
with possibly infinite DoF, where only the policy is modeled parametrically and the other unspecified part corresponds to the nuisance parameters. Therefore, model-free WLPS can be framed as
a semiparametric statistical inference problem. Hence, the difference between model-based and
model-free WLPS methods can be interpreted as the difference between parametric and semiparametric statistical inference. The theoretical aspects of both parametric and semiparametric inference
have been actively investigated and several approaches for combining their estimators have been
proposed [23, 24, 25]. Therefore, by following these works, we may be able to develop a novel hybrid DPS algorithm that combines model-free and model-based WLPS with desirable statistical properties.
5.2 Variance reduction technique for WLPS
In order to perform fast learning of the policy, it is necessary to find estimators that can reduce the
estimation variance of the policy parameters in DPS. Although variance reduction techniques have
been proposed in DPS [26, 27, 28], these employ indirect approaches, i.e., instead of considering
the estimation variance of the policy parameters, they reduce the estimation variance of the moments necessary to learn the policy parameter. Unfortunately, these variance reduction techniques
do not guarantee decreasing the estimation variance of the policy parameters, and thus it is desirable to develop a direct approach that can evaluate and reduce the estimation variance of the policy
parameters.
As stated above, we can interpret model-free WLPS as a semiparametric statistical inference problem. This interpretation allows us to apply the estimating function method [22, 23], which has been
well established in semiparametric statistics, directly to WLPS. The estimating function method is
a powerful tool for the design of consistent estimators and the evaluation of the estimation variance
of parameters in a semiparametric inference problem. The advantage of considering the estimating
function is the ability 1) to characterize an entire set of consistent estimators, and 2) to find the optimal estimator with the minimum parameter estimation variance from the set of estimators [23, 29].
Therefore, by applying this to WLPS, we can characterize an entire set of estimators that maximize the expected reward without identifying the state transition distribution, and find the optimal
estimator with the minimum estimation variance.
² In the following discussion, in order to clarify the difference between the model-free and model-based approaches, we refer to the original WLPS as model-free WLPS.
References
[1] P. Dayan and G. Hinton, "Using expectation-maximization for reinforcement learning," Neural Computation, vol. 9, no. 2, pp. 271-278, 1997.
[2] J. Baxter and P. L. Bartlett, "Infinite-horizon policy-gradient estimation," Journal of Artificial Intelligence Research, vol. 15, no. 4, pp. 319-350, 2001.
[3] V. R. Konda and J. N. Tsitsiklis, "On actor-critic algorithms," SIAM Journal on Control and Optimization, vol. 42, no. 4, pp. 1143-1166, 2003.
[4] J. Peters and S. Schaal, "Reinforcement learning by reward-weighted regression for operational space control," in Proceedings of the 24th International Conference on Machine Learning, 2007.
[5] J. Peters and S. Schaal, "Natural actor-critic," Neurocomputing, vol. 71, no. 7-9, pp. 1180-1190, 2008.
[6] N. Vlassis, M. Toussaint, G. Kontes, and S. Piperidis, "Learning model-free robot control by a Monte Carlo EM algorithm," Autonomous Robots, vol. 27, no. 2, pp. 123-130, 2009.
[7] E. Theodorou, J. Buchli, and S. Schaal, "A generalized path integral control approach to reinforcement learning," Journal of Machine Learning Research, vol. 11, pp. 3137-3181, 2010.
[8] J. Peters, K. Mülling, and Y. Altün, "Relative entropy policy search," in Proceedings of the 24th National Conference on Artificial Intelligence, 2010.
[9] J. Kober and J. Peters, "Policy search for motor primitives in robotics," Machine Learning, vol. 84, no. 1-2, pp. 171-203, 2011.
[10] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, 1998.
[11] H. Akaike, "A new look at the statistical model identification," IEEE Transactions on Automatic Control, vol. 19, no. 6, pp. 716-723, 1974.
[12] G. Schwarz, "Estimating the dimension of a model," The Annals of Statistics, vol. 6, no. 2, pp. 461-464, 1978.
[13] A. Farahmand and C. Szepesvári, "Model selection in reinforcement learning," Machine Learning, pp. 1-34, 2011.
[14] M. M. Fard and J. Pineau, "PAC-Bayesian model selection for reinforcement learning," in Advances in Neural Information Processing Systems 22, 2010.
[15] H. Hachiya, J. Peters, and M. Sugiyama, "Reward-weighted regression with sample reuse for direct policy search in reinforcement learning," Neural Computation, vol. 23, no. 11, pp. 2798-2832, 2011.
[16] M. G. Azar and H. J. Kappen, "Dynamic policy programming," Tech. Rep. arXiv:1004.202, 2010.
[17] H. Kappen, V. Gómez, and M. Opper, "Optimal control as a graphical model inference problem," Machine Learning, pp. 1-24, 2012.
[18] K. Rawlik, M. Toussaint, and S. Vijayakumar, "On stochastic optimal control and reinforcement learning by approximate inference," in International Conference on Robotics Science and Systems, 2012.
[19] R. C. Bradley, "Basic properties of strong mixing conditions. A survey and some open questions," Probability Surveys, vol. 2, pp. 107-144, 2005.
[20] R. Munos and C. Szepesvári, "Finite-time bounds for fitted value iteration," Journal of Machine Learning Research, vol. 9, pp. 815-857, 2008.
[21] A. Lazaric, M. Ghavamzadeh, and R. Munos, "Finite-sample analysis of least-squares policy iteration," Journal of Machine Learning Research, vol. 13, pp. 3041-3074, 2012.
[22] V. P. Godambe, Ed., Estimating Functions. Oxford University Press, 1991.
[23] S. Amari and M. Kawanabe, "Information geometry of estimating functions in semi-parametric statistical models," Bernoulli, vol. 3, no. 1, pp. 29-54, 1997.
[24] P. J. Bickel, C. A. Klaassen, Y. Ritov, and J. A. Wellner, Efficient and Adaptive Estimation for Semiparametric Models. Springer, 1998.
[25] G. Bouchard and B. Triggs, "The tradeoff between generative and discriminative classifiers," in Proceedings of the 16th IASC International Symposium on Computational Statistics, 2004, pp. 721-728.
[26] E. Greensmith, P. L. Bartlett, and J. Baxter, "Variance reduction techniques for gradient estimates in reinforcement learning," Journal of Machine Learning Research, vol. 5, pp. 1471-1530, 2004.
[27] R. Munos, "Geometric variance reduction in Markov chains: application to value function and gradient estimation," Journal of Machine Learning Research, vol. 7, pp. 413-427, 2006.
[28] T. Zhao, H. Hachiya, G. Niu, and M. Sugiyama, "Analysis and improvement of policy gradient estimation," Neural Networks, 2011.
[29] T. Ueno, S. Maeda, M. Kawanabe, and S. Ishii, "Generalized TD learning," Journal of Machine Learning Research, vol. 12, pp. 1977-2020, 2011.
4,216 | 4,816 | Trajectory-Based Short-Sighted Probabilistic Planning
Felipe W. Trevizan (Machine Learning Department) and Manuela M. Veloso (Computer Science Department)
Carnegie Mellon University, Pittsburgh, PA
{fwt,mmv}@cs.cmu.edu
Abstract
Probabilistic planning captures the uncertainty of plan execution by probabilistically modeling the effects of actions in the environment, and therefore the probability of reaching different states from a given state and action. In order to compute
a solution for a probabilistic planning problem, planners need to manage the uncertainty associated with the different paths from the initial state to a goal state.
Several approaches to manage uncertainty were proposed, e.g., consider all paths
at once, perform determinization of actions, and sampling. In this paper, we introduce trajectory-based short-sighted Stochastic Shortest Path Problems (SSPs),
a novel approach to manage uncertainty for probabilistic planning problems in
which states reachable with low probability are substituted by artificial goals that
heuristically estimate their cost to reach a goal state. We also extend the theoretical
results of Short-Sighted Probabilistic Planner (SSiPP) [1] by proving that SSiPP
always finishes and is asymptotically optimal under sufficient conditions on the
structure of short-sighted SSPs. We empirically compare SSiPP using trajectory-based short-sighted SSPs with the winners of the previous probabilistic planning
competitions and other state-of-the-art planners in the triangle tireworld problems.
Trajectory-based SSiPP outperforms all the competitors and is the only planner
able to scale up to problem number 60, a problem in which the optimal solution
contains approximately 10^70 states.
1 Introduction
The uncertainty of plan execution can be modeled by using probabilistic effects in actions, and
therefore the probability of reaching different states from a given state and action. This search space,
defined by the probabilistic paths from the initial state to a goal state, challenges the scalability of
planners. Planners manage the uncertainty by choosing a search strategy to explore the space. In
this work, we present a novel approach to manage uncertainty for probabilistic planning problems
that improves its scalability while still being optimal.
One approach to manage uncertainty while searching for the solution of probabilistic planning problems is to consider the complete search space at once. Examples of such algorithms are value
iteration and policy iteration [2]. Planners based on these algorithms return a closed policy, i.e., a
universal mapping function from every state to the optimal action that leads to a goal state. Assuming the model correctly captures the cost and uncertainty of the actions in the environment, closed
policies are extremely powerful as their execution never "fails," and the planner does not need to
be re-invoked ever. Unfortunately the computation of such policies is prohibitive in complexity as
problems scale up. Value iteration based probabilistic planners can be improved by combining asynchronous updates and heuristic search [3-7]. Although these techniques allow planners to compute
compact policies, in the worst case, these policies are still linear in the size of the state space, which
itself can be exponential in the size of the state or goals.
Another approach to manage uncertainty is basically to ignore uncertainty during planning, i.e., to
approximate the probabilistic actions as deterministic actions. Examples of replanners based on
determinization are FF-Replan [8], the winner of the first International Probabilistic Planning Competition (IPPC) [9], Robust FF [10], the winner of the third IPPC [11] and FF-Hindsight [12, 13].
Despite the major success of determinization, this simplification in the action space results in algorithms oblivious to probabilities and dead-ends, leading to poor performance in specific problems,
e.g., the triangle tireworld [14].
Besides the action space simplification, uncertainty management can be performed by simplifying
the problem horizon, i.e., look-ahead search [15]. Based on sampling, the Upper Confidence bound
for Trees (UCT) algorithm [16] approximates the look-ahead search by focusing the search in the
most promising nodes.
The state space can also be simplified to manage uncertainty in probabilistic planning. One example
of such an approach is Envelope Propagation (EP) [17]. EP computes an initial partial policy π and then prunes all the states not considered by π. The pruned states are represented by a special meta
state. Then EP iteratively improves its approximation of the state space. Previously, we introduced
short-sighted planning [1], a new approach to manage uncertainty in planning problems: given a
state s, only the uncertainty structure of the problem in the neighborhood of s is taken into account
and the remaining states are approximated by artificial goals that heuristically estimate their cost to
reach a goal state.
In this paper, we introduce trajectory-based short-sighted Stochastic Shortest Path Problems
(SSPs), a novel model to manage uncertainty in probabilistic planning problems. Trajectory-based
short-sighted SSPs manage uncertainty by pruning the state space based on the most likely trajectory
between states and defining artificial goal states that guide the solution towards the original goal. We
also contribute by defining a class of short-sighted models and proving that the Short-Sighted Probabilistic Planner (SSiPP) [1] always terminates and is asymptotically optimal for models in this class
of short-sighted models.
The remainder of this paper is organized as follows: Section 2 introduces the basic concepts and
notation. Section 3 defines formally trajectory-based short-sighted SSPs. Section 4 presents our
new theoretical results for SSiPP. Section 5 empirically evaluates SSiPP using trajectory-based short-sighted SSPs against the winners of the previous IPPCs and other state-of-the-art planners. Section 6
concludes the paper.
2 Background
A Stochastic Shortest Path Problem (SSP) is defined by the tuple S = ⟨S, s0, G, A, P, C⟩, in which [1, 18]: S is the finite set of states; s0 ∈ S is the initial state; G ⊆ S is the set of goal states; A is the finite set of actions; P(s′|s, a) represents the probability that s′ ∈ S is reached after applying action a ∈ A in state s ∈ S; and C(s, a, s′) ∈ (0, +∞) is the cost incurred when state s′ is reached after applying action a in state s. This cost function is required to be defined for all s ∈ S, a ∈ A, s′ ∈ S such that P(s′|s, a) > 0.
A solution to an SSP is a policy π, i.e., a mapping from S to A. If π is defined over the entire space S, then π is a closed policy. A policy π defined only for the states reachable from s0 when following π is a closed policy w.r.t. s0, and S(π, s0) denotes this set of reachable states. For instance, in the SSP depicted in Figure 1(a), the policy π0 = {(s0, a0), (s1′, a0), (s2′, a0), (s3′, a0)} is a closed policy w.r.t. s0 and S(π0, s0) = {s0, s1′, s2′, s3′, sG}.
Given a policy π, we define a trajectory as a sequence Tπ = ⟨s(0), ..., s(k)⟩ such that, for all i ∈ {0, ..., k−1}, π(s(i)) is defined and P(s(i+1)|s(i), π(s(i))) > 0. The probability of a trajectory Tπ is defined as P(Tπ) = ∏_{i=0}^{i<|Tπ|} P(s(i+1)|s(i), π(s(i))), and the maximum probability of a trajectory between two states, Pmax(s, s′), is defined as max_π P(Tπ = ⟨s, ..., s′⟩).
An optimal policy π* for an SSP is any policy that always reaches a goal state when followed from s0 and also minimizes the expected cost of Tπ*. For a given SSP, π* might not be unique; however, the optimal value function V*, i.e., the mapping from states to the minimum expected cost to reach a goal state, is unique. V* is the fixed point of the set of equations defined by (1) for all s ∈ S \ G, with V*(s) = 0 for all s ∈ G.
[Figure 1: (a) Example of an SSP. The initial state is s0, the goal state is sG, and C(s, a, s′) = 1 for all s ∈ S, a ∈ A and s′ ∈ S. (b) State-space partition of (a) according to depth-based short-sighted SSPs: Gs0,t contains all the states in the dotted regions whose conditions hold for the given value of t. (c) State-space partition of (a) according to trajectory-based short-sighted SSPs: Gs0,ρ contains all the states in the dotted regions whose conditions hold for the given value of ρ.]

Notice that under the optimality criterion given by (1), SSPs are
more general than Markov Decision Processes (MDPs) [19], therefore all the work presented here
is directly applicable to MDPs.
$$V^*(s) = \min_{a \in A} \sum_{s' \in S} \left[ C(s, a, s') + P(s'|s, a)\, V^*(s') \right] \qquad (1)$$
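A minimal tabular sketch of the Bellman backup of Eq. (1), together with the ε-convergence stopping test discussed later in this section; the dictionary-based representation and the assumption that every non-goal state has at least one applicable action are simplifications for illustration.

```python
def value_iteration(S, G, A, P, C, eps=1e-4):
    """Tabular value iteration for an SSP. P[s][a] is a list of (s2, prob)
    pairs, C(s, a, s2) > 0 is the cost, and V is initialized with the
    admissible zero heuristic. Each sweep applies the Bellman backup of
    Eq. (1); the loop stops once the Bellman residual drops below eps."""
    V = {s: 0.0 for s in S}
    while True:
        residual = 0.0
        for s in S:
            if s in G:
                continue                      # V(s) = 0 is kept for goals
            q = min(sum(p * (C(s, a, s2) + V[s2]) for s2, p in outs)
                    for a, outs in P[s].items())
            residual = max(residual, abs(V[s] - q))
            V[s] = q
        if residual <= eps:                   # epsilon-convergence reached
            return V
```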
Definition 1 (reachability assumption). An SSP satisfies the reachability assumption if, for all s ∈ S, there exists sG ∈ G such that Pmax(s, sG) > 0.
Given an SSP S, if a goal state can be reached with positive probability from every state s ∈ S, then the reachability assumption (Definition 1) holds for S and 0 ≤ V*(s) < ∞ [19]. Once V* is known, any optimal policy π* can be extracted from V* by substituting the operator min by argmin in equation (1).
A possible approach to compute V* is the value iteration algorithm: define V^{i+1}(s) as in (1) with V^i on the right-hand side instead of V*; the sequence ⟨V^0, V^1, ..., V^k⟩ converges to V* as k → ∞ [19]. The process of computing V^{i+1} from V^i is known as a Bellman update, and V^0(s) can be initialized with an admissible heuristic H(s), i.e., a lower bound for V*. In practice we are interested in reaching ε-convergence, that is, given ε, find V such that max_s |V(s) − min_a Σ_{s′} [C(s, a, s′) + P(s′|s, a) V(s′)]| ≤ ε. The following well-known result is necessary in most of our proofs [2, Assumption 2.2 and Lemma 2.1]:
Theorem 1. Given an SSP S, if the reachability assumption holds for S, then the admissibility and
monotonicity of V are preserved through the Bellman updates.
3 Trajectory-Based Short-Sighted Stochastic SSPs
Short-sighted Stochastic Shortest Path Problems (short-sighted SSPs) [1] are a special case of SSPs in which
the original problem is transformed into a smaller one by: (i) pruning the state space; and (ii) adding
artificial goal states to heuristically guide the search towards the goals of the original problem.
Depth-based short-sighted SSPs are defined based on the action-distance between states [1]:
Definition 2 (action-distance). The non-symmetric action-distance δ(s, s′) between two states s and s′ is argmin_k {Tπ = ⟨s, s(1), ..., s(k−1), s′⟩ | ∃π such that Tπ is a trajectory}.
Definition 3 (Depth-Based Short-Sighted SSP). Given an SSP S = ⟨S, s0, G, A, P, C⟩, a state s ∈ S, t > 0 and a heuristic H, the (s, t)-depth-based short-sighted SSP Ss,t = ⟨Ss,t, s, Gs,t, A, P, Cs,t⟩ associated with S is defined as:
• Ss,t = {s′ ∈ S | δ(s, s′) ≤ t};
• Gs,t = {s′ ∈ S | δ(s, s′) = t} ∪ (G ∩ Ss,t);
• Cs,t(s′, a, s″) = C(s′, a, s″) + H(s″) if s″ ∈ Gs,t, and C(s′, a, s″) otherwise, for all s′ ∈ Ss,t, a ∈ A, s″ ∈ Ss,t.
Figure 1(b) shows, for different values of t, Ss0,t for the SSP in Figure 1(a); for instance, if t = 2 then Ss0,2 = {s0, s1, s1′, s2, s2′} and Gs0,2 = {s2, s2′}. In the example shown in Figure 1(b), we can see that the generation of Ss0,t is independent of the trajectory probabilities: for t = 2, s2 ∈ Ss0,2 and s3′ ∉ Ss0,2, even though Pmax(s0, s2) = 0.16 < Pmax(s0, s3′) = 0.75³ ≈ 0.42.
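Since δ(s, s′) is the length of the shortest trajectory from s to s′ under any policy, Ss,t and Gs,t of Definition 3 can be built by a breadth-first search over the transition graph. Below is a sketch under an assumed representation P[s][a] = [(s′, prob), ...] with G given as a set of goal states.

```python
from collections import deque

def depth_based_short_sighted(s0, t, P, G):
    """Build S_{s0,t} and G_{s0,t} of Definition 3 by breadth-first search:
    depth[s'] is the minimum number of action applications after which s'
    can be reached with positive probability. States at depth exactly t
    (plus original goals inside the space) become the artificial goals."""
    depth = {s0: 0}
    queue = deque([s0])
    while queue:
        s = queue.popleft()
        if depth[s] == t:
            continue                          # frontier states are not expanded
        for a, outcomes in P.get(s, {}).items():
            for s2, prob in outcomes:
                if prob > 0 and s2 not in depth:
                    depth[s2] = depth[s] + 1
                    queue.append(s2)
    S_st = set(depth)
    G_st = {s for s, d in depth.items() if d == t} | (G & S_st)
    return S_st, G_st
```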
Definition 4 (Trajectory-Based Short-Sighted SSP). Given an SSP S = ⟨S, s0, G, A, P, C⟩, a state s ∈ S, ρ ∈ [0, 1] and a heuristic H, the (s, ρ)-trajectory-based short-sighted SSP Ss,ρ = ⟨Ss,ρ, s, Gs,ρ, A, P, Cs,ρ⟩ associated with S is defined as:
• Ss,ρ = {s′ ∈ S | ∃ŝ ∈ S and a ∈ A s.t. Pmax(s, ŝ) ≥ ρ and P(s′|ŝ, a) > 0};
• Gs,ρ = (G ∩ Ss,ρ) ∪ (Ss,ρ ∩ {s′ ∈ S | Pmax(s, s′) < ρ});
• Cs,ρ(s′, a, s″) = C(s′, a, s″) + H(s″) if s″ ∈ Gs,ρ, and C(s′, a, s″) otherwise, for all s′ ∈ Ss,ρ, a ∈ A, s″ ∈ Ss,ρ.
For simplicity, when H is neither clear from context nor explicit, H(s) = 0 for all s ∈ S.
Our novel model, the trajectory-based short-sighted SSP (Definition 4), addresses the issue of states with low trajectory probability by explicitly defining its state space Ss,ρ based on the maximum probability of a trajectory between s and the candidate states s′ (Pmax(s, s′)). Figure 1(c) shows, for all values of ρ ∈ [0, 1], the trajectory-based Ss0,ρ for the SSP in Figure 1(a): for instance, if ρ = 0.75³ then Ss0,0.75³ = {s0, s1, s1′, s2′, s3′, sG} and Gs0,0.75³ = {s1, sG}. This example shows how trajectory-based short-sighted SSPs can manage uncertainty efficiently: for ρ = 0.75³, |Ss0,ρ| = 6 and the goal sG of the original SSP is already included in Ss0,ρ, while, for the depth-based short-sighted SSPs, sG ∈ Ss0,t only for t ≥ 4, in which case |Ss0,t| = |S| = 8.
Notice that the definition of Ss,ρ cannot be simplified to {ŝ ∈ S | Pmax(s, ŝ) ≥ ρ}, since not all the resulting states of actions would be included in Ss,ρ. For example, consider S = {s, s′, s″}, P(s′|s, a) = 0.9 and P(s″|s, a) = 0.1; then, for ρ ∈ (0.1, 1], {ŝ ∈ S | Pmax(s, ŝ) ≥ ρ} = {s, s′}, generating an invalid SSP, since not all the resulting states of a would be contained in the model.
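Computing Pmax(s, ·) is a best-path problem under products of probabilities, so a Dijkstra-style search (equivalently, shortest paths on −log P) suffices. The sketch below builds Ss,ρ and Gs,ρ of Definition 4 under the same assumed data representation as before (states are assumed hashable and orderable); note the extra expansion step that adds every outcome of actions applied at states with Pmax ≥ ρ, which is exactly the point of the preceding paragraph.

```python
import heapq

def trajectory_based_short_sighted(s0, rho, P, G):
    """Build S_{s0,rho} and G_{s0,rho} of Definition 4. pmax[s] is the
    maximum probability of a trajectory from s0 to s, found Dijkstra-style
    by always expanding the most probable state first (probabilities in
    (0, 1] only ever shrink along a path, so the greedy order is safe)."""
    pmax = {s0: 1.0}
    heap = [(-1.0, s0)]                       # max-heap via negated probability
    while heap:
        negp, s = heapq.heappop(heap)
        if -negp < pmax[s]:
            continue                          # stale heap entry
        for a, outcomes in P.get(s, {}).items():
            for s2, prob in outcomes:
                q = pmax[s] * prob
                if q > pmax.get(s2, 0.0):
                    pmax[s2] = q
                    heapq.heappush(heap, (-q, s2))
    # S_{s0,rho}: every outcome of an action applied at a state with pmax >= rho.
    S_sr = {s0}
    for s, prob in pmax.items():
        if prob >= rho:
            S_sr.add(s)
            for a, outcomes in P.get(s, {}).items():
                S_sr.update(s2 for s2, p in outcomes if p > 0)
    G_sr = (G & S_sr) | {s for s in S_sr if pmax.get(s, 0.0) < rho}
    return S_sr, G_sr
```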
4 Short-Sighted Probabilistic Planner
The Short-Sighted Probabilistic Planner (SSiPP) is an algorithm that solves SSPs based on shortsighted SSPs [1]. SSiPP is reviewed in Algorithm 1 and consists of iteratively generating and solving
short-sighted SSPs of the given SSP. Due to the reduced size of the short-sighted problems, SSiPP
solves each of them by computing a closed policy w.r.t. their initial state. Therefore, we obtain
a "fail-proof" solution for each short-sighted SSP; thus, if this solution is directly executed in the
environment, then replanning is not needed until an artificial goal is reached. Alternatively, an
anytime behavior is obtained if the execution of the computed closed policy for the short-sighted
SSP is simulated (Algorithm 1 line 4) until an artificial goal sa is reached and this procedure is
repeated, starting from sa, until convergence or an interruption.
In [1], we proved that SSiPP always terminates and is asymptotically optimal for depth-based short-sighted SSPs. We generalize the results regarding SSiPP by: (i) providing the sufficient conditions
for the generation of short-sighted problems (Algorithm 1, line 1) in Definition 5; and (ii) proving
that SSiPP always terminates (Theorem 3) and is asymptotically optimal (Corollary 4) when the
short-sighted SSP generator respects Definition 5. Notice that, by definition, both depth-based and
trajectory-based short-sighted SSPs meet the sufficient conditions presented in Definition 5.
Definition 5. Given an SSP ⟨S, s0, G, A, P, C⟩, the sufficient conditions on the short-sighted SSPs ⟨Ŝ, ŝ, Ĝ, A, P̂, Ĉ⟩ returned by the generator in Algorithm 1 line 1 are:
1. G ∩ Ŝ ⊆ Ĝ;
2. ŝ ∉ G ⟹ ŝ ∉ Ĝ; and
3. for all s ∈ S, a ∈ A and s′ ∈ Ŝ \ Ĝ, if P(s|s′, a) > 0 then s ∈ Ŝ and P̂(s|s′, a) = P(s|s′, a).
Lemma 2. SSiPP performs Bellman updates on the original SSP S.
SSiPP(SSP S = ⟨S, s0, G, A, P, C⟩, H a heuristic for V*, and params, the parameters to generate short-sighted SSPs)
begin
    V ← value function for S initialized by H
    s ← s0
    while s ∉ G do
1:      ⟨Ŝ, s, Ĝ, A, P, Ĉ⟩ ← GENERATE-SHORT-SIGHTED-SSP(S, s, V, params)
        (π̂*, V̂*) ← OPTIMAL-SSP-SOLVER(⟨Ŝ, s, Ĝ, A, P, Ĉ⟩, V)
2:      forall s′ ∈ Ŝ(π̂*, s) do
            V(s′) ← V̂*(s′)
3:      while s ∉ Ĝ do
4:          s ← execute-action(π̂*(s))
    return V
end
Algorithm 1: SSiPP algorithm [1]. GENERATE-SHORT-SIGHTED-SSP represents a procedure to generate short-sighted SSPs, either depth-based or trajectory-based; in the former case params = t, and params = ρ in the latter. OPTIMAL-SSP-SOLVER returns an optimal policy π* w.r.t. s0 for S and the V* associated to π*, i.e., V* needs to be defined only for s ∈ S(π*, s0).
Proof. In order to show that SSiPP performs Bellman updates implicitly, consider the loop in line 2 of Algorithm 1. Since OPTIMAL-SSP-SOLVER computes V̂*, by the definition of short-sighted SSPs: (i) V̂*(sG) equals V(sG) for all sG ∈ Ĝ, therefore the value of V(sG) remains the same; and (ii) min_{a∈A} Σ_{s′∈Ŝ} [C(s, a, s′) + P(s′|s, a)V(s′)] ≤ V̂*(s) for s ∈ Ŝ \ Ĝ, i.e., the assignment V(s) ← V̂*(s) is equivalent to at least one Bellman update on V(s), because V is a lower bound on V̂* and by Theorem 1. Because s ∉ Ĝ and by Definition 5, the update min_{a∈A} Σ_{s′∈Ŝ} [C(s, a, s′) + P(s′|s, a)V(s′)] is equivalent to the same Bellman update in the original SSP S.
Theorem 3. Given an SSP S = ⟨S, s0, G, A, P, C⟩ such that the reachability assumption holds, an admissible heuristic H, and a short-sighted problem generator that respects Definition 5, then SSiPP always terminates.
Proof. Since OPTIMAL-SSP-SOLVER always finishes and the short-sighted SSP is an SSP by definition, a goal state sG of the short-sighted SSP is always reached; therefore the loop in line 3 of Algorithm 1 always finishes. If sG ∈ G, then SSiPP terminates in this iteration. Otherwise, sG is an artificial goal and sG ≠ s (Definition 5), i.e., sG differs from the state s used as the initial state for the short-sighted SSP generation. Thus another iteration of SSiPP is performed using sG as s. Suppose, for the purpose of contradiction, that every goal state reached during the execution of SSiPP is an artificial goal, i.e., SSiPP does not terminate. Then infinitely many short-sighted SSPs are solved. Since S is finite, there exists s ∈ S that is updated infinitely often, therefore V(s) → ∞. However, V*(s) < ∞ by the reachability assumption. Since SSiPP performs Bellman updates (Lemma 2), V(s) ≤ V*(s) by the monotonicity of Bellman updates (Theorem 1) and the admissibility of H, a contradiction. Thus every execution of SSiPP reaches a goal state sG ∈ G and therefore terminates.
Corollary 4. Under the same assumptions as Theorem 3, the sequence ⟨V^0, V^1, ..., V^t⟩, where V^0 = H and V^t = SSiPP(S, t, V^{t−1}), converges to V* as t → ∞ for all s ∈ S(π*, s0).
Proof. Let S∞ ⊆ S be the set of states visited infinitely many times. Clearly, S(π*, s0) ⊆ S∞, since a partial policy cannot be executed ad infinitum without reaching a state in which it is not defined. Since SSiPP performs Bellman updates in the original SSP space (Lemma 2) and every execution of SSiPP terminates (Theorem 3), we can view the sequence of lower bounds ⟨V^0, V^1, ..., V^t⟩ generated by SSiPP as asynchronous value iteration. The convergence of V^{t−1}(s) to V*(s) as t → ∞ for all s ∈ S(π*, s0) ⊆ S∞ follows by [2, Proposition 2.2, p. 27] and guarantees the convergence of SSiPP.
[Figure 2: (a) Map of the triangle tireworld for sizes 1, 2 and 3. Circles (squares) represent locations in which there is one (no) spare tire. The shades of gray represent, for each location l, max_π P(car reaches l and the tire is not flat when following the policy π from s0). (b) Log-lin plot (y-axis: number of states, from 10^0 to 10^80 on a log scale) of the state space size (|S|) and the size of the set of states reachable from s0 when following the optimal policy π* (|S(π*, s0)|) versus the number of the triangle tireworld problem, for sizes 5 to 60.]
5 Experiments
We present two sets of experiments using the triangle tireworld problems [9, 11, 20], a series of
probabilistically interesting problems [14] in which a car has to travel between locations in order to reach a goal location from its initial location. The roads are represented as a directed graph in the shape
of a triangle and, every time the car moves between locations, a flat tire happens with probability
0.5. Some locations have a spare tire, and in these locations the car can deterministically replace its flat tire with a new one. When the car has a flat tire, it cannot change its location; therefore the car can get stuck in locations that do not have a spare tire (dead-ends). Figure 2(a) depicts the map of
the triangle tireworld problems 1, 2 and 3, and Figure 2(b) shows the size of S and S(π*, s0) for
problems up to size 60. For example, the problem number 3 has 28 locations, i.e., 28 nodes in the
corresponding graph on Figure 2(a), its state space has 19562 states and its optimal policy reaches
8190 states.
Every triangle tireworld problem is a probabilistically interesting problem [14]: there is only one policy
that reaches the goal with probability 1 and all the other policies have probability at most 0.5 of
reaching the goal. Also, the solution based on the shortest path has probability 0.5^{2n−1} of reaching the goal, where n is the problem number. This property is illustrated by the shades of gray in Figure 2(a), which represent, for each location l, max_π P(car reaches l and the tire is not flat when following the policy π from s0).
For the experiments in this section, we use the zero heuristic for all the planners, i.e., V(s) = 0 for all s ∈ S, and LRTDP [4] as the OPTIMAL-SSP-SOLVER for SSiPP. For all planners, the parameter ε (for ε-convergence) is set to 10⁻⁴. For UCT, we disabled the random rollouts because the probability of any policy other than the optimal policy reaching a dead-end is at least 0.5; therefore, with high probability, UCT would assign ∞ (the cost of a dead-end) as the cost of all the states, including the initial state.
The experiments are conducted on a Linux machine with 4 cores running at 3.07 GHz, using MDPSIM [9] as the environment simulator. The following terminology is used to describe the experiments: a round is the computation of a solution for the given SSP, and a run is a set of rounds in which learning is allowed between rounds, i.e., the knowledge obtained from one round can be used to solve subsequent rounds. The solution computed during one round is simulated by MDPSIM in a client-server loop: MDPSIM sends a state s and requests an action from the planner, and the planner replies by sending the action a to be executed in s. The evaluation is done by the number of rounds simulated by MDPSIM that reached a goal state. The maximum number of actions allowed per round is 2000, and rounds that exceed this limit are stopped by MDPSIM and declared as failures, i.e., goal not reached.
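The evaluation protocol amounts to the loop sketched below; this is a generic stand-in for the MDPSIM client-server exchange (the real simulator speaks a network protocol, which is omitted here) and the simulator interface is an assumption.

```python
def evaluate(planner, simulator, n_rounds=50, max_actions=2000):
    """Run n_rounds rounds; a round succeeds if the planner reaches a goal
    within max_actions steps. Learning (e.g., SSiPP's V) persists across
    rounds of the same run because `planner` keeps its own state. The
    simulator exposes reset/step/is_goal, an assumed interface."""
    solved = 0
    for _ in range(n_rounds):
        s = simulator.reset()
        for _ in range(max_actions):
            if simulator.is_goal(s):
                solved += 1
                break
            s = simulator.step(s, planner(s))
    return solved
```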
| Planner          | 5    | 10   | 15   | 20   | 25   | 30   | 35   | 40   | 45   | 50   | 55   | 60   |
|------------------|------|------|------|------|------|------|------|------|------|------|------|------|
| SSiPP depth=8    | 50.0 | 40.7 | 41.2 | 40.8 | 41.1 | 41.0 | 40.9 | 40.0 | 40.6 | 40.8 | 40.3 | 40.4 |
| UCT              | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 43.1 | 15.7 | 12.1 | 8.2  | 6.8  | 5.0  | 4.0  |
| SSiPP trajectory | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 |

Table 1: Number of rounds solved out of 50 for the experiment in Section 5.1 (columns are triangle tireworld problem numbers). Results are averaged over 10 runs and the 95% confidence interval is always less than 1.0. In all the problems, SSiPP using trajectory-based short-sighted SSPs solves all 50 rounds in all 10 runs, therefore its 95% confidence interval is 0.0 for all the problems. Best results shown in bold font.
| Planner         | 5    | 10   | 15   | 20   | 25   | 30   | 35   | 40   | 45   | 50   | 55   | 60   |
|-----------------|------|------|------|------|------|------|------|------|------|------|------|------|
| SSiPP depth=8   | 50.0 | 45.4 | 41.2 | 42.3 | 41.2 | 44.1 | 42.4 | 32.7 | 20.6 | 14.1 | 9.9  | 7.0  |
| LRTDP           | 50.0 | 23.0 | 14.1 | 0.3  | –    | –    | –    | –    | –    | –    | –    | –    |
| UCT (4, 100)    | 50.0 | 50.0 | 50.0 | 48.8 | 24.0 | 12.3 | 6.5  | 4.0  | 2.5  | 1.3  | 1.0  | 0.7  |
| UCT (8, 100)    | 50.0 | 50.0 | 50.0 | 46.3 | 24.0 | 12.3 | 6.7  | 3.7  | 2.2  | 1.2  | 1.0  | 0.6  |
| UCT (2, 100)    | 50.0 | 50.0 | 50.0 | 49.5 | 23.2 | 12.0 | 7.5  | 3.5  | 2.2  | 1.2  | 1.0  | 0.6  |
| SSiPP ρ = 1.0   | 50.0 | 27.9 | 29.1 | 26.8 | 26.0 | 26.6 | 28.6 | 27.2 | 26.6 | 27.6 | 26.2 | 26.9 |
| SSiPP ρ = 0.50  | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 |
| SSiPP ρ = 0.25  | 50.0 | 50.0 | 50.0 | 50.0 | 47.6 | 45.0 | 41.1 | 42.7 | 41.9 | 40.7 | 40.1 | 40.4 |
| SSiPP ρ = 0.125 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 50.0 | 49.8 | 37.4 | 26.4 | 18.9 |

Table 2: Number of rounds solved out of 50 for the experiment in Section 5.2 (columns are triangle tireworld problem numbers). Results are averaged over 10 runs and the 95% confidence interval is always less than 2.6. UCT (c, w) represents UCT using c as the bias parameter and w samples per decision. In all the problems, trajectory-based SSiPP with ρ = 0.5 solves all 50 rounds in all 10 runs, therefore its 95% confidence interval is 0.0 for all the problems. Best results shown in bold font.
5.1 Fixed number of search nodes per decision
In this experiment, we compare the performance of UCT, depth-based SSiPP, and trajectory-based SSiPP with respect to the number of nodes explored by depth-based SSiPP. Formally, to decide what action to apply in a given state s, each planner is allowed to use at most B = |Ss,t| search nodes, i.e., the size of the search space is bounded by the equivalent (s, t)-depth-based short-sighted SSP. We choose t equal to 8 since it obtains the best performance in the triangle tireworld problems [1]. Given the search-node budget B, for UCT we sample the environment until the search tree contains B nodes, and for trajectory-based SSiPP we use ρ = argmax_ρ {|Ss,ρ| s.t. B ≥ |Ss,ρ|}.
The methodology for this experiment is as follows: for each problem, 10 runs of 50 rounds are performed for each planner using the search-node budget B. The results, averaged over the 10 runs, are presented in Table 1. We set the time and memory cut-offs to 8 hours and 8 GB, respectively, and UCT for problems 35 to 60 was the only planner preempted by the time cut-off. Trajectory-based SSiPP outperforms both depth-based SSiPP and UCT, solving all 50 rounds in all 10 runs for all the problems.
5.2 Fixed maximum planning time
In this experiment, we compare planners by limiting the maximum planning time. The methodology used in this experiment is similar to the one in IPPC'04 and IPPC'06: for each problem, planners need to solve 1 run of 50 rounds in 20 minutes. For this experiment, the planners are allowed to perform internal simulations; for instance, a planner can spend 15 minutes solving rounds using internal simulations and then use the computed policy to solve the required 50 rounds through MDPSIM in the remaining 5 minutes. The memory cut-off is 3 GB.
For this experiment, we consider the following planners: depth-based SSiPP for t = 8 [1], trajectory-based SSiPP for ρ ∈ {1.0, 0.5, 0.25, 0.125}, LRTDP using 3-look-ahead [1], and 12 different parametrizations of UCT obtained by using the bias parameter c ∈ {1, 2, 4, 8} and the number of samples per decision w ∈ {10, 100, 1000}. The winners of IPPC'04, IPPC'06 and IPPC'08 are omitted since their performance on the triangle tireworld problems is strictly dominated by depth-based SSiPP for t = 8. Table 2 shows the results of this experiment and, due to space limitations, we show only the top 3 parametrizations of UCT: 1st (c = 4, w = 100); 2nd (c = 8, w = 100); and 3rd (c = 2, w = 100).
All four parametrizations of trajectory-based SSiPP outperform the other planners for problems of size equal to or greater than 45. Trajectory-based SSiPP using ρ = 0.5 is especially noteworthy because it achieves a perfect score in all problems, i.e., it reaches a goal state in all 50 rounds in all 10 runs for all the problems. The same happens for ρ = 0.125 and problems up to size 40. For larger problems, trajectory-based SSiPP using ρ = 0.125 reaches the 20-minute time cut-off before solving 50 rounds; however, all the solved rounds successfully reach the goal. This interesting behavior of trajectory-based SSiPP for the triangle tireworld can be explained by the following theorem:
Theorem 5. For the triangle tireworld, trajectory-based SSiPP using an admissible heuristic never falls into a dead-end for ρ ∈ (0.5^{i+1}, 0.5^i] with i ∈ {1, 3, 5, ...}.
Proof Sketch. The optimal policy for the triangle tireworld is to follow the longest path: move from the initial location l0 to the goal location lG passing through location lc, where l0, lc and lG are the vertices of the triangle formed by the problem's map. The path from lc to lG is unique, i.e., there is only one applicable move-car action for all the locations in this path. Therefore all the decision making required to find the optimal policy happens between the locations l0 and lc. Each location l̂ in the path from l0 to lc has either two or three applicable move-car actions, and we refer to the set of locations l̂ with three applicable move-car actions as N. Every location l̂ ∈ N is reachable from l0 by applying an even number of move-car actions (Figure 2(a)), and the three applicable move-car actions in l̂ are: (i) the optimal action ac, i.e., move the car towards lc; (ii) the action aG that moves the car towards lG; and (iii) the action ap that moves the car parallel to the shortest path from l0 to lG. The location reached by ap does not have a spare tire, therefore ap is never selected by a greedy choice over any admissible heuristic, since it reaches a dead-end with probability 0.5. The locations reached by applying either ac or aG have a spare tire, and the greedy choice between them depends on the admissible heuristic used; thus aG might be selected instead of ac. However, after applying aG, only one move-car action a is available, and it reaches a location that does not have a spare tire. Therefore, the greedy choice between ac and aG considering two or more move-car actions is optimal under any admissible heuristic: every sequence of actions ⟨aG, a, ...⟩ reaches a dead-end with probability at least 0.5, and at least one sequence of actions starting with ac has probability 0 of reaching a dead-end, e.g., the optimal solution.
Given ρ, we denote by Ls,ρ the set of all locations corresponding to states in Ss,ρ and by ls the location corresponding to the state s. Thus, Ls,ρ contains all the locations reachable from ls using up to m = ⌈log_{0.5} ρ⌉ + 1 move-car actions. If m is even and ls ∈ N, then every location in Ls,ρ ∩ N represents a state either in Gs,ρ or at least two move-car actions away from any state in Gs,ρ. Therefore the solution of the (s, ρ)-trajectory-based short-sighted SSP only chooses the action ac to move the car. Also, since m is even, every state s used by SSiPP for generating (s, ρ)-trajectory-based short-sighted SSPs has ls ∈ N. Therefore, for even values of m, i.e., for ρ ∈ (0.5^{i+1}, 0.5^i] with i ∈ {1, 3, 5, ...}, trajectory-based SSiPP always chooses the action ac to move the car towards lc, thus avoiding all the dead-ends.
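The ρ ranges in Theorem 5 correspond to even values of the look-ahead depth m = ⌈log_{0.5} ρ⌉ + 1; a short check (using log_{0.5} ρ = −log₂ ρ, which is exact in floating point for powers of two) makes the correspondence explicit.

```python
import math

for rho in [0.5, 0.25, 0.125, 0.0625]:
    m = math.ceil(-math.log2(rho)) + 1   # look-ahead depth of S_{s,rho}
    print(f"rho={rho}: m={m} ->",
          "no dead-ends (Theorem 5)" if m % 2 == 0 else "not covered")
```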
6 Conclusion
In this paper, we introduced trajectory-based short-sighted SSPs, a new model to manage uncertainty
in probabilistic planning problems. This approach consists of pruning the state space based on the
most likely trajectory between states and defining artificial goal states that guide the solution towards
the original goals. We also defined a class of short-sighted models that includes depth-based and
trajectory-based short-sighted SSPs and proved that SSiPP always terminates and is asymptotically
optimal for short-sighted models in this class.
We empirically compared trajectory-based SSiPP with depth-based SSiPP and other state-of-the-art
planners in the triangle tireworld. Trajectory-based SSiPP outperforms all the other planners and it
is the only planner able to scale up to problem number 60, a problem in which the optimal solution
contains approximately 1070 states, under the IPPC evaluation methodology.
References
[1] F. W. Trevizan and M. M. Veloso. Short-sighted stochastic shortest path problems. In Proc. of the 22nd International Conference on Automated Planning and Scheduling (ICAPS), 2012.
[2] D. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[3] A. G. Barto, S. J. Bradtke, and S. P. Singh. Learning to act using real-time dynamic programming. Artificial Intelligence, 72(1-2):81-138, 1995.
[4] B. Bonet and H. Geffner. Labeled RTDP: Improving the convergence of real-time dynamic programming. In Proc. of the 13th International Conference on Automated Planning and Scheduling (ICAPS), 2003.
[5] H. B. McMahan, M. Likhachev, and G. J. Gordon. Bounded real-time dynamic programming: RTDP with monotone upper bounds and performance guarantees. In Proc. of the 22nd International Conference on Machine Learning (ICML), 2005.
[6] Trey Smith and Reid G. Simmons. Focused real-time dynamic programming for MDPs: Squeezing more out of a heuristic. In Proc. of the 21st National Conference on Artificial Intelligence (AAAI), 2006.
[7] S. Sanner, R. Goetschalckx, K. Driessens, and G. Shani. Bayesian real-time dynamic programming. In Proc. of the 21st International Joint Conference on Artificial Intelligence (IJCAI), 2009.
[8] S. Yoon, A. Fern, and R. Givan. FF-Replan: A baseline for probabilistic planning. In Proc. of the 17th International Conference on Automated Planning and Scheduling (ICAPS), 2007.
[9] H. L. S. Younes, M. L. Littman, D. Weissman, and J. Asmuth. The first probabilistic track of the international planning competition. Journal of Artificial Intelligence Research, 24(1):851-887, 2005.
[10] F. Teichteil-Koenigsbuch, G. Infantes, and U. Kuter. RFF: A robust, FF-based MDP planning algorithm for generating policies with low probability of failure. 3rd International Planning Competition (IPPC-ICAPS), 2008.
[11] D. Bryce and O. Buffet. 6th International Planning Competition: Uncertainty track. In 3rd International Probabilistic Planning Competition (IPPC-ICAPS), 2008.
[12] S. Yoon, A. Fern, R. Givan, and S. Kambhampati. Probabilistic planning via determinization in hindsight. In Proc. of the 23rd National Conference on Artificial Intelligence (AAAI), 2008.
[13] S. Yoon, W. Ruml, J. Benton, and M. B. Do. Improving determinization in hindsight for online probabilistic planning. In Proc. of the 20th International Conference on Automated Planning and Scheduling (ICAPS), 2010.
[14] I. Little and S. Thiébaux. Probabilistic planning vs replanning. In Proc. of the ICAPS Workshop on IPC: Past, Present and Future, 2007.
[15] J. Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley, Menlo Park, California, 1985.
[16] Levente Kocsis and Csaba Szepesvári. Bandit based Monte-Carlo planning. In Proc. of the European Conference on Machine Learning (ECML), 2006.
[17] T. Dean, L. P. Kaelbling, J. Kirman, and A. Nicholson. Planning under time constraints in stochastic domains. Artificial Intelligence, 76(1-2):35-74, 1995.
[18] D. P. Bertsekas and J. N. Tsitsiklis. An analysis of stochastic shortest path problems. Mathematics of Operations Research, 16(3):580-595, 1991.
[19] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 1995.
[20] Blai Bonet and Robert Givan. 2nd International Probabilistic Planning Competition (IPPC-ICAPS). http://www.ldc.usb.ve/~bonet/ipc5/ (accessed on Dec 13, 2011), 2007.
9
4,217 | 4,817 | Efficient high-dimensional maximum entropy
modeling via symmetric partition functions
J. Andrew Bagnell
The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Paul Vernaza
The Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
Maximum entropy (MaxEnt) modeling is a popular choice for sequence analysis
in applications such as natural language processing, where the sequences are embedded in discrete, tractably-sized spaces. We consider the problem of applying
MaxEnt to distributions over paths in continuous spaces of high dimensionality,
a problem for which inference is generally intractable. Our main contribution
is to show that this intractability can be avoided as long as the constrained features possess a certain kind of low dimensional structure. In this case, we show
that the associated partition function is symmetric and that this symmetry can be
exploited to compute the partition function efficiently in a compressed form. Empirical results are given showing an application of our method to learning models
of high-dimensional human motion capture data.
1 Introduction
This work aims to generate useful probabilistic models of high dimensional trajectories in continuous spaces. This is illustrated in Fig. 1, which demonstrates the application of our proposed method
to the problem of building generative models of high dimensional human motion capture data. Using
this method, we may efficiently learn models and perform inferences including but not limited to the
following: (1) Given any single pose, what is the probability that a certain type of motion ever visits
this pose? (2) Given any pose, what is the distribution over future positions of the actor?s hands? (3)
Given any initial sequence of poses, what are the odds that this sequence corresponds to one action
type versus another? (4) What is the most likely sequence of poses interpolating any two states?
The maximum entropy learning (MaxEnt) approach advocated here has the distinct advantage of
being able to efficiently answer all of the aforementioned global inferences in a unified framework
while also allowing the use of global features of the state and observations. In this sense, it is analogous to another MaxEnt learning method: the Conditional Random Field (CRF), which is typically
applied to modeling discrete sequences. We show how MaxEnt modeling may be efficiently applied to paths in continuous state spaces of high dimensionality. This is achieved without having
to resort to expensive, approximate inference methods based on MCMC, and without having to assume that the sequences themselves lie in or near a low dimensional submanifold, as in standard
dimensionality-reduction-based methods. The key to our method is to make a natural assumption
about the complexity of the features, rather than the paths, that results in simplifying symmetries.
This idea is illustrated in Fig. 2. Here we suppose that we are tasked with the problem of comparing
two sets of paths: the first, sampled from an empirical distribution; and the second, sampled from a
learned distribution intended to model the distribution underlying the empirical samples. Suppose
first that we are to determine whether the learned distribution correctly samples the desired distribution. We claim that a natural approach to this problem is to visualize both sets of paths by projecting
[Figure 1 panels: (a) true held-out class = side twist, with per-class log probabilities up-phase jumping jack $-49.9$, down-phase jumping jack $-70.0$, side twist $-2 \times 10^{-11}$, cross-toe touch $-24.6$; (b) true held-out class = down-phase jumping jack, with per-class log probabilities up-phase jumping jack $-2 \times 10^{-5}$, down-phase jumping jack $-10.6$, side twist $-81.6$, cross-toe touch $-79.0$.]
Figure 1: Visualizations of predictions of future locations of hands for an individually held-out
motion capture frame, conditioned on classes indicated by labels above figures, and corresponding
class membership probabilities. See supplementary material for video demonstration.
Figure 2: Illustration of the constraint that paths sampled from the learned distribution should (in
expectation) visit certain regions of space exactly as often as they are visited by paths sampled from
the true distribution, after projection of both onto a low dimensional subspace. The shading of each
planar cell is proportional to the expected number of times that cell is visited by a path.
them onto a common low dimensional basis. If these projections appear similar, then we might conclude that the learned model is valid. If they do not appear similar, we might try to adjust the learned
distribution, and compare projections again, iterating until the projections appear similar enough to
convince us that the learned model is valid.
We then might consider automating this procedure by choosing numerical features of the projected
paths and comparing these features in order to determine whether the projected paths appear similar.
Our approach may be thought of as a way of formalizing this procedure. The MaxEnt method
described here iteratively samples paths, projects them onto a low dimensional subspace, computes
features of these projected paths, and adjusts the distribution so as to ensure that, in expectation,
these features match the desired features.
A key contribution of this work is to show that that employing low dimensional features of this sort
enables tractable inference and learning algorithms, even in high dimensional spaces. Maximum
entropy learning requires repeatedly calculating feature statistics for different distributions, which
generally requires computing average feature values over all paths sampled from the distributions.
Though this is straightforward to accomplish via dynamic programming in low dimensional spaces,
it may not be obvious that the same can be accomplished in high-dimensional spaces. We will show
how this is possible by exploiting symmetries that result from this assumption.
The organization of this paper is as follows. We first review some preliminary material. We then
continue with a detailed exposition of our method, followed by experimental results. Finally, we
describe the relation of our method to existing methods and discuss conclusions.
2 Preliminaries
We now briefly review the basic MaxEnt modeling problem in discrete state spaces. In the basic MaxEnt problem, we have $N$ disjoint events $x_i$, $K$ random variables denoted features $\phi_j(x_i)$ mapping events to scalars, and $K$ expected values of these features $E\phi_j$. To continue the example previously discussed, we will think of each $x_i$ as being a path, $\phi_j(x_i)$ as being the number of times that a path passes through the $j$th spatial region, and $E\phi_j$ as the empirically estimated number of times that a path visits the $j$th region.

Our goal is to find a distribution $p(x_i)$ over the events consistent with our empirical observations in the sense that it generates the observed feature expectations:

$$\sum_i \phi_j(x_i)\, p(x_i) = E\phi_j, \quad \forall j \in \{1 \dots K\}.$$
Of all such distributions, we will seek the one whose entropy is maximal [6]. This problem can be written compactly as

$$\max_{p \in \Delta} \; -\sum_i p_i \log p_i \quad \text{s.t.} \quad \Phi p = E\phi, \tag{1}$$

where we have defined vectors $p_i = p(x_i)$ and $E\phi$, the feature matrix $\Phi_{ij} = \phi_i(x_j)$, and the probability simplex $\Delta$. Introducing a vector of Lagrange multipliers $\lambda$, the Lagrangian dual of this concave maximization problem is [3]

$$\max_\lambda \; -\log\left(\sum_i \exp\left(-\sum_j \Phi_{ji}\, \lambda_j\right)\right) - E\phi^T \lambda. \tag{2}$$
It is straightforward to show that the gradient of the dual objective $g(\lambda)$ is given by $\nabla_\lambda g = E_{\hat p}[\phi \mid \lambda] - E\phi$, where $\hat p$ is the Gibbs distribution over $x$ defined by

$$\hat p(x_i \mid \lambda) \propto \exp\left(-\sum_j \phi_j(x_i)\, \lambda_j\right). \tag{3}$$
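To make the learning loop concrete, here is a minimal NumPy sketch of Eqs. (2)-(3) for a finite event space (our own toy construction, not code from the paper; `Phi[j, i]` stores $\phi_j(x_i)$):

import numpy as np

def gibbs(Phi, lam):
    # Gibbs distribution of Eq. (3) over a finite set of events.
    logits = -Phi.T @ lam          # -sum_j phi_j(x_i) lambda_j, per event
    logits -= logits.max()         # stabilize before exponentiating
    p = np.exp(logits)
    return p / p.sum()

def dual_gradient(Phi, lam, E_phi):
    # grad_lambda g = E_{p_hat}[phi | lambda] - E_phi
    return Phi @ gibbs(Phi, lam) - E_phi

# toy problem: 4 events, 2 features, target expectations E_phi
Phi = np.array([[1.0, 0.0, 2.0, 1.0],
                [0.0, 1.0, 1.0, 2.0]])
E_phi = np.array([1.0, 1.0])
lam = np.zeros(2)
for _ in range(500):               # gradient ascent on the concave dual
    lam += 0.5 * dual_gradient(Phi, lam, E_phi)

At the optimum the model expectations $\Phi \hat p$ match $E\phi$, which is exactly the moment-matching constraint of (1).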
3 MaxEnt modeling of continuous paths
We now consider an extension of the MaxEnt formalism to the case that the events are paths embedded in a continuous space. The main questions to be addressed here are how to handle the transition
from a finite number of events to an infinite number of events, and how to define appropriate features.
We will address the latter problem first.
We suppose that each event $x$ now consists of a continuous, arc-length-parameterized path, expressed as a function $\mathbb{R}^+ \to \mathbb{R}^N$ mapping a non-negative time into the state space $\mathbb{R}^N$. A natural choice in this case is to express each feature $\phi_j$ as an integral of the following form:

$$\phi_j(x) = \int_0^T \psi_j(x(s))\, ds, \tag{4}$$

where $T$ is the duration (or length) of $x$ and each $\psi_j : \mathbb{R}^N \to \mathbb{R}^+$ is what we refer to as a feature potential. Continuing the previous example, if we choose $\psi_j(x(t)) = 1$ if $x(t)$ is in region $j$ and $\psi_j(x(t)) = 0$ otherwise, then $\phi_j(x)$ is the total time that $x$ spends within the $j$th region of space.
An analogous expression for the probability of a continuous path is then obtained by substituting these features into (3). Defining the cost function $C_\lambda := \sum_j \lambda_j \psi_j$ and the cost functional

$$S_\lambda\{x\} := \int_0^T C_\lambda(x(s))\, ds, \tag{5}$$

we have that

$$\hat p(x \mid \lambda) = \frac{\exp -S_\lambda\{x\}}{\int \exp -S_\lambda\{x\}\, \mathcal{D}x}, \tag{6}$$

where the notation $\int \exp -S_\lambda\{x\}\, \mathcal{D}x$ denotes the integral of the cost functional over the space of all continuous paths. The normalization factor $Z_\lambda := \int \exp -S_\lambda\{x\}\, \mathcal{D}x$ is referred to as the partition function. As in the discrete case, computing the partition function is of prime concern, as it enables a variety of inference and learning techniques.

The functional integral in (6) can be formalized in several ways, including taking an expectation with respect to Wiener measure [12] or as a Feynman integral [4]. Computationally, evaluating $Z_\lambda$ requires the solution of an elliptic partial differential equation over the state space, which can be derived via the Feynman-Kac theorem [12, 5]. The solution, denoted $Z_\lambda(a)$ for $a \in \mathbb{R}^N$, gives the value of the functional integral evaluated over all paths beginning at $a$ and ending at a given goal location (henceforth assumed w.l.o.g. to be the origin).
A discrete approximation to the partition function can therefore be computed via standard numerical
methods such as finite differences, finite elements, or spectral methods [2]. However, we proceed by
discretizing the state space as a lattice graph and computing the partition function associated with
discrete paths in this graph via a standard dynamic programming method [1, 15, 11]. Recent work
has shown that this method recovers the PDE solution in the discretization limit [5]. Concretely, the
discretized partition function is computed as the fixed point of the following iteration:

$$Z_\lambda(a) \leftarrow \delta(a) + \exp(-\epsilon\, C_\lambda(a)) \sum_{a' \sim a} Z_\lambda(a'), \tag{7}$$

where $a' \sim a$ denotes the set of $a'$ adjacent to $a$ in the lattice, $\epsilon$ is the spacing between adjacent lattice elements, and $\delta$ is the Kronecker delta. (In practice, the iteration is typically carried out with respect to $\log Z_\lambda$, which yields an update similar to a soft version of value iteration of the Bellman equation [15].)
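Before exploiting any symmetry, iteration (7) can be run directly on a low-dimensional grid. A 2-D NumPy sketch under our own conventions (4-neighbor lattice, goal cell at the grid center; the per-cell cost must exceed $\log 4$ for the sum over paths to converge):

import numpy as np

def lattice_partition(cost, goal, eps=1.0, iters=500):
    # Fixed point of Eq. (7) on a 2-D lattice; cost[i, j] = C_lambda at (i, j).
    Z = np.zeros_like(cost)
    w = np.exp(-eps * cost)            # exp(-eps * C_lambda(a))
    for _ in range(iters):
        nbr = np.zeros_like(Z)         # sum of the 4 adjacent cell values
        nbr[1:, :] += Z[:-1, :]
        nbr[:-1, :] += Z[1:, :]
        nbr[:, 1:] += Z[:, :-1]
        nbr[:, :-1] += Z[:, 1:]
        Z = w * nbr
        Z[goal] += 1.0                 # Kronecker delta at the goal
    return Z

Z = lattice_partition(np.full((64, 64), 2.0), goal=(32, 32))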
4 Efficient inference via symmetry reduction
Unfortunately, the dynamic programming approach described above is tractable only for low dimensional problems; for problems in more than a few dimensions, even storing the partition function
would be infeasible. Fortunately, we show in this section that it is possible to compute the partition
function directly in a compressed form, given that the features also satisfy a certain compressibility
property.
4.1 Symmetry of the partition function
Elaborating on this statement, we now recall Eq. (4), which expresses the features as integrals of feature potentials $\psi_j$ over paths. We then examine the effects of assuming that the $\psi_j$ are compressible in the sense that they may be predicted exactly from their projection onto a low dimensional subspace, i.e., we assume that

$$\psi_j(a) = \psi_j(WW^T a), \quad \forall j, a, \tag{8}$$

for some given $N \times d$ matrix $W$, with $d < N$. The following results show that compressibility of the features in this sense implies that the corresponding partition function is also compressible, in the sense that we need only compute it restricted to a $(d+1)$-dimensional subspace in order to determine its values at arbitrary locations in $N$-dimensional space. This is shown in two steps. First, we show that the partition function is symmetric about rotations about the origin that preserve the subspace spanned by the columns of $W$. We then show that there always exists such a rotation that also brings an arbitrary point in $\mathbb{R}^N$ into correspondence with a point in a $(d+1)$-dimensional slice where the partition function has been computed.
Theorem 4.1. Let $Z_\lambda = \int \exp -S_\lambda\{x\}\, \mathcal{D}x$, with $S_\lambda$ as defined in Eq. 5 and features derived from feature potentials $\psi_j$. Suppose that $\psi_j(x) = \psi_j(WW^T x), \; \forall j, x$. Then for any orthogonal $R$ such that $RW = W$,

$$Z_\lambda(a) = Z_\lambda(Ra), \quad \forall a \in \mathbb{R}^N. \tag{9}$$

Proof. By definition,

$$Z_\lambda(Ra) = \int_{\substack{x(0)=0 \\ x(T)=Ra}} \exp\left(-\int_0^T C_\lambda(x(s))\, ds\right) \mathcal{D}x.$$
The substitution $y(t) = R^T x(t)$ yields

$$Z_\lambda(Ra) = \int_{\substack{y(0)=0 \\ y(T)=a}} \exp\left(-\int_0^T C_\lambda(Ry(s))\, ds\right) \mathcal{D}y.$$

Since $\psi_j(a) = \psi_j(WW^T a), \forall j, a$ implies that $C_\lambda(x) = C_\lambda(WW^T x)$ for all $x$, we can make the substitutions $C_\lambda(Ry) = C_\lambda(WW^T Ry) = C_\lambda(WW^T y) = C_\lambda(y)$ in the previous expression to prove the result.
The next theorem makes explicit how to exploit the symmetry of the partition function by computing
it restricted to a low-dimensional slice of the state space.
Corollary 4.2. Let $W$ be a matrix such that $\psi_j(a) = \psi_j(WW^T a), \forall j, a$, and let $\nu$ be any vector such that $W^T \nu = 0$ and $\|\nu\| = 1$. Then

$$Z_\lambda(a) = Z_\lambda\!\left(WW^T a + \|(I - WW^T)a\|\, \nu\right), \quad \forall a. \tag{10}$$

Proof. The proof of this result is to show that there always exists a rotation satisfying the conditions of Theorem 4.1 that rotates $b$ onto the subspace spanned by the columns of $W$ and $\nu$. We simply choose an $R$ such that $RW = W$ and $R(I - WW^T)b = \|(I - WW^T)b\|\, \nu$. That this is a valid rotation follows from the orthogonality of $W$ and $\nu$ and the unit-norm assumption on $\nu$. Applying any such rotation to $b$ proves the result.
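In code, Corollary 4.2 reduces a query at any $a \in \mathbb{R}^N$ to a lookup on the slice. A sketch with our own helper names (`Z_slice` is assumed to map the $d$ subspace coordinates plus the radial coordinate to a value):

import numpy as np

def eval_compressed(a, W, Z_slice):
    # Evaluate Z(a) via Eq. (10): represent a by its coordinates in
    # col(W) plus the norm of its component orthogonal to col(W).
    coords = W.T @ a                         # W^T a, shape (d,)
    radius = np.linalg.norm(a - W @ coords)  # ||(I - W W^T) a||
    return Z_slice(np.append(coords, radius))

The choice of $\nu$ never appears explicitly: any unit vector orthogonal to col($W$) gives the same value, which is precisely the symmetry of Theorem 4.1.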
4.2 Exploiting symmetry in DP
We proceed to compute the discretized partition function via a modified version of the dynamic
programming algorithm described in Sec. 3. The only substantial change is that we leverage Corollary 4.2 in order to represent the partition function in a compressed form. This implies corresponding
changes in the updates, as these must now be derived from the new, compressed representation.
Figure 3 illustrates the algorithm applied to computing the partition function associated with a constant C(x) in a two-dimensional space. The partition function is represented by its values on a
regular lattice lying in the low-dimensional slice spanned by the columns of $W$ and $\nu$, as defined in Corollary 4.2. In the illustrated example, $W$ is empty, and $\nu$ is any arbitrary line. At each iteration
of the algorithm, we update each value in the slice based on adjacent values, as before. However, it
is now the case that some of the adjacent nodes lie off of the slice. We compute the values associated
with such nodes by rotating them onto the slice (according to Corollary 4.2) and interpolating the
value based on those of adjacent nodes within the slice.
An explicit formula for these updates is readily obtained. Suppose that $b$ is a point contained within the slice and $y := b + \Delta$ is an adjacent point lying off the slice whose value we wish to compute. By assumption, $W^T \Delta = \nu^T \Delta = 0$. We therefore observe that $\Delta^T (I - WW^T) b = 0$, since $(I - WW^T) b \propto \nu$. Hence,

$$
V(y) = V\!\left(WW^T(b + \Delta) + \|(I - WW^T)(b + \Delta)\|\, \nu\right)
     = V\!\left(WW^T b + \|(I - WW^T) b + \Delta\|\, \nu\right)
     = V\!\left(WW^T b + \sqrt{\|(I - WW^T) b\|^2 + \|\Delta\|^2}\; \nu\right). \tag{11}
$$

An interesting observation is that this formula depends on $y$ only through $\|\Delta\|$. Therefore, assuming that all nodes adjacent to $b$ lie at a distance of $\epsilon$ from it, all of the updates from the off-slice neighbors will be identical, which allows us to compute the net contribution due to all such nodes simply by multiplying the above value by their cardinality. The computational complexity of the algorithm is in this case independent of the dimension of the ambient space.
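Concretely, Eq. (11) lets the off-slice neighbor's value be read from the slice itself (a sketch; `V` is assumed to be an interpolating function over slice coordinates, with the radial coordinate stored last):

import numpy as np

def off_slice_value(V, b_slice, eps):
    # Eq. (11): value at any off-slice neighbor y = b + Delta with
    # ||Delta|| = eps, where b_slice holds the slice coordinates of b.
    y = np.array(b_slice, dtype=float)
    y[-1] = np.sqrt(y[-1] ** 2 + eps ** 2)   # grow the radial coordinate
    return V(y)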
A detailed description of the algorithm is given in Algorithm 1.
4.3 MaxEnt training procedure

Given the ability to efficiently compute the partition function, learning may proceed in a way exactly analogous to the discrete case (Sec. 2). A particular complication in our case is that exactly computing feature expectations under the model distribution is not as straightforward as in the low dimensional case, as we must account for the symmetry of the partition function. As such, we compute feature expectations by sampling paths from the model given the partition function.

Figure 3: Illustration of the dynamic programming update (constant cost example). The large sphere marked goal denotes the origin with respect to which the partition function is computed. The partition function in this case is symmetric about all rotations around the origin; hence, any value can be computed by rotation onto any axis (slice) where the partition function is known ($\nu$). Contributions from off-slice and on-slice points are denoted by off and on, respectively. Symmetry implies that value updates from off-axis nodes can be computed by rotation (proj) onto the axis. See supplementary material for video demonstration.
Algorithm 1 PartitionFunc($x_T$, $C_\lambda$, $W$, $N$, $d$)
$Z : \mathbb{R}^{d+1} \to \mathbb{R} : y \mapsto 0$   {initialize partition function to zero}
$\nu \leftarrow (\nu \mid \langle \nu, \nu \rangle = 1,\; W^T \nu = 0)$   {choose an appropriate $\nu$}
lift $: \mathbb{R}^{d+1} \to \mathbb{R}^N : y \mapsto [W\ \nu]\, y + x_T$   {define lifting and projection operators}
proj $: \mathbb{R}^N \to \mathbb{R}^{d+1} : x \mapsto \big[\, W^T(x - x_T);\ \|(I - WW^T)(x - x_T)\| \,\big]$
while $Z$ not converged do
  for $y \in G \subset \mathbb{Z}^{d+1}$ do
    $z_{\mathrm{on}} \leftarrow \sum_{\Delta \in \mathbb{Z}^{d+1},\, \|\Delta\| = 1} Z(y + \Delta)$   {calculate on-slice contributions}
    $z_{\mathrm{off}} \leftarrow 2(N - d - 1)\, Z\big(y_1, \dots, y_d, \sqrt{y_{d+1}^2 + 1}\big)$   {calculate off-slice contributions}
    $Z(y) \leftarrow \big(z_{\mathrm{on}} + z_{\mathrm{off}} + 2N\, \delta(y)\big) \,/\, \big(2N \exp(C_\lambda(\mathrm{lift}(y)))\big)$
  end for
end while
$Z' : \mathbb{R}^N \to \mathbb{R} : x \mapsto Z(\mathrm{proj}(x))$   {return partition function in original coordinates}
return $Z'$
{return partition function in original coordinates}
Results
We implemented the method and applied it to the problem of modeling high dimensional motion capture data, as described in the introduction. Our training set consisted of a small sample of trajectories
representing four different exercises performed by a human actor. Each sequence is represented as a
123-dimensional time series representing the Cartesian coordinates of 41 reflective markers located
on the actor's body.

The feature potentials employed consisted of indicator functions of the form

$$\psi_j(a) = \begin{cases} 1 & \text{if } W^T a \in C_j \\ 0 & \text{otherwise}, \end{cases} \tag{12}$$

where the $C_j$ were non-overlapping, rectangular regions of the projected state space. A $W$ was chosen with two columns, using the method proposed in [13], which is effectively similar to performing PCA on the velocities of the trajectory.
[Figure 4: four panels (up-phase jumping jack, down-phase jumping jack, side twist, cross-toe touch), each plotting the log odds ratio (0 to 200) against the fraction of the path revealed, with curves for HDMaxEnt and logistic regression (log. reg.) and the correct-discrimination threshold marked.]
Figure 4: Results of classification experiment given progressively revealed trajectories. Title indicates true class of held-out trajectory. Abscissa indicates the fraction of the trajectory revealed
to the classifiers. Samples of held-out trajectory at different points along abscissa are illustrated
above fraction of path revealed. Ordinate shows predicted log-odds ratio between correct class and
next-most-probable class.
We applied our method to train a maximum entropy model independently for each of the four classes.
Given our ability to efficiently compute the partition function, this enables us to normalize each
of these probability distributions. Classification can then be performed simply by evaluating the
probability of a held-out example under each of the class models. Knowing the partition function
also enables us to perform various marginalizations of the distribution that would otherwise be intractable. [8, 15]
In particular, we performed an experiment consisting of evaluating the probability of a held-out
trajectory under each model as it was progressively revealed in time. This can be accomplished by
evaluating the following quantity:

$$P(x_0)\, \epsilon^t \exp\left(-\epsilon \sum_{i=1}^{t} C_\lambda(x_i)\right) \frac{Z_\lambda(x_t)}{Z_\lambda(x_0)}, \tag{13}$$

where $x_0, \dots, x_t$ represents the portion of the trajectory revealed up to time $t$, $P(x_0)$ is the prior probability of the initial state, and $\epsilon$ is the spacing between successive samples. Results of this experiment are shown in Fig. 4, which plots the predicted log-odds ratio between the correct and next-most-probable classes.
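Evaluating Eq. (13) prefix-by-prefix only requires a running cost sum. A sketch with our own callables (`log_prior`, `cost`, and `logZ` are assumed to evaluate $\log P(x_0)$, $C_\lambda(x)$, and $\log Z_\lambda(x)$, e.g. via the compressed table above):

import numpy as np

def progressive_log_probs(traj, log_prior, cost, logZ, eps):
    # Log of Eq. (13) for each prefix x_0..x_t of a (T, N) trajectory:
    # log P(x_0) + t log(eps) - eps * sum_{i=1..t} C(x_i)
    # + log Z(x_t) - log Z(x_0)
    base = log_prior(traj[0]) - logZ(traj[0])
    cost_sum, scores = 0.0, []
    for t in range(1, len(traj)):
        cost_sum += cost(traj[t])
        scores.append(base + t * np.log(eps)
                      - eps * cost_sum + logZ(traj[t]))
    return scores

Running this under each class model and subtracting the two largest per-step scores gives the log-odds curves of Fig. 4.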
For comparison, we also implemented a classifier based on logistic regression. Features for this
classifier consisted of radial basis functions centered around the portion of each training trajectory
revealed up to the current time step. Both methods also employed the same prior initial state probability $P(x_0)$, which was constructed as a single isotropic Gaussian distribution for each class. Both
classifiers therefore predict the same class distributions at time t = 0.
In the first three held-out examples, the initial state was distinctive enough to unambiguously predict
the sequence label. The logistic regression predictions were generally inaccurate on their own, but
the the confidence of these predictions was so low that these probabilities were far outweighed
by the prior?the log-odds ratio in time therefore appears almost flat for logistic regression. Our
method (denoted HDMaxEnt in the figure), on the other hand, demonstrated exponentially increasing
confidence as the sequences were progressively revealed.
In the last example, the initial state appeared more similar to that of another class, causing the prior
to mispredict its label. Logistic regression again exhibited no deviation from the prior in time. Our
method, however, quickly recovered the correct label as the rest of the sequence was revealed.
Figures 1(a) and 1(b) show the result of a different inference?here we used the same learned class
models to evaluate the probability that a single held-out frame was generated by a path in each
class. This probability can be computed as the product of forward and backwards partition functions
evaluated at the held-out frame divided by the partition function between nominal start and goal
positions. [15] We also sampled trajectories given each potential class label, given the held-out
frame as a starting point, and visualized the results.
The first held-out frame, displayed in Fig. 1(a), is distinctive enough that its marginal probability under the correct class is far greater than its probability under any other class. The visualizations
make it apparent that it is highly unlikely that this frame was sampled from one of the jumping jack
paths, as this would require an unnatural excursion from the kinds of trajectory normally produced
by those classes, while it is slightly more plausible that the frame could have been taken from a path
sampled from the cross-toe touch class.
Fig. 1(b) shows a case where the held-out frame is ambiguous enough that it could have been generated by either the jumping jack up or down phases. In this case, the most likely prediction is
incorrect, but it is still the case that the probabilities of the two plausible classes far outweigh those
of the visibly less-plausible classes.
6 Related work
Our work bears the most relation to the extensive literature on maximum entropy modeling in sequence analysis. A well-known example of such a technique is the Conditional Random Field [9],
which is applicable to modeling discrete sequences, such as those encountered in natural language
processing. Our method is also an instance of MaxEnt modeling applied to sequence analysis; however, our method applies to high-dimensional paths in continuous spaces with a continuous notion
of (potentially unbounded) time (as opposed to the discrete notions of finite sequence length or horizon). These considerations necessitate the development of the formulation and inference techniques
described here.
Also notable are latent variable models that employ Gaussian process regression to probabilistically
represent observation models and the latent dynamics [14, 10, 7]. Our method differs from these
principally in two ways. First our method is able to exploit global, contextual features of sequences
without having to model how these features are generated from a latent state. Although the features
used in the experiments shown here were fairly simple, we plan to show in future work how our
method can leverage context-dependent features to generalize across different environments. Second, global inferences in the aforementioned GP-based methods are intractable, since the state distribution as a function of time is generally not a Gaussian process, unless the dynamics are assumed
linear. Therefore, expensive, approximate inference methods such as MCMC would be required to
compute any of the inferences demonstrated here.
7 Conclusions
We have demonstrated a method for efficiently performing inference and learning for maximum-entropy modeling of high dimensional, continuous trajectories. Key to the method is the assumption
that features arise from potentials that vary only in low dimensional subspaces. The partition functions associated with such features can be computed efficiently by exploiting the symmetries that
arise in this case. The ability to efficiently compute the partition function enables tractable learning
as well as the opportunity to compute a variety of inferences that would otherwise be intractable.
We have demonstrated experimentally that the method is able to build plausible models of high
dimensional motion capture trajectories that are well-suited for classification and other prediction
tasks.
As future work, we would like to explore similar ideas to leverage more generic types of low dimensional structure that might arise in maximum entropy modeling. In particular, we anticipate that the
method described here might be leveraged as a subroutine in future approximate inference methods
for this class of problems. We are also investigating problem domains such as assistive teleoperation,
where the ability to leverage contextual features is essential to learning policies that generalize.
8 Acknowledgments
This work is supported by the ONR MURI grant N00014-09-1-1052, Distributed Reasoning in Reduced Information Spaces.
References
[1] T. Akamatsu. Cyclic flows, Markov process and stochastic traffic assignment. Transportation Research Part B: Methodological, 30(5):369–386, 1996.
[2] J.P. Boyd. Chebyshev and Fourier Spectral Methods. Dover, 2001.
[3] S.P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2004.
[4] R.P. Feynman, A.R. Hibbs, and D.F. Styer. Quantum Mechanics and Path Integrals: Emended Edition. Dover Publications, 2010.
[5] S. García-Díez, E. Vandenbussche, and M. Saerens. A continuous-state version of discrete randomized shortest-paths, with application to path planning. In CDC and ECC, 2011.
[6] E.T. Jaynes. Information theory and statistical mechanics. The Physical Review, 106(4):620–630, 1957.
[7] J. Ko and D. Fox. GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models. Autonomous Robots, 27(1):75–90, 2009.
[8] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[9] J. Lafferty. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
[10] N.D. Lawrence and J. Quiñonero-Candela. Local distance preservation in the GP-LVM through back constraints. In Proceedings of the 23rd International Conference on Machine Learning, pages 513–520. ACM, 2006.
[11] A. Mantrach, L. Yen, J. Callut, K. Francoisse, M. Shimbo, and M. Saerens. The sum-over-paths covariance kernel: A novel covariance measure between nodes of a directed graph. PAMI, 32(6):1112–1126, 2010.
[12] B.K. Øksendal. Stochastic Differential Equations: An Introduction with Applications. Springer Verlag, 2003.
[13] P. Vernaza, D.D. Lee, and S.J. Yi. Learning and planning high-dimensional physical trajectories via structured Lagrangians. In ICRA, pages 846–852. IEEE, 2010.
[14] J. Wang, D. Fleet, and A. Hertzmann. Gaussian process dynamical models. NIPS, 18:1441, 2006.
[15] Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey. Maximum entropy inverse reinforcement learning. In AAAI, pages 1433–1438, 2008.
4,218 | 4,818 | Parametric Local Metric Learning for Nearest
Neighbor Classification
Adam Woznica
Department of Computer Science
University of Geneva
Switzerland
[email protected]
Jun Wang
Department of Computer Science
University of Geneva
Switzerland
[email protected]
Alexandros Kalousis
Department of Business Informatics
University of Applied Sciences
Western Switzerland
[email protected]
Abstract
We study the problem of learning local metrics for nearest neighbor classification.
Most previous works on local metric learning learn a number of local unrelated
metrics. While this 'independence' approach delivers increased flexibility, its downside is the considerable risk of overfitting. We present a new parametric local
metric learning method in which we learn a smooth metric matrix function over the
data manifold. Using an approximation error bound of the metric matrix function
we learn local metrics as linear combinations of basis metrics defined on anchor
points over different regions of the instance space. We constrain the metric matrix
function by imposing on the linear combinations manifold regularization which
makes the learned metric matrix function vary smoothly along the geodesics of
the data manifold. Our metric learning method has excellent performance both
in terms of predictive power and scalability. We experimented with several large-scale classification problems, with tens of thousands of instances, and compared it with
several state of the art metric learning methods, both global and local, as well as to
SVM with automatic kernel selection, all of which it outperforms in a significant
manner.
1 Introduction
The nearest neighbor (NN) classifier is one of the simplest and most classical non-linear classification algorithms. It is guaranteed to yield an error no worse than twice the Bayes error as the number
of instances approaches infinity. With finite learning instances, its performance strongly depends
on the use of an appropriate distance measure. Mahalanobis metric learning [4, 15, 9, 10, 17, 14]
improves the performance of the NN classifier if used instead of the Euclidean metric. It learns
a global distance metric which determines the importance of the different input features and their
correlations. However, since the discriminatory power of the input features might vary between different neighborhoods, learning a global metric cannot fit well the distance over the data manifold.
Thus a more appropriate way is to learn a metric on each neighborhood and local metric learning [8, 3, 15, 7] does exactly that. It increases the expressive power of standard Mahalanobis metric
learning by learning a number of local metrics (e.g. one per each instance).
Local metric learning has been shown to be effective for different learning scenarios. One of the
first local metric learning works, Discriminant Adaptive Nearest Neighbor classification [8], DANN,
learns local metrics by shrinking neighborhoods in directions orthogonal to the local decision boundaries and enlarging the neighborhoods parallel to the boundaries. It learns the local metrics independently with no regularization between them which makes it prone to overfitting. The authors of
LMNN-Multiple Metric (LMNN-MM) [15] significantly limited the number of learned metrics and
constrained all instances in a given region to share the same metric in an effort to combat overfitting.
In the supervised setting they fixed the number of metrics to the number of classes; a similar idea
has been also considered in [3]. However, they too learn the metrics independently for each region
making them also prone to overfitting since the local metrics will be overly specific to their respective regions. The authors of [16] learn local metrics using a least-squares approach by minimizing a
weighted sum of the distances of each instance to apriori defined target positions and constraining the
instances in the projected space to preserve the original geometric structure of the data in an effort to
alleviate overfitting. However, the method learns the local metrics using a learning-order-sensitive
propagation strategy, and depends heavily on the appropriate definition of the target positions for
each instance, a task far from obvious. In another effort to overcome the overfitting problem of the
discriminative methods [8, 15], Generative Local Metric Learning, GLML, [11], propose to learn
local metrics by minimizing the NN expected classification error under strong model assumptions.
They use the Gaussian distribution to model the learning instances of each class. However, the
strong model assumptions might easily be very inflexible for many learning problems.
In this paper we propose the Parametric Local Metric Learning method (PLML) which learns a
smooth metric matrix function over the data manifold. More precisely, we parametrize the metric
matrix of each instance as a linear combination of basis metric matrices of a small set of anchor
points; this parametrization is naturally derived from an error bound on local metric approximation.
Additionally we incorporate a manifold regularization on the linear combinations, forcing the linear
combinations to vary smoothly over the data manifold. We develop an efficient two stage algorithm
that first learns the linear combinations of each instance and then the metric matrices of the anchor
points. To improve scalability and efficiency we employ a fast first-order optimization algorithm,
FISTA [2], to learn the linear combinations as well as the basis metrics of the anchor points. We
experiment with the PLML method on a number of large scale classification problems with tens of
thousands of learning instances. The experimental results clearly demonstrate that PLML significantly improves the predictive performance over the current state-of-the-art metric learning methods,
as well as over multi-class SVM with automatic kernel selection.
2 Preliminaries
We denote by $X$ the $n \times d$ matrix of learning instances, the $i$-th row of which is the instance $x_i^T \in \mathbb{R}^d$, and by $y = (y_1, \dots, y_n)^T$, $y_i \in \{1, \dots, c\}$, the vector of class labels. The squared Mahalanobis distance between two instances in the input space is given by:

$$d^2_M(x_i, x_j) = (x_i - x_j)^T M (x_i - x_j)$$

where $M$ is a PSD metric matrix ($M \succeq 0$). A linear metric learning method learns a Mahalanobis
metric M by optimizing some cost function under the PSD constraints for M and a set of additional
constraints on the pairwise instance distances. Depending on the actual metric learning method,
different kinds of constraints on pairwise distances are used. The most successful ones are the
large margin triplet constraints. A triplet constraint, denoted by $c(x_i, x_j, x_k)$, indicates that in the projected space induced by $M$ the distance between $x_i$ and $x_j$ should be smaller than the distance between $x_i$ and $x_k$.
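For reference, these two definitions translate directly into code. The sketch below is ours, not code from the paper; the unit margin in the triplet check anticipates the form used later in Sec. 3.2:

import numpy as np

def mahalanobis_sq(x_i, x_j, M):
    # Squared Mahalanobis distance d^2_M(x_i, x_j), M a PSD matrix.
    diff = x_i - x_j
    return float(diff @ M @ diff)

def triplet_violation(x_i, x_j, x_k, M):
    # Hinge penalty of the triplet c(x_i, x_j, x_k): d^2_M(x_i, x_k)
    # should exceed d^2_M(x_i, x_j) by a unit margin.
    return max(0.0, 1.0 + mahalanobis_sq(x_i, x_j, M)
                    - mahalanobis_sq(x_i, x_k, M))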
Very often a single metric $M$ cannot adequately model the complexity of a given learning problem
in which discriminative features vary between different neighborhoods. To address this limitation
in local metric learning we learn a set of local metrics. In most cases we learn a local metric for
each learning instance [8, 11], however we can also learn a local metric for some part of the instance
space in which case the number of learned metrics can be considerably smaller than n, e.g. [15]. We
follow the former approach and learn one local metric per instance. In principle, distances should
then be defined as geodesic distances using the local metric on a Riemannian manifold. However,
this is computationally difficult, thus we define the distance between instances $x_i$ and $x_j$ as:

$$d^2_{M_i}(x_i, x_j) = (x_i - x_j)^T M_i (x_i - x_j)$$
where $M_i$ is the local metric of instance $x_i$. Note that most often the local metric $M_i$ of instance $x_i$ is different from that of $x_j$. As a result, the distance $d^2_{M_i}(x_i, x_j)$ does not satisfy the symmetry property, i.e. it is not a proper metric. Nevertheless, in accordance with the standard practice we will continue to use the term local metric learning, following [15, 11].
3 Parametric Local Metric Learning
We assume that there exists a Lipschitz smooth vector-valued function f (x), the output of which
is the vectorized local metric matrix of instance x. Learning the local metric of each instance is
essentially learning the value of this function at different points over the data manifold. In order to
significantly reduce the computational complexity we will approximate the metric function instead
of directly learning it.
Definition 1 A vector-valued function $f(x)$ on $\mathbb{R}^d$ is an $(\alpha, \beta, p)$-Lipschitz smooth function with respect to a vector norm $\|\cdot\|$ if $\|f(x) - f(x')\| \le \alpha \|x - x'\|$ and $\|f(x) - f(x') - \nabla f(x')^T (x - x')\| \le \beta \|x - x'\|^{1+p}$, where $\nabla f(x')^T$ is the derivative of the $f$ function at $x'$. We assume $\alpha, \beta > 0$ and $p \in (0, 1]$.
[18] have shown that any Lipschitz smooth real function $f(x)$ defined on a lower dimensional manifold can be approximated by a linear combination of function values $f(u)$, $u \in U$, of a set $U$ of anchor points. Based on this result we have the following lemma that gives the respective error bound for learning a Lipschitz smooth vector-valued function.
Lemma 1 Let $(\gamma, U)$ be a nonnegative weighting on anchor points $U$ in $\mathbb{R}^d$. Let $f$ be an $(\alpha, \beta, p)$-Lipschitz smooth vector function. We have for all $x \in \mathbb{R}^d$:

$$\left\| f(x) - \sum_{u \in U} \gamma_u(x)\, f(u) \right\| \le \alpha \left\| x - \sum_{u \in U} \gamma_u(x)\, u \right\| + \beta \sum_{u \in U} \gamma_u(x)\, \|x - u\|^{1+p} \tag{1}$$
The proof of the above Lemma 1 is similar to the proof of Lemma 2.1 in [18]; for lack of space we omit its presentation. Under the nonnegative weighting strategy $(\gamma, U)$, the PSD constraints on the approximated local metric are automatically satisfied if the local metrics of the anchor points are PSD matrices.
Lemma 1 suggests a natural way to approximate the local metric function by parameterizing the metric $M_i$ of each instance $x_i$ as a weighted linear combination, $W_i \in \mathbb{R}^m$, of a small set of basis metrics, $\{M_{b_1}, \dots, M_{b_m}\}$, each one associated with an anchor point defined in some region of the instance space. This parametrization will also provide us with a global way to regularize the flexibility of the metric function. We will first learn the vector of weights $W_i$ for each instance $x_i$, and then the basis metric matrices; these two together will give us the metric $M_i$ for the instance $x_i$.
More formally, we define an $m \times d$ matrix $U$ of anchor points, the $i$-th row of which is the anchor point $u_i$, where $u_i^T \in \mathbb{R}^d$. We denote by $M_{b_i}$ the Mahalanobis metric matrix associated with $u_i$. The anchor points can be defined using some clustering algorithm; we have chosen to define them as the means of clusters constructed by the k-means algorithm. The local metric $M_i$ of an instance $x_i$ is parametrized by:

$$M_i = \sum_{b_k} W_{i b_k} M_{b_k}, \quad W_{i b_k} \ge 0, \quad \sum_{b_k} W_{i b_k} = 1 \tag{2}$$
where $W$ is an $n \times m$ weight matrix, and its $W_{i b_k}$ entry is the weight of the basis metric $M_{b_k}$ for the instance $x_i$. The constraint $\sum_{b_k} W_{i b_k} = 1$ removes the scaling problem between different local metrics. Using the parametrization of equation (2), the squared distance of $x_i$ to $x_j$ under the metric $M_i$ is:

$$d^2_{M_i}(x_i, x_j) = \sum_{b_k} W_{i b_k}\, d^2_{M_{b_k}}(x_i, x_j) \tag{3}$$

where $d^2_{M_{b_k}}(x_i, x_j)$ is the squared Mahalanobis distance between $x_i$ and $x_j$ under the basis metric $M_{b_k}$. We will show in the next section how to learn the weights of the basis metrics for each instance and in section 3.2 how to learn the basis metrics.
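Eq. (3) means the local distance can be evaluated without ever materializing $M_i$ explicitly. A sketch of this evaluation (our own helper names):

import numpy as np

def local_dist_sq(x_i, x_j, W_i, basis_metrics):
    # Eq. (3): d^2_{M_i}(x_i, x_j) = sum_k W_i[k] * d^2_{M_{b_k}}(x_i, x_j).
    # W_i: (m,) nonnegative weights summing to one;
    # basis_metrics: list of m PSD matrices M_{b_k}.
    diff = x_i - x_j
    return float(sum(w * (diff @ M @ diff)
                     for w, M in zip(W_i, basis_metrics)))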
Algorithm 1 Smooth Local Linear Weight Learning
Input: $W^0$, $X$, $U$, $G$, $L$, $\lambda_1$, and $\lambda_2$
Output: matrix $W$
define $\tilde g_{\eta, Y}(W) = g(Y) + tr(\nabla g(Y)^T (W - Y)) + \frac{\eta}{2} \|W - Y\|_F^2$
initialize: $t_1 = 1$, $\eta = 1$, $Y^1 = W^0$, and $i = 0$
repeat
  $i = i + 1$, $W^i = Proj(Y^i - \frac{1}{\eta} \nabla g(Y^i))$
  while $g(W^i) > \tilde g_{\eta, Y^i}(W^i)$ do
    $\eta = 2\eta$, $W^i = Proj(Y^i - \frac{1}{\eta} \nabla g(Y^i))$
  end while
  $t_{i+1} = \frac{1 + \sqrt{1 + 4 t_i^2}}{2}$, $Y^{i+1} = W^i + \frac{t_i - 1}{t_{i+1}} (W^i - W^{i-1})$
until converged
3.1 Smooth Local Linear Weighting
Lemma 1 bounds the approximation error by two terms. The first term states that x should be close
to its linear approximation, and the second that the weighting should be local. In addition we want
the local metrics to vary smoothly over the data manifold. To achieve this smoothness we rely
on manifold regularization and constrain the weight vectors of neighboring instances to be similar.
Following this reasoning we will learn Smooth Local Linear Weights for the basis metrics by minimizing the error bound of (1) together with a regularization term that controls the weight variation of similar instances. To simplify the objective function, we use the term $\|x - \sum_{u \in U} \gamma_u(x)\, u\|^2$ instead of $\|x - \sum_{u \in U} \gamma_u(x)\, u\|$. By including the constraints on the $W$ weight matrix in (2), the optimization problem is given by:
  min_W  g(W) = ‖X − WU‖²_F + λ1 tr(WG) + λ2 tr(W^T L W)    (4)
  s.t.   W_{i b_k} ≥ 0  ∀ i, b_k,   Σ_{b_k} W_{i b_k} = 1  ∀ i
where tr(·) and ‖·‖_F denote respectively the trace of a square matrix and the Frobenius norm of a matrix. The m × n matrix G is the squared distance matrix between each anchor point u_i and each instance x_j, obtained for p = 1 in (1), i.e. its (i, j) entry is the squared Euclidean distance between u_i and x_j. L is the n × n Laplacian matrix constructed as D − S, where S is the n × n symmetric pairwise similarity matrix of the learning instances and D is a diagonal matrix with D_ii = Σ_k S_ik. Thus the minimization of the tr(W^T L W) term constrains similar instances to have similar weight coefficients. The minimization of the tr(WG) term forces the weights of the instances to reflect their local properties. Most often the similarity matrix S is constructed using a k-nearest-neighbors graph [19]. The λ1 and λ2 parameters control the importance of the different terms.
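A minimal sketch of how L = D − S could be built from a symmetrized k-nearest-neighbor graph; the binary similarity used here is our assumption for illustration (the paper only specifies a six-nearest-neighbors graph following [19]).

```python
import numpy as np

def knn_laplacian(X, k=6):
    """Unnormalized graph Laplacian L = D - S from a symmetrized k-NN graph."""
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    S = np.zeros((n, n))
    nn = np.argsort(D2, axis=1)[:, 1:k + 1]          # skip self at position 0
    rows = np.repeat(np.arange(n), k)
    S[rows, nn.ravel()] = 1.0
    S = np.maximum(S, S.T)                           # symmetrize
    return np.diag(S.sum(axis=1)) - S
```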
Since the cost function g(W) is convex quadratic in W and the constraints are simply linear, (4) is a convex optimization problem with a unique optimal solution. The constraints on W in (4) can be seen as n simplex constraints, one on each row of W; we use the projected gradient method to solve the optimization problem. At each iteration t, the learned weight matrix W is updated by:

  W^{t+1} = Proj(W^t − η ∇g(W^t))    (5)

where η > 0 is the step size and ∇g(W^t) is the gradient of the cost function g(W) at W^t. Proj(·) denotes the simplex projection operator applied to each row of W. Such a projection can be efficiently implemented with a complexity of O(nm log m) [6]. To speed up the optimization procedure we employ a fast first-order optimization method, FISTA [2]. The detailed algorithm is described in Algorithm 1. The Lipschitz constant β required by this algorithm is estimated using the condition g(W^i) ≤ g̃_{β,Y^i}(W^i) [1]. At each iteration, the main computations are the gradient and the objective value, with complexity O(nmd + n²m).
To set the weights of the basis metrics for a testing instance we can optimize (4) given the weights of the basis metrics for the training instances. Alternatively, we can simply set them to the weights of its nearest neighbor among the training instances. In the experiments we used the latter approach.
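For reference, a small sketch of the objective of problem (4) and its closed-form gradient, as used inside Algorithm 1; we assume L is symmetric, and make_g_and_grad is our own helper name. The defaults mirror the values used in the experiments.

```python
import numpy as np

def make_g_and_grad(X, U, G, L, lam1=1.0, lam2=100.0):
    """Objective of problem (4) and its gradient.

    Shapes as in the text: X is (n, d), W is (n, m), U is (m, d),
    G is (m, n), L is (n, n) and symmetric.
    """
    def g(W):
        R = X - W @ U
        return (np.sum(R * R) + lam1 * np.trace(W @ G)
                + lam2 * np.trace(W.T @ L @ W))

    def grad_g(W):
        # d/dW ||X - WU||^2_F = 2 (WU - X) U^T ;  d/dW tr(WG) = G^T ;
        # d/dW tr(W^T L W) = 2 L W for symmetric L.
        return 2.0 * (W @ U - X) @ U.T + lam1 * G.T + 2.0 * lam2 * L @ W

    return g, grad_g
```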
3.2 Large Margin Basis Metric Learning
In this section we define a large-margin-based algorithm to learn the basis metrics M_{b_1}, ..., M_{b_m}. Given the weight matrix W obtained with Algorithm 1, the local metric M_i of an instance x_i defined in (2) is linear with respect to the basis metrics M_{b_1}, ..., M_{b_m}. We define the relative comparison distance of instances x_i, x_j and x_k as: d²_{M_i}(x_i, x_k) − d²_{M_i}(x_i, x_j). In a large margin constraint c(x_i, x_j, x_k), the squared distance d²_{M_i}(x_i, x_k) is required to be larger than d²_{M_i}(x_i, x_j) + 1, otherwise an error ξ_{ijk} ≥ 0 is generated. Note that this relative comparison definition is different from that of LMNN-MM [15]. In LMNN-MM, to avoid over-fitting, different local metrics M_j and M_k are used to compute the squared distances d²_{M_j}(x_i, x_j) and d²_{M_k}(x_i, x_k) respectively, since no smoothness constraint is added between metrics of different local regions.
Given a set of triplet constraints, we learn the basis metrics M_{b_1}, ..., M_{b_m} with the following optimization problem:

  min_{M_{b_1},...,M_{b_m}, ξ}  α1 Σ_{b_l} ‖M_{b_l}‖²_F + Σ_{ijk} ξ_{ijk} + α2 Σ_{ij} Σ_{b_l} W_{i b_l} d²_{M_{b_l}}(x_i, x_j)    (6)
  s.t.  Σ_{b_l} W_{i b_l} (d²_{M_{b_l}}(x_i, x_k) − d²_{M_{b_l}}(x_i, x_j)) ≥ 1 − ξ_{ijk}  ∀ i, j, k
        ξ_{ijk} ≥ 0  ∀ i, j, k;   M_{b_l} ⪰ 0  ∀ b_l
where α1 and α2 are parameters that balance the importance of the different terms. The large margin triplet constraints for each instance are generated using its k1 same-class nearest neighbors and k2 different-class nearest neighbors, by requiring its distances to the k2 different-class instances to be larger than those to its k1 same-class instances. In the objective function of (6) the basis metrics are learned by minimizing the sum of the large margin errors and the sum of the squared pairwise distances of each instance to its k1 nearest neighbors, computed using the local metric. Unlike LMNN, we add the squared Frobenius norm of each basis metric to the objective function. We do this for two reasons. First, we exploit the connection between LMNN and SVM shown in [5], under which the squared Frobenius norm of the metric matrix is related to the SVM margin. Second, adding this term leads to an easy-to-optimize dual formulation of (6) [12].
Unlike many special-purpose solvers that optimize the primal form of the metric learning problem [15, 13], we follow [12] and optimize the Lagrangian dual problem of (6). The dual formulation leads to an efficient basis metric learning algorithm. Introducing the Lagrangian dual multipliers α_{ijk}, p_{ijk} and the PSD matrices Z_{b_l} to associate respectively with the large margin triplet constraints, the constraints ξ_{ijk} ≥ 0, and the PSD constraints M_{b_l} ⪰ 0 in (6), we can derive the following Lagrangian dual form:
  max_{Z_{b_1},...,Z_{b_m}, α}  Σ_{ijk} α_{ijk} − Σ_{b_l} (1/(4α1)) ‖Z_{b_l} + Σ_{ijk} α_{ijk} W_{i b_l} C_{ijk} − α2 Σ_{ij} W_{i b_l} A_{ij}‖²_F    (7)
  s.t.  1 ≥ α_{ijk} ≥ 0  ∀ i, j, k;   Z_{b_l} ⪰ 0  ∀ b_l

with the corresponding optimality conditions

  M*_{b_l} = (Z*_{b_l} + Σ_{ijk} α_{ijk} W_{i b_l} C_{ijk} − α2 Σ_{ij} W_{i b_l} A_{ij}) / (2α1)

and 1 ≥ α_{ijk} ≥ 0, where the matrices A_{ij} and C_{ijk} are given by x_{ij} x_{ij}^T and x_{ik} x_{ik}^T − x_{ij} x_{ij}^T respectively, with x_{ij} = x_i − x_j.
Compared to the primal form, the main advantage of the dual formulation is that the second term in the objective function of (7) has a closed-form solution for Z_{b_l} given a fixed α. To derive the optimal solution for Z_{b_l}, let K_{b_l} = α2 Σ_{ij} W_{i b_l} A_{ij} − Σ_{ijk} α_{ijk} W_{i b_l} C_{ijk}. Then, given a fixed α, the optimal solution is Z*_{b_l} = (K_{b_l})_+, where (K_{b_l})_+ projects the matrix K_{b_l} onto the PSD cone, i.e. (K_{b_l})_+ = U max(Λ, 0) U^T with K_{b_l} = U Λ U^T.
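The PSD cone projection (K)_+ is a one-liner given an eigendecomposition; a minimal sketch:

```python
import numpy as np

def psd_projection(K):
    """(K)_+ : project a symmetric matrix onto the PSD cone by
    clipping negative eigenvalues to zero, with K = U diag(lam) U^T."""
    lam, U = np.linalg.eigh((K + K.T) / 2.0)   # symmetrize for numerical stability
    return (U * np.maximum(lam, 0.0)) @ U.T
```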
Now, (7) can be rewritten as:

  min_α  g(α) = − Σ_{ijk} α_{ijk} + Σ_{b_l} (1/(4α1)) ‖(K_{b_l})_+ − K_{b_l}‖²_F    (8)
  s.t.  1 ≥ α_{ijk} ≥ 0  ∀ i, j, k
The optimality condition for M_{b_l} is M*_{b_l} = (1/(2α1)) ((K*_{b_l})_+ − K*_{b_l}). The gradient of the objective function in (8) is given by ∇g(α_{ijk}) = −1 + Σ_{b_l} (1/(2α1)) ⟨(K_{b_l})_+ − K_{b_l}, W_{i b_l} C_{ijk}⟩. At each iteration, α is updated by α^{i+1} = BoxProj(α^i − η ∇g(α^i)), where η > 0 is the step size and BoxProj(·) denotes the simple box projection operator on α, as specified in the constraints of (8). At each iteration, the main computational cost lies in the eigendecomposition, with complexity O(md³), and in the computation of the gradient, with complexity O(m(nd² + cd)), where m is the number of basis metrics and c is the number of large margin triplet constraints. As in the weight learning problem, the FISTA algorithm is employed to accelerate the optimization process; for lack of space we omit the algorithm's presentation.
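A sketch of one dual update and of the recovery of a basis metric from the optimality condition above; psd_projection is the helper from the previous sketch, and the step size eta is assumed given.

```python
import numpy as np

def box_projected_step(alpha, grad, eta):
    """One dual update: alpha <- BoxProj(alpha - eta * grad), with 0 <= alpha <= 1."""
    return np.clip(alpha - eta * grad, 0.0, 1.0)

def recover_basis_metric(K_bl, alpha1):
    """Optimality condition: M_bl = ((K_bl)_+ - K_bl) / (2 * alpha1).

    Uses psd_projection() defined in the sketch above."""
    return (psd_projection(K_bl) - K_bl) / (2.0 * alpha1)
```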
4 Experiments
In this section we evaluate the performance of PLML and compare it with a number of relevant baseline methods on six datasets with large numbers of instances, ranging from 5K to 70K
instances; these datasets are Letter, USPS, Pendigits, Optdigits, Isolet and MNIST. We want to determine whether the addition of manifold regularization on the local metrics improves the predictive
performance of local metric learning, and whether local metric learning improves over learning with a single global metric. We compare PLML against six baseline methods. The first, SML, is a variant of PLML where a single global metric is learned, i.e. we set the number of basis metrics in (6) to one. The second, Cluster-Based LML (CBLML), is a variant of PLML without weight learning: here we learn one local metric per cluster, and a basis metric M_{b_i} is assigned a weight of one for an instance if the corresponding cluster of M_{b_i} contains that instance, and zero otherwise. Finally, we also compare against four state-of-the-art metric learning methods: LMNN [15], BoostMetric [13]¹, GLML [11] and LMNN-MM [15]². The former two learn a single global metric and the latter two
a number of local metrics. In addition to the different metric learning methods, we also compare
PLML against multi-class SVMs in which we use the one-against-all strategy to determine the class
label for multi-class problems and select the best kernel with inner cross validation.
Since metric learning is computationally expensive for datasets with large numbers of features, we followed [15] and reduced the dimensionality of the USPS, Isolet and MNIST datasets by applying
PCA. In these datasets the retained PCA components explain 95% of their total variances. We
preprocessed all datasets by first standardizing the input features, and then normalizing the instances
so that their L2-norm is one.
PLML has a number of hyper-parameters. To reduce the computational time we do not tune λ1 and λ2 of the weight learning optimization problem (4), and we set them to their default values of λ1 = 1 and λ2 = 100. The Laplacian matrix L is constructed using the six-nearest-neighbors graph,
following [19]. The anchor points U are the means of clusters constructed with k-means clustering.
The number m of anchor points, i.e. the number of basis metrics, depends on the complexity of
the learning problem. More complex problems will often require a larger number of anchor points
to better model the complexity of the data. As the number of classes in the examined datasets is
10 or 26, we simply set m = 20 for all datasets. In the basis metric learning problem (6), the
number of dual parameters α is the same as the number of triplet constraints. To speed up the
learning process, the triplet constraints are constructed only using the three same-class and the three
different-class nearest neighbors for each learning instance. The parameter α2 is set to 1, while the parameter α1 is the only parameter that we select, from the set {0.01, 0.1, 1, 10, 100}, using
2-fold inner cross-validation. The above setting of basis metric learning for PLML is also used
with the SML and CBLML methods. For LMNN and LMNN-MM we use their default settings,
[15], in which the triplet constraints are constructed by the three nearest same-class neighbors and
all different-class samples. As a result, the number of triplet constraints optimized in LMNN and
LMNN-MM is much larger than those of PLML, SML, BoostMetric and CBLML. The local metrics
are initialized by identity matrices. As in [11], GLML uses the Gaussian distribution to model the
learning instances from the same class. Finally, we use the 1-NN rule to evaluate the performance
of the different metric learning methods. In addition as we already mentioned we also compare
against multi-class SVM. Since the performance of the latter depends heavily on the kernel with
which it is coupled we do automatic kernel selection with inner cross validation to select the best
¹ http://code.google.com/p/boosting
² http://www.cse.wustl.edu/~kilian/code/code.html
[Figure 1: The visualization of the learned local metrics of LMNN-MM, CBLML, GLML and PLML. Panels: (a) LMNN-MM, (b) CBLML, (c) GLML, (d) PLML.]
Table 1: Accuracy results. The superscripts +, −, and = next to the accuracies of PLML indicate the result of McNemar's statistical test against LMNN, BoostMetric, SML, CBLML, LMNN-MM, GLML and SVM: they denote respectively a significant win, a loss, or no difference for PLML. The number in parentheses is the score of the respective algorithm for the given dataset, based on the pairwise comparisons of the McNemar's statistical test.
Datasets      PLML                      LMNN          BoostMetric   SML           CBLML         LMNN-MM       GLML          SVM
Letter        97.22^{+++|+++|+} (7.0)   96.08 (2.5)   96.49 (4.5)   96.71 (5.5)   95.82 (2.5)   95.02 (1.0)   93.86 (0.0)   96.64 (5.0)
Pendigits     98.34^{+++|+++|+} (7.0)   97.43 (2.0)   97.43 (2.5)   97.80 (4.5)   97.94 (5.0)   97.43 (2.0)   96.88 (0.0)   97.91 (5.0)
Optdigits     97.72^{===|+++|=} (5.0)   97.55 (5.0)   97.61 (5.0)   97.22 (5.0)   95.94 (1.5)   95.94 (1.5)   94.82 (0.0)   97.33 (5.0)
Isolet        95.25^{=+=|+++|=} (5.5)   95.51 (5.5)   89.16 (2.5)   94.68 (5.5)   89.03 (2.5)   84.61 (0.5)   84.03 (0.5)   95.19 (5.5)
USPS          98.26^{+++|+++|=} (6.5)   97.92 (4.5)   97.65 (2.5)   97.94 (4.0)   96.22 (0.5)   97.90 (4.0)   96.05 (0.5)   98.19 (5.5)
MNIST         97.30^{=++|+++|=} (6.0)   97.30 (6.0)   96.03 (2.5)   96.57 (4.0)   95.77 (2.5)   93.24 (1.0)   84.02 (0.0)   97.62 (6.0)
Total Score   37                        25.5          19.5          28.5          14.5          10            1             32.5

(LMNN, BoostMetric and SML are the single-metric-learning baselines; CBLML, LMNN-MM and GLML are the local-metric-learning baselines.)
kernel and parameter setting. The kernels were chosen from the set of linear, polynomial (degree 2,3
and 4), and Gaussian kernels; the width of the Gaussian kernel was set to the average of all pairwise
distances. The C parameter of the hinge loss term was selected from {0.1, 1, 10, 100}.
To estimate the classification accuracy for Pendigits, Optdigits, Isolet and MNIST we used the default train/test split; for the other datasets we used 10-fold cross-validation. The statistical significance of the differences was tested with McNemar's test with a p-value of 0.05. In order to
get a better understanding of the relative performance of the different algorithms for a given dataset
we used a simple ranking schema in which an algorithm A was assigned one point if it was found
to have a statistically significantly better accuracy than another algorithm B, 0.5 points if the two
algorithms did not have a significant difference, and zero points if A was found to be significantly
worse than B.
4.1 Results
In Table 1 we report the experimental results. PLML consistently outperforms the single global
metric learning methods LMNN, BoostMetric and SML, for all datasets except Isolet on which
its accuracy is slightly lower than that of LMNN. Depending on the single global metric learning
method with which we compare it, it is significantly better in three, four, and five datasets (for LMNN, SML, and BoostMetric respectively) out of the six, and never significantly worse. When
we compare PLML with CBLML and LMNN-MM, the two baseline methods which learn one local
metric for each cluster and each class respectively with no smoothness constraints, we see that it is
statistically significantly better in all the datasets. GLML fails to learn appropriate metrics on all
datasets because its fundamental generative model assumption is often not valid. Finally, we see
that PLML is significantly better than SVM in two out of the six datasets and never significantly worse; remember here that with SVM we also do inner-fold kernel selection to automatically select the appropriate feature space. Overall, PLML is the best performing method, scoring 37 points over
the different datasets, followed by SVM with automatic kernel selection and SML which score 32.5
and 28.5 points respectively. The other metric learning methods perform rather poorly.
Examining more closely the performance of the baseline local metric learning methods CBLML and
LMNN-MM we observe that they tend to overfit the learning problems. This can be seen by their
considerably worse performance with respect to that of SML and LMNN which rely on a single
global model. PLML, on the other hand, even though it also learns local metrics, does not suffer from the overfitting problem, thanks to the manifold regularization.

[Figure 2: Accuracy results of PLML and CBLML with varying number of basis metrics. Panels: (a) Letter, (b) Pendigits, (c) Optdigits, (d) USPS, (e) Isolet, (f) MNIST.]

The poor performance of LMNN-MM is not in agreement with the results reported in [15]. The main reason for the difference is the
experimental setting. In [15], 30% of the training instances of each dataset were used as a validation set to avoid overfitting.
To provide a better understanding of the behavior of the learned metrics, we applied PLML, LMNN-MM, CBLML and GLML to an image dataset containing instances of four different handwritten digits, zero, one, two, and four, from the MNIST dataset. As in [15], we learn on the two main principal components. Figure 1 shows the learned local metrics by plotting the axes of their corresponding ellipses (black lines). The direction of the longer axis is the more discriminative. Clearly PLML fits the data much better than LMNN-MM and, as expected, its local metrics vary smoothly. In terms of predictive performance, PLML is the best, with 82.76% accuracy. CBLML, LMNN-MM and GLML have almost identical performance, with respective accuracies of 82.59%, 82.56% and 82.51%.
Finally, we investigated the sensitivity of PLML and CBLML to the number of basis metrics; we experimented with m ∈ {5, 10, 15, 20, 25, 30, 35, 40}. The results are given in Figure 2. We see
that the predictive performance of PLML often improves as we increase the number of the basis
metrics. Its performance saturates when the number of basis metrics becomes sufficient to model the
underlying training data. As expected different learning problems require different number of basis
metrics. PLML does not overfit on any of the datasets. In contrast, the performance of CBLML gets
worse when the number of basis metrics is large which provides further evidence that CBLML does
indeed overfit the learning problems, demonstrating clearly the utility of the manifold regularization.
5 Conclusions
Local metric learning provides a more flexible way to learn the distance function. However, it is prone to overfitting, since the number of parameters learned can be very large. In this paper we presented PLML, a local metric learning method which regularizes local metrics to vary smoothly over the data manifold. Using an approximation error bound of the metric matrix function, we parametrize the local metrics by weighted linear combinations of the local metrics of anchor points.
Our method scales to learning problems with tens of thousands of instances and avoids the overfitting
problems that plague the other local metric learning methods. The experimental results show that
PLML outperforms significantly the state of the art metric learning methods and it has a performance
which is significantly better or equivalent to that of SVM with automatic kernel selection.
Acknowledgments
This work was funded by the Swiss NSF (Grant 200021-137949). The support of EU projects
DebugIT (FP7-217139) and e-LICO (FP7-231519), as well as that of COST Action BM072 ("Urine and Kidney Proteomics"), is also gratefully acknowledged.
References
[1] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. Optimization for Machine Learning.
[2] A. Beck and M. Teboulle. Gradient-based algorithms with applications to signal-recovery problems. Convex Optimization in Signal Processing and Communications, pages 42–88, 2010.
[3] M. Bilenko, S. Basu, and R.J. Mooney. Integrating constraints and metric learning in semi-supervised clustering. In ICML, page 11, 2004.
[4] J.V. Davis, B. Kulis, P. Jain, S. Sra, and I.S. Dhillon. Information-theoretic metric learning. In ICML, 2007.
[5] H. Do, A. Kalousis, J. Wang, and A. Woznica. A metric learning perspective of SVM: on the relation of SVM and LMNN. AISTATS, 2012.
[6] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In ICML, 2008.
[7] A. Frome, Y. Singer, and J. Malik. Image retrieval and classification using local distance functions. In Advances in Neural Information Processing Systems, volume 19, pages 417–424. MIT Press, 2007.
[8] T. Hastie and R. Tibshirani. Discriminant adaptive nearest neighbor classification. IEEE Trans. on PAMI, 1996.
[9] P. Jain, B. Kulis, J.V. Davis, and I.S. Dhillon. Metric and kernel learning using a linear transformation. JMLR, 2012.
[10] R. Jin, S. Wang, and Y. Zhou. Regularized distance metric learning: Theory and algorithm. In NIPS, 2009.
[11] Y.K. Noh, B.T. Zhang, and D.D. Lee. Generative local metric learning for nearest neighbor classification. NIPS, 2009.
[12] C. Shen, J. Kim, and L. Wang. A scalable dual approach to semidefinite metric learning. In CVPR, 2011.
[13] C. Shen, J. Kim, L. Wang, and A. Hengel. Positive semidefinite metric learning using boosting-like algorithms. JMLR, 2012.
[14] J. Wang, H. Do, A. Woznica, and A. Kalousis. Metric learning with multiple kernels. In NIPS, 2011.
[15] K.Q. Weinberger and L.K. Saul. Distance metric learning for large margin nearest neighbor classification. JMLR, 2009.
[16] D.Y. Yeung and H. Chang. Locally smooth metric learning with application to image retrieval. In ICCV, 2007.
[17] Y. Ying, K. Huang, and C. Campbell. Sparse metric learning via smooth optimization. NIPS, 2009.
[18] K. Yu, T. Zhang, and Y. Gong. Nonlinear learning using local coordinate coding. NIPS, 2009.
[19] L. Zelnik-Manor and P. Perona. Self-tuning spectral clustering. NIPS, 2004.
4,219 | 4,819 | MAP Inference in Chains using Column Generation
David Belanger*, Alexandre Passos*, Sebastian Riedel†, Andrew McCallum
Department of Computer Science, University of Massachusetts, Amherst
† Department of Computer Science, University College London
{belanger,apassos,mccallum}@cs.umass.edu, [email protected]
Abstract
Linear chains and trees are basic building blocks in many applications of graphical models, and they admit simple exact maximum a-posteriori (MAP) inference
algorithms based on message passing. However, in many cases this computation is prohibitively expensive, due to quadratic dependence on variables? domain
sizes. The standard algorithms are inefficient because they compute scores for
hypotheses for which there is strong negative local evidence. For this reason
there has been significant previous interest in beam search and its variants; however, these methods provide only approximate results. This paper presents new
exact inference algorithms based on the combination of column generation and
pre-computed bounds on terms of the model?s scoring function. While we do
not improve worst-case performance, our method substantially speeds real-world,
typical-case inference in chains and trees. Experiments show our method to be
twice as fast as exact Viterbi for Wall Street Journal part-of-speech tagging and
over thirteen times faster for a joint part-of-speed and named-entity-recognition
task. Our algorithm is also extendable to new techniques for approximate inference, to faster 0/1 loss oracles, and new opportunities for connections between
inference and learning. We encourage further exploration of high-level reasoning
about the optimization problem implicit in dynamic programs.
1 Introduction
Many uses of graphical models either directly employ chains or tree structures (as in part-of-speech tagging) or employ them to enable inference in more complex models (as in junction trees and tree block coordinate descent [1]). Traditional message-passing inference in these structures requires an amount of computation dependent on the product of the domain sizes of variables sharing an edge
amount of computation dependent on the product of the domain sizes of variables sharing an edge
in the graph. Even in chains, exact inference is prohibitive in tasks with large domains due to the
quadratic dependence on domain size. For this reason, many practitioners rely on beam search or
other approximate inference techniques [2]. However, inference by beam search is approximate.
This not only hurts test-time accuracy, but can also interfere with parameter estimation [3].
We present a new algorithm for exact MAP inference in chains that is substantially faster than Viterbi
in the typical case. We draw on four key ideas: (1) it is wasteful to compute and store messages to
and from low-scoring states, (2) it is possible to compute bounds on data-independent (not varying
with the input data) scores of the model offline, (3) inference should make decisions based on local
evidence for variables? values and rely on the graph only for disambiguation [4], and (4) runtime
behavior should adapt to the cost structure of the model (i.e., the algorithm should be energy-aware
[5]). The combination of these ideas yields provably exact MAP inference for chains and trees that
can be more than an order of magnitude faster than traditional methods. Our algorithm has wide-ranging applicability, and we believe it could beneficially replace many traditional uses of Viterbi
and beam search.
* The first two authors contributed equally to this paper.
We exploit the connections between message-passing algorithms and LP relaxations for MAP inference. Directly solving LP relaxations for MAP using a state-of-the-art solver is inefficient because
it ignores key structure of the problem [6]. However, it is possible to leverage message-passing as a
fast subroutine to solve smaller LPs, and use high-level techniques to compose these solutions into
a solution to the original problem.
With this interplay in mind, we employ column generation [7], a family of approaches to solving
linear programs that are dual to cutting planes: they start by solving restricted primal problems,
where many LP variables are set to zero, and slowly add other LP variables until they are able to
prove that adding no other variable can improve the solution. From these properties of column
generation, we also show how to perform approximate inference that is guaranteed not to be worse
than the optimal by a given gap, how to construct an efficient 0/1-loss oracle by running 2-best
inference in a subset of the graphical model, and how to learn parameters in such a way to make
inference even faster.
The use of column generation has not been widely explored or appreciated in graphical models.
This paper is intended to demonstrate its benefits and encourage further work in this direction.
We demonstrate experimentally that our method has substantial speed advantages while retaining
guaranteed exact inference. In Wall Street Journal part-of-speech tagging our method is more than
2.5 times faster than Viterbi, and also faster than beam search with a width of two. In joint POS
tagging and named entity recognition, our method is thirteen times faster than Viterbi and also faster
than beam search with a width of seven.
2 Delayed Column Generation in LPs
In LPs used for combinatorial optimization problems, we know a priori that there are optimal solutions in which many variables will be set to zero. This is enforced by the problem?s constraints or it
characterizes optimality (e.g., the solution to a shortest path LP would not include multiple paths).
Column generation is a technique for exploiting this sparsity for faster inference. It restricts an LP
to a subset of its variables (implicitly setting the others to zero) and alternates between solving this
restricted linear program and selecting which variables should be added to it, based on whether
they could potentially improve the objective. When no candidates remain, the current solution to the
restricted problem is guaranteed to be the exact solution of the unrestricted problem.
The test to determine whether un-generated variables could potentially improve the objective is
whether their reduced cost is positive, which is also the test employed by some pivoting rules in
the simplex algorithm [8, 7]. The difference between the algorithms is that simplex enumerates
primal variables explicitly, while column generation ?generates? them only as needed. The key to
an efficient column generation algorithm is an oracle that can either prove that no variable with
positive reduced cost exists or produce one.
Consider the general LP:
  max  c^T x   s.t.  Ax ≤ b,  x ≥ 0    (1)
With corresponding Lagrangian:
  L(x, λ) = c^T x + λ^T (b − Ax) = Σ_i (c_i − A_i^T λ) x_i + λ^T b.    (2)
For a given assignment to the dual variables λ, a variable x_i is a candidate for being added to the restricted problem if its reduced cost r_i = c_i − A_i^T λ, the scalar multiplying it in the Lagrangian, is positive. Another way to justify this decision rule is to consider the constraints in the LP dual:
  min  b^T λ   s.t.  A^T λ ≥ c,  λ ≥ 0    (3)
Here, the reduced cost of a primal variable equals the degree to which its dual constraint is violated,
and thus column generation in the primal is equivalent to cutting planes in the dual [7]. If there is
no variable of positive reduced cost, then the current dual variables from the restricted problem are
feasible in the unrestricted problem, and thus we have a primal-dual optimal pair, and can terminate
column generation. An advantageous property of column generation that we employ later on is that
it maintains primal feasibility across iterations, and thus it can be halted to provide approximate,
anytime solutions.
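The resulting high-level loop can be sketched as follows; restricted_lp and oracle are schematic interfaces we assume for illustration, not an API from the paper.

```python
def column_generation(restricted_lp, oracle):
    """Generic column generation loop (a schematic sketch).

    restricted_lp.solve() returns duals for the current variable set;
    oracle(duals) returns variables with positive reduced cost (or []).
    """
    while True:
        duals = restricted_lp.solve()
        new_vars = oracle(duals)
        if not new_vars:          # dual feasible => primal-dual optimal pair
            return restricted_lp.primal_solution()
        restricted_lp.add_variables(new_vars)
```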
3 Connection Between LP Relaxations and Message-Passing in Chains
This section provides background showing how the LP formulation of the inference problem in
chains leads to the known message-passing algorithm. The derivation follows Wainwright and Jordan [9], but is specialized for chains and highlights connections to our contributions.
The LP for MAP inference in chains is as follows:

  max  Σ_{i,x_i} θ_i(x_i) μ_i(x_i) + Σ_{i,x_i,x_{i+1}} τ(x_i, x_{i+1}) μ_i(x_i, x_{i+1})    (4)
  s.t.  Σ_{x_i} μ_i(x_i) = 1  ∀ i
        Σ_{x_i} μ_i(x_i, x_{i+1}) = μ_{i+1}(x_{i+1})  ∀ i, x_{i+1}
        Σ_{x_{i+1}} μ_i(x_i, x_{i+1}) = μ_i(x_i)  ∀ i, x_i

where θ_i(x_i) is the score obtained from assigning the i-th variable the value x_i, μ_i(x_i) is an indicator variable saying whether or not the MAP assignment sets the i-th variable to the value x_i, and τ(x_i, x_{i+1}) is the score the model assigns to a transition from value x_i to value x_{i+1}. It is implicitly assumed that all variables are nonnegative. We assume a static τ, but all statements trivially generalize to position-dependent τ_i.
We can restructure this LP to depend only on the pairwise assignment variables μ_i(x_i, x_{i+1}) by creating an edge between the last variable in the chain and an artificial variable and then "billing" all local scores to the pairwise edge that touches them from the right. We then restructure the constraints to sum out both sides of each edge, and add indicator variables μ_n(x_n, ∗) and zero-scoring transitions for the artificial edge. This leaves the following LP (with dual variables written after their corresponding constraints):
  max  Σ_{i,x_i,x_{i+1}} μ_i(x_i, x_{i+1}) (τ(x_i, x_{i+1}) + θ_i(x_i))    (5)
  s.t.  Σ_{x_n} μ_n(x_n, ∗) = 1    (N)
        Σ_{x_{i−1}} μ_{i−1}(x_{i−1}, x_i) = Σ_{x_{i+1}} μ_i(x_i, x_{i+1})    (α_i(x_i))
The dual of this linear program is

  min  N
  s.t.  α_{i+1}(x_{i+1}) − α_i(x_i) ≥ τ(x_i, x_{i+1}) + θ_i(x_i)  ∀ i, x_i, x_{i+1}    (6)
        N − α_n(x_n) ≥ θ_n(x_n)  ∀ x_n
and setting the α dual variables by

  α_{i+1}(x_{i+1}) = max_{x_i} [ α_i(x_i) + θ_i(x_i) + τ(x_i, x_{i+1}) ]    (7)

and N = max_{x_n} α_n(x_n) + θ_n(x_n) is a sufficient condition for dual feasibility and, since N then takes the value of the primal solution, for optimality. Note that this equation is exactly the forward message-passing equation for max-product belief propagation in chains, i.e. the Viterbi algorithm.
A setting of the dual variables is optimal if maximizing the problem's Lagrangian over the primal variables yields a primal-feasible setting. The coefficients on the edge variables μ_i(x_i, x_{i+1}) are their reduced costs,

  α_i(x_i) − α_{i+1}(x_{i+1}) + θ_i(x_i) + τ(x_i, x_{i+1}).    (8)
For duals that obey the constraints of (6), it is clear that the maximal reduced cost is zero, attained when x_i is set to the argmax used when constructing α_{i+1}(x_{i+1}). Therefore, to obtain a primal-optimal solution, we start at the end of the chain and follow the argmax indices back to the beginning, which is the same backward sweep as in the Viterbi algorithm.
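For concreteness, a minimal numpy sketch of the recursion (7) plus the backward argmax sweep, i.e. Viterbi; here theta is an n × S matrix of local scores and tau an S × S matrix of transition scores (our layout, not notation from the paper).

```python
import numpy as np

def viterbi(theta, tau):
    """MAP assignment via the dual recursion (7) plus argmax backtracing."""
    n, S = theta.shape
    alpha = np.zeros((n, S))
    back = np.zeros((n, S), dtype=int)
    for i in range(1, n):
        # scores[x_prev, x_cur] = alpha_{i-1}(x_prev) + theta_{i-1}(x_prev) + tau
        scores = alpha[i - 1][:, None] + theta[i - 1][:, None] + tau
        back[i] = np.argmax(scores, axis=0)
        alpha[i] = np.max(scores, axis=0)
    x = np.empty(n, dtype=int)
    x[-1] = int(np.argmax(alpha[-1] + theta[-1]))   # N = max_x alpha_n + theta_n
    for i in range(n - 1, 0, -1):                   # backward sweep
        x[i - 1] = back[i, x[i]]
    return x
```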
3.1 Improving the reduced cost with information from both ends of the chain
Column generation adds all variables with positive reduced cost to the restricted LP, but equation (8) leads to an inefficient algorithm because it is positive for many irrelevant edge settings. In (8), the only terms that involve x_{i+1} are τ(x_i, x_{i+1}) and the τ(x_i', x_{i+1}) term that is part of α_{i+1}(x_{i+1}). These are data-independent. Therefore, even if there is very strong local evidence against a particular setting x_{i+1}, pairs x_i, x_{i+1} may have positive reduced cost if the global transition factor τ(x_i, x_{i+1}) places positive weight on their compatibility.
We can improve upon this by exploiting LP formulations different from that of Wainwright and Jordan. Note that in equation (5) a local score is "billed" to its rightmost edge. If, instead, we split it halfway (now using phantom edges on both sides of the chain), we obtain slightly different message passing rules and the following reduced cost expression:

  α_i(x_i) − α_{i+1}(x_{i+1}) + (1/2) (θ_i(x_i) + θ_{i+1}(x_{i+1})) + τ(x_i, x_{i+1}).    (9)
This contains local information for both x_i and x_{i+1}, though it halves its magnitude. In Table 2 we demonstrate that this yields performance comparable to using the reduced cost of (8), and both still outperform Viterbi. An even better reduced cost expression can be obtained by duplicating the marginalization constraints:
  max  Σ_{i,x_i,x_{i+1}} μ_i(x_i, x_{i+1}) (τ(x_i, x_{i+1}) + (1/2) θ_i(x_i) + (1/2) θ_{i+1}(x_{i+1}))    (10)
  s.t.  Σ_{x_n} μ_n(x_n, ∗) = 1    (N^+)
        Σ_{x_1} μ_0(∗, x_1) = 1    (N^−)
        Σ_{x_{i−1}} μ_{i−1}(x_{i−1}, x_i) = Σ_{x_{i+1}} μ_i(x_i, x_{i+1})    (α_i(x_i))
        Σ_{x_{i+1}} μ_i(x_i, x_{i+1}) = Σ_{x_{i−1}} μ_{i−1}(x_{i−1}, x_i)    (β_i(x_i))
Following similar logic as in the previous section, setting the dual variables according to (7) and

  β_{i−1}(x_{i−1}) = max_{x_i} [ β_i(x_i) + θ_i(x_i) + τ(x_{i−1}, x_i) ]    (11)

is a sufficient condition for optimality.
In effect, we solve the LP of equation (10) with two independent procedures, each solving a one-directional subproblem of the form (6), and either one of these subroutines is sufficient to construct a primal-optimal solution. This redundancy is important, though, because the resulting reduced cost

  2 R_i(x_i, x_{i+1}) = 2 τ(x_i, x_{i+1}) + θ_i(x_i) + θ_{i+1}(x_{i+1}) + (α_i(x_i) − α_{i+1}(x_{i+1})) + (β_{i+1}(x_{i+1}) − β_i(x_i))    (12)

incorporates global information from both directions in the chain. In Table 2 we show that column generation with (12) is fastest, which is not obvious given the extra overhead of computing the β messages. This is the reduced cost that we use in the following discussion and experiments, unless explicitly stated otherwise.
4 Column Generation Algorithm
We present an algorithm for exact MAP inference that in practice is usually faster than traditional
message passing. Like all column generation algorithms, our technique requires components for
three tasks: choosing the initial set of variables in the restricted LP, solving the restricted LP, and
finding variables with positive reduced cost. When no variable of positive reduced cost exists, the
current solution to the restricted problem is optimal because we have a primal-feasible, dual-feasible
pair.
Pseudocode for our algorithm is provided in Figure 1. In the following description, many concepts
will be explained in terms of nodes, despite our LP being defined over edges. The edge quantities
can be defined in terms of node quantities, such as the α and β messages, and it is more efficient to
store these than the quadratically-many edge quantities.
4.1 Initialization
To initialize the LP, we first define a restricted domain for each node in the graphical model, consisting only of x_i^L = argmax_{x_i} θ_i(x_i). Other initialization strategies, such as adding the high-scoring transitions, or the k best x_i, are also valid. Next, we include in the initial restricted LP all the indicator variables μ_i(x_i^L, x_{i+1}^L) corresponding to these size-one domains. Solving the initial restricted LP is very efficient, since all nodes have only one valid setting, and no maximization is needed when passing messages.
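A one-line sketch of this initialization (theta as an n × S array of local scores, as before):

```python
import numpy as np

def init_restricted_domains(theta):
    """Start each position's restricted domain at its best local setting."""
    return [{int(np.argmax(theta[i]))} for i in range(theta.shape[0])]
```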
4.2 Warm-Starting the Restricted LP
Updating all messages using the max-product rules of equations (7) and (11) is a valid way to solve the restricted LP, but it does not leverage the messages that were optimal for previous calls to the problem. In practice, the restricted domains of every node are not updated at every iteration, and hence many of the previous messages may still appear in a dual-optimal setting of the current restricted problem. As usual, solving the restricted LP can be decomposed into independently solving each of the one-directional LPs, and thus we update α independently of β.

To construct a primal setting from either the α or β messages, we employ the standard technique of back-tracing the argmaxes used in their update equations. In some regions of the chain, we can avoid updating messages because we can guarantee that the proposed message updates would yield the same maximization and thus the same primal setting. Simple rules include, for example, avoiding updating α to the left of the first updated domain, and avoiding updating α_i(·) if |D_{i−1}| = 1, since maximization over D_{i−1} is then trivial. Furthermore, to the right of the last updated domain, if we compute new messages α_i'(·) and find that the argmax at the current MAP assignment x_i* does not change, we can revert to the previous α_i(·) and terminate message passing. An analogous statement can be made about the β variables.
When solving the restricted LP, some constraints are trivially satisfied because they only involve
variables that are implicitly set to zero, and hence the corresponding dual variables can be set arbitrarily. To prevent extraneous un-generated variables from having a high reduced cost, we choose
duals by guessing values that should be feasible in the unrestricted LP, with a smaller computational cost than solving the unrestricted LP directly. We employ the same update equation used for
the in-domain messages in (7) and (11), and maximize over the restricted domain of the variable?s
neighbor. In our experiments, over 90% of the restricted domains were of size 1, so this dependence
on the size of the neighbor domain was not a computational bottleneck in practice, and still allowed
the reduced-cost oracle to consider five or less candidate edges in each iteration in more than 86%
of the calls.
4.3 Reduced-Cost Oracle
Exhaustively searching the chain for variables of positive reduced cost by iterating over all settings of
all edges would be as expensive as exact max-product message-passing. However, our oracle search
strategy is efficient because it prunes these away using precomputed bounds on the transitions.
First we decompose equation (12) as follows:

  2 R_i(x_i, x_{i+1}) = 2 τ(x_i, x_{i+1}) + S_i^+(x_i) + S_i^−(x_{i+1})    (13)

where S_i^+(x_i) = θ_i(x_i) + α_i(x_i) − β_i(x_i) and S_i^−(x_{i+1}) = θ_{i+1}(x_{i+1}) − α_{i+1}(x_{i+1}) + β_{i+1}(x_{i+1}). If, in practice, most settings of each edge have negative reduced cost, we can efficiently find candidate settings by first upper-bounding S_i^+(x_i) + 2 τ(x_i, x_{i+1}), finding all possible values x_{i+1} that could yield a positive reduced cost, and then doing the reverse. Finally, we search over the much smaller set of candidates for x_i and x_{i+1}. This strategy is described in Figure 1.
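A sketch of (13) in numpy, with alpha and beta stored as n × S arrays of node messages (our layout):

```python
import numpy as np

def reduced_costs(theta, tau, alpha, beta, i):
    """2 * R_i(x_i, x_{i+1}) for every edge setting, via equation (13)."""
    S_plus = theta[i] + alpha[i] - beta[i]                 # S_i^+(x_i)
    S_minus = theta[i + 1] - alpha[i + 1] + beta[i + 1]    # S_i^-(x_{i+1})
    return 2.0 * tau + S_plus[:, None] + S_minus[None, :]
```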
After the first round of column generation, if R_i(x_i, x_{i+1}) has not changed for any x_i, x_{i+1}, then no variables of positive reduced cost can exist, because they would have been added in the previous iteration, and we can skip the oracle. This condition can be checked while passing messages.
Lastly, a final pruning strategy: if there are settings x_i, x_i' such that

  θ_i(x_i) + min_{x_{i−1}} τ(x_{i−1}, x_i) + min_{x_{i+1}} τ(x_i, x_{i+1}) > θ_i(x_i') + max_{x_{i−1}} τ(x_{i−1}, x_i') + max_{x_{i+1}} τ(x_i', x_{i+1}),    (14)

then we know with certainty that the setting x_i' is suboptimal. This helps prune the oracle's search space
efficiently because the above maxima and minima are data-independent offline computations. We
can do so by first linearly searching through the labels for a node for the one with highest local score
and then using precomputed bounds on the transition scores to linearly discard states whose upper
bound on the score is smaller than the lower bound of the best state.
Algorithm: CG-Infer
begin
    for i = 1 to n do
        D_i = {argmax_{x_i} θ_i(x_i)}
    end
    while domains have not converged do
        (α, β) ← GetMessages(D, θ)
        for i = 1 to n do
            (D_i^−, D_{i+1}^−) ← ReducedCostOracle(i)
            D_i ← D_i ∪ D_i^−
            D_{i+1} ← D_{i+1} ∪ D_{i+1}^−
        end
    end
end

Algorithm: ReducedCostOracle(i)
begin
    U_τ(·, x_j) ← max_{x_i} τ(x_i, x_j)
    U_τ(x_i, ·) ← max_{x_j} τ(x_i, x_j)
    U_i ← max_{x_i} S_i^+(x_i)
    C_i' ← {x_j | S_i^−(x_j) + U_i + 2 U_τ(·, x_j) > 0}
    U_i' ← max_{x_j ∈ C_i'} S_i^−(x_j)
    C_i ← {x_i | S_i^+(x_i) + U_i' + 2 U_τ(x_i, ·) > 0}
    D, D' ← {x_i ∈ C_i, x_j ∈ C_i' | R_i(x_i, x_j) > 0}
    return D, D'
end
Figure 1: Column Generation Algorithm and Pruning Strategy for Reduced Cost Oracle
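A sketch of ReducedCostOracle following Figure 1; in practice the transition bounds U_τ are data-independent and would be precomputed offline rather than recomputed per call, as the text notes.

```python
import numpy as np

def reduced_cost_oracle(theta, tau, alpha, beta, i):
    """Find edge settings (x_i, x_{i+1}) with positive reduced cost,
    pruning with the column/row maxima of the transition scores."""
    S_plus = theta[i] + alpha[i] - beta[i]
    S_minus = theta[i + 1] - alpha[i + 1] + beta[i + 1]
    U_col = tau.max(axis=0)                  # U_tau(., x_j)
    U_row = tau.max(axis=1)                  # U_tau(x_i, .)
    Ui = S_plus.max()
    Cj = np.where(S_minus + Ui + 2.0 * U_col > 0)[0]   # candidate x_{i+1}
    if Cj.size == 0:
        return []
    Ui2 = S_minus[Cj].max()
    Ci = np.where(S_plus + Ui2 + 2.0 * U_row > 0)[0]   # candidate x_i
    out = []
    for xi in Ci:                            # search only the surviving pairs
        for xj in Cj:
            if 2.0 * tau[xi, xj] + S_plus[xi] + S_minus[xj] > 0:
                out.append((int(xi), int(xj)))
    return out
```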
5 Extensions of the Algorithm
The column generation algorithm is fairly general, and can be easily extended to allow for many
interesting use cases. In section 7 we provide experiments supporting the usefulness of these extensions, and they are described in more detail in appendix A.
First of all, our algorithm generalizes easily to MAP inference in trees by using a similar structure
but a different reduced cost expression that considers messages flowing in both directions across
each edge (appendix A.1). The reduced cost oracle can also be used to compute the duality gap
of an approximate solution. This allows early stopping of our algorithm if the gap is small and
also provides analysis of the sub-optimality of the output of beam search (appendix A.2). Furthermore, margin violation queries when doing structured SVM training with a 0/1 loss can be done
efficiently using a small modification of our algorithm, in which we also add variables of small
negative reduced cost and do 2-best inference within the restricted domains (appendix A.3). Lastly,
regularizing the transition weights more strongly allows one to train models that will decode more
quickly (appendix A.4). Most standard inference algorithms, such as Viterbi, do not have this behavior where the inference time is affected by the actual model scores. By coupling inference and
learning, practitioners have more freedom to trade off test-time speed vs. accuracy.
6 Related Work
Column generation has been employed as a way of dramatically speeding up MAP inference problems in Riedel et al [10], which applies it directly to the LP relaxation for dependency parsing with
grandparent edges.
There has been substantial prior work on improving the speed of max-product inference in chains
by pruning the search process. CarpeDiem [11] relies on an expression similar to the oriented,
left-to-right reduced cost equation of (8), also with a similar pruning strategy to the one described
in section 4.3. Following up, Kaji et al. [12] presented a staggered decoding strategy that similarly
attempts to bound the best achievable score using uninstantiated domains, but only used local scores
when searching for new candidates. The dual variables obtained in earlier runs were then used to
warm-start the inference in later runs, similarly to what is done in section 4.2. Their techniques
obtained similar speed-ups as ours over Viterbi inference. However, their algorithms do not provide extensions to inference in trees, a margin-violation oracle, and approximate inference using
a duality gap. Furthermore, Kaji et al. use data-dependent transition scores. This may improve
our performance as well, if the transition scores are more sharply peaked. Similarly, Raphael [13]
also presents a staggered decoding strategy, but does so in a way that applies to many dynamic
programming algorithms.
The strategy of preprocessing data-independent factors to speed up max-product has been previously
explored by McAuley and Caetano [14], who showed that if the transition weights are large, savings
can be obtained by sorting them offline. Our contributions, on the other hand, are more effective
when the transitions are small. The same authors have also explored strategies to reduce the worst-case complexity of message-passing by exploiting faster matrix multiplication algorithms [15].
Alternative methods of leveraging the interplay between fast dynamic programming algorithms and
higher-level LP techniques have been explored elsewhere. For example, in dual decomposition [16], inference in joint models is reduced to repeated inference in independent models. Tree block-coordinate
descent performs approximate inference in loopy
models using exact inference in trees as a subroutine [1]. Column generation is cutting planes in the
dual, and cutting planes have been used successfully
in various machine learning contexts. See, for example, Sontag et al [17] and Riedel et al [18].
There is a mapping between dynamic programs and shortest path problems [19]. Our reduced cost is an estimate of the desirability of an edge setting, and thus our algorithm is heuristic search in the space of edge settings. With dual feasibility, this heuristic is consistent, and thus our algorithm iteratively constructs a heuristic such that it can perform A* search for the final restricted LP [20].

[Figure 2: Training-time manipulation of accuracy vs. test throughput for our algorithm.]
7 Experiments
We compare the performance of column generation with exact and approximate inference on Wall Street Journal [21] part-of-speech (POS) tagging and joint POS tagging and named-entity recognition (POS/NER). The output variable domain size is 45 for POS and 360 for POS/NER. The
test set contains 5463 sentences. The POS model was trained with a 0/1 loss structured SVM and
the POS/NER model was trained using SampleRank [22].
Table 1 compares the inference times and accuracies of column generation (CG), Viterbi, Viterbi
with the final pruning technique described in section 4.3 (Viterbi+P), CG with duality gap termination condition 0.15% (CG+DG), and beam search. For POS, CG, is more than twice as fast as
Viterbi, with speed comparable to a beam of size 3. Whereas CG is exact, Beam-3 loses 1.6%
accuracy. Exact inference in the model obtains a tagging accuracy of 95.3%.
For joint POS and NER tagging, the speedups are even more dramatic. We observe a 13x speedup
over Viterbi and are comparable in speed with a beam of size 7, while being exact. As in POS,
CG-DG provides a mild speedup.
Over 90% of tokens in the POS task had a domain of size one, and over 99% had a domain of size
3 or smaller. Column generation always finished in at most three iterations, and 22% of the time it
terminated after one. 86% of the time, the reduced-cost oracle iterated over at most 5 candidate edge
settings, which is a significant reduction from the worst-case behavior of 45². The pruning strategy
in Viterbi+P manages to restrict the number of possible labels for each token to at most 5 for over
65% of the tokens, and prunes the size of each domain by half over 95% of the time.
Table 2.A presents results for the 0/1 loss oracle described in section 5. Baselines are a standard Viterbi 2-best search¹ and Viterbi 2-best with the pruning technique of 4.3 (Viterbi+P). CG outperforms
Viterbi 2-best on both POS and POS/NER. Though Viterbi+P presents an effective speedup, we
are still 19x faster on POS/NER. In terms of absolute throughput, POS/NER is faster than POS
because the POS/NER model wasn't trained with a regularized structured SVM, and thus there are fewer margin violations. Our 0/1 oracle is quite efficient when determining that there isn't a margin violation, but requires extra work when it must actually produce the 2-best setting.

Table 2.B shows column generation with two other reduced-cost formulations on the same POS tagging task. CG-α uses the reduced cost from equation (8), while CG-α+θ_{i+1} uses the reduced cost from equation (9). The full CG is clearly beneficial, despite requiring the computation of β.
¹ Implemented by replacing all maximizations in the Viterbi code with two-best maximizations.
POS tagging:
Algorithm    % Exact   Sent./sec.
Viterbi      100       3144.6
Viterbi+P    100       4515.3
CG           100       8227.6
CG-DG        98.9      9355.6
Beam-1       57.7      12117.6
Beam-2       92.6      7519.3
Beam-3       98.4      6802.5
Beam-4       99.5      5731.2

Joint POS/NER:
Algorithm    % Exact   Sent./sec.
Viterbi      100       56.9
Viterbi+P    100       498.9
CG           100       779.9
CG-DG        98.4      804
Beam-1       66.6      3717.0
Beam-5       98.5      994.97
Beam-7       99.2      772.8
Beam-10      99.5      575.1

Table 1: Comparing inference time and exactness of Column Generation (CG), Viterbi, Viterbi with the final pruning technique of section 4.3 (Viterbi+P), CG with duality gap termination condition 0.15% (CG-DG), and beam search on POS tagging (top) and joint POS/NER (bottom).
(A) 0/1 loss oracle:
Method              POS Sent./sec.   POS/NER Sent./sec.
CG                  85.0             299.9
Viterbi 2-best      56.0             0.06
Viterbi+P 2-best    119.6            11.7

(B) Reduced cost formulations:
Reduced Cost    POS Sent./sec.
CG              8227.6
CG-α            5125.8
CG-α+θ_{i+1}    4532.1

Table 2: (A) the speedups for a 0/1 loss oracle; (B) comparing reduced cost formulations.
In Figure 2, we explore the ability to manipulate training-time regularization to trade off test accuracy and test speed, as discussed in section 5. We train a structured SVM with L2 regularization (coefficient 0.1) on the emission weights, and vary the L2 coefficient on the transition weights from 0.1 to 10. A 4x gain in speed can be obtained at the expense of an 8% relative decrease in accuracy.
8 Conclusions and future work
In this paper we presented an efficient family of algorithms based on column generation for MAP
inference in chains and trees. This algorithm exploits the fact that inference can often rule out
many possible values, and we can efficiently expand the set of values on the fly. Depending on the
parameter settings it can be twice as fast as Viterbi in WSJ POS tagging and 13x faster in a joint
POS-NER task.
One avenue of further work is to extend the bounding strategies in this algorithm for inference
in cluster graphs or junction trees, allowing faster inference in higher-order chains or even loopy
graphical models. The connection between inference and learning shown in section 5 also bears
further study, since it would be helpful to have more prescriptive advice for regularization strategies
to achieve certain desired accuracy/time tradeoffs.
Acknowledgments
This work was supported in part by the Center for Intelligent Information Retrieval. The University of Massachusetts gratefully acknowledges the support of Defense Advanced Research Projects
Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime
contract no. FA8750-09-C-0181, in part by IARPA via DoI/NBC contract #D11PC20152, in part by
Army prime contract number W911NF-07-1-0216 and University of Pennsylvania subaward number 103-548106, and in part by UPenn NSF medium IIS-0803847. Any opinions, findings and
conclusions or recommendations expressed in this material are the authors' and do not necessarily
reflect those of the sponsor. The U.S. Government is authorized to reproduce and distribute reprints
for Governmental purposes notwithstanding any copyright annotation thereon.
the Modelling of the Yield Strength in a Steel
Rolling Plate Mill
Ah Chung Tsoi
Department of Electrical Engineering
University of Queensland,
St Lucia, Queensland 4072,
Australia.
Abstract
In this paper, a tree based neural network viz. MARS (Friedman, 1991) for
the modelling of the yield strength of a steel rolling plate mill is described.
The inputs to the time series model are temperature, strain, strain rate,
and interpass time and the output is the corresponding yield stress. It
is found that the MARS-based model reveals which variable's functional
dependence is nonlinear, and significant. The results are compared with
those obta.ined by using a Kalman filter based online tuning method and
other classification methods, e.g. CART, C4 .5, Bayesian classification. It
is found that the MARS-based method consistently outperforms the other
methods.
1 Introduction
Hot rolling of steel slabs into flat plates is a common process in a steel mill. This
technology has been in use for many years. The process of rolling hot slabs into
plates is relatively well understood [see, e.g., Underwood, 1950]. But with the
intense international market competition, there is more and more demand on the
quality of the finished plates. This demand for quality fuels the search for a better
understanding of the underlying mechanisms of the transformation of hot slabs
into plates, and a better control of the parameters involved. Hopefully, a better
understanding of the controlling parameters will lead to a more optimal setting
of the control on the process, which will lead ultimately to a better quality final
product.
In this paper, we consider the problem of modelling the plate yield stress in a
hot steel rolling plate mill. Rolling is a process of plastic deformation and its
objective is achieved by subjecting the material to forces of such a magnitude that
the resulting stresses produce permanent change of shape. Apart from the obvious
dependence on the materials used, the characteristics of the material undergoing
plastic deformation are described by stress, strain and temperature, if the rolling
is performed on hot slabs . In addition, the interpass time, i.e., the time between
passes of the slab through the rollers (an indirect measure of the rolling velocity),
directly influences the metallurgical structure of the metal during rolling.
There is considerable evidence that the yield stress is also dependent on the strain
rate. In fact, it is observed that as the strain rate increases, the initial yield point
increases appreciably, but after an extension is achieved, the effect of strain rate on
the yield stress is very much reduced [see, e.g., Underwood, 1950].
The effect of temperature on the yield stress is important. It is shown that the
resistance to deformation increases with a decrease in temperature. The resistance
to deformation versus temperature diagram shows a "hump" in the curve, which
corresponds to the temperature at which the structure of material changes fundamentally [see, e.g., Underwood, 1950, Hodgson & Collinson, 1990].
Using, e.g., an energy method, it is possible to formulate a theoretical model of the
dependence of deformation resistance on temperature, strain, strain rate, velocity
(indirectly, the interpass time). One may then validate the theoretical model by
performing a rolling experiment on a piece of material, perhaps under laboratory
conditions [see, e.g., Horihata, Motomura, 1988, for consideration of a three-roller
system].
It is difficult to apply the derived theoretical model to a practical situation, due to
the fact that in a practical process, the measurement of strain and strain rate are
not accurate. Secondly, one cannot possibly perform a rolling experiment on each
new piece of material to be rolled. Thus though the theoretical model may serve as
a guide to our understanding of the process, it is not suitable for controller design
purposes.
There are empirical models relating the resistance of deformation to temperature,
strain and strain rate [see, e.g., Underwood, 1950, for an account of older models].
These models are often obtained by fitting the observed data to a general data
model.
The following model has been found useful in fitting the observed practical data
k_m = a ε^b sinh^{-1}( (c ε̇ exp(d/T))^f )        (1)
where k_m is the yield stress, ε is the strain, ε̇ is the corresponding strain rate,
and T is the temperature. a, b, c, d and f are unknown constants. It is claimed
that this model will give a good prediction of the yield stress, especially at lower
temperatures, and for thin plate passes [Hodgson & Collinson, 1990] .
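As a quick numerical illustration of equation (1) as reconstructed above, the following Python sketch evaluates the hyperbolic sine law; the constants passed in are placeholders rather than fitted mill values.

```python
import numpy as np

def yield_stress(strain, strain_rate, temp_K, a, b, c, d, f):
    """Hyperbolic sine yield stress model of equation (1):
    k_m = a * eps^b * asinh((c * eps_dot * exp(d / T))^f)."""
    z = (c * strain_rate * np.exp(d / temp_K)) ** f
    return a * strain ** b * np.arcsinh(z)

# Illustrative (not fitted) constants:
km = yield_stress(strain=0.2, strain_rate=10.0, temp_K=1273.0,
                  a=150.0, b=0.2, c=1e-3, d=4000.0, f=0.15)
```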
This model does not always give good predictions over all temperatures as mill
conditions vary with time, and the model is only "tuned" on a limited set of data.
In order to overcome this problem, McFarlane, Telford, and Petersen [1991] have
experimented with a recursive model based on the Kalman filter in control theory
(see, e.g., Anderson & Moore [1980]) to update the parameters a, b, c, d, f in the
above model. To better describe the material behaviour at different temperatures,
the model explicitly incorporates two separate sub-models with a temperature dependence:
1. Full crystallisation (T < T_upper):
   k_m = a ε^b sinh^{-1}( (c ε̇ exp(d/T))^f )        (2)
The constants a, b, c, d, f are model coefficients.
2. Partial recrystallisation (T_lower ≤ T ≤ T_upper):
   k_m = a (ε + ε*)^b sinh^{-1}( (c ε̇ exp(d/T))^f )        (3)
   t_{0.5} = j (λ_{i-1} ε_{i-1} + ε_i)^g q(T_{i-1}, T_i)        (4)
   λ_i = h(t, t_{0.5})        (5)
where λ is the fractional retained strain; ε*, expressed as a Taylor series expansion
of λ_{i-1} ε_{i-1}, is the retained strain; t is the interpass time; t_{0.5} is the 50% recrystallisation time; q(T_{i-1}, T_i) is a prescribed nonlinear function of T_{i-1} and T_i; h(., .) is a
pre-specified nonlinear function; i is the roll pass number; j, h, g are the
model coefficients; T_upper is an experimentally determined temperature at which the
material undergoes a permanent change in structure; and T_lower is a temperature
below which the material does not exhibit any plastic behaviour.
Model coefficients a, b, c, d, f, g, h, j are either estimated in a batch mode (i.e., all
the past data are assumed to be available simultaneously) or adapted recursively
on-line (i.e., only a limited number of the past data is available) using a Kalman
filter algorithm in order to provide the best model predictions [McFarlane, Telford,
Petersen, 1991].
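A minimal sketch of one way such recursive coefficient tuning can be realized is given below: an extended-Kalman-filter-style update that linearizes the nonlinear yield stress model around the current coefficient vector. This is only illustrative of the idea, not the exact scheme of McFarlane, Telford, and Petersen [1991].

```python
import numpy as np

def ekf_update(theta, P, x, y, model, Q=1e-6, R=1.0, h=1e-6):
    """One recursive update of coefficients theta given a new observation
    y = model(theta, x) + noise, using a numerical Jacobian."""
    y_hat = model(theta, x)
    # Numerical gradient of the model output with respect to theta.
    H = np.array([(model(theta + h * e, x) - y_hat) / h
                  for e in np.eye(len(theta))])
    P = P + Q * np.eye(len(theta))        # predict (random-walk parameters)
    S = H @ P @ H + R                     # scalar innovation variance
    K = P @ H / S                         # Kalman gain
    theta = theta + K * (y - y_hat)       # correct
    P = P - np.outer(K, H @ P)
    return theta, P
```

Choosing Q > 0 plays the role of a forgetting factor, allowing the coefficients to track slowly drifting mill conditions.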
It is noted that these models are motivated by the desire to fit a nonlinear model
of a special type, i.e., one which has an inverse hyperbolic sine function. But, since
the basic operation is data fitting, i.e., to fit a model to the set of given data, it
is possible to consider more general nonlinear models. These models may not have
any ready interpretation in metallurgical terms, but these models may be better in
fitting a nonlinear model to the given data set in the sense that it may give a better
prediction of the output.
It has been shown (see, e.g., Hornik et al., 1989) that a class of artificial neural
networks, viz., a multilayer perceptron with a single hidden layer can approximate
any arbitrary input output function to an arbitrary degree of accuracy. Thus it
is reasonable to experiment with different classes of artificial neural network or
induction tree structures for fitting the set of given data and to examine which
structure gives the best performance.
The structure of the paper is as follows: in section 2, a brief review of a special
class of neural networks is given. In section 3, results in applying the neural network
model to the plate mill data are given.
2 A Tree Based Neural Network Model
Friedman [1991] introduced a new class of neural network architecture which is
called MARS (Multivariate Adaptive Regression Spline). This class of methods
can be interpreted as a tree of neurons, in which each leaf of the tree consists of a
neuron. The model of the neuron may be a piecewise linear polynomial, or a cubic
polynomial, with the knot as a variable. In view of the lack of space, we will refer
the interested readers to Friedman's paper [1991] for details on this method.
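To make the flavor of the method concrete, the sketch below illustrates the reflected pairs of truncated linear (hinge) basis functions from which MARS builds its piecewise models; a full MARS implementation additionally performs forward knot selection and backward pruning, which are omitted here.

```python
import numpy as np

def hinge_pair(x, knot):
    """The reflected pair of truncated linear basis functions
    (x - knot)_+ and (knot - x)_+ used by MARS."""
    return np.maximum(x - knot, 0.0), np.maximum(knot - x, 0.0)

# A piecewise-linear fit in one variable via least squares over fixed knots:
x = np.linspace(0.0, 1.0, 200)
y = np.sin(3 * x)                       # stand-in response
cols = [np.ones_like(x)]
for k in (0.25, 0.5, 0.75):             # illustrative knot locations
    cols.extend(hinge_pair(x, k))
B = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
```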
3 Results
MARS has been applied to the platemill data. We have used the data in the
following manner.
We concatenate different runs of the plate mill into a single time series. This consists
of 2877 points corresponding to 180 individual plates with approximately 16 passes
on each plate. There are 4 independent variables, viz., interpass time, temperature,
strain, and strain rate. The desired output variable is the yield stress.
A plot of the individual variables, viz temperature, strain, strain rate, interpass
time and stress versus time reveal that the variables can vary rather considerably
over the entire time series. In addition, a plot of stress versus temperature, stress
versus strain, stress versus strain rate and stress versus interpass time reveals that
the functional dependence could be highly nonlinear.
We have chosen to use an additive model (Friedman [1991]), instead of the more
general multivariate model, as this will allow us to observe any possible nonlinear
functional dependencies of the output as a function of the inputs.
k_m = k_1 f_1(x_1) + k_2 f_2(x_2) + k_3 f_3(x_3) + k_4 f_4(x_4)        (6)
where k_i, i = 1, 2, 3, 4 are gains, x_1, ..., x_4 denote the four input variables (interpass
time, temperature, strain, and strain rate), and f_i, i = 1, 2, 3, 4 are piecewise nonlinear
polynomial models found by MARS.
The results are as follows:
Both the piecewise linear polynomial and the piecewise cubic polynomial are used
to study this set of data. It is found that the cubic polynomial gives a better fit than
the linear polynomial fit. Figure 1(a) shows the error plot between the estimated
output from a cubic spline fit, and the training data. It is observed that the error
is very small. The maximum error is about -0.07. Figure 1(b) shows the plot of the
predicted yield stress and the original yield stress over the set of training data.
These figures indicate that the cubic polynomial fit has captured most of the variation of the data. It is interesting to note that in this model, the interpass time
[Figure 1: (a) The prediction error on the training data set. (b) The prediction and the training data set superimposed.]
plays no significant part. This feature may be a peculiar aspect of this set of data
points. It is not true in general.
It is found that the strain rate has the most influence on the data, followed by
temperature, and followed by strain. The model, once obtained, can be used to
predict the yield stress from a given set of temperature, strain, and strain rate.
Figure 2(a) shows the prediction error between the yield stress and the predicted
yield stress on a set of testing data, i.e. the data which is not used to train the model
and Figure 2(b) shows a plot of the predicted value of yield stress superimposed on
the original yield stress.
It is observed that the prediction on the set of testing data is reasonable. This
indicates that the MARS model has captured most of the dynamics underlying the
original training data, and is capable of extending this captured knowledge onto a
set of hitherto unseen data.
4 Comparison with the Results Obtained by Conventional Approaches
In order to compare the artificial neural network approach to more conventional
methods for model tuning, the same data set was processed using:
1. A MARS model with cubic polynomials
2. An inverse hyperbolic sine law model using least square batch parameter tuning
3. An inverse hyperbolic sine law model using recursive least squares tuning
4. CART based classification [Breiman et al., 1984]
5. C4.5 based method [Quinlan, 1986, 1987]
6. Bayesian classification [Buntine, 1990]

[Figure 2: (a) The prediction error on the testing data set. (b) The prediction and the testing data set superimposed.]
In each case, we used a training data set of 78 plates (1242 passes) and a testing data
set of 16 plates (252 passes). In the cases of CART, C4.5, and Bayesian classification
methods, the yield stress variable is divided equally into 10 classes, and this is used
as the desired output instead of the original real values.
The comparison of the results between MARS and the Kalman filter based approach
are shown in the following table
            B11     B12     A11     A12     C11     C12
mean%      -0.64    1.69   -0.64    2.38   -0.2     4.5
mean abs%   4.61    4.22    4.61    5.3     3.5     5.3
std%        6.26    5.11    6.26    6.25    4.7     4.9
where
B11 = Batch Tuning: tuning the model (forgetting factors = 1 in adaption) on the training data
B12 = Batch Tuning: running the tuned model on the testing data
A11 = Adaptation: on the training data
A12 = Adaptation: on the testing data
C11 = MARS on the training data
C12 = MARS on the testing data,
and mean% = mean((k_meas - k_pred)/k_meas), mean abs% = mean(|(k_meas - k_pred)/k_meas|),
std% = stdev((k_meas - k_pred)/k_meas), where mean and stdev stand for the mean
and the standard deviation respectively, and k_meas, k_pred denote the measured
It is found that the MARS based model performs extremely well compared with the
other methods. The standard deviation of the prediction errors in a MARS model
is considerably less than the corresponding standard deviation of prediction errors
in a Kalman filter type batch or online tuning model on the testing data set.
We have also compared MARS with both the CART based method and the C4.5
based method. As both CART and C4.5 operate only on an output category,
rather than a continuous output value, it is necessary to convert the yield stress
into a category type of variable. We have chosen to divide equally the yield stress
into 10 classes. With this modification, the CART and C4.5 methods are readily
applicable.
The following table summarises the results of this comparison. The values given are
the percentage of the prediction error on the testing data set for various methods.
In the case of MARS, we have converted the prediction error from a continuous
variable into the corresponding classes as used in the CART and C4.5 methods.
Bayes    CART     C4.5     MARS
65.4     12.99    16.14    6.2
It is found that the MARS model is more consistent in predicting the output classes
than either the CART method, the C4.5 based method, or the Bayesian classifier.
The fact that the MARS model performs better than the CART model can be seen
as a confirmation that the MARS model is a generalisation of the CART model
(see Friedman [1991]). But it is rather surprising to see that the MARS model
outperforms a Bayesian classifier.
The results are similar over a number of other typical data sets, e.g., when the
interpass time variable becomes significant.
5 Conclusion
It is found that MARS can be applied to model the platemill data with very good
accuracy. In terms of predictive power on unseen data, it performs better than
the more traditional methods, e.g., Kalman filter batch or online tuning methods,
CART, C4.5 or Bayesian classifier.
It is almost impossible to convert the MARS model into one given in section 1. The
Hodgson-Collinson model places a breakpoint at a temperature of 925 deg C, while
in the MARS model, the temperature breakpoints are found to be at 1017 deg C
and 1129 deg C respectively. Hence it is difficult to convert the MARS model into
those given by the Hodgson-Collinson model, the Kalman filter type models or vice
versa.
A possible improvement to the current MARS technique would be to restrict the
breakpoints, so that they must exist within a temperature region where microstructural changes are known to occur.
6 Acknowledgement
The author acknowledges the assistance given by the staff at the BHP Melbourne
Research Laboratory in providing the data, as well as in providing the background
material in this paper. He specially thanks Dr D McFarlane in giving his generous
time in assisting in the understanding of the more traditional approaches, and also
for providing the results on the Kalman filtering approach. Also, he is indebted
to Dr W Buntine, RIACS, NASA, Ames Research Center for providing an early
version of the induction tree based programs.
7 References
Anderson, B.D.O., Moore, J.B. (1980). Optimal Filtering. Prentice-Hall, Englewood Cliffs, NJ.
Breiman, L., Friedman, J., Olshen, R.A., Stone, C.J. (1984). Classification and Regression Trees. Wadsworth, Belmont, CA.
Buntine, W. (1990). A Theory of Learning Classification Rules. PhD Thesis submitted to the University of Technology, Sydney.
Friedman, J. (1991). "Multivariate Adaptive Regression Splines". Annals of Statistics, to appear. (Also, the implication of the paper for neural network models was presented orally at the 1990 NIPS Conference.)
Hodgson, Collinson (1990). Manuscript under preparation (authors are with BHP Research Lab., Melbourne, Australia).
Horihata, M., Motomura, M. (1988). "Theoretical analysis of 3-roll rolling process by the energy method". Transactions of the Iron and Steel Institute of Japan, 28:6, 434-439.
Hornik, K., Stinchcombe, M., White, H. (1989). "Multilayer Feedforward Networks are Universal Approximators". Neural Networks, 2, 359-366.
McFarlane, D., Telford, A., Petersen, I. (1991). Manuscript under preparation.
Quinlan, R. (1986). "Induction of Decision Trees". Machine Learning, 1, 81-106.
Quinlan, R. (1987). "Simplifying Decision Trees". International Journal of Man-Machine Studies, 27, 221-234.
Underwood, L.R. (1950). The Rolling of Metals. Chapman & Hall, London.
Hemant Tyagi and Volkan Cevher
LIONS ? EPFL
Abstract
We consider the problem of actively learning multi-index functions of the form
Pk
f (x) = g(Ax) = i=1 gi (aTi x) from point evaluations of f . We assume that
the function f is defined on an `2 -ball in Rd , g is twice continuously differentiable almost everywhere, and A ? Rk?d is a rank k matrix, where k d. We
propose a randomized, active sampling scheme for estimating such functions with
uniform approximation guarantees. Our theoretical developments leverage recent
techniques from low rank matrix recovery, which enables us to derive an estimator of the function f along with sample complexity bounds. We also characterize
the noise robustness of the scheme, and provide empirical evidence that the highdimensional scaling of our sample complexity bounds are quite accurate.
1
Introduction
d
Learning functions f : x ? y based on training data (yi , xi )m
i=1 : R ? R is a fundamental problem
with many scientific and engineering applications. Often, the function f has a parametric model,
as in linear regression when f (x) = aT x, and hence, learning the function amounts to learning the
model parameters. In this setting, obtaining an approximate model fb when d 1 is challenging due
to the curse-of-dimensionality. Fortunately, low-dimensional parameter models, such as sparsity and
low-rank models, enable successful learning from dimensionality reduced or incomplete data [1, 2].
Since any parametric form is at best an approximation, non-parametric models remain as important
alternatives where we also attempt to learn the structure of the mapping f from data [3?15]. Unfortunately, the curse-of-dimensionality problem in non-parametric function learning in high-dimensions
is particularly difficult even with smoothness assumptions on f [16?18]. For instance, learning
functions f ? C s (i.e., the derivatives f 0 , . . . , f (s) exist and are continuous), defined over compact
supports, require m = ?((1/?)d/s ) samples for a uniform approximation guarantee of ? 1 (i.e.,
kf ? fbkL? ? ?) [17]. Surprisingly, even infinitely differentiable functions (s = ?) are not immune
to this problem (m = ?(2bd/2c )) [18]. Therefore, further assumptions on the multivariate functions
beyond smoothness are needed for the tractability of successful learning [13, 14, 16, 19].
To this end, we seek to learn low-dimensional function models f(x) = g(Ax) that decompose as

Model 1: f(x) = \sum_{i=1}^{k} g_i(a_i^T x)   |   Model 2: f(x) = a_1^T x + \sum_{i=2}^{k} g_i(a_i^T x),        (1)

thereby constraining f to effectively live on k-dimensional subspaces, where k ≪ d. The models in (1) have several important machine learning applications, and are known as the multi-index models in statistics and econometrics, and multi-ridge functions in signal processing [4-7, 20-24].
In stark contrast to the classical regression setting where (y_i, x_i), i = 1, ..., m, is given a priori, we posit the active learning setting where we can query the function to obtain first an explicit approximation of A and subsequently of f. As a stylized example of the active learning setting, consider numerical solutions of parametric partial differential equations (PDE). Given PDE(f, x) = 0, where f(x) : Ω → R is the implicit solution, obtaining a function sample typically requires running a computationally expensive numerical solver. As we have the ability to choose the samples, we can minimize the number of queries to the PDE solver in order to learn an explicit approximation of f [13].
Background: To set the context for our contributions, it is necessary to review the (rather extensive) literature that revolves around the models (1). We categorize the earlier works by how the samples are obtained (regression (passive) vs. active learning), what the underlying low-dimensional model is (low-rank vs. sparse), and how the smoothness is encoded (kernels vs. C^s).
Regression/low-rank [3-7]: We consider the function model f(x) = g(Ax) to be kernel smooth or C^s. Noting the differentiability of f, we observe that the gradients ∇f(x) = A^T ∇g(Ax) live within the low-dimensional subspaces of A^T. Assuming that ∇g has sufficient richness to span k-dimensional subspaces of the rows of A, we use the given samples to obtain Hessian estimates via local smoothing techniques, such as kernel estimates, nearest-neighbor, or spline methods. We then use the k principal vectors of the estimated Hessian to approximate A. In some cases, we can even establish the asymptotic distribution of the A estimates, but not finite sample complexity bounds.
Regression/sparse [8-12]: We add sparsity restrictions on the function models: for instance, we assume only one coordinate is active per additive term in (1). To encode smoothness, we restrict f to a particular functional space, such as the reproducing kernel Hilbert or Sobolev spaces. We employ greedy algorithms, back-fitting approaches, or convex regularizers to not only estimate the active coordinates but also the function itself. We can then establish finite sample complexity rates with guarantees of the form ‖f − f̂‖_{L_2} ≤ δ, which grow logarithmically with d as well as match the minimax bounds for the learning problem. Moreover, the function estimation incurs a linear cost in k, since the problem formulation affords a rotation-free structure between x and the g_i's.
Active learning [13-15]: The majority of the (rather limited) literature on active non-parametric function learning makes sparsity assumptions on A to obtain guarantees of the form ‖f − f̂‖_{L∞} ≤ δ, where f ∈ C^s with s > 1.^1 For instance, we consider the form f(x) = g(Ax), where the rows of A live in a weak ℓ_q ball with q < 2 (i.e., they are approximately sparse).^2 We then leverage a prescribed random sampling, and prove that the sample complexity grows logarithmically with d and is inversely proportional to the k-th singular value of a "Hessian" matrix H^f (for a precise definition of H^f, see (7)). Thus far, the only known characterization of the k-th singular value of H^f is for radial basis functions, i.e., f(x) = g(‖Ax‖_2). Just recently, we also see a low-rank model to handle f(x) = g(a^T x) for a general a (k = 1) with a sample complexity proportional to d [15].
Our contributions: In this paper, which is a summary of [26], we take the active learning perspective via low-rank methods, where we have a general A with only C^s assumptions on the g_i's. Our main contributions are as follows:
1. k-th singular value of H^f [14, 15]: Based on the random sampling schemes of [14, 15], we rigorously establish the first high-dimensional scaling characterization of the k-th singular value of H^f, which governs the sample complexity for both sparse and general A for the multi-index models in (1). To achieve this result, we introduce an easy-to-verify, new analysis tool based on Lipschitz continuous second order partial derivatives.
2. Generalization of [13-15]: We derive the first sample complexity bound for the C^s functions in (1) with an arbitrary number of linear parameters k, without compressibility assumptions on the rows of A. Along the way, we leverage the conventional low-rank models in regression approaches and bridge them with the recent low-rank recovery algorithms. Our result also lifts the sparse additive models in regression [8-12] to a basis-free setting.
3. Impact of additive noise: We analytically show how additive white Gaussian noise in the function queries impacts the sample complexity of our low-rank approach.
^1 Not to be confused with the online active learning approaches, which "optimize" a function, such as finding its maximum [25]. In contrast, we would like to obtain uniform approximation guarantees on f, which might lead to redundant samples if we truly are only interested in finding a critical point of the function.
^2 As having one known basis to sparsify all k dimensions in order to obtain a sparse A is rather restrictive, this model does not provide a basis-free generalization of the sparse additive models in regression [8-12].
2 A recipe for active learning of low-dimensional non-parametric models
This section provides the preliminaries for our low-rank active learning approach for multi-index
models in (1). We first introduce our sampling scheme (based on [14, 15]), summarize our main
observation model (based on [6, 7, 14, 15]), and explain our algorithmic approach (based on [15]).
This discussion sets the stage for our main theoretical contributions, as described in Section 4.
Our sampling scheme: Our sampling approach relies on a specific interaction of two sets: sampling centers and an associated set of directions for each center. We denote the set of sampling centers as X = {ξ_j ∈ S^{d−1}; j = 1, ..., m_X}. We form X by sampling points uniformly at random in S^{d−1} (the unit sphere in d dimensions) according to the uniform measure μ_{S^{d−1}}. Along with each ξ_j ∈ X, we define a directions vector Φ_j = [φ_{1,j} | ... | φ_{m_Φ,j}]^T, and construct the sampling directions operator Φ for j = 1, ..., m_X, i = 1, ..., m_Φ, and l = 1, ..., d as

Φ = { φ_{i,j} ∈ B_{R^d}(√(d/m_Φ)) : [φ_{i,j}]_l = ±1/√(m_Φ) with probability 1/2 },        (2)

where B_{R^d}(√(d/m_Φ)) is the ℓ2-ball with radius r = √(d/m_Φ).
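A direct numpy sketch of this sampling construction is given below; the function and variable names are ours, and mX and mPhi follow the notation above.

```python
import numpy as np

def sampling_sets(d, mX, mPhi, rng=np.random.default_rng(0)):
    """Draw centers uniformly on the sphere S^{d-1} and Rademacher-type
    directions phi with entries +/- 1/sqrt(mPhi), as in equation (2)."""
    centers = rng.standard_normal((mX, d))
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)   # xi_j on S^{d-1}
    signs = rng.integers(0, 2, size=(mPhi, mX, d)) * 2 - 1
    directions = signs / np.sqrt(mPhi)   # each phi has norm sqrt(d/mPhi)
    return centers, directions
```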
Our low-rank observation model: We first write the Taylor series approximation of f as follows:

f(x + εφ) = f(x) + ε⟨φ, ∇f(x)⟩ + (ε²/2) E(x, ε, φ);   E(x, ε, φ) = φ^T ∇²f(ζ(x, εφ)) φ,        (3)

where ε ≪ 1, E(x, ε, φ) is the curvature error, and ζ(x, εφ) ∈ [x, x + εφ] ⊂ B_{R^d}(1 + r). Substituting f(x) = g(Ax) into (3), we obtain a perturbed observation model (∇g(·) is a k × 1 vector):

⟨φ, A^T ∇g(Ax)⟩ = (1/ε)(f(x + εφ) − f(x)) − (ε/2) E(x, ε, φ).        (4)

We then introduce a matrix X := A^T G with G := [∇g(Aξ_1) | ∇g(Aξ_2) | ... | ∇g(Aξ_{m_X})]_{k×m_X}. Based on (4), we then derive the following linear system via the operator Φ : R^{d×m_X} → R^{m_Φ}:

y = Φ(X) + η;   y_i = ε^{−1} \sum_{j=1}^{m_X} [f(ξ_j + εφ_{i,j}) − f(ξ_j)],        (5)

where y ∈ R^{m_Φ} are the perturbed measurements of X with [Φ(X)]_i = trace(Φ_i^T X), and η = (ε/2) E(X, ε, Φ) collects the curvature perturbations. The formulation (5) motivates us to leverage affine rank-minimization algorithms [27-29] for low-rank matrix recovery, since rank(X) ≤ k ≪ d.
Our active low-rank learning algorithm: Algorithm 1 outlines the main steps involved in our approximation scheme. Step 1 constructs the operator Φ and the measurements y, given m_Φ, m_X, and ε. Step 2 revolves around the affine rank-minimization algorithms. Step 3 maps the recovered low-rank matrix to Â using the singular value decomposition (SVD) and rank-k approximation. Given Â, step 4 constructs f̂(x) = ĝ(Âx) as our estimator, where ĝ(y) = f(Â^T y).
Algorithm 1: Active learner algorithm for the non-parametric model f(x) = g(Ax)
1: Choose m_Φ, m_X, and ε, and construct the sets X and Φ, and the measurements y.
2: Obtain X̂ via a stable low-rank recovery algorithm (see Section 3 for an example).
3: Compute SVD(X̂) = Û Σ̂ V̂^T and set Â^T = Û^{(k)}, corresponding to the k largest singular values.
4: Obtain an approximation f̂(x) := ĝ(Âx) via quasi-interpolants, where ĝ(y) := f(Â^T y).
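The following Python sketch strings steps 1-3 together for a black-box query oracle f, reusing the sampling_sets helper from the earlier sketch. For brevity, step 2 is replaced by a minimum-norm least-squares surrogate instead of a full Dantzig selector solve (see Section 3), so it only illustrates the data flow of the algorithm.

```python
import numpy as np

def learn_subspace(f, d, k, mX=50, mPhi=400, eps=1e-3,
                   rng=np.random.default_rng(0)):
    """Estimate the row space of A for f(x) = g(Ax) from point queries."""
    centers, dirs = sampling_sets(d, mX, mPhi, rng)
    f0 = np.array([f(xi) for xi in centers])
    # y_i = (1/eps) * sum_j [ f(xi_j + eps*phi_{i,j}) - f(xi_j) ], eq. (5)
    y = np.array([sum(f(centers[j] + eps * dirs[i, j]) - f0[j]
                      for j in range(mX)) / eps for i in range(mPhi)])
    # Surrogate for step 2: min-norm solution of <Phi_i, X> = y_i.
    Phi_mat = dirs.reshape(mPhi, mX * d)          # row i flattens Phi_i
    xt_vec, *_ = np.linalg.lstsq(Phi_mat, y, rcond=None)
    X_hat = xt_vec.reshape(mX, d).T               # d x mX estimate of A^T G
    U, s, Vt = np.linalg.svd(X_hat, full_matrices=False)
    return U[:, :k].T                             # step 3: k x d estimate of A
```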
Remark 1. We uniformly approximate the function ĝ by first sampling it on a rectangular grid hZ^k ∩ (−(1 + ε̄), (1 + ε̄))^k with uniformly spaced points in each direction (step size h). We then use quasi-interpolants to interpolate in between the points, thereby obtaining the approximation ĝ_h, where the complexity is exponential in k (see the tractability discussion in the introduction). We refer the reader to Chapter 12 of [17] regarding the construction of these operators.
3 Stable low-rank recovery algorithms within our learning scheme
By stable low-rank recovery in Algorithm 1, we mean any algorithm that returns an X̂ with the following guarantee: ‖X̂ − X‖_F ≤ c_1 ‖X − X_k‖_F + c_2 ‖η‖_2, where c_{1,2} are constants and X_k is the best rank-k approximation of X. Since there exists a vast set of algorithms with such guarantees, we use the matrix Dantzig selector [29] as a running example. This discussion is intended to expose the reader to the key elements necessary to re-derive the sample complexity of our scheme in Section 4 for different algorithms, which might offer additional computational trade-offs.
Stable embedding: We first explain an elementary result stating that our sampling mechanism satisfies the restricted isometry property (RIP) for all rank-k matrices with overwhelming probability. That is, (1 − δ_k)‖X_k‖_F² ≤ ‖Φ(X_k)‖²_{ℓ2} ≤ (1 + δ_k)‖X_k‖_F², where δ_k is the RIP constant [29]. This property can be used in establishing stability of virtually all low-rank recovery algorithms.
As Φ in (5) is a Bernoulli random measurement ensemble, it follows from standard concentration inequalities [30, 31] that for any rank-k X ∈ R^{d×m_X}, we have P(| ‖Φ(X)‖²_{ℓ2} − ‖X‖_F² | > t ‖X‖_F²) ≤ 2 e^{−(m_Φ/2)(t²/2 − t³/3)}, t ∈ (0, 1). By using a standard covering argument, as shown in Theorem 2.3 of [29], we can verify that our Φ satisfies RIP with isometry constant 0 < δ_k < Δ < 1 with probability at least 1 − 2 e^{−m_Φ q(Δ) + k(d + m_X + 1) u(Δ)}, where q(Δ) = Δ²/144 − Δ³/1296 and u(Δ) = log(36√2/Δ).
Recovery algorithm and its tuning parameters: The Dantzig selector criterion is given by

X̂_{DS} = arg min_M ‖M‖_*   s.t.   ‖Φ*(y − Φ(M))‖ ≤ λ,        (6)

where ‖·‖_* and ‖·‖ are the nuclear and spectral norms, respectively, and λ is a tuning parameter. We require the true X to be feasible, i.e., ‖Φ*(η)‖ ≤ λ. Hence, the parameter λ can be obtained via

Proposition 1. In (5), we have ‖η‖_{ℓ2^{m_Φ}} ≤ (C_2 ε d m_X k²)/(2√(m_Φ)). Moreover, it holds that ‖Φ*(η)‖ ≤ λ = (C_2 ε d m_X k²)/(2√(m_Φ)) (1 + Δ)^{1/2}, with probability at least 1 − 2 e^{−m_Φ q(Δ) + (d + m_X + 1) u(Δ)}.
Proposition 1 is a new result that provides the typical low-rank recovery algorithm tuning parameters for the random sampling scheme in Section 2. We prove Proposition 1 in [26]. Note that the dimension d appears in the bound as we do not make any compressibility assumption on A. If the rows of A are compressible, that is, (\sum_{j=1}^{d} |a_{ij}|^q)^{1/q} ≤ D_1 for all i = 1, ..., k and some 0 < q < 1, D_1 > 0, we can then remove the explicit d-dependence in the bound here.
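Criterion (6) is directly expressible in an off-the-shelf convex modeling tool. Below is an illustrative cvxpy sketch, assuming the measurement directions dirs and measurements y from the earlier sampling sketch; building the adjoint term by summation is simple but slow for large m_Φ, so this is a didactic sketch rather than a production solver.

```python
import cvxpy as cp
import numpy as np

def dantzig_selector(dirs, y, lam):
    """Matrix Dantzig selector (6): min ||M||_* s.t. ||Phi*(y - Phi(M))|| <= lam.
    dirs has shape (mPhi, mX, d); Phi_i = dirs[i].T is a d x mX matrix."""
    mPhi, mX, d = dirs.shape
    M = cp.Variable((d, mX))
    resid = [y[i] - cp.sum(cp.multiply(dirs[i].T, M)) for i in range(mPhi)]
    adj = sum(resid[i] * dirs[i].T for i in range(mPhi))   # Phi*(y - Phi(M))
    prob = cp.Problem(cp.Minimize(cp.normNuc(M)),
                      [cp.sigma_max(adj) <= lam])
    prob.solve()
    return M.value
```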
Stability of low-rank recovery: We first restate a stability result from [29] for bounded noise in Theorem 1. We then exploit this result in Corollary 1, along with Proposition 1, in order to obtain the error bound for the rank-k approximation X̂^{(k)}_{DS} to X in step 4 of our Algorithm 1:

Theorem 1 (Theorem 2.4 in [29]). Let rank(X) ≤ k and let X̂_{DS} be the solution to (6). If δ_{4k} < Δ < √2 − 1 and ‖Φ*(η)‖ ≤ λ, then we have, with probability at least 1 − 2 e^{−m_Φ q(Δ) + 4k(d + m_X + 1) u(Δ)},

‖X̂_{DS} − X‖_F² ≤ C_0 k λ²,

where C_0 depends only on the isometry constant δ_{4k}.

Corollary 1. Denoting X̂_{DS} to be the solution of (6), if X̂^{(k)}_{DS} is the best rank-k approximation to X̂_{DS} in the sense of ‖·‖_F, and if δ_{4k} < Δ < √2 − 1, then we have

‖X̂^{(k)}_{DS} − X‖_F² ≤ 4 C_0 k λ² = (C_0 C_2² k⁵ ε² d² m_X² / m_Φ)(1 + Δ),

with probability at least 1 − 2 e^{−m_Φ q(Δ) + 4k(d + m_X + 1) u(Δ)}.
Corollary 1 is the main result of this section, which is proved in [26]. The approximation guarantee
in Corollary 1 can be tightened if other low-rank recovery algorithms are employed in estimation of
X. However, we note again that the Dantzig selector enables us to highlight the key steps that lead
to the sample complexity of our approach in the next section.
4 Main results
Overview: Below, we study m_Φ, m_X, and ε, which together achieve and balance three objectives:
m_X: Sampling centers X are chosen so that the matrix G has rank k. This is critical in ensuring that G explores the full k-dimensional subspaces as spanned by A^T, lest X be rank deficient.
m_Φ: Sampling directions Φ in (2) are designed to satisfy the RIP for rank-k matrices (cf. Section 3). This property is typically key in proving low-rank recovery guarantees.
ε: The step size in (3) manages the impact of the curvature effects E in the linear system (5). Unfortunately, this leads to the collateral damage of amplifying the impact of noise if the queries are corrupted. We provide a remedy below based on resampling the same data points.
Assumptions: We explicitly mention our assumptions here. Without loss of generality, we assume A = [a_1, ..., a_k]^T is an arbitrary k × d matrix with orthogonal rows so that A A^T = I_k, and the function f is defined over the unit ball, i.e., f : B_{R^d}(1) → R. For simplicity, we carry out our analysis by assuming g to be a C² function. By our setup, g also lives over a compact set; hence all its partial derivatives up to order two are bounded as a result of the Stone-Weierstrass theorem:

sup_{|α|≤2} ‖D^α g‖_∞ ≤ C_2;   D^α g = ∂^{|α|} g / (∂y_1^{α_1} ... ∂y_k^{α_k});   |α| = α_1 + ... + α_k,

for some constant C_2 > 0. Finally, the effectiveness of our sampling approach depends on whether or not the following "Hessian" matrix H^f is well-conditioned:

H^f := ∫_{S^{d−1}} ∇f(x) ∇f(x)^T dμ_{S^{d−1}}(x).        (7)

That is, for the singular values of H^f, we assume σ_1(H^f) ≥ ... ≥ σ_k(H^f) ≥ α > 0 for some α. This assumption ensures X has full rank k so that A can be successfully learned.
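When the gradient of f is available (e.g., for synthetic test functions), α = σ_k(H^f) can be checked numerically by Monte Carlo integration over the sphere, as in the sketch below; the sample size is an arbitrary choice.

```python
import numpy as np

def kth_singular_value(grad_f, d, k, n_samples=20000,
                       rng=np.random.default_rng(0)):
    """Monte Carlo estimate of H^f = E[grad f(x) grad f(x)^T] over the
    uniform measure on S^{d-1}, and its k-th singular value alpha."""
    x = rng.standard_normal((n_samples, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    grads = np.array([grad_f(xi) for xi in x])        # (n_samples, d)
    H = grads.T @ grads / n_samples
    return np.linalg.svd(H, compute_uv=False)[k - 1]
```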
Restricted singular values of multi-index models: Our first main technical contribution provides a local condition in Proposition 2 that fully characterizes α for the multi-index models in (1) below. We prove Proposition 2 and the ensuing Proposition 3 in [26].

Proposition 2. Assume that g ∈ C² : B_{R^k} → R has Lipschitz continuous second order partial derivatives in an open neighborhood of the origin, U_ρ = B_{R^k}(ρ), for some fixed ρ = O(d^{−(s+1)}) and some s > 0:

| ∂²g/(∂y_i ∂y_j)(y_1) − ∂²g/(∂y_i ∂y_j)(y_2) | / ‖y_1 − y_2‖_{ℓ2^k} ≤ L_{i,j}   for all y_1, y_2 ∈ U_ρ, y_1 ≠ y_2, i, j = 1, ..., k.

Denote L = max_{1≤i,j≤k} L_{i,j}. Also assume ∂²g(y)/∂y_i² |_{y=0} ≠ 0 for all i = 1, ..., k for Model 1 and for all i = 2, ..., k for Model 2 in (1). Then, we have α = Ω(1/d) as d → ∞.
The proof of Proposition 2 also leads to the following proposition for tractability of learning the general set f(x) = g(Ax) without the particular modular decomposition as in (1):

Proposition 3. With the same Lipschitz continuous second order partial derivative assumption as in Proposition 2, if ∇²g(0) is rank-k, then we have α = Ω(1/d) as d → ∞.
Sampling complexity of active multi-index model learning: The importance of Proposition 2 and Proposition 3 is made explicit in our second main technical contribution, Theorem 2 below, which characterizes the sample complexity of our low-rank learning recipe in Section 2 for non-parametric models along with the Dantzig selector algorithm. Its proof can be found in [26].
^3 Unless further assumptions are made on f or the g_i's, we can only identify the subspace spanned by the rows of A up to a rotation. Hence, while we discuss approximation results on A, the reader should keep in mind that our final guarantees only apply to the function f and not necessarily to A and g individually. Moreover, if f lives in some other convex body than B_{R^d}(1), say an L_∞-ball, our analysis can be extended in a straightforward fashion (cf. the concluding discussion in [14]). We also assume that an enlargement of the unit ball B_{R^d}(1) on the domain of f by a sufficiently small ε̄ > 0 is allowed. This is not a restriction, but a consequence of our scheme, as we work with directional derivatives of f at points on the unit sphere S^{d−1}.
Theorem 2 (Sample complexity of Algorithm 1). Let α ∈ R_+, ε̄ ≤ 1, and δ_{4k} < Δ < √2 − 1 be fixed constants. Choose

m_X ≥ (2 k C_2² / α) log(k/p_1),   m_Φ ≥ ( log(2/p_2) + 4k(d + m_X + 1) u(Δ) ) / q(Δ),

and

ε ≤ ( (1 − Δ) m_Φ α / ((1 + Δ) C_0 m_X) )^{1/2} · δ / ( C_2 k^{5/2} d (√α + 2 C_2 √(2k)) ).

Then, given m = m_X(m_Φ + 1) samples, our function estimator f̂ in step 4 of Algorithm 1 obeys ‖f − f̂‖_{L∞} ≤ δ with probability at least 1 − p_1 − p_2.
Theorem 2 characterizes the necessary scaling of the sample complexity for our active learning scheme in order to obtain uniform approximation guarantees on f with overwhelming probability: m_X = O((k log k)/α), m_Φ = O(k(d + m_X)), and ε = O(δ√α/d). Note the important role played by α in the sample complexity. Finally, we also mention that the sample complexity can be written differently to trade off δ among m_X, m_Φ, and ε. For instance, we can remove the δ dependence in the sampling bound for ε: let δ < 1; then we just need to scale m_X by δ^{−2} and m_Φ by δ^{−4}.
Remark 2. Note that the sample complexity in [14] for learning compressible A is m = O( k^{(4−q)/(2−q)} d^{(4−q)/(2−q)} log(k) ) with uniform approximation guarantees on f ∈ C². However, the authors are able to obtain this result only for a restricted set of radial basis functions. Surprisingly, our sample complexity for the multi-index models (1) not only generalizes this result to general A but also features a better dimensional dependence for q ∈ (1, 2): m = O(k³ d² (log(k))²). Of course, we require more computation, since we use low-rank recovery as opposed to sparse recovery methods.
Impact of noisy queries: Here, we focus on how noise impacts ε in particular. Our motivation is to understand how additive noise in the function queries, a realistic assumption in many applications, can impact our learning scheme; this forms the basis of our third main technical contribution.
Let us assume that the evaluation of f at a point x yields f(x) + Z, where Z ~ N(0, σ²). Thus, under this noise model, (5) changes to y = Φ(X) + η + z, where z ∈ R^{m_Φ} and z_i = (1/ε) \sum_{j=1}^{m_X} z_{ij}. Assuming independent and identically distributed (iid) noise, we have z_{ij} ~ N(0, 2σ²), and z_i ~ N(0, 2 m_X σ² / ε²). Therefore, the noise variance gets amplified by a factor of 2 m_X / ε².
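This variance amplification is easy to verify empirically; the following snippet simulates the noise term z_i and compares its variance with 2 m_X σ²/ε² (all numbers are arbitrary).

```python
import numpy as np

# Empirical check of Var(z_i) = 2 * mX * sigma**2 / eps**2 for
# z_i = (1/eps) * sum_j (Z'_{ij} - Z''_{ij}) with iid N(0, sigma^2) noise.
rng = np.random.default_rng(0)
mX, sigma, eps, trials = 30, 0.1, 1e-2, 200_000
z = (rng.normal(0, sigma, (trials, mX))
     - rng.normal(0, sigma, (trials, mX))).sum(axis=1) / eps
print(z.var(), 2 * mX * sigma**2 / eps**2)   # the two should roughly agree
```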
In our analysis in Section 3, recall that we require the true matrix X to be feasible. Then, from Lemma 1.1 in [29] and Proposition 1, it follows that the bound below holds with high probability:

‖Φ*(η + z)‖ ≤ (2σγ/ε) √(2(1 + Δ) m_X m_Φ) + (C_2 ε d m_X k²)/(2√(m_Φ)) (1 + Δ)^{1/2},   (γ > 2√(log 12)).        (8)
Unfortunately, we cannot control the upper bound η on ‖Φ*(E + z)‖ by simply choosing a smaller ε, due to the appearance of the (1/ε) term. Hence, unless σ is O(ε) or less (e.g., σ reduces with d), we can only declare that our learning scheme with the matrix Dantzig selector is sensitive to noise, unless we resample the same data points O(ε⁻¹) times and average. If the noise variance σ² is constant, this would keep the impact of the noise below a constant times the impact of the curvature errors, which our scheme can handle. The sample complexity then becomes m = O(√d/α) m_X(m_Φ + 1), since we choose m_X(m_Φ + 1) unique points, and then re-query and average the same points O(√d/α) times. Unfortunately, we cannot qualitatively improve the O(√d/α) expansion for noise robustness by simply changing the low-rank recovery algorithm, since it depends on the relative ratio of the curvature errors ‖E‖₂ to the norm of the noise vector ‖z‖. As Φ satisfies the RIP assumption, we can verify that this relative ratio is approximately preserved in (8) for iid Gaussian noise.
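As a quick sanity check on the variance amplification, the following sketch (ours, not from the paper; the helper names and the values of σ, ε, and m_X are illustrative) simulates noisy finite-difference queries:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, eps, m_X, trials = 0.1, 1e-2, 20, 100_000

# A single finite-difference query (f(x + eps*phi) - f(x)) / eps inherits
# (Z1 - Z2) / eps from the two noisy evaluations: variance 2*sigma^2/eps^2.
z_ij = (rng.normal(0.0, sigma, (trials, m_X))
        - rng.normal(0.0, sigma, (trials, m_X))) / eps
z_i = z_ij.sum(axis=1)  # one entry of z in y = Phi(X) + E + z

print(z_ij.var(), 2 * sigma**2 / eps**2)        # both close to 200
print(z_i.var(), 2 * m_X * sigma**2 / eps**2)   # amplified by a factor m_X

# Averaging r independent re-queries of the same points divides the variance
# by r, which is how the O(sqrt(d)/alpha)-fold resampling controls the noise.
r = 50
z_avg = z_ij.reshape(trials // r, r, m_X).mean(axis=1).sum(axis=1)
print(z_avg.var(), 2 * m_X * sigma**2 / (r * eps**2))
```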
5 Numerical Experiments

We present simulation results on toy examples to empirically demonstrate the tightness of the sampling bounds. In the sequel, we assume A to be row orthonormal and concern ourselves only with the recovery of A up to an orthonormal transformation. Therefore, we seek a guaranteed lower bound on ‖ÂAᵀ‖_F ≥ (kν)^{1/2} for some 0 < ν < 1. Then it is possible to show, along the lines of the proof of Theorem 2 (see [26]), that we would need to pick ε as follows:

    ε ≤ ((1 − δ) m_Φ)^{1/2} (1 − ν) / ( (1 + ρ) C₀ √m_X · C₂ k² d ( √(k(1 − ν)) + 2 ) ).    (9)
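For reference, here is how one might evaluate the step size prescribed by (9). This is our own sketch built on the reconstructed display above, so the constant placement and the defaults for δ, ρ, C₀, and C₂ are assumptions and should be treated as indicative only:

```python
import math

def eps_from_eq9(m_X, m_Phi, d, k, nu, delta=0.2, rho=1.0, C0=1.0, C2=1.0):
    # Step size from (9): a larger m_Phi allows a larger eps, while the
    # performance target nu and the dimension d force eps down.
    num = math.sqrt((1 - delta) * m_Phi) * (1 - nu)
    den = ((1 + rho) * C0 * math.sqrt(m_X)
           * C2 * k**2 * d * (math.sqrt(k * (1 - nu)) + 2))
    return num / den

print(eps_from_eq9(m_X=20, m_Phi=3000, d=1000, k=1, nu=0.99))
```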
Logistic function (k = 1). We first consider f(x) = g(aᵀx), where g(y) = (1 + e^{−y})^{−1} is the logistic function. This case allows us to explicitly calculate all the necessary parameters within our paper. For instance, we can easily verify that C₂ = sup_{i≤2} |g^{(i)}(y)| = 1. Furthermore, we compute the value of α through the approximation α = ∫ g′(aᵀx)² dμ_{S^{d−1}} ≈ |g′(0)|² = 1/16, which holds for large d. We require |⟨â, a⟩| to be greater than ν = 0.99. We fix values of δ < √2 − 1, ρ, and ε = 10⁻³. The value of m_X (number of points sampled on S^{d−1}) is fixed at 20 and we vary d over the range 200-3000. For each value of d, we increase m_Φ until |⟨â, a⟩| reaches the specified performance criterion ν. We remark that for each value of d and m_Φ, we choose ε according to the derived equation (9) for the specified performance criterion given by ν.
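The approximation α ≈ |g′(0)|² is easy to check numerically; the sketch below (ours, with illustrative dimensions) estimates the spherical integral by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 500, 20_000

a = rng.normal(size=d)
a /= np.linalg.norm(a)
x = rng.normal(size=(n, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)   # uniform samples on S^{d-1}

g = lambda y: 1.0 / (1.0 + np.exp(-y))          # logistic function
g_prime = lambda y: g(y) * (1.0 - g(y))         # g'(y), with g'(0) = 1/4

alpha_mc = np.mean(g_prime(x @ a) ** 2)
# For large d, a^T x concentrates near 0, so both values are close to 0.0625.
print(alpha_mc, 1 / 16)
```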
Figure 1 depicts the scaling of m_Φ with the dimension d. The results are obtained by selecting a uniformly at random on S^{d−1} and averaging the value of |⟨â, a⟩| over 10 independent trials using the Dantzig selector. We observe that for large values of d, the minimum number of directional derivatives needed to achieve the performance bound on |⟨â, a⟩| scales approximately linearly with d, with a scaling factor of around 1.45.

Figure 1: Plot of m_Φ/d versus d for m_X = 20, with m_Φ chosen to be the minimum value needed to achieve |⟨â, a⟩| ≥ 0.99. ε is fixed at 10⁻³. m_Φ scales approximately linearly with d, where the constant is 1.45.
Sum of Gaussian functions (k > 1). We next consider functions of the form f(x) = g(Ax + b) = Σ_{i=1}^{k} g_i(a_iᵀx + b_i), where g_i(y) = (2πσ_i²)^{−1/2} exp(−(y + b_i)²/(2σ_i²)). We fix d = 100, ε = 10⁻³, m_X = 100 and vary k from 8 to 32 in steps of 4. For each value of k we are interested in the minimum value of m_Φ needed to achieve (1/k)‖ÂAᵀ‖²_F ≥ 0.99. In Figure 2(a), we see that m_Φ scales approximately linearly with the number of Gaussian atoms k. The results are averaged over 10 trials. In each trial, we select the rows of A over the left Haar measure on S^{d−1}, and the parameter b uniformly at random on S^{k−1} scaled by a factor 0.2. Furthermore, we generate the standard deviations of the individual Gaussian functions uniformly over the range [0.1, 0.5].
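A sketch of how such test functions can be generated (ours; the QR construction of Haar-distributed orthonormal rows with a sign fix is a standard trick, and we fold the shift b into the atom argument once):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 8

Q, R = np.linalg.qr(rng.normal(size=(d, k)))
Q *= np.sign(np.diag(R))              # sign correction makes Q exactly Haar
A = Q.T                               # k x d with orthonormal rows, A A^T = I_k
b = rng.normal(size=k)
b *= 0.2 / np.linalg.norm(b)          # uniform on S^{k-1}, scaled by 0.2
sig = rng.uniform(0.1, 0.5, size=k)   # widths of the Gaussian atoms

def f(x):
    # Sum of Gaussian atoms evaluated at the shifted projections A @ x + b.
    y = A @ x + b
    return np.sum(np.exp(-y**2 / (2 * sig**2)) / np.sqrt(2 * np.pi * sig**2))

x = rng.normal(size=d)
x /= np.linalg.norm(x)
print(f(x))
```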
Impact of noise (k > 1). We now consider quadratic forms, i.e., f(x) = g(Ax) = ‖Ax − b‖², with the point queries corrupted with Gaussian noise. Here, we take α to be 1/d. We fix k = 5, m_X = 30, ε = 10⁻¹ and vary d from 30 to 120 in steps of 15. For each d we perturb the point queries with Gaussian noise of standard deviation 0.01/d^{3/2}. This is the same as repeatedly sampling each random location approximately d^{3/2} times, followed by averaging. We then compute the minimum value of m_Φ needed to achieve (1/k)‖ÂAᵀ‖²_F ≥ 0.99. We average the results over 10 trials and, in each trial, we select the rows of A over the left Haar measure on S^{d−1}. The parameter b is chosen uniformly at random on S^{k−1}. In Figure 2(b), we see that m_Φ scales approximately linearly with d, which follows our sample complexity bound for m_Φ in Theorem 2.
(a) k > 1 (Gaussian)  (b) k > 1 (quadratic) with noise
Figure 2: The empirical performance of our oracle-based low-rank learning scheme (circles) agrees well with the theoretical scaling (dashed). Section 5 has further details.
6 Conclusions

In this work, we consider the problem of learning non-parametric low-dimensional functions f(x) = g(Ax), which can also have a modular decomposition as in (1), for arbitrary A ∈ R^{k×d} with rank(A) = k. The main contributions of the work are three-fold. By introducing a new analysis tool based on a Lipschitz property of the second-order derivatives, we provide the first rigorous characterization of the dimension dependence of the k-restricted singular value of the "Hessian" matrix H_f for general multi-index models. We establish the first sample complexity bound for learning non-parametric multi-index models with low-rank recovery algorithms, and we also analyze the impact of additive noise on the sample complexity of the scheme. Lastly, we provide empirical evidence on toy examples to show the tightness of the sampling bounds. Finally, while our active learning scheme ensures the tractability of learning non-parametric multi-index models, it does not establish a lower bound on the sample complexity, which is left for future work.
7 Acknowledgments

This work was supported in part by the European Commission under Grant MIRG-268398, ERC Future Proof, SNF 200021-132548, ARO MURI W911NF0910383, and DARPA KeCoM program #11-DARPA-1055. VC also would like to acknowledge Rice University for his Faculty Fellowship. The authors thank Jan Vybíral for useful discussions and Anastasios Kyrillidis for his help with the low-rank matrix recovery simulations.
References
[1] P. Bühlmann and S. Van De Geer. Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer-Verlag New York Inc, 2011.
[2] L. Carin, R. G. Baraniuk, V. Cevher, D. Dunson, M. I. Jordan, G. Sapiro, and M. B. Wakin. Learning low-dimensional signal models. Signal Processing Magazine, IEEE, 28(2):39-51, 2011.
[3] M. Hristache, A. Juditsky, J. Polzehl, and V. Spokoiny. Structure adaptive approach for dimension reduction. The Annals of Statistics, 29(6):1537-1566, 2001.
[4] K. C. Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, pages 316-327, 1991.
[5] P. Hall and K. C. Li. On almost linearity of low dimensional projections from high dimensional data. The Annals of Statistics, pages 867-889, 1993.
[6] Y. Xia, H. Tong, W. K. Li, and L. X. Zhu. An adaptive estimation of dimension reduction space. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64(3):363-410, 2002.
[7] Y. Xia. A multiple-index model and dimension reduction. Journal of the American Statistical Association, 103(484):1631-1640, 2008.
[8] Y. Lin and H. H. Zhang. Component selection and smoothing in multivariate nonparametric regression. The Annals of Statistics, 34(5):2272-2297, 2006.
[9] L. Meier, S. Van De Geer, and P. Bühlmann. High-dimensional additive modeling. The Annals of Statistics, 37(6B):3779-3821, 2009.
[10] G. Raskutti, M. J. Wainwright, and B. Yu. Minimax-optimal rates for sparse additive models over kernel classes via convex programming. Technical Report, UC Berkeley, Department of Statistics, August 2010.
[11] P. Ravikumar, J. Lafferty, H. Liu, and L. Wasserman. Sparse additive models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 71(5):1009-1030, 2009.
[12] V. Koltchinskii and M. Yuan. Sparsity in multiple kernel learning. The Annals of Statistics, 38(6):3660-3695, 2010.
[13] A. Cohen, I. Daubechies, R. A. DeVore, G. Kerkyacharian, and D. Picard. Capturing ridge functions in high dimensions from point queries. Constr. Approx., pages 1-19, 2011.
[14] M. Fornasier, K. Schnass, and J. Vybíral. Learning functions of few arbitrary linear parameters in high dimensions. Preprint, 2010.
[15] H. Tyagi and V. Cevher. Learning ridge functions with randomized sampling in high dimensions. In ICASSP, 2011.
[16] J. F. Traub, G. W. Wasilkowski, and H. Wozniakowski. Information-Based Complexity. Academic Press, New York, 1988.
[17] R. DeVore and G. G. Lorentz. Constructive Approximation, vol. 303, Grundlehren. Springer Verlag, N.Y., 1993.
[18] E. Novak and H. Woźniakowski. Approximation of infinitely differentiable multivariate functions is intractable. J. Complex., 25:398-404, August 2009.
[19] W. Härdle. Applied Nonparametric Regression, volume 26. Cambridge Univ Press, 1990.
[20] J. H. Friedman and W. Stuetzle. Projection pursuit regression. J. Amer. Statist. Assoc., 76:817-823, 1981.
[21] D. L. Donoho and I. M. Johnstone. Projection based regression and a duality with kernel methods. Ann. Statist., 17:58-106, 1989.
[22] P. J. Huber. Projection pursuit. Ann. Statist., 13:435-475, 1985.
[23] A. Pinkus. Approximation theory of the MLP model in neural networks. Acta Numerica, 8:143-195, 1999.
[24] E. J. Candès. Harmonic analysis of neural networks. Appl. Comput. Harmon. Anal., 6(2):197-218, 1999.
[25] N. Srinivas, A. Krause, S. Kakade, and M. Seeger. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. To appear in the IEEE Trans. on Information Theory, 2012.
[26] H. Tyagi and V. Cevher. Learning non-parametric basis independent models from point queries via low-rank methods. Technical Report, Infoscience EPFL, 2012.
[27] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[28] E. J. Candès and T. Tao. The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theor., 56:2053-2080, May 2010.
[29] E. J. Candès and Y. Plan. Tight oracle bounds for low-rank matrix recovery from a minimal number of random measurements. CoRR, abs/1001.0339, 2010.
[30] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52:471-501, 2010.
[31] B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. The Annals of Statistics, 28(5):1302-1338, 2000.
On Multilabel Classification and Ranking with Partial Feedback
Claudio Gentile
DiSTA, Università dell'Insubria, Italy
[email protected]
Francesco Orabona
TTI Chicago, USA
[email protected]
Abstract
We present a novel multilabel/ranking algorithm working in partial information
settings. The algorithm is based on 2nd-order descent methods, and relies on
upper-confidence bounds to trade off exploration and exploitation. We analyze
this algorithm in a partial adversarial setting, where covariates can be adversarial,
but multilabel probabilities are ruled by (generalized) linear models. We show
O(T^{1/2} log T) regret bounds, which improve in several ways on the existing results. We test the effectiveness of our upper-confidence scheme by contrasting
against full-information baselines on real-world multilabel datasets, often obtaining comparable performance.
1 Introduction
Consider a book recommendation system. Given a customer's profile, the system recommends a few
possible books to the user by means of, e.g., a limited number of banners placed at different positions
on a webpage. The system's goal is to select books that the user likes and possibly purchases.
Typical feedback in such systems is the actual action of the user or, in particular, what books he has
bought/preferred, if any. The system cannot observe what would have been the user's actions had
other books got recommended, or had the same book ads been placed in a different order within the
webpage. Such problems are collectively referred to as learning with partial feedback. As opposed
to the full information case, where the system (the learning algorithm) knows the outcome of each
possible response (e.g., the user's action for each and every possible book recommendation placed
in the largest banner ad), in the partial feedback setting, the system only observes the response to
very limited options and, specifically, the option that was actually recommended. In this and many
other examples of this sort, it is reasonable to assume that recommended options are not given the
same treatment by the system, e.g., large banners which are displayed on top of the page should
somehow be more committing as a recommendation than smaller ones placed elsewhere. Moreover,
it is often plausible to interpret the user feedback as a preference (if any) restricted to the displayed
alternatives.
We consider instantiations of this problem in the multilabel and learning-to-rank settings. Learning
proceeds in rounds, in each time step t the algorithm receives an instance xt and outputs an ordered
subset Y?t of labels from a finite set of possible labels [K] = {1, 2, . . . , K}. Restrictions might apply
to the size of Y?t (due, e.g., to the number of available slots in the webpage). The set Y?t corresponds
to the aforementioned recommendations, and is intended to approximate the true set of preferences
associated with xt . However, the latter set is never observed. In its stead, the algorithm receives
Yt ? Y?t , where Yt ? [K] is a noisy version of the true set of user preferences on xt . When we are
restricted to |Y?t | = 1 for all t, this becomes a multiclass classification problem with bandit feedback
? see below.
Related work. This paper lies at the intersection between online learning with partial feedback and
multilabel classification/ranking. Both fields include a substantial amount of work, so we can hardly
do it justice here. We outline some of the main contributions in the two fields, with an emphasis on
those we believe are the most related to this paper.
1
A well-known and standard tool for facing the problem of partial feedback in online learning is to
trade off exploration and exploitation through upper confidence bounds [16]. In the so-called bandit
setting with contextual information (sometimes called bandits with side information or bandits with
covariates, e.g., [3, 4, 5, 7, 15], and references therein) an online algorithm receives at each time
step a context (typically, in the form of a feature vector x) and is compelled to select an action
(e.g., a label), whose goodness is quantified by a predefined loss function. Full information about
the loss function is not available. The specifics of the interaction model determines which pieces
of loss will be observed by the algorithm, e.g., the actual value of the loss on the chosen action,
some information on more profitable directions on the action space, noisy versions thereof, etc. The
overall goal is to compete against classes of functions that map contexts to (expected) losses in a
regret sense, that is, to obtain sublinear cumulative regret bounds. For instance, [1, 3, 5, 7] work in
a finite action space where the mappings context-to-loss for each action are linear (or generalized
linear, as in [7]) functions of the features. They all obtain T^{1/2}-like regret bounds, where T is
the time horizon. This is extended in [15], where the loss function is modeled as a sample from
a Gaussian process over the joint context-action space. We are using a similar (generalized) linear
modeling here. Linear multiclass classification problems with bandit feedback are considered in,
e.g., [4, 11, 13], where either T^{2/3} or T^{1/2} or even logarithmic regret bounds are proven, depending
on the noise model and the underlying loss functions.
None of the above papers considers structured action spaces, where the learner is allowed to select sets of actions, which is more suitable for multilabel and ranking problems. Along these lines are
the papers [10, 14, 19, 20, 22]. The general problem of online minimization of a submodular loss
function under both full and bandit information without covariates is considered in [10], achieving a
regret of T^{2/3} in the bandit case. In [22] the problem of online learning of assignments is considered,
where an algorithm is requested to assign positions (e.g., rankings) to sets of items (e.g., ads) with
given constraints on the set of items that can be placed in each position. Their problem shares
similar motivations as ours but, again, the bandit version of their algorithm does not explicitly take
side information into account, and leads to a T^{2/3} regret bound. In [14] the aim is to learn a suitable
ordering of the available actions. Among other things, the authors prove a T^{1/2} regret bound in
the bandit setting with a multiplicative weight updating scheme. Yet, no contextual information is
incorporated. In [20] the ability of selecting sets of actions is motivated by a problem of diverse
retrieval in large document collections which are meant to live in a general metric space. The
generality of this approach does not lead to strong regret guarantees for specific (e.g., smooth) loss
functions. [19] uses a simple linear model for the hidden utility function of users interacting with a
web system and providing partial feedback in any form that allows the system to make significant
progress in learning this function. A regret bound of T^{1/2} is again provided that depends on the
degree of informativeness of the feedback. It is experimentally argued that this feedback is typically
made available by a user that clicks on relevant URLs out of a list presented by a search engine.
Despite the neatness of the argument, no formal effort is put into relating this information to the
context information at hand or to the way data are generated. Finally, the recent paper [2] investigates
classes of graphical models for contextual bandit settings that afford richer interaction between
contexts and actions, leading again to a T^{2/3} regret bound.
The literature on multilabel learning and learning to rank is overwhelming. The wide attention this
literature attracts is often motivated by its web-search-engine or recommender-system applications,
and many of the papers are experimental in nature. Relevant references include [6, 9, 23], along
with references therein. Moreover, when dealing with multilabel, the typical assumption is full
supervision, an important concern being modeling correlations among classes. In contrast to that,
the specific setting we are considering here need not face such a modeling. Other related references
are [8, 12], where learning is by pairs of examples. Yet, these approaches need i.i.d. assumptions on
the data, and typically deliver batch learning procedures. To summarize, whereas we are technically
close to [1, 3, 4, 5, 7, 15], from a motivational standpoint we are perhaps closest to [14, 19, 22].
Our results. We investigate the multilabel and learning-to-rank problems in a partial feedback
scenario with contextual information, where we assume a probabilistic linear model over the labels,
although the contexts can be chosen by an adaptive adversary. We consider two families of loss functions, one is a cost-sensitive multilabel loss that generalizes the standard Hamming loss in several
respects, the other is a kind of (unnormalized) ranking loss. In both cases, the learning algorithm is
maintaining a (generalized) linear predictor for the probability that a given label occurs, the ranking
being produced by upper confidence-corrected estimated probabilities. In such settings, we prove
2
T^{1/2} log T cumulative regret bounds; these bounds are optimal, up to log factors, when the label
probabilities are fully linear in the contexts. A distinguishing feature of our user feedback model is
that, unlike previous papers (e.g., [1, 10, 15, 22]), we are not assuming the algorithm is observing a
noisy version of the risk function on the currently selected action. In fact, when a generalized linear
model is adopted, the mapping context-to-risk turns out to be nonconvex in the parameter space.
Furthermore, when operating on structured action spaces this more traditional form of bandit model
does not seem appropriate to capture the typical user preference feedback. Our approach is based on
having the loss decouple from the label generating model, the user feedback being a noisy version of
the gradient of a surrogate convex loss associated with the model itself. As a consequence, the algorithm is not directly dealing with the original loss when exploring. Though the emphasis is
on theoretical results, we also validate our algorithms on two real-world multilabel datasets w.r.t. a
number of loss functions, showing good comparative performance against simple multilabel/ranking
baselines that operate with full information.
2 Model and preliminaries

We consider a setting where the algorithm receives at time t the side information vector x_t ∈ R^d, is allowed to output a (possibly ordered) subset Ŷ_t ⊆ [K] of the set of possible labels; then the subset of labels Y_t ⊆ [K] associated with x_t is generated, and the algorithm gets as feedback Y_t ∩ Ŷ_t. The loss suffered by the algorithm may take into account several things: the distance between Y_t and Ŷ_t (both viewed as sets), as well as the cost for playing Ŷ_t. The cost c(Ŷ_t) associated with Ŷ_t might be given by the sum of the costs suffered on each class i ∈ Ŷ_t, where we possibly take into account the order in which i occurs within Ŷ_t (viewed as an ordered list of labels). Specifically, given a constant a ∈ [0, 1] and costs c = {c(i, s), i = 1, ..., s, s ∈ [K]} such that 1 ≥ c(1, s) ≥ c(2, s) ≥ ... ≥ c(s, s) ≥ 0 for all s ∈ [K], we consider the loss function

    ℓ_{a,c}(Y_t, Ŷ_t) = a |Y_t \ Ŷ_t| + (1 − a) Σ_{i ∈ Ŷ_t \ Y_t} c(j_i, |Ŷ_t|),
where j_i is the position of class i in Ŷ_t, and c(j_i, ·) depends on Ŷ_t only through its size |Ŷ_t|. In the above, the first term accounts for the false negative mistakes; hence there is no specific ordering of labels therein. The second term collects the loss contribution provided by all false positive classes, taking into account through the costs c(j_i, |Ŷ_t|) the order in which labels occur in Ŷ_t. The constant a weights the relative importance of false positive vs. false negative mistakes. As a specific example, suppose that K = 10, the costs c(i, s) are given by c(i, s) = (s − i + 1)/s, i = 1, ..., s, the algorithm plays Ŷ_t = (4, 3, 6), but Y_t is {1, 3, 8}. In this case, |Y_t \ Ŷ_t| = 2 and Σ_{i ∈ Ŷ_t \ Y_t} c(j_i, |Ŷ_t|) = 3/3 + 1/3; i.e., the cost for mistakenly playing class 4 in the top slot of Ŷ_t is more damaging than mistakenly playing class 6 in the third slot. In the special case when all costs are unitary, there is no longer need to view Ŷ_t as an ordered collection, and the above loss reduces to a standard Hamming-like loss between the sets Y_t and Ŷ_t, i.e., a |Y_t \ Ŷ_t| + (1 − a) |Ŷ_t \ Y_t|. Notice that the partial feedback Y_t ∩ Ŷ_t allows the algorithm to know which of the chosen classes in Ŷ_t are good or bad (and to what extent, because of the selected ordering within Ŷ_t). Yet, the algorithm does not observe the value of ℓ_{a,c}(Y_t, Ŷ_t) because Y_t \ Ŷ_t remains hidden.
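To fix ideas, here is a direct transcription of ℓ_{a,c} (our sketch; the helper names are ours, and since the text's example does not fix a, we use a = 0.5, the value adopted in the experiments of Section 5):

```python
def ell_ac(Y_true, Y_hat, a, c):
    """ell_{a,c}(Y_t, Yhat_t): a * |Y_t \\ Yhat_t| plus (1 - a) times the sum
    of c(j_i, |Yhat_t|) over false positives, j_i being the slot of class i."""
    s = len(Y_hat)
    fn = len(set(Y_true) - set(Y_hat))                       # false negatives
    fp = sum(c(j + 1, s) for j, i in enumerate(Y_hat) if i not in Y_true)
    return a * fn + (1 - a) * fp

c = lambda i, s: (s - i + 1) / s      # the decreasing costs of the example
# Y_hat = (4, 3, 6) against Y_t = {1, 3, 8}: 2 misses, slot costs 3/3 + 1/3.
print(ell_ac({1, 3, 8}, (4, 3, 6), a=0.5, c=c))   # 0.5*2 + 0.5*(4/3)
```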
Working with the above loss function makes the algorithm's output Ŷ_t become a ranked list of classes, where ranking is restricted to the deemed relevant classes only. In our setting, only a relevance feedback among the selected classes is observed (the set Y_t ∩ Ŷ_t), but no supervised ranking information (e.g., in the form of pairwise preferences) is provided to the algorithm within this set. Alternatively, we can think of a ranking framework where restrictions on the size of Ŷ_t are set by an exogenous (and possibly time-varying) parameter of the problem, and the algorithm is required to provide a ranking complying with these restrictions. More on the connection to the ranking setting with partial feedback is in Section 4.
The problem arises as to which noise model we should adopt so as to encompass significant real-world settings while at the same time affording efficient implementation of the resulting algorithms. For any subset Y_t ⊆ [K], we let (y_{1,t}, ..., y_{K,t}) ∈ {0, 1}^K be the corresponding indicator vector. Then it is easy to see that

    ℓ_{a,c}(Y_t, Ŷ_t) = a Σ_{i=1}^{K} y_{i,t} + (1 − a) Σ_{i∈Ŷ_t} [ c(j_i, |Ŷ_t|) − ( a/(1−a) + c(j_i, |Ŷ_t|) ) y_{i,t} ].

Moreover, because the first sum does not depend on Ŷ_t, for the sake of optimizing over Ŷ_t we can equivalently define

    ℓ_{a,c}(Y_t, Ŷ_t) = (1 − a) Σ_{i∈Ŷ_t} [ c(j_i, |Ŷ_t|) − ( a/(1−a) + c(j_i, |Ŷ_t|) ) y_{i,t} ].    (1)
Let P_t(·) be a shorthand for the conditional probability P_t(· | x_t), where the side information vector x_t can in principle be generated by an adaptive adversary as a function of the past. Then P_t(y_{1,t}, ..., y_{K,t}) = P(y_{1,t}, ..., y_{K,t} | x_t), where the marginals P_t(y_{i,t} = 1) satisfy¹

    P_t(y_{i,t} = 1) = g(−u_iᵀx_t) / ( g(u_iᵀx_t) + g(−u_iᵀx_t) ),   i = 1, ..., K,    (2)

for some K vectors u_1, ..., u_K ∈ R^d and some (known) function g : D ⊆ R → R⁺. The model is well defined if u_iᵀx ∈ D for all i and all x ∈ R^d chosen by the adversary. We assume for the sake of simplicity that ‖x_t‖ = 1 for all t. Notice that at this point the variables y_{i,t} need not be conditionally independent. We are only defining a family of allowed joint distributions P_t(y_{1,t}, ..., y_{K,t}) through the properties of their marginals P_t(y_{i,t}).
The function g above will be instantiated to the negative derivative of a suitable convex and nonincreasing loss function L which our algorithm will be based upon. For instance, if L is the square loss L(Δ) = (1 − Δ)²/2, then g(Δ) = 1 − Δ, resulting in P_t(y_{i,t} = 1) = (1 + u_iᵀx_t)/2, under the assumption D = [−1, 1]. If L is the logistic loss L(Δ) = ln(1 + e^{−Δ}), then g(Δ) = (e^Δ + 1)^{−1}, and P_t(y_{i,t} = 1) = e^{u_iᵀx_t}/(e^{u_iᵀx_t} + 1), with domain D = R.
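The following sketch (ours; the helper names are assumptions) makes the two instantiations concrete and shows how a noisy label can be drawn from (2):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_of_delta(delta, g):
    # P_t(y = 1) = g(-Delta) / (g(Delta) + g(-Delta)), as in (2)
    return g(-delta) / (g(delta) + g(-delta))

g_square = lambda t: 1.0 - t                    # L = (1 - t)^2 / 2, D = [-1, 1]
g_logistic = lambda t: 1.0 / (np.exp(t) + 1.0)  # L = ln(1 + e^{-t}), D = R

delta = 0.4
print(p_of_delta(delta, g_square), (1 + delta) / 2)              # both 0.7
print(p_of_delta(delta, g_logistic), 1 / (1 + np.exp(-delta)))   # sigmoid
y = rng.binomial(1, p_of_delta(delta, g_logistic))               # noisy label
```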
Set for brevity Δ_{i,t} = u_iᵀx_t. Taking into account (1), this model allows us to write the (conditional) expected loss of the algorithm playing Ŷ_t as

    E_t[ℓ_{a,c}(Y_t, Ŷ_t)] = (1 − a) Σ_{i∈Ŷ_t} [ c(j_i, |Ŷ_t|) − ( a/(1−a) + c(j_i, |Ŷ_t|) ) p_{i,t} ],    (3)

where p_{i,t} = g(−Δ_{i,t}) / ( g(Δ_{i,t}) + g(−Δ_{i,t}) ), and the expectation E_t above is w.r.t. the generation of labels Y_t, conditioned on both x_t and all previous x and Y. A key aspect of this formalization is that the Bayes optimal ordered subset Y*_t = argmin_{Y=(j_1,j_2,...,j_{|Y|})⊆[K]} E_t[ℓ_{a,c}(Y_t, Y)] can be computed efficiently when knowing Δ_{1,t}, ..., Δ_{K,t}. This is handled by the next lemma. In words, this lemma says that, in order to minimize (3), it suffices to try out all possible sizes s = 0, 1, ..., K for Y*_t and, for each such value, determine the sequence Y*_{s,t} that minimizes (3) over all sequences of size s. In turn, Y*_{s,t} can be computed just by sorting classes i ∈ [K] in decreasing order of p_{i,t}, the sequence being given by the first s classes in this sorted list.²
Lemma 1. With the notation introduced so far, let p_{i_1,t} ≥ p_{i_2,t} ≥ ... ≥ p_{i_K,t} be the sequence of p_{i,t} sorted in nonincreasing order. Then we have that Y*_t = argmin_{s=0,1,...,K} E_t[ℓ_{a,c}(Y_t, Y*_{s,t})], where Y*_{s,t} = (i_1, i_2, ..., i_s) and Y*_{0,t} = ∅.
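A direct implementation of this sort-and-scan procedure is below (our sketch; the quadratic inner scan can be replaced by running prefix sums to match the O(K log K) time mentioned in Section 3):

```python
def bayes_optimal(p, a, c):
    """Lemma 1: sort classes by p_i and return the loss-minimizing prefix,
    i.e., the Bayes optimal ordered subset Y*_t under (3)."""
    order = sorted(range(len(p)), key=lambda i: -p[i])
    best_s, best_val = 0, 0.0        # s = 0 (the empty set) has value 0
    for s in range(1, len(p) + 1):
        # evaluate (3), up to the positive factor (1 - a), for prefix size s
        val = sum(c(j + 1, s) - (a / (1 - a) + c(j + 1, s)) * p[order[j]]
                  for j in range(s))
        if val < best_val:
            best_s, best_val = s, val
    return tuple(order[:best_s])

c = lambda i, s: (s - i + 1) / s
print(bayes_optimal([0.9, 0.2, 0.7, 0.4], a=0.5, c=c))   # -> (0, 2, 3)
```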
Notice the way the costs c(i, s) influence the Bayes optimal computation. We see from (3) that placing class i within Ŷ_t in position j_i is beneficial (i.e., it leads to a reduction of loss) if and only if p_{i,t} > c(j_i, |Ŷ_t|) / ( a/(1−a) + c(j_i, |Ŷ_t|) ). Hence, the higher the slot j_i in Ŷ_t, the larger p_{i,t} should be in order for this inclusion to be convenient.³ It is Y*_t that we interpret as the true set of user preferences on x_t.

We would like to compete against the above Y*_t in a cumulative regret sense, i.e., we would like to bound R_T = Σ_{t=1}^{T} ( E_t[ℓ_{a,c}(Y_t, Ŷ_t)] − E_t[ℓ_{a,c}(Y_t, Y*_t)] ) with high probability. We use a similar but largely more general analysis than that of [4] to devise an online second-order descent algorithm whose updating rule makes the comparison vector U = (u_1, ..., u_K) ∈ R^{dK} defined through (2) Bayes optimal w.r.t. a surrogate convex loss L(·) such that g(Δ) = −L′(Δ). Observe that the expected loss function (3) is, generally speaking, nonconvex in the margins Δ_{i,t} (consider, for instance, the logistic case g(Δ) = 1/(e^Δ + 1)). Thus, we cannot directly minimize this expected loss.
¹ The reader familiar with generalized linear models will recognize the derivative of the function p(Δ) = g(−Δ)/(g(Δ) + g(−Δ)) as the (inverse) link function of the associated canonical exponential family of distributions [17].
² Due to space limitations, all proofs are given in the supplementary material.
³ Notice that this depends on the actual size of Ŷ_t, so we cannot decompose this problem into K independent problems. The decomposition does occur if the costs c(i, s) are constants, independent of i and s, and the criterion for inclusion becomes p_{i,t} ≥ τ, for some constant threshold τ.
Parameters: loss parameters a ∈ [0, 1], cost values c(i, s), interval D = [−R, R], function g : D → R, confidence level δ ∈ [0, 1].
Initialization: A_{i,0} = I ∈ R^{d×d}, i = 1, ..., K; w_{i,1} = 0 ∈ R^d, i = 1, ..., K.
For t = 1, 2, ..., T:

1. Get instance x_t ∈ R^d with ‖x_t‖ = 1;

2. For i ∈ [K], set Δ̂′_{i,t} = x_tᵀ w′_{i,t}, where

       w′_{i,t} = w_{i,t}   if w_{i,t}ᵀ x_t ∈ [−R, R],
       w′_{i,t} = w_{i,t} − ( (w_{i,t}ᵀ x_t − R sign(w_{i,t}ᵀ x_t)) / (x_tᵀ A_{i,t−1}^{−1} x_t) ) A_{i,t−1}^{−1} x_t   otherwise;

3. Output

       Ŷ_t = argmin_{Y=(j_1, j_2, ..., j_{|Y|}) ⊆ [K]} Σ_{i∈Y} ( c(j_i, |Y|) − ( a/(1−a) + c(j_i, |Y|) ) p̂_{i,t} ),

   where p̂_{i,t} = g(−[Δ̂′_{i,t} + ε_{i,t}]_D) / ( g([Δ̂′_{i,t} + ε_{i,t}]_D) + g(−[Δ̂′_{i,t} + ε_{i,t}]_D) ) and

       ε²_{i,t} = x_tᵀ A_{i,t−1}^{−1} x_t ( U² + (12 c′_L/(c″_L)²) d ln(1 + (t−1)/d) + ( c′_L/c″_L + 3 L(−R)/c″_L ) ln( K(t+4)/δ ) );

4. Get feedback Y_t ∩ Ŷ_t;

5. For i ∈ [K], update A_{i,t} = A_{i,t−1} + |s_{i,t}| x_t x_tᵀ and w_{i,t+1} = w′_{i,t} − (1/c″_L) A_{i,t}^{−1} ∇_{i,t}, where

       s_{i,t} = 1 if i ∈ Y_t ∩ Ŷ_t;  s_{i,t} = −1 if i ∈ Ŷ_t \ Y_t = Ŷ_t \ (Y_t ∩ Ŷ_t);  s_{i,t} = 0 otherwise;

   and ∇_{i,t} = ∇_w L(s_{i,t} wᵀ x_t) |_{w=w′_{i,t}} = −g(s_{i,t} Δ̂′_{i,t}) s_{i,t} x_t.

Figure 1: The partial feedback algorithm in the (ordered) multiple label setting.
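To make step 5 concrete, here is a single-class round of the update in plain NumPy (our sketch, with our own helper names: w plays the role of the projected w′_{i,t}, g stands for −L′, and c_dd for c″_L; following footnote 4, we maintain A⁻¹ directly via the Sherman-Morrison identity):

```python
import numpy as np

def step5_update(A_inv, w, x, s, g, c_dd):
    """One step-5 update of Figure 1 for one class; s is the feedback sign."""
    if s == 0:                        # class not played: no change
        return A_inv, w
    Ax = A_inv @ x                    # Sherman-Morrison for A <- A + x x^T
    A_inv = A_inv - np.outer(Ax, Ax) / (1.0 + x @ Ax)
    grad = -g(s * (w @ x)) * s * x    # nabla_{i,t} at the projected weights
    w = w - (1.0 / c_dd) * (A_inv @ grad)
    return A_inv, w

d = 5
rng = np.random.default_rng(0)
x = rng.normal(size=d); x /= np.linalg.norm(x)
g = lambda t: 1.0 - t                 # square loss: g = -L', c''_L = 1
A_inv, w = step5_update(np.eye(d), np.zeros(d), x, s=+1, g=g, c_dd=1.0)
```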
3 Algorithm and regret bounds

In Figure 1 is our bandit algorithm for (ordered) multiple labels. The algorithm is based on replacing the unknown model vectors u_1, ..., u_K with prototype vectors w′_{1,t}, ..., w′_{K,t}, w′_{i,t} being the time-t approximation to u_i, satisfying constraints similar to those we set for the u_i vectors. For the sake of brevity, we let Δ̂′_{i,t} = x_tᵀ w′_{i,t} and Δ_{i,t} = u_iᵀ x_t, i ∈ [K]. The algorithm uses Δ̂′_{i,t} as a proxy for the underlying Δ_{i,t} according to the (upper confidence) approximation scheme Δ_{i,t} ≈ [Δ̂′_{i,t} + ε_{i,t}]_D, where ε_{i,t} ≥ 0 is a suitable upper-confidence level for class i at time t, and [·]_D denotes the clipping-to-D operation, i.e., [x]_D = max(min(x, R), −R). The algorithm's prediction at time t has the same form as the computation of the Bayes optimal sequence Y*_t, where we replace the true (and unknown) p_{i,t} = g(−Δ_{i,t})/(g(Δ_{i,t}) + g(−Δ_{i,t})) with the corresponding upper confidence proxy p̂_{i,t} = g(−[Δ̂′_{i,t} + ε_{i,t}]_D)/(g([Δ̂′_{i,t} + ε_{i,t}]_D) + g(−[Δ̂′_{i,t} + ε_{i,t}]_D)). Computing Ŷ_t can be done by mimicking the computation of the Bayes optimal Y*_t (just replace p_{i,t} by p̂_{i,t}), i.e., order of K log K running time per prediction. Thus the algorithm produces a ranked list of relevant classes based on upper-confidence-corrected scores p̂_{i,t}. Class i is deemed relevant and ranked high among the relevant ones when either Δ̂′_{i,t} is a good approximation to Δ_{i,t} and p_{i,t} is large, or when the algorithm is not very confident of its own approximation about i (that is, the upper confidence level ε_{i,t} is large).

The algorithm receives in input the loss parameters a and c(i, s), the model function g(·) and the associated margin domain D = [−R, R], and maintains both K positive definite matrices A_{i,t} of dimension d (initially set to the d × d identity matrix) and K weight vectors w_{i,t} ∈ R^d (initially set to the zero vector). At each time step t, upon receiving the d-dimensional instance vector x_t, the algorithm uses the weight vectors w_{i,t} to compute the prediction vectors w′_{i,t}. These vectors can easily be seen as the result of projecting w_{i,t} onto the space of w where |wᵀ x_t| ≤ R w.r.t. the distance function d_{i,t−1}, i.e., w′_{i,t} = argmin_{w∈R^d : wᵀx_t∈D} d_{i,t−1}(w, w_{i,t}), i ∈ [K], where d_{i,t}(u, w) = (u − w)ᵀ A_{i,t} (u − w). The vectors w′_{i,t} are then used to produce the prediction values Δ̂′_{i,t} involved in the upper-confidence calculation of Ŷ_t ⊆ [K]. Next, the feedback Y_t ∩ Ŷ_t is observed, and the algorithm in Figure 1 promotes all classes i ∈ Y_t ∩ Ŷ_t (sign s_{i,t} = 1), demotes all classes i ∈ Ŷ_t \ Y_t (sign s_{i,t} = −1), and leaves all remaining classes i ∉ Ŷ_t unchanged (sign s_{i,t} = 0). The update w′_{i,t} → w_{i,t+1} is based on the gradients ∇_{i,t} of a loss function L(·) satisfying L′(Δ) = −g(Δ). On the other hand, the update A_{i,t−1} → A_{i,t} uses the rank-one matrix⁴ x_t x_tᵀ. In both the update of w′_{i,t} and the one involving A_{i,t−1}, the reader should observe the role played by the signs s_{i,t}. Finally, the constants c′_L and c″_L occurring in the expression for ε²_{i,t} are related to smoothness properties of L(·); see the next theorem.
smoothness properties of L(?) ? see next theorem.
Theorem 2. Let L : D = [?R, R] ? R ? R+ be a C 2 (D) convex and nonincreasing function
of its argument, (u1 , . . . , uK ) ? RdK be defined in (2) with g(?) = ?L0 (?) for all ? ? D, and
such that kui k ? U for all i ? [K]. Assume there are positive constants cL , c0L and c00L such that:
0
00
(??)+L00 (?) L0 (??)
i. L (?) L
? ?cL and ii. (L0 (?))2 ? c0L , and iii. L00 (?) ? c00L hold for all
(L0 (?)+L0 (??))2
? ? D. Then the cumulative regret RT of the algorithm in Figure 1 satisfies, with probability at
least 1 ? ?,
q
RT = O (1 ? a) cL K T C d ln 1 + Td ,
c0
d c0
KT
where C = O U 2 + (c00 L)2 ln 1 + Td + (c00L)2 + L(?R)
.
ln
00
c
?
L
L
L
It is easy to see that when L(Δ) is the square loss L(Δ) = (1 − Δ)²/2 and D = [−1, 1], we have c_L = 1/2, c′_L = 4 and c″_L = 1; when L(Δ) is the logistic loss L(Δ) = ln(1 + e^{−Δ}) and D = [−R, R], we have c_L = 1/4, c′_L ≤ 1 and c″_L = 1/(2(1 + cosh(R))), where cosh(x) = (e^x + e^{−x})/2.
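For concreteness, the square-loss constants follow by direct computation (a routine verification of conditions i-iii, added here for the reader's convenience): with L(Δ) = (1 − Δ)²/2 we have L′(Δ) = Δ − 1 and L″(Δ) ≡ 1, hence

    ( L′(Δ) L″(−Δ) + L″(Δ) L′(−Δ) ) / ( L′(Δ) + L′(−Δ) )² = ( (Δ − 1) + (−Δ − 1) ) / (−2)² = −1/2,

so condition i holds with c_L = 1/2; on D = [−1, 1] we get (L′(Δ))² = (1 − Δ)² ≤ 4, so c′_L = 4; and L″ ≡ 1 gives c″_L = 1.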
Remark 1. A drawback of Theorem 2 is that, in order to properly set the upper confidence levels ε_{i,t}, we assume prior knowledge of the norm upper bound U. Because this information is often unavailable, we present here a simple modification of the algorithm that copes with this limitation. We change the definition of ε²_{i,t} in Figure 1 to

    ε²_{i,t} = max{ x_tᵀ A_{i,t−1}^{−1} x_t ( (12 c′_L/(c″_L)²) d ln(1 + (t−1)/d) + ( c′_L/c″_L + 3 L(−R)/c″_L ) ln( K(t+4)/δ ) ), 4R² }.

This immediately leads to the following result.
Theorem 3. With the same assumptions and notation as in Theorem 2, if we replace ε²_{i,t} as explained above, we have that, with probability at least 1 − δ, R_T satisfies

    R_T = O( (1 − a) c_L K √( T C d ln(1 + T/d) ) + (1 − a) c_L K R d ( exp( (c″_L)² U² / (c′_L d) ) − 1 ) ).
On ranking with partial feedback
As Lemma 1 points out, when the cost values c(i, s) in `a,c are stricly decreasing then the Bayes
optimal ordered sequence Yt? on xt can be obtained by sorting classes in decreasing values of pi,t ,
and then decide on a cutoff point5 induced by the loss parameters, so as to tell relevant classes
g(??)
is increasing in ?, this ordering
apart from irrelevant ones. In turn, because p(?) = g(?)+g(??)
corresponds to sorting classes in decreasing values of ?i,t . Now, if parameter a in `a,c is very close6
to 1, then |Yt? | = K, and the algorithm itself will produce ordered subsets Y?t such that |Y?t | = K.
Moreover, it does so by receiving full feedback on the relevant classes at time t (since Yt ? Y?t =
Yt ). As is customary (e.g., [6]), one can view any multilabel assignment Y = (y1 , . . . , yK ) ?
{0, 1}K as a ranking among the K classes in the most natural way: i preceeds j if and only if
yi > yj . The (unnormalized) ranking loss function `rank (Y, fb) between the multilabel Y and a
ranking function fb : Rd ? RK , representing degrees of class relevance sorted in a decreasing
order fbj1 (xt ) ? fbj2 (xt ) ? . . . ? fbjK (xt ) ? 0, counts
the number of class pairs that disagree
P
b
b
in the two rankings: `rank (Y, f ) =
{fi (xt ) < fbj (xt )} + 1 {fbi (xt ) = fbj (xt )} ,
i,j?[K] : yi >yj
2
2
Notice that A?1
i,t can be computed incrementally in O(d ) time per update. [4] and references therein also
use diagonal approximations thereof, reporting good empirical performance with just O(d) time per update.
5
This is called the zero point in [9].
6
If a = 1, the algorithm only cares about false negative mistakes, the best strategy being always predicting
Y?t = [K]. Unsurprisingly, this yields zero regret in both Theorems 2 and 3.
4
6
where 1{·} is the indicator function of the predicate at argument. As pointed out in [6], the ranking function f̂(x_t) = (p_{1,t}, ..., p_{K,t}) is also Bayes optimal w.r.t. ℓ_rank(Y, f̂), no matter whether the class labels y_i are conditionally independent or not. Hence we can use this algorithm for tackling ranking problems derived from multilabel ones, when the measure of choice is ℓ_rank and the feedback is full.

In fact, a partial information version of the above can easily be obtained. Suppose that at each time t, the environment discloses both x_t and a maximal size S_t for the ordered subset Ŷ_t = (j_1, j_2, ..., j_{|Ŷ_t|}) (both x_t and S_t can be chosen adaptively by an adversary). Here S_t might be the number of available slots in a webpage or the number of URLs returned by a search engine in response to query x_t. Then it is plausible to compete in a regret sense against the best time-t offline ranking of the form f(x_t) = (f_1(x_t), f_2(x_t), ..., f_h(x_t), 0, ..., 0), with h ≤ S_t. Further, the ranking loss can reasonably be restricted to count the number of class pairs disagreeing within Ŷ_t, plus a quantity related to the number of false negative mistakes. E.g., if f̂_{j_1}(x_t) ≥ f̂_{j_2}(x_t) ≥ ... ≥ f̂_{j_{|Ŷ_t|}}(x_t), we can set (the factor S_t below serves to balance the contribution of the two main terms):

    ℓ_{rank,t}(Y, f̂) = Σ_{i,j∈Ŷ_t : y_i > y_j} ( 1{f̂_i(x_t) < f̂_j(x_t)} + (1/2) 1{f̂_i(x_t) = f̂_j(x_t)} ) + S_t |Y_t \ Ŷ_t| .

It is not hard to see that if classes are conditionally independent, i.e., P_t(y_{1,t}, ..., y_{K,t}) = Π_{i∈[K]} P_t(y_{i,t}), then the Bayes optimal ranking for ℓ_{rank,t} is given by f*(x_t; S_t) = (p_{i_1,t}, ..., p_{i_{S_t},t}, 0, ..., 0). If we put on the argmin (Step 3 in Figure 1) the further constraint |Y| ≤ S_t (we are still sorting classes according to decreasing values of p̂_{i,t}), one can prove the following ranking version of Theorem 2.
Theorem 4. With the same assumptions and notation as in Theorem 2, let the classes in [K] be conditionally independent, i.e., P_t(y_{1,t}, ..., y_{K,t}) = Π_{i∈[K]} P_t(y_{i,t}) for all t, and let the cumulative regret R_T w.r.t. ℓ_{rank,t} be defined as

    R_T = Σ_{t=1}^{T} ( E_t[ℓ_{rank,t}(Y_t, (p̂_{j_1,t}, ..., p̂_{j_{S_t},t}, 0, ..., 0))] − E_t[ℓ_{rank,t}(Y_t, (p_{i_1,t}, ..., p_{i_{S_t},t}, 0, ..., 0))] ),

where p̂_{j_1,t} ≥ ... ≥ p̂_{j_{S_t},t} ≥ 0 and p_{i_1,t} ≥ ... ≥ p_{i_{S_t},t} ≥ 0. Then, with probability at least 1 − δ, we have R_T = O( c_L √( S K T C d ln(1 + T/d) ) ), where S = max_{t=1,...,T} S_t.

The proof (see the appendix) is very similar to that of Theorem 2. This suggests that, to some extent, we are decoupling the label generating model from the loss function ℓ under consideration. Notice that the linear dependence on the total number of classes K (which is often much larger than S in a multilabel/ranking problem) is replaced by √(SK). One could get similar benefits out of Theorem 2. Finally, one could also combine Theorem 4 with the argument contained in Remark 1.
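For completeness, a literal implementation of ℓ_{rank,t} (our sketch, with our own helper names and with dictionaries mapping classes to relevance and scores):

```python
def ell_rank_t(y, fhat, Y_hat, S_t):
    """Pairwise disagreements within Y_hat (ties weighted 1/2), plus S_t
    times the number of relevant classes missing from Y_hat."""
    pairs = 0.0
    for i in Y_hat:
        for j in Y_hat:
            if y[i] > y[j]:                       # i should precede j
                if fhat[i] < fhat[j]:
                    pairs += 1.0
                elif fhat[i] == fhat[j]:
                    pairs += 0.5
    fn = sum(1 for i in y if y[i] == 1 and i not in Y_hat)
    return pairs + S_t * fn

y = {1: 1, 2: 0, 3: 1, 4: 0, 5: 1}           # true relevance of 5 classes
fhat = {1: 0.9, 2: 0.8, 3: 0.1, 4: 0.4}      # scores on the played set
print(ell_rank_t(y, fhat, Y_hat=(1, 2, 3, 4), S_t=4))   # 2 swaps + 4*1 = 6
```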
5 Experiments and conclusions

The experiments we report here are meant to validate the exploration-exploitation tradeoff implemented by our algorithm under different conditions (restricted vs. nonrestricted number of classes), loss measures (ℓ_{a,c}, ℓ_{rank,t}, and Hamming loss), and model/parameter settings (L = square loss, L = logistic loss, with varying R).
Datasets. We used two multilabel datasets. The first one, called Mediamill, was introduced in a
video annotation challenge [21]. It comprises 30,993 training samples and 12,914 test ones. The
number of features d is 120, and the number of classes K is 101. The second dataset is Sony CSL
Paris [18], made up of 16,452 train samples and 16,519 test samples, each sample being described
by d = 98 features. The number of classes K is 632. In both cases, feature vectors have been
normalized to unit L2 norm.
Parameter setting and loss measures. We used the algorithm in Figure 1 with two different loss functions, the square loss and the logistic loss, and varied the parameter R for the latter. The setting of the cost function c(i, s) depends on the task at hand, and for these preliminary experiments we decided to evaluate only two possible settings. The first one, denoted by "decreasing c", is c(i, s) = (s − i + 1)/s, i = 1, ..., s; the second one, denoted by "constant c", is c(i, s) = 1, for all i and s. In all experiments, the parameter a was set to 0.5, so that ℓ_{a,c} with constant c reduces to half the Hamming loss. In the decreasing-c scenario, we evaluated the performance of the algorithm on the loss ℓ_{a,c} that the algorithm is minimizing, but also its ability to produce meaningful (partial)
Figure 2: Experiments on the Sony CSL Paris dataset. (Left) Running average ℓ_{a,c} with decreasing c vs. number of samples. (Middle) Final average Hamming loss with constant c vs. S. (Right) Final average ranking loss divided by S vs. S. Each panel compares OBR, the square loss, and the logistic loss with R ∈ {1.5, 2.0, 2.5, 3.0}.
Figure 3: Experiments on the Mediamill dataset. (Left) Running average ℓ_{a,c} with decreasing c vs. number of samples. (Middle) Final average Hamming loss with constant c vs. S. (Right) Final average ranking loss divided by S vs. S. Each panel compares OBR, the square loss, and the logistic loss with R ∈ {1.5, 2.0, 2.5, 3.0}.
rankings through ℓ_{rank,t}. On the constant-c setting, we evaluated the Hamming loss. As is typical of multilabel problems, the label density, i.e., the average fraction of labels associated with the examples, is quite small. For instance, on Mediamill this is 4.3%. Hence, it is clearly beneficial to impose an upper bound S on |Ŷ_t|. For the constant c and ranking loss experiments we tried out different values of S, and reported the final performance.
Baseline. As baseline, we considered a full information version of Algorithm 1 using the square loss, which receives after each prediction the full array of true labels Y_t for each sample. We call this algorithm OBR (Online Binary Relevance), because it is a natural online adaptation of the binary relevance algorithm, widely used as a baseline in the multilabel literature. Comparing to OBR stresses the effectiveness of the exploration/exploitation rule above and beyond the details of the underlying generalized linear predictor. OBR was used to produce subsets (as in the Hamming loss case) and restricted rankings (as in the case of ℓ_{rank,t}).
Results. Our results are summarized in Figures 2 and 3. The algorithms have been trained by sweeping only once over the training data. Though preliminary in nature, these experiments allow us to draw a few conclusions. Our results for the average ℓ_{a,c}(Y_t, Ŷ_t) with decreasing c are contained in the two left plots. We can see that the performance is improving over time on both datasets, as predicted by Theorem 2. In the middle plots are the final cumulative Hamming losses with constant c divided by the number of training samples, as a function of S. Similarly, on the right are the final average ranking losses ℓ_{rank,t} divided by S. In both cases we see that there is an optimal value of S that balances the exploration and the exploitation of the algorithm. Moreover, the performance of our algorithm is always pretty close to the performance of OBR, even though our algorithm receives only partial feedback. In many experiments the square loss gives better results; an exception is the ranking loss on the Mediamill dataset (Figure 3, right).
Conclusions. We have used generalized linear models to formalize the exploration-exploitation tradeoff in a multilabel/ranking setting with partial feedback, providing T^{1/2}-like regret bounds under semi-adversarial settings. Our analysis decouples the multilabel/ranking loss at hand from the label-generation model. Thanks to the usage of calibrated score values p̂_{i,t}, our algorithm is capable of automatically inferring where to split the ranking between relevant and nonrelevant classes [9], the split being clearly induced by the loss parameters in ℓ_{a,c}. We are planning on using more general label models that explicitly capture label correlations, to be applied to other loss functions (e.g., F-measure, 0/1, average precision, etc.). We are also planning a more thorough experimental comparison, especially to full information multilabel methods that take such correlations into account. Finally, we are currently working on extending our framework to structured output tasks, like (multilabel) hierarchical classification.
References
[1] Y. Abbasi-Yadkori, D. Pál, and C. Szepesvári. Improved algorithms for linear stochastic bandits. In 25th NIPS, 2011.
[2] K. Amin, M. Kearns, and U. Syed. Graphical models for bandit problems. In UAI, 2011.
[3] P. Auer. Using confidence bounds for exploitation-exploration trade-offs. JMLR, 3, 2003.
[4] K. Crammer and C. Gentile. Multiclass classification with bandit feedback using adaptive regularization. In 28th ICML, 2011.
[5] V. Dani, T. Hayes, and S. Kakade. Stochastic linear optimization under bandit feedback. In 21st COLT, 2008.
[6] K. Dembczyński, W. Waegeman, W. Cheng, and E. Hüllermeier. On label dependence and loss minimization in multi-label classification. Machine Learning, to appear.
[7] S. Filippi, O. Cappé, A. Garivier, and C. Szepesvári. Parametric bandits: The generalized linear case. In NIPS, pages 586-594, 2010.
[8] Y. Freund, R. D. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. JMLR, 4:933-969, 2003.
[9] J. Fürnkranz, E. Hüllermeier, E. Loza Mencía, and K. Brinker. Multilabel classification via calibrated label ranking. Machine Learning, 73:133-153, 2008.
[10] E. Hazan and S. Kale. Online submodular minimization. In NIPS 22, 2009.
[11] E. Hazan and S. Kale. Newtron: an efficient bandit algorithm for online multiclass prediction. In NIPS, 2011.
[12] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. In Advances in Large Margin Classifiers. MIT Press, 2000.
[13] S. Kakade, S. Shalev-Shwartz, and A. Tewari. Efficient bandit algorithms for online multiclass prediction. In 25th ICML, 2008.
[14] S. Kale, L. Reyzin, and R. Schapire. Non-stochastic bandit slate problems. In 24th NIPS, 2010.
[15] A. Krause and C. S. Ong. Contextual Gaussian process bandit optimization. In 25th NIPS, 2011.
[16] T. H. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Adv. Appl. Math., 6, 1985.
[17] P. McCullagh and J. A. Nelder. Generalized Linear Models. Chapman and Hall, 1989.
[18] F. Pachet and P. Roy. Improving multilabel analysis of music titles: A large-scale validation of the correction approach. IEEE Trans. on Audio, Speech, and Lang. Proc., 17(2):335-343, February 2009.
[19] P. Shivaswamy and T. Joachims. Online structured prediction via coactive learning. In 29th ICML, 2012, to appear.
[20] A. Slivkins, F. Radlinski, and S. Gollapudi. Learning optimally diverse rankings over large document collections. In 27th ICML, 2010.
[21] C. G. M. Snoek, M. Worring, J. C. van Gemert, J.-M. Geusebroek, and A. W. M. Smeulders. The challenge problem for automated detection of 101 semantic concepts in multimedia. In Proc. of the 14th ACM International Conference on Multimedia, MULTIMEDIA '06, pages 421-430, New York, NY, USA, 2006.
[22] M. Streeter, D. Golovin, and A. Krause. Online learning of assignments. In 23rd NIPS, 2009.
[23] G. Tsoumakas, I. Katakis, and I. Vlahavas. Random k-labelsets for multilabel classification. IEEE Transactions on Knowledge and Data Engineering, 23:1079-1089, 2011.
w0k:1 sort:1 option:3 bayes:8 maintains:1 annotation:1 dembczynski:1 contribution:3 minimize:2 square:12 smeulders:1 largely:1 efficiently:1 yield:1 produced:1 definition:1 against:5 involved:1 thereof:2 associated:7 rdk:2 proof:2 hamming:11 di:3 dataset:4 treatment:1 knowledge:2 graepel:1 formalize:1 actually:1 auer:1 higher:1 supervised:1 response:3 improved:1 done:1 though:2 evaluated:2 generality:1 furthermore:1 just:3 correlation:3 working:3 receives:6 hand:4 web:2 replacing:1 incrementally:1 somehow:1 logistic:29 perhaps:1 believe:1 usa:2 usage:1 normalized:1 true:5 concept:1 hence:4 regularization:1 i2:1 semantic:1 conditionally:4 round:1 unnormalized:2 criterion:1 generalized:10 stress:1 outline:1 consideration:1 novel:1 fi:1 ari:2 argminy:2 ji:16 he:1 relating:1 interpret:2 marginals:2 significant:2 ai:12 smoothness:1 rd:9 inclusion:2 pointed:1 submodular:2 had:2 supervision:1 operating:1 longer:1 etc:2 closest:1 own:1 recent:1 italy:1 optimizing:1 apart:1 irrelevant:1 scenario:2 nonconvex:2 binary:2 yi:13 devise:1 seen:1 gentile:3 care:1 impose:1 determine:1 eui:2 recommended:3 semi:1 ii:1 full:11 encompass:1 multiple:2 reduces:2 smooth:1 calculation:1 retrieval:1 divided:2 lai:1 y:5 promotes:1 prediction:8 involving:1 regression:1 metric:1 expectation:1 sometimes:1 labelsets:1 whereas:1 szepesv:2 krause:2 interval:1 suffered:2 standpoint:1 operate:1 unlike:1 induced:2 thing:2 effectiveness:2 seem:1 bought:1 call:1 unitary:1 iii:1 recommends:1 easy:2 split:2 automated:1 attracts:1 click:1 prototype:1 knowing:1 multiclass:5 tradeoff:2 motivated:2 handled:1 expression:1 utility:1 url:2 effort:1 returned:1 speech:1 speaking:1 afford:1 hardly:1 action:16 remark:2 york:1 generally:1 tewari:1 amount:1 cosh:2 schapire:2 canonical:1 notice:6 sign:5 estimated:1 per:3 diverse:2 write:1 discloses:1 key:1 waegeman:1 threshold:1 achieving:1 cutoff:1 garivier:1 asymptotically:1 fraction:1 sum:2 compete:3 inverse:1 reporting:1 family:3 reasonable:1 reader:2 decide:1 draw:1 pbi:7 pik:1 appendix:1 investigates:1 comparable:1 bound:18 played:1 cheng:1 occur:2 constraint:3 afforded:1 sake:3 qas:1 u1:4 aspect:1 argument:4 min:1 preceeds:1 pi1:4 structured:4 according:2 smaller:1 beneficial:2 y0:1 wi:8 kakade:2 making:1 modification:1 projecting:1 restricted:6 explained:1 ln:11 remains:1 turn:3 count:2 singer:1 know:2 ordinal:1 sony:5 gemert:1 serf:2 adopted:1 available:5 generalizes:1 operation:1 apply:1 observe:4 hierarchical:1 appropriate:1 fbi:3 vlahavas:1 alternative:1 batch:1 yadkori:1 customary:1 original:1 top:2 responding:1 include:2 denotes:1 running:3 graphical:2 remaining:1 maintaining:1 music:1 especially:1 furnkranz:1 february:1 unchanged:1 quantity:1 occurs:2 strategy:1 parametric:1 rt:8 dependence:2 traditional:1 surrogate:2 diagonal:1 obermayer:1 gradient:2 distance:2 link:1 w0:1 extent:2 argmins:1 assuming:1 modeled:1 providing:2 minimizing:1 balance:1 equivalently:1 matrix4:1 negative:5 implementation:1 unknown:2 upper:13 recommender:1 disagree:1 francesco:2 datasets:5 finite:2 descent:2 displayed:2 extended:1 incorporated:1 worring:1 y1:7 interacting:1 varied:1 sweeping:1 introduced:2 pair:3 required:1 paris:5 connection:1 slivkins:1 engine:3 nip:7 trans:1 beyond:1 adversary:4 proceeds:1 below:2 summarize:1 challenge:2 geusebroek:1 max:1 video:1 suitable:4 syed:1 ranked:3 natural:2 predicting:1 indicator:2 stricly:1 representing:1 scheme:3 improve:1 deemed:2 prior:1 literature:3 l2:1 loza:1 relative:1 unsurprisingly:1 freund:1 loss:73 fully:1 sublinear:1 generation:2 limitation:2 
allocation:1 proven:1 facing:1 validation:1 degree:2 proxy:2 informativeness:1 principle:1 playing:4 share:1 pi:13 balancing:1 maxt:1 elsewhere:1 placed:5 offline:1 side:4 formal:1 allow:1 wide:1 face:1 taking:2 benefit:1 van:1 feedback:30 dimension:1 boundary:1 world:3 cumulative:6 fb:5 author:1 collection:3 made:2 adaptive:4 w0i:8 far:1 cope:1 transaction:1 approximate:1 preferred:1 l00:2 dealing:2 instantiation:1 uai:1 hayes:1 nelder:1 shwartz:1 alternatively:1 search:3 sk:1 pretty:1 streeter:1 learn:1 nature:2 reasonably:1 golovin:1 decoupling:1 obtaining:1 unavailable:1 requested:1 improving:2 kui:1 cl:10 domain:2 pk:1 main:2 motivation:1 noise:2 profile:1 allowed:2 referred:1 ny:1 formalization:1 precision:1 position:5 comprises:1 inferring:1 exponential:1 lie:1 jmlr:2 weighting:1 third:1 theorem:13 rk:1 bad:1 xt:58 specific:5 showing:1 list:5 concern:1 false:6 importance:1 iyer:1 conditioned:1 occurring:1 horizon:1 margin:4 sorting:4 intersection:1 logarithmic:1 ordered:10 contained:2 recommendation:4 collectively:1 corresponds:2 determines:1 relies:1 satisfies:2 acm:1 slot:5 conditional:2 goal:2 viewed:2 sorted:3 identity:1 orabona:2 replace:3 experimentally:1 change:1 hard:1 typical:4 specifically:2 corrected:2 mccullagh:1 decouple:1 lemma:4 kearns:1 called:4 total:1 multimedia:3 experimental:2 la:2 meaningful:1 exception:1 select:3 damaging:1 radlinski:1 latter:2 arises:1 meant:2 brevity:2 relevance:4 crammer:1 evaluate:1 audio:1 |
4,223 | 4,822 | Transelliptical Graphical Models
Fang Han
Department of Biostatistics
Johns Hopkins University
Baltimore, MD 21210
[email protected]
Han Liu
Department of Operations Research
and Financial Engineering
Princeton University, NJ 08544
[email protected]
Cun-hui Zhang
Department of Statistics
Rutgers University
Piscataway, NJ 08854
[email protected]
Abstract
We advocate the use of a new distribution family, the transelliptical, for robust
inference of high dimensional graphical models. The transelliptical family is an
extension of the nonparanormal family proposed by Liu et al. (2009). Just as the
nonparanormal extends the normal by transforming the variables using univariate
functions, the transelliptical extends the elliptical family in the same way. We
propose a nonparametric rank-based regularization estimator which achieves the
parametric rates of convergence for both graph recovery and parameter estimation. Such a result suggests that the extra robustness and flexibility obtained by
the semiparametric transelliptical modeling incurs almost no efficiency loss. We
also discuss the relationship between this work with the transelliptical component
analysis proposed by Han and Liu (2012).
1 Introduction
We consider the problem of learning high dimensional graphical models. In a typical setting, a
d-dimensional random vector X = (X1, ..., Xd)^T can be represented as an undirected graph denoted by G = (V, E), where V contains nodes corresponding to the d variables in X, and the edge set E describes the conditional independence relationship among X1, ..., Xd. Let X\{i,j} := {Xk : k ≠ i, j}. We say the joint distribution of X is Markov to G if Xi is independent of Xj given X\{i,j} for all (i, j) ∉ E. While often G is assumed given, here we want to estimate it from data.
Most graph estimation methods rely on the Gaussian graphical models, in which the random vector X is assumed to be Gaussian: X ∼ N_d(μ, Σ). Under this assumption, the graph G is encoded by the precision matrix Θ := Σ^{-1}. More specifically, no edge connects Xj and Xk if and only if Θjk = 0. This problem of estimating G is called covariance selection [5]. In low dimensions where d < n, [6, 7] develop a multiple testing procedure for identifying the sparsity pattern of the precision matrix. In high dimensions where d ≫ n, [21] propose a neighborhood pursuit approach for estimating Gaussian graphical models by solving a collection of sparse regression problems using the Lasso [25, 3]. Such an approach can be viewed as a pseudo-likelihood approximation of the full likelihood. In contrast, [1, 30, 10] propose a penalized likelihood approach to directly estimate Θ. [15, 14, 24] maximize the non-concave penalized likelihood to obtain an estimator with less bias than the traditional L1-regularized estimator. Under the irrepresentable conditions [33, 31, 27], [22, 23] study the theoretical properties of the penalized likelihood methods. More recently, [29, 2] propose the graphical Dantzig selector and CLIME, which can be solved by linear programming and possess more favorable theoretical properties than the penalized likelihood approach.
Besides Gaussian models, [18] propose a semiparametric procedure named nonparanormal SKEPTIC which extends the Gaussian family to the more flexible semiparametric Gaussian copula family. Instead of assuming X follows a Gaussian distribution, they assume there exists a set of monotone functions f1, . . . , fd, such that the transformed data f(X) := (f1(X1), . . . , fd(Xd))^T is Gaussian. More details can be found in [18]. [32] has developed a scalable software package to implement these algorithms. In another line of research, [26] extends the Gaussian graphical models to the elliptical graphical models. However, for elliptical distributions, only the generalized partial correlation graph can be reliably estimated. These graphs only represent the conditional uncorrelatedness, but not the conditional independence, among the variables. Therefore, by extending the Gaussian to the elliptical family, the gain in modeling flexibility is traded off with a loss in the strength of inference.
In a related work, [9] provide a latent variable interpretation of the generalized partial correlation
graph for multivariate t-distributions. An EM-type algorithm is proposed to fit the model for high
dimensional data. However, the theoretical properties of their estimator are unknown.
In this paper, we introduce a new distribution family named the transelliptical graphical model. A key concept is the transelliptical distribution [12]. The transelliptical distribution is a generalization of the nonparanormal distribution proposed by [18]. By mimicking how the nonparanormal extends the normal family, the transelliptical extends the elliptical family in the same way. The transelliptical family contains the nonparanormal family and the elliptical family. To infer the graph structure, a rank-based procedure using the Kendall's tau statistic is proposed. We show such a procedure is adaptive over the transelliptical family: the procedure by default delivers a conditional uncorrelatedness graph among certain latent variables; however, if the true distribution is nonparanormal, the procedure automatically delivers the conditional independence graph. Computationally, the only extra cost is a one-pass data sort, which is almost negligible. Theoretically, even though the transelliptical family is much larger than the nonparanormal family, the same parametric rates of convergence for graph recovery and parameter estimation can be established. These results suggest that the transelliptical graphical model can be used routinely as a replacement of the nonparanormal models. Thorough numerical results are provided to back up our theory.
2 Background on Elliptical Distributions
Let X and Y be two random variables; we denote by X =_d Y if they have the same distribution.
Definition 2.1 (elliptical distribution [8]). Let μ ∈ R^d and Σ ∈ R^{d×d} with rank(Σ) = q ≤ d. A d-dimensional random vector X has an elliptical distribution, denoted by X ∼ EC_d(μ, Σ, ξ), if it has a stochastic representation: X =_d μ + ξAU, where U is a random vector uniformly distributed on the unit sphere in R^q, ξ ≥ 0 is a scalar random variable independent of U, and A ∈ R^{d×q} is a deterministic matrix such that AA^T = Σ.
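To make the stochastic representation concrete, the following minimal sketch (our illustration, not part of the original text; the function names are ours, and Σ is assumed non-singular so that a Cholesky factor exists) draws samples X = μ + ξAU. Taking ξ ∼ χ_d recovers the Gaussian N(μ, Σ).

    import numpy as np

    def sample_elliptical(n, mu, Sigma, xi_sampler, rng=None):
        # Draw n samples X = mu + xi * A U with A A^T = Sigma (Definition 2.1).
        # xi_sampler(n, rng) returns the n nonnegative generating variables xi.
        rng = np.random.default_rng(rng)
        d = len(mu)
        A = np.linalg.cholesky(Sigma)  # one valid A with A A^T = Sigma
        G = rng.standard_normal((n, d))
        U = G / np.linalg.norm(G, axis=1, keepdims=True)  # uniform on the unit sphere
        xi = xi_sampler(n, rng)
        return mu + xi[:, None] * (U @ A.T)

    # xi ~ chi_d gives X ~ N(mu, Sigma):
    X = sample_elliptical(1000, np.zeros(3), np.eye(3),
                          lambda n, rng: np.sqrt(rng.chisquare(df=3, size=n)), rng=0)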
Remark 2.1. An equivalent definition of an elliptical distribution is that its characteristic function can be written as exp(i t^T μ) φ(t^T Σ t), where φ is a properly-defined characteristic function which has a one-to-one mapping with ξ in Definition 2.1. In this setting we denote by X ∼ EC_d(μ, Σ, φ).
An elliptical distribution does not necessarily have a density. One example is the rank-deficient Gaussian. More examples can be found in [11]. However, when the random variable ξ is absolutely continuous with respect to the Lebesgue measure and Σ is non-singular, the density of X exists and has the form
p(x) = |Σ|^{-1/2} g((x − μ)^T Σ^{-1} (x − μ)),     (1)
where g(·) is a scale function uniquely determined by the distribution of ξ. In this case, we can also denote it as X ∼ EC_d(μ, Σ, g). Many multivariate distributions belong to the elliptical family. For example, when g(x) = (2π)^{-d/2} exp{−x/2}, X is d-dimensional Gaussian. Another important subclass is the multivariate t-distribution with degrees of freedom v, in which we choose
g(x) = c_v · Γ((v + d)/2) / ((vπ)^{d/2} Γ(v/2)) · (1 + c_v^2 x / v)^{−(v+d)/2},     (2)
where c_v is a normalizing constant.
The model family in Definition 2.1 is not identifiable. For example, given X ∼ EC_d(μ, Σ, ξ) with rank(Σ) = q, there will be multiple A's corresponding to the same Σ, i.e., there exist A1 ≠ A2 ∈ R^{d×q} such that A1 A1^T = A2 A2^T = Σ. For some constant c ≠ 0, we define ξ* = ξ/c and A* = c · A; then ξAU = ξ* A* U. Therefore, the matrix Σ is unique only up to a constant scaling. To make the model identifiable, we impose the condition that max{diag(Σ)} = 1. More discussions about the identifiability issue can be found in [12].
3 Transelliptical Graphical Models
In this paper we only consider distributions with continuous marginals. We introduce the transelliptical graphical models in analogy to the nonparanormal graphical models [19, 18]. The key concept is the transelliptical distribution, which is also introduced in [12]. However, the definition of transelliptical distribution in this paper is slightly more restrictive than that in [12] due to the complication of graphical modeling. More specifically, let
R_d^+ := {Σ ∈ R^{d×d} : Σ^T = Σ, diag(Σ) = 1, Σ ≻ 0},     (3)
we define the transelliptical distribution as follows:
Definition 3.1 (transelliptical distribution). A continuous random vector X = (X1, . . . , Xd)^T is transelliptical, denoted by X ∼ TE_d(Σ, ξ; f1, . . . , fd), if there exists a set of monotone univariate functions f1, . . . , fd and a nonnegative random variable ξ satisfying P(ξ = 0) = 0, such that
(f1(X1), . . . , fd(Xd))^T ∼ EC_d(0, Σ, ξ), where Σ ∈ R_d^+.     (4)
Here, Σ is called the latent generalized correlation matrix.¹
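To make Definition 3.1 operational, the following sketch (again our illustration, reusing the sample_elliptical helper above) draws X by sampling the latent elliptical vector and inverting the monotone transforms coordinatewise. The list f_invs of inverse maps f_j^{-1} is user-supplied.

    def sample_transelliptical(n, Sigma, xi_sampler, f_invs, rng=None):
        # Draw X with (f1(X1), ..., fd(Xd))^T ~ EC_d(0, Sigma, xi) (Definition 3.1).
        # f_invs is a list of the inverse maps f_j^{-1} of the monotone transforms.
        d = Sigma.shape[0]
        Z = sample_elliptical(n, np.zeros(d), Sigma, xi_sampler, rng)
        return np.column_stack([finv(Z[:, j]) for j, finv in enumerate(f_invs)])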
We now discuss the relationship between the transelliptical family and the nonparanormal family, which is defined as follows:
Definition 3.2 (nonparanormal distribution). A random vector X = (X1, . . . , Xd)^T is nonparanormal, denoted by X ∼ NPN_d(Σ; f1, . . . , fd), if there exist monotone functions f1, . . . , fd such that (f1(X1), . . . , fd(Xd))^T ∼ N_d(0, Σ), where Σ ∈ R_d^+ is called the latent correlation matrix.
From Definitions 3.1 and 3.2, we see the transelliptical is a strict extension of the nonparanormal. Both families assume there exists a set of univariate transformations such that the transformed data follow a base distribution: the nonparanormal exploits a normal base distribution, while the transelliptical exploits an elliptical base distribution. In the nonparanormal, Σ is the correlation matrix of the latent normal, and is therefore called the latent correlation matrix; in the transelliptical, Σ is the generalized correlation matrix of the latent elliptical distribution, and is therefore called the latent generalized correlation matrix.
We now define the transelliptical graphical models. Let X ∼ TE_d(Σ, ξ; f1, . . . , fd), where Σ ∈ R_d^+ is the latent generalized correlation matrix. In this paper, we always assume the second moment Eξ² < ∞. We define Θ := Σ^{-1} to be the latent generalized concentration matrix. Let Θjk be the element of Θ in the j-th row and k-th column. We define the latent generalized partial correlation matrix Γ by Γjk := −Θjk / √(Θjj Θkk). Let diag(A) be the matrix A with off-diagonal elements replaced by zero and A^{1/2} be the square root matrix of A. It is easy to see that
Γ = −[diag(Σ^{-1})]^{-1/2} Σ^{-1} [diag(Σ^{-1})]^{-1/2}.     (5)
Therefore, Γ has the same nonzero pattern as Σ^{-1}. We then define an undirected graph G = (V, E): the vertex set V contains nodes corresponding to the d variables in X, and the edge set E satisfies
(Xj, Xk) ∈ E if and only if Γjk ≠ 0, for j, k = 1, . . . , d.     (6)
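For intuition, (5) is a one-liner given Σ; the following sketch (our addition, with names of our choosing) computes Γ and reads off the edge set (6) from its off-diagonal nonzero pattern.

    def latent_partial_correlation(Sigma):
        # Gamma = -[diag(Theta)]^{-1/2} Theta [diag(Theta)]^{-1/2}, Theta = Sigma^{-1}, as in (5).
        Theta = np.linalg.inv(Sigma)
        s = 1.0 / np.sqrt(np.diag(Theta))
        return -(Theta * np.outer(s, s))

    # Edge set (6): (j, k) in E iff the off-diagonal Gamma[j, k] != 0
    # (numerically, above a small tolerance):
    # E = np.abs(latent_partial_correlation(Sigma)) > 1e-10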
Given a graph G, we define R_d^+(G) to be the set containing all the Σ ∈ R_d^+ with zero entries at the positions specified by the graph G. The transelliptical graphical model induced by G is defined as:
Definition 3.3 (transelliptical graphical model). The transelliptical graphical model induced by a graph G, denoted by P(G), is defined to be the set of distributions:
P(G) := {all the transelliptical distributions TE_d(Σ, ξ; f1, . . . , fd) satisfying Σ ∈ R_d^+(G)}.     (7)
In the rest of this section, we prove some properties of the transelliptical family and discuss the interpretation of the meaning of the graph G. This graph is called the latent generalized partial correlation graph. First, we show the transelliptical family is closed under marginalization and conditioning.
¹ One thing to note is that in [12], the condition that Σ ∈ R_d^+ is not required.
Lemma 3.1. Let X := (X1, . . . , Xd)^T ∼ TE_d(Σ, ξ; f1, . . . , fd). The marginal and the conditional distributions of (X1, X2)^T given the remaining variables are still transelliptical.
Proof. Since X ∼ TE_d(Σ, ξ; f1, . . . , fd), we have (f1(X1), . . . , fd(Xd))^T ∼ EC_d(0, Σ, ξ). Let Zj := fj(Xj) for j = 1, . . . , d. From Theorem 2.18 of [8], the marginal distribution of (Z1, Z2)^T and the conditional distribution of (Z1, Z2)^T given the remaining Z3, . . . , Zd are both elliptical. By definition, the marginal distribution of (X1, X2)^T is transelliptical. To see the conditional case, since X has continuous marginals and f1, . . . , fd are monotone, the distribution of (X1, X2)^T conditional on X\{1,2} is the same as conditional on Z\{1,2}. Combined with the fact that Z1 = f1(X1) and Z2 = f2(X2), we know that (X1, X2)^T | X\{1,2} follows a transelliptical distribution.
From (5), we see the matrices Γ and Θ have the same nonzero pattern; therefore, they encode the same graph G. Let X ∼ TE_d(Σ, ξ; f1, . . . , fd). The next lemma shows that, if the second moment of X exists, the absence of an edge in the graph G is equivalent to the pairwise conditional uncorrelatedness of the two corresponding latent variables.
Lemma 3.2. Let X := (X1, . . . , Xd)^T ∼ TE_d(Σ, ξ; f1, . . . , fd) with Eξ² < ∞, and Zj := fj(Xj) for j = 1, . . . , d. Then Γjk = 0 if and only if Zj and Zk are conditionally uncorrelated given Z\{j,k}.
Proof. Let Z := (Z1, . . . , Zd)^T. Since X ∼ TE_d(Σ, ξ; f1, . . . , fd), we have Z ∼ EC_d(0, Σ, ξ). Therefore, the latent generalized correlation matrix Σ is the generalized correlation matrix of the latent variable Z. It suffices to prove that, for elliptical distributions with Eξ² < ∞, the generalized partial correlation matrix Γ as defined in (5) encodes the conditional uncorrelatedness among the variables. Such a result has been proved in Section 2 of [26].
Let A, B, C ⊂ {1, . . . , d}. We say C separates A and B in the graph G if any path from a node in A to a node in B goes through at least one node in C. We denote by XA the subvector of X indexed by A. The next lemma implies the equivalence between the pairwise and global conditional uncorrelatedness of the latent variables for the transelliptical graphical models. This lemma connects the graph theory with probability theory.
Lemma 3.3. Let X ∼ TE_d(Σ, ξ; f1, . . . , fd) be any element of the transelliptical graphical model P(G) satisfying Eξ² < ∞. Let Z := (Z1, . . . , Zd)^T with Zj = fj(Xj) and A, B, C ⊂ {1, . . . , d}. Then C separates A and B in G if and only if ZA and ZB are conditionally uncorrelated given ZC.
Proof. By definition, we know Z ∼ EC_d(0, Σ, ξ). It then suffices to show that the pairwise conditional uncorrelatedness implies the global conditional uncorrelatedness for the elliptical family. This follows from the same induction argument as in Theorem 3.7 of [16].
Compared with the nonparanormal graphical model, the transelliptical graphical model gains a lot in modeling flexibility, but at the price of inferring a weaker notion of graphs: a missing edge in the graph only represents the conditional uncorrelatedness of the latent variables. The next lemma shows that we do not lose anything compared with the nonparanormal graphical model. The proof of this lemma is simple and is omitted. Some related discussions can be found in [19].
Lemma 3.4. Let X ∼ TE_d(Σ, ξ; f1, . . . , fd) be a member of the transelliptical graphical model P(G). If X is also nonparanormal, the graph G encodes the conditional independence relationship of X (in other words, the distribution of X is Markov to G).
4 Rank-based Regularization Estimator
In this section, we propose a nonparametric rank-based regularization estimator which achieves the optimal parametric rates of convergence for both graph recovery and parameter estimation. The main idea of our procedure is to treat the marginal transformation functions fj and the generating variable ξ as nuisance parameters, and exploit the nonparametric Kendall's tau statistic to directly estimate the latent generalized correlation matrix Σ. The obtained correlation matrix estimate is then plugged into the CLIME procedure to estimate the sparse latent generalized concentration matrix Θ. From the previous discussion, we know the graph G is encoded by the nonzero pattern of Θ. We then get a graph estimator by thresholding the estimated Θ̂.
4.1 The Kendall's tau Statistic and its Invariance Property
Let x1, . . . , xn ∈ R^d be n observations of a random vector X ∼ TE_d(Σ, ξ; f1, . . . , fd). Our task is to estimate the latent generalized concentration matrix Θ := Σ^{-1}. The Kendall's tau is defined as:
τ̂jk = (2 / (n(n − 1))) Σ_{1 ≤ i < i′ ≤ n} sign((xij − xi′j)(xik − xi′k)),     (8)
which is a monotone transformation-invariant correlation between the empirical realizations of two random variables Xj and Xk. Let X̃j and X̃k be two independent copies of Xj and Xk. The population version of the Kendall's tau statistic is τjk := Corr(sign(Xj − X̃j), sign(Xk − X̃k)).
Let X ∼ TE_d(Σ, ξ; f1, . . . , fd). The following theorem from [12] illustrates an important relationship between the population Kendall's tau statistic τjk and the latent generalized correlation coefficient Σjk.
Theorem 4.1 (Invariance Property of the Kendall's tau Statistic [12]). Let X := (X1, . . . , Xd)^T ∼ TE_d(Σ, ξ; f1, . . . , fd). We denote by τjk the population Kendall's tau statistic between Xj and Xk. Then Σjk = sin((π/2) τjk).
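A quick Monte Carlo sanity check of Theorem 4.1 (our illustration) in the simplest Gaussian case: for a bivariate normal with correlation ρ, the quantity sin((π/2) τ̂) computed from a sample should recover ρ.

    import numpy as np
    from scipy.stats import kendalltau

    rho = 0.6
    Z = np.random.default_rng(1).multivariate_normal(
        [0, 0], [[1, rho], [rho, 1]], size=20000)
    tau, _ = kendalltau(Z[:, 0], Z[:, 1])
    print(np.sin(0.5 * np.pi * tau))  # approximately 0.6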
4.2 Rank-based Regularization Method
We start with some notation. We denote by I(·) the indicator function and by I_d the identity matrix. Given a matrix A, we define ‖A‖max := max_{jk} |Ajk| and ‖A‖1 := Σ_{jk} |Ajk|.
Motivated by Theorem 4.1, we define Ŝ = [Ŝjk] ∈ R^{d×d} to estimate Σ:
Ŝjk = sin((π/2) τ̂jk) · I(j ≠ k) + I(j = k).     (9)
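Concretely, (9) can be computed as below (a sketch of ours; scipy.stats.kendalltau implements an efficient O(n log n) pairwise computation in the spirit of [4]).

    import numpy as np
    from scipy.stats import kendalltau

    def latent_correlation_estimate(X):
        # S_hat with S_hat[j, k] = sin(pi/2 * tau_hat_jk) off the diagonal, 1 on it, as in (9).
        n, d = X.shape
        S_hat = np.eye(d)
        for j in range(d):
            for k in range(j + 1, d):
                tau, _ = kendalltau(X[:, j], X[:, k])
                S_hat[j, k] = S_hat[k, j] = np.sin(0.5 * np.pi * tau)
        return S_hat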
We then plug Ŝ into the CLIME estimator [2] to get the final parameter and graph estimates. More specifically, the latent generalized concentration matrix Θ can be estimated by solving
Θ̂ = arg min_Θ Σ_{j,k} |Θjk|  s.t.  ‖Ŝ Θ − I_d‖max ≤ λ,     (10)
where λ > 0 is a tuning parameter. [2] show that this optimization can be decomposed into d vector minimization problems, each of which can be reformulated as a linear program. Thus it has the potential to scale to very large problems. Once Θ̂ is obtained, we can apply an additional thresholding step to estimate the graph G. For this, we define a graph estimator Ĝ = (V, Ê), in which an edge (j, k) ∈ Ê if |Θ̂jk| ≥ γ. Here γ is another tuning parameter.
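The column-wise decomposition can be sketched with a generic convex solver (our illustration using cvxpy rather than the specialized LP code of [2]; the min-magnitude symmetrization follows their paper).

    import numpy as np
    import cvxpy as cp

    def clime(S_hat, lam):
        # Solve (10) column by column: min ||theta||_1 s.t. ||S_hat theta - e_j||_inf <= lam.
        d = S_hat.shape[0]
        Theta = np.zeros((d, d))
        for j in range(d):
            e_j = np.zeros(d); e_j[j] = 1.0
            theta = cp.Variable(d)
            cp.Problem(cp.Minimize(cp.norm1(theta)),
                       [cp.norm_inf(S_hat @ theta - e_j) <= lam]).solve()
            Theta[:, j] = theta.value
        # Symmetrize by keeping the smaller-magnitude entry, as in [2].
        keep = np.abs(Theta) <= np.abs(Theta.T)
        return np.where(keep, Theta, Theta.T)

    # Graph estimate: E_hat[j, k] = True iff |Theta_hat[j, k]| >= gamma.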
Compared with the original CLIME, the extra cost of our rank-based procedure is the computation of Ŝ, which requires us to evaluate d(d − 1)/2 pairwise Kendall's tau statistics. A naive implementation of the Kendall's tau requires O(n²) computation. However, an efficient algorithm based on sorting and balanced binary trees has been developed to calculate the Kendall's tau statistic with a computational complexity of O(n log n) [4]. Therefore, the incurred computational burden is negligible.
Remark 4.1. Similar rank-based procedures have been discussed in [19, 18, 28]. Unlike our work, they focus on the more restrictive nonparanormal family and discuss several rank-based procedures using the normal-score, Spearman's rho, and Kendall's tau. Unlike our results, they advocate the use of the Spearman's rho and normal-score correlation coefficients. Their main concern is that, within the more restrictive nonparanormal family, the Spearman's rho and normal-score correlations are slightly easier to compute and have smaller asymptotic variance. In contrast to their results, the new insight obtained in the current paper is that we advocate the usage of the Kendall's tau due to its invariance property within the much larger transelliptical family. In fact, we can show that the Spearman's rho is not invariant within the transelliptical family unless the true distribution is nonparanormal. More details on this issue can be found in [8].
5 Asymptotic Properties
We analyze the theoretical properties of the rank-based regularization estimator proposed in Section 4.2. Our main result shows that, under the same conditions on Σ that ensure the parameter estimation and graph recovery consistency of the original CLIME estimator for Gaussian graphical models, our
rank-based regularization procedure achieves exactly the same parametric rates of convergence for
both parameter estimation and graph recovery for the much larger transelliptical family. This result
suggests that the transelliptical graphical model can be used as a safe replacement of the Gaussian
graphical models, the nonparanormal graphical models, and the elliptical graphical models.
We introduce some additional notation. Given a symmetric matrix A, for 0 ≤ q < 1, we define ‖A‖Lq := max_i Σ_j |Aij|^q and the spectral norm ‖A‖L2 to be its largest eigenvalue. We define
S_d(q, s, M) := {Θ : ‖Θ‖L1 ≤ M and ‖Θ‖Lq ≤ s}.     (11)
For q = 0, the class S_d(0, s, M) contains all the s-sparse matrices. Our main result is Theorem 5.1.
Theorem 5.1. Let X ∼ TE_d(Σ, ξ; f1, . . . , fd) with Σ ∈ R_d^+ and Θ := Σ^{-1} ∈ S_d(q, s, M) with 0 ≤ q < 1. Let Θ̂ be defined in (10). There exist constants C0 and C1 depending only on q such that, whenever λ = C0 M √((log d)/n), with probability no less than 1 − d^{-2}, we have
(Parameter estimation)    ‖Θ̂ − Θ‖L2 ≤ C1 M^{2−2q} s ((log d)/n)^{(1−q)/2}.     (12)
Let Ĝ be the graph estimator defined in Section 4.2 with the additional tuning parameter γ = 4Mλ. If we further assume Θ ∈ S_d(0, s, M) and min_{j,k: Θjk ≠ 0} |Θjk| ≥ 2γ, then
(Graph recovery)    P(Ĝ = G) ≥ 1 − o(1),     (13)
where G is the graph determined by the nonzero pattern of Θ.
Proof. The difference between the rank-based CLIME and the original CLIME is that we replace the Pearson correlation coefficient matrix R̂ by the Kendall's tau matrix Ŝ. By examining the proofs of Theorems 1 and 7 in [2], the only property needed of R̂ is an exponential concentration inequality P(|R̂jk − Σjk| > t) ≤ c1 exp(−c2 n t²). Therefore, it suffices if we can prove a similar concentration inequality for |Ŝjk − Σjk|. Since
Ŝjk = sin((π/2) τ̂jk)  and  Σjk = sin((π/2) τjk),
we have |Ŝjk − Σjk| ≤ (π/2) |τ̂jk − τjk|, since sin is 1-Lipschitz. Therefore, we only need to prove
P(|τ̂jk − τjk| > t) ≤ exp(−n t²/(2π)).
This result holds since τ̂jk is a U-statistic: τ̂jk = (2/(n(n − 1))) Σ_{1 ≤ i < i′ ≤ n} K_τ(x^i, x^{i′}), where K_τ(x^i, x^{i′}) = sign((x^i_j − x^{i′}_j)(x^i_k − x^{i′}_k)) is a bounded kernel taking values between −1 and 1. The result follows from the Hoeffding inequality for U-statistics [13].
6 Numerical Experiments
We investigate the empirical performance of the rank-based regularization estimator. We compare it with the following methods: (1) Pearson: the CLIME using the Pearson sample correlation; (2) Kendall: the CLIME using the Kendall's tau; (3) Spearman: the CLIME using the Spearman's rho; (4) NPN: the CLIME using the original nonparanormal correlation estimator [19]; (5) NS: the CLIME using the normal-score correlation. The latter three methods are discussed under the nonparanormal graphical model and we refer to [18] for detailed descriptions.
6.1 Simulation Studies
We adopt the same data generating procedure as in [18]. To generate a d-dimensional sparse graph G = (V, E), where V = {1, . . . , d} corresponds to the variables X = (X1, . . . , Xd), we associate each index j ∈ {1, . . . , d} with a bivariate data point (Yj^(1), Yj^(2)) ∈ [0, 1]², where Y1^(k), . . . , Yn^(k) ∼ Uniform[0, 1] for k = 1, 2. Each pair of vertices (i, j) is included in the edge set E with probability P((i, j) ∈ E) = exp(−‖yi − yj‖n² / 0.25) / √(2π), where yi := (yi^(1), yi^(2)) is the empirical observation of (Yi^(1), Yi^(2)) and ‖·‖n represents the Euclidean distance.
[Figure 1 about here: a 3 × 4 grid of ROC curves plotting TPR against FPR for Pearson, Kendall, Spearman, NPN, and NS, one column per generating scheme.]
Figure 1: ROC curves for different methods in generating schemes 1 to 4 and different contamination levels r = 0, 0.02, 0.05 (top, middle, bottom) using the CLIME. Here n = 400 and d = 100.
We restrict the maximum degree of the graph to be 4 and build the inverse correlation matrix Ω according to Ωjk = 1 if j = k, Ωjk = 0.145 if (j, k) ∈ E, and Ωjk = 0 otherwise. The value 0.145 guarantees the positive definiteness of Ω. Let Σ = Ω^{-1}. To obtain the correlation matrix, we rescale Σ so that all its diagonal elements are 1.
In the simulation study we randomly sample n data points from a certain transelliptical distribution X ∼ TE_d(Σ, ξ; f1, . . . , fd). We set d = 100. To determine the transelliptical distribution, we first generate Σ as discussed in the previous paragraph. Secondly, three types of ξ are considered:
(1) ξ^(1) ∼ χ_d, i.e., ξ follows a chi-distribution with degree of freedom d;
(2) ξ^(2) =_d ξ1*/ξ2*, where ξ1* ∼ χ_d, ξ2* ∼ χ_1, and ξ1* is independent of ξ2*;
(3) ξ^(3) ∼ F(d, 1), i.e., ξ follows an F-distribution with degrees of freedom d and 1.
Thirdly, two types of transformation functions f = {fj}_{j=1}^d are considered:
(1) linear transformation: f^(1) = {f0, . . . , f0} with f0(x) = x;
(2) nonlinear transformation: f^(2) = {f1, . . . , fd} = {h1, h2, h3, h4, h5, h1, h2, h3, h4, h5, . . .}, where, with φ and Φ the standard normal density and CDF and all integrals taken over R,
h1^{-1}(x) := x,
h2^{-1}(x) := sign(x)|x|^{1/2} / (∫ |t| φ(t) dt)^{1/2},
h3^{-1}(x) := x³ / (∫ t⁶ φ(t) dt)^{1/2},
h4^{-1}(x) := (Φ(x) − ∫ Φ(t)φ(t) dt) / (∫ (Φ(y) − ∫ Φ(t)φ(t) dt)² φ(y) dy)^{1/2},
h5^{-1}(x) := (exp(x) − ∫ exp(t)φ(t) dt) / (∫ (exp(y) − ∫ exp(t)φ(t) dt)² φ(y) dy)^{1/2}.
We consider the following four data generating schemes (a sampling sketch is given after the list):
• Scheme 1: X ∼ TE_d(Σ, ξ^(1); f^(1)), i.e., X ∼ N(0, Σ).
• Scheme 2: X ∼ TE_d(Σ, ξ^(2); f^(1)), i.e., X follows the multivariate Cauchy.
• Scheme 3: X ∼ TE_d(Σ, ξ^(3); f^(1)), i.e., the distribution is highly related to the multivariate t.
• Scheme 4: X ∼ TE_d(Σ, ξ^(3); f^(2)).
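The generating variables ξ^(1), ξ^(2), ξ^(3) are easy to simulate; a sketch of ours follows, reusing the sample_elliptical helper from Section 2 and assuming Sigma is the rescaled matrix built above.

    import numpy as np
    from scipy.stats import chi, f as f_dist

    D = 100
    xi_gauss  = lambda n, rng: chi.rvs(df=D, size=n, random_state=rng)             # xi^(1)
    xi_cauchy = lambda n, rng: (chi.rvs(df=D, size=n, random_state=rng)
                                / chi.rvs(df=1, size=n, random_state=rng))         # xi^(2)
    xi_f      = lambda n, rng: f_dist.rvs(dfn=D, dfd=1, size=n, random_state=rng)  # xi^(3)

    # Scheme 2, for instance, with the identity transforms f^(1):
    # X = sample_elliptical(400, np.zeros(D), Sigma, xi_cauchy, rng=0)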
To evaluate the robustness of different methods, let r ∈ [0, 1) represent the proportion of samples being contaminated. For each dimension, we randomly select ⌊nr⌋ entries and replace them with either 5 or −5 with equal probability. The final data matrix we obtained is X ∈ R^{n×d}. Here we pick r = 0, 0.02, or 0.05. Under Scheme 1 to Scheme 4 with different levels of contamination (r = 0, 0.02, or 0.05), we repeatedly generate the data matrix X 100 times and compute the averaged False Positive Rates and False Negative Rates using a path of tuning parameters λ from 0.01 to 0.5 and γ = 10^{-5}. The feature selection performances of different methods are evaluated by plotting (FPR(λ), 1 − FNR(λ)). The corresponding ROC curves are presented in Figure 1. We see: (1) when the data are perfectly Gaussian without contamination, all methods perform well; (2) when the data are non-Gaussian, with outliers present or with the latent elliptical distribution differing from the Gaussian, Kendall is better than the other methods in terms of achieving a lower FPR + FNR.
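The contamination step can be sketched as follows (our illustration, with names of our choosing).

    def contaminate(X, r, rng=None):
        # Replace floor(n*r) randomly chosen entries in each column with +5 or -5.
        rng = np.random.default_rng(rng)
        X = X.copy()
        n = X.shape[0]
        for j in range(X.shape[1]):
            idx = rng.choice(n, size=int(n * r), replace=False)
            X[idx, j] = rng.choice([5.0, -5.0], size=idx.size)
        return X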
6.2 Equities Data
We compare different methods on the stock price data from Yahoo! Finance (finance.yahoo.com). We collect the daily closing prices for 452 stocks that are consistently in the S&P 500 index between January 1, 2003 and January 1, 2008. This gives us altogether 1,257 data points, each data point corresponding to the vector of closing prices on a trading day. With St,j denoting the closing price of stock j on day t, we consider the variables Xtj = log(St,j / St−1,j) and build graphs over the indices j. Though this is a time series, we treat the instances Xt as independent replicates.
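In code, the log-returns are a one-liner (our sketch; S is a hypothetical (T, d) array of closing prices with rows indexed by trading day).

    import numpy as np
    # S: (T, d) array of daily closing prices; X[t, j] = log(S_{t+1, j} / S_{t, j}).
    X = np.log(S[1:] / S[:-1])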
[Figure 2 about here: five estimated graphs, one per method (Pearson, Kendall, Spearman, NPN, NS).]
Figure 2: The graphs estimated from the S&P 500 stock data from Jan. 1, 2003 to Jan. 1, 2008 using Pearson, Kendall, Spearman, NPN, and NS (left to right). The nodes are colored according to their GICS sector categories.
The 452 stocks are categorized into 10 Global Industry Classification Standard (GICS) sectors, including Consumer Discretionary (70 stocks), Consumer Staples (35 stocks), Energy (37 stocks), Financials (74 stocks), Health Care (46 stocks), Industrials (59 stocks), Information Technology (64 stocks), Telecommunications Services (6 stocks), Materials (29 stocks), and Utilities (32 stocks).
Figure 2 illustrates the estimated graphs using the same layout; the nodes are colored according to the GICS sector of the corresponding stock. The tuning parameter is automatically selected using a stability-based approach named StARS [20]. We see that different methods produce slightly different graphs. The layout is drawn by a force-based algorithm using the graph estimated by Kendall. We see that stocks from the same GICS sector tend to be grouped with each other, suggesting that our method delivers an informative graph estimate.
7 Discussion and Comparison with Related Work
The transelliptical distribution is also proposed by [12] for semiparametric scale-invariant principal component analysis. Though both papers are based on the transelliptical family, the core ideas and analyses are fundamentally different. For scale-invariant principal component analysis, we impose a structural assumption on the latent generalized correlation matrix; for graph estimation, we impose a structural assumption on the latent generalized concentration matrix. Since the latent generalized correlation matrix encodes marginal uncorrelatedness while the latent generalized concentration matrix encodes conditional uncorrelatedness of the variables, the analyses of the population models are orthogonal and complementary to each other. In particular, for graphical models, we need to characterize the properties of the marginal and conditional distributions of a transelliptical distribution. These properties are not needed for principal component analysis. Moreover, the model interpretation of the inferred transelliptical graph is nontrivial. In a longer technical report [17], we provide a three-layer hierarchical interpretation of the estimated transelliptical graphical model and sharply characterize the relationships between the nonparanormal, elliptical, meta-elliptical, and transelliptical families. This research was supported by NSF award IIS-1116730.
8
References
[1] O. Banerjee, L. E. Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation. Journal of Machine Learning Research, 9(3):485–516, 2008.
[2] T. Cai, W. Liu, and X. Luo. A constrained ℓ1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association, 106(494):594–607, 2011.
[3] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
[4] David Christensen. Fast algorithms for the calculation of Kendall's τ. Computational Statistics, 20(1):51–62, 2005.
[5] A. Dempster. Covariance selection. Biometrics, 28:157–175, 1972.
[6] M. Drton and M. Perlman. Multiple testing and error control in Gaussian graphical model selection. Statistical Science, 22(3):430–449, 2007.
[7] M. Drton and M. Perlman. A SINful approach to Gaussian graphical model selection. Journal of Statistical Planning and Inference, 138(4):1179–1200, 2008.
[8] K. T. Fang, S. Kotz, and K. W. Ng. Symmetric Multivariate and Related Distributions. Chapman & Hall, London, 1990.
[9] Michael A. Finegold and Mathias Drton. Robust graphical modeling with t-distributions. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI '09, pages 169–176, 2009.
[10] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2008.
[11] P. R. Halmos. Measure Theory, volume 18. Springer, 1974.
[12] F. Han and H. Liu. TCA: Transelliptical principal component analysis for high dimensional non-Gaussian data. Technical Report, 2012.
[13] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
[14] A. Jalali, C. Johnson, and P. Ravikumar. High-dimensional sparse inverse covariance estimation using greedy methods. International Conference on Artificial Intelligence and Statistics, 2012. To appear.
[15] C. Lam and J. Fan. Sparsistency and rates of convergence in large covariance matrix estimation. Annals of Statistics, 37:42–54, 2009.
[16] Steffen L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[17] H. Liu, F. Han, and C.-H. Zhang. Transelliptical graphical modeling under a hierarchical latent variable framework. Technical Report, 2012.
[18] H. Liu, F. Han, M. Yuan, J. Lafferty, and L. Wasserman. High dimensional semiparametric Gaussian copula graphical models. Annals of Statistics, 2012.
[19] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10:2295–2328, 2009.
[20] Han Liu, Kathryn Roeder, and Larry Wasserman. Stability approach to regularization selection (StARS) for high dimensional graphical models. In Proceedings of the Twenty-Third Annual Conference on Neural Information Processing Systems (NIPS), 2010.
[21] N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3):1436–1462, 2006.
[22] P. Ravikumar, M. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935–980, 2011.
[23] A. Rothman, P. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494–515, 2008.
[24] X. Shen, W. Pan, and Y. Zhu. Likelihood-based selection and sharp parameter estimation. Journal of the American Statistical Association, 2012. To appear.
[25] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[26] D. Vogel and R. Fried. Elliptical graphical modelling. Biometrika, 98(4):935–951, December 2011.
[27] M. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming. IEEE Transactions on Information Theory, 55(5):2183–2201, 2009.
[28] L. Xue and H. Zou. Regularized rank-based estimation of high-dimensional nonparanormal graphical models. Annals of Statistics, 2012.
[29] M. Yuan. High dimensional inverse covariance matrix estimation via linear programming. Journal of Machine Learning Research, 11(8):2261–2286, 2010.
[30] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007.
[31] P. Zhao and B. Yu. On model selection consistency of lasso. Journal of Machine Learning Research, 7(11):2541–2563, 2006.
[32] T. Zhao, H. Liu, K. Roeder, J. Lafferty, and L. Wasserman. The huge package for high-dimensional undirected graph estimation in R. Journal of Machine Learning Research, 2012. To appear.
[33] H. Zou. The adaptive lasso and its oracle properties. Journal of the American Statistical Association, 101(476):1418–1429, 2006.
4,224 | 4,823 | Calibrated Elastic Regularization in Matrix
Completion
Cun-Hui Zhang
Department of Statistics and Biostatistics
Rutgers University
Piscataway, New Jersey 08854
[email protected]
Tingni Sun
Statistics Department, The Wharton School
University of Pennsylvania
Philadelphia, Pennsylvania 19104
[email protected]
Abstract
This paper concerns the problem of matrix completion, which is to estimate a
matrix from observations in a small subset of indices. We propose a calibrated
spectrum elastic net method with a sum of the nuclear and Frobenius penalties and
develop an iterative algorithm to solve the convex minimization problem. The iterative algorithm alternates between imputing the missing entries in the incomplete
matrix by the current guess and estimating the matrix by a scaled soft-thresholding
singular value decomposition of the imputed matrix until the resulting matrix converges. A calibration step follows to correct the bias caused by the Frobenius
penalty. Under proper coherence conditions and for suitable penalties levels, we
prove that the proposed estimator achieves an error bound of nearly optimal order
and in proportion to the noise level. This provides a unified analysis of the noisy
and noiseless matrix completion problems. Simulation results are presented to
compare our proposal with previous ones.
1 Introduction
Let Θ ∈ R^{d1×d2} be a matrix of interest and Ω* = {1, . . . , d1} × {1, . . . , d2}. Suppose we observe vectors (ωi, yi),
yi = Θωi + εi,   i = 1, . . . , n,     (1)
where ωi ∈ Ω* and the εi are random errors. We are interested in estimating Θ when n is a small fraction of d1 d2. A well-known application of matrix completion is the Netflix problem, where yi is the rating of movie bj by user ai for ω = (ai, bj) ∈ Ω* [1]. In such applications, the proportion of the observed entries is typically very small, so that the estimation or recovery of Θ is impossible without a structure assumption on Θ. In this paper, we assume that Θ is of low rank.
A focus of recent studies of matrix completion has been on a simpler formulation, also known as exact recovery, where the observations are assumed to be uncorrupted, i.e., εi = 0. A direct approach is to minimize rank(M) subject to Mωi = yi. An iterative algorithm was proposed in [5] to project a trimmed SVD of the incomplete data matrix to the space of matrices of a fixed rank r. The nuclear norm was proposed as a surrogate for the rank, leading to the following convex minimization problem in a linear space [2]:
Θ̂^(CR) = arg min_M {‖M‖(N) : Mωi = yi ∀ i ≤ n}.
We denote the nuclear norm by ‖·‖(N) here and throughout this paper. This procedure, analyzed in [2, 3, 4, 11] among others, is parallel to the replacement of the ℓ0 penalty by the ℓ1 penalty in solving the sparse recovery problem in a linear space.
In this paper, we focus on the problem of matrix completion with noisy observations (1) and take exact recovery as a special case. Since the exact constraint is no longer appropriate in the presence of noise, the penalized squared error Σ_{i=1}^n (Mωi − yi)² is considered. By reformulating the problem in Lagrange form, [8] proposed the spectrum Lasso
Θ̂^(MHT) = arg min_M { Σ_{i=1}^n Mωi²/2 − Σ_{i=1}^n yi Mωi + λ‖M‖(N) },     (2)
along with an iterative convex minimization algorithm. However, (2) is difficult to analyze when the sample fraction π0 = n/(d1 d2) is small, due to the ill-posedness of the quadratic term Σ_{i=1}^n Mωi². This has led to two alternatives in [7] and [9]. While [9] proposed to minimize (2) under an additional ℓ∞ constraint on M, [7] modified (2) by replacing the quadratic term Σ_{i=1}^n Mωi² with π0‖M‖²(F). Both [7, 9] provided nearly optimal error bounds when the noise level is of no smaller order than the ℓ∞ norm of the target matrix Θ, but not of smaller order, especially not for exact recovery. In a different approach, [6] proposed a non-convex recursive algorithm and provided error bounds in proportion to the noise level. However, the procedure requires knowledge of the rank r of the unknown Θ, and the error bound is optimal only when d1 and d2 are of the same order.
Our goal is to develop an algorithm for matrix completion that can be as easily computed as the spectrum Lasso (2) and enjoys a nearly optimal error bound proportional to the noise level, so as to continuously cover both the noisy and noiseless cases. We propose to use an elastic penalty, a linear combination of the nuclear and Frobenius norms, which leads to the estimator
Θ̃ = arg min_M { Σ_{i=1}^n Mωi²/2 − Σ_{i=1}^n yi Mωi + λ1‖M‖(N) + (λ2/2)‖M‖²(F) },     (3)
i=1
i=1
where k ? k(N ) and k ? k(F ) are the nuclear and Frobenius norms, respectively. We call (3) spectrum
elastic net (E-net) since it is parallel to the E-net in linear regression, the least squares estimator
with a sum of the `1 and `2 penalties, introduced in [15]. Here the nuclear penalty provides the
sparsity in the spectrum, while the Frobenius penalty regularizes the inversion of the quadratic term.
Meanwhile, since the Frobenius penalty roughly shrinks the estimator by a factor ?0 /(?0 + ?2 ), we
correct this bias by a calibration step,
b = (1 + ?2 /?0 )?.
e
?
(4)
We call this estimator the calibrated spectrum E-net.
Motivated by [8], we develop an EM algorithm to solve (3) for matrix completion. The algorithm iteratively replaces the missing entries with those obtained from a scaled soft-thresholding singular value decomposition (SVD) until the resulting matrix converges. This EM algorithm is guaranteed to converge to the solution of (3).
Under proper coherence conditions, we prove that for suitable penalty levels λ1 and λ2, the calibrated spectrum E-net (4) achieves a desired error bound in the Frobenius norm. Our error bound is of nearly optimal order and in proportion to the noise level. This provides a sharper result than those of [7, 9] when the noise level is of smaller order than the ℓ∞ norm of Θ, and than that of [6] when d2/d1 is large. Our simulation results support the use of the calibrated spectrum E-net. They illustrate that (4) performs comparably to (2) and outperforms the modified method of [7].
Our analysis of the calibrated spectrum E-net uses an inequality similar to a duel certificate bound in [3]. The bound in [3] requires sample size n ≳ min{(r log d)², r(log d)⁶} d, where d = d1 + d2. We use the method of moments to remove a log d factor in the first component of their sample size requirement. This leads to a sample size requirement of n ≳ r² d log d, with an extra r in comparison to the ideal n ≳ r d log d. Since the extra r does not appear in our error bound, its appearance in the sample size requirement seems to be a technicality.
The rest of the paper is organized as follows. In Section 2, we describe an iterative algorithm for the
computation of the spectrum E-net and study its convergence. In Section 3, we derive error bounds
for the calibrated spectrum E-net. Some simulation results are presented in Section 4. Section 5
provides the proof of our main result.
We use the following notation throughout this paper. For matrices M ∈ R^{d1×d2}, ‖M‖(N) is the nuclear norm (the sum of all singular values of M), ‖M‖(S) is the spectrum norm (the largest singular value), ‖M‖(F) is the Frobenius norm (the ℓ2 norm of the vectorized M), and ‖M‖∞ = max_{jk} |Mjk|. Linear mappings from R^{d1×d2} to R^{d1×d2} are denoted by calligraphic letters. For a linear mapping Q, the operator norm is ‖Q‖(op) = sup_{‖M‖(F)=1} ‖QM‖(F). We equip R^{d1×d2} with the inner product ⟨M1, M2⟩ = trace(M1^T M2), so that ⟨M, M⟩ = ‖M‖²(F). For a projection P, P⊥ = I − P with I being the identity. We denote by Eω the unit matrix with 1 at ω ∈ {1, . . . , d1} × {1, . . . , d2}, and by Pω the projection to Eω: M → Mω Eω = ⟨Eω, M⟩ Eω.
2 An algorithm for spectrum elastic regularization
We first present a lemma for the M-step of our iterative algorithm.
Lemma 1 Suppose the matrix W has rank r. The solution to the optimization problem
arg min_Z { ‖Z − W‖²(F)/2 + λ1‖Z‖(N) + λ2‖Z‖²(F)/2 }
is given by S(W; λ1, λ2) = U D_{λ1,λ2} V′ with D_{λ1,λ2} = diag{(d1 − λ1)+, . . . , (dr − λ1)+}/(1 + λ2), where U D V′ is the SVD of W, D = diag{d1, . . . , dr}, and t+ = max(t, 0).
The minimization problem in Lemma 1 is solved by a scaled soft-thresholding SVD. This is parallel
to Lemma 1 in [8] and justified by Remark 1 there. We use Lemma 1 to solve the M-step of the EM
algorithm for the spectrum E-net (3).
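In code, the M-step operator of Lemma 1 is a soft-thresholded and rescaled SVD (a minimal sketch of ours; the function name is our choice).

    import numpy as np

    def scaled_soft_threshold_svd(W, lam1, lam2):
        # S(W; lam1, lam2): soft-threshold the singular values at lam1,
        # then scale by 1/(1 + lam2), as in Lemma 1.
        U, d, Vt = np.linalg.svd(W, full_matrices=False)
        return (U * (np.maximum(d - lam1, 0.0) / (1.0 + lam2))) @ Vt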
We still need an E-step to impute a complete matrix given the observed data {yi, ωi : i = 1, . . . , n}. Since the ωi are allowed to have ties, we need the following notation. Let mω = #{i : ωi = ω, i ≤ n} be the multiplicity of observations at ω ∈ Ω* and m* = max_ω mω be the maximum multiplicity. Suppose that the complete data is composed of m♯ observations at each ω for a certain integer m♯. Let Ȳω^(com) be the sample mean of the complete data at ω and Ȳ^(com) be the matrix with components Ȳω^(com). If the complete data were available, (3) would be equivalent to
n
o
(com)
arg min (m? /2)kY
? M k2(F ) + ?1 kM k(N ) + (?2 /2)kM k2(F ) .
M
(obs)
Let Y ?
= m?1
?
(obs)
(Y ? )d1 ?d2 . In
(obs)
(m? /m? )Y ? +
Y
(imp)
?i =?
yi be the sample mean of the observations at ? and Y
the white noise model, the conditional expectation of Y
(com)
?
(obs)
given Y
=
(obs)
is
(1 ? m? /m? )?? for m? ? m? . This leads to a generalized E-step:
(imp)
= (Y ?
P
(imp)
)d1 ?d2 , Y ?
(obs)
= min{1, (m? /m? )}Y ?
+ (1 ? m? /m? )+ Z?(old) ,
(5)
where Z (old) is the estimation of ? in the previous iteration. This is a genuine E-step when m? = m?
but also allows a smaller m? to reduce the proportion of missing data.
We now present the EM-algorithm for the computation of the spectrum E-net $\tilde\Theta$ in (3).
Algorithm 1 Initialize with $Z^{(0)}$ and $k = 0$. Repeat the following steps:
• E-step: Compute $Y^{(\mathrm{imp})}$ in (5) with $Z^{(\mathrm{old})} = Z^{(k)}$ and assign $k \leftarrow k + 1$,
• M-step: Compute $Z^{(k)} = S(Y^{(\mathrm{imp})}; \lambda_1/m^\circ, \lambda_2/m^\circ)$,
until $\|Z^{(k)} - Z^{(k-1)}\|_{(F)}^2/\|Z^{(k)}\|_{(F)}^2 \le \epsilon$. Then, return $Z^{(k)}$.
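A minimal sketch of Algorithm 1 for the common special case of no ties ($m_\omega \le 1$, $m^\circ = 1$), reusing scaled_soft_threshold_svd from the sketch above; this is our own illustration, not the authors' code:

```python
import numpy as np

def spectrum_enet_em(Y_obs, mask, lam1, lam2, tol=1e-5, max_iter=500):
    """EM iteration for the spectrum E-net (3), no-ties case.
    Y_obs holds the observed values; mask is True on observed entries."""
    Z = np.where(mask, Y_obs, 0.0)  # Z^(0) = Y^(obs)
    for _ in range(max_iter):
        Y_imp = np.where(mask, Y_obs, Z)                      # E-step (5)
        Z_new = scaled_soft_threshold_svd(Y_imp, lam1, lam2)  # M-step (Lemma 1)
        if np.sum((Z_new - Z) ** 2) <= tol * np.sum(Z_new ** 2):
            return Z_new
        Z = Z_new
    return Z
```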
The following theorem states the convergence of Algorithm 1.
Theorem 1 As $k \to \infty$, $Z^{(k)}$ converges to a limit $Z^{(\infty)}$ as a function of the data and $(\lambda_1, \lambda_2, m^\circ)$, and $Z^{(\infty)} = \tilde\Theta$ for $m^\circ \ge m^*$.
Theorem 1 is a variation of a parallel result in [8] and follows from the same proof there. As [8] pointed out, a main advantage of Algorithm 1 is the speed of each iteration. When the maximum multiplicity $m^*$ is small, we simply use $Z^{(0)} = \bar Y^{(\mathrm{obs})}$ and $m^\circ = m^*$; otherwise, we may first run the EM-algorithm for an $m^\circ < m^*$ and use the output as the initialization $Z^{(0)}$ for a second run of the EM-algorithm with $m^\circ = m^*$.
3 Analysis of estimation accuracy
In this section, we derive error bounds for the calibrated spectrum E-net. We need the following notation. Let $r = \mathrm{rank}(\Theta)$, $UDV^\top$ be the SVD of $\Theta$, and $s_1 \ge \ldots \ge s_r$ be the nonzero singular values of $\Theta$. Let $T$ be the tangent space with respect to $UV^\top$, the space of all matrices of the form $UU^\top M_1 + M_2 VV^\top$. The orthogonal projection to $T$ is given by
$$P_T M = UU^\top M + M VV^\top - UU^\top M VV^\top.$$
Theorem 2 Let $\eta = 1 + \lambda_2/\pi_0$ and $H = \sum_{i=1}^n P_{\omega_i}$. Define
$$R = (H - \pi_0)P_T/(\pi_0 + \lambda_2), \qquad \Delta = R(\lambda_2\Theta + \lambda_1 UV^\top), \qquad Q = I - H(P_T H P_T + \lambda_2 P_T)^{-1}P_T. \qquad (6)$$
Let $\varepsilon = \sum_{i=1}^n \varepsilon_i E_{\omega_i}$. Suppose
$$\|P_T R\|_{(\mathrm{op})} \le 1/2, \qquad s_r \ge 5\lambda_1/\lambda_2, \qquad (7)$$
$$\|P_T\varepsilon\|_{(F)} \le \sqrt{r}\,\lambda_1/8, \qquad \|\varepsilon - R(P_T R + P_T)^{-1}P_T\varepsilon\|_{(S)} \le \lambda_1/4, \qquad (8)$$
$$\|P_T\Delta\|_{(F)} \le \sqrt{r}\,\lambda_1/8, \qquad \|Q\Delta\|_{(S)} \le 3\lambda_1/4, \qquad \|P_T^\perp\Delta\|_{(S)} \le \lambda_1. \qquad (9)$$
Then the calibrated spectrum E-net (4) satisfies
$$\|\hat\Theta - \Theta\|_{(F)} \le 2\sqrt{r}\,\lambda_1/\pi_0. \qquad (10)$$
The proof of Theorem 2 is provided in Section 5. When $\omega_i$ are random entries in $\Omega^*$, $EH = \pi_0 I$, so that (8) and the first inequality of (7) are expected to hold under proper conditions. Since the rank of $P_T\Delta$ is no greater than $2r$, (9) essentially requires $\|\Delta\|_{(S)} \lesssim \lambda_1$. Our analysis allows $\lambda_2$ to lie in a certain range $[\lambda_*, \lambda^*]$, and $\lambda^*/\lambda_*$ is large under proper conditions. Still, the choice of $\lambda_2$ is constrained by (7) and (8) since $\Delta$ is linear in $\lambda_2$. When $\lambda_2/\pi_0$ diverges to infinity, the calibrated spectrum E-net (4) becomes the modified spectrum Lasso of [7].
Theorem 2 provides sufficient conditions on the target matrix and the noise for achieving a certain level of estimation error. Intuitively, these conditions on the target matrix $\Theta$ must imply a certain level of coherence (or flatness) of the unknown matrix, since it is impossible to distinguish the unknown from zero when the observations are completely outside its support. In [2, 3, 4, 11], coherence conditions are imposed on
$$\mu_0 = \max\{(d_1/r)\|UU^\top\|_\infty,\ (d_2/r)\|VV^\top\|_\infty\}, \qquad \mu_1 = \sqrt{d_1 d_2/r}\,\|UV^\top\|_\infty, \qquad (11)$$
where $U$ and $V$ are matrices of singular vectors of $\Theta$. [9] considered a more general notion of spikiness of a matrix $M$, defined as the ratio between the $\ell_\infty$ and dimension-normalized $\ell_2$ norms,
$$\alpha_{\mathrm{sp}}(M) = \|M\|_\infty\sqrt{d_1 d_2}\,/\,\|M\|_{(F)}. \qquad (12)$$
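For illustration, the coherence constants (11) and the spikiness (12) can be computed directly; the sketch below is our own and assumes the rank $r$ is known:

```python
import numpy as np

def coherence_and_spikiness(Theta, r):
    """Coherence constants mu0, mu1 of (11) and spikiness alpha_sp of (12)."""
    d1, d2 = Theta.shape
    U, s, Vt = np.linalg.svd(Theta, full_matrices=False)
    U, V = U[:, :r], Vt[:r].T                      # leading singular vectors
    mu0 = max((d1 / r) * np.abs(U @ U.T).max(), (d2 / r) * np.abs(V @ V.T).max())
    mu1 = np.sqrt(d1 * d2 / r) * np.abs(U @ V.T).max()
    alpha_sp = np.abs(Theta).max() * np.sqrt(d1 * d2) / np.linalg.norm(Theta, 'fro')
    return mu0, mu1, alpha_sp
```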
Suppose in the rest of the section that $\omega_i$ are iid points uniformly distributed in $\Omega^*$ and $\varepsilon_i$ are iid $N(0,\sigma^2)$ variables independent of $\{\omega_i\}$. The following theorem asserts that under certain coherence conditions on the matrices $\Theta$, $UU^\top$, $VV^\top$ and $UV^\top$, all conditions of Theorem 2 hold with large probability when the sample size $n$ is of the order $r^2 d\log d$.
Theorem 3 Let $d = d_1 + d_2$. Consider $\lambda_1$ and $\lambda_2$ satisfying
$$\lambda_1 = \sigma\sqrt{8\pi_0 d\log d}, \qquad 1 \le \frac{\lambda_2\|\Theta\|_{(F)}}{\lambda_1\{n/(d\log d)\}^{1/4}} \le 2. \qquad (13)$$
Then, there exists a constant $C$ such that
$$n \ge C\max\Bigl\{\mu_0^2 r^2 d\log d,\ (\mu_1 + r)\mu_1^{4/3} r\,d\log d,\ (\alpha_{\mathrm{sp}}^4 \vee \beta_*^4)\,r^2 d\log d\Bigr\} \qquad (14)$$
implies
$$\|\hat\Theta - \Theta\|_{(F)}^2/(d_1 d_2) \le 32(\sigma^2 rd\log d)/n$$
with probability at least $1 - 1/d^2$, where $\mu_0$ and $\mu_1$ are the coherence constants in (11), $\alpha_{\mathrm{sp}} = \alpha_{\mathrm{sp}}(\Theta)$ is the spikiness of $\Theta$ and $\beta_* = \|\Theta\|_{(F)}/(r^{1/2} s_r)$.
We require knowledge of the noise level $\sigma$ to determine the penalty level, which is usually treated as a tuning parameter in practice. The Frobenius norm $\|\Theta\|_{(F)}$ in (13) can be replaced by an estimate of the same magnitude in Theorem 3. In our simulation experiment, we use $\lambda_2 = \lambda_1\{n/(d\log d)\}^{1/4}/\hat F$ with $\hat F = (\sum_{i=1}^n y_i^2/\pi_0)^{1/2}$. The Chebyshev inequality provides $\hat F/\|\Theta\|_{(F)} \approx 1$ when $\alpha_{\mathrm{sp}} = O(1)$ and $\sigma^2 \lesssim \|\Theta\|_\infty^2$.
A key element in our analysis is to find a probabilistic bound for the second inequality of (8), or equivalently an upper bound for
$$P\bigl(\|R(P_T R + P_T)^{-1}(\lambda_2\Theta + \lambda_1 UV^\top)\|_{(S)} > \lambda_1/4\bigr). \qquad (15)$$
This guarantees the existence of a primal dual certificate for the spectrum E-net penalty [14]. For $\lambda_2 = 0$, a similar inequality was proved in [3], where the sample size requirement is $n \ge C_0\min\{\mu^2 r^2(\log d)^2 d,\ \mu^2 r(\log d)^6 d\}$ for a certain coherence factor $\mu$. We remove a log factor in the first bound, resulting in the sample size requirement in (14), which is optimal when $r = O(1)$. For exact recovery in the noiseless case, the sample size $n \asymp rd(\log d)^2$ is sufficient if a golfing scheme is used to construct an approximate dual certificate [4, 11]. We use the following lemma to bound (15).
Lemma 2 Let $H = \sum_{i=1}^n P_{\omega_i}$ where $\omega_i$ are iid points uniformly distributed in $\Omega^*$. Let $R = (H - \pi_0)P_T/(\pi_0 + \lambda_2)$ and $\eta = 1 + \lambda_2/\pi_0$. Let $M$ be a deterministic matrix. Then, there exists a numerical constant $C$ such that, for all $k \ge 1$ and $m \ge 1$,
$$\eta^{2km}\,E\|R^k M\|_{(S)}^{2m} \le \bigl\{C^2\mu_0^2 r^2 d\,km/n\bigr\}^{km}\bigl(\sqrt{d_1 d_2}\,\|M\|_\infty/r\bigr)^{2m}. \qquad (16)$$
We use a different graphical approach than those in [3] to bound $E\,\mathrm{trace}(\{(R^k M)^\top(R^k M)\}^m)$ in the proof of Lemma 2. The rest of the proof of Theorem 3 can be outlined as follows. Assume that all coherence factors are $O(1)$. Let $M = \lambda_2\Theta + \lambda_1 UV^\top$ and write $R(P_T R + P_T)^{-1}M = RM - R^2 M + \cdots + (-1)^{k-1}R^k M + \mathrm{Rem}$. By (16) with $km \asymp \log d$ for $k \ge 2$ and an even simpler bound for $k = 1$ and $\mathrm{Rem}$, (15) holds when $(\sqrt{d_1 d_2}/r)\|M\|_\infty\,\tau \lesssim \lambda_1$, where $\tau^2 \asymp r^2 d(\log d)/n$. Since $\alpha_{\mathrm{sp}} + \mu_1 + \|\Theta\|_{(F)}^2/(r s_r^2) = O(1)$, this is equivalent to $\tau(s_r\lambda_2/\lambda_1 + 1) \lesssim 1$. Finally, we use matrix exponential inequalities [10, 12] to verify the other conditions of Theorem 2. We omit the technical details of the proofs of Lemma 2 and Theorem 3. We would like to point out that if the $r^2$ in (16) can be replaced by $r(\log d)^a$, e.g. $a = 5$ in view of [3], the rest of the proof of Theorem 3 remains intact with $\tau^2 \asymp rd(\log d)^{1+a}/n$ and a proper adjustment of $\lambda_2$ in (13).
Compared with [7] and [9], the main advantage of Theorem 3 is the proportionality of its error bound to the noise level. In [7], the quadratic term $\sum_{i=1}^n M_{\omega_i}^2$ in (2) is replaced by its expectation $\pi_0\|M\|_{(F)}^2$ and the resulting minimizer is proved to satisfy
$$\|\hat\Theta^{(\mathrm{KLT})} - \Theta\|_{(F)}^2/(d_1 d_2) \le C\max(\sigma^2, \|\Theta\|_\infty^2)\,rd(\log d)/n \qquad (17)$$
with large probability, where $C$ is a numerical constant. This error bound achieves the squared error rate $\sigma^2 rd(\log d)/n$ as in Theorem 3 when the noise level $\sigma$ is of no smaller order than $\|\Theta\|_\infty$, but not of smaller order. In particular, (17) does not imply exact recovery when $\sigma = 0$. In Theorem 3, the error bound converges to zero as the noise level diminishes, implying exact recovery in the noiseless case. In [9], a constrained spectrum Lasso was proposed that minimizes (2) subject to $\|M\|_\infty \le \alpha^*/\sqrt{d_1 d_2}$. For $\|\Theta\|_{(F)} \le 1$ and $\alpha_{\mathrm{sp}}(\Theta) \le \alpha^*$, [9] proved
$$\|\hat\Theta^{(\mathrm{NW})} - \Theta\|_{(F)}^2 \le C\max(d_1 d_2\sigma^2, 1)(\alpha^*)^2 rd(\log d)/n \qquad (18)$$
with large probability. Scale change from the above error bound yields
$$\|\hat\Theta^{(\mathrm{NW})} - \Theta\|_{(F)}^2/(d_1 d_2) \le C\max\{\sigma^2, \|\Theta\|_{(F)}^2/(d_1 d_2)\}(\alpha^*)^2 rd(\log d)/n.$$
Since $\alpha^* \ge 1$ and $\alpha^*\|\Theta\|_{(F)}/\sqrt{d_1 d_2} \ge \|\Theta\|_\infty$, the right-hand side of (18) is of no smaller order than that of (17). We shall point out that (17) and (18) only require sample size $n \asymp rd\log d$. In addition, [9] allows more practical weighted sampling models.
Compared with [6], the main advantage of Theorem 3 is the independence of its sample size requirement of the aspect ratio $d_2/d_1$, where $d_2 \ge d_1$ is assumed without loss of generality by symmetry. The error bound in [6] implies
$$\|\hat\Theta^{(\mathrm{KMO})} - \Theta\|_{(F)}^2/(d_1 d_2) \le C_0(s_1/s_r)^4\sigma^2 rd(\log d)/n \qquad (19)$$
for sample size $n \ge C_1^*\,rd\log d + C_2^*\,r^2 d\sqrt{d_2/d_1}$, where $\{C_1^*, C_2^*\}$ are constants depending on the same set of coherence factors as in (14) and $s_1 \ge \cdots \ge s_r$ are the singular values of $\Theta$. Therefore, Theorem 3 effectively replaces the root aspect ratio $\sqrt{d_2/d_1}$ in the sample size requirement of (19) with a log factor, and removes the coherence factor $(s_1/s_r)^4$ on the right-hand side of (19). We note that $s_1/s_r$ is a larger coherence factor than $\|\Theta\|_{(F)}/(r^{1/2}s_r)$ in the sample size requirement in Theorem 3. The root aspect ratio can be removed from the sample size requirement for (19) if $\Theta$ can be divided into square blocks uniformly satisfying the coherence conditions.
4 Simulation study
This experiment has the same setting as in Section 9 of [8]. We provide the description of the simulation settings in our notation as follows: The target matrix is $\Theta = UV^\top$, where $U_{d_1\times r}$ and $V_{d_2\times r}$ are random matrices with independent standard normal entries. The sampling points $\omega_i$ have no tie and $\Omega = \{\omega_i : i = 1,\ldots,n\}$ is a uniformly distributed random subset of $\{1,\ldots,d_1\}\times\{1,\ldots,d_2\}$, where $n$ is fixed. The errors $\varepsilon_i$ are iid $N(0,\sigma^2)$ variables. Thus, the observed matrix is $Y = P_\Omega(\Theta + \varepsilon)$ with $P_\Omega = H = \sum_{i=1}^n P_{\omega_i}$ being a projection. The signal to noise ratio (SNR) is defined as $\mathrm{SNR} = \sqrt{r}/\sigma$.
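A NumPy sketch of this simulation setup (our own illustration; the function name is hypothetical):

```python
import numpy as np

def make_completion_problem(d1, d2, r, n, sigma, seed=0):
    """Section 4 setting: Theta = U V^T with standard normal factors,
    n sampled entries without ties, iid N(0, sigma^2) noise; SNR = sqrt(r)/sigma."""
    rng = np.random.default_rng(seed)
    Theta = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
    idx = rng.choice(d1 * d2, size=n, replace=False)  # Omega, uniform, no ties
    mask = np.zeros(d1 * d2, dtype=bool)
    mask[idx] = True
    mask = mask.reshape(d1, d2)
    Y = np.where(mask, Theta + sigma * rng.standard_normal((d1, d2)), 0.0)
    return Theta, Y, mask
```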
We compare the calibrated spectrum E-net (4) with the spectrum Lasso (2) and its modification $\hat\Theta^{(\mathrm{KLT})}$ of [7]. For all methods, we compute a series of estimators with 100 different penalty levels, where the smallest penalty level corresponds to a full-rank solution and the largest penalty level corresponds to a zero solution. For the calibrated spectrum E-net, we always use $\lambda_2 = \lambda_1\{n/(d\log d)\}^{1/4}/\hat F$, where $\hat F = (\sum_{i=1}^n y_i^2/\pi_0)^{1/2}$ is an estimator for $\|\Theta\|_{(F)}$. We plot the training errors and test errors as functions of estimated ranks, where the training and test errors are defined as
$$\text{Training error} = \frac{\|P_\Omega(\hat\Theta - Y)\|_{(F)}^2}{\|P_\Omega Y\|_{(F)}^2}, \qquad \text{Test error} = \frac{\|P_\Omega^\perp(\hat\Theta - \Theta)\|_{(F)}^2}{\|P_\Omega^\perp\Theta\|_{(F)}^2}.$$
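Given the 0-1 indicator of $\Omega$, these two error measures can be computed as follows (our own sketch):

```python
import numpy as np

def training_test_errors(Theta_hat, Theta, Y, mask):
    """Training/test errors as defined above (squared Frobenius norms
    restricted to Omega and its complement, respectively)."""
    fro2 = lambda A: np.sum(A ** 2)
    train = fro2(mask * (Theta_hat - Y)) / fro2(mask * Y)
    test = fro2(~mask * (Theta_hat - Theta)) / fro2(~mask * Theta)
    return train, test
```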
In Figure 1, we report the estimation performance of the three methods. The rank of $\Theta$ is 10 but $\{\Theta, \Omega, \varepsilon\}$ are regenerated in each replication. Different noise levels and proportions of observed entries are considered. All the results are averaged over 50 replications. In this experiment, the calibrated spectrum E-net and the spectrum Lasso estimator have very close testing and training errors, and both of them significantly outperform the modified Lasso. Figure 1 also illustrates that in most cases, the calibrated spectrum E-net and spectrum Lasso achieve the optimal test error when the estimated rank is around the true rank.
We note that the constrained spectrum Lasso estimator $\hat\Theta^{(\mathrm{NW})}$ would have the same performance as the spectrum Lasso when the constraint $\alpha_{\mathrm{sp}}(\hat\Theta) \le \alpha^*$ is set with a sufficiently high $\alpha^*$. However, the analytic properties of the spectrum Lasso are unclear without constraint or modification.
5 Proof of Theorem 2
The proof of Theorem 2 requires the following proposition, which controls the approximation error of the Taylor expansion of the nuclear norm with subdifferentiation. The result, closely related to those in [13], is used to control the variation of the tangent space of the spectrum E-net estimator. We omit its proof.
[Figure 1 here: six panels of Error versus Rank, one for each combination of $\pi_0 \in \{0.2, 0.5, 0.8\}$ and $\mathrm{SNR} \in \{1, 10\}$.]
Figure 1: Plots of training and testing errors against the estimated rank: testing error with solid lines; training error with dashed lines; spectrum Lasso in blue, calibrated spectrum E-net in red; modified spectrum Lasso in black; $d_1 = d_2 = 100$, $\mathrm{rank}(\Theta) = 10$.
Proposition 1 Let $\Theta = UDV^\top$ be the SVD and $M$ be another matrix. Then,
$$0 \le \|M\|_{(N)} - \|\Theta\|_{(N)} - \|P_T^\perp M\|_{(N)} - \langle UV^\top, M - \Theta\rangle \le \|(P_T M - \Theta)VD^{-1/2}\|_{(F)}^2 + \|D^{-1/2}U^\top(P_T M - \Theta)\|_{(F)}^2.$$
Proof of Theorem 2. Define
$$\Theta^* = (P_T H P_T + \lambda_2 P_T)^{-1}(P_T\varepsilon + P_T H\Theta - \lambda_1 UV^\top), \qquad \bar\Theta = (\pi_0 + \lambda_2)^{-1}(\pi_0\Theta - \lambda_1 UV^\top),$$
$$\tilde\Delta = \tilde\Theta - \Theta^*, \qquad \Delta^* = \Theta^* - \bar\Theta, \qquad \bar\Delta = \tilde\Theta - \bar\Theta.$$
Since $\hat\Theta = \eta\tilde\Theta$ and $\eta\bar\Theta - \Theta = -(\lambda_1/\pi_0)UV^\top$,
$$\|\hat\Theta - \Theta\|_{(F)} \le \eta\|\bar\Delta\|_{(F)} + \|\eta\bar\Theta - \Theta\|_{(F)} = \eta\|\bar\Delta\|_{(F)} + \sqrt{r}\,\lambda_1/\pi_0 \qquad (20)$$
$$\le \eta\|\tilde\Delta\|_{(F)} + \eta\|\Delta^*\|_{(F)} + \sqrt{r}\,\lambda_1/\pi_0. \qquad (21)$$
We consider two cases by comparing $\lambda_2$ and $\pi_0$.
Case 1: $\lambda_2 \le \pi_0$. By algebra $\eta\Delta^* = \pi_0^{-1}(P_T R + P_T)^{-1}P_T(\varepsilon + \Delta)$, so that
$$\eta\|\Delta^*\|_{(F)} \le \pi_0^{-1}\|(P_T R + P_T)^{-1}\|_{(\mathrm{op})}\,\|P_T\varepsilon + P_T\Delta\|_{(F)} \le \sqrt{r}\,\lambda_1/(2\pi_0). \qquad (22)$$
The last inequality above follows from the first inequalities in (7), (8) and (9). It remains to bound $\|\tilde\Delta\|_{(F)}$. Let $Y = \sum_{i=1}^n y_i E_{\omega_i}$. We write the spectrum E-net estimator (3) as
$$\tilde\Theta = \arg\min_M\Bigl\{\langle HM, M\rangle/2 - \langle Y, M\rangle + \lambda_1\|M\|_{(N)} + (\lambda_2/2)\|M\|_{(F)}^2\Bigr\}.$$
It follows that for a certain member $\hat G$ of the sub-differential of $\|M\|_{(N)}$ at $M = \tilde\Theta$,
$$0 = \nabla L_{\lambda_1,\lambda_2}(\tilde\Theta) = H\tilde\Theta - Y + \lambda_2\tilde\Theta + \lambda_1\hat G = (H + \lambda_2)\tilde\Delta + (H + \lambda_2)\Theta^* - Y + \lambda_1\hat G.$$
Let $\mathrm{Rem}_1 = \|\Theta^*\|_{(N)} - \langle UV^\top, \Theta^*\rangle$. Since $\|\Theta^*\|_{(N)} - \|\tilde\Theta\|_{(N)} \ge -\langle\tilde\Delta, \hat G\rangle$, we have
$$\begin{aligned}
\langle(H+\lambda_2)\tilde\Delta, \tilde\Delta\rangle
&\le \langle H\Theta + \varepsilon - (H+\lambda_2)\Theta^*, \tilde\Delta\rangle + \lambda_1\|\Theta^*\|_{(N)} - \lambda_1\|\tilde\Theta\|_{(N)} \\
&= \langle H(\Theta - \Theta^*) + \varepsilon - \lambda_2\Theta^*, \tilde\Delta\rangle + \lambda_1\mathrm{Rem}_1 + \lambda_1\langle UV^\top, \Theta^*\rangle - \lambda_1\|\tilde\Theta\|_{(N)} \\
&\le \lambda_1\mathrm{Rem}_1 + \langle\varepsilon + H(\Theta - \Theta^*) - \lambda_2\Theta^* - \lambda_1 UV^\top, \tilde\Delta\rangle - \lambda_1\|P_T^\perp\tilde\Delta\|_{(N)} \\
&= \lambda_1\mathrm{Rem}_1 + \langle\varepsilon + H(\Theta - \Theta^*), P_T^\perp\tilde\Delta\rangle - \lambda_1\|P_T^\perp\tilde\Delta\|_{(N)}.
\end{aligned} \qquad (23)$$
The second inequality in (23) is due to $\|\tilde\Theta\|_{(N)} \ge \|P_T^\perp\tilde\Theta\|_{(N)} + \langle UV^\top, \tilde\Theta\rangle$ and $P_T^\perp\tilde\Theta = P_T^\perp\tilde\Delta$. The last equality in (23) follows from the definition of $\Theta^* \in T$, since it gives $P_T\varepsilon + P_T H(\Theta - \Theta^*) - \lambda_2\Theta^* - \lambda_1 UV^\top = -(P_T H P_T + \lambda_2 P_T)\Theta^* + P_T\varepsilon + P_T H\Theta - \lambda_1 UV^\top = 0$. By the definitions of $Q$, $\Theta^*$ and $\bar\Theta$, $\varepsilon + H(\Theta - \Theta^*) = Q\varepsilon + H(\Theta - \bar\Theta) - H(P_T H P_T + \lambda_2 P_T)^{-1}P_T\varepsilon$. Since $P_T^\perp H P_T = P_T^\perp(H - \pi_0)P_T = P_T^\perp R(\pi_0 + \lambda_2)$ and $(H - \pi_0)(\Theta - \bar\Theta) = \Delta$, we find
$$\langle\varepsilon + H(\Theta - \Theta^*), P_T^\perp\tilde\Delta\rangle
= \langle Q\varepsilon + (H - \pi_0)\{\Theta - \bar\Theta - (P_T H P_T + \lambda_2 P_T)^{-1}P_T\varepsilon\}, P_T^\perp\tilde\Delta\rangle
= \langle Q\varepsilon + \Delta - R(P_T R + P_T)^{-1}P_T\varepsilon, P_T^\perp\tilde\Delta\rangle.$$
Thus, by the second inequalities of (8) and (9),
$$\langle\varepsilon + H(\Theta - \Theta^*), P_T^\perp\tilde\Delta\rangle \le \lambda_1\|P_T^\perp\tilde\Delta\|_{(N)}. \qquad (24)$$
Since $\Delta^* = \Theta^* - \bar\Theta \in T$ and the singular values of $\bar\Theta$ are no smaller than $(\pi_0 s_r - \lambda_1)/(\pi_0 + \lambda_2) \ge (s_r - \lambda_1/\lambda_2)/\eta \ge 4\lambda_1/(\lambda_2\eta)$ by the second inequality in (7), Proposition 1 and (22) imply
$$\mathrm{Rem}_1 \le 2\|\Theta^* - \bar\Theta\|_{(F)}^2\big/\{(\pi_0 s_r - \lambda_1)/(\pi_0 + \lambda_2)\} \le r(\lambda_1/\pi_0)^2/(8\eta\lambda_1/\lambda_2). \qquad (25)$$
It follows from (23), (24) and (25) that
$$\eta^2\|\tilde\Delta\|_{(F)}^2 \le \eta^2\langle(H + \lambda_2)\tilde\Delta, \tilde\Delta\rangle/\lambda_2 \le \eta^2(\lambda_1/\lambda_2)\mathrm{Rem}_1 \le r\lambda_1^2/(4\pi_0^2). \qquad (26)$$
Therefore, the error bound (10) follows from (21), (22) and (26).
Case 2: $\lambda_2 \ge \pi_0$. By applying the derivation of (23) to $\bar\Theta$ instead of $\Theta^*$, we find
$$\langle(H + \lambda_2)\bar\Delta, \bar\Delta\rangle + \lambda_1\|P_T^\perp\bar\Delta\|_{(N)} \le \lambda_1\bigl\{\|\bar\Theta\|_{(N)} - \langle UV^\top, \bar\Theta\rangle\bigr\} + \langle\varepsilon + H(\Theta - \bar\Theta) - \lambda_2\bar\Theta - \lambda_1 UV^\top, \bar\Delta\rangle.$$
By the definitions of $\bar\Theta$, $R$, and $\Delta$, $\Delta = (H - \pi_0)(\Theta - \bar\Theta) = H(\Theta - \bar\Theta) - \lambda_2\bar\Theta - \lambda_1 UV^\top$. This and $\|\bar\Theta\|_{(N)} = \langle UV^\top, \bar\Theta\rangle$ give
$$\langle(H + \lambda_2)\bar\Delta, \bar\Delta\rangle + \lambda_1\|P_T^\perp\bar\Delta\|_{(N)} \le \langle\varepsilon + \Delta, \bar\Delta\rangle. \qquad (27)$$
Since $\|P_T^\perp(\varepsilon + \Delta)\|_{(S)} = \|P_T^\perp\Delta\|_{(S)} \le \lambda_1$ by the third inequality in (9), we have
$$\langle P_T^\perp(\varepsilon + \Delta), \bar\Delta\rangle \le \lambda_1\|P_T^\perp\bar\Delta\|_{(N)}. \qquad (28)$$
It follows from (27), (28) and the first inequalities of (8) and (9) that
$$\lambda_2\|\bar\Delta\|_{(F)}^2 \le \langle P_T(\varepsilon + \Delta), \bar\Delta\rangle \le \bigl\{\|P_T\varepsilon\|_{(F)} + \|P_T\Delta\|_{(F)}\bigr\}\|\bar\Delta\|_{(F)} \le \sqrt{r}\,\lambda_1\|\bar\Delta\|_{(F)}/2.$$
Thus, due to $\lambda_2 \ge \pi_0$,
$$\eta\|\bar\Delta\|_{(F)} \le (\eta/\lambda_2)\sqrt{r}\,\lambda_1/2 \le \sqrt{r}\,\lambda_1/\pi_0. \qquad (29)$$
Therefore, the error bound (10) follows from (20) and (29).
Acknowledgments
This research is partially supported by the NSF Grants DMS 0906420, DMS-11-06753 and DMS-12-09014, and NSA Grant H98230-11-1-0205.
References
[1] ACM SIGKDD and Netflix. Proceedings of KDD Cup and workshop. 2007.
[2] E. Candes and B. Recht. Exact matrix completion via convex optimization. Found. Comput. Math., 9:717–772, 2009.
[3] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inform. Theory, 56(5):2053–2080, 2009.
[4] D. Gross. Recovering low-rank matrices from few coefficients in any basis. CoRR, abs/0910.1879, 2009.
[5] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):2980–2998, 2010.
[6] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 11:2057–2078, 2010.
[7] V. Koltchinskii, K. Lounici, and A. B. Tsybakov. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. The Annals of Statistics, 39:2302–2329, 2011.
[8] R. Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. Journal of Machine Learning Research, 11:2287–2322, 2010.
[9] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. 2010.
[10] R. I. Oliveira. Concentration of the adjacency matrix and of the Laplacian in random graphs with independent edges. Technical Report arXiv:0911.0600, arXiv, 2010.
[11] B. Recht. A simpler approach to matrix completion. Journal of Machine Learning Research, 12:3413–3430, 2011.
[12] J. A. Tropp. User-friendly tail bounds for sums of random matrices. Found. Comput. Math., doi:10.1007/s10208-011-9099-z, 2011.
[13] P.-A. Wedin. Perturbation bounds in connection with singular value decomposition. BIT, 12:99–111, 1972.
[14] C.-H. Zhang and T. Zhang. A general framework of dual certificate analysis for structured sparse recovery problems. Technical report, arXiv:1201.3302v1, 2012.
[15] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. J. R. Statist. Soc. B, 67:301–320, 2005.
| 4823 |@word inversion:1 seems:1 proportion:6 norm:16 c0:2 proportionality:1 d2:34 km:24 simulation:6 hu:6 decomposition:3 solid:1 moment:1 series:1 outperforms:1 current:1 com:5 comparing:1 must:1 numerical:2 kdd:1 analytic:1 remove:3 plot:2 implying:1 guess:1 rku:1 provides:6 certificate:4 math:2 simpler:3 zhang:3 k2m:1 along:1 c2:2 direct:1 differential:1 replication:2 prove:2 upenn:1 expected:1 roughly:1 cand:1 rem:2 becomes:1 project:1 estimating:2 notation:5 provided:3 biostatistics:1 minimizes:1 unified:1 guarantee:1 friendly:1 tie:2 scaled:3 k2:26 rm:1 control:2 unit:1 grant:2 omit:2 appear:1 limit:1 lev:1 black:1 initialization:1 koltchinskii:1 range:1 averaged:1 practical:1 acknowledgment:1 testing:3 recursive:1 practice:1 block:1 procedure:2 kpt:16 significantly:1 projection:4 ud1:1 close:1 selection:1 operator:1 impossible:2 applying:1 equivalent:2 imposed:1 deterministic:1 missing:3 convex:6 recovery:9 m2:3 estimator:11 nuclear:9 oh:2 variation:2 annals:1 target:4 suppose:5 pt:50 user:2 exact:8 us:1 element:1 satisfying:2 observed:4 solved:1 vd2:1 sun:1 removed:1 gross:1 convexity:1 solving:1 algebra:1 completely:1 basis:1 easily:1 jersey:1 mht:1 derivation:1 describe:1 kp:5 doi:1 outside:1 larger:1 solve:3 otherwise:1 statistic:3 gi:1 noisy:5 advantage:3 net:24 propose:2 product:1 achieve:1 description:1 frobenius:9 kv:1 asserts:1 ky:1 convergence:2 requirement:9 diverges:1 r1:2 converges:4 illustrate:1 develop:3 completion:15 stat:1 derive:2 depending:1 op:3 school:1 strong:1 soc:1 recovering:1 implies:2 closely:1 correct:2 adjacency:1 require:2 assign:1 proposition:3 hold:3 around:1 considered:4 sufficiently:1 normal:1 mapping:2 bj:2 nw:3 achieves:3 smallest:1 estimation:5 diminishes:1 largest:2 weighted:2 minimization:4 always:1 modified:5 pn:6 cr:1 focus:2 klt:2 rank:23 sigkdd:1 el:1 typically:1 interested:1 tao:1 arg:6 among:1 ill:1 dual:3 denoted:1 constrained:3 special:1 initialize:1 wharton:2 genuine:1 construct:1 sampling:2 nearly:4 imp:5 regenerated:1 others:1 report:3 few:2 composed:1 replaced:3 replacement:1 ab:1 interest:1 nsa:1 analyzed:1 wedin:1 hpt:8 primal:1 edge:1 orthogonal:1 incomplete:3 old:3 taylor:1 desired:1 soft:3 cover:1 tingni:2 calibrate:1 subset:2 entry:8 snr:8 kq:1 calibrated:14 recht:2 negahban:1 ie:1 probabilistic:1 continuously:1 squared:2 dr:2 leading:1 return:1 coefficient:1 satisfy:1 caused:1 kzk2:1 view:1 root:2 analyze:1 red:1 netflix:2 czhang:1 parallel:4 candes:1 minimize:2 square:2 accuracy:1 yield:1 comparably:1 iid:4 inform:1 duel:1 definition:3 against:1 dm:2 proof:11 proved:3 knowledge:2 organized:1 formulation:1 lounici:1 shrink:1 generality:1 until:3 hand:2 okm:1 tropp:1 replacing:1 keshavan:2 normalized:1 verify:1 true:1 regularization:4 equality:1 reformulating:1 iteratively:1 nonzero:1 white:1 impute:1 generalized:1 complete:4 performs:1 imputing:1 tail:1 he:1 m1:2 cup:1 ai:2 rd:12 tuning:1 outlined:1 pointed:1 calibration:2 longer:1 recent:1 certain:7 inequality:12 calligraphic:1 yi:11 uncorrupted:1 additional:1 greater:1 converge:1 determine:1 dashed:1 signal:1 full:1 flatness:1 technical:3 divided:1 laplacian:1 regression:1 essentially:1 expectation:2 noiseless:4 rutgers:2 arxiv:3 iteration:2 c1:2 proposal:1 justified:1 addition:1 spikiness:2 singular:9 extra:2 rest:4 hhm:1 sr:12 subject:2 member:1 call:2 integer:1 near:1 presence:1 ideal:1 independence:1 pennsylvania:2 lasso:13 hastie:2 inner:1 reduce:1 chebyshev:1 motivated:1 trimmed:1 penalty:16 remark:1 tsybakov:1 oliveira:1 statist:1 imputed:1 outperform:1 nsf:1 
estimated:3 tibshirani:1 blue:1 write:2 shall:1 key:1 achieving:1 kqk:1 v1:1 graph:1 relaxation:1 fraction:2 sum:4 run:2 letter:1 throughout:2 dkm:1 coherence:12 ob:7 bit:1 bound:29 guaranteed:1 distinguish:1 quadratic:4 replaces:2 constraint:4 infinity:1 hy:1 aspect:3 speed:1 min:9 department:2 structured:1 piscataway:1 alternate:1 combination:1 kd:1 smaller:8 em:6 cun:1 modification:2 s1:5 dv:3 intuitively:1 multiplicity:3 restricted:1 remains:1 hh:2 maxjk:1 available:1 observe:1 appropriate:1 spectral:1 alternative:1 existence:1 graphical:1 especially:1 s10208:1 concentration:1 surrogate:1 unclear:1 hq:2 nx:2 equip:1 index:1 ratio:5 equivalently:1 difficult:1 sharper:1 trace:2 proper:5 unknown:3 upper:1 observation:7 regularizes:1 perturbation:1 posedness:1 rating:1 introduced:1 connection:1 trans:1 usually:1 sparsity:1 max:7 wainwright:1 power:1 suitable:2 eh:1 scheme:1 mjk:1 movie:1 imply:3 hm:1 philadelphia:1 tangent:2 loss:1 proportional:1 penalization:1 vectorized:1 sufficient:2 thresholding:3 penalized:1 repeat:1 last:2 supported:1 enjoys:1 bias:2 side:2 sparse:2 distributed:3 kzk:1 dimension:1 kz:3 fb:5 transaction:1 approximate:1 technicality:1 assumed:2 spectrum:36 iterative:6 ku:1 elastic:6 symmetry:1 mazumder:1 expansion:1 meanwhile:1 zou:1 diag:2 sp:8 main:4 yi2:2 montanari:2 noise:16 h98230:1 allowed:1 golfing:1 sub:1 exponential:1 comput:2 lie:1 third:1 theorem:22 rk:4 r2:9 concern:1 exists:2 workshop:1 effectively:1 kr:1 hui:1 corr:1 magnitude:1 illustrates:1 rd1:4 led:1 simply:1 appearance:1 lagrange:1 adjustment:1 partially:1 corresponds:2 minimizer:1 satisfies:1 acm:1 conditional:1 goal:1 hm1:1 identity:1 change:1 uniformly:4 lemma:9 svd:6 e:1 intact:1 support:2 d1:30 |
4,225 | 4,824 | ImageNet Classification with Deep Convolutional
Neural Networks
Alex Krizhevsky
University of Toronto
[email protected]
Ilya Sutskever
University of Toronto
[email protected]
Geoffrey E. Hinton
University of Toronto
[email protected]
Abstract
We trained a large, deep convolutional neural network to classify the 1.2 million
high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5%
and 17.0% which is considerably better than the previous state-of-the-art. The
neural network, which has 60 million parameters and 650,000 neurons, consists
of five convolutional layers, some of which are followed by max-pooling layers,
and three fully-connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully-connected
layers we employed a recently-developed regularization method called "dropout"
that proved to be very effective. We also entered a variant of this model in the
ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%,
compared to 26.2% achieved by the second-best entry.
1
Introduction
Current approaches to object recognition make essential use of machine learning methods. To improve their performance, we can collect larger datasets, learn more powerful models, and use better techniques for preventing overfitting. Until recently, datasets of labeled images were relatively
small, on the order of tens of thousands of images (e.g., NORB [16], Caltech-101/256 [8, 9], and CIFAR-10/100 [12]). Simple recognition tasks can be solved quite well with datasets of this size, especially if they are augmented with label-preserving transformations. For example, the current best error rate on the MNIST digit-recognition task (<0.3%) approaches human performance [4].
But objects in realistic settings exhibit considerable variability, so to learn to recognize them it is
necessary to use much larger training sets. And indeed, the shortcomings of small image datasets
have been widely recognized (e.g., Pinto et al. [21]), but it has only recently become possible to collect labeled datasets with millions of images. The new larger datasets include LabelMe [23], which
consists of hundreds of thousands of fully-segmented images, and ImageNet [6], which consists of
over 15 million labeled high-resolution images in over 22,000 categories.
To learn about thousands of objects from millions of images, we need a model with a large learning
capacity. However, the immense complexity of the object recognition task means that this problem cannot be specified even by a dataset as large as ImageNet, so our model should also have lots
of prior knowledge to compensate for all the data we don't have. Convolutional neural networks
(CNNs) constitute one such class of models [16, 11, 13, 18, 15, 22, 26]. Their capacity can be controlled by varying their depth and breadth, and they also make strong and mostly correct assumptions
about the nature of images (namely, stationarity of statistics and locality of pixel dependencies).
Thus, compared to standard feedforward neural networks with similarly-sized layers, CNNs have
much fewer connections and parameters and so they are easier to train, while their theoretically-best
performance is likely to be only slightly worse.
1
Despite the attractive qualities of CNNs, and despite the relative efficiency of their local architecture,
they have still been prohibitively expensive to apply in large scale to high-resolution images. Luckily, current GPUs, paired with a highly-optimized implementation of 2D convolution, are powerful
enough to facilitate the training of interestingly-large CNNs, and recent datasets such as ImageNet
contain enough labeled examples to train such models without severe overfitting.
The specific contributions of this paper are as follows: we trained one of the largest convolutional
neural networks to date on the subsets of ImageNet used in the ILSVRC-2010 and ILSVRC-2012
competitions [2] and achieved by far the best results ever reported on these datasets. We wrote a
highly-optimized GPU implementation of 2D convolution and all the other operations inherent in
training convolutional neural networks, which we make available publicly¹. Our network contains
a number of new and unusual features which improve its performance and reduce its training time,
which are detailed in Section 3. The size of our network made overfitting a significant problem, even
with 1.2 million labeled training examples, so we used several effective techniques for preventing
overfitting, which are described in Section 4. Our final network contains five convolutional and
three fully-connected layers, and this depth seems to be important: we found that removing any
convolutional layer (each of which contains no more than 1% of the model's parameters) resulted in
inferior performance.
In the end, the network?s size is limited mainly by the amount of memory available on current GPUs
and by the amount of training time that we are willing to tolerate. Our network takes between five
and six days to train on two GTX 580 3GB GPUs. All of our experiments suggest that our results
can be improved simply by waiting for faster GPUs and bigger datasets to become available.
2 The Dataset
ImageNet is a dataset of over 15 million labeled high-resolution images belonging to roughly 22,000
categories. The images were collected from the web and labeled by human labelers using Amazon's Mechanical Turk crowd-sourcing tool. Starting in 2010, as part of the Pascal Visual Object
Challenge, an annual competition called the ImageNet Large-Scale Visual Recognition Challenge
(ILSVRC) has been held. ILSVRC uses a subset of ImageNet with roughly 1000 images in each of
1000 categories. In all, there are roughly 1.2 million training images, 50,000 validation images, and
150,000 testing images.
ILSVRC-2010 is the only version of ILSVRC for which the test set labels are available, so this is
the version on which we performed most of our experiments. Since we also entered our model in
the ILSVRC-2012 competition, in Section 6 we report our results on this version of the dataset as
well, for which test set labels are unavailable. On ImageNet, it is customary to report two error rates:
top-1 and top-5, where the top-5 error rate is the fraction of test images for which the correct label
is not among the five labels considered most probable by the model.
ImageNet consists of variable-resolution images, while our system requires a constant input dimensionality. Therefore, we down-sampled the images to a fixed resolution of 256 × 256. Given a rectangular image, we first rescaled the image such that the shorter side was of length 256, and then cropped out the central 256 × 256 patch from the resulting image. We did not pre-process the images
in any other way, except for subtracting the mean activity over the training set from each pixel. So
we trained our network on the (centered) raw RGB values of the pixels.
3 The Architecture
The architecture of our network is summarized in Figure 2. It contains eight learned layers:
five convolutional and three fully-connected. Below, we describe some of the novel or unusual
features of our network?s architecture. Sections 3.1-3.4 are sorted according to our estimation of
their importance, with the most important first.
¹ http://code.google.com/p/cuda-convnet/
3.1 ReLU Nonlinearity
The standard way to model a neuron's output $f$ as a function of its input $x$ is with $f(x) = \tanh(x)$ or $f(x) = (1 + e^{-x})^{-1}$. In terms of training time with gradient descent, these saturating nonlinearities are much slower than the non-saturating nonlinearity $f(x) = \max(0, x)$. Following Nair and Hinton [20], we refer to neurons with this nonlinearity as Rectified Linear Units (ReLUs). Deep convolutional neural networks with ReLUs train several times faster than their equivalents with tanh units. This is demonstrated in Figure 1, which shows the number of iterations required to reach 25% training error on the CIFAR-10 dataset for a particular four-layer convolutional network. This plot shows that we would not have been able to experiment with such large neural networks for this work if we had used traditional saturating neuron models.

Figure 1: A four-layer convolutional neural network with ReLUs (solid line) reaches a 25% training error rate on CIFAR-10 six times faster than an equivalent network with tanh neurons (dashed line). The learning rates for each network were chosen independently to make training as fast as possible. No regularization of any kind was employed. The magnitude of the effect demonstrated here varies with network architecture, but networks with ReLUs consistently learn several times faster than equivalents with saturating neurons.

We are not the first to consider alternatives to traditional neuron models in CNNs. For example, Jarrett et al. [11] claim that the nonlinearity $f(x) = |\tanh(x)|$ works particularly well with their type of contrast normalization followed by local average pooling on the Caltech-101 dataset. However, on this dataset the primary concern is preventing overfitting, so the effect they are observing is different from the accelerated ability to fit the training set which we report when using ReLUs. Faster learning has a great influence on the performance of large models trained on large datasets.
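For concreteness, a minimal NumPy sketch of the ReLU and its derivative (our own, not part of the paper's released code):

```python
import numpy as np

def relu(x):
    """Non-saturating nonlinearity f(x) = max(0, x)."""
    return np.maximum(0.0, x)

def relu_grad(x):
    """The derivative is 1 for x > 0 and 0 otherwise, so gradients do not
    vanish for large positive inputs the way tanh gradients do for large |x|."""
    return (x > 0).astype(x.dtype)
```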
3.2 Training on Multiple GPUs
A single GTX 580 GPU has only 3GB of memory, which limits the maximum size of the networks
that can be trained on it. It turns out that 1.2 million training examples are enough to train networks
which are too big to fit on one GPU. Therefore we spread the net across two GPUs. Current GPUs
are particularly well-suited to cross-GPU parallelization, as they are able to read from and write to
one another's memory directly, without going through host machine memory. The parallelization
scheme that we employ essentially puts half of the kernels (or neurons) on each GPU, with one
additional trick: the GPUs communicate only in certain layers. This means that, for example, the
kernels of layer 3 take input from all kernel maps in layer 2. However, kernels in layer 4 take input
only from those kernel maps in layer 3 which reside on the same GPU. Choosing the pattern of
connectivity is a problem for cross-validation, but this allows us to precisely tune the amount of
communication until it is an acceptable fraction of the amount of computation.
The resultant architecture is somewhat similar to that of the "columnar" CNN employed by Cireşan
et al. [5], except that our columns are not independent (see Figure 2). This scheme reduces our top-1
and top-5 error rates by 1.7% and 1.2%, respectively, as compared with a net with half as many
kernels in each convolutional layer trained on one GPU. The two-GPU net takes slightly less time
to train than the one-GPU net².
² The one-GPU net actually has the same number of kernels as the two-GPU net in the final convolutional layer. This is because most of the net's parameters are in the first fully-connected layer, which takes the last convolutional layer as input. So to make the two nets have approximately the same number of parameters, we did not halve the size of the final convolutional layer (nor the fully-connected layers which follow). Therefore this comparison is biased in favor of the one-GPU net, since it is bigger than "half the size" of the two-GPU net.
3.3 Local Response Normalization
ReLUs have the desirable property that they do not require input normalization to prevent them from saturating. If at least some training examples produce a positive input to a ReLU, learning will happen in that neuron. However, we still find that the following local normalization scheme aids generalization. Denoting by $a^i_{x,y}$ the activity of a neuron computed by applying kernel $i$ at position $(x, y)$ and then applying the ReLU nonlinearity, the response-normalized activity $b^i_{x,y}$ is given by the expression
$$b^i_{x,y} = a^i_{x,y}\Big/\Big(k + \alpha\sum_{j=\max(0,\,i-n/2)}^{\min(N-1,\,i+n/2)}(a^j_{x,y})^2\Big)^{\beta}$$
where the sum runs over $n$ "adjacent" kernel maps at the same spatial position, and $N$ is the total number of kernels in the layer. The ordering of the kernel maps is of course arbitrary and determined before training begins. This sort of response normalization implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels. The constants $k$, $n$, $\alpha$, and $\beta$ are hyper-parameters whose values are determined using a validation set; we used $k = 2$, $n = 5$, $\alpha = 10^{-4}$, and $\beta = 0.75$. We applied this normalization after applying the ReLU nonlinearity in certain layers (see Section 3.5).
This scheme bears some resemblance to the local contrast normalization scheme of Jarrett et al. [11], but ours would be more correctly termed "brightness normalization", since we do not subtract the mean activity. Response normalization reduces our top-1 and top-5 error rates by 1.4% and 1.2%, respectively. We also verified the effectiveness of this scheme on the CIFAR-10 dataset: a four-layer CNN achieved a 13% test error rate without normalization and 11% with normalization³.
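A direct NumPy transcription of the normalization formula above (our own sketch, looping over kernel maps for clarity rather than speed):

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """Cross-map response normalization of Section 3.3.
    a has shape (N, H, W): N kernel maps indexed by i."""
    N = a.shape[0]
    b = np.empty_like(a)
    for i in range(N):
        lo, hi = max(0, i - n // 2), min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b
```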
3.4 Overlapping Pooling
Pooling layers in CNNs summarize the outputs of neighboring groups of neurons in the same kernel map. Traditionally, the neighborhoods summarized by adjacent pooling units do not overlap (e.g., [17, 11, 4]). To be more precise, a pooling layer can be thought of as consisting of a grid of pooling units spaced $s$ pixels apart, each summarizing a neighborhood of size $z \times z$ centered at the location of the pooling unit. If we set $s = z$, we obtain traditional local pooling as commonly employed in CNNs. If we set $s < z$, we obtain overlapping pooling. This is what we use throughout our network, with $s = 2$ and $z = 3$. This scheme reduces the top-1 and top-5 error rates by 0.4% and 0.3%, respectively, as compared with the non-overlapping scheme $s = 2$, $z = 2$, which produces output of equivalent dimensions. We generally observe during training that models with overlapping pooling find it slightly more difficult to overfit.
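A minimal sketch of overlapping max pooling for a single 2D map (our own; the network applies this per kernel map):

```python
import numpy as np

def max_pool2d(x, z=3, s=2):
    """Max pooling with window z and stride s; s < z gives overlapping pooling."""
    H, W = x.shape
    out_h, out_w = (H - z) // s + 1, (W - z) // s + 1
    out = np.empty((out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = x[i * s:i * s + z, j * s:j * s + z].max()
    return out
```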
3.5 Overall Architecture
Now we are ready to describe the overall architecture of our CNN. As depicted in Figure 2, the net
contains eight layers with weights; the first five are convolutional and the remaining three are fullyconnected. The output of the last fully-connected layer is fed to a 1000-way softmax which produces
a distribution over the 1000 class labels. Our network maximizes the multinomial logistic regression
objective, which is equivalent to maximizing the average across training cases of the log-probability
of the correct label under the prediction distribution.
The kernels of the second, fourth, and fifth convolutional layers are connected only to those kernel
maps in the previous layer which reside on the same GPU (see Figure 2). The kernels of the third
convolutional layer are connected to all kernel maps in the second layer. The neurons in the fully-connected layers are connected to all neurons in the previous layer. Response-normalization layers
follow the first and second convolutional layers. Max-pooling layers, of the kind described in Section
3.4, follow both response-normalization layers as well as the fifth convolutional layer. The ReLU
non-linearity is applied to the output of every convolutional and fully-connected layer.
The first convolutional layer filters the $224 \times 224 \times 3$ input image with 96 kernels of size $11 \times 11 \times 3$ with a stride of 4 pixels (this is the distance between the receptive field centers of neighboring neurons in a kernel map). The second convolutional layer takes as input the (response-normalized and pooled) output of the first convolutional layer and filters it with 256 kernels of size $5 \times 5 \times 48$. The third, fourth, and fifth convolutional layers are connected to one another without any intervening pooling or normalization layers. The third convolutional layer has 384 kernels of size $3 \times 3 \times 256$ connected to the (normalized, pooled) outputs of the second convolutional layer. The fourth convolutional layer has 384 kernels of size $3 \times 3 \times 192$, and the fifth convolutional layer has 256 kernels of size $3 \times 3 \times 192$. The fully-connected layers have 4096 neurons each.

³ We cannot describe this network in detail due to space constraints, but it is specified precisely by the code and parameter files provided here: http://code.google.com/p/cuda-convnet/.

Figure 2: An illustration of the architecture of our CNN, explicitly showing the delineation of responsibilities between the two GPUs. One GPU runs the layer-parts at the top of the figure while the other runs the layer-parts at the bottom. The GPUs communicate only at certain layers. The network's input is 150,528-dimensional, and the number of neurons in the network's remaining layers is given by 253,440–186,624–64,896–64,896–43,264–4096–4096–1000.
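For readers who want a runnable reference point, the following PyTorch sketch approximates this architecture on a single GPU. It is our own reconstruction: it omits the two-GPU kernel split and the response-normalization layers, and it uses padding on the first layer so that a 224×224 input reaches a 6×6 final map (as in later single-GPU variants), so it is not an exact replica of the paper's network:

```python
import torch.nn as nn

class AlexNetSketch(nn.Module):
    """Single-GPU approximation of the Section 3.5 architecture (a sketch)."""
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),        # overlapping pooling
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),        # 224x224 input -> 6x6 map
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
            nn.Linear(4096, num_classes),                 # fed to a softmax loss
        )

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))
```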
4 Reducing Overfitting
Our neural network architecture has 60 million parameters. Although the 1000 classes of ILSVRC
make each training example impose 10 bits of constraint on the mapping from image to label, this
turns out to be insufficient to learn so many parameters without considerable overfitting. Below, we
describe the two primary ways in which we combat overfitting.
4.1 Data Augmentation
The easiest and most common method to reduce overfitting on image data is to artificially enlarge
the dataset using label-preserving transformations (e.g., [25, 4, 5]). We employ two distinct forms
of data augmentation, both of which allow transformed images to be produced from the original
images with very little computation, so the transformed images do not need to be stored on disk.
In our implementation, the transformed images are generated in Python code on the CPU while the
GPU is training on the previous batch of images. So these data augmentation schemes are, in effect,
computationally free.
The first form of data augmentation consists of generating image translations and horizontal reflections. We do this by extracting random $224 \times 224$ patches (and their horizontal reflections) from the $256 \times 256$ images and training our network on these extracted patches⁴. This increases the size of our training set by a factor of 2048, though the resulting training examples are, of course, highly interdependent. Without this scheme, our network suffers from substantial overfitting, which would have forced us to use much smaller networks. At test time, the network makes a prediction by extracting five $224 \times 224$ patches (the four corner patches and the center patch) as well as their horizontal reflections (hence ten patches in all), and averaging the predictions made by the network's softmax layer on the ten patches.
The second form of data augmentation consists of altering the intensities of the RGB channels in
training images. Specifically, we perform PCA on the set of RGB pixel values throughout the
ImageNet training set. To each training image, we add multiples of the found principal components, with magnitudes proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian with mean zero and standard deviation 0.1. Therefore to each RGB image pixel $I_{xy} = [I^R_{xy}, I^G_{xy}, I^B_{xy}]^\top$ we add the following quantity:
$$[\mathbf{p}_1, \mathbf{p}_2, \mathbf{p}_3]\,[\alpha_1\lambda_1, \alpha_2\lambda_2, \alpha_3\lambda_3]^\top$$
where $\mathbf{p}_i$ and $\lambda_i$ are the $i$th eigenvector and eigenvalue of the $3 \times 3$ covariance matrix of RGB pixel values, respectively, and $\alpha_i$ is the aforementioned random variable. Each $\alpha_i$ is drawn only once for all the pixels of a particular training image until that image is used for training again, at which point it is re-drawn. This scheme approximately captures an important property of natural images, namely, that object identity is invariant to changes in the intensity and color of the illumination. This scheme reduces the top-1 error rate by over 1%.

⁴ This is the reason why the input images in Figure 2 are $224 \times 224 \times 3$-dimensional.
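A NumPy sketch of this augmentation (our own; for brevity the eigen-decomposition is computed per image rather than once over the whole training set as in the paper):

```python
import numpy as np

def pca_color_augment(image, rng):
    """PCA color augmentation of Section 4.1. image: (H, W, 3) array of RGB values."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    cov = np.cov(pixels, rowvar=False)        # 3x3 covariance of RGB values
    lam, P = np.linalg.eigh(cov)              # eigenvalues lam_i, eigenvectors p_i
    alpha = rng.normal(0.0, 0.1, size=3)      # alpha_i ~ N(0, 0.1^2), one draw per image
    shift = P @ (alpha * lam)                 # [p1 p2 p3][a1 l1, a2 l2, a3 l3]^T
    return image + shift                      # the same shift is added to every pixel
```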
4.2 Dropout
Combining the predictions of many different models is a very successful way to reduce test errors
[1, 3], but it appears to be too expensive for big neural networks that already take several days
to train. There is, however, a very efficient version of model combination that only costs about a
factor of two during training. The recently-introduced technique, called "dropout" [10], consists of setting to zero the output of each hidden neuron with probability 0.5. The neurons which are "dropped out" in this way do not contribute to the forward pass and do not participate in backpropagation. So every time an input is presented, the neural network samples a different architecture,
but all these architectures share weights. This technique reduces complex co-adaptations of neurons,
since a neuron cannot rely on the presence of particular other neurons. It is, therefore, forced to
learn more robust features that are useful in conjunction with many different random subsets of the
other neurons. At test time, we use all the neurons but multiply their outputs by 0.5, which is a
reasonable approximation to taking the geometric mean of the predictive distributions produced by
the exponentially-many dropout networks.
We use dropout in the first two fully-connected layers of Figure 2. Without dropout, our network exhibits substantial overfitting. Dropout roughly doubles the number of iterations required to converge.
5 Details of learning
We trained our models using stochastic gradient descent with a batch size of 128 examples, momentum of 0.9, and weight decay of 0.0005. We found that this small amount of weight decay was important for the model to learn. In other words, weight decay here is not merely a regularizer: it reduces the model's training error. The update rule for weight $w$ was
$$v_{i+1} := 0.9\,v_i - 0.0005\,\epsilon\,w_i - \epsilon\,\Big\langle \frac{\partial L}{\partial w}\Big|_{w_i}\Big\rangle_{D_i}, \qquad w_{i+1} := w_i + v_{i+1}$$
where $i$ is the iteration index, $v$ is the momentum variable, $\epsilon$ is the learning rate, and $\langle \partial L/\partial w|_{w_i}\rangle_{D_i}$ is the average over the $i$th batch $D_i$ of the derivative of the objective with respect to $w$, evaluated at $w_i$.

Figure 3: 96 convolutional kernels of size $11 \times 11 \times 3$ learned by the first convolutional layer on the $224 \times 224 \times 3$ input images. The top 48 kernels were learned on GPU 1 while the bottom 48 kernels were learned on GPU 2. See Section 6.1 for details.
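One step of this update as a NumPy-style sketch (our own; grad is the batch-averaged derivative):

```python
def sgd_update(w, v, grad, lr, momentum=0.9, weight_decay=0.0005):
    """One step of the update rule above:
    v <- 0.9 v - 0.0005 lr w - lr grad,  w <- w + v."""
    v = momentum * v - weight_decay * lr * w - lr * grad
    return w + v, v
```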
We initialized the weights in each layer from a zero-mean Gaussian distribution with standard deviation 0.01. We initialized the neuron biases in the second, fourth, and fifth convolutional layers,
as well as in the fully-connected hidden layers, with the constant 1. This initialization accelerates
the early stages of learning by providing the ReLUs with positive inputs. We initialized the neuron
biases in the remaining layers with the constant 0.
We used an equal learning rate for all layers, which we adjusted manually throughout training.
The heuristic which we followed was to divide the learning rate by 10 when the validation error
rate stopped improving with the current learning rate. The learning rate was initialized at 0.01 and
reduced three times prior to termination. We trained the network for roughly 90 cycles through the
training set of 1.2 million images, which took five to six days on two NVIDIA GTX 580 3GB GPUs.
6 Results
Our results on ILSVRC-2010 are summarized in Table 1. Our network achieves top-1 and top-5 test set error rates of 37.5% and 17.0%⁵. The best performance achieved during the ILSVRC-2010 competition was 47.1% and 28.2% with an approach that averages the predictions produced from six sparse-coding models trained on different features [2], and since then the best published results are 45.7% and 25.7% with an approach that averages the predictions of two classifiers trained on Fisher Vectors (FVs) computed from two types of densely-sampled features [24].

Table 1: Comparison of results on ILSVRC-2010 test set. In italics are best results achieved by others.
Model             | Top-1 | Top-5
Sparse coding [2] | 47.1% | 28.2%
SIFT + FVs [24]   | 45.7% | 25.7%
CNN               | 37.5% | 17.0%

We also entered our model in the ILSVRC-2012 competition and report our results in Table 2. Since the ILSVRC-2012 test set labels are not publicly available, we cannot report test error rates for all the models that we tried. In the remainder of this paragraph, we use validation and test error rates interchangeably because in our experience they do not differ by more than 0.1% (see Table 2). The CNN described in this paper achieves a top-5 error rate of 18.2%. Averaging the predictions of five similar CNNs gives an error rate of 16.4%. Training one CNN, with an extra sixth convolutional layer over the last pooling layer, to classify the entire ImageNet Fall 2011 release (15M images, 22K categories), and then "fine-tuning" it on ILSVRC-2012 gives an error rate of 16.6%. Averaging the predictions of two CNNs that were pre-trained on the entire Fall 2011 release with the aforementioned five CNNs gives an error rate of 15.3%. The second-best contest entry achieved an error rate of 26.2% with an approach that averages the predictions of several classifiers trained on FVs computed from different types of densely-sampled features [7].

Table 2: Comparison of error rates on ILSVRC-2012 validation and test sets. In italics are best results achieved by others. Models with an asterisk* were "pre-trained" to classify the entire ImageNet 2011 Fall release. See Section 6 for details.
Model          | Top-1 (val) | Top-5 (val) | Top-5 (test)
SIFT + FVs [7] | -           | -           | 26.2%
1 CNN          | 40.7%       | 18.2%       | -
5 CNNs         | 38.1%       | 16.4%       | 16.4%
1 CNN*         | 39.0%       | 16.6%       | -
7 CNNs*        | 36.7%       | 15.4%       | 15.3%

Finally, we also report our error rates on the Fall 2009 version of ImageNet with 10,184 categories and 8.9 million images. On this dataset we follow the convention in the literature of using half of the images for training and half for testing. Since there is no established test set, our split necessarily differs from the splits used by previous authors, but this does not affect the results appreciably. Our top-1 and top-5 error rates on this dataset are 67.4% and 40.9%, attained by the net described above but with an additional, sixth convolutional layer over the last pooling layer. The best published results on this dataset are 78.1% and 60.9% [19].
6.1 Qualitative Evaluations
Figure 3 shows the convolutional kernels learned by the network's two data-connected layers. The network has learned a variety of frequency- and orientation-selective kernels, as well as various colored blobs. Notice the specialization exhibited by the two GPUs, a result of the restricted connectivity described in Section 3.5. The kernels on GPU 1 are largely color-agnostic, while the kernels on GPU 2 are largely color-specific. This kind of specialization occurs during every run and is independent of any particular random weight initialization (modulo a renumbering of the GPUs).
⁵ The error rates without averaging predictions over ten patches as described in Section 4.1 are 39.0% and 18.3%.
Figure 4: (Left) Eight ILSVRC-2010 test images and the five labels considered most probable by our model.
The correct label is written under each image, and the probability assigned to the correct label is also shown
with a red bar (if it happens to be in the top 5). (Right) Five ILSVRC-2010 test images in the first column. The
remaining columns show the six training images that produce feature vectors in the last hidden layer with the
smallest Euclidean distance from the feature vector for the test image.
In the left panel of Figure 4 we qualitatively assess what the network has learned by computing its
top-5 predictions on eight test images. Notice that even off-center objects, such as the mite in the
top-left, can be recognized by the net. Most of the top-5 labels appear reasonable. For example,
only other types of cat are considered plausible labels for the leopard. In some cases (grille, cherry)
there is genuine ambiguity about the intended focus of the photograph.
Another way to probe the network's visual knowledge is to consider the feature activations induced
by an image at the last, 4096-dimensional hidden layer. If two images produce feature activation
vectors with a small Euclidean separation, we can say that the higher levels of the neural network
consider them to be similar. Figure 4 shows five images from the test set and the six images from
the training set that are most similar to each of them according to this measure. Notice that at the
pixel level, the retrieved training images are generally not close in L2 to the query images in the first
column. For example, the retrieved dogs and elephants appear in a variety of poses. We present the
results for many more test images in the supplementary material.
Computing similarity by using Euclidean distance between two 4096-dimensional, real-valued vectors is inefficient, but it could be made efficient by training an auto-encoder to compress these vectors
to short binary codes. This should produce a much better image retrieval method than applying autoencoders to the raw pixels [14], which does not make use of image labels and hence has a tendency
to retrieve images with similar patterns of edges, whether or not they are semantically similar.
7 Discussion
Our results show that a large, deep convolutional neural network is capable of achieving record-breaking results on a highly challenging dataset using purely supervised learning. It is notable that our network's performance degrades if a single convolutional layer is removed. For example,
removing any of the middle layers results in a loss of about 2% for the top-1 performance of the
network. So the depth really is important for achieving our results.
To simplify our experiments, we did not use any unsupervised pre-training even though we expect
that it will help, especially if we obtain enough computational power to significantly increase the
size of the network without obtaining a corresponding increase in the amount of labeled data. Thus
far, our results have improved as we have made our network larger and trained it longer but we still
have many orders of magnitude to go in order to match the infero-temporal pathway of the human
visual system. Ultimately we would like to use very large and deep convolutional nets on video
sequences where the temporal structure provides very helpful information that is missing or far less
obvious in static images.
References
[1] R.M. Bell and Y. Koren. Lessons from the netflix prize challenge. ACM SIGKDD Explorations Newsletter,
9(2):75?79, 2007.
[2] A. Berg, J. Deng, and L. Fei-Fei. Large scale visual recognition challenge 2010. www.imagenet.org/challenges. 2010.
[3] L. Breiman. Random forests. Machine learning, 45(1):5?32, 2001.
[4] D. Cire?san, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification.
Arxiv preprint arXiv:1202.2745, 2012.
[5] D.C. Cire?san, U. Meier, J. Masci, L.M. Gambardella, and J. Schmidhuber. High-performance neural
networks for visual object classification. Arxiv preprint arXiv:1102.0183, 2011.
[6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical
Image Database. In CVPR09, 2009.
[7] J. Deng, A. Berg, S. Satheesh, H. Su, A. Khosla, and L. Fei-Fei. ILSVRC-2012, 2012. URL
http://www.image-net.org/challenges/LSVRC/2012/.
[8] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An
incremental bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 106(1):59?70, 2007.
[9] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007. URL http://authors.library.caltech.edu/7694.
[10] G.E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R.R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[11] K. Jarrett, K. Kavukcuoglu, M. A. Ranzato, and Y. LeCun. What is the best multi-stage architecture for
object recognition? In International Conference on Computer Vision, pages 2146?2153. IEEE, 2009.
[12] A. Krizhevsky. Learning multiple layers of features from tiny images. Master?s thesis, Department of
Computer Science, University of Toronto, 2009.
[13] A. Krizhevsky. Convolutional deep belief networks on cifar-10. Unpublished manuscript, 2010.
[14] A. Krizhevsky and G.E. Hinton. Using very deep autoencoders for content-based image retrieval. In
ESANN, 2011.
[15] Y. Le Cun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, L.D. Jackel, et al. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing
systems, 1990.
[16] Y. LeCun, F.J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to
pose and lighting. In Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the
2004 IEEE Computer Society Conference on, volume 2, pages II?97. IEEE, 2004.
[17] Y. LeCun, K. Kavukcuoglu, and C. Farabet. Convolutional networks and applications in vision. In
Circuits and Systems (ISCAS), Proceedings of 2010 IEEE International Symposium on, pages 253?256.
IEEE, 2010.
[18] H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th Annual International Conference
on Machine Learning, pages 609?616. ACM, 2009.
[19] T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Metric Learning for Large Scale Image Classification: Generalizing to New Classes at Near-Zero Cost. In ECCV - European Conference on Computer
Vision, Florence, Italy, October 2012.
[20] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proc. 27th
International Conference on Machine Learning, 2010.
[21] N. Pinto, D.D. Cox, and J.J. DiCarlo. Why is real-world visual object recognition hard? PLoS computational biology, 4(1):e27, 2008.
[22] N. Pinto, D. Doukhan, J.J. DiCarlo, and D.D. Cox. A high-throughput screening approach to discovering
good forms of biologically inspired visual representation. PLoS computational biology, 5(11):e1000579,
2009.
[23] B.C. Russell, A. Torralba, K.P. Murphy, and W.T. Freeman. Labelme: a database and web-based tool for
image annotation. International journal of computer vision, 77(1):157?173, 2008.
[24] J. S?nchez and F. Perronnin. High-dimensional signature compression for large-scale image classification.
In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1665?1672. IEEE,
2011.
[25] P.Y. Simard, D. Steinkraus, and J.C. Platt. Best practices for convolutional neural networks applied to
visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis
and Recognition, volume 2, pages 958?962, 2003.
[26] S.C. Turaga, J.F. Murray, V. Jain, F. Roth, M. Helmstaedter, K. Briggman, W. Denk, and H.S. Seung. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation,
22(2):511?538, 2010.
Learning from Distributions via Support Measure
Machines
Krikamol Muandet
MPI for Intelligent Systems, Tübingen
krikamol@tuebingen.mpg.de
Kenji Fukumizu
The Institute of Statistical Mathematics, Tokyo
fukumizu@ism.ac.jp
Francesco Dinuzzo
MPI for Intelligent Systems, Tübingen
fdinuzzo@tuebingen.mpg.de
Bernhard Schölkopf
MPI for Intelligent Systems, Tübingen
bs@tuebingen.mpg.de
Abstract
This paper presents a kernel-based discriminative learning framework on probability measures. Rather than relying on large collections of vectorial training
examples, our framework learns using a collection of probability distributions
that have been constructed to meaningfully represent training data. By representing these probability distributions as mean embeddings in the reproducing kernel
Hilbert space (RKHS), we are able to apply many standard kernel-based learning
techniques in straightforward fashion. To accomplish this, we construct a generalization of the support vector machine (SVM) called a support measure machine
(SMM). Our analyses of SMMs provide several insights into their relationship
to traditional SVMs. Based on such insights, we propose a flexible SVM (Flex-SVM) that places different kernel functions on each training example. Experimental results on both synthetic and real-world data demonstrate the effectiveness
of our proposed framework.
1 Introduction
Discriminative learning algorithms are typically trained from large collections of vectorial training
examples. In many classical learning problems, however, it is arguably more appropriate to represent
training data not as individual data points, but as probability distributions. There are, in fact, multiple
reasons why probability distributions may be preferable.
Firstly, uncertain or missing data naturally arises in many applications. For example, gene expression data obtained from the microarray experiments are known to be very noisy due to various
sources of variabilities [1]. In order to reduce uncertainty, and to allow for estimates of confidence
levels, experiments are often replicated. Unfortunately, the feasibility of replicating the microarray
experiments is often inhibited by cost constraints, as well as the amount of available mRNA. To cope
with experimental uncertainty given a limited amount of data, it is natural to represent each array as
a probability distribution that has been designed to approximate the variability of gene expressions
across slides.
Probability distributions may be equally appropriate given an abundance of training data. In data-rich disciplines such as neuroinformatics, climate informatics, and astronomy, a high-throughput
experiment can easily generate a huge amount of data, leading to significant computational challenges in both time and space. Instead of scaling up one's learning algorithms, one can scale down
one's dataset by constructing a smaller collection of distributions which represents groups of similar
samples. Besides computational efficiency, aggregate statistics can potentially incorporate higherlevel information that represents the collective behavior of multiple data points.
Previous attempts have been made to learn from distributions by creating positive definite (p.d.)
kernels on probability measures. In [2], the probability product kernel (PPK) was proposed as a
generalized inner product between two input objects, which is in fact closely related to well-known
kernels such as the Bhattacharyya kernel [3] and the exponential symmetrized Kullback-Leibler
(KL) divergence [4]. In [5], an extension of a two-parameter family of Hilbertian metrics of Topsøe
was used to define Hilbertian kernels on probability measures. In [6], the semi-group kernels were
designed for objects with additive semi-group structure such as positive measures. Recently, [7] introduced nonextensive information theoretic kernels on probability measures based on new Jensen-Shannon-type divergences. Although these kernels have proven successful in many applications,
they are designed specifically for certain properties of distributions and application domains. Moreover, there has been no attempt to make a connection to the kernels on the corresponding input spaces.
The contributions of this paper can be summarized as follows. First, we prove the representer theorem for a regularization framework over the space of probability distributions, which is a generalization of regularization over the input space on which the distributions are defined (Section 2).
Second, a family of positive definite kernels on distributions is introduced (Section 3). Based on
such kernels, a learning algorithm on probability measures called support measure machine (SMM)
is proposed. An SVM on the input space is provably a special case of the SMM. Third, the paper
presents the relations between sample-based and distribution-based methods (Section 4). If the distributions depend only on the locations in the input space, the SMM particularly reduces to a more
flexible SVM that places different kernels on each data point.
2 Regularization on probability distributions
Given a non-empty set X, let P denote the set of all probability measures P on a measurable
space (X, A), where A is a σ-algebra of subsets of X. The goal of this work is to learn a function
h : P → Y given a set of example pairs {(P_i, y_i)}_{i=1}^{m}, where P_i ∈ P and y_i ∈ Y. In other words,
we consider a supervised setting in which input training examples are probability distributions. In
this paper, we focus on the binary classification problem, i.e., Y = {+1, −1}.
In order to learn from distributions, we employ a compact representation that not only preserves
necessary information about individual distributions, but also permits efficient computations. That is,
we adopt a Hilbert space embedding to represent each distribution as a mean function in an RKHS
[8, 9]. Formally, let H denote an RKHS of functions f : X → R, endowed with a reproducing
kernel k : X × X → R. The mean map from P into H is defined as

    µ : P → H,   P ↦ ∫_X k(x, ·) dP(x).          (1)
We assume that k(x, ·) is bounded for any x ∈ X. It can be shown that, if k is characteristic, the map
(1) is injective, i.e., all the information about the distribution is preserved [10]. For any P, letting
µ_P = µ(P), we have the reproducing property

    E_P[f] = ⟨µ_P, f⟩_H,   ∀f ∈ H.          (2)

That is, we can see the mean embedding µ_P as a feature map associated with the kernel
K : P × P → R defined as K(P, Q) = ⟨µ_P, µ_Q⟩_H. Since sup_x ||k(x, ·)||_H < ∞, it also follows
that K(P, Q) = ∫∫ ⟨k(x, ·), k(z, ·)⟩_H dP(x) dQ(z) = ∫∫ k(x, z) dP(x) dQ(z), where the second
equality follows from the reproducing property of H. It is immediate that K is a p.d. kernel on P.
The following theorem shows that optimal solutions of a suitable class of regularization problems
involving distributions can be expressed as a finite linear combination of mean embeddings.

Theorem 1. Given training examples (P_i, y_i) ∈ P × R, i = 1, ..., m, a strictly monotonically
increasing function Ω : [0, +∞) → R, and a loss function ℓ : (P × R²)^m → R ∪ {+∞}, any
f ∈ H minimizing the regularized risk functional

    ℓ(P_1, y_1, E_{P_1}[f], ..., P_m, y_m, E_{P_m}[f]) + Ω(||f||_H)          (3)

admits a representation of the form f = Σ_{i=1}^{m} α_i µ_{P_i} for some α_i ∈ R, i = 1, ..., m.

Theorem 1 clearly indicates how each distribution contributes to the minimizer of (3). Roughly
speaking, the coefficient α_i controls the contribution of distribution P_i through the mean embedding µ_{P_i}. Furthermore, if we restrict P to the class of Dirac measures δ_x on X and consider
the training set {(δ_{x_i}, y_i)}_{i=1}^{m}, the functional (3) reduces to the usual regularization functional [11],
and the solution reduces to f = Σ_{i=1}^{m} α_i k(x_i, ·). Therefore, the standard representer theorem is
recovered as a particular case (see also [12] for more general results on representer theorems).
Note that, on the one hand, the minimization problem (3) is different from minimizing the functional
E_{P_1} ... E_{P_m} ℓ(x_1, y_1, f(x_1), ..., x_m, y_m, f(x_m)) + Ω(||f||_H) for the special case of an additive loss
ℓ. Therefore, the solution of our regularization problem is different from what one would get in the
limit by training on infinitely many points sampled from P_1, ..., P_m. On the other hand, it is
also different from minimizing the functional ℓ(M_1, y_1, f(M_1), ..., M_m, y_m, f(M_m)) + Ω(||f||_H),
where M_i = E_{x∼P_i}[x]. In a sense, our framework is something in between.
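To make the representation in Theorem 1 concrete, the following is a minimal sketch (ours, not the authors' code) of evaluating f = Σ_i α_i µ_{P_i} at a point x, approximating each mean embedding by a sample average; the coefficients alphas are placeholders that would normally come from a learning procedure such as the SMM of Section 3.1:

import numpy as np

def rbf(x, z, gamma=1.0):
    # Gaussian RBF embedding kernel k(x, z) = exp(-gamma/2 * ||x - z||^2).
    return np.exp(-0.5 * gamma * np.sum((x - z) ** 2))

def evaluate_f(x, alphas, samples, gamma=1.0):
    """Evaluate f(x) = sum_i alpha_i <mu_{P_i}, k(x, .)>, approximating each
    mean embedding mu_{P_i} by the empirical average over samples from P_i."""
    value = 0.0
    for alpha_i, X_i in zip(alphas, samples):
        # <mu_{P_i}, k(x, .)> = E_{x'~P_i}[k(x', x)] ~= mean_j k(x_ij, x)
        value += alpha_i * np.mean([rbf(x_ij, x, gamma) for x_ij in X_i])
    return value

# Toy usage: two "distributions" given by samples, arbitrary coefficients.
rng = np.random.default_rng(0)
samples = [rng.normal(0.0, 1.0, size=(50, 2)), rng.normal(2.0, 1.0, size=(50, 2))]
alphas = [0.7, -0.3]
print(evaluate_f(np.zeros(2), alphas, samples))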
3 Kernels on probability distributions
As the map (1) is linear in P, optimizing the functional (3) amounts to finding a function in H that
approximates well the functions from P to R in the class F ≜ {P ↦ ∫_X g dP | P ∈ P, g ∈ C(X)},
where C(X) is a class of bounded continuous functions on X. Since δ_x ∈ P for any x ∈ X,
it follows that C(X) ⊂ F ⊂ C(P), where C(P) is a class of bounded continuous functions on P
endowed with the topology of weak convergence and the associated Borel σ-algebra. The following
lemma states the relation between the RKHS H induced by the kernel k and the function class F.

Lemma 2. Assuming that X is compact, the RKHS H induced by a kernel k is dense in F if k
is universal, i.e., for every function F ∈ F and every ε > 0 there exists a function g ∈ H with
sup_{P∈P} |F(P) − ∫ g dP| ≤ ε.
supP?P |F (P) ? g dP| ? ?.
Proof. Assume that k is universal. Then, for every function f ? C(X ) and every ? > 0 there exists a
function g ? H induced by k with supx?X |f (x)?g(x)| ? ? [13]. Hence, by linearity
R of F, for every
F ? F and every ? > 0 there exists a function h ? H such that supP?P |F (P) ? h dP| ? ?.
Nonlinear kernels on P can be defined in an analogous way to nonlinear kernels on X , by treating
mean embeddings ?P of P ? P as its feature representation. First, assume that the map (1) is
injective and let h?, ?iP be an inner product on P. By linearity, we have hP, QiP = h?P , ?Q iH (cf.
[8] for more details). Then, the nonlinear kernels on P can be defined as K(P, Q) = ?(?P , ?Q ) =
h?(?P ), ?(?Q )iH? where ? is a p.d. kernel. As a result, many standard nonlinear kernels on X can
be used to define nonlinear kernels on P as long as the kernel evaluation depends entirely on the inner product h?P , ?Q iH , e.g., K(P, Q) = (h?P , ?Q iH + c)d . Although requiring more computational
effort, their practical use is simple and flexible. Specifically, the notion of p.d. kernels on distributions proposed in this work is so generic that standard kernel functions can be reused to derive
kernels on distributions that are different from many other kernel functions proposed specifically for
certain distributions.
It has recently been proved that the Gaussian RBF kernel given by K(P, Q) = exp(−(γ/2) ||µ_P −
µ_Q||²_H), ∀P, Q ∈ P, is universal w.r.t. C(P), given that X is compact and the map µ is injective
[14]. Despite its success in real-world applications, the theory of kernel-based classifiers beyond
the input space X ⊂ R^d, as also mentioned by [14], is still incomplete. It is therefore of theoretical
interest to consider more general classes of universal kernels on probability distributions.
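A minimal sketch (ours) of how such a level-2 Gaussian RBF kernel can be computed in practice: the squared RKHS distance ||µ_P − µ_Q||²_H expands into linear mean-map kernel evaluations, so a Gram matrix of the linear kernel is all that is needed:

import numpy as np

def level2_rbf(K_lin, gamma2=1.0):
    """Given the Gram matrix K_lin[i, j] = <mu_{P_i}, mu_{P_j}>_H of the linear
    mean-map kernel, return the level-2 Gaussian RBF kernel
    exp(-gamma2/2 * ||mu_{P_i} - mu_{P_j}||_H^2), using
    ||mu_P - mu_Q||^2 = K(P, P) - 2 K(P, Q) + K(Q, Q)."""
    d = np.diag(K_lin)
    sq_dists = np.maximum(d[:, None] - 2.0 * K_lin + d[None, :], 0.0)
    return np.exp(-0.5 * gamma2 * sq_dists)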
3.1 Support measure machines
This subsection extends SVMs to deal with probability distributions, leading to support measure
machines (SMMs). In its general form, an SMM amounts to solving an SVM problem with the
expected kernel K(P, Q) = E_{x∼P, z∼Q}[k(x, z)]. This kernel can be computed in closed form for
certain classes of distributions and kernels k. Examples are given in Table 1.
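As a concrete illustration, here is a minimal sketch (ours, not the authors' implementation) of training a linear SMM on Gaussian inputs: the expected kernel for the linear embedding kernel has the closed form from Table 1, and the resulting Gram matrix can be passed to an off-the-shelf SVM solver. The toy data and parameter values are assumptions.

import numpy as np
from sklearn.svm import SVC

def expected_linear_kernel(means, covs):
    # Closed form from Table 1 for the linear embedding kernel:
    # K(P_i, P_j) = m_i^T m_j + delta_ij * tr(Sigma_i).
    K = means @ means.T
    K[np.diag_indices_from(K)] += np.array([np.trace(S) for S in covs])
    return K

# Toy data: each training example is a Gaussian N(m_i, Sigma_i) with a label.
rng = np.random.default_rng(0)
means = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
covs = [0.5 * np.eye(2) for _ in range(40)]
y = np.array([-1] * 20 + [+1] * 20)

K = expected_linear_kernel(means, covs)
smm = SVC(kernel="precomputed", C=1.0).fit(K, y)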
Alternatively, one can approximate the kernel K(P, Q) by the empirical estimate

    K_emp(P̂_n, Q̂_m) = (1 / (n·m)) Σ_{i=1}^{n} Σ_{j=1}^{m} k(x_i, z_j),          (4)

where P̂_n and Q̂_m are the empirical distributions of P and Q given random samples {x_i}_{i=1}^{n} and
{z_j}_{j=1}^{m}, respectively.
Table 1: the analytic forms of expected kernels for different choices of kernels and distributions.

Distributions        Embedding kernel k(x, y)               K(P_i, P_j) = ⟨µ_{P_i}, µ_{P_j}⟩_H
Arbitrary P(m; Σ)    Linear ⟨x, y⟩                          m_i^T m_j + δ_{ij} tr Σ_i
Gaussian N(m; Σ)     Gaussian RBF exp(−(γ/2)||x − y||²)     exp(−(1/2)(m_i − m_j)^T (Σ_i + Σ_j + γ^{−1} I)^{−1} (m_i − m_j)) / |γΣ_i + γΣ_j + I|^{1/2}
Gaussian N(m; Σ)     Polynomial degree 2 (⟨x, y⟩ + 1)²      (⟨m_i, m_j⟩ + 1)² + tr Σ_iΣ_j + m_i^T Σ_j m_i + m_j^T Σ_i m_j
Gaussian N(m; Σ)     Polynomial degree 3 (⟨x, y⟩ + 1)³      (⟨m_i, m_j⟩ + 1)³ + 6 m_i^T Σ_iΣ_j m_j + 3(⟨m_i, m_j⟩ + 1)(tr Σ_iΣ_j + m_i^T Σ_j m_i + m_j^T Σ_i m_j)
to compute an approximation within an error of O(m? 2 ). Instead, if the sample set is sufficiently
large, one may choose to approximate the true distribution by simpler probabilistic models, e.g., a
mixture of Gaussians model, and choose a kernel k whose expected value admits an analytic form.
Storing only the parameters of probabilistic models may save some space compared to storing all
data points.
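A minimal sketch (ours) of the empirical estimate (4) for a Gaussian RBF embedding kernel; the vectorized distance computation is an implementation choice, not from the paper:

import numpy as np

def emp_expected_kernel(X, Z, gamma=1.0):
    """Empirical estimate (4) of K(P, Q) = E_{x~P, z~Q}[k(x, z)] from samples
    X (n x d) of P and Z (m x d) of Q, with a Gaussian RBF embedding kernel."""
    sq = np.sum(X**2, 1)[:, None] - 2.0 * X @ Z.T + np.sum(Z**2, 1)[None, :]
    return np.mean(np.exp(-0.5 * gamma * sq))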
Note that the standard SVM feature map φ(x) is usually nonlinear in x, whereas µ_P is linear in P.
Thus, for an SMM, the first-level kernel k is used to obtain a vectorial representation of the measures,
and the second-level kernel K allows for a nonlinear algorithm on distributions. For clarity, we will
refer to k and K as the embedding kernel and the level-2 kernel, respectively.
4 Theoretical analyses
This section presents key theoretical aspects of the proposed framework, which reveal an important
connection between kernel-based learning algorithms on the space of distributions and on the input
space on which those distributions are defined.
4.1 Risk deviation bound
Given a training sample {(P_i, y_i)}_{i=1}^{m} drawn i.i.d. from some unknown probability distribution P on P × Y, a loss function ℓ : R × R → R, and a function class Λ, the goal of
statistical learning is to find the function f ∈ Λ that minimizes the expected risk functional
R(f) = ∫_P ∫_X ℓ(y, f(x)) dP(x) dP(P, y). Since P is unknown, the empirical risk
R_emp(f) = (1/m) Σ_{i=1}^{m} ∫_X ℓ(y_i, f(x)) dP_i(x) based on the training sample is considered instead. Furthermore,
the risk functional can be simplified further by considering (1/(m·n)) Σ_{i=1}^{m} Σ_{x_ij ∼ P_i} ℓ(y_i, f(x_ij)), based
on n samples x_ij drawn from each P_i.

Our framework, on the other hand, alleviates the problem by minimizing the risk functional
R^µ(f) = ∫_P ℓ(y, E_P[f(x)]) dP(P, y) for f ∈ H, with corresponding empirical risk functional
R^µ_emp(f) = (1/m) Σ_{i=1}^{m} ℓ(y_i, E_{P_i}[f(x)]) (cf. the discussion at the end of Section 2). It is often easier
to optimize R^µ_emp(f), as the expectation can be computed exactly for certain choices of P_i and H.
Moreover, for universal H, this simplification preserves all information of the distributions. Nevertheless, there is still a loss of information due to the loss function ℓ.
Due to the i.i.d. assumption, the analysis of the difference between R and R^µ can be simplified,
w.l.o.g., to the analysis of the difference between E_P[ℓ(y, f(x))] and ℓ(y, E_P[f(x)]) for a particular
distribution P ∈ P. The theorem below provides a bound on this difference.

Theorem 3. Given an arbitrary probability distribution P with variance σ², a Lipschitz continuous function f : R → R with constant C_f, and an arbitrary loss function ℓ : R × R → R that is
Lipschitz continuous in the second argument with constant C_ℓ, it follows that |E_{x∼P}[ℓ(y, f(x))] −
ℓ(y, E_{x∼P}[f(x)])| ≤ 2 C_ℓ C_f σ for any y ∈ R.

Theorem 3 indicates that if the random variable x is concentrated around its mean and the functions f and ℓ are well-behaved, i.e., Lipschitz continuous, then the loss deviation |E_P[ℓ(y, f(x))] −
ℓ(y, E_P[f(x)])| will be small. As a result, if this holds for every distribution P_i in the training set
{(P_i, y_i)}_{i=1}^{m}, the true risk deviation |R − R^µ| is also expected to be small.
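A short sketch (ours) of why such a bound holds for scalar-valued x, using the Lipschitz assumptions and Jensen's inequality; the authors' own proof may proceed differently:

\begin{align*}
\bigl|\mathbb{E}_P[\ell(y,f(x))] - \ell(y,\mathbb{E}_P[f(x)])\bigr|
 &\le C_\ell\,\mathbb{E}_P\bigl|f(x)-\mathbb{E}_P[f(x)]\bigr| \\
 &\le C_\ell\Bigl(\mathbb{E}_P\bigl|f(x)-f(\mathbb{E}_P[x])\bigr|
      + \bigl|f(\mathbb{E}_P[x])-\mathbb{E}_P[f(x)]\bigr|\Bigr) \\
 &\le 2\,C_\ell\,C_f\,\mathbb{E}_P\bigl|x-\mathbb{E}_P[x]\bigr|
  \;\le\; 2\,C_\ell\,C_f\,\sigma,
\end{align*}

since each Lipschitz step costs a factor C_ℓ or C_f, and E_P|x − E_P[x]| ≤ σ by Jensen's inequality.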
4.2 Flexible support vector machines
It turns out that, for certain choices of distributions P_i, the linear SMM trained using {(P_i, y_i)}_{i=1}^{m}
is equivalent to an SVM trained using samples {(x_i, y_i)}_{i=1}^{m} with an appropriate choice of
kernel function.

Lemma 4. Let k(x, z) be a bounded p.d. kernel on a measure space such that ∫∫ k(x, z)² dx dz < ∞,
and let g(x, x̃) be a square integrable function such that ∫ g(x, x̃) dx̃ < ∞ for all x. Given
a sample {(P_i, y_i)}_{i=1}^{m} where each P_i is assumed to have density g(x_i, ·), the linear SMM is
equivalent to the SVM on the training sample {(x_i, y_i)}_{i=1}^{m} with kernel
K_g(x, z) = ∫∫ k(x̃, z̃) g(x, x̃) g(z, z̃) dx̃ dz̃.
Note that the important assumption for this equivalence is that the distributions P_i differ only in their
location in the parameter space. This need not be the case in all possible applications of SMMs.
Furthermore, we have K_g(x, z) = ⟨∫ k(x̃, ·) g(x, x̃) dx̃, ∫ k(z̃, ·) g(z, z̃) dz̃⟩_H. Thus, it is clear that
the feature map of x depends not only on the kernel k, but also on the density g(x, ·). Consequently,
by virtue of Lemma 4, the kernel K_g allows the SVM to place a different kernel at each data point.
We call this algorithm a flexible SVM (Flex-SVM).
Consider, for example, the linear SMM with Gaussian distributions N(x_1; σ_1²·I), ..., N(x_m; σ_m²·I)
and a Gaussian RBF kernel k_{σ²} with bandwidth parameter σ². The convolution theorem for Gaussian
distributions implies that this SMM is equivalent to a flexible SVM that places a data-dependent
kernel k_{σ² + 2σ_i²}(x_i, ·) on training example x_i, i.e., a Gaussian RBF kernel with a larger bandwidth.
kernel k?2 +2?i2 (xi , ?) on training example xi , i.e., a Gaussian RBF kernel with larger bandwidth.
5
Related works
The kernel K(P, Q) = h?P , ?Q iH is in fact a special case of the Hilbertian metric [5], with the
associated kernel K(P, Q) = EP,Q [k(x, x
?)], and a generative mean map kernel (GMMK) proposed
by [15]. In the GMMK, the kernel between two objects x and y is defined via p?x and p?y , which are
estimated probabilistic models of x and y, respectively. That is, a probabilistic model p?x is learned
for each example and used as a surrogate to construct the kernel between those examples. The idea
of surrogate kernels hasRalso been adopted by the Probability Product Kernel (PPK) [2]. In this case,
we have K? (p, p? ) = X p(x)? p? (x)? dx, which has been shown to be a special case of GMMK
when ? = 1 [15]. Consequently, GMMK, PPK with ? = 1, and our linear kernels are equivalent
when the embedding kernel is k(x, x? ) = ?(x ? x? ). More recently, the empirical kernel (4) was
employed in an unsupervised way for multi-task learning to generalize to a previously unseen task
[16]. In contrast, we treat the probability distributions in a supervised way (cf. the regularized
functional (3)) and the kernel is not restricted to only the empirical kernel.
The use of expected kernels for dealing with uncertainty in the input data has a connection to
robust SVMs. For instance, a generalized form of the SVM in [17] incorporates the probabilistic
uncertainty into the maximization of the margin. This results in a second-order cone programming
(SOCP) problem that generalizes the standard SVM. In the SOCP formulation, one needs to specify a parameter η_i that
reflects the probability of correctly classifying the i-th training example. The parameter η_i is therefore
closely related to the parameter σ_i, which specifies the variance of the distribution centered at the
i-th example. [18] showed the equivalence between SVMs using expected kernels and the SOCP when
η_i = 0. When η_i > 0, the mean and covariance of missing kernel entries have to be estimated
explicitly, making the SOCP more involved for nonlinear kernels. Although it achieves performance comparable
to the standard SVM with expected kernels, the SOCP approach requires a more computationally
expensive SOCP solver, as opposed to simple quadratic programming (QP).
6 Experimental results
In the experiments, we primarily consider three different learning algorithms: i) the SVM, considered
as a baseline algorithm; ii) the augmented SVM (ASVM), an SVM trained on augmented samples
drawn according to the distributions {P_i}_{i=1}^{m}, with the same number of examples drawn from each
distribution; and iii) the SMM, the distribution-based method that can be applied directly to the distributions.¹

¹ We used the LIBSVM implementation.
[Figure 1 appears here; plot residue removed. Panel (a): decision boundaries. Panel (b): sensitivity of
kernel parameters, with accuracy (%) plotted against parameter values for the embedding (RBF) and
level-2 (RBF, POLY) kernels.]
Figure 1: (a) the decision boundaries of SVM, ASVM, and SMM. (b) the heatmap plots of average
accuracies of SMM over 30 experiments using POLY-RBF (center) and RBF-RBF (right) kernel
combinations with the plots of average accuracies at different parameter values (left).
Table 2: accuracies (%) of SMM on synthetic data with different combinations of embedding and
level-2 kernels.

                                     Embedding kernels
Level-2 kernels   LIN          POLY2        POLY3        RBF          URBF
LIN               85.20±2.20   81.04±3.11   81.10±2.76   87.74±2.19   85.39±2.56
POLY              83.95±2.11   81.34±1.21   82.66±1.75   88.06±1.73   86.84±1.51
RBF               87.80±1.96   73.12±3.29   78.28±2.19   89.65±1.37   86.86±1.88

6.1 Synthetic data
Firstly, we conducted a basic experiment that illustrates a fundamental difference between the SVM,
the ASVM, and the SMM. A binary classification problem with 7 Gaussian distributions with different means
and covariances was considered. We trained the SVM using only the means of the distributions, the
ASVM with 30 virtual examples generated from each distribution, and the SMM using the distributions as
training examples. A Gaussian RBF kernel with γ = 0.25 was used for all algorithms.
Figure 1a shows the resulting decision boundaries. Having been trained only on means of the distributions, the SVM classifier tends to overemphasize the regions with high densities and underrepresent the lower density regions. In contrast, the ASVM is more expensive and sensitive to outliers,
especially when learning on heavy-tailed distributions. The SMM treats each distribution as a training example and implicitly incorporates properties of the distributions, i.e., means and covariances,
into the classifier. Note that the SVM can be trained to achieve a similar result to the SMM by
choosing an appropriate value for ? (cf. Lemma 4). Nevertheless, this becomes more difficult if the
training distributions are, for example, nonisotropic and have different covariance matrices.
Secondly, we evaluate the performance of the SMM for different combinations of embedding and
level-2 kernels. Two classes of synthetic Gaussian distributions on R^10 were generated. The mean
parameters of the positive and negative distributions are normally distributed with means m₊ =
(1, ..., 1) and m₋ = (2, ..., 2), respectively, and identical covariance matrix Σ = 0.5·I_10. The
covariance matrix of each distribution is generated according to two Wishart distributions with
covariance matrices given by Σ₊ = 0.6·I_10 and Σ₋ = 1.2·I_10, with 10 degrees of freedom.
The training set consists of 500 distributions from the positive class and 500 distributions from the
negative class. The test set consists of 200 distributions with the same class proportion.

The kernels used in the experiment include the linear kernel (LIN), polynomial kernels of degree 2
(POLY2) and degree 3 (POLY3), the unnormalized Gaussian RBF kernel (RBF), and the
normalized Gaussian RBF kernel (NRBF). To fix the parameter values of both the kernel functions and the
SMM, 10-fold cross-validation (10-CV) is performed on a parameter grid: C ∈ {2^{−3}, 2^{−2}, ..., 2^7}
for the SMM, bandwidth parameter γ ∈ {10^{−3}, 10^{−2}, ..., 10^2} for Gaussian RBF kernels, and degree
parameter d ∈ {2, 3, 4, 5, 6} for polynomial kernels. The average accuracy and ±1 standard deviation for all kernel combinations over 30 repetitions are reported in Table 2. Moreover, we also
investigate the sensitivity of the kernel parameters for two kernel combinations: RBF-RBF and POLY-RBF. In this case, we consider the bandwidth parameter γ ∈ {10^{−3}, 10^{−2}, ..., 10^3} for Gaussian
RBF kernels and the degree parameter d ∈ {2, 3, ..., 8} for polynomial kernels. Figure 1b depicts the
accuracy values and average accuracies for the considered kernel functions.
[Figures 2–4 appear here; plot residue removed.]

Figure 2: the performance of the SVM, ASVM, and SMM algorithms on handwritten digits constructed
using three basic transformations (panels: 1 vs 8, 3 vs 4, 3 vs 8, 6 vs 9, under scaling, translation,
and rotation; axes: number of virtual examples vs. accuracy (%)).

Figure 3: relative computational cost of ASVM and SMM (baseline: SMM with 2000 virtual examples).

Figure 4: accuracies of four different techniques (pLSA, SVM, LSMM, NLSMM) for natural scene
categorization.
Table 2 indicates that both embedding and level-2 kernels are important for the performance of the
classifier. The embedding kernels tend to have more impact on the predictive performance compared
to the level-2 kernels. This conclusion also coincides with the results depicted in Figure 1b.
6.2 Handwritten digit recognition
In this section, the proposed framework is applied to distributions over equivalence classes of images
that are invariant to basic transformations, namely, scaling, translation, and rotation. We consider
the handwritten digits obtained from the USPS dataset. For each 16 × 16 image, the distribution
over the equivalence class of the transformations is determined by a prior on the parameters associated
with those transformations. Scaling and translation are parametrized by the scale factors (s_x, s_y) and
displacements (t_x, t_y) along the x and y axes, respectively. The rotation is parametrized by an angle
θ. We adopt Gaussian distributions as prior distributions, namely N([1, 1], 0.1·I₂), N([0, 0], 5·I₂),
and N(0; π). For each image, the virtual examples are obtained by sampling parameter values from
these distributions and applying the transformations accordingly.
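A minimal sketch (ours) of this virtual-example generation using SciPy; the angle units and the handling of image borders are assumptions, and the outputs are not re-cropped to 16 × 16 here:

import numpy as np
from scipy.ndimage import rotate, shift, zoom

def virtual_examples(img, n, rng):
    """Sample (scale, translation, rotation) parameters from the Gaussian
    priors stated above and apply them to a 16x16 digit image."""
    out = []
    for _ in range(n):
        sx, sy = rng.normal([1.0, 1.0], np.sqrt(0.1))  # scales ~ N([1,1], 0.1 I)
        tx, ty = rng.normal([0.0, 0.0], np.sqrt(5.0))  # shifts ~ N([0,0], 5 I)
        theta = rng.normal(0.0, np.sqrt(np.pi))        # angle ~ N(0, pi), radians assumed
        v = zoom(img, (sy, sx))                          # scaling
        v = shift(v, (ty, tx))                           # translation
        v = rotate(v, np.degrees(theta), reshape=False)  # rotation
        out.append(v)
    return out

rng = np.random.default_rng(0)
digits = virtual_examples(np.zeros((16, 16)), n=10, rng=rng)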
Experiments are categorized into simple and difficult binary classification tasks. The former consists
of classifying digit 1 against digit 8 and digit 3 against digit 4. The latter considers classifying digit 3
against digit 8 and digit 6 against digit 9. The initial dataset for each task is constructed by randomly
selecting 100 examples from each class. Then, for each example in the initial dataset, we generate
10, 20, and 30 virtual examples using the aforementioned transformations to construct virtual data
sets consisting of 2,000, 4,000, and 6,000 examples, respectively. One third of examples in the
initial dataset are used as a test set. The original examples are excluded from the virtual datasets.
The virtual examples are normalized such that their feature values are in [0, 1]. Then, to reduce
computational cost, principal component analysis (PCA) is performed to reduce the dimensionality
to 16. We compare the SVM on the initial dataset, the ASVM on the virtual datasets, and the SMM.
For the SVM and the ASVM, the Gaussian RBF kernel is used. For the SMM, we employ the empirical kernel
(4) with a Gaussian RBF base kernel. The parameters of the algorithms are fixed by 10-CV
over C ∈ {2^{−3}, 2^{−2}, ..., 2^7} and γ ∈ {0.01, 0.1, 1}.
The results depicted in Figure 2 clearly demonstrate the benefits of learning directly from the equivalence classes of digits under basic transformations². In most cases, the SMM outperforms both the
SVM and the ASVM as the number of virtual examples increases. Moreover, Figure 3 shows the
benefit of the SMM over the ASVM in terms of computational cost³.

² While the reported results were obtained using virtual examples with Gaussian parameter distributions
(Sec. 6.2), we got similar results using uniform distributions.
³ The evaluation was made on a 64-bit desktop computer with an Intel® Core™ 2 Duo CPU E8400 at
3.00 GHz ×2 and 4 GB of memory.
6.3 Natural scene categorization
This section illustrates the benefits of nonlinear kernels between distributions for learning natural
scene categories, in which the bag-of-words (BoW) representation is used to represent the images in the
dataset. Each image is represented as a collection of local patches, each being a codeword from a
large vocabulary of codewords called a codebook. Standard BoW representations encode each image
as a histogram that enumerates the occurrence probability of the local patches detected in the image w.r.t.
those in the codebook. Our setting, on the other hand, represents each image as a distribution over
these codewords. Thus, images of different scenes tend to generate distinct sets of patches. Based
on this representation, both the histogram and the local patches can be used in our framework.
We use the dataset presented in [19]. According to their results, most errors occur among the four
indoor categories (830 images), namely, bedroom (174 images), living room (289 images), kitchen
(151 images), and office (216 images). Therefore, we focus on these four categories. For each
category, we split the dataset randomly into two separate sets of images, 100 for training and the rest
for testing.
A codebook is formed from the training images of all categories. First, keypoints in each
image are randomly detected, and local patches are generated accordingly. After patch detection,
each patch is transformed into a 128-dimensional SIFT vector [20]. Given the collection of detected patches,
K-means clustering is performed over all local patches. Codewords are then defined as the centers
of the learned clusters. Then, each patch in an image is mapped to a codeword, and the image can
be represented by the histogram of the codewords. In addition, we also have an M × 128 matrix of
SIFT vectors, where M is the number of codewords.
We compare the performance of probabilistic latent semantic analysis (pLSA) with the standard BoW representation, the SVM, the linear SMM (LSMM), and the nonlinear SMM (NLSMM). For
the SMM, we use the empirical embedding kernel with a Gaussian RBF base kernel k:
K(h_i, h_j) = Σ_{r=1}^{M} Σ_{s=1}^{M} h_i(c_r) h_j(c_s) k(c_r, c_s), where h_i is the histogram of the i-th image and c_r is the r-th
SIFT vector. A Gaussian RBF kernel is also used as the level-2 kernel for the nonlinear SMM. For
the SVM, we adopt a Gaussian RBF kernel with the χ²-distance between the histograms [21], i.e.,
K(h_i, h_j) = exp(−γ χ²(h_i, h_j)), where χ²(h_i, h_j) = Σ_{r=1}^{M} (h_i(c_r) − h_j(c_r))² / (h_i(c_r) + h_j(c_r)). The parameters of
the algorithms are fixed by 10-CV over C ∈ {2^{−3}, 2^{−2}, ..., 2^7} and γ ∈ {0.01, 0.1, 1}.
For the NLSMM, we use the best γ of the LSMM in the base kernel and perform 10-CV to choose the γ parameter only for the level-2 kernel. To deal with multiple categories, we adopt the pairwise approach
and a voting scheme to categorize test images. The results in Figure 4 illustrate the benefit of the
distribution-based framework. Understanding the context of a complex scene is challenging. Employing distribution-based methods provides an elegant way of utilizing higher-order statistics in
natural images that could not be captured by traditional sample-based methods.
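A minimal sketch (ours) of the χ²-based RBF Gram matrix used for the SVM baseline above; the small constant guarding empty histogram bins is an assumption:

import numpy as np

def chi2_rbf_gram(H, gamma=1.0, eps=1e-12):
    """Gaussian RBF kernel on the chi-square distance between histograms,
    K(h_i, h_j) = exp(-gamma * chi2(h_i, h_j)), with
    chi2(h_i, h_j) = sum_r (h_i(c_r) - h_j(c_r))^2 / (h_i(c_r) + h_j(c_r))."""
    n = H.shape[0]
    K = np.empty((n, n))
    for i in range(n):
        num = (H[i] - H) ** 2
        den = H[i] + H + eps  # eps guards empty bins
        K[i] = np.exp(-gamma * np.sum(num / den, axis=1))
    return K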
7 Conclusions
This paper proposes a method for kernel-based discriminative learning on probability distributions.
The trick is to embed distributions into an RKHS, resulting in a simple and efficient learning algorithm on distributions. A family of linear and nonlinear kernels on distributions allows one to
flexibly choose the kernel function that is suitable for the problems at hand. Our analyses provide
insights into the relations between distribution-based methods and traditional sample-based methods, particularly the flexible SVM that allows the SVM to place different kernels on each training
example. The experimental results illustrate the benefits of learning from a pool of distributions,
compared to a pool of examples, both on synthetic and real-world data.
Acknowledgments
KM would like to thank Zoubin Ghahramani, Arthur Gretton, Christian Walder, and Philipp Hennig
for a fruitful discussion. We also thank all three insightful reviewers for their invaluable comments.
References
[1] Y. H. Yang and T. Speed. Design issues for cDNA microarray experiments. Nat. Rev. Genet., 3(8):579–588, 2002.
[2] T. Jebara, R. Kondor, A. Howard, K. Bennett, and N. Cesa-Bianchi. Probability product kernels. Journal of Machine Learning Research, 5:819–844, 2004.
[3] A. Bhattacharyya. On a measure of divergence between two statistical populations defined by their probability distributions. Bull. Calcutta Math Soc., 1943.
[4] P. J. Moreno, P. P. Ho, and N. Vasconcelos. A Kullback-Leibler divergence based kernel for SVM classification in multimedia applications. In Proceedings of Advances in Neural Information Processing Systems. MIT Press, 2004.
[5] M. Hein and O. Bousquet. Hilbertian metrics and positive definite kernels on probability measures. In Proceedings of The 12th International Conference on Artificial Intelligence and Statistics, pages 136–143, 2005.
[6] M. Cuturi, K. Fukumizu, and J.-P. Vert. Semigroup kernels on measures. Journal of Machine Learning Research, 6:1169–1198, 2005.
[7] André F. T. Martins, Noah A. Smith, Eric P. Xing, Pedro M. Q. Aguiar, and Mário A. T. Figueiredo. Nonextensive information theoretic kernels on measures. Journal of Machine Learning Research, 10:935–975, 2009.
[8] A. Berlinet and Thomas C. Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, 2004.
[9] A. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In Proceedings of the 18th International Conference on Algorithmic Learning Theory, pages 13–31. Springer-Verlag, 2007.
[10] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and Gert R. G. Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 99:1517–1561, 2010.
[11] B. Schölkopf, R. Herbrich, and A. J. Smola. A generalized representer theorem. In COLT '01/EuroCOLT '01, pages 416–426. Springer-Verlag, 2001.
[12] F. Dinuzzo and B. Schölkopf. The representer theorem for Hilbert spaces: a necessary and sufficient condition. In Advances in Neural Information Processing Systems 25, pages 189–196. 2012.
[13] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:67–93, 2001.
[14] A. Christmann and I. Steinwart. Universal kernels on non-standard input spaces. In Proceedings of Advances in Neural Information Processing Systems, pages 406–414. 2010.
[15] N. A. Mehta and A. G. Gray. Generative and latent mean map kernels. CoRR, abs/1005.0188, 2010.
[16] G. Blanchard, G. Lee, and C. Scott. Generalizing from several related classification tasks to a new unlabeled sample. In Advances in Neural Information Processing Systems 24, pages 2178–2186. 2011.
[17] P. K. Shivaswamy, C. Bhattacharyya, and A. J. Smola. Second order cone programming approaches for handling missing and uncertain data. Journal of Machine Learning Research, 7:1283–1314, 2006.
[18] H. S. Anderson and M. R. Gupta. Expected kernel for missing features in support vector machines. In Statistical Signal Processing Workshop, pages 285–288, 2011.
[19] L. Fei-Fei. A Bayesian hierarchical model for learning natural scene categories. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 524–531, 2005.
[20] D. G. Lowe. Object recognition from local scale-invariant features. In Proceedings of the International Conference on Computer Vision, pages 1150–1157, Washington, DC, USA, 1999.
[21] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman. Multiple kernels for object detection. In Proceedings of the International Conference on Computer Vision, pages 606–613, 2009.
Bayesian Nonparametric Modeling of Suicide
Attempts
Isabel Valera
Department of Signal Processing
and Communications
University Carlos III in Madrid
[email protected]
Francisco J. R. Ruiz
Department of Signal Processing
and Communications
University Carlos III in Madrid
[email protected]
Fernando Perez-Cruz
Department of Signal Processing
and Communications
University Carlos III in Madrid
[email protected]
Carlos Blanco
Columbia University College of
Physicians and Surgeons
[email protected]
Abstract
The National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) database contains a large amount of information regarding the way of
life, medical conditions, etc., of a representative sample of the U.S. population. In
this paper, we are interested in seeking the hidden causes behind the suicide attempts, for which we propose to model the subjects using a nonparametric latent
model based on the Indian Buffet Process (IBP). Due to the nature of the data, we
need to adapt the observation model for discrete random variables. We propose
a generative model in which the observations are drawn from a multinomial-logit
distribution given the IBP matrix. The implementation of an efficient Gibbs sampler is accomplished using the Laplace approximation, which allows integrating
out the weighting factors of the multinomial-logit likelihood model. Finally, the
experiments over the NESARC database show that our model properly captures
some of the hidden causes that model suicide attempts.
1 Introduction
Every year, more than 34,000 suicides occur and over 370,000 individuals are treated for self-inflicted
injuries in emergency rooms in the U.S., where suicide prevention is one of the top public
detection and treatment of mental disorders [13], and on the treatment of the suicidal behaviors
themselves [4]. However, despite prevention efforts including improvements in the treatment of depression, the lifetime prevalence of suicide attempts in the U.S. has remained unchanged over the
past decade [8]. This suggests that there is a need to improve understanding of the risk factors for
suicide attempts beyond psychiatric disorders, particularly in non-clinical populations.
According to the National Strategy for Suicide Prevention, an important first step in a public health
approach to suicide prevention is to identify those at increased risk for suicide attempts [1]. Suicide
attempts are, by far, the best predictor of completed suicide [12] and are also associated with major
morbidity themselves [11]. The estimation of suicide attempt risk is a challenging and complex task,
with multiple risk factors linked to increased risk. In the absence of reliable tools for identifying
those at risk for suicide attempts, be they clinical or laboratory tests, risk detection still relays mainly
on clinical variables. The adequacy of the current predictive models and screening methods has been
questioned [12], and it has been suggested that the methods currently used for research on suicide
risk factors and prediction models need revamping [9].
Databases that model the behavior of human populations present typically many related questions
and analyzing each one of them individually, or a small group of them, do not lead to conclusive
results. For example, the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC) samples the U.S. population with nearly 3,000 questions regarding, among others, their
way of life, their medical conditions, depression and other mental disorders. It contains yes-or-no
questions, and some multiple-choice and questions with ordinal answers.
In this paper, we propose to model the subjects in this database using a nonparametric latent model
that allows us to seek hidden causes and compact in a few features the immense redundant information. Our starting point is the Indian Buffet Process (IBP) [5], because it allows us to infer which
latent features influence the observations and how many features there are. We need to adapt the observation model for discrete random variables, as the discrete nature of the database does not allow
us to use the standard Gaussian observation model. There are several options for modeling discrete
outputs given the hidden latent features, like a Dirichlet distribution or sampling from the features,
but we prefer a generative model in which the observations are drawn from a multinomial-logit
distribution because it is similar to the standard Gaussian observation model, where the observation probability distribution depends on the IBP matrix weighted by some factors. Furthermore,
the multinomial-logit model, besides its versatility, allows the implementation of an efficient Gibbs
sampler where the Laplace approximation [10] is used to integrate out the weighting factors, which
can be efficiently computed using the Matrix Inversion Lemma.
The IBP model combined with discrete observations has already been tackled in several related
works. In [17], the authors propose a model that combines properties from both the hierarchical
Dirichlet process (HDP) and the IBP, called IBP compound Dirichlet (ICD) process. They apply the
ICD to focused topic modeling, where the instances are documents and the observations are words
from a finite vocabulary, and focus on decoupling the prevalence of a topic in a document and its
prevalence in all documents. Despite the discrete nature of the observations under this model, these
assumptions are not appropriate for categorical observations such as the set of possible responses to
the questions in the NESARC database. Titsias [14] introduced the infinite gamma-Poisson process
as a prior probability distribution over non-negative integer valued matrices with a potentially infinite
number of columns, and he applies it to topic modeling of images. In this model, each (discrete)
component in the observation vector of an instance depends only on one of the active latent features
of that object, randomly drawn from a multinomial distribution. Therefore, different components
of the observation vector might be equally distributed. Our model is more flexible in the sense that
it allows different probability distributions for every component in the observation vector, which is
accomplished by weighting differently the latent variables.
2 The Indian Buffet Process
In latent feature modeling, each object can be represented by a vector of latent features, and the
observations are generated from a distribution determined by those latent feature values. Typically,
we have access to the set of observations and the main goal of these models is to find out the latent
variables that represent the data. The most common nonparametric tool for latent feature modeling
is the Indian Buffet Process (IBP).
The IBP places a prior distribution over binary matrices where the number of columns (features) K is not bounded, i.e., K → ∞. However, given a finite number of data points N, it ensures that the number of non-zero columns K+ is finite with probability one. Let Z be a random N × K binary matrix distributed following an IBP, i.e., Z ∼ IBP(α), where α is the concentration parameter of the process. The nth row of Z, denoted by z_n·, represents the vector of latent features of the nth data point, and every entry nk is denoted by z_nk. Note that each element z_nk ∈ {0, 1} indicates whether the kth feature contributes to the nth data point.
Given a binary latent feature matrix Z, we assume that the N × D observation matrix X, where the nth row contains a D-dimensional observation vector x_n·, is distributed according to a probability distribution p(X|Z). Additionally, x_·d stands for the dth column of X, and each element of the matrix is denoted by x_nd. For instance, in the standard observation model described in [5], p(X|Z) is a Gaussian probability density function.
MCMC (Markov Chain Monte Carlo) methods have been broadly applied to infer the latent structure Z from a given observation matrix X (see, e.g., [5, 17, 15, 14]). In particular, we focus on the use of Gibbs sampling for posterior inference over the latent variables. The algorithm iteratively samples the value of each element z_nk given the remaining variables, i.e., it samples from

$$p(z_{nk} = 1 \mid X, Z_{-nk}) \propto p(X \mid Z)\, p(z_{nk} = 1 \mid Z_{-nk}), \qquad (1)$$

where Z_{-nk} denotes all the entries of Z other than z_nk. The distribution p(z_nk = 1 | Z_{-nk}) can be readily derived from the exchangeable IBP and can be written as p(z_nk = 1 | Z_{-nk}) = m_{-n,k}/N, where m_{-n,k} is the number of data points with feature k, not including n, i.e., m_{-n,k} = Σ_{i≠n} z_ik.
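To make this step concrete, here is a minimal sketch (in Python/NumPy; our illustration, not code from the paper) of resampling a single entry z_nk from the conditional in Eq. 1. The function passed as `loglik` is an assumed placeholder for whatever likelihood p(X|Z) the observation model provides (for discrete data, the approximation developed in Section 3).

```python
import numpy as np

def prior_znk_one(Z, n, k):
    """IBP prior term p(znk = 1 | Z_-nk) = m_{-n,k} / N, where
    m_{-n,k} counts how many other data points possess feature k."""
    N = Z.shape[0]
    m_minus = Z[:, k].sum() - Z[n, k]
    return m_minus / N

def gibbs_step_znk(Z, n, k, loglik, rng=None):
    """Resample znk given the rest of Z, as in Eq. 1.
    `loglik(Z)` is an assumed placeholder returning log p(X | Z)."""
    rng = np.random.default_rng() if rng is None else rng
    logp = np.empty(2)
    for v in (0, 1):
        Z[n, k] = v
        p1 = prior_znk_one(Z, n, k)
        prior = p1 if v == 1 else 1.0 - p1
        logp[v] = loglik(Z) + np.log(max(prior, 1e-300))
    p_one = 1.0 / (1.0 + np.exp(logp[0] - logp[1]))  # normalize in log space
    Z[n, k] = int(rng.random() < p_one)
    return Z
```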
3 Observation model
Let us consider that the observations are discrete, i.e., each element x_nd ∈ {1, . . . , R_d}, where this finite set contains the indexes to all the possible values of x_nd. For simplicity and without loss of generality, we consider that R_d = R, but the following results can be readily extended to a different cardinality per input dimension, as well as mixing continuous variables with discrete variables, since given the latent matrix Z the columns of X are assumed to be independent.
We introduce matrices B^d of size K × R to model the probability distribution over X, such that B^d links the hidden latent variables with the dth column of the observation matrix X. We assume that the probability of x_nd taking value r (r = 1, . . . , R), denoted by π_{nd}^r, is given by the multiple-logistic function, i.e.,

$$\pi_{nd}^r = p(x_{nd} = r \mid z_{n\cdot}, B^d) = \frac{\exp(z_{n\cdot}\, b^d_{\cdot r})}{\sum_{r'=1}^{R} \exp(z_{n\cdot}\, b^d_{\cdot r'})}, \qquad (2)$$

where b^d_{·r} denotes the rth column of B^d. Note that the matrices B^d are used to weight differently the contribution of every latent feature for every component d, similarly as in the standard Gaussian observation model in [5]. We assume that the mixing vectors b^d_{·r} are Gaussian distributed with zero mean and covariance matrix Σ_b = σ_B² I.
The choice of the observation model in Eq. 2, which combines the multiple-logistic function with Gaussian parameters, is based on the fact that it induces dependencies among the probabilities π_{nd}^r that cannot be captured with other distributions, such as the Dirichlet distribution [2]. Furthermore, this multinomial-logistic normal distribution has been widely used to define probability distributions over discrete random variables (see, e.g., [16, 2]).
We consider that elements x_nd are independent given the latent feature matrix Z and the D matrices B^d. Then, the likelihood for any matrix X can be expressed as

$$p(X \mid Z, B^1, \ldots, B^D) = \prod_{n=1}^{N} \prod_{d=1}^{D} p(x_{nd} \mid z_{n\cdot}, B^d) = \prod_{n=1}^{N} \prod_{d=1}^{D} \pi_{nd}^{x_{nd}}. \qquad (3)$$
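As a concrete illustration of Eqs. 2 and 3, here is a minimal NumPy sketch (ours, not the authors' code) that evaluates the multiple-logistic probabilities and the resulting log-likelihood:

```python
import numpy as np

def logit_probs(Z, B):
    """Eq. 2 for one dimension d. Z: (N, K) binary latent matrix;
    B: (K, R) weights. Returns pi of shape (N, R), where
    pi[n, r] = p(x_nd = r + 1 | z_n, B)."""
    A = Z.astype(float) @ B                    # activations z_n . B
    A -= A.max(axis=1, keepdims=True)          # stabilize the softmax
    E = np.exp(A)
    return E / E.sum(axis=1, keepdims=True)

def log_likelihood(X, Z, Bs):
    """Log of Eq. 3. X: (N, D) with entries in {1, ..., R};
    Bs: list of D weight matrices, each of shape (K, R)."""
    N, D = X.shape
    ll = 0.0
    for d in range(D):
        pi = logit_probs(Z, Bs[d])
        ll += np.log(pi[np.arange(N), X[:, d] - 1]).sum()
    return ll
```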
3.1 Laplace approximation for inference
In Section 2, the (heuristic) Gibbs sampling algorithm for the posterior inference over the latent
variables of the IBP has been reviewed and it is detailed in [5]. To sample from Eq. 1, we need to
integrate out Bd in (3), as sequentially sampling from the posterior distribution of Bd is intractable,
for which an approximation is required. We rely on the Laplace approximation to integrate out the
parameters Bd for simplicity and ease of implementation. We first consider the finite form of the
proposed model, where K is bounded.
Recall that our model assumes independence among the observations given the hidden latent variables. Then, the posterior p(B^1, . . . , B^D | X, Z) factorizes as

$$p(B^1, \ldots, B^D \mid X, Z) = \prod_{d=1}^{D} p(B^d \mid x_{\cdot d}, Z) = \prod_{d=1}^{D} \frac{p(x_{\cdot d} \mid B^d, Z)\, p(B^d)}{p(x_{\cdot d} \mid Z)}. \qquad (4)$$
Hence, we only need to deal with each term p(B^d | x_{·d}, Z) individually. Although the prior p(B^d) is Gaussian, due to the non-conjugacy with the likelihood term, the computation of the posterior p(B^d | x_{·d}, Z) turns out to be intractable. Following a similar procedure as in Gaussian processes for multiclass classification [16], we approximate the posterior p(B^d | x_{·d}, Z) as a Gaussian distribution using Laplace's method. In order to obtain the parameters of the Gaussian distribution, we define ψ(B^d) as the un-normalized log-posterior of p(B^d | x_{·d}, Z), i.e.,

$$\psi(B^d) = \log p(x_{\cdot d} \mid B^d, Z) + \log p(B^d) = \mathrm{trace}\{(M^d)^\top B^d\} - \sum_{n=1}^{N} \log\Big(\sum_{r'=1}^{R} \exp(z_{n\cdot}\, b^d_{\cdot r'})\Big) - \frac{1}{2\sigma_B^2}\,\mathrm{trace}\{(B^d)^\top B^d\} - \frac{RK}{2}\log(2\pi\sigma_B^2), \qquad (5)$$

where (M^d)_{kr} counts the number of data points for which x_nd = r and z_nk = 1, namely, (M^d)_{kr} = Σ_{n=1}^{N} δ(x_nd = r) z_nk, where δ(·) is the Kronecker delta function.
As we prove below, the function ψ(B^d) is a strictly concave function of B^d and therefore it has a unique maximum, which is reached at B^d_MAP, denoted by the subscript 'MAP' because it coincides with the mean value of the Gaussian distribution in Laplace's method (MAP stands for maximum a posteriori). We apply Newton's method to compute this maximum.

By defining (Π^d)_{kr} = Σ_{n=1}^{N} z_nk π_{nd}^r, the gradient of ψ(B^d) can be derived as

$$\nabla\psi = M^d - \Pi^d - \frac{1}{\sigma_B^2} B^d. \qquad (6)$$
To compute the Hessian, it is easier to define the gradient ∇ψ as a vector, instead of a matrix, and hence we stack the columns of B^d into β^d, i.e., for avid Matlab users, β^d = B^d(:). The Hessian matrix can now be readily computed taking the derivatives of the gradient, yielding

$$\nabla\nabla\psi = -\frac{1}{\sigma_B^2} I_{RK} + \nabla\nabla \log p(x_{\cdot d} \mid \beta^d, Z) = -\frac{1}{\sigma_B^2} I_{RK} - \sum_{n=1}^{N} \big(\mathrm{diag}(\pi_{nd}) - (\pi_{nd})^\top \pi_{nd}\big) \otimes (z_{n\cdot}^\top z_{n\cdot}), \qquad (7)$$

where π_nd = [π_{nd}^1, . . . , π_{nd}^R], and diag(π_nd) is a diagonal matrix with the values of the vector π_nd as its diagonal elements. The posterior p(β^d | x_{·d}, Z) can be approximated as

$$p(\beta^d \mid x_{\cdot d}, Z) \approx q(\beta^d \mid x_{\cdot d}, Z) = \mathcal{N}\Big(\beta^d \,\Big|\, \beta^d_{\mathrm{MAP}},\, (-\nabla\nabla\psi)^{-1}\big|_{\beta^d_{\mathrm{MAP}}}\Big), \qquad (8)$$

where β^d_MAP contains all the columns of B^d_MAP stacked into a vector.
Since p(x_{·d} | β^d, Z) is a log-concave function of β^d (see [3, p. 87]), −∇∇ψ is a positive definite matrix, which guarantees that the maximum of ψ(β^d) is unique. Once the maximum B^d_MAP has been determined, the marginal likelihood p(x_{·d} | Z) can be readily approximated by

$$\log p(x_{\cdot d} \mid Z) \approx \log q(x_{\cdot d} \mid Z) = -\frac{1}{2\sigma_B^2}\,\mathrm{trace}\{(B^d_{\mathrm{MAP}})^\top B^d_{\mathrm{MAP}}\} - \frac{1}{2}\log\Big| I_{RK} + \sigma_B^2 \sum_{n=1}^{N} \big(\mathrm{diag}(\hat{\pi}_{nd}) - (\hat{\pi}_{nd})^\top \hat{\pi}_{nd}\big) \otimes (z_{n\cdot}^\top z_{n\cdot}) \Big| + \log p(x_{\cdot d} \mid B^d_{\mathrm{MAP}}, Z), \qquad (9)$$

where π̂_nd is the vector π_nd evaluated at B^d = B^d_MAP.
Similarly as in [5], it is straightforward to prove that the limit of Eq. 9 is well-defined if Z has an unbounded number of columns, i.e., as K → ∞. The resulting expression for the marginal likelihood p(x_{·d} | Z) can be readily obtained from Eq. 9 by replacing K by K+, Z by the submatrix containing only the non-zero columns of Z, and B^d_MAP by the submatrix containing the K+ corresponding rows. Through the rest of the paper, let us denote with Z the matrix that contains only the K+ non-zero columns of the full IBP matrix.
3.2
Speeding up the matrix inversion
The inverse of the Hessian matrix, as well as its determinant in (9), can be efficiently carried out if
we rearrange the inverse of ??? as follows
!?1
N
X
(????)?1 = D ?
vn vn>
,
(10)
n=1
where vn = (? nd )> ? z>
n? and D is a block-diagonal matrix, in which each diagonal submatrix is
Dr =
1
>
r
2 IK+ + Z diag (? ?d ) Z,
?B
(11)
>
r
r
>
, . . . , ?N
with ? r?d = [ ?1d
d ] . Since vn vn is a rank-one matrix, we can apply the Woodbury
identity [18] N times to invert the matrix ????, similar to the RLS (Recursive Least Squares)
updates [7]. At each iteration n = 1, . . . , N , we compute
?1
(D(n?1) )?1 vn vn> (D(n?1) )?1
(D(n) )?1 = D(n?1) ? vn vn>
= (D(n?1) )?1 +
.
(12)
1 ? vn> (D(n?1) )?1 vn
For the first iteration, we define D(0) as the block-diagonal matrix D, whose inverse matrix involves
computing the R matrix inversions of size K+ ? K+ of the matrices in (11), which can be efficiently solved applying the Matrix Inversion Lemma. After N iterations of (12), it turns out that
(????)?1 = (D(N ) )?1 .
For the determinant in (9), similar recursions can be applied using the Matrix Determinant Lemma
QR
[6], which states that |D + vu> | = (1 + v> Du)|D|, and |D(0) | = r=1 |Dr |.
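A sketch of these recursions under the paper's notation (our illustration): starting from the block-diagonal D of Eq. 11, each of the N rank-one terms is folded in with the Woodbury identity, and the log-determinant is accumulated with the Matrix Determinant Lemma.

```python
import numpy as np

def neg_hessian_inv_and_logdet(Z, Pi, sigma2_B):
    """Eqs. 10-12. Z: (N, K+) binary matrix; Pi: (N, R) probabilities
    evaluated at B^d_MAP. Returns ((-Hessian)^{-1}, log|-Hessian|)."""
    Zf = Z.astype(float)
    N, K = Zf.shape
    R = Pi.shape[1]
    Dinv = np.zeros((R * K, R * K))
    logdet = 0.0
    # D^{(0)}: R blocks D^r = I/sigma2_B + Z' diag(pi^r) Z  (Eq. 11)
    for r in range(R):
        Dr = np.eye(K) / sigma2_B + Zf.T @ (Pi[:, r:r + 1] * Zf)
        sl = slice(r * K, (r + 1) * K)
        Dinv[sl, sl] = np.linalg.inv(Dr)
        logdet += np.linalg.slogdet(Dr)[1]
    # N rank-one downdates (D - v v')^{-1}, Eq. 12
    for n in range(N):
        v = np.kron(Pi[n], Zf[n])              # v_n = pi_n^T (x) z_n^T
        u = Dinv @ v
        c = 1.0 - v @ u
        Dinv += np.outer(u, u) / c             # Woodbury identity
        logdet += np.log(c)                    # determinant lemma
    return Dinv, logdet
```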
4 Experiments

4.1 Inference over synthetic images
We generate a simple example inspired by the experiment in [5, p. 1205] to show that the proposed model works as it should. We define four base black-and-white images that can be present or absent with probability 0.5 independently of each other (Figure 1a), which are combined to create a binary composite image. We also multiply each pixel independently with equiprobable binary noise, hence each white pixel in the composite image can be turned black 50% of the times, while black pixels always remain black. Several examples can be found in Figure 1c. We generate 200 examples to learn the IBP model. The Gibbs sampler has been initialized with K+ = 2, setting each z_nk = 1 with probability 1/2, and the hyperparameters have been set to α = 0.5 and σ_B² = 1.
After 200 iterations, the Gibbs sampler returns four latent features. Each of the four features recovers
one of the base images with a different ordering, which is inconsequential. In Figure 1b, we have
plotted the posterior probability for each pixel being white, when only one of the components is
active. As expected, the black pixels are known to be black (almost zero probability of being white)
and the white pixels have about a 50/50 chance of being black or white, due to the multiplicative
noise. The Gibbs sampler has used as many as nine hidden features, but after iteration 60, the first
four features represent the base images and the others just lock on a noise pattern, which eventually
fades away.
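A minimal sketch of this data-generation process (the four base images themselves are assumed to be given as input):

```python
import numpy as np

def make_synthetic(bases, n_samples=200, p_feature=0.5, p_keep=0.5, seed=0):
    """Each base image is present independently with probability 0.5,
    active bases are OR-combined into a composite, and every white
    pixel survives only with probability 0.5 (multiplicative noise);
    black pixels always remain black. bases: (4, H, W) binary array."""
    rng = np.random.default_rng(seed)
    K, H, W = bases.shape
    Z = rng.random((n_samples, K)) < p_feature            # latent features
    flat = bases.reshape(K, -1).astype(int)
    composite = (Z.astype(int) @ flat) > 0                # OR of active bases
    keep = rng.random(composite.shape) < p_keep           # keep-white mask
    X = composite & keep
    return Z, X.reshape(n_samples, H, W)
```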
4.2 National Epidemiologic Survey on Alcohol and Related Conditions (NESARC)
The NESARC was designed to determine the magnitude of alcohol use disorders and their associated
disabilities. Two waves of interviews have been fielded for this survey (first wave in 2001-2002 and
second wave in 2004-2005). For the following experimental results, we only use the data from the
first wave, for which 43,093 people were selected to represent the U.S. population 18 years of age
and older. Public use data are currently available for this wave of data collection.
Through 2,991 entries, the NESARC collects data on the background of participants, alcohol and
other drug consumption and abuse, medicine use, medical treatment, mental disorders, phobias,
[Figure 1, panels (a)-(f); panels (e) and (f) plot the number of latent features K+ and log p(X|Z), respectively, against 200 Gibbs sampler iterations.]
Figure 1: Experimental results of the infinite binary multinomial-logistic model over the image data
set. (a) The four base images used to generate the 200 observations. (b) Probability of each pixel
being white, when a single feature is active (ordered to match the images on the left), computed
using BdMAP . (c) Four data points generated as described in the text. The numbers above each figure
indicate which features are present in that image. (d) Probabilities of each pixel being white after
200 iterations of the Gibbs sampler inferred for the four data points on (c). The numbers above each
figure show the inferred value of zn? for these data points. (e) The number of latent features K+ and
(f) the approximate log of p(X|Z) over the 200 iterations of the Gibbs sampler.
family history, etc. The survey includes a question about having attempted suicide as well as other related questions such as 'felt like wanted to die' and 'thought a lot about own death'. In the present paper, we use the IBP with discrete observations for a preliminary study in seeking the latent causes which lead to committing suicide. Most of the questions in the survey (over 2,500) are yes-or-no questions, which have four possible outcomes: 'blank' (B), 'unknown' (U), 'yes' (Y) and 'no' (N). If a question is left blank, the question was not asked¹. If a question is said to be unknown, either it was not answered or was unknown to the respondent.
In our ongoing study, we want to find a latent model that describes this database and can be used to infer patterns of behavior and, specifically, be able to predict suicide. In this paper, we build an unsupervised model with the 20 variables that present the highest mutual information with the suicide attempt question, which are shown in Table 1 together with their code in the questionnaire. We run the Gibbs sampler over 500 randomly chosen subjects out of the 13,670 that have answered affirmatively to having had a period of low mood. In this study, we use another 9,500 as test cases and have left the remaining samples for further validation. We have initialized the sampler with an active feature, i.e., K+ = 1, and have set z_nk = 1 randomly with probability 0.5, fixing α = 1 and σ_B² = 1. After 200 iterations, we obtain seven latent features.
In Figure 2, we have plotted the posterior probability for each question when a single feature is active. In these plots, white means 0 and black 1, and each row sums up to one. Feature 1 is active for modeling the 'blank' and 'no' answers and, fundamentally, those who were not asked Questions 8 and 10. Feature 2 models the 'yes' and 'no' answers and favors affirmative responses to Questions 1, 2, 5, 9, 11, 12, 17 and 18, which indicates depression. Feature 3 models blank answers for most of the questions and negative responses to 1, 2, 5, 8 and 10, which are questions related to suicide. Feature 4 models the affirmative answers to 1, 2, 5, 9 and 11 and also has higher probability for unknowns in Questions 3, 4, 6 and 7. Feature 5 models the 'yes' answer to Questions 3, 4, 6, 7, 8, 10, 17 and 18, being ambivalent in Questions 1 and 2. Feature 6 favors 'blank' and 'no' answers in most questions. Feature 7 models answering affirmatively to Questions 15, 16, 19 and 20, which are related to alcohol abuse.

¹ In a questionnaire of this size some questions are not asked when a previous question was answered in a predetermined way, to reduce the burden of taking the survey. For example, if a person has never had a period of low mood, the suicide attempt question is not asked.
We show the percentage of respondents that answered positively to the suicide attempt questions in
Table 2, independently for the 500 samples that were used to learn the IBP and the 9,500 hold-out
samples, together with the total number of respondents. A dash indicates that the feature can be
active or inactive. Table 2 is divided into three parts. The first part deals with each individual feature
and the other two study some cases of interest. Throughout the database, the prevalence of suicide
attempt is 7.83%. As expected, Features 2, 4, 5 and 7 favor suicide attempt risk, although Feature 5
only mildly, and Features 1, 3 and 6 decrease the probability of attempting suicide. From the above
description of each feature, it is clear that having Features 4 or 7 active should increase the risk of
attempting suicide, while having Features 3 and 1 active should cause the opposite effect.
Features 3 and 4 present the lowest and the highest risk of suicide, respectively, and they are studied
together in the second part of Table 2, in which we can see that having Feature 3 and not having
Feature 4 reduces this risk by an order of magnitude, and that combination is present in 70% of
the population. The other combinations favor an increased rate of suicide attempts that goes from doubling ('11') to quadrupling ('00'), to a ten-fold increase ('01'), and the percentages of the population with these features are, respectively, 21%, 6% and 3%.
In the final part of Table 2, we show combinations of features that significantly increase the suicide
attempt rate for a reduced percentage of the population, as well as combinations of features that
significantly decrease the suicide attempt rate for a large chunk of the population. These results are
interesting as they can be used to discard significant portions of the population in suicide attempt
studies and focus on the groups that present much higher risk. Hence, our IBP with discrete observations is able to obtain features that describe the hidden structure of the NESARC database and makes it possible to pinpoint the people that have a higher risk of attempting suicide.
 #  Source Code   Description
01  S4AQ4A17      Thought about committing suicide
02  S4AQ4A18      Felt like wanted to die
03  S4AQ17A       Stayed overnight in hospital because of depression
04  S4AQ17B       Went to emergency room for help because of depression
05  S4AQ4A19      Thought a lot about own death
06  S4AQ16        Went to counselor/therapist/doctor/other person for help to improve mood
07  S4AQ18        Doctor prescribed medicine/drug to improve mood/make you feel better
08  S4CQ15A       Stayed overnight in hospital because of dysthymia
09  S4AQ4A12      Felt worthless most of the time for 2+ weeks
10  S4CQ15B       Went to emergency room for help because of dysthymia
11  S4AQ52        Had arguments/friction with family, friends, people at work, or anyone else
12  S4AQ55        Spent more time than usual alone because didn't want to be around people
13  S4AQ21C       Used medicine/drug on own to improve low mood prior to last 12 months
14  S4AQ21A       Ever used medicine/drug on own to improve low mood/make self feel better
15  S4AQ20A       Ever drank alcohol to improve low mood/make self feel better
16  S4AQ20C       Drank alcohol to improve mood prior to last 12 months
17  S4AQ56        Couldn't do things usually did/wanted to do
18  S4AQ54        Had trouble doing things supposed to do - like working, doing schoolwork, etc.
19  S4AQ11        Any episode began after drinking heavily/more than usual
20  S4AQ15IR      Only/any episode prior to last 12 months began after drinking/drug use

Table 1: Enumeration of the 20 selected questions in the experiments, sorted in decreasing order according to their mutual information with the 'attempted suicide' question.
5 Conclusions
In this paper, we have proposed a new model that combines the IBP with discrete observations using
the multinomial-logit distribution. We have used the Laplace approximation to integrate out the
weighting factors, which allows us to efficiently run the Gibbs sampler. We have applied our model
to the NESARC database to find out the hidden features that characterize the suicide attempt risk. We
Hidden features (1-7)    Suicide attempt probability    Number of cases
                         Train        Hold-out          Train   Hold-out
1 - - - - - -             6.74%        5.55%             430     8072
- 1 - - - - -            10.56%       11.16%             322     6083
- - 1 - - - -             3.72%        4.60%             457     8632
- - - 1 - - -            25.23%       22.25%             111     2355
- - - - 1 - -             8.64%        9.69%             301     5782
- - - - - 1 -             6.90%        7.18%             464     8928
- - - - - - 1            14.29%       14.18%              91     1664
- - 0 0 - - -            30.77%       28.55%              26      571
- - 0 1 - - -            82.35%       61.95%              17      297
- - 1 0 - - -             0.83%        0.87%             363     6574
- - 1 1 - - -            14.89%       16.52%              94     2058
[pattern garbled]       100.00%       69.41%               4       85
[pattern garbled]        80.00%       66.10%               5      118
[pattern garbled]         0.00%        0.25%             252     4739
[pattern garbled]         0.33%        0.63%             299     5543
[pattern garbled]         0.32%        0.41%             317     5807

Table 2: Probabilities of attempting suicide for different values of the latent feature vector, together with the number of subjects possessing those values. The symbol '-' denotes either 0 or 1. The 'train ensemble' columns contain the results for the 500 data points used to obtain the model, whereas the 'hold-out ensemble' columns contain the results for the remaining subjects. (The feature patterns in the first two parts follow the discussion in the text; the patterns of the last five rows were lost in extraction.)
Figure 2: Probability of answering 'blank' (B), 'unknown' (U), 'yes' (Y) and 'no' (N) to each of the 20 selected questions, sorted as in Table 1, after 200 iterations of the Gibbs sampler. These probabilities have been obtained with the posterior mean weights B^d_MAP, when only one of the seven latent features (sorted from left to right to match the order in Table 2) is active.
have analyzed how each of the seven inferred features contributes to the suicide attempt probability.
We are developing a variational inference algorithm to be able to extend these remarkable results for
larger fractions (subjects and questions) of the NESARC database.
Acknowledgments

Francisco J. R. Ruiz is supported by an FPU fellowship from the Spanish Ministry of Education, Isabel Valera is supported by the Plan Regional-Programas I+D of Comunidad de Madrid (AGES-CM S2010/BMD-2422), and Fernando Pérez-Cruz has been partially supported by a Salvador de Madariaga grant. The authors also acknowledge the support of Ministerio de Ciencia e Innovación of Spain (project DEIPRO TEC2009-14504-C02-00 and program Consolider-Ingenio 2010 CSD2008-00010 COMONSENS).
References
[1] Summary of national strategy for suicide prevention: Goals and objectives for action, 2007. Available at: http://www.sprc.org/library/nssp.pdf.
[2] D. M. Blei and J. D. Lafferty. A correlated topic model of Science. Annals of Applied Statistics, 1(1):17-35, August 2007.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, March 2004.
[4] G. K. Brown, T. Ten Have, G. R. Henriques, S. X. Xie, J. E. Hollander, and A. T. Beck. Cognitive therapy for the prevention of suicide attempts: a randomized controlled trial. Journal of the American Medical Association, 294(5):563-570, 2005.
[5] T. L. Griffiths and Z. Ghahramani. The Indian Buffet Process: An introduction and review. Journal of Machine Learning Research, 12:1185-1224, 2011.
[6] D. A. Harville. Matrix Algebra From a Statistician's Perspective. Springer-Verlag, 1997.
[7] S. Haykin. Adaptive Filter Theory. Prentice Hall, 2002.
[8] R. C. Kessler, P. Berglund, G. Borges, M. Nock, and P. S. Wang. Trends in suicide ideation, plans, gestures, and attempts in the United States, 1990-1992 to 2001-2003. Journal of the American Medical Association, 293(20):2487-2495, 2005.
[9] K. Krysinska and G. Martin. The struggle to prevent and evaluate: application of population attributable risk and preventive fraction to suicide prevention research. Suicide and Life-Threatening Behavior, 39(5):548-557, 2009.
[10] D. J. C. MacKay. Information Theory, Inference & Learning Algorithms. Cambridge University Press, New York, NY, USA, 2002.
[11] J. J. Mann, A. Apter, J. Bertolote, A. Beautrais, D. Currier, A. Haas, U. Hegerl, J. Lonnqvist, K. Malone, A. Marusic, L. Mehlum, G. Patton, M. Phillips, W. Rutz, Z. Rihmer, A. Schmidtke, D. Shaffer, M. Silverman, Y. Takahashi, A. Varnik, D. Wasserman, P. Yip, and H. Hendin. Suicide prevention strategies: a systematic review. The Journal of the American Medical Association, 294(16):2064-2074, 2005.
[12] M. A. Oquendo, E. B. García, J. J. Mann, and J. Giner. Issues for DSM-V: suicidal behavior as a separate diagnosis on a separate axis. The American Journal of Psychiatry, 165(11):1383-1384, November 2008.
[13] K. Szanto, S. Kalmar, H. Hendin, Z. Rihmer, and J. J. Mann. A suicide prevention program in a region with a very high suicide rate. Archives of General Psychiatry, 64(8):914-920, 2007.
[14] M. Titsias. The infinite gamma-Poisson feature model. Advances in Neural Information Processing Systems (NIPS), 19, 2007.
[15] J. Van Gael, Y. W. Teh, and Z. Ghahramani. The infinite factorial hidden Markov model. In Advances in Neural Information Processing Systems (NIPS), volume 21, 2009.
[16] C. K. I. Williams and D. Barber. Bayesian classification with Gaussian Processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20:1342-1351, 1998.
[17] S. Williamson, C. Wang, K. A. Heller, and D. M. Blei. The IBP Compound Dirichlet Process and its application to focused topic modeling. 11:1151-1158, 2010.
[18] M. A. Woodbury. The stability of out-input matrices. Mathematical Reviews, 1949.
4,228 | 4,827 | Learning about Canonical Views from Internet Image Collections
Elad Mezuman
Interdisciplinary Center for Neural Computation
Edmond & Lily Safra Center for Brain Sciences
Hebrew University of Jerusalem
http://www.cs.huji.ac.il/~mezuman

Yair Weiss
School of Computer Science and Engineering
Edmond & Lily Safra Center for Brain Sciences
Hebrew University of Jerusalem
http://www.cs.huji.ac.il/~yweiss
Abstract
Although human object recognition is supposedly robust to viewpoint, much research on human perception indicates that there is a preferred or 'canonical' view of objects. This phenomenon was discovered more than 30 years ago but the canonical view of only a small number of categories has been validated experimentally.
Moreover, the explanation for why humans prefer the canonical view over other
views remains elusive. In this paper we ask: Can we use Internet image collections
to learn more about canonical views?
We start by manually finding the most common view in the results returned by
Internet search engines when queried with the objects used in psychophysical
experiments. Our results clearly show that the most likely view in the search engine
corresponds to the same view preferred by human subjects in experiments. We also
present a simple method to find the most likely view in an image collection and
apply it to hundreds of categories. Using the new data we have collected we present
strong evidence against the two most prominent formal theories of canonical views
and provide novel constraints for new theories.
1 Introduction
Images of three dimensional objects exhibit a great deal of variation due to viewpoint. Although ideally object recognition should be viewpoint invariant, much research in human perception indicates that certain views are privileged, or 'canonical'. As summarized in Blanz et al. [1] there are at least four senses in which a view can be canonical:

- The viewpoint that is assigned the highest goodness rating by participants
- The viewpoint that is first imagined in visual imagery
- The viewpoint that is subjectively selected as the 'best' photograph taken with a camera
- The viewpoint found to have the lowest response time and error rate in recognition and naming experiments
The seminal work of Palmer, Rosch and Chase [2] suggested that all of these definitions give the same
canonical view. Fig. 1 presents different views of a horse used in their experiments and the average
goodness rating given by human subjects. For the horse, the canonical view is a slightly off-axis
sideways view, while the least favored view is from above. Subsequent psychological research using
slightly different paradigms have mostly supported their conclusions (see [1, 3, 4] for more recent
surveys) and expanded it also to scenes rather than just objects [5].
The preference for side views of horses is very robust and can be reliably demonstrated in simple
classroom experiments [6]. What makes this view special? Palmer et al. suggested two formal
[Figure 1: twelve views of a horse with average goodness ratings 1.60, 1.84, 2.12, 2.80, 3.48, 3.72, 4.12, 4.29, 4.8, 5.56, 5.68, 6.36]
Figure 1: When people are asked to rate images of the same object from different views, some views consistently get better grades than others. The view that gets the best grade is called the canonical view. The images that were used by Palmer et al. [2] for the horse category in their experiments are presented along with their ratings (1 = best, 7 = worst).
theories. The first one, called the frequency hypothesis, argues that the canonical view is the one from which we most often see the object. The second one, called the maximal information hypothesis,
argues that the canonical view is the view that gives the most information about the 3D structure of
the object. This view is related to the concept of stable or non-accidental views, i.e. the object will
look more or less the same under small transformations of the view. Both of these hypotheses lead
to predictions that are testable in principle. If we have access to the statistics with which we view
certain objects, we can compute the most frequent view and given the 3D shape of an object we can
automatically compute the most stable view [7, 8, 9].
Both of these formal theories have been shown to be insufficient to predict the canonical views
preferred by human observers; Palmer et al. [3] presented a small number of counter-examples for
each hypothesis. They concluded with the rather vague explanation that: 'Canonical views appear to provide the perceiver with what might be called the most diagnostic information about the object: the information that best discriminates it from other objects, derived from the views from which it is most often seen' [3].
One reason for the relative vagueness of theories of canonical views may be the lack of data: the
number of objects for which canonical views have been tested in the lab is at most a few dozens. In
this paper, we seek to dramatically increase the number of examples for canonical views using Internet
search engines and computer vision tools. We expect that since the canonical view of an object
corresponds to what people perceive as the "best" photograph, when people include a photograph of
an object in their web page, they are most likely to choose a photograph from the canonical view. In
other words, we expect the canonical view to be the most frequent view in the set of images retrieved
by a search engine when queried for the object.
We start by manually validating our hypothesis and showing that indeed the most frequent view in
Internet image collections often corresponds to the cognitive canonical view. We then present an
automatic method for finding the most frequent view in a large dataset of images. Rather than trying
to map images to views and then finding the most frequent view, we find it by analyzing the density of
global image descriptors. Using images for which we have ground truth, we verify that our automatic
method indeed finds the most frequent view in a large percentage of the cases. We next apply this
method to images retrieved by search engines and find the canonical view for hundreds of categories.
Finally we use the canonical views we find to present strong evidence against the two most prominent
formal theories of canonical views and provide novel constraints for new theories.
Figure 2: The four most frequent views (frequencies specified) manually found in images returned by
Google images (second-fifth rows) often corresponds to the canonical view found in psychophysical
experiments (first row).
2 Manual experiments with Internet image collections
We first asked whether Internet image collections will show the same view biases as reported in
psychophysical experiments. In order to answer this question, we downloaded images of the twelve
categories used by Palmer et al. [2] in their psychophysical experiments. To download these images
we simply queried Google Image search with the object and retrieved the top returned images.
For each category we manually sorted the images into bins corresponding to similar views (each
category could have a different number of bins), counted the number of images in each bin and found
the most frequent view. We used 400 images for the four categories presented in Figure 2 and 100
images for the other eight categories. Figure 2 shows the bins with the highest frequencies along with
their frequencies and the cognitive canonical view for car, horse, shoe, and steaming iron categories.
The results of this manual experiment are clear cut: for 11 out of the 12 categories, the most frequent
view in Google images is the canonical view found by Palmer et al. in the psychophysical experiment
(or its mirror view). The only exception is the horse category for which the most frequent view is the
one that received the second best ratings in the psychophysical experiments (see figure 1).
This study validates our hypothesis that when humans decide which view of an object to embed in a
web page, they exhibit a very similar view bias as is seen in psychophysical experiments. This result
now gives us the possibility to harness the huge numbers of images available on the Internet to study
these view biases in many categories.
3 Can we find the most frequent view automatically?
While the results of the previous section suggests that we can harness Internet image collections,
repeating our manual experiment for many categories is impractical. Can we find the most frequent
view automatically?
In the computer vision literature we can find several methods to find representative images. Simon
et al. [10] showed how clustering Internet photographs of tourist sites can find several "canonical"
views of the site. Clustering on images from the Internet is also used to find canonical views (or
iconic images) in other works e.g. Berg and Berg [11] and Raguram and Lazebnik [12]. The earlier
work of Denton et al. [13] uses similarity measure between images to find a small subset of canonical
images to a larger set of images. The main issue with clustering is that the results depend on the
details of the clustering algorithm (initialization, number of clusters etc.) while we look for a method
that gives a simple, unique solution. We experimented with clustering methods but found that due to
the high variability in our dataset and the difficulty of optimizing the clustering, it was difficult to
reliably find clusters that correspond to the most frequent view. Deselaers and Ferrari [14] present a
simpler method that finds the image in the center of the GIST image descriptor [15] space to select
the prototype image for categories in ImageNet [16]. We experimented with this method and found
that often the prototypical image did not correspond to the most frequent view. Jing et al. [17] suggest
a method to find a single most representative image (canonical image) for a category relying on
similarities between images based on local invariant features. Since they use invariant features the
view of the object in the image has no role in the selection of the canonical image. Weyand and Leibe
[18] use mode estimation to find iconic images for many images of a single scene using a distance
measure based on calculating a homography between the images and measuring the overlap. This
is not suitable for our case where we have images of different instances of the same category, not a
single rigid scene.
Our method to find the most frequent view is based on estimating the density of views using the Parzen window method, and simply choosing the modes of the density as the most frequent views. If we were given the view of each image as input (e.g. its azimuth and elevation) this would be trivial. In that case the estimated density at point x is

$$\hat{f}_\sigma(x) = \frac{1}{n}\sum_{i=1}^{n} K_\sigma(x - x_i),$$

where {x_i}_{i=1}^n are the sample points (x_i is image i, represented using its view) and K_σ(x) = e^{−‖x‖₂²/2σ²}.
In real life, of course, the azimuth and elevation are not given as input for each image. One option is
to try to compute them. This problem, called pose estimation, is widely studied in computer vision
(see [19] for a recent survey for the special case of head poses) and is quite difficult. Here, we
take an alternative approach using an attractive feature of the Parzen estimator - it only requires the
view similarity between any two images, not the actual views. In other words, if we have an image
descriptor so that the distance between descriptors for two images approximates the similarity of
views between the objects, we can calculate the Parzen density without ever computing the views.
We chose to use the 512 dimension GIST descriptor [15] which has previously been used to model
the similarity between images [12, 14, 20, 21]. The descriptor uses Gabor-like filters on the grayscale
image, tuned to 8 orientations at 4 different scales and the average square output on a 4x4 grid for
each is its output. This descriptor is pose variant (which is good for our application) but also sensitive
to the background (which is bad). We hypothesize that despite this sensitivity to the background, the
maximum of the Parzen density when we use GIST similarity between images will serve as a useful
proxy for the maximum of the Parzen density when we use view similarity.
3.1 Our method
In summary, given an object category our algorithm automatically finds the modes of the GIST
distribution in images of that object. However, these modes in the GIST distribution are only
approximations to the modes of the view distribution. Our method therefore also includes a manual
phase which requires a human to view the output of the algorithm and to verify whether or not this
mode in the GIST distribution actually corresponds to a mode in the view distribution.
In the automatic phase we download images for the category (e.g using Google), remove duplicate
images and create GIST descriptors for each image. Next we find the two first modes in the GIST
space using Parzen window. The first mode is simply the most frequent image in the GIST space and
its k closest neighbors. The second mode is the most frequent image that is not a close neighbor of
the first most frequent image (e.g. not one of its 10% closest neighbors) and its k closest neighbors.
For each mode we create a collage of images representing it and this is the output of the first phase
(see fig. 4 for example collages). In the second phase a human is required to glance at each collage
and to decide if most of the images are from the same view; i.e. a human observer verifies whether
the output of the algorithm corresponds to a true point of high density in view space. To validate this
second phase, we have conducted several experiments with synthetic images, where the true view
distribution is known. We found that when a human verifies that a set of images that are modes in the
GIST space are indeed of the same view, then in almost all cases these images are indeed the modes
in view space. These experiments are discussed in the supplementary material.
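A hedged sketch of the automatic phase (our illustration; the GIST descriptors are assumed to be computed beforehand by an external implementation):

```python
import numpy as np

def find_modes(G, sigma, k=16, exclude_frac=0.10):
    """G: (n, 512) array of GIST descriptors, one per image.
    Returns the indices forming the two collages: each mode image
    together with its k closest neighbors in GIST space."""
    sq = (G ** 2).sum(axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (G @ G.T), 0.0)
    Kmat = np.exp(-D2 / (2.0 * sigma ** 2))
    density = Kmat.mean(axis=1)          # Parzen estimate at each image
    first = int(np.argmax(density))
    # second mode: densest image outside the first mode's neighborhood
    n_excl = max(1, int(exclude_frac * len(G)))
    density2 = density.copy()
    density2[np.argsort(D2[first])[:n_excl]] = -np.inf
    second = int(np.argmax(density2))
    collage1 = np.argsort(D2[first])[:k + 1]   # mode image + k neighbors
    collage2 = np.argsort(D2[second])[:k + 1]
    return collage1, collage2
```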
Figure 3: By using Parzen density estimation on GIST features, we are able to find the most frequent view without calculating the view for a given image. (a) Distribution of views for 715 images of Notre Dame Cathedral in Paris, adapted from [22]. (b) Random image from this dataset. The image from the most frequent view (c) is the same as the image with the most frequent GIST descriptor (d).
Although the second phase of our method does require human intervention, it requires only a few
seconds. This is much less painful than requiring a human to look at all retrieved images which can
take a few hours (the automatic part of the method, that finds the modes in GIST space, takes a few
seconds of computer time).
3.2
Validation
As mentioned above, the main assumption behind our method is that GIST similarity can serve as
a proxy for true view similarity. In order to test this assumption, we conducted experiments on
datasets where we knew the ground truth distribution of views. In the first experiment, we ran our
automatic method on the same images that we manually sorted into views in the previous section:
images downloaded from Google image search for the twelve categories used by Palmer et al. in their
psychophysical experiments. Results are shown in figure 4. We find that in 10 out of 12 categories
our automatic method found the same most frequent view as we found manually.
In a second experiment, we used the Notre Dame dataset of PhotoTourism [22]. This is a dataset of 715 images of the Notre Dame cathedral taken with consumer cameras. The location of each camera was calculated using bundle adjustment [22]. On this dataset, we calculated the most frequent view using Parzen density estimation in two different ways: (1) using the similarity between the camera's rotation matrices and (2) using the GIST similarity between images. As shown in figure 3 the most frequent view calculated using the two methods was identical.
3.3 Control
As can be seen in figure 4, the most frequent view chosen by our method often has a white, or uniform, background. Can a method that simply chooses images with uniform backgrounds also find canonical views? We checked, and this is not the case: among images with smooth backgrounds there is still a large variation in views.
Another possible artifact we considered is the source of the dataset. We wanted to verify that we indeed find a global character of the image collections and not a local character of Google. We used our method also on images from ImageNet [16] and Yahoo image search. The ImageNet images were collected by querying various Internet search engines with the desired object, and the resulting set of images was then 'cleaned up' by humans. It is important to note that the humans were not instructed to choose particular views but rather to verify that the image contained the desired object. For a subset
of the images, ImageNet also supplies bounding boxes around the object of interest; we cropped the
objects from the images and considered it as a fourth dataset. There were almost no repeating images
between Google, ImageNet and Yahoo datasets. We saw that our method finds preferred views also
in the other datasets and that these preferred views are usually the cognitive canonical views. We also
saw that using bounding boxes improves the results somewhat. One example of this improvement is
the horse category for which we did not find the most frequent view using the the full images but did
find it when we used the cropped images.
Results for these control experiment are shown in the supplementary material.
[Figure 4 columns: Random | Palmer | First mode]
Figure 4: Results on categories we downloaded from Google for which the canonical view was found in Palmer et al. experiments. The collages in the third column are of the first mode of the GIST distribution; the first (top left) image is the most frequent image found, and the rest of the images are ordered by their closeness (GIST distance) to the most frequent view.
Figure 5: Our experiments reveal hundreds of counter-examples against the two most formal theories
of canonical views. Prototypical counter-examples found in our experiments for (a-d) the frequency
and (e-i) the maximal information hypotheses.
4 What can we learn from hundreds of canonical views?
To summarize our validation experiments: although we use GIST similarity as a proxy for view
similarity, our method often finds the canonical view. We now turn to use our method on a large
number of categories. We used our method to find canonical views for two groups of object categories: (1) 54 categories inspired by the work of Rosch et al. [23], in which human recognition for categories in different levels of abstraction was studied (Rosch's categories). (2) 552 categories of mammals (all the categories of mammals in ImageNet [16] for which there are bounding boxes around the objects); for these categories we used the cropped objects.

For every object category tested we downloaded all corresponding images (on average more than 1,200 images, out of them around 300 with bounding boxes) from ImageNet. The σ parameter for the RBF kernel window was fixed for each group of categories and was chosen manually (i.e. we used the same parameter for all the 552 mammal categories but a different one for the Google categories where the data is more noisy). For Rosch's categories we used full images since for some of them bounding
modes found by our algorithm were indeed verified by a human observer as representing a true mode
in view space. Thus while our method does not succeed in finding preferred views for all categories,
by focusing only on the categories for which humans verified that preferred views were found, we
still have canonical views for hundreds of categories. What can we learn from these canonical views?
4.1 Do the basic canonical view theories hold?
Palmer et al. [2] raised two basic theories to explain the phenomenon of canonical views: (1) the
frequency hypothesis and (2) the maximal information hypothesis. Our experiments reveal hundreds
of counter-examples against both theories. We find canonical views of animals that are from the animals' height rather than ours (fig. 5a-b); dogs, for example, are usually seen from above while many of the canonical views we find for dogs are from their height. The canonical views of vehicles are another counter-example for the frequency hypothesis: we usually see vehicles from the side (as pedestrians) or from behind (as drivers), but the canonical views we find are the 'perfect' off-axis view (fig. 5a-b). As a third family of examples we have the tools; we usually see them when we use
them, this is not the canonical view we find (fig. 5d). For the maximal information hypothesis we
find hundreds of counter-examples. While for 20% of the categories we find off-axis canonical views
that give the most information about the shape of the object, for more than 60% of the categories
we find canonical views that are either side-views (fig. 5f,i) or frontal views (especially views of the
face - fig. 5g). Not only do these views not give us the full information about the 3D structure of
the object, they are also accidental, i.e. a small change in the view will cause a big change of the
appearance of the object; for example in some of the side-views we see only two legs out of four, a
small change in the view will reveal the two other legs.
4.2 Constraints for new theories
We believe that our experiments reveal several robust features of canonical views that every future theory should take into consideration. The first is that there are several preferred views for a given object. Sometimes these views are related by symmetry (e.g., a mirror image of the preferred view is also preferred), but in other cases they are different views that are just slightly less preferred than the canonical view (e.g., both the off-axis and the side view). Another thing we find is that for images of animals there is a strong preference for photographing just the face (compared to
Figure 6: Selected collages of the automatic method (columns: Random Set, First Mode) for the categories finback, cavy, rhinoceros, uakari, Persian cat, pickup, and motor vehicle.
Palmer's result on the horse, where a view just of the face was not given as an option and was hence not preferred). The preference for faces depends on the type of animal (e.g., we find it much more for cats and apes than for big animals like horses). When an animal has very unique features, photographs that include this feature are often preferred. Finally, the view biases are most pronounced for basic and subordinate level categories and less so for superordinate categories (e.g., see motor vehicle in fig. 6). While many of these findings are consistent with the vague theory that "canonical views appear to provide the perceiver with what might be called the most diagnostic information about the object", we hope that our experimental data with hundreds of categories will enable formalizing these notions into a computational theory.
5 Conclusion
In this work we revisited a cognitive phenomenon that was discovered over 30 years ago: a preference
by human observers for particular "canonical" views of objects. We showed that a nearly identical
view bias can be observed in the results of Internet image search engines, suggesting that when
humans decide which image to embed in a web page, they prefer the same canonical view that is
assigned highest goodness in laboratory experiments. We presented an automatic method to discover
the most likely view in an image collection and used this algorithm to obtain canonical views for
hundreds of object categories. Our results provide strong counter-examples for the two formal
hypotheses of canonical views; we hope they will serve as a basis for a computational explanation for
this fascinating effect.
Acknowledgments
This work has been supported by the Charitable Gatsby Foundation and the ISF. The authors wish to
thank the anonymous reviewers for their helpful comments.
References
[1] V. Blanz, M.J. Tarr, H.H. Bülthoff, and T. Vetter. What object attributes determine canonical views? Perception, 28:575-600, 1999.
[2] S. Palmer, E. Rosch, and P. Chase. Canonical perspective and the perception of objects. Attention and Performance IX, pages 135-151, 1981.
[3] S.E. Palmer. Vision Science: Photons to Phenomenology, volume 2. MIT Press, Cambridge, MA, 1999.
[4] H.H. Bülthoff and S. Edelman. Psychophysical support for a two-dimensional view interpolation theory of object recognition. Proceedings of the National Academy of Sciences of the United States of America, 89(1):60, 1992.
[5] K.A. Ehinger and A. Oliva. Canonical views of scenes depend on the shape of the space. CogSci, 2011.
[6] A. Torralba. Lecture notes on explicit and implicit 3d object models. http://people.csail.mit.edu/torralba/courses/6.870/slides/lecture4.ppt.
[7] D. Weinshall and M. Werman. On view likelihood and stability. IEEE Trans. Pattern Anal. Mach. Intell.
[8] W.T. Freeman. The generic viewpoint assumption in a framework for visual perception. Nature, 368(6471).
[9] P.M. Hall and M.J. Owen. Simple canonical views. In The British Machine Vision Conf. (BMVC05), volume 1, pages 7-16, 2005.
[10] I. Simon, N. Snavely, and S.M. Seitz. Scene summarization for online image collections. In Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on.
[11] T.L. Berg and A.C. Berg. Finding iconic images. In CVPR Workshops 2009.
[12] R. Raguram and S. Lazebnik. Computing iconic summaries of general visual concepts. In Computer Vision and Pattern Recognition Workshops, 2008. CVPRW'08. IEEE Computer Society Conference on, pages 1-8. IEEE, 2008.
[13] T. Denton, M.F. Demirci, J. Abrahamson, A. Shokoufandeh, and S. Dickinson. Selecting canonical views for view-based 3-D object recognition. In ICPR 2004.
[14] T. Deselaers and V. Ferrari. Visual and semantic similarity in ImageNet. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1777-1784. IEEE, 2011.
[15] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3):145-175, 2001.
[16] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR09, 2009.
[17] Y. Jing, S. Baluja, and H. Rowley. Canonical image selection from the web. In Proceedings of the 6th ACM International Conference on Image and Video Retrieval, pages 280-287. ACM, 2007.
[18] T. Weyand and B. Leibe. Discovering favorite views of popular places with iconoid shift. In International Conference on Computer Vision (ICCV), 2011 IEEE Conference on. IEEE, 2011.
[19] E. Murphy-Chutorian and M.M. Trivedi. Head pose estimation in computer vision: A survey. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(4):607-626, 2009.
[20] M. Douze, H. Jégou, H. Sandhawalia, L. Amsaleg, and C. Schmid. Evaluation of GIST descriptors for web-scale image search. In Proceeding of the ACM International Conference on Image and Video Retrieval, page 19. ACM, 2009.
[21] J. Xiao, J. Hays, K.A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR 2010.
[22] N. Snavely, S.M. Seitz, and R. Szeliski. Photo tourism: Exploring photo collections in 3d. In ACM Transactions on Graphics (TOG), volume 25, pages 835-846. ACM, 2006.
[23] E. Rosch, C.B. Mervis, W.D. Gray, D.M. Johnson, and P. Boyes-Braem. Basic objects in natural categories. Cognitive Psychology, 8(3):382-439, 1976.
4,229 | 4,828 | Transelliptical Component Analysis
Han Liu
Department of Operations Research
and Financial Engineering
Princeton University, NJ 08544
[email protected]
Fang Han
Department of Biostatistics
Johns Hopkins University
Baltimore, MD 21210
[email protected]
Abstract
We propose a high dimensional semiparametric scale-invariant principal component analysis, named TCA, by utilizing the natural connection between the elliptical distribution family and principal component analysis. The elliptical distribution family includes many well-known multivariate distributions, like the multivariate Gaussian, t and logistic, and it was extended to the meta-elliptical family by Fang et al. (2002) using copula techniques. In this paper we extend the meta-elliptical distribution family to an even larger family, called transelliptical. We prove that TCA can obtain a near-optimal s√(log d/n) estimation consistency rate in recovering the leading eigenvector of the latent generalized correlation matrix under the transelliptical distribution family, even if the distributions are very heavy-tailed, have infinite second moments, do not have densities, and possess arbitrarily continuous marginal distributions. A feature selection result with explicit rate is also provided. TCA is further implemented on both numerical simulations and large-scale stock data to illustrate its empirical usefulness. Both theories and experiments confirm that TCA can achieve model flexibility, estimation accuracy and robustness at almost no cost.
1 Introduction
Given x1, . . . , xn ∈ R^d as n i.i.d. realizations of a random vector X ∈ R^d with population covariance matrix Σ and correlation matrix Σ0, Principal Component Analysis (PCA) aims at recovering the top m leading eigenvectors u1, . . . , um of Σ. In practice, Σ is unknown and the top m leading eigenvectors û1, . . . , ûm of the Pearson sample covariance matrix are used as the estimators. However, because PCA is well known to be scale-variant, meaning that changing the measurement scale of the variables makes the estimators different, PCA conducted on the sample correlation matrix is also common in the literature [2]. It aims at recovering the top m leading eigenvectors θ1, . . . , θm of Σ0 using the top m leading eigenvectors θ̂1, . . . , θ̂m of the Pearson sample correlation matrix. Because Σ0 is scale-invariant, we call the PCA aiming at recovering the eigenvectors of Σ0 the scale-invariant PCA.
In high dimensional settings, when d scales with n, it has been discussed in [14] that û1 and θ̂1 are generally not consistent estimators of u1 and θ1. For any two vectors v1, v2 ∈ R^d, denote the angle between v1 and v2 by ∠(v1, v2). [14] proved that ∠(u1, û1) and ∠(θ1, θ̂1) do not converge to zero. Therefore, it is commonly assumed that θ1 = (θ11, . . . , θ1d)^T is sparse, meaning that card(supp(θ1)) := card({θ1j : θ1j ≠ 0}) = s < n. This results in a variety of sparse PCA procedures. Here we note that supp(uj) = supp(θj), for j = 1, . . . , d.
The elliptical distributions are of special interest in Principal Component Analysis. The study of elliptical distributions and their extensions was launched in statistics by [4]. The elliptical distributions can be characterized by their stochastic representations [5]. A random vector Z = (Z1, . . . , Zd)^T is said to follow an elliptical distribution, or to be elliptically distributed with parameters μ, Σ ⪰ 0, and rank(Σ) = q, if it admits the stochastic representation Z =_d μ + ξAU, where μ ∈ R^d, ξ ≥ 0 and U ∈ R^q are independent random variables, U is uniformly distributed on the unit sphere in R^q, and A ∈ R^{d×q} is a fixed matrix such that AA^T = Σ. We call ξ the generating variable. The density of Z does not necessarily exist. The elliptical distribution family includes a variety of famous multivariate distributions: multivariate Gaussian, multivariate Cauchy, Student's t, logistic, Kotz, and symmetric Pearson type-II and type-VII distributions. We refer to [3, 5] and [4] for more details.
[4] introduced the term meta-elliptical distribution, extending the continuous elliptical distributions whose densities exist to a wider class of distributions possessing densities. The construction of the meta-elliptical distributions is based on the copula technique, which was initially introduced by [25]. In particular, when the latent elliptical distribution is the multivariate Gaussian, we have the meta-Gaussian or the nonparanormal distributions introduced by [16] and [19].
The elliptical distribution is of special interest in Principal Component Analysis (PCA). It has been shown in a variety of works [27, 11, 22, 12, 24] that PCA conducted on elliptical distributions shares a number of good properties enjoyed by PCA conducted on the Gaussian distribution. In particular, [11] show that, with regard to a range of hypotheses relevant to PCA, tests based on a multivariate Gaussian assumption have identical power for all elliptical distributions, even without second moments. We will utilize this connection to construct a new model in this paper.
In this paper, a new high dimensional scale-invariant principal component analysis approach is proposed, named Transelliptical Component Analysis (TCA). Firstly, to achieve both estimation accuracy and model flexibility, we build the model of TCA on the transelliptical distributions. A random vector X = (X1, . . . , Xd)^T is said to follow a transelliptical distribution if there exists a set of univariate strictly monotone functions f = {f_j}_{j=1}^d such that f(X) := (f1(X1), . . . , fd(Xd))^T follows a continuous elliptical distribution with parameters μ = 0 and Σ0 = [Σ0_jk] ⪰ 0. Here diag(Σ0) = 1. Transelliptical distributions do not necessarily possess densities and are strict extensions of the meta-elliptical distributions defined in [4]. TCA aims at recovering the top m leading eigenvectors θ1, . . . , θm of Σ0.
Secondly, to estimate Σ0 robustly and efficiently, instead of estimating the transformation functions {f̂_j}_{j=1}^d of {f_j}_{j=1}^d as [19] did, realizing that {f_j}_{j=1}^d preserve the ranks of the data, we utilize the nonparametric rank-based correlation coefficient estimator, Kendall's tau, to estimate Σ0. We prove that even if the generating variable ξ changes and the marginal distributions are arbitrarily continuous, the Kendall's tau correlation matrix approximates Σ0 at a parametric rate O_P(√(log d/n)). This key observation makes Kendall's tau a better statistic than the Pearson sample correlation matrix with regard to a much larger distribution family than the Gaussian.
Thirdly, in terms of methodology and theory, we analyze the general case where X follows a transelliptical distribution and θ1 is sparse. Here θ1 is the leading eigenvector of Σ0. We obtain the TCA estimator θ̃1* of θ1 utilizing the Kendall's tau correlation matrix. We prove that TCA can obtain a fast convergence rate in terms of parameter estimation, of the form sin ∠(θ1, θ̃*) = O_P(s√(log d/n)), where θ̃* is the estimator TCA obtains. A feature selection consistency result with explicit rate is also provided.
2 Background
We start with notations. Let M = [M_jk] ∈ R^{d×d} and v = (v1, ..., vd)^T ∈ R^d. Let v's subvector with entries indexed by I be denoted by v_I, and M's submatrix with rows indexed by I and columns indexed by J be denoted by M_IJ. Let M_I• and M_•J be the submatrix of M with rows in I and all columns, and the submatrix of M with columns in J and all rows. For 0 < q < ∞, we define the ℓ0, ℓq and ℓ∞ vector norms as

‖v‖_0 := card(supp(v)), ‖v‖_q := (∑_{i=1}^d |v_i|^q)^{1/q} and ‖v‖_∞ := max_{1≤i≤d} |v_i|.

We define the matrix ℓ_max norm as the elementwise maximum value ‖M‖_max := max{|M_ij|}, and the ℓ∞ norm as ‖M‖_∞ := max_{1≤i≤m} ∑_{j=1}^n |M_ij|. Let λ_j(M) be the j-th largest eigenvalue of M. In particular, λ_min(M) := λ_d(M) and λ_max(M) := λ_1(M) are the smallest and largest eigenvalues of M. The vectorized matrix of M, denoted by vec(M), is defined as vec(M) := (M_•1^T, . . . , M_•d^T)^T. Let S^{d−1} := {v ∈ R^d : ‖v‖_2 = 1} be the d-dimensional unit sphere. The sign =_d denotes that the two sides of the equality have the same distribution. For any two vectors a, b ∈ R^d and any two square matrices A, B ∈ R^{d×d}, denote the inner products of a and b, and of A and B, by ⟨a, b⟩ := a^T b and ⟨A, B⟩ := Tr(A^T B).
2.1 Elliptical and Transelliptical Distributions
This section is devoted to a brief discussion of elliptical and transelliptical distributions. In the
sequel, to be clear, a random vector X = (X1 , . . . , Xd )T is said to be continuous if the marginal
distribution functions are all continuous.
2.1.1 Elliptical Distributions
In this section we first provide a definition of the elliptical distributions, following [5].
Definition 2.1. Given μ ∈ R^d and Σ ∈ R^{d×d}, where rank(Σ) = q ≤ d, a random vector Z = (Z1, . . . , Zd)^T is said to have an elliptical distribution, or to be elliptically distributed with parameters μ and Σ, if and only if Z has a stochastic representation Z =_d μ + ξAU, where μ ∈ R^d, A ∈ R^{d×q}, AA^T = Σ, ξ ≥ 0 is a random variable independent of U, and U ∈ S^{q−1} is uniformly distributed on the unit sphere in R^q. In this setting we denote Z ∼ EC_d(μ, Σ, ξ).
A random variable in R with continuous marginal distribution function does not necessarily possess a density. A well-known example is the Cantor distribution, whose support is the Cantor set. We refer to [7] for more discussion of this phenomenon. Σ is symmetric and positive semi-definite, but not necessarily positive definite.
Proposition 2.1. A random vector Z = (Z1, . . . , Zd)^T has the stochastic representation Z ∼ EC_d(μ, Σ, ξ) if and only if Z has the characteristic function exp(it^T μ)φ(t^T Σt), where φ is a properly-defined characteristic function. We denote Z ∼ EC_d(μ, Σ, φ). If ξ is absolutely continuous and Σ is non-singular, then the density of Z exists and is of the form p_Z(z) = |Σ|^{−1/2} g((z − μ)^T Σ^{−1} (z − μ)), where g : [0, ∞) → [0, ∞). We denote Z ∼ EC_d(μ, Σ, g).
A proof can be found on page 42 of [5]. When the density exists, ξ, Σ and g determine one another uniquely. The relationships among ξ, Σ and g are described in Theorem 2.2 and Theorem 2.9 of [5]. The next proposition states that μ, Σ, ξ and A are not unique.
Proposition 2.2 (Theorem 2.15 of [5]). (i) If Z = μ + ξAU and Z = μ* + ξ*A*U*, where A ∈ R^{d×q} and A* ∈ R^{d×q}, and Z is continuous, then there exists a constant c > 0 such that μ* = μ, A*A*^T = cAA^T, ξ* = c^{−1/2}ξ. (ii) If Z ∼ EC_d(μ, Σ, φ) and Z ∼ EC_d(μ*, Σ*, φ*), and Z is continuous, then there exists a constant c > 0 such that μ* = μ, Σ* = cΣ, φ*(·) = φ(c^{−1}·).
The next proposition discusses the cases where (μ, Σ, ξ) is identifiable for Z.
Proposition 2.3. If Z ∼ EC_d(μ, Σ, ξ) is continuous with rank(Σ) = q, then (1) P(ξ = 0) = 0; (2) Σ_ii > 0 for i ∈ {1, . . . , d}; (3) (μ, Σ, ξ) is identifiable for Z under the constraint that max(diag(Σ)) = 1.
We define Σ0 = [Σ0_jk] with Σ0_jk = Σ_jk / √(Σ_jj Σ_kk) to be the generalized correlation matrix of Z. Σ0 is the correlation matrix of Z when Z's second moment exists, and it still reflects the rank dependency even when Z has an infinite second moment [13].
2.1.2 Transelliptical Distributions
To extend the elliptical distribution, we first define two sets of symmetric matrices: R+_d = {Σ ∈ R^{d×d} : Σ^T = Σ, diag(Σ) = 1, Σ ≻ 0} and R_d = {Σ ∈ R^{d×d} : Σ^T = Σ, diag(Σ) = 1, Σ ⪰ 0}.
Definition 2.2. A random vector X = (X1, . . . , Xd)^T with continuous marginal distribution functions F1, . . . , Fd and an existing density is said to follow a meta-elliptical distribution if and only if there exists a continuous elliptically distributed random vector Z ∼ EC_d(0, Σ0, g) with marginal distribution function Q_g and Σ0 ∈ R+_d, such that (Q_g^{−1}(F1(X1)), . . . , Q_g^{−1}(Fd(Xd)))^T =_d Z.
In this paper, we generalize the meta-elliptical distribution family to a broader class, named the transelliptical. The transelliptical distributions do not assume that densities exist for either X or Z, and are therefore strict extensions of the meta-elliptical distributions.
Definition 2.3. A random vector X = (X1, . . . , Xd)^T is said to follow a transelliptical distribution if and only if there exists a set of strictly monotone functions f = {f_j}_{j=1}^d and a latent continuous elliptically distributed random vector Z ∼ EC_d(0, Σ0, ξ) with Σ0 ∈ R_d, such that (f1(X1), . . . , fd(Xd))^T =_d Z. We write X ∼ TE_d(Σ0, ξ; f1, . . . , fd) and call Σ0 the latent generalized correlation matrix.
Proposition 2.4. If X follows a meta-elliptical distribution, in other words, X possesses a density, has continuous marginal distributions F1, . . . , Fd, and there is a continuous random vector Z ∼ EC_d(0, Σ0, g) such that (Q_g^{−1}(F1(X1)), . . . , Q_g^{−1}(Fd(Xd)))^T =_d Z, then we have X ∼ TE_d(Σ0, ξ; Q_g^{−1}(F1), . . . , Q_g^{−1}(Fd)).
To be more clear, the transelliptical distribution family is strictly larger than the meta-elliptical distribution family in three senses: (i) the generating variable ξ of the latent elliptical distribution is not necessarily absolutely continuous for transelliptical distributions; (ii) the parameter set of Σ0 is strictly enlarged from R+_d to R_d; (iii) the marginal distributions of X do not necessarily possess densities.
The term meta-Gaussian (or the nonparanormal) was introduced by [16, 19]. The term meta-elliptical copula was introduced in [6]; this is actually an alternative definition of the meta-elliptical distribution. The term elliptical copula was introduced in [18]. In summary,
transelliptical ⊃ meta-elliptical = meta-elliptical copula ⊃ elliptical* ⊃ elliptical copula,
transelliptical ⊃ meta-Gaussian = nonparanormal.
Here elliptical* represents the elliptical distributions which are continuous and possess densities.
2.2 Latent Correlation Matrix Estimation for Transelliptical Distributions
We first study the correlation and covariance matrices of elliptical distributions. Given Z ∼ EC_d(μ, Σ, ξ), we explore the relationship between the moments of Z and the parameters μ and Σ.
Proposition 2.5. Given Z ∼ EC_d(μ, Σ, ξ) with rank(Σ) = q and finite second moments, and Σ0 the generalized correlation matrix of Z, we have E(Z) = μ, Var(Z) = (E(ξ²)/q) Σ, and Cor(Z) = Σ0.
When the random vector is elliptically distributed with finite second moment, the sample mean and correlation matrices are element-wise consistent estimators of μ and Σ0. However, the elliptical distributions are generally very heavy-tailed (multivariate t or Cauchy distributions, for example), making the Pearson sample correlation matrix a bad estimator. When the distribution family is extended to the transelliptical, the Pearson sample correlation matrix is generally no longer an element-wise consistent estimator of Σ0. A similar 'plug-in' idea as in [6] works when ξ is known. In the general case when ξ is unknown, the 'plug-in' idea itself is unavailable.
3 The TCA
In this section we propose the TCA approach. TCA is a two-stage method for estimating the leading eigenvectors of Σ0. Firstly, we estimate the Kendall's tau correlation matrix R̂. Secondly, we plug R̂ into a sparse PCA algorithm.
3.1 Rank-based Measures of Associations
The main idea of the TCA is to exploit the Kendall's tau statistic to estimate the generalized correlation matrix Σ0 efficiently and robustly. In detail, let X = (X1, . . . , Xd)^T be a d-dimensional random vector with marginal distributions F1, . . . , Fd and joint distribution F_jk for the pair (Xj, Xk). The population Spearman's rho and Kendall's tau correlation coefficients are given by

ρ(Xj, Xk) = Corr(Fj(Xj), Fk(Xk)),
τ(Xj, Xk) = P((Xj − X̃j)(Xk − X̃k) > 0) − P((Xj − X̃j)(Xk − X̃k) < 0),

where (X̃j, X̃k) is an independent copy of (Xj, Xk). In particular, for Kendall's tau we have the following theorem, which states an explicit relationship between τ_jk and Σ0_jk given X ∼ TE_d(Σ0, ξ; f1, . . . , fd), no matter what the generating variable ξ is. This is a strict extension of [4]'s result on the meta-elliptical distribution family.
Theorem 3.1. Given X ∼ TE_d(Σ0, ξ; f1, . . . , fd) transelliptically distributed, we have

Σ0_jk = sin( (π/2) τ(Xj, Xk) ).   (3.1)
Remark 3.1. Although the conclusion in Theorem 3.1 of [4] is correct, the proof provided is wrong or at least very ambiguous. Theorem 2.22 in [5] builds the result only for one-sample statistics and cannot be generalized to multi-sample statistics, like Kendall's tau or Spearman's rho. Therefore, we provide a new and clear version here. Detailed proofs can be found in the long version of this paper [8].
Spearman's rho depends not only on Σ0 but also on the generating variable ξ. When X follows a multivariate Gaussian, [17] proves that ρ(Xj, Xk) = (6/π) arcsin(Σ0_jk / 2). On the other hand, when X ∼ TE_d(Σ0, ξ; f1, . . . , fd) with ξ =_d 1, [10] proves that ρ(Xj, Xk) = 3(arcsin Σ0_jk / π) − 4(arcsin Σ0_jk / π)³.
To estimate τ(Xj, Xk), let x1, . . . , xn be n independent realizations of X, where xi = (xi1, . . . , xid)^T. We consider the following rank-based statistic:

τ̂_jk = (2 / (n(n − 1))) ∑_{1≤i<i'≤n} sign((x_{ij} − x_{i'j})(x_{ik} − x_{i'k})), if j ≠ k;
τ̂_jk = 1, if j = k,   (3.2)
which approximates τ(Xj, Xk) and measures the association between Xj and Xk. We define the Kendall's tau correlation matrix R̂ = [R̂_jk] such that R̂_jk = sin((π/2) τ̂_jk).
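To make the estimator concrete, the following is a direct but naive O(n²d²) transcription of Equation (3.2) and the sine transform; a practical implementation would instead use scipy.stats.kendalltau or an O(n log n) merge-sort variant. This code is our own sketch, not from the paper.

    import numpy as np
    from itertools import combinations

    def kendall_correlation_matrix(X):
        # X: (n, d) data matrix; returns R_hat with R_hat[j, k] = sin((pi/2) * tau_hat[j, k]).
        n, d = X.shape
        tau = np.eye(d)
        for j, k in combinations(range(d), 2):
            s = 0.0
            for i, i2 in combinations(range(n), 2):   # all pairs 1 <= i < i' <= n
                s += np.sign((X[i, j] - X[i2, j]) * (X[i, k] - X[i2, k]))
            tau[j, k] = tau[k, j] = 2.0 * s / (n * (n - 1))
        return np.sin(np.pi / 2.0 * tau)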
3.2 Methods
The elliptical distribution is of special interest in Principal Component Analysis (PCA). It has been shown in a variety of works [27, 11, 22, 12, 24] that PCA conducted on elliptical distributions shares a number of good properties enjoyed by PCA conducted on the Gaussian distribution. We will utilize this connection to construct a new model in this paper.
3.2.1 TCA Model
Utilizing the natural relationship between elliptical distributions and PCA, we propose the model of Transelliptical Component Analysis (TCA), exploiting the ideas of the transelliptical distribution family and scale-invariant PCA. We wish to estimate the leading eigenvector of the latent generalized correlation matrix. In particular, the following model M_d(Σ0, ξ, s; f) with f = {f_j}_{j=1}^d is considered:

M_d(Σ0, ξ, s; f):  X ∼ TE_d(Σ0, ξ; f1, . . . , fd),  ‖θ1‖_0 = s,   (3.3)

where θ1 is the leading eigenvector of the latent generalized correlation matrix Σ0 that we are interested in estimating. By spectral decomposition, we write Σ0 = ∑_{j=1}^d λ_j θ_j θ_j^T, where λ1 ≥ λ2 ≥ . . . ≥ λd ≥ 0 and λ1 > 0 to make Σ0 non-degenerate. Here θ1, . . . , θd ∈ S^{d−1} are the eigenvectors corresponding to λ1, . . . , λd. Inspired by the model M_d(Σ0, ξ, s; f), it is natural to consider the following optimization problem:

θ̃1* = argmax_{v ∈ R^d} v^T R̂ v,  subject to v ∈ S^{d−1} ∩ B_0(s),   (3.4)

where B_0(s) := {v ∈ R^d : ‖v‖_0 ≤ s} and R̂ is the estimated Kendall's tau correlation matrix. The corresponding global optimum is denoted by θ̃1*.
3.2.2 TCA Algorithm
Generally we can plug the Kendall's tau correlation matrix R̂ into any of the sparse PCA algorithms listed above. In this paper, to approximate θ1, we consider using the Truncated Power method (TPower) proposed by [28] and [20]. The main idea of TPower is to use the power method, but truncate the vector to an ℓ0 ball with radius k in each iteration. Detailed algorithms are provided in the long version of this paper [8]. The final estimator is denoted by θ̃* with ‖θ̃*‖_0 = k. It will be shown in Sections 4 and 5 that the Kendall's tau correlation matrix is a better statistic for estimating the correlation matrix than the Pearson sample correlation matrix, in the sense that (i) it enjoys the Gaussian parametric rate in a much larger distribution family, including many distributions with heavy tails; and (ii) it is a more robust estimator, i.e., resistant to outliers.
We use the iterative deflation method to learn the first k instead of only the first leading eigenvector, following the discussions of [21, 15, 28, 29]. In detail, a matrix Θ̂ ∈ R^{d×d} is deflated by a vector v ∈ R^d to produce a new matrix Θ̂' := (I − vv^T) Θ̂ (I − vv^T). In this way, Θ̂' is orthogonal to v.
4 Theoretical Properties
In this section the theoretical properties of the TCA estimators are provided. In particular, we are interested in the high dimensional case where d > n.
4.1 Rank-based Correlation Matrix Estimation
This section is devoted to the concentration of the Kendall sample correlation matrix R̂ around the Pearson correlation matrix Σ0. The ℓ_max convergence rate of R̂ is provided in the next theorem.
Theorem 4.1. Given x1, . . . , xn, n independent realizations of X ∼ TE_d(Σ0, ξ; f1, . . . , fd), and letting R̂ be the Kendall's tau correlation matrix, we have, with probability at least 1 − d^{−5/2},

‖R̂ − Σ0‖_max ≤ 3π √(log d / n).   (4.1)

Proof sketch. Theorem 4.1 can be proved by realizing that τ̂_jk is an unbiased estimator of τ(Xj, Xk) and is a U-statistic of size 2. Hoeffding's inequality for U-statistics can then be applied to obtain the result. Detailed proofs can be found in the long version of this paper [8].
4.2 TCA Estimators
This section is devoted to the statement of our main result, an upper bound on the estimation error of the TCA global optimum θ̃1* and the TPower solver θ̃*. We assume that the model M_d(Σ0, ξ, s; f) holds; the next theorem provides an upper bound on the angle between the estimated leading eigenvector θ̃1* and the true leading eigenvector θ1.
Theorem 4.2. Let θ̃1* be the global solution to Equation (3.4) and suppose the model M_d(Σ0, ξ, s; f) holds. For any two vectors v1 ∈ S^{d−1} and v2 ∈ S^{d−1}, letting

|sin ∠(v1, v2)| = √(1 − (v1^T v2)²),

we have

P( |sin ∠(θ̃1*, θ1)| ≤ (6π / (λ1 − λ2)) · s √(log d / n) ) ≥ 1 − d^{−5/2}.   (4.2)

Proof sketch. The key idea of the proof is to utilize the ℓ_max norm convergence result of R̂ to Σ0. Detailed proofs can be found in the long version of this paper [8].
Generally, when s, λ1 and λ2 do not scale with (n, d), the rate is O_P(√(log d/n)), which matches the parametric rate obtained by [20, 26, 23]. When (n, d) go to infinity, the two leading eigenvalues λ1 and λ2 will typically go to infinity, and will at least stay away from zero. Hence, the rate shown in Theorem 4.2 will usually be better than the seemingly more common rate (6πλ1 / (λ1 − λ2)) · s √(log d / n).
Corollary 4.1 (Feature Selection Consistency of the TCA). Let θ̃1* be the global solution to Equation (3.4) and suppose the model M_d(Σ0, ξ, s; f) holds. Let Θ := supp(θ1) and Θ̂* := supp(θ̃1*). If we further have

min_{j∈Θ} |θ1j| ≥ (6√2 π / (λ1 − λ2)) · s √(log d / n),

then we have P(Θ̂* = Θ) ≥ 1 − d^{−5/2}.
Proof sketch. The key of the proof is to construct a contradiction given Theorem 4.2 and the condition on the minimum value of |θ1|. Detailed proofs can be found in the long version of this paper [8].
5 Experiments
In this section we investigate the empirical performance of the TCA method. We utilize the TPower
algorithm proposed by [28] and the following three methods are considered: (1) Pearson: the
classic high dimensional scale-invariant PCA using the Pearson sample correlation matrix of the
data; (2) Kendall: the TCA using the Kendall correlation matrix; (3) LatPearson: the classic high
dimensional scale-invariant PCA using the Pearson sample correlation matrix of the data drawn from
the latent elliptical distribution (perfect without data contamination).
5.1 Numerical Simulations
In the simulation study we randomly sample n data points from a certain transelliptical distribution TE_d(Σ0, ξ; f1, . . . , fd). Here we consider the setup d = 100. To determine the transelliptical distribution, we first derive Σ0 in the following way. A covariance matrix Σ is synthesized through the eigenvalue decomposition, where the first two eigenvalues are given and the corresponding eigenvectors are pre-specified to be sparse. In detail, let Σ = ∑_{j=1}^d λ_j u_j u_j^T, where λ1 = 6, λ2 = 3, λ3 = . . . = λd = 1, and the first two leading eigenvectors u1 and u2 of Σ are sparse, with the first s = 10 entries of u1 and the second s = 10 entries of u2 nonzero, i.e.,

u1j = 1/√10 for 1 ≤ j ≤ 10, and 0 otherwise;  u2j = 1/√10 for 11 ≤ j ≤ 20, and 0 otherwise.   (5.1)

The remaining eigenvectors are chosen arbitrarily. The generalized correlation matrix Σ0 is generated from Σ, with λ1 = 4, λ2 = 2.5, λ3, . . . , λd ≤ 1 and the top two leading eigenvectors sparse:

θ1j = 1/√10 for 1 ≤ j ≤ 10, and 0 otherwise;  θ2j = 1/√10 for 11 ≤ j ≤ 20, and 0 otherwise.   (5.2)
Secondly, using Σ0, we consider the following three generating schemes:
[Scheme 1] X ∼ TE_d(Σ0, ξ; f1, . . . , fd) with ξ ∼ χ_d and f1(x) = . . . = fd(x) = x. Here χ_d =_d √(Y1² + . . . + Yd²) with Y1, . . . , Yd i.i.d. N(0, 1); in other words, χ_d is the chi distribution with d degrees of freedom. This is equivalent to saying that X ∼ N(0, Σ0) (Example 2.4 of [5]).
[Scheme 2] X ∼ TE_d(Σ0, ξ; f1, . . . , fd) with ξ =_d √m · ξ1*/ξ2* and f1(x) = . . . = fd(x) = x. Here ξ1* ∼ χ_d, ξ2* ∼ χ_m, ξ1* is independent of ξ2*, and m ∈ N. This is equivalent to saying that X ∼ Mt_d(m, 0, Σ0), i.e., X follows a multivariate t distribution with m degrees of freedom, mean 0 and covariance matrix Σ0 (Example 2.5 of [5]). Here we consider m = 3.
[Scheme 3] X ∼ TE_d(Σ0, ξ; f1, . . . , fd) with ξ =_d √m · ξ1*/ξ2*. Here ξ1* ∼ χ_d, ξ2* ∼ χ_m, ξ1* is independent of ξ2*, and m = 3. Moreover, {f1, . . . , fd} = {h1, h2, h3, h4, h5, h1, h2, h3, h4, h5, . . .}, where

h1^{−1}(x) := x,
h2^{−1}(x) := (Φ(x) − ∫ Φ(t)φ(t)dt) / √( ∫ (Φ(y) − ∫ Φ(t)φ(t)dt)² φ(y)dy ),
h3^{−1}(x) := sign(x)|x|^{1/2} / √( ∫ |t| φ(t)dt ),
h4^{−1}(x) := x³ / √( ∫ t⁶ φ(t)dt ),
h5^{−1}(x) := (exp(x) − ∫ exp(t)φ(t)dt) / √( ∫ (exp(y) − ∫ exp(t)φ(t)dt)² φ(y)dy ).

This is equivalent to saying that X is transelliptically distributed with the latent elliptical distribution Z ∼ Mt_d(3, 0, Σ0).
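As an illustration of how Scheme 2 data can be drawn through the stochastic representation, here is our own sketch (it assumes Σ0 is strictly positive definite so that a Cholesky factor exists; names are ours):

    import numpy as np

    def sample_scheme2(n, Sigma0, m=3, seed=0):
        # Draw n samples via Z = xi * A U with AA^T = Sigma0 and
        # xi = sqrt(m) * chi_d / chi_m, i.e., a multivariate t with df = m.
        rng = np.random.default_rng(seed)
        d = Sigma0.shape[0]
        A = np.linalg.cholesky(Sigma0)
        U = rng.standard_normal((n, d))
        U /= np.linalg.norm(U, axis=1, keepdims=True)   # uniform on the unit sphere
        xi = np.sqrt(m * rng.chisquare(d, n) / rng.chisquare(m, n))
        return xi[:, None] * (U @ A.T)

Scheme 1 corresponds to replacing xi with chi_d alone, and Scheme 3 additionally pushes each coordinate through the marginal transformations h_j.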
To evaluate the robustness of different methods, let r ∈ [0, 1) represent the proportion of samples being contaminated. For each dimension, we randomly select ⌊nr⌋ entries and replace them with either 5 or −5 with equal probability. The final data matrix we obtain is X ∈ R^{n×d}. Here we pick r = 0, 0.02 or 0.05. Under Scheme 1 to Scheme 3 with different levels of contamination (r = 0, 0.02 or 0.05), we repeatedly generate the data matrix X 1,000 times and compute the averaged False Positive Rates and False Negative Rates using a path of tuning parameters k from 5 to 90. The feature selection performances of the different methods are then evaluated by plotting (FPR(k), 1 − FNR(k)). The corresponding ROC curves are presented in Figure 1 (A). More results are shown in the long version of this paper [8]. It can be observed that Kendall is generally better and more resistant to outliers compared with Pearson.
Figure 1: (A) ROC curves under Scheme 1, Scheme 2 and Scheme 3 (top, middle, bottom) and data contamination at different levels (r = 0, 0.02, 0.05 from left to right). The x-axis is FPR and the y-axis is TPR. Here n = 100 and d = 100. (B) Successful matches of the market trend using only the stocks in Ak and Bk. The x-axis represents the tuning parameter k scaling from 1 to 200; the y-axis represents the % of successful matches. The curve denoted "Kendall" plots the points (k, ρ_Ak) and the curve denoted "Pearson" plots the points (k, ρ_Bk).
5.2 Equities Data
In this section we apply the TCA to stock price data from Yahoo! Finance (finance.yahoo.com). We collected the daily closing prices for J = 452 stocks that were consistently in the S&P 500 index between January 1, 2003 and January 1, 2008. This gave us altogether T = 1,257 data points, each corresponding to the vector of closing prices on a trading day. Let St = [St_{t,j}] denote the closing price of stock j on day t.
We wish to evaluate the ability of using only k stocks to represent the trend of the whole stock market. To this end, we run Kendall and Pearson on St and obtain the leading eigenvectors θ̃_Kendall and θ̃_Pearson for each tuning parameter k ∈ N. Let Ak := supp(θ̃_Kendall) and Bk := supp(θ̃_Pearson). We then let T_t^W, T_t^{Ak} and T_t^{Bk} denote the upward-trend indicators of the whole market, the Ak stocks and the Bk stocks on day t compared with day t − 1, i.e.,

T_t^W := I( ∑_j St_{t,j} − ∑_j St_{t−1,j} > 0 ),  T_t^{Ak} := I( ∑_{j∈Ak} St_{t,j} − ∑_{j∈Ak} St_{t−1,j} > 0 ),

and

T_t^{Bk} := I( ∑_{j∈Bk} St_{t,j} − ∑_{j∈Bk} St_{t−1,j} > 0 ),
where I is the indicator function. In this way, we can calculate the proportions of successful matches of the market trend using the stocks in Ak and Bk as ρ_Ak := (1/T) ∑_t I(T_t^W = T_t^{Ak}) and ρ_Bk := (1/T) ∑_t I(T_t^W = T_t^{Bk}). We visualize the result by plotting (k, ρ_Ak) and (k, ρ_Bk) on a 2D figure. The result is presented in Figure 1 (B).
result is presented in Figure 1 (B).
It can be observed from Figure 1 (B) that Kendall summarizes the trend of the whole stock market
constantly
better than Pearson. Moreover, the averaged difference between the two methods are
P
1
k (?Ak ? ?Bk ) = 1.4025 with the standard deviation 0.6743. Therefore, the difference is
200
significant.
6 Acknowledgement
This research was supported by NSF award IIS-1116730.
References
[1] T.W. Anderson. Statistical inference in elliptically contoured and related distributions. Recherche, 67:02, 1990.
[2] M.G. Borgognone, J. Bussi, and G. Hough. Principal component analysis in sensory analysis: covariance or correlation matrix? Food Quality and Preference, 12(5-7):323-326, 2001.
[3] S. Cambanis, S. Huang, and G. Simons. On the theory of elliptically contoured distributions. Journal of Multivariate Analysis, 11(3):368-385, 1981.
[4] H.B. Fang, K.T. Fang, and S. Kotz. The meta-elliptical distributions with given marginals. Journal of Multivariate Analysis, 82(1):1-16, 2002.
[5] K.T. Fang, S. Kotz, and K.W. Ng. Symmetric Multivariate and Related Distributions. Chapman & Hall, London, 1990.
[6] C. Genest, A.C. Favre, J. Béliveau, and C. Jacques. Metaelliptical copulas and their use in frequency analysis of multivariate hydrological data. Water Resour. Res., 43(9):W09401, 2007.
[7] P.R. Halmos. Measure Theory, volume 18. Springer, 1974.
[8] F. Han and H. Liu. TCA: Transelliptical principal component analysis for high dimensional non-Gaussian data. Technical Report, 2012.
[9] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, pages 13-30, 1963.
[10] H. Hult and F. Lindskog. Multivariate extremes, aggregation and dependence in elliptical distributions. Advances in Applied Probability, 34(3):587-608, 2002.
[11] D.R. Jensen. The structure of ellipsoidal distributions, II. Principal components. Biometrical Journal, 28(3):363-369, 1986.
[12] D.R. Jensen. Conditioning and concentration of principal components. Australian Journal of Statistics, 39(1):93-104, 1997.
[13] H. Joe. Multivariate Models and Dependence Concepts, volume 73. Chapman & Hall/CRC, 1997.
[14] I.M. Johnstone and A.Y. Lu. Sparse principal components analysis. Arxiv preprint arXiv:0901.4392, 2009.
[15] M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre. Generalized power method for sparse principal component analysis. The Journal of Machine Learning Research, 11:517-553, 2010.
[16] K.S. Kelly and R. Krzysztofowicz. A bivariate meta-Gaussian density for use in hydrology. Stochastic Hydrology and Hydraulics, 11(1):17-31, 1997.
[17] W.H. Kruskal. Ordinal measures of association. Journal of the American Statistical Association, pages 814-861, 1958.
[18] D. Kurowicka, J. Misiewicz, and R.M. Cooke. Elliptical copulae. In Proc. of the International Conference on Monte Carlo Simulation - Monte Carlo, pages 209-214, 2000.
[19] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. The Journal of Machine Learning Research, 10:2295-2328, 2009.
[20] Z. Ma. Sparse principal component analysis and iterative thresholding. Arxiv preprint arXiv:1112.2432, 2011.
[21] L. Mackey. Deflation methods for sparse PCA. Advances in Neural Information Processing Systems, 21:1017-1024, 2009.
[22] G.P. McCabe. Principal variables. Technometrics, pages 137-144, 1984.
[23] D. Paul and I.M. Johnstone. Augmented sparse principal component analysis for high dimensional data. Arxiv preprint arXiv:1202.1242, 2012.
[24] G.Q. Qian, G. Gabor, and R.P. Gupta. Principal components selection by the criterion of the minimum mean difference of complexity. Journal of Multivariate Analysis, 49(1):55-75, 1994.
[25] A. Sklar. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris, 8(1):11, 1959.
[26] V.Q. Vu and J. Lei. Minimax rates of estimation for sparse PCA in high dimensions. Arxiv preprint arXiv:1202.0786, 2012.
[27] C.M. Waternaux. Principal components in the nonnormal case: The test of equality of q roots. Journal of Multivariate Analysis, 14(3):323-335, 1984.
[28] X.T. Yuan and T. Zhang. Truncated power method for sparse eigenvalue problems. Arxiv preprint arXiv:1112.2679, 2011.
[29] Y. Zhang, A. d'Aspremont, and L. El Ghaoui. Sparse PCA: Convex relaxations, algorithms and applications. Handbook on Semidefinite, Conic and Polynomial Optimization, pages 915-940, 2012.
4,230 | 4,829 | Collaborative Ranking With 17 Parameters
Richard S. Zemel
University of Toronto
[email protected]
Maksims N. Volkovs
University of Toronto
[email protected]
Abstract
The primary application of collaborate filtering (CF) is to recommend a small set
of items to a user, which entails ranking. Most approaches, however, formulate the
CF problem as rating prediction, overlooking the ranking perspective. In this work
we present a method for collaborative ranking that leverages the strengths of the
two main CF approaches, neighborhood- and model-based. Our novel method is
highly efficient, with only seventeen parameters to optimize and a single hyperparameter to tune, and beats the state-of-the-art collaborative ranking methods. We
also show that parameters learned on datasets from one item domain yield excellent results on a dataset from a very different item domain, without any retraining.
1 Introduction
Collaborative Filtering (CF) is a method of making predictions about an individual's preferences
based on the preference information from many users. The emerging popularity of web-based services such as Amazon, YouTube, and Netflix has led to significant developments in CF in recent
years. Most applications use CF to recommend a small set of items to the user. For instance, Amazon presents a list of top-T products it predicts a user is most likely to buy next. Similarly, Netflix
recommends top-T movies it predicts a user will like based on his/her rating and viewing history.
However, while recommending a small ordered list of items is a ranking problem, ranking in CF has
gained relatively little attention from the learning-to-rank community. One possible reason for this
is the Netflix[3] challenge which was the primary venue for CF model development and evaluation
in recent years. The challenge was formulated as a rating prediction problem, and almost all of the
proposed models were designed specifically for this task, and were evaluated using the normalized
squared error objective. Another potential reason is the absence of user-item features. The standard
learning-to-rank problem in information retrieval (IR), which is well explored with many powerful
approaches available, always includes item features, which are used to learn the models. These
features incorporate a lot of external information and are highly engineered to accurately describe
the query-document pairs. While a similar approach can be taken in CF settings, it is likely to be
very time consuming to develop analogous features, and features developed for one item domain
(books, movies, songs etc.) are likely to not generalize well to another. Moreover, user features
typically include personal information which cannot be publicly released, preventing open research
in the area. An example of this is the second part of the Netflix challenge which had to be shut down
due to privacy concerns. The absence of user-item features makes it very challenging to apply the
models from the learning-to-rank domain to this task. However, recent work [23, 15, 2] has shown
that by optimizing a ranking objective just given the known ratings a significantly higher ranking
accuracy can be achieved as compared to models that optimize rating prediction.
Inspired by these results we propose a new ranking framework where we show how the observed
ratings can be used to extract effective feature descriptors for every user-item pair. The features do
not require any external information and make it possible to apply any learning-to-rank method to
optimize the parameters of the ranking function for the target metric. Experiments on MovieLens
and Yahoo! datasets show that our model outperforms existing rating and ranking approaches to CF.
Moreover, we show that a model learned with our approach on a dataset from one user/item domain
can then be applied to a different domain without retraining and still achieve excellent performance.
2 Collaborative Ranking Framework
In a typical collaborative filtering (CF) problem we are given a set of N users U = {u1, ..., uN} and a set of M items V = {v1, ..., vM}. The users' ratings of the items can be represented by an N × M matrix R, where R(un, vm) is the rating assigned by user un to item vm and R(un, vm) = 0 if vm is not rated by un. We use U(vm) to denote the set of all users that have rated vm and V(un) to denote the set of items that have been rated by un. We use vector notation: R(un, :) denotes the n-th row of R (a 1 × M vector), and R(:, vm) denotes the m-th column (an N × 1 vector).
As mentioned above, most research has concentrated on the rating prediction problem in CF where
the aim is to accurately predict the ratings for the unrated items for each user. However, most applications that use CF typically aim to recommend only a small ranked set of items to each user. Thus
rather than concentrating on rating prediction we instead approach this problem from the ranking
viewpoint and refer to it as Collaborative Ranking (CR). In CR the goal is to rank the unrated items
in the order of relevance to the user. A ranking of the items V can be represented as a permutation
\pi : \{1, ..., M\} \to \{1, ..., M\} where \pi(m) = l and m = \pi^{-1}(l). A
number of evaluation metrics have been proposed in IR to evaluate the performance of the ranking.
Here we use the most commonly used metric, Normalized Discounted Cumulative Gain (NDCG)
[12]. For a given user un and ranking ? the NDCG is given by:
NDCG(u_n, \pi, R)@T = \frac{1}{G_T(u_n, R)} \sum_{t=1}^{T} \frac{2^{R(u_n, v_{\pi^{-1}(t)})} - 1}{\log(t+1)}    (1)
where T is a truncation constant, v_{\pi^{-1}(t)} is the item in position t in \pi, and G_T(u_n, R) is a normalizing term which ensures that NDCG \in [0, 1] for all rankings. T is typically set to a small value
to emphasize that the user will only be shown the top-T ranked items and the items below the top-T
are not evaluated.
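As a concrete reference, the following is a minimal NumPy sketch of Equation 1 for a single user. The function name is ours, and taking G_T(u_n, R) to be the DCG of the ideal ordering is the standard choice, which the text does not spell out:

```python
import numpy as np

def ndcg_at_t(ratings, ranking, T):
    """NDCG@T of Equation 1 for one user.

    ratings: 1-D array with ratings[m] = R(u_n, v_m), 0 for unrated items
    ranking: item indices ordered best-first, so ranking[t-1] = v at rank t
    """
    discounts = np.log(np.arange(2, T + 2))        # log(t + 1) for t = 1..T
    dcg = np.sum((2.0 ** ratings[ranking[:T]] - 1) / discounts)
    ideal = np.sort(ratings)[::-1][:T]             # best achievable ordering
    g_t = np.sum((2.0 ** ideal - 1) / discounts)   # normalizer G_T(u_n, R)
    return dcg / g_t if g_t > 0 else 0.0
```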
3 Related Work
Related work in CF and CR can be divided into two categories: neighborhood-based approaches and
model-based approaches. In this section we describe both types of models.
3.1 Neighborhood-Based Approaches
Neighborhood-based CF approaches estimate the unknown ratings for a target user based on the
ratings from the set of neighborhood users that tend to rate similarly to the target user. Formally,
given the target user un and item vm the neighborhood-based methods find a subset of K neighbor
users who are most similar to un and have rated vm , i.e., are in the set U(vm ) \ un . We use
K(un, vm) ⊆ U(vm) \ un to denote the set of K neighboring users. A central component of these methods is the similarity function s used to compute the neighbors. Several such functions have
been proposed including the Cosine Similarity [4] and the Pearson Correlation [20, 10]:
s_{cos}(u_n, u') = \frac{R(u_n, :) \cdot R(u', :)^T}{\|R(u_n, :)\| \, \|R(u', :)\|}, \qquad
s_{pears}(u_n, u') = \frac{(R(u_n, :) - \mu(u_n)) \cdot (R(u', :) - \mu(u'))^T}{\|R(u_n, :) - \mu(u_n)\| \, \|R(u', :) - \mu(u')\|}
where \mu(u_n) is the average rating for u_n. Once the K neighbors are found the rating is predicted by taking the weighted average of the neighbors' ratings. An analogous item-based approach [22]
can be used when the number of items is smaller than the number of users.
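For illustration, a small sketch of both similarity functions, assuming dense rating rows in which 0 marks unrated items and each user has at least one observed rating; whether the Pearson centering uses only observed entries is an implementation choice we make here, not something the text states:

```python
import numpy as np

def cosine_sim(r_n, r_u):
    # r_n, r_u: rating rows R(u_n, :) and R(u', :); zeros are unrated items
    return (r_n @ r_u) / (np.linalg.norm(r_n) * np.linalg.norm(r_u) + 1e-12)

def pearson_sim(r_n, r_u):
    # center each user's observed ratings by that user's mean rating mu(u)
    c_n = np.where(r_n > 0, r_n - r_n[r_n > 0].mean(), 0.0)
    c_u = np.where(r_u > 0, r_u - r_u[r_u > 0].mean(), 0.0)
    return (c_n @ c_u) / (np.linalg.norm(c_n) * np.linalg.norm(c_u) + 1e-12)
```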
One problem with the neighborhood-based approaches is that the raw ratings often contain user bias.
For instance, some users tend to give high ratings while others tend to give low ones. To correct for
these biases various methods have been proposed to normalize or center the ratings [4, 20] before
computing the predictions.
Another major problem with the neighborhood-based approaches arises from the fact that the observed rating matrix R is typically highly sparse, making it very difficult to find similar neighbors
reliably. To address this sparsity, most methods employ dimensionality reduction [9] and data
smoothing [24] to fill in some of the unknown ratings, or to cluster users before computing user
similarity. This however adds computational overhead and typically requires tuning additional parameters such as the number of clusters.
A neighborhood-based approach to ranking has been proposed recently by Liu & Yang [15]. Instead
of predicting ratings, this method uses the neighbors of un to fill in the missing entries in the M × M pairwise preference matrix Yn, where Yn(vm, vl) is the preference strength for vm over vl by
un . Once the matrix is completed an approximate Markov chain algorithm is used to infer the
ranking from the pairwise preferences. The main drawback of this approach is that the model is
not optimized for the target evaluation metric, such as NDCG. The ranking is inferred directly from
Yn and no additional parameters are learned. In general, to the best of our knowledge, no existing
neighborhood-based CR method takes the target metric into account during optimization.
3.2 Model-Based Approaches
In contrast to the neighborhood-based approaches, the model-based approaches use the observed
ratings to create a compact model of the data which is then used to predict the unobserved ratings.
Methods in this category include latent models [11, 16, 21], clustering methods [24] and Bayesian
networks [19]. Latent factorization models such as Probabilistic Matrix Factorization (PMF) [21]
are the most popular model-based approaches. In PMF every user un and item vm are represented
by latent vectors \theta(u_n) and \theta(v_m) of length D. For a given user-item pair (u_n, v_m) the dot product of the corresponding latent vectors gives the rating prediction: R(u_n, v_m) \approx \theta(u_n) \cdot \theta(v_m). The
latent representations are learned by minimizing the squared error between the observed ratings and
the predicted ones.
Latent models have more expressive power and typically perform better than the neighborhood-based models when the number of observed ratings is small because they are able to learn preference
correlations that extend beyond the simple neighborhood similarity. However, this comes at the cost
of a large number of parameters and complex optimization. For example, with the suggested setting
of D = 20 the PMF model on the full Netflix dataset has over 10 million parameters and is prone to
overfitting. To prevent overfitting the weighted ℓ2 norms of the latent representations are minimized
together with the squared error during the optimization phase, which introduces additional hyperparameters to tune.
Another problem with the majority of the model-based approaches is that inference for a new
user/item is typically expensive. For instance, in PMF the latent representation has to be learned
before any predictions can be made for a new user/item, and if many new users/items are added the
entire model has to be retrained. On the other hand, inference for a new user in neighborhood-based
methods can be done efficiently by simply computing the K neighbors, which is a key advantage of
these approaches.
Several model-based approaches to CR have recently been proposed, notably CofiRank [23] and the
PMF-based ranking model [2]. CofiRank learns latent representations that minimize a ranking-based
loss instead of the squared error. The PMF-based approach uses the latent representations produced
by PMF as user-item features and learns a ranking model on these features. The authors of that
work also note that the PMF representations might not be optimal for ranking since they are learned
using a squared error objective which is very different from most ranking metric. To account for this
they propose an extension where both user-item features and the weights of the ranking function are
optimized during learning. Both methods incorporate NDCG during the optimization phase which is
a significant advantage over most neighborhood-based approaches to CR. However, neither method
addresses the optimization or inference problems mentioned above. In the following section we
present our approach to CR which leverages the advantages of both neighborhood and model-based
methods.
3.3 Learning-to-Rank
Learning-to-rank has received a lot of attention in the machine learning community due to its importance in a wide variety of applications ranging from information retrieval to natural language
processing to computer vision. In IR the learning-to-rank problem consists of a set of training
queries where for each query we are given a set of retrieved documents and their relevance labels
that indicate the degree of relevance to the query. The documents are represented as query dependent
feature vectors and the goal is to learn a feature-based ranking function to rank the documents in the
order of relevance to the query. Existing approaches to this problem can be partitioned into three
Figure 1: An example rating matrix R and the resulting WIN, LOSS and TIE matrices for the user-item pair
(u3 , v4 ) with K = 3 (number of neighbors). (1) Top-3 closest neighbors {u1 , u5 , u6 } are selected from
U(v4 ) = {u1 , u2 , u5 , u6 } (all users who rated v4 ). Note that u2 is not selected because the ratings for u2
deviate significantly from those for u3 . (2) The WIN, LOSS and TIE matrices are computed for each neighbor
using Equation 2. Here g ≡ 1 is used to compute the matrices. For example, u5 gave a rating of 3 to v4, which ties it with v3 and beats v1. Normalizing by |V(u5)| − 1 = 2 gives WIN34(u5) = 0.5, LOSS34(u5) = 0 and
TIE34 (u5 ) = 0.5.
categories: pointwise, pairwise, and listwise. Due to the lack of space we omit the description of the
individual approaches here and instead refer the reader to [14] for an excellent overview.
4 Our Approach
The main idea behind our approach is to transform the CR problem into a learning-to-rank one and
then utilize one of the many developed ranking methods to learn the ranking function. CR can be
placed into the learning-to-rank framework by noting that the users correspond to queries and items
to documents. For each user the observed ratings indicate the relevance of the corresponding items
to that user and can be used to train the ranking function. The key difference between this setup and
the standard learning-to-rank one is the absence of user-item features. In this work we bridge this
gap and develop a robust feature extraction approach which does not require any external user or
item information and is based only on the available training ratings.
4.1 Feature Extraction
The PMF-based ranking approach [2] extracts user-item features by concatenating together the latent
representations learned by the PMF model. The model thus requires the user-item representations
to be learned before the items can be ranked and hence suffers from the main disadvantages of the
model-based approaches: the large number of parameters, complex optimization, and expensive
inference for new users and items. In this work we take a different approach which avoids these
disadvantages. We propose to use the neighbor preferences to extract the features for a given user-item pair.
Formally, given a user-item pair (un, vm) and a similarity function s, we use s to extract a subset of the K most similar users to un that rated vm, i.e., K(un, vm). This step is identical to the standard neighborhood-based model, and s can be any rating- or preference-based similarity function. Once K(un, vm) = {u_k}_{k=1}^{K} is found, instead of using only the ratings for vm, we use all of the observed ratings for each neighbor and summarize the net preference for vm into three K × 1 summary preference matrices WINnm, LOSSnm and TIEnm:
WIN_{nm}(k) = \frac{1}{|V(u_k)| - 1} \sum_{v' \in V(u_k) \setminus v_m} g(R(u_k, v_m), R(u_k, v')) \, I[R(u_k, v_m) > R(u_k, v')]

LOSS_{nm}(k) = \frac{1}{|V(u_k)| - 1} \sum_{v' \in V(u_k) \setminus v_m} g(R(u_k, v_m), R(u_k, v')) \, I[R(u_k, v_m) < R(u_k, v')]    (2)

TIE_{nm}(k) = \frac{1}{|V(u_k)| - 1} \sum_{v' \in V(u_k) \setminus v_m} I[R(u_k, v_m) = R(u_k, v')]
where I[x] is an indicator function evaluating to 1 if x is true and to 0 otherwise, and g : R² → R
is the pairwise preference function used to convert ratings to pairwise preferences. A simple
choice for g is g ≡ 1, which ignores the rating magnitude and turns the matrices into normalized
counts. However, recent work in preference aggregation [8, 13] has shown that additional gain can be
achieved by taking the relative rating magnitude into account by using either the normalized rating or
log rating difference. All three versions of g address the user bias problem mentioned above by using
relative comparisons rather than the absolute rating magnitude. In this form WINnm (k) corresponds
to the net positive preference for vm by neighbor uk . Similarly, LOSSnm (k) corresponds to the net
negative preference and TIEnm (k) counts the number of ties. Together the three matrices thus
describe the relative preferences for vm across all the neighbors of un . Normalization by |V(uk ) \
vm | (number of observed ratings for uk excluding vm ), ensures that the entries are comparable across
neighbors with different numbers of ratings. For unpopular items vm that do not have many ratings
with |U(vm )| < K, the number of neighbors will be less than K, i.e., |K(un , vm )| < K. When
such an item is encountered we shrink the preference matrices to be the same size as |K(un , vm )|.
Figure 1 shows an example rating matrix R together with the preference matrices computed for the
user-item pair (u3 , v4 ).
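To make Equation 2 concrete, here is a small sketch of the preference-matrix computation for one item and its neighbor set; the function and variable names are ours, and the guard against a zero denominator is an assumption the text does not discuss:

```python
import numpy as np

def wlt_vectors(R, m, neighbors, g=lambda a, b: 1.0):
    """WIN, LOSS and TIE of Equation 2 for item v_m and the K neighbors.

    R: N x M rating matrix with 0 marking unrated entries.
    neighbors: indices of the users in K(u_n, v_m).
    g: pairwise preference function (the g = 1 choice by default).
    """
    K = len(neighbors)
    win, loss, tie = np.zeros(K), np.zeros(K), np.zeros(K)
    for i, k in enumerate(neighbors):
        rated = np.flatnonzero(R[k])             # V(u_k)
        others = rated[rated != m]               # V(u_k) \ v_m
        r_m, r_o = R[k, m], R[k, others]
        z = max(len(rated) - 1, 1)               # |V(u_k)| - 1
        win[i] = sum(g(r_m, r) for r in r_o[r_m > r_o]) / z
        loss[i] = sum(g(r_m, r) for r in r_o[r_m < r_o]) / z
        tie[i] = np.sum(r_m == r_o) / z
    return win, loss, tie
```

Running this on the rating matrix of Figure 1 with neighbors {u1, u5, u6} reproduces the values shown there, e.g. WIN34(u5) = 0.5.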
Given the preference matrix WINnm we summarize it with a set of simple descriptive statistics:

\psi(WIN_{nm}) = \Big[ \mu(WIN_{nm}), \; \sigma(WIN_{nm}), \; \max(WIN_{nm}), \; \min(WIN_{nm}), \; \frac{1}{K} \sum_k I[WIN_{nm}(k) \neq 0] \Big]

where \mu and \sigma are the mean and standard deviation functions, respectively. The last statistic counts the number of neighbors (out of K) that express any positive preference towards vm, and together with \sigma summarizes the overall confidence of the preference. Extending this procedure to the other two preference matrices and concatenating the resulting statistics gives the feature vector for (un, vm):

\phi(u_n, v_m) = [\psi(WIN_{nm}), \psi(LOSS_{nm}), \psi(TIE_{nm})]    (3)
Intuitively the features describe the net preference for vm and its variability across the neighbors.
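A sketch of the five summary statistics and the concatenation in Equation 3, reusing wlt_vectors from above; handling of pairs with no neighbors (the zero feature vector discussed in Section 4.2) is left to the caller:

```python
def psi(v):
    # mean, std, max, min, and the fraction of neighbors with a nonzero entry
    return [v.mean(), v.std(), v.max(), v.min(), float(np.mean(v != 0))]

def features(R, m, neighbors):
    win, loss, tie = wlt_vectors(R, m, neighbors)
    return np.array(psi(win) + psi(loss) + psi(tie))   # |phi| = 15
```

With 15 features plus the biases b and b0 introduced below, this accounts for the 17 parameters in the title.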
Note that since \psi is independent of K, N and M, this representation will have the same length for every user-item pair. We have thus created a fixed-length feature representation for every user-item pair, effectively transforming the CR problem into a standard learning-to-rank one. During training our aim is now to use the observed training ratings to learn a scoring function f : R^{|\phi|} \to R which maximizes the target IR metric, such as NDCG, across all users. At test time, given a user u and items {v1, ..., vM}, we (1) extract features for each item vm using the neighbors of (u, vm); (2) apply the learned scoring function to get the score for every item; and (3) sort the scores to produce the ranking. This process is shown in Figure 2.

Figure 2: The flow diagram for WLT, our feature-based CR model.

It is important to note here that, first, a single scoring function is learned for all users and items, so the number of parameters is independent of the number of users or items and only depends on the size of \phi. This is a significant advantage over most model-based approaches where the number of
parameters typically scales linearly with the number of users and/or items. Second, given a new user
u, no optimization is necessary to produce a ranking of the items for u. Similarly to neighborhood-based methods, our approach only requires computing the neighbors to extract the features and apply the learned scoring function to get the ranking. This is also a significant advantage over most user-based approaches, where it is typically necessary to learn a new model for every user not present in the training data before predictions can be made. Finally, unlike the existing neighborhood-based methods for CR, our approach allows us to optimize the parameters of the model for the target
metric. Moreover, the extracted features incorporate preference confidence information such as the
variance across the neighbors and the fraction of the neighbors that generated each preference type
(positive, negative and tie). Taking this information into account allows us to adapt the parameters
of the scoring function to sparse low-confidence settings and addresses the reliability problem of the
neighborhood-based methods (see Section 3.1). Note that an analogous item-based approach can be
taken here by similarly summarizing the preferences of un for items that are closest to vm , we leave
this for future work. A modified version of this approach adapted to binary ratings recently placed
second in the Million Song Dataset Challenge [18] ran by Kaggle.
4.2 Learning the Scoring Function
Given the user-item features extracted based on the neighbors our goal is to use the observed training
ratings for each user to optimize the parameters of the scoring function for the target IR metric. A
key difference between this feature-based CR approach and the typical learning-to-rank setup is the
possibility of missing features. If a given training item vm is not ranked by any other user except
un, the feature vector is set to zero (\phi(u_n, v_m) \equiv 0). One way to avoid missing features is to learn only with those items that have at least one rating in the training set. However, in very sparse settings
this would force us to discard some of the valuable training data. We take a different approach,
modifying the conventional linear scoring function to include an additional bias term b0 :
f(\phi(u_n, v_m), W) = w \cdot \phi(u_n, v_m) + b + I[U(v_m) \setminus u_n = \emptyset] \, b_0    (4)
where W = {w, b, b0 } is the set of free parameters to be learned. Here w has the same dimension as
\phi, and I is an indicator function. The bias term b0 provides a base score for vm if vm does not have
enough ratings in the training data. Several possible extensions of this model are worth mentioning
here. First, the scoring function can be made non-linear by adding additional hidden layer(s) as
done in conventional multilayer neural networks. Second, user information can be incorporated
into the model by learning user specific weights. To incorporate user information we can learn a
separate set of weights wn for each user un or group of users. The weights will provide user specific
information and are then applied to rank the unrated items for the corresponding user(s). However,
this extension makes the approach similar to the model-based approaches, with all the corresponding
disadvantages mentioned above. Finally, additional user/item information such as, for example,
personal information for users and description/genre etc. for items, can be incorporated by simply
concatenating it with \phi(u_n, v_m) and expanding the dimensionality of W. Note that if these additional
features can be extracted efficiently, incorporating them will not add significant overhead to either
learning or inference and the model can still be applied to new users and items very efficiently.
In the form given by Equation 4 our model has a total of |\phi| + 2 parameters to be learned. We can use
any of the developed learning-to-rank approaches to optimize W. In this work we chose to use the
LambdaRank method, due to its excellent performance, having recently won the Yahoo! Learning-to-Rank Challenge [7]. We omit the description of LambdaRank here due to the lack of space, and
refer the reader to [6] and [5] for a detailed description.
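The LambdaRank training loop is beyond a short sketch, but the inference path described above is simple. In the following sketch everything except Equation 4 itself, in particular the find_neighbors helper and the item loop, is our own scaffolding:

```python
def score(phi, w, b, b0, has_neighbors):
    # Equation 4: b0 stands in for the features when U(v_m) \ u_n is empty
    return (w @ phi + b) if has_neighbors else (b + b0)

def rank_items(R, n, items, w, b, b0, find_neighbors):
    scores = []
    for m in items:
        nbrs = find_neighbors(R, n, m)           # K(u_n, v_m), any similarity
        phi = features(R, m, nbrs) if len(nbrs) else np.zeros(15)
        scores.append(score(phi, w, b, b0, bool(len(nbrs))))
    order = np.argsort(scores)[::-1]             # best score first
    return [items[i] for i in order]
```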
5 Experiments
To validate the proposed approach we conducted extensive experiments on three publicly available datasets: two movie datasets MovieLens-1, MovieLens-2, and a musical artist dataset from
Yahoo! [1]. All datasets were kept as is except Yahoo!, which we subsampled by first selecting
the 10,000 most popular items and then selecting the 100,000 users with the most ratings. The
subsampling was done to speed up the experiments as the original dataset has close to 2 million
users and 100,000 items. In addition to subsampling we rescaled user ratings from 0-100 to the 1-5
interval to make the data consistent with the other two datasets. The rescaling was done by mapping 0-19 to 1, 20-39 to 2, etc. The user, item and rating statistics are summarized in Table 1. To
investigate the effect that the number of ratings has on accuracy we follow the framework of [23, 2].
For each dataset we randomly select 10, 20, 30, 40 ratings from each user for training, 10 for validation, and test on the remaining ratings. Users with less than 30, 40, 50, 60 ratings were removed to ensure that we could evaluate on at least 10 ratings for each user. Note that the number of test items varies significantly across users, with many users having more test ratings than training ones. This simulates the real-life CR scenario where the set of unrated items from which the recommendations are generated is typically much larger than the rated item set for each user.

Table 1: Dataset statistics.
Dataset        Users      Items     Ratings
MovieLens-1    1,000      1,700     100,000
MovieLens-2    72,000     10,000    10,000,000
Yahoo!         100,000    10,000    45,729,723
We trained our ranking model, referred to as WLT, using stochastic gradient descent with the learning rates 10^{-2}, 10^{-3}, 10^{-4} for MovieLens-1, MovieLens-2 and Yahoo! respectively. We found that 1 to 21 iterations were sufficient to train the models. We also found that using smaller learning
rates typically resulted in better generalization. We compare WLT with a well-established user-based (UB) collaborative filtering model. We also compare with two collaborative ranking models:
PMF-based ranker [2] (PMF-R) and CofiRank [23] (CO). To make the comparison fair we used the
same LambdaRank architecture to train both WLT and PMF-R. Note that both PMF-R and CofiRank
report state-of-the-art CR results. To compute the PMF features we used extensive cross-validation
to determine the L2 penalty weights and the latent dimension size D (5, 10, 10 for MovieLens-1, MovieLens-2, and Yahoo! datasets respectively). For CofiRank we used the settings suggested
¹Note that one iteration of stochastic gradient descent corresponds to |U| weight updates.
Table 2: Collaborative Ranking results. NDCG values at different truncation levels are shown within the main columns, which are split based on the number of training ratings. Each model's rounded number of parameters is shown in brackets, with K = thousand, M = million.

                    10 ratings           20 ratings           30 ratings           40 ratings
               N@1   N@3   N@5      N@1   N@3   N@5      N@1   N@3   N@5      N@1   N@3   N@5
MovieLens-1:
UB            49.30 54.67 57.36    57.49 61.81 62.88    64.25 65.75 66.58    62.27 64.92 66.14
PMF-R(12K)    69.39 68.33 68.65    72.50 70.42 69.95    72.77 72.23 71.55    74.02 71.55 70.90
CO(240K)      67.28 66.23 66.59    71.82 70.80 70.30    71.60 71.15 70.58    71.43 71.64 71.43
WLT(17)       70.96 68.25 67.98    70.34 69.50 69.21    71.41 71.16 71.02    74.09 71.85 71.52
MovieLens-2:
UB            67.62 68.23 68.74    71.29 70.78 70.87    72.65 71.98 71.90    73.33 72.63 72.42
PMF-R(500K)   70.12 69.41 69.35    70.65 70.04 70.09    72.22 71.48 71.43    72.18 71.60 71.55
CO(5M)        70.14 68.40 68.46    68.80 68.51 68.76    64.60 65.62 66.38    62.82 63.49 64.25
WLT(17)       72.78 71.70 71.49    73.93 72.63 72.37    74.67 73.37 73.04    75.19 73.73 73.30
Yahoo!:
UB            57.20 55.29 54.31    64.29 61.48 60.16    66.82 63.83 62.42    68.97 65.89 64.50
PMF-R(1M)     52.86 51.98 51.53    63.93 62.42 61.65    66.82 65.41 64.61    69.46 68.05 67.21
CO(10M)       57.42 56.88 56.46    60.59 59.94 59.48    62.07 61.10 60.54    61.68 60.78 60.24
WLT(17)       58.76 55.20 53.53    66.06 62.77 61.21    69.74 66.58 65.02    71.50 68.52 67.00
in [23] and ran the code available on the author's home page. Similarly to [2], we found that the
regression-based objective almost always gave the best results for CofiRank, consistently outperforming NDCG and ordinal objectives.
For WLT and UB models we use cosine similarity as the distance function to find the top-K neighbors. Note that using the same similarity function ensures that both models select the same neighbor
sets and allows for fair comparison. The number of neighbors K was cross validated in the range
[10, 100] on the small MovieLens-1 dataset and set to 200 on all other datasets as we found the
results to be insensitive for K above 100 which is consistent with the findings of [15]. In all experiments only ratings in the training set were used to select the neighbors, and make predictions for the
validation and test set items.
5.1 Results
The NDCG (N@T) results at truncations 1, 3 and 5 are shown in Table 2. From the table it is seen
that the WLT model performs comparably to the best baseline on MovieLens-1, outperforms all
methods on MovieLens-2 and is also the best overall approach on Yahoo!. Across the datasets the
gains are especially large at lower truncations N@1 and N@3, which is important since those items
will most likely be the ones viewed by the user.
Several patterns can also be seen from the table. First, when the number of users and ratings is small
(MovieLens-1) the performance of the UB approach significantly drops. This is likely due to the fact
that neighbors cannot be found reliably in this setting since users have little overlap in ratings. By
taking into account the confidence information such as the number of available neighbors WLT is
able to significantly improve over UB while using the same set of neighbors. On MovieLens-1 WLT
outperforms UB by as much as 20 NDCG points. Second, for larger datasets such as MovieLens-2
and Yahoo! the model-based approaches have millions of parameters (shown in brackets in Table 2)
to optimize and are highly prone to overfitting. Tuning the hyper-parameters for these models is difficult and computationally expensive in this setting as it requires conducting many cross-validation
runs over large datasets. On the other hand, our approach achieves consistently better performance
with only 17 parameters, and a single hyper-parameter K which is fixed to 200. Overall, the results
demonstrate the robustness of the proposed features which generalize well when both few and many
users available.
5.2 Transfer Learning Results
In addition to the small number of parameters, another advantage of our approach over most model-based methods is that inference for a new user only requires finding the K neighbors. Thus both
users and items can be taken from a different, unseen during training, set. This transfer learning task
is much more difficult than the strong generalization task [17] commonly used to test CF methods
on new users. In strong generalization the models are evaluated on users not present at training time
while keeping the item set fixed, while here the item set also changes. Note that it is impossible to
Table 3: Transfer learning NDCG results. Original: WLT model trained on the respective dataset. WLT-M1
and WLT-M2 models are trained on MovieLens-1 and MovieLens-2 respectively, WLT-Y is trained on Yahoo!.
WLT-M1, WLT-M2 and WLT-Y models are applied to other datasets without retraining.
                    10 ratings           20 ratings           30 ratings           40 ratings
               N@1   N@3   N@5      N@1   N@3   N@5      N@1   N@3   N@5      N@1   N@3   N@5
MovieLens-1:
Original      70.96 68.25 67.98    70.34 69.50 69.21    71.41 71.16 71.02    74.09 71.85 71.52
WLT-M2        63.15 62.46 62.75    69.66 68.61 68.47    71.02 70.99 70.88    73.28 71.70 71.46
WLT-Y         44.12 47.06 48.75    61.73 62.60 63.57    67.33 66.99 67.99    71.11 69.22 68.95
MovieLens-2:
Original      72.78 71.70 71.49    73.93 72.63 72.37    74.67 73.37 73.04    75.19 73.73 73.30
WLT-M1        72.90 71.77 71.57    73.97 72.59 72.34    74.67 73.36 73.01    75.28 73.76 73.28
WLT-Y         68.04 68.03 68.41    71.54 71.02 71.07    73.15 72.38 72.25    74.00 73.03 72.79
Yahoo!:
Original      58.76 55.20 53.53    66.06 62.77 61.21    69.74 66.58 65.02    71.50 68.52 67.00
WLT-M1        57.93 53.91 52.35    66.03 62.68 61.18    68.93 65.85 64.32    71.15 68.17 66.65
WLT-M2        58.81 54.70 53.15    65.29 61.95 60.47    68.68 65.55 64.07    70.84 67.91 66.44
apply PMF-R, CO and most other model-based methods to this setting without re-training the entire
model. Our model, on the other hand, can be applied without re-training by simply extracting the
features for every new user-item pair and applying the learned scoring function to rank the items.
To test the generalization properties of the model we took the three learned WLT models (referred to as WLT-M1, WLT-M2, WLT-Y for MovieLens-1&2 and Yahoo! respectively) and applied each model to the datasets that it was not trained on. So for instance
WLT-M1 was applied to MovieLens-2 and Yahoo!. Table 3 shows the transfer results for
each of the datasets along with the original results for the WLT model trained on each
dataset (referred to as Original). Note that
none of the models were re-trained or tuned
in any way. From the table it is seen that our model generalizes very well to different domains.

Figure 3: Normalized WLT weights. White/black correspond to positive/negative weights; the weight magnitude is proportional to the square size.

For instance, WLT-M1 trained on MovieLens-1 is able to achieve state-of-the-art performance on MovieLens-2, outperforming all the baselines that were trained on MovieLens-2. Note that MovieLens-2 has over 5 times more items and 72 times more users than MovieLens-1, the majority of which the WLT-M1 model has not seen during training. Moreover, perhaps surprisingly, our model also generalizes
well across item domains. The WLT-Y model trained on musical artist data achieves state-of-the-art
performance on MovieLens-2 movie data, performing better than all the baselines when 20, 30 and
40 ratings are used for training. Moreover, both WLT-M1 and WLT-M2 achieve very competitive
results on Yahoo! outperforming most of the baselines.
More insight into why the model generalizes well can be gained from Figure 3, which shows the
normalized weights learned by the WLT models on each of the three datasets. The weights are
partitioned into feature sets from each of the three preference matrices (see Equation 2). From the
figure it can be seen that the learned weights share a lot of similarities. The weights on the features
from the WIN matrix are mostly positive while those on the features from the LOSS matrix are
mostly negative. Mean preferences and the number of neighbors features have the highest absolute
weights which indicates that they are the most useful for predicting the item scores. The similarity
between the weight vectors suggests that the features convey very similar information and remain
invariant across different user/item sets.
6 Conclusion
In this work we presented an effective approach to extract user-item features based on neighbor
preferences. The features allow us to apply any learning-to-rank approach to learn the ranking
function. Experimental results show that by using these features state-of-the art ranking results
can be achieved. Going forward, the strong transfer results call into question whether the complex
machinery developed for CF is appropriate when the true goal is recommendation, as the required
information for finding the best items to recommend can be obtained from basic neighborhood
statistics. We are also currently investigating additional features such as neighbors? rating overlap.
References
[1] The Yahoo! R1 dataset. http://webscope.sandbox.yahoo.com/catalog.php?datatype=r.
[2] S. Balakrishnan and S. Chopra. Collaborative ranking. In WSDM, 2012.
[3] J. Bennet and S. Lanning. The Netflix prize. www.cs.uic.edu/~liub/KDD-cup-2007/NetflixPrize-description.pdf.
[4] J. S. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithm for
collaborative filtering. In UAI, 1998.
[5] C. J. C. Burges. From RankNet to LambdaRank to LambdaMART: An overview. Technical
Report MSR-TR-2010-82, 2010.
[6] C. J. C. Burges, R. Ragno, and Q. V. Le. Learning to rank with nonsmooth cost functions. In
NIPS, 2007.
[7] O. Chapelle, Y. Chang, and T.-Y. Liu. The Yahoo! Learning To Rank Challenge. http://learningtorankchallenge.yahoo.com, 2010.
[8] D. F. Gleich and L.-H. Lim. Rank aggregation via nuclear norm minimization. In SIGKDD,
2011.
[9] K. Y. Goldberg, T. Roeder, D. Gupta, and C. Perkins. Eigentaste: A constant time collaborative
filtering algorithm. Information Retrieval, 4(2), 2001.
[10] J. Herlocker, J. A. Konstan, and J. Riedl. An empirical analysis of design choices in
neighborhood-based collaborative filtering algorithms. Information Retrieval, 5(4), 2002.
[11] T. Hofmann. Latent semantic models for collaborative filtering. ACM Trans. Inf. Syst., 22(1),
2004.
[12] K. Jarvelin and J. Kekalainen. IR evaluation methods for retrieving highly relevant documents.
In SIGIR, 2000.
[13] X. Jiang, L.-H. Lim, Y. Yao, and Y. Ye. Statistical ranking and combinatorial hodge theory.
Mathematical Programming, 127, 2011.
[14] H. Li. Learning to Rank for Information Retrieval and Natural Language Processing. Morgan
& Claypool, 2011.
[15] N. Liu and Q. Yang. Eigenrank: A ranking-oriented approach to collaborative filtering. In
SIGIR, 2008.
[16] B. Marlin. Modeling user rating profiles for collaborative filtering. In NIPS, 2003.
[17] B. Marlin. Collaborative filtering: A machine learning perspective. Master?s thesis, University
of Toronto, 2004.
[18] B. McFee, T. Bertin-Mahieux, D. Ellis, and G. R. G. Lanckriet. The Million Song Dataset
Challenge. In WWW, http://www.kaggle.com/c/msdchallenge, 2012.
[19] D. M. Pennock, E. Horvitz, S. Lawrence, and C. L. Giles. Collaborative filtering by personality
diagnosis: A hybrid memory and model-based approach. In UAI, 2000.
[20] P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl. Grouplens: An open architecture for collaborative filtering of netnews. In CSCW, 1994.
[21] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In NIPS, 2008.
[22] B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Item-based collaborative filtering recommendation algorithms. In WWW, 2001.
[23] M. Weimer, A. Karatzoglou, Q. V. Le, and A. J. Smola. CofiRank - maximum margin matrix
factorization for collaborative ranking. In NIPS, 2007.
[24] G.-R. Xue, C. Lin, Q. Yang, W. Xi, H.-J. Zeng, Y. Yu, and Z. Chen. Scalable collaborative
filtering using cluster-based smoothing. In SIGIR, 2005.
4,231 | 483 | Modeling Applications with the Focused Gamma Net
Jose C. Principe, Bert de Vries, Jyh-Ming Kuo and Pedro Guedes de Oliveira*
Department of Electrical Engineering
University of Florida, CSE 447
Gainesville, FL 32611
[email protected]
*Departamento de Eletronica / INESC
Universidade de Aveiro
Aveiro, Portugal
Abstract
The focused gamma network is proposed as one of the possible
implementations of the gamma neural model. The focused gamma
network is compared with the focused backpropagation network and
TDNN for a time series prediction problem, and with ADALINE in
a system identification problem.
1 INTRODUCTION
At NIPS-90 we introduced the gamma neural model, a real time neural net for
temporal processing (de Vries and Principe, 1991). This model is characterized by a
neural short term memory mechanism, the gamma memory structure, which is
implemented as a tapped delay line of adaptive dispersive elements. The gamma
model seems to provide an integrative framework to study the neural processing of
time varying patterns (de Vries and Principe, 1992). In fact both the memory by
delays as implemented in TDNN (Lang et al., 1990) and memory by local feedback (self-recurrent loops) as proposed by Jordan (1986) and Elman (1990) are special cases of the gamma memory structure. The preprocessor utilized in Tank and Hopfield's concentration in time (CIT) network (Tank and Hopfield, 1989) can be shown to be very similar to the dispersive structure utilized in the gamma memory (de Vries, 1991). We studied the gamma memory as an independent adaptive filter structure (Principe et al., 1992), and concluded that it is a special case of a class of IIR (infinite impulse response) adaptive filters, which we called the generalized feedforward structures. For these structures, the well-known Wiener-Hopf solution to find the optimal filter weights can be analytically computed. One of the advantages of the gamma memory as an adaptive filter is that, although being a recursive structure, stability is easily ensured. Moreover, the LMS algorithm can be easily
extended to adapt all the filter weights, including the parameter that controls the
depth of memory, with the same complexity as the conventional LMS algorithm (i.e.
the algorithm complexity is linear in the number of weights). Therefore, we achieved
a theoretical framework to study memory mechanisms in neural networks.
In this paper we compare the gamma neural model with other well established neural
networks that process time varying signals. Therefore the first step is to establish a
topology for the gamma model. To make the comparison easier with respect to TDNN
and Jordan's networks, we will present our results based on the focused gamma
network. The focused gamma network is a multilayer feedforward structure with a
gamma memory plane in the first layer (Figure 1). The learning equations for the
focused gamma network and its memory characteristics will be addressed in detail.
Examples will be presented for prediction of complex biological signals
(electroencephalogram-EEG) and chaotic time series, as well as a system
identification example.
2 THE FOCUSED GAMMA NET
The focused neural architecture was introduced by Mozer (1988) and Stornetta et al
(1988). It is characterized by a two-stage topology where the input stage stores traces of the input signal, followed by a nonlinear continuous feedforward mapper network (Figure 1). The gamma memory plane represents the input signal in a time-space plane (spatial dimension M, temporal dimension K). The activations in the memory layer are I_ik(t), and the activations in the feedforward network are represented by x_i(t). Therefore the following equations apply respectively for the input memory plane and for the feedforward network,
I_{i0}(t) = I_i(t)
I_{ik}(t) = (1 - \mu_i) I_{ik}(t-1) + \mu_i I_{i,k-1}(t-1), \quad i = 1, ..., M; \; k = 1, ..., K.    (1)
x_i(t) = \sigma\Big( \sum_{j<i} w_{ij} x_j(t) + \sum_{j,k} w_{ijk} I_{jk}(t) \Big), \quad i = 1, ..., N.    (2)
where \mu_i is an adaptive parameter that controls the depth of memory (Principe et al., 1992), and w_ijk are the spatial weights. Notice that the focused gamma network for K=1 is very similar to the focused-backpropagation network of Mozer and Stornetta. Moreover, when \mu = 1 the gamma memory becomes a tapped delay line, which is the configuration utilized in TDNN, with the time-to-space conversion restricted to the first layer (Lang et al., 1990). Notice also that if the nonlinear feedforward mapper is restricted to one layer of linear elements, and \mu = 1, the focused gamma memory becomes the adaptive linear combiner - ADALINE (Widrow et al., 1960).
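As a concrete illustration of Eqs. (1)-(2), here is a minimal sketch of one time step of the memory and of the focused mapper; the single-input case (M = 1) is shown, the index i is dropped, and tanh is our assumed choice for the nonlinearity σ:

```python
import numpy as np

def gamma_memory_step(I_prev, x_in, mu):
    # Eq. (1): I_prev[k] holds I_k(t-1); tap 0 is the current input sample
    I = np.empty_like(I_prev)
    I[0] = x_in
    I[1:] = (1.0 - mu) * I_prev[1:] + mu * I_prev[:-1]
    return I

def focused_forward(I, W1, b1, w2, b2):
    # Eq. (2) with one nonlinear hidden layer and a linear output unit
    h = np.tanh(W1 @ I + b1)
    return w2 @ h + b2
```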
In order to better understand the computational properties of the gamma memory we
defined two parameters, the mean memory depth D and memory resolution R as
D = \frac{K}{\mu}, \qquad R = \frac{K}{D} = \mu    (3)
(de Vries, 1991). Memory depth measures how far into the past the signal conveys
information for the processing task, while resolution quantifies the temporal
proximity of the memory traces.
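For a quick numeric check of Eq. (3), take the K = 2, μ = 0.6 configuration discussed in the EEG experiment below:

```python
K, mu = 2, 0.6
D = K / mu     # mean memory depth: about 3.3 samples
R = K / D      # resolution: equals mu, i.e. 0.6
```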
Figure 1. The focused gamma network architecture
The important aspect of the gamma memory formalism is that \mu, which controls both the memory resolution and depth, is an adaptive parameter that is learned from the signal according to the optimization of a performance measure. Therefore the focused gamma network always works with the optimal memory depth/resolution for the processing problem. The gamma memory is an adaptive recursive structure, and as such can go unstable during adaptation. But due to the local feedback nature of G(z), stability is easily ensured by keeping 0 < \mu < 2.
The focused gamma network is a recurrent neural model, but due to the topology selected, the spatial weights can be learned using regular backpropagation (Rumelhart et al., 1986). However, for the adaptation of \mu, a recurrent learning procedure is necessary. Since most of the time the order of the gamma memory is small, we recommend adapting \mu with direct differentiation using the real time recurrent learning (RTRL) algorithm (Williams and Zipser, 1989), which when applied to the gamma memory yields,
\Delta \mu_i = -\eta \, \frac{\partial E}{\partial \mu_i}, \qquad \frac{\partial x_m(t)}{\partial \mu_i} = \sigma'(net_m(t)) \sum_k w_{mik} \, \alpha_i^k(t),

where by definition \alpha_i^k(t) = \partial I_{ik}(t) / \partial \mu_i, and

\alpha_i^k(t) = (1 - \mu_i) \, \alpha_i^k(t-1) + \mu_i \, \alpha_i^{k-1}(t-1) + [I_{i,k-1}(t-1) - I_{i,k}(t-1)]
However, backpropagation through time (BPTT) (Werbos, 1990) can also be utilized,
and will be more efficient when the temporal patterns are short.
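The α recursion above translates directly into code. The following sketch assumes, for brevity, a linear readout y(t) = w · I(t), so the σ'(net) factor disappears; the learning rate η and the stability clipping margin are our choices:

```python
def alpha_step(alpha_prev, I_prev, mu):
    # alpha[k] = dI_k(t)/dmu, updated by the recursion in the text
    a = np.empty_like(alpha_prev)
    a[0] = 0.0                              # the input tap does not depend on mu
    a[1:] = ((1.0 - mu) * alpha_prev[1:] + mu * alpha_prev[:-1]
             + (I_prev[:-1] - I_prev[1:]))
    return a

def mu_update(mu, alpha, w, err, eta):
    # E = err**2 / 2 with err = d - w @ I gives dE/dmu = -err * (w @ alpha)
    mu_new = mu + eta * err * (w @ alpha)
    return min(max(mu_new, 1e-3), 2.0 - 1e-3)   # keep 0 < mu < 2 for stability
```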
3 EXPERIMENTAL RESULTS
The results for prediction that will be presented here utilized the focused gamma
network as depicted in Figure 2a, while for the case of system identification, the
block diagram is presented in Figure 2b.
Figure 2. Block diagrams for the experiments: a) Prediction; b) System Identification (the network is trained to match the plant output d(n) driven by the input I(t)).
Prediction of EEG
We selected an EEG signal segment for our first comparison, because the EEG is
notorious for its complexity. The problem was to predict the signal five steps ahead
(feedforward prediction). Figure 3 shows a four second segment of sleep stage 2. The
topology utilized was K gamma units, a one-hidden layer configuration with 5 units
(nonlinear) and one linear output unit. The performance criterion is the mean square
error signal. We utilized backpropagation to adapt the spatial weights (wijk), and
parametrized \mu between 0 and 1 in steps of 0.1. Figure 3b displays the curves of minimal mean square error versus \mu.
One can immediately see that the minimum mean square error is obtained for values of \mu different from one; therefore, for the same memory order the gamma memory outperforms the tapped delay line as utilized in TDNN (which once again is equivalent to the gamma memory for \mu = 1). For the case of the EEG it seems that the
advantage of the gamma memory diminishes when the order of the memory is
increased. However, the case of K=2, \mu=0.6 produces performance equivalent to a TDNN with 4 memory taps (K=4). Since in experimental conditions there is always noise, experience has shown that a smaller number of adaptive parameters yields better signal fitting and simplifies training, so the focused gamma network is preferable.
Figure 3. (a) A four-second EEG segment (sleep stage 2); (b) prediction error (5 steps ahead) with the gamma filter as a function of \mu for the EEG, with curves for several memory orders K (the K=2 curve is labeled). The best MSE is obtained for \mu < 1. The dot shows the performance
Notice also that the case of networks with first-order context units is obtained for K = 1, so even if the time constant is chosen right (\mu = 0.2 in this case), the performance can be improved if higher order memory kernels are utilized. It is also interesting to note that the optimal memory depth for the EEG prediction problem seems to be around 4, as this is the optimal value of K/\mu. The information regarding the "optimal memory
depth" is not obtainable with conventional models.
Prediction of Mackey-Glass time series
The Mackey-Glass system is a delay differential equation that becomes chaotic for
some values of the parameters and delays (Mackey-Glass, 1977). The results that will
be presented here regard the Mackey-Glass system with delay D=30. The time series
was generated using a fourth order Runge-Kutta algorithm. The table in Figure 4
shows the performance of TDNN and the focused gamma network with the same
number of free parameters. The number of hidden units was kept the same in both
networks, but TDNN utilized 5 input units, while the focused gamma network had 4
input units, and the adaptive memory depth parameter \mu. The two systems were trained with the same number of samples, and training epochs. For TDNN this was the value that gave the best training when cross validation was utilized (the error in the test set increased after 100 epochs). For this example \mu was adapted on-line using RTRL, with the initial value set at \mu = 1, and with the same step size as for the spatial weights. As the Table shows, the MSE in the training for the gamma network is substantially lower than for TDNN. Figure 4 shows the behavior of \mu during the training epochs. It is interesting to see that the value of \mu changes during training and settles around a value of 0.92. In terms of the learning curve (the MSE as a function of epoch), notice that there is an intersection of the learning curves for the TDNN and gamma network around epoch 42, when the value of \mu = 1, as we could expect from our analysis. The gamma network starts outperforming TDNN when the correct value of \mu is approached.
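The Mackey-Glass series can be regenerated with a few lines; the parameter values a = 0.2, b = 0.1, step h = 1 and initial condition are the commonly used ones and are our assumption, since the text only fixes the delay:

```python
import numpy as np

def mackey_glass(n, tau=30, a=0.2, b=0.1, h=1.0, x0=1.2):
    # dx/dt = a*x(t-tau)/(1 + x(t-tau)**10) - b*x(t), integrated with RK4;
    # the delayed term is held fixed within each step (a common shortcut)
    d = int(tau / h)
    x = np.full(n + d, x0)
    for t in range(d, n + d - 1):
        xd = x[t - d]
        f = lambda y: a * xd / (1.0 + xd ** 10) - b * y
        k1 = f(x[t]); k2 = f(x[t] + 0.5 * h * k1)
        k3 = f(x[t] + 0.5 * h * k2); k4 = f(x[t] + h * k3)
        x[t + 1] = x[t] + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return x[d:]
```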
This example shows that \mu can be learned on-line, and that once again having the
freedom to select the right value of memory depth helps in terms of prediction
performance. For both these cases the required memory depth is relatively shallow,
what we can expect since a chaotic time series has positive Lyapunov exponents, so
the important information to predict the next point is in the short-term past. The same
argument applies to the EEG, that can also be modeled as a chaotic time series (Lo
and Principe, 1989). Cases where the long-term past is important for the task should
enhance the advantage of the gamma memory.
Figure 4. Learning curves (train and test MSE vs. epoch) for TDNN and the gamma network (architecture (1+K=4)-12-1) with the same number of free parameters. Notice that the learning curves intersect around epoch 42, exactly when the \mu of the gamma network was 1. Figure 4b also shows that the gamma network is able to achieve a smaller error in this problem.
Linear System Identification
The last example is the identification of a third order linear lowpass elliptic transfer
function with poles and zeros, given by
H (z) =
1 - 0.873 1z-} - 0.87 3 1z-2 + z-3
1 - 2.8653z- 1 + 2.7505z- 2 - 0.8843z- 3
The cutoff frequency of this filter was selected such that the impulse response was
long, effectively creating the need for a deep memory for good identification. For this
case the focused gamma network was reduced to an ADALINE(Jl) (de Vries et aI,
1991), i.e. the feedforward mapper was a one layer linear network. The block
diagram of Figure 2b was utilized to train the gamma network, and I(t) was chosen
to be white gaussian noise. Figure 5 shows the MSE as a function of Jl for gamma
memory orders up to k=3 . Notice that the information gained from the Figure 5
agrees with our speculations. The optimal value of the memory is K/Jl - 17 samples.
For this value the third order ADALINE performs very poorly because there is not
enough information in 3 delays to identify the transfer function with small error. The
gamma memory, on the other hand can choose Jl small to encompass the req uired
length, even for a third order memory. The price paid is reduced resolution, but the
performance is still much better than the ADALINE of the same order (a factor of 10
improvement).
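For reference, the identification setup can be reproduced along these lines, reusing gamma_memory_step from Section 2; the fixed \mu = 0.18 is the value reported in Figure 5, while the LMS step size, the signal length and the seed are arbitrary choices of ours:

```python
import numpy as np
from scipy.signal import lfilter

b_num = [1.0, -0.8731, -0.8731, 1.0]          # numerator of H(z)
a_den = [1.0, -2.8653, 2.7505, -0.8843]       # denominator of H(z)

rng = np.random.default_rng(0)
u = rng.standard_normal(5000)                 # white Gaussian input I(t)
d = lfilter(b_num, a_den, u)                  # plant output to be identified

K, mu, eta = 3, 0.18, 1e-3
w = np.zeros(K + 1)                           # spatial weights on taps 0..K
I = np.zeros(K + 1)
for t in range(len(u)):
    I = gamma_memory_step(I, u[t], mu)        # ADALINE(mu): gamma taps + LMS
    err = d[t] - w @ I
    w += eta * err * I
```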
4 CONCLUSIONS
In this paper we propose a specific topology of the gamma neural model, the focused
gamma network. Several important neural networks become special cases of the
focused gamma network. This allowed us to compare the advantages of having a
more versatile memory structure than any of the networks under comparison .
Figure 5. MSE vs. \mu for H(z). The error achieved with \mu = 0.18 is 10 times smaller than for the ADALINE.
The conclusion is that the gamma memory is computationally more powerful than
fixed delays or first order context units. The major advantage is that the gamma model
formalism allows the memory depth to be optimally set for the problem at hand. In
the case of the chaotic time series, where the information to predict the future is
concentrated in the neighborhood of the present sample, the gamma memory selected
the most appropriate value, but its performance is similar to TDNN. However, in
cases where the required depth of memory is much larger than the size of the tapped
delay line, the gamma memory outperforms the fixed depth topologies with the same
number of free parameters.
The price paid for this optimal performance is insignificant. As a matter of fact, \mu can be adapted in real-time with RTRL (or BPTT), and since it is a single global parameter, the complexity of the algorithm is still O(K) with RTRL. The other possible problem, instability, is easily controlled by requiring that the value of \mu be limited to 0 < \mu < 2.
The focused gamma memory is just one of the possible neural networks that can be
implemented with the gamma model. The use of gamma memory planes in the hidden
or output processing elements will enhance the computational power of the neural
network. Notice that in these cases the short term mechanism is not only utilized to
store information of the signal past, but will also be utilized to store the past values
of the neural states. We can expect great savings in terms of network size with these
other structures, mainly in cases where the information of the long-term past is
important for the processing task.
Acknowledgments
This work has been partially supported by NSF grant DDM-8914084.
References
De Vries B. and Principe J.C. (1991). A Theory for Neural Nets with Time Delays. In Lippmann R., Moody J., and Touretzky D. (eds.), NIPS90 proceedings, San Mateo, CA, Morgan Kaufmann.
De Vries B., Principe J.C., Oliveira P. (1991). Adaline with Adaptive Recursive Memory, Proc. IEEE Work. Neural Nets for Sig. Proc., Princeton, 101-110, IEEE Press.
De Vries B. and Principe J.C. (1992). The gamma neural net - A new model for temporal processing. Accepted for publication, Neural Networks.
De Vries B. (1991). Temporal Processing with Neural Networks - The Development of the Gamma Model, Ph.D. Dissertation, University of Florida.
Elman J. (1988). Finding structure in time. CRL technical report 8801, 1988.
Jordan M. (1986). Attractor dynamics and parallelism in a connectionist sequential machine. Proc. Cognitive Science 1986.
Lang K. et al. (1990). A time-delay neural network architecture for isolated word recognition. Neural Networks, vol. 3 (1), 1990.
Lo P.C. and Principe J.C. (1989). Dimensionality analysis of EEG segments: experimental considerations, Proc. IJCNN 89, vol. I, 693-698.
Mackey D., Glass L. (1977). Oscillation and Chaos in Physiological Control Systems, Science 197, 287.
Mozer M.C. (1989). A Focused Backpropagation Algorithm for Temporal Pattern Recognition. Complex Systems 3, 349-381.
Principe J.C., De Vries B., Oliveira P. (1992). The Gamma Filter - A New Class of Adaptive IIR Filters with Restricted Feedback. Accepted for publication in IEEE Transactions on Signal Processing.
Rumelhart D.E., Hinton G.E. and Williams R.J. (1986). Learning Internal Representations by Error Back-propagation. In Rumelhart D.E., McClelland J.L. (eds.), Parallel Distributed Processing, vol. 1, ch. 8, MIT Press.
Stornetta W.S., Hogg T. and Huberman B.A. (1988). A Dynamical Approach to Temporal Pattern Processing. In Anderson D.Z. (ed.), Neural Information Processing Systems, 750-759.
Tank D. and Hopfield J. (1987). Concentrating information in time: analog neural networks with applications to speech recognition problems. 1st Int. Conf. on Neural Networks, IEEE, 1987.
Werbos P. (1990). Backpropagation through time: what it does and how to do it. Proc. IEEE, vol. 78, no. 10, 1550-1560.
Widrow B., Hoff M. (1960). Adaptive Switching Circuits, IRE Wescon Conv. Rec., pt. 4.
Williams R.J. and Zipser D. (1989). A learning algorithm for continually running fully recurrent neural networks, Neural Computation, vol. 1, no. 2, pp. 270-280, MIT Press.
Learning Invariant Representations of Molecules for
Atomization Energy Prediction
Grégoire Montavon1,†, Katja Hansen2, Siamac Fazli1, Matthias Rupp3, Franziska Biegler1,
Andreas Ziehe1, Alexandre Tkatchenko2, O. Anatole von Lilienfeld4, Klaus-Robert Müller1,5,‡
1. Machine Learning Group, TU Berlin
2. Fritz-Haber-Institut der Max-Planck-Gesellschaft, Berlin
3. Institute of Pharmaceutical Sciences, ETH Zurich
4. Argonne Leadership Computing Facility, Argonne National Laboratory, Lemont, IL
5. Dept. of Brain and Cognitive Engineering, Korea University
Abstract
The accurate prediction of molecular energetics in chemical compound space is
a crucial ingredient for rational compound design. The inherently graph-like,
non-vectorial nature of molecular data gives rise to a unique and difficult machine learning problem. In this paper, we adopt a learning-from-scratch approach
where quantum-mechanical molecular energies are predicted directly from the raw
molecular geometry. The study suggests a benefit from setting flexible priors and
enforcing invariance stochastically rather than structurally. Our results improve
the state-of-the-art by a factor of almost three, bringing statistical methods one
step closer to chemical accuracy.
1
Introduction
The accurate prediction of molecular energetics in chemical compound space (CCS) is a crucial
ingredient for compound design efforts in chemical and pharmaceutical industries. One of the major challenges consists of making quantitative estimates in CCS at moderate computational cost
(milliseconds per compound or faster). Currently only high level quantum-chemistry calculations,
which can take days per molecule depending on property and system, yield the desired "chemical
accuracy" of 1 kcal/mol required for computational molecular design.
This problem has only recently captured the interest of the machine learning community (Baldi
et al., 2011). The inherently graph-like, non-vectorial nature of molecular data gives rise to a unique
and difficult machine learning problem. A central question is how to represent molecules in a way
that makes prediction of molecular properties feasible and accurate (Von Lilienfeld and Tuckerman,
2006). This question has already been extensively discussed in the cheminformatics literature, and
many so-called molecular descriptors exist (Todeschini and Consonni, 2009). Unfortunately, they
often require a substantial amount of domain knowledge and engineering. Furthermore, they are not
necessarily transferable across the whole chemical compound space.
In this paper, we pursue a more direct approach initiated by Rupp et al. (2012) to the problem.
We learn the mapping between the molecule and its atomization energy from scratch1 using the
"Coulomb matrix" as a low-level molecular descriptor (Rupp et al., 2012). As we will see later, an
† Electronic address: [email protected]
‡ Electronic address: [email protected]
1. This approach has already been applied in multiple domains such as natural language processing (Collobert et al., 2011) or speech recognition (Jaitly and Hinton, 2011).
Figure 1: Different representations of the same molecule: (a) raw molecule with Cartesian coordinates and associated charges, (b) original (non-sorted) Coulomb matrix as computed by Equation
1, (c) eigenspectrum of the Coulomb matrix, (d) sorted Coulomb matrix, (e) set of randomly sorted
Coulomb matrices.
inherent problem of the Coulomb matrix descriptor is that it lacks invariance with respect to permutation of atom indices, thus leading to an exponential blow-up of the problem's dimensionality. We
center the discussion around the two following questions: How to inject permutation invariance optimally into the machine learning model? What are the model characteristics that lead to the highest
prediction accuracy?
Our study extends the work of Rupp et al. (2012) by empirically comparing several methods for
enforcing permutation invariance: (1) computing the sorted eigenspectrum of the Coulomb matrix,
(2) sorting the rows and columns by their respective norm and (3), a new idea, randomly sorting rows
and columns in order to associate a set of randomly sorted Coulomb matrices to each molecule, thus
extending the dataset considerably. These three representations are then compared in the light of
several models such as Gaussian kernel ridge regression or multilayer neural networks where the
Gaussian prior is traded against more flexibility and the ability to learn the representation directly
from the data.
Related Work
In atomic-scale physics and in material sciences, neural networks have been used to model the potential energy surface of single systems (e.g., the dynamics of a single molecule over time) since the
early 1990s (Lorenz et al., 2004; Manzhos and Carrington, 2006; Behler, 2011). Recently, Gaussian processes were used for this as well (Bartók et al., 2010). The major difference to the problem
presented here is that previous work in modeling quantum mechanical energies looked mostly at the
dynamics of one molecule, whereas we use data from different molecules simultaneously ("learning
across chemical compound space"). Attempts in this direction have been rare (Balabin and Lomakina, 2009; Hautier et al., 2010; Balabin and Lomakina, 2011).
2 Representing Molecules
Electronic structure methods based on quantum-mechanical first principles only require a set of
nuclear charges Z_i and the corresponding Cartesian coordinates of the atomic positions in 3D space
R_i as an input for the calculation of molecular energetics. Here we use exactly the same information
as input for our machine learning algorithms. Specifically, for each molecule, we construct the so-called Coulomb matrix C, which contains information about Z_i and R_i in a way that preserves many
of the required properties of a good descriptor (Rupp et al., 2012):
$$C_{ij} = \begin{cases} 0.5\, Z_i^{2.4} & \forall i = j \\ \dfrac{Z_i Z_j}{|R_i - R_j|} & \forall i \neq j \end{cases} \quad (1)$$
The diagonal elements of the Coulomb matrix correspond to a polynomial fit of the potential energies of the free atoms, while the off-diagonal elements encode the Coulomb repulsion between all
possible pairs of nuclei in the molecule. As such, the Coulomb matrix is invariant to translations
and rotations of the molecule in 3D space; both transformations must keep the potential energy of
the molecule constant by definition.
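As a sketch, Equation 1 translates directly into a few lines of NumPy; the function name is ours:

```python
import numpy as np

def coulomb_matrix(Z, R):
    """Coulomb matrix of Equation 1 for nuclear charges Z (shape (n,))
    and Cartesian coordinates R (shape (n, 3))."""
    n = len(Z)
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                C[i, j] = 0.5 * Z[i] ** 2.4                          # free-atom term
            else:
                C[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])  # pair repulsion
    return C
```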
Two problems with the Coulomb matrix representation that prevent it from being used out-of-the-box in a vector-space model are the following: (1) the dimension of the Coulomb matrix depends
on the number of atoms in the molecule and (2) the ordering of atoms in the Coulomb matrix is
undefined, that is, many Coulomb matrices can be associated to the same molecule by just permuting
rows and columns.
The first problem can be mitigated by introducing "invisible atoms" in the molecules, that have
nuclear charge zero and do not interact with other atoms. These invisible atoms do not influence
the physics of the molecule of interest and make the total number of atoms in the molecule sum to
a constant d. In practice, this corresponds to padding the Coulomb matrix by zero-valued entries so
that the Coulomb matrix has size d × d, as it has been done by Rupp et al. (2012).
Solving the second problem is more difficult and has no obvious physically plausible workaround.
Three candidate representations are depicted in Figure 1 and presented below.
2.1 Eigenspectrum Representation
The eigenspectrum representation (Rupp et al., 2012) is obtained by solving the eigenvalue problem
Cv = λv under the constraint λ_i ≥ λ_{i+1}. The spectrum (λ_1, . . . , λ_d) is used as the
representation. It is easy to see that this representation is invariant to permutation of atoms in the
Coulomb matrix.
On the other hand, the dimensionality of the eigenspectrum d is low compared to the initial 3d − 6
degrees of freedom of most molecules. While this sharp dimensionality reduction may yield some
useful built-in regularization, it may also introduce unrecoverable noise.
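A possible NumPy realization of this representation, with the descending sort following the constraint above:

```python
import numpy as np

def eigenspectrum(C):
    """Eigenvalues of the symmetric Coulomb matrix, sorted so that
    lambda_i >= lambda_{i+1}."""
    lam = np.linalg.eigvalsh(C)   # returns ascending order for symmetric matrices
    return lam[::-1]
```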
2.2 Sorted Coulomb Matrices
Another solution to the ordering problem is to choose the permutation of atoms whose associated
Coulomb matrix C satisfies ||C_i|| ≥ ||C_{i+1}|| ∀i, where C_i denotes the ith row of the Coulomb
matrix. Unlike the eigenspectrum representation, two different molecules have necessarily different
associated sorted Coulomb matrices.
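The sorting step can be sketched as follows; applying the same permutation to rows and columns keeps the matrix a valid Coulomb matrix of the molecule:

```python
import numpy as np

def sorted_coulomb(C):
    """Jointly permutes rows and columns so that ||C_i|| >= ||C_{i+1}||."""
    order = np.argsort(-np.linalg.norm(C, axis=1))   # descending row norms
    return C[order][:, order]
```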
2.3 Random(-ly sorted) Coulomb Matrices
A way to deal with the larger dimensionality subsequent to taking the whole Coulomb matrix instead
of the eigenspectrum is to extend the dataset with Coulomb matrices that are randomly sorted. This is
achieved by associating a conditional distribution over Coulomb matrices p(C|M ) to each molecule
M . Let C(M ) define the set of matrices that are valid Coulomb matrices of the molecule M . The
unnormalized probability distribution from which we would like to sample Coulomb matrices is
defined as:
$$p^{\star}(C|M) = \sum_{n} 1_{C \in \mathcal{C}(M)} \cdot 1_{\{\|C_i\| + n_i \geq \|C_{i+1}\| + n_{i+1}\ \forall i\}} \cdot p_{\mathcal{N}(0,\sigma I)}(n) \quad (2)$$
The first term constrains the sample to be a valid Coulomb matrix of M, the second term ensures
the sorting constraint and the third term defines the randomness parameterized by the noise level σ.
Sampling from this distribution can be achieved approximately using the following algorithm:
Algorithm for generating a random Coulomb matrix
1. Take any Coulomb matrix C among the set of matrices that are valid Coulomb
matrices of M and compute its row norm ||C|| = (||C1 ||, . . . , ||Cd ||).
2. Draw n ∼ N(0, σI) and find the permutation P that sorts ||C|| + n, that is, find
the permutation that satisfies permuteP (||C|| + n) = sort(||C|| + n).
3. Permute C row-wise and then column-wise with the same permutation, that is,
C_random = permutecols_P(permuterows_P(C)).
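A compact sketch of the three steps above (function name and RNG handling are ours):

```python
import numpy as np

def random_coulomb(C, sigma=1.0, rng=np.random):
    """One randomly sorted Coulomb matrix, following steps 1-3 above."""
    norms = np.linalg.norm(C, axis=1)                 # step 1: row norms ||C||
    n = rng.normal(0.0, sigma, size=norms.shape)      # step 2: n ~ N(0, sigma I)
    order = np.argsort(-(norms + n))                  # permutation sorting ||C|| + n
    return C[order][:, order]                         # step 3: permute rows and columns
```

At prediction time, the expectation over random Coulomb matrices described below can then be approximated by averaging the model output over several such draws.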
The idea of dataset extension has already been used in the context of handwritten character recognition by, among others, LeCun et al. (1998), Ciresan et al. (2010) and in the context of support
vector machines, by DeCoste and Schölkopf (2002). Random Coulomb matrices can be used at
[Figure 2 plot annotations: Input (Coulomb matrix); Output (atomization energy).]
Figure 2: Two-dimensional PCA of the data with increasingly strong label contribution (from left
to right). Molecules with low atomization energies are depicted in red and molecules with high
atomization energies are depicted in blue. The plots suggest an interesting mix of global and local
statistics with highly non-Gaussian distributions.
training time in order to multiply the number of data points but also at prediction time: predicting
the property of a molecule consists of predicting the properties for all Coulomb matrices among the
distribution of Coulomb matrices associated to M and output the average of all these predictions
y = E_{C|M}[f(C)].
3 Predicting Atomization Energies
The atomization energy E quantifies the potential energy stored in all chemical bonds. As such,
it is defined as the difference between the potential energy of a molecule and the sum of potential
energies of its composing isolated atoms. The potential energy of a molecule is the solution to the
electronic Schrödinger equation Hψ = Eψ, where H is the Hamiltonian of the molecule and ψ is
the state of the system. Note that the Hamiltonian is uniquely defined by the Coulomb matrix up to
rotation and translation symmetries. A dataset {(M_1, E_1), . . . , (M_n, E_n)} is created by running a
Schrödinger equation solver on a small set of molecules. Figure 2 shows a two-dimensional PCA
visualization of the dataset where input and output distributions exhibit an interesting mix of local
and global statistics.
Obtaining atomization energies from the Schrödinger equation solver is computationally expensive
and, as a consequence, only a fraction of the molecules in the chemical compound space can be
labeled. The learning algorithm is then asked to generalize from these few data points to unseen
molecules. In this section, we show how two algorithms of study, kernel ridge regression and the
multilayer neural network, are applied to this problem. These algorithms are well-established nonlinear methods and are good candidates for handling the intrinsic nonlinearities of the problem. In
kernel ridge regression, the measure of similarity is encoded in the kernel. On the other hand, in
multilayer neural networks, the measure of similarity is learned essentially from data and implicitly
given by the mapping onto increasingly many layers. In general, neural networks are more flexible
and make less assumptions about the data. However, it comes at the cost of being more difficult to
train and regularize.
3.1 Kernel Ridge Regression
The most basic algorithm to solve the nonlinear regression problem at hand is kernel ridge regression
(cf. Hastie et al., 2001). It uses a quadratic constraint on the norm of the coefficients α_i. As is well known, the solution of the minimization problem

$$\min_{\alpha} \sum_i \left( E^{\mathrm{est}}(x_i) - E_i^{\mathrm{ref}} \right)^2 + \lambda \sum_i \alpha_i^2$$

reads α = (K + λI)^{-1} E^ref, where K is the empirical kernel and the input data x_i is either the
eigenspectrum of the Coulomb matrix or the vectorized sorted Coulomb matrix.
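This closed form is a one-liner once the kernel matrix is built; a sketch with the Gaussian kernel used later in Section 4 (function names are ours):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_fit(X, E_ref, sigma, lam):
    """Returns alpha = (K + lam * I)^{-1} E_ref."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), E_ref)

def krr_predict(X_train, alpha, X_test, sigma):
    return gaussian_kernel(X_test, X_train, sigma) @ alpha
```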
Expanding the dataset with the randomly generated Coulomb matrices described in Section 2.3
yields a huge dataset that is difficult to handle with standard kernel ridge regression algorithms.
Although approximations of the kernel can improve its scalability, random Coulomb matrices can
be handled more easily by encoding permutations directly into the kernel. We redefine the kernel as
Figure 3: Data flow from the raw molecule to the predicted atomization energy E. The molecule (a)
is converted to its randomly sorted Coulomb matrix representation (b). The Coulomb matrix is then
converted into a suitable sensory input (c) that is fed to the neural network (d). The output of the
neural network is then rescaled to the original energy unit (e).
a sum over permutations:
$$\tilde{K}(x_i, x_j) = \frac{1}{2} \sum_{l=1}^{L} \left( K(x_i, P_l(x_j)) + K(P_l(x_i), x_j) \right) \quad (3)$$
where Pl is the l-th permutation of atoms corresponding to the l-th realization of the random
Coulomb matrix and L is the total number of permutations. This sum over multiple permutations
has the effect of testing multiple plausible alignments of molecules. Note that the summation can be
replaced by a "max" operator in order to focus on correct alignments of molecules and ignore poor
alignments.
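A sketch of Equation 3 built on the random_coulomb draw from Section 2.3; treating each draw as one realization of P_l is our reading of the text:

```python
import numpy as np
# random_coulomb as sketched in Section 2.3

def permutation_kernel(Ci, Cj, sigma, L=250, reduce=sum):
    """Equation 3; pass reduce=max for the "max" variant mentioned above."""
    k = lambda a, b: float(np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2)))
    terms = []
    for _ in range(L):
        Pi = random_coulomb(Ci).ravel()   # realization P_l(x_i)
        Pj = random_coulomb(Cj).ravel()   # realization P_l(x_j)
        terms.append(0.5 * (k(Ci.ravel(), Pj) + k(Pi, Cj.ravel())))
    return reduce(terms)
```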
3.2 Multilayer Neural Networks
A main feature of multilayer neural networks is their ability to learn internal representations that
potentially make models statistically and computationally more efficient. Unfortunately, the intrinsically non-convex nature of neural networks makes them hard to optimize and regularize in a
principled manner. Often, a crucial factor for training neural networks successfully, is to start with
a favorable initial conditioning of the learning problem, that is, a good sensory input representation
and a proper weights initialization.
Unlike images or speech data, an important amount of label-relevant information is contained within
the elements of the Coulomb matrix and not only in their dependencies. For these reasons, taking
the real quantities directly as input is likely to lead to a poorly conditioned optimization problem.
Instead, we choose to break apart each dimension of the Coulomb matrix C by converting the representation into a three-dimensional tensor of essentially binary predicates as follows:
$$x = \left[\, \ldots,\ \tanh\!\left(\frac{C - \theta}{\theta}\right),\ \tanh\!\left(\frac{C}{\theta}\right),\ \tanh\!\left(\frac{C + \theta}{\theta}\right),\ \ldots \,\right] \quad (4)$$
The new representation x is fed as input to the neural network. Note that in the new representation,
many elements are constant and can be pruned. In practice, by choosing an appropriate step θ, the
dimensionality of the sensory input is kept to tractable levels.
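A sketch of the expansion in Equation 4; the number of offsets kept per matrix entry (span) is an illustrative choice, since the text only states that constant outputs are pruned:

```python
import numpy as np

def binarize(C, theta=1.0, span=3):
    """Tanh expansion of Equation 4: tanh((c + k * theta) / theta)
    for each entry c and offsets k = -span..span."""
    c = C.ravel()
    offsets = np.arange(-span, span + 1) * theta
    return np.tanh((c[None, :] + offsets[:, None]) / theta).ravel()
```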
This binarization of the input space improves the conditioning of the learning problem and makes
the model more flexible. As we will see in Section 5, learning from this flexible representation
requires enough data in order to compensate for the lack of a strong prior and might lead to low
performance if this condition is not met. The full data flow from the raw molecule to the predicted
atomization energy is depicted in Figure 3.
4 Methodology
Dataset As in Rupp et al. (2012), we select a subset of 7165 small molecules extracted from a
huge database of nearly one billion small molecules collected by Blum and Reymond (2009). These
molecules are composed of a maximum of 23 atoms, of which at most 7 are heavy atoms.
Molecules are converted to a suitable Cartesian coordinate representation using the universal force-field method (Rappé et al., 1992) as implemented in the software OpenBabel (Guha et al., 2006).
The Coulomb matrices can then be computed from these Cartesian coordinates using Equation 1. Atomization energies are calculated for each molecule and range from −800 to −2000 kcal/mol.
As a result, we have a dataset of 7165 Coulomb matrices of size 23 × 23 with their associated one-dimensional labels2. Random Coulomb matrices are generated with the noise parameter σ = 1 (see
Equation 2).
Model validation For each learning method we used stratified 5-fold cross validation with identical cross validation folds, where the stratification was done by grouping molecules into groups
of five by their energies and then randomly assigning one molecule to each fold, as in Rupp et al.
(2012). This sampling reduces the variance of the test error estimator. Each algorithm is optimized
for mean squared error. To illustrate how the prediction accuracy changes when increasing the training sample size, each model was trained on 500 to 7000 data points which were sampled identically
for the different methods.
Choice of parameters for kernel ridge regression The kernel ridge regression model was trained
using a Gaussian kernel (K_ij = exp[−||x_i − x_j||² / (2σ²)]) where σ is the kernel width. No further scaling or normalization of the data was done, as the meaningfulness of the data in chemical compound space was to be preserved. A grid search with an inner cross validation was used
to determine the hyperparameters for each of the five cross validation folds for each method,
namely kernel width σ and regularization strength λ. Grid-searching for optimal hyperparameters can be easily parallelized. The regularization parameter was varied from 10^-11 to 10^1 on
a logarithmic scale and the kernel width was varied from 5 to 81 on a linear scale with a step
size of 4. For the eigenspectrum representation the individual folds showed lower regularization parameters (λ_eig = 2.15 × 10^-10 ± 0.00) as compared to the sorted Coulomb representation
(λ_sorted = 1.67 × 10^-7 ± 0.00). The optimal kernel width parameters are σ_eig = 41 ± 6.07 and
σ_sorted = 77 ± 0.00. As indicated by the standard deviation 0.00, identical parameters are often chosen for all folds of cross-validation. Training one fold, for one particular set of parameters, took approximately 10 seconds. When the algorithm is trained on random Coulomb matrices, we set the number of permutations involved in the kernel to L = 250 (see Equation 3)
and grid-search hyperparameters over both the "sum" and "max" kernels. Obtained parameters are
λ_random = 0.0157 ± 0.0247 and σ_random = 74 ± 4.38.
Choice of parameters for the neural network We choose a binarization step θ = 1 (see Equation
4). As a result, the neural network takes approximately 1800 inputs. We use two hidden layers
composed of 400 and 100 units with sigmoidal activation functions, respectively. Initial weights
W_0 and learning rates γ are chosen as W_0 ∼ N(0, 1/√m) and γ = γ_0/√m, where m is the
number of input units and γ_0 is the global learning rate of the network, set to γ_0 = 0.01. The
error derivative is backpropagated from layer l to layer l − 1 by multiplying it by β = √(m/n),
where m and n are the number of input and output units of layer l. These choices for W_0, γ and
β ensure that the representations at each layer fall into the correct regime of the nonlinearity and
that weights in each layer evolve at the correct speed. Inputs and outputs are scaled to have mean
0 and standard deviation 1. We use averaged stochastic gradient descent (ASGD) with minibatches
of size 25 for a maximum of 250000 iterations and with ASGD coefficients set so that the neural
network remembers approximately 10% of its training history. The training is performed on 90% of
the training set and the rest is used for early stopping. Training the neural network takes between one
hour and one day on a CPU depending on the sample complexity. When using the random Coulomb
matrix representation, the prediction for a new molecule is averaged over 10 different realizations of
its associated random Coulomb matrix.
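The initialization and rate scalings described above can be collected in a small helper (the symbol names follow our reconstruction; the helper itself is ours):

```python
import numpy as np

def layer_setup(m, n, gamma0=0.01, rng=np.random):
    """W0 ~ N(0, 1/sqrt(m)); learning rate gamma = gamma0 / sqrt(m);
    backprop scaling beta = sqrt(m / n) for a layer with m inputs, n outputs."""
    W0 = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
    gamma = gamma0 / np.sqrt(m)
    beta = np.sqrt(m / n)
    return W0, gamma, beta
```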
5 Results
Cross-validation results for each learning algorithm and representation are shown in Table 1. For
the sake of completeness, we also include some baseline results such as the mean predictor (simply
predicting the mean of labels in the training set), linear regression, k-nearest neighbors, mixed effects
2. The dataset is available at http://www.quantum-machine.org.
Learning algorithm                    Molecule representation    MAE              RMSE
Mean predictor                        None                       179.02 ± 0.08    223.92 ± 0.32
K-nearest neighbors                   Eigenspectrum               70.72 ± 2.12     92.49 ± 2.70
                                      Sorted Coulomb              71.54 ± 0.97     95.97 ± 1.45
Linear regression                     Eigenspectrum               29.17 ± 0.35     38.01 ± 1.11
                                      Sorted Coulomb              20.72 ± 0.32     27.22 ± 0.84
Mixed effects                         Eigenspectrum               10.50 ± 0.48     20.38 ± 9.29
                                      Sorted Coulomb               8.5  ± 0.45     12.16 ± 0.95
Gaussian support vector regression    Eigenspectrum               10.78 ± 0.58     19.47 ± 9.46
                                      Sorted Coulomb               8.06 ± 0.38     12.59 ± 2.17
Gaussian kernel ridge regression      Eigenspectrum               11.39 ± 0.81     16.01 ± 1.71
                                      Sorted Coulomb               8.72 ± 0.40     12.59 ± 1.35
                                      Random Coulomb               7.79 ± 0.42     11.40 ± 1.11
Multilayer neural network             Eigenspectrum               14.08 ± 0.29     20.29 ± 0.73
                                      Sorted Coulomb              11.82 ± 0.45     16.01 ± 0.81
                                      Random Coulomb               3.51 ± 0.13      5.96 ± 0.48
Table 1: Prediction errors in terms of mean absolute error (MAE) and root mean square error
(RMSE) for several algorithms and types of representations. Linear regression and k-nearest neighbors are inaccurate compared to the more refined kernel methods and multilayer neural network. The
multilayer neural network performance varies considerably depending on the type of representation
but sets the lowest error in our study on the random Coulomb representation.
models (Pinheiro and Bates, 2000; Fazli et al., 2011) and kernel support vector regression (Smola
and Schölkopf, 2004). Linear regression and k-nearest neighbors are clearly off-the-mark compared
to the other more sophisticated models such as mixed effects models, kernel methods and multilayer
neural networks.
While results for kernel algorithms are similar, they all differ considerably from those obtained with
the multilayer neural network. In particular, we can observe that they are performing reasonably well
with all types of representation while the multilayer neural network performance is highly dependent
on the representation fed as input.
More specifically, the multilayer neural network tends to perform better as the input representation
gets richer (as the total amount of information in the input distribution increases), suggesting that the
lack of a strong inbuilt prior in the neural network must be compensated by a large amount of data.
The neural network performs best with random Coulomb matrices that are intrinsically the richest
representation as a whole distribution over Coulomb matrices is associated to each molecule.
A similar phenomenon can be observed from the learning curves in Figure 4. As the training data
increases, the error for Gaussian kernel ridge regression decreases slowly while the neural network
can take greater advantage from this additional data.
6 Conclusion
Predicting molecular energies quickly and accurately across the chemical compound space (CCS)
is an important problem as the quantum-mechanical calculations are typically taking days and do
not scale well to more complex systems. Supervised statistical learning is a natural candidate for
solving this problem as it encourages computational units to focus on solving the problem of interest
rather than solving the more general Schrödinger equation.
In this paper, we have developed further the learning-from-scratch approach initiated by Rupp et al.
(2012) and provided a deeper understanding of some of the ingredients for learning a successful
mapping between raw molecular geometries and atomization energies. Our results suggest the importance of having flexible priors (in our case, a multilayer network) and lots of data (generated
artificially by exploiting symmetries of the Coulomb matrix). Our work improves the state-of-the-art on this dataset by a factor of almost three. From a reference MAE of 9.9 kcal/mol (Rupp et al.,
[Figure 4 plots: mean absolute error (kcal/mol) vs. # samples; left panel Gaussian kernel ridge regression, right panel multilayer neural network; curves for the eigenspectrum, sorted Coulomb, and random Coulomb representations.]
Figure 4: Learning curves for Gaussian kernel ridge regression and the multilayer neural network.
Results for kernel ridge regression are more invariant to the representation and to the number of
samples than for the multilayer neural network. The gray area at the bottom of the plot indicates the
level at which the prediction is considered to be "chemically accurate".
2012), we went down to a MAE of 3.51 kcal/mol, which is considerably closer to the 1 kcal/mol
required for chemical accuracy.
Many open problems remain that makes quantum chemistry an attractive challenge for Machine
Learning: (1) Are there fundamental modeling limits of the statistical learning approach for quantum
chemistry applications or is it rather a matter of producing more training data? (2) The training
data can be considered noise free. Thus, are there better ML models for the noise free case while
regularizing away the intrinsic problem complexity to keep the ML model small? (3) Can better
representations be devised with inbuilt invariance properties (e.g. Tangent Distance, Simard et al.,
1996), harvesting physical prior knowledge? (4) How can we extract physics insights on quantum
mechanics from the trained nonlinear ML prediction models?
Acknowledgments
This work is supported by the World Class University Program through the National Research
Foundation of Korea funded by the Ministry of Education, Science, and Technology, under Grant
R31-10008, and the FP7 program of the European Community (Marie Curie IEF 273039). This
research used resources of the Argonne Leadership Computing Facility at Argonne National Laboratory, which is supported by the Office of Science of the U.S. DOE under Contract No. DE-AC02-06CH11357. This research is supported, in part, by the Natural Sciences and Engineering
Research Council of Canada. The authors also thank Márton Danóczy for preliminary work and
useful discussions.
References
Roman M. Balabin and Ekaterina I. Lomakina. Neural network approach to quantum-chemistry data: Accurate prediction of density functional theory energies. Journal of Chemical Physics, 131(7):074104, 2009.
Roman M. Balabin and Ekaterina I. Lomakina. Support vector machine regression (LS-SVM): an alternative to artificial neural networks (ANNs) for the analysis of quantum chemistry data? Physical Chemistry Chemical Physics, 13(24):11710-11718, 2011.
Pierre Baldi, Klaus-Robert Müller, and Gisbert Schneider. Editorial: Charting chemical space: Challenges and opportunities for artificial intelligence and machine learning. Molecular Informatics, 30(9):751, 2011.
Albert P. Bartók, Mike C. Payne, Risi Kondor, and Gábor Csányi. Gaussian approximation potentials: The accuracy of quantum mechanics, without the electrons. Phys. Rev. Lett., 104(13):136403, 2010.
Jörg Behler. Neural network potential-energy surfaces in chemistry: a tool for large-scale simulations. Physical Chemistry Chemical Physics, 13(40):17930-17955, 2011.
Lorenz C. Blum and Jean-Louis Reymond. 970 million druglike small molecules for virtual screening in the chemical universe database GDB-13. Journal of the American Chemical Society, 131(25):8732-8733, 2009.
Dan Claudiu Ciresan, Ueli Meier, Luca Maria Gambardella, and Jürgen Schmidhuber. Deep, big, simple neural nets for handwritten digit recognition. Neural Computation, 22(12):3207-3220, 2010.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537, 2011.
Dennis DeCoste and Bernhard Schölkopf. Training invariant support vector machines. Machine Learning, 46(1-3):161-190, 2002.
Siamac Fazli, Márton Danóczy, Jürg Schelldorfer, and Klaus-Robert Müller. ℓ1-penalized linear mixed-effects models for high dimensional data with application to BCI. NeuroImage, 56(4):2100-2108, 2011.
Rajarshi Guha, Michael T. Howard, Geoffrey R. Hutchison, Peter Murray-Rust, Henry Rzepa, Christoph Steinbeck, Jörg Wegner, and Egon L. Willighagen. The blue obelisk, interoperability in chemical informatics. Journal of Chemical Information and Modeling, 46(3):991-998, 2006.
Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer Series in Statistics. Springer New York Inc., 2001.
Geoffroy Hautier, Christopher C. Fisher, Anubhav Jain, Tim Mueller, and Gerbrand Ceder. Finding nature's missing ternary oxide compounds using machine learning and density functional theory. Chemistry of Materials, 22(12):3762-3767, 2010.
Navdeep Jaitly and Geoffrey E. Hinton. Learning a better representation of speech soundwaves using restricted Boltzmann machines. In ICASSP, pages 5884-5887, 2011.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Sönke Lorenz, Axel Groß, and Matthias Scheffler. Representing high-dimensional potential-energy surfaces for reactions at surfaces by neural networks. Chemical Physics Letters, 395(4-6):210-215, 2004.
Sergei Manzhos and Tucker Carrington. A random-sampling high dimensional model representation neural network for building potential energy surfaces. J. Chem. Phys., 125:084109, 2006.
José C. Pinheiro and Douglas M. Bates. Mixed-Effects Models in S and S-Plus. Springer, New York, 2000.
Anthony K. Rappé, Carla J. Casewit, K. S. Colwell, William A. Goddard, and W. M. Skiff. UFF, a full periodic table force field for molecular mechanics and molecular dynamics simulations. Journal of the American Chemical Society, 114(25):10024-10035, 1992.
Matthias Rupp, Alexandre Tkatchenko, Klaus-Robert Müller, and O. Anatole von Lilienfeld. Fast and accurate modeling of molecular atomization energies with machine learning. Phys. Rev. Lett., 108(5):058301, 2012.
Patrice Simard, Yann LeCun, John S. Denker, and Bernard Victorri. Transformation invariance in pattern recognition: Tangent distance and tangent propagation. In Neural Networks: Tricks of the Trade, pages 239-274, 1996.
Alex J. Smola and Bernhard Schölkopf. A tutorial on support vector regression. Statistics and Computing, 14(3):199-222, 2004.
Roberto Todeschini and Viviana Consonni. Handbook of Molecular Descriptors. Wiley-VCH, Weinheim, Germany, second edition, 2009.
O. Anatole von Lilienfeld and Mark E. Tuckerman. Molecular grand-canonical ensemble density functional theory and exploration of chemical space. The Journal of Chemical Physics, 125(15):154104, 2006.
Angular Quantization-based Binary Codes for
Fast Similarity Search
Yunchao Gong‡, Sanjiv Kumar†, Vishal Verma‡, Svetlana Lazebnik§
† Google Research, New York, NY 10011, USA
‡ Computer Science Department, University of North Carolina at Chapel Hill, NC 27599, USA
§ Computer Science Department, University of Illinois at Urbana-Champaign, IL 61801, USA
{yunchao,verma}@cs.unc.edu, [email protected], [email protected]
Abstract
This paper focuses on the problem of learning binary codes for efficient retrieval
of high-dimensional non-negative data that arises in vision and text applications
where counts or frequencies are used as features. The similarity of such feature
vectors is commonly measured using the cosine of the angle between them. In
this work, we introduce a novel angular quantization-based binary coding (AQBC)
technique for such data and analyze its properties. In its most basic form, AQBC
works by mapping each non-negative feature vector onto the vertex of the binary hypercube with which it has the smallest angle. Even though the number of
vertices (quantization landmarks) in this scheme grows exponentially with data dimensionality d, we propose a method for mapping feature vectors to their
smallest-angle binary vertices that scales as O(d log d). Further, we propose a
method for learning a linear transformation of the data to minimize the quantization error, and show that it results in improved binary codes. Experiments on
image and text datasets show that the proposed AQBC method outperforms the
state of the art.
1 Introduction
Retrieving relevant content from massive databases containing high-dimensional data is becoming
common in many applications involving images, videos, documents, etc. Two main bottlenecks in
building an efficient retrieval system for such data are the need to store the huge database and the
slow speed of retrieval. Mapping the original high-dimensional data to similarity-preserving binary
codes provides an attractive solution to both of these problems [21, 23]. Several powerful techniques
have been proposed recently to learn binary codes for large-scale nearest neighbor search and retrieval. These methods can be supervised [2, 11, 16], semi-supervised [10, 24] and unsupervised
[7, 8, 9, 12, 15, 18, 20, 26], and can be applied to any type of vector data.
In this work, we investigate whether it is possible to achieve an improved binary embedding if
the data vectors are known to contain only non-negative elements. In many vision and text-related
applications, it is common to represent data as a Bag of Words (BoW), or a vector of counts or
frequencies, which contains only non-negative entries. Furthermore, cosine of angle between vectors
is typically used as a similarity measure for such data. Unfortunately, not much attention has been
paid in the past to exploiting this special yet widely used data type.
A popular binary coding method for cosine similarity is based on Locality Sensitive Hashing
(LSH) [4], but it does not take advantage of the non-negative nature of histogram data. As we
will show in the experiments, the accuracy of LSH is limited for most real-world data. Min-wise
Hashing is another popular method which is designed for non-negative data [3, 13, 14, 22]. However, it is appropriate only for Jaccard distance and also it does not result in binary codes. Special
1
clustering algorithms have been developed for data sampled on the unit hypersphere, but they also
do not lead to binary codes [1]. To the best of our knowledge, this paper describes the first work that
specifically learns binary codes for non-negative data with cosine similarity.
We propose a novel angular quantization technique to learn binary codes from non-negative data,
where the angle between two vectors is used as a similarity measure. Without loss of generality
such data can be assumed to live in the positive orthant of a unit hypersphere. The proposed technique works by quantizing each data point to the vertex of the binary hypercube with which it has
the smallest angle. The number of these quantization centers or landmarks is exponential in the
dimensionality of the data, yielding a low-distortion quantization of a point. Note that it would be
computationally infeasible to perform traditional nearest-neighbor quantization as in [1] with such
a large number of centers. Moreover, even at run time, finding the nearest center for a given point
would be prohibitively expensive. Instead, we present a very efficient method to find the nearest
landmark for a point, i.e., the vertex of the binary hypercube with which it has the smallest angle.
Since the basic form of our quantization method does not take data distribution into account, we further propose a learning algorithm that linearly transforms the data before quantization to reduce the
angular distortion. We show experimentally that it significantly outperforms other state-of-the-art
binary coding methods on both visual and textual data.
2 Angular Quantization-based Binary Codes
Our goal is to find a quantization scheme that maximally preserves the cosine similarity (angle) between vectors in the positive orthant of the unit hypersphere. This section introduces the proposed
angular quantization technique that directly yields binary codes of non-negative data. We first propose a simplified data-independent algorithm which does not involve any learning, and then present
a method to adapt the quantization scheme to the input data to learn robust codes.
2.1 Data-independent Binary Codes
Suppose we are given a database X containing n d-dimensional points such that X = {x_i}_{i=1}^{n},
where x_i ∈ R^d. We first address the problem of computing a d-bit binary code of an input vector
x_i. A c-bit code for c < d will be described later in Sec. 2.2. For angle-preserving quantization,
we define a set of quantization centers or landmarks by projecting the vertices of the binary hypercube {0, 1}^d onto the unit hypersphere. This construction results in 2^d − 1 landmark points for
d-dimensional data.1 An illustration of the proposed quantization model is given in Fig. 1. Given a
point x on the hypersphere, one first finds its nearest2 landmark v_i, and the binary encoding for x_i
is simply given by the binary vertex b_i corresponding to v_i.3
One of the main characteristics of the proposed model is that the number of landmarks grows exponentially with d, and for many practical applications d can easily be in thousands or even more.
On the one hand, having a huge number of landmarks is preferred as it can provide a fine-grained,
low-distortion quantization of the input data, but on the other hand, it poses the formidable computational challenge of efficiently finding the nearest landmark (and hence the binary encoding) for
an arbitrary input point. Note that performing brute-force nearest-neighbor search might even be
slower than nearest-neighbor retrieval from the original database! To obtain an efficient solution, we
take advantage of the special structure of our set of landmarks, which are given by the projections
of binary vectors onto the unit hypercube. The nearest landmark of a point x, or the binary vertex
having the smallest angle with x, is given by
$$\hat{b} = \arg\max_{b} \frac{b^T x}{\|b\|_2} \quad \text{s.t.} \quad b \in \{0,1\}^d. \quad (1)$$
This is an integer programming problem but its global maximum can be found very efficiently as we
show in the lemma below. The corresponding algorithm is presented in Algorithm 1.
1. Note that the vertex with all 0's is excluded as its norm is 0, which is not permissible in eq. (1).
2. In terms of angle or Euclidean distance, which are equivalent for unit-norm data.
3. Since in terms of angle from a point, both b_i and v_i are equivalent, we will use the term landmark for either b_i or v_i depending on the context.
[Figure 1 plots: (a) quantization model in 3D; (b) cos(b_1, b_2) vs. m (log scale), with lower and upper bound curves for r = 1, 3, 5.]
Figure 1: (a) An illustration of our quantization model in 3D. Here b_i is a vertex of the unit cube and v_i is its
projection on the unit sphere. Points v_i are used as the landmarks for quantization. To find the binary code of
a given data point x, we first find its nearest landmark point v_i on the sphere, and the corresponding b_i gives its
binary code (v_4 and b_4 in this case). (b) Bound on cosine of angle between a binary vertex b_1 with Hamming
weight m, and another vertex b_2 at a Hamming distance r from b_1. See Lemma 2 for details.
Algorithm 1: Finding the nearest binary landmark for a point on the unit hypersphere.
Input: point x on the unit hypersphere. Output: b̂, the binary vector having the smallest angle with x.
1. Sort the entries of x in descending order as x_(1), . . . , x_(d).
2. for k = 1, . . . , d
3.     if x_(k) = 0 break.
4.     Form binary vector b_k whose elements are 1 for the k largest positions in x, 0 otherwise.
5.     Compute θ(x, k) = (x^⊤ b_k)/‖b_k‖₂ = (Σ_{j=1}^{k} x_(j)) / √k.
6. end for
7. Return b_k corresponding to m = arg max_k θ(x, k).
Lemma 1 The globally optimal solution of the integer programming problem in eq. (1) can be
computed in O(d log d) time. Further, for a sparse vector with s non-zero entries, it can be computed
in O(s log s) time.
Proof: Since b is a d-dimensional binary vector, its norm ‖b‖₂ can have at most d different values, i.e., ‖b‖₂ ∈ {√1, . . . , √d}. We can separately consider the optimal solution of eq. (1) for each value of the norm. Given ‖b‖₂ = √k (i.e., b has k ones), eq. (1) is maximized by setting to one the entries of b corresponding to the largest k entries of x. Since ‖b‖₂ can take on d distinct values, we need to evaluate eq. (1) at most d times, and find the k and the corresponding b̂ for which the objective function is maximized (see Algorithm 1 for a detailed description of the algorithm). To find the largest k entries of x for k = 1, . . . , d, we need to sort all the entries of x, which takes O(d log d) time, and checking the solutions for all k is linear in d. Further, if the vector x is sparse with only s non-zero elements, it is obvious that the maximum of eq. (1) is achieved when k varies from 1 to s. Hence, one needs to sort only the non-zero entries of x, which takes O(s log s) time, and checking all possible solutions is linear in s.
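To make the procedure concrete, here is a minimal NumPy sketch of Algorithm 1; the function name and the fallback for vectors without positive entries are our own assumptions, not part of the original description.

```python
import numpy as np

def nearest_binary_landmark(x):
    # Step 1: sort entries in descending order; O(d log d).
    order = np.argsort(-x)
    x_sorted = x[order]
    # Step 3: only positive entries can increase the objective.
    k_max = int(np.count_nonzero(x_sorted > 0))
    if k_max == 0:
        k_max = 1  # assumed fallback when no entry is positive
    k = np.arange(1, k_max + 1)
    # Step 5: theta(x, k) = (sum of k largest entries) / sqrt(k).
    theta = np.cumsum(x_sorted[:k_max]) / np.sqrt(k)
    m = int(k[np.argmax(theta)])
    # Steps 4 and 7: set the m largest positions to one.
    b = np.zeros_like(x, dtype=float)
    b[order[:m]] = 1.0
    return b, m
```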
Now we study the properties of the proposed quantization model. The following lemma helps to
characterize the angular resolution of the quantization landmarks.
Lemma 2 Suppose b is an arbitrary binary vector with Hamming weight ‖b‖₁ = m, where ‖·‖₁ is the L1 norm. Then for all binary vectors b′ that lie at a Hamming radius r from b, the cosine of the angle between b and b′ is bounded by [√((m − r)/m), √(m/(m + r))].
Proof: Since ‖b‖₁ = m, there are exactly m ones in b and the rest are zeros, and b′ has exactly r bits different from b. To find the lower bound on the cosine of the angle between b and b′, we want to find a b′ such that b^⊤b′ / (‖b‖₂ ‖b′‖₂) is minimized. It is easy to see that this will happen when b′ has exactly m − r ones in common positions with b and the remaining entries are zero, i.e., ‖b′‖₁ = m − r and b^⊤b′ = m − r. This gives the lower bound of √((m − r)/m). Similarly, the upper bound is obtained when b′ has all ones at the same locations as b, and additional r ones, i.e., ‖b′‖₁ = m + r and b^⊤b′ = m. This yields the upper bound of √(m/(m + r)).
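As a quick numerical sanity check (our own illustration, with arbitrarily chosen m, r, and d), the two extremal configurations from the proof attain the stated bounds exactly:

```python
import numpy as np

m, r, d = 20, 5, 64
b = np.zeros(d); b[:m] = 1
b_lo = b.copy(); b_lo[m - r:m] = 0   # drop r of the common ones
b_hi = b.copy(); b_hi[m:m + r] = 1   # add r extra ones

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

assert np.isclose(cosine(b, b_lo), np.sqrt((m - r) / m))   # lower bound
assert np.isclose(cosine(b, b_hi), np.sqrt(m / (m + r)))   # upper bound
```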
We can understand this result as follows. The Hamming weight m of each binary vertex corresponds
to its position in space. When m is low, the point is closer to the boundary of the positive orthant
and when m is high, it is closer to the center. The above lemma implies that for landmark points on
the boundary, the Voronoi cells are relatively coarse, and cells become progressively denser as one
moves towards the center. Thus the proposed set of landmarks non-uniformly tessellates the surface
of the positive orthant of the hypersphere. We show the lower and upper bounds on angle for various
m and r in Fig. 1 (b). It is clear that for relatively large m, the angle between different landmarks
is very small, thus providing dense quantization even for large r. To get good performance, the
distribution of the data should be such that a majority of the points fall closer to landmarks with
higher m.
Algorithm 1 constitutes the core of our proposed angular quantization method, but it has several limitations: (i) it is data-independent, and thus cannot adapt to the data distribution to control the quantization error; (ii) it cannot control m, which, based on our analysis, is critical for low quantization error; and (iii) it can only produce a d-bit code for d-dimensional data, and thus cannot generate shorter codes. In the following section, we present a learning algorithm to address the above issues.
2.2 Learning Data-dependent Binary Codes
We start by addressing the first issue of how to adapt the method to the given data to minimize the quantization error. Similarly to the Iterative Quantization (ITQ) method of Gong and Lazebnik [7], we would like to align the data to a pre-defined set of quantization landmarks using a rotation, because rotating the data does not change the angles, and therefore the similarities, between the data points. Later in this section, we will present an objective function and an optimization algorithm to accomplish this goal, but first, by way of motivation, we would like to illustrate how applying even a random rotation to a typical frequency/count vector can affect the Hamming weight m of its angular binary code.
Zipf?s law or power law is commonly used for modeling frequency/count data in many real-world
applications [17, 28]. Suppose, for a data vector x, the sorted entries x(1) , . . . , x(d) follow Zipf?s
law, i.e., x(k) ? 1/k s , where k is the index of the entries sorted in descending order, and s is the
power parameter that controls how quickly the entries decay. The effective m for x depends directly
on the power s: the larger s is, the faster the entries of x decay, and the smaller m becomes. More
germanely, for a fixed s, applying a random rotation R to x makes the distribution of the entries
of the resulting vector RT x more uniform and raises the effective m. In Fig. 2 (a), we plot the
sorted entries of x generated from Zipf?s law with s = 0.8. Based on Algorithm 1, we compute
Pk x
the scaled cumulative sums ?(x, k) = j=1 ?(j)
, which are shown in Fig. 2 (b). Here the optimal
k
m = arg maxk ?(x, k) is relatively low (m = 2). In Fig. 2 (c), we randomly rotate the data and
show the sorted values of RT x, which become more uniform. Finally, in Fig. 2 (d), we show
?(RT x, k). The Hamming weight m after this random rotation becomes much higher (m = 25).
This effect is typical: the average of m over 1000 random rotations for this example is 27.36. Thus,
even randomly rotating the data tends to lead to finer Voronoi cells and reduced quantization error.
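This effect is easy to reproduce. The following sketch (our own illustration, reusing the hypothetical nearest_binary_landmark() from above; exact values of m will vary with the random seed) compares the optimal Hamming weight before and after a random rotation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, s = 100, 0.8
x = 1.0 / np.arange(1, d + 1) ** s        # entries following Zipf's law
x /= np.linalg.norm(x)                    # place x on the unit hypersphere
R, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random rotation
_, m_before = nearest_binary_landmark(x)
_, m_after = nearest_binary_landmark(R.T @ x)
print(m_before, m_after)  # m is typically much larger after the rotation
```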
Next, it is natural to ask whether we can optimize the rotation of the data to increase the cosine
similarities between data points and their corresponding binary landmarks.
We seek a d × d orthogonal transformation R such that the sum of cosine similarities of each transformed data point R^⊤x_i and its corresponding binary landmark b_i is maximized.⁴ Let B ∈ {0, 1}^{d×n} denote a matrix whose columns are given by the b_i. Then the objective function for our optimization problem is given by
$$Q(B, R) = \arg\max_{B,R} \sum_{i=1}^{n} \frac{b_i^\top}{\|b_i\|_2} R^\top x_i \quad \text{s.t.} \quad b_i \in \{0,1\}^d, \; R^\top R = I_d, \tag{2}$$
where I_d denotes the d × d identity matrix.
⁴ Note that after rotation, R^⊤x_i may contain negative values but this does not affect the quantization since the binarization technique described in Algorithm 1 effectively suppresses the negative values to 0.
[Figure 2 panels: (a) data value x_(k) vs. sorted index k; (b) θ(x, k), with optimal m = 2; (c) entries after rotation, (R^⊤x)_(k); (d) θ(R^⊤x, k), with optimal m = 25.]
Figure 2: Effect of rotation on Hamming weight m of the landmark corresponding to a particular vector. (a) Sorted vector elements x_(k) following Zipf's law with s = 0.8; (b) scaled cumulative sum θ(x, k); (c) sorted vector elements after random rotation; (d) scaled cumulative sum θ(R^⊤x, k) for the rotated data. See text for discussion.
The above objective function still yields a d-bit binary code for d-dimensional data, while in many real-world applications, a low-dimensional binary code may be preferable. To generate a c-bit code where c < d, we can learn a d × c projection matrix R with orthogonal columns by optimizing the following modified objective function:
$$Q(B, R) = \arg\max_{B,R} \sum_{i=1}^{n} \frac{b_i^\top R^\top x_i}{\|b_i\|_2 \, \|R^\top x_i\|_2} \quad \text{s.t.} \quad b_i \in \{0,1\}^c, \; R^\top R = I_c. \tag{3}$$
Note that to minimize the angle after a low-dimensional projection (as opposed to a rotation), the denominator of the objective function contains ‖R^⊤x_i‖₂ since after projection ‖R^⊤x_i‖₂ ≠ 1. However, adding this new term to the denominator makes the optimization problem hard to solve. We propose to relax it by optimizing the linear correlation instead of the angle:
$$Q(B, R) = \arg\max_{B,R} \sum_{i=1}^{n} \frac{b_i^\top}{\|b_i\|_2} R^\top x_i \quad \text{s.t.} \quad b_i \in \{0,1\}^c, \; R^\top R = I_c. \tag{4}$$
This is similar to eq. (2) but the geometric interpretation is slightly different: we are now looking
for a projection matrix R to map the d-dimensional data to a lower-dimensional space such that
after the mapping, the data has high linear correlation with a set of landmark points lying on the
lower-dimensional hypersphere. Section 3 will demonstrate that this relaxation works quite well in
practice.
2.3 Optimization
The objective function in (4) can be written more compactly in matrix form:
$$Q(\tilde{B}, R) = \arg\max_{\tilde{B},R} \operatorname{Tr}(\tilde{B}^\top R^\top X) \quad \text{s.t.} \quad R^\top R = I_c, \tag{5}$$
where Tr(·) is the trace operator, B̃ is the c × n matrix with columns given by b_i/‖b_i‖₂, and X is the d × n matrix with columns given by x_i. This objective is nonconvex in B̃ and R jointly. To obtain a local maximum, we use a simple alternating optimization procedure as follows.
(1) Fix R, update B̃. For a fixed R, eq. (5) becomes separable in x_i, and we can solve for each b_i separately. Here, the individual sub-problem for each x_i can be written as
$$\hat{b}_i = \arg\max_{b_i} \frac{b_i^\top}{\|b_i\|_2} (R^\top x_i). \tag{6}$$
Thus, given a point y_i = R^⊤x_i in c-dimensional space, we want to find the vertex b_i on the c-dimensional hypercube having the smallest angle with y_i. To do this, we use Algorithm 1 to find b_i for each y_i, and then normalize each b_i back to the unit hypersphere: b̃_i = b_i/‖b_i‖₂. This yields each column of B̃. Note that the B̃ found in this way is the global optimum for this subproblem.
(2) Fix B̃, update R. When B̃ is fixed, we want to find
$$\hat{R} = \arg\max_{R} \operatorname{Tr}(\tilde{B}^\top R^\top X) = \arg\max_{R} \operatorname{Tr}(R^\top X \tilde{B}^\top) \quad \text{s.t.} \quad R^\top R = I_c. \tag{7}$$
This is a well-known problem and its global optimum can be obtained by polar decomposition [5]. Namely, we take the SVD of the d × c matrix X B̃^⊤ as X B̃^⊤ = U S V^⊤, let U_c be the first c singular vectors of U, and finally obtain R = U_c V^⊤.
The above formulation involves solving two sub-problems in an alternating fashion. The first sub-problem is an integer program, and the second one has non-convex orthogonality constraints. However, in each iteration the global optimum can be obtained for each sub-problem as discussed above. So, each step of the alternating method is guaranteed to increase the objective function. Since the objective function is bounded from above, it is guaranteed to converge. In practice, one needs only a few iterations (less than five) for the method to converge. The optimization procedure is initialized by first generating a random binary matrix by making each element 0 or 1 with probability 1/2, and then normalizing each column to unit norm. Note that the optimization is also computationally efficient. The first sub-problem takes O(nc log c) time while the second one takes O(dc²). This is linear in the data dimension d, which enables us to handle very high-dimensional feature vectors.
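Putting the two steps together, a compact sketch of the alternating optimization might look as follows (our own illustration, again assuming the hypothetical nearest_binary_landmark() from Sec. 2.1 and ignoring the degenerate all-zero landmark case):

```python
import numpy as np

def learn_aqbc(X, c, n_iter=5, seed=0):
    """X: d x n data matrix with unit-norm columns; returns (B_tilde, R)."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    # Random binary init with P(1) = 1/2, columns normalized to unit norm.
    B = (rng.random((c, n)) < 0.5).astype(float)
    B /= np.maximum(np.linalg.norm(B, axis=0, keepdims=True), 1e-12)
    R = np.linalg.qr(rng.standard_normal((d, c)))[0]  # orthonormal columns
    for _ in range(n_iter):
        Y = R.T @ X                        # c-dimensional projections
        for i in range(n):                 # step (1): update columns of B
            b, _ = nearest_binary_landmark(Y[:, i])
            B[:, i] = b / np.linalg.norm(b)
        U, _, Vt = np.linalg.svd(X @ B.T, full_matrices=False)
        R = U @ Vt                         # step (2): polar decomposition
    return B, R
```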
2.4 Computation of Cosine Similarity between Binary Codes
Most existing similarity-preserving binary coding methods measure the similarity between pairs of binary vectors using the Hamming distance, which is extremely efficient to compute by bitwise XOR followed by bit count (popcount). By contrast, the appropriate similarity measure for our approach is the cosine of the angle θ between two binary vectors b and b′: cos(θ) = b^⊤b′ / (‖b‖₂ ‖b′‖₂). In this formulation, b^⊤b′ can be obtained by bitwise AND followed by popcount, and ‖b‖₂ and ‖b′‖₂ can be obtained by popcount and a lookup table to find the square root. Of course, if b is the query vector that needs to be compared to every database vector b′, then one can ignore ‖b‖₂. Therefore, even though the cosine similarity is marginally slower than Hamming distance, it is still very fast compared to computing similarity of the original data vectors.
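A minimal sketch of this computation, with codes packed into Python integers and bin(...).count("1") standing in for a hardware popcount, is given below (our own illustration):

```python
import math

def cosine_binary(b1, b2):
    """b1, b2: non-negative ints whose bits store the binary codes."""
    inter = bin(b1 & b2).count("1")   # b^T b' via AND + popcount
    n1 = bin(b1).count("1")           # ||b||_2^2 equals popcount(b)
    n2 = bin(b2).count("1")
    if n1 == 0 or n2 == 0:
        return 0.0
    return inter / math.sqrt(n1 * n2)  # in practice, sqrt via lookup table
```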
3 Experiments
To test the effectiveness of the proposed Angular Quantization-based Binary Codes (AQBC) method, we have conducted experiments on two image datasets and one text dataset. The first image dataset is SUN, which contains 142,169 natural scene images [27]. Each image is represented by a 1000-dimensional bag of visual words (BoW) feature vector computed on top of dense SIFT descriptors. The BoW vectors are power-normalized by taking the square root of each entry, which has been shown to improve performance for recognition tasks [19]. The second dataset contains 122,530 images from ImageNet [6], each represented by a 5000-dimensional vector of locality-constrained linear coding (LLC) features [25], which are improved versions of BoW features. Dense SIFT is also used as the local descriptor in this case. The third dataset is 20 Newsgroups,⁵ which contains 18,846 text documents and 26,214 words. Tf-idf weighting is used for each text document BoW vector. The feature vectors for all three datasets are sparse, non-negative, and normalized to unit L2 norm. Due to this, Euclidean distance directly corresponds to the cosine similarity as dist² = 2 − 2·sim. Therefore, in the following, we will talk about similarity and distance interchangeably.
To perform evaluation on each dataset, we randomly sample and fix 2000 points as queries, and use
the remaining points as the ?database? against which the similarity searches are run. For each query,
we define the ground truth neighbors as all the points within the radius determined by the average distance to the 50th nearest neighbor in the dataset, and plot precision-recall curves of database points
ordered by decreasing similarity of their binary codes with the query. This methodology is similar
to that of other recent works [7, 20, 26]. Since our AQBC method is unsupervised, we compare with
several state-of-the-art unsupervised binary coding methods: Locality Sensitive Hashing (LSH) [4],
Spectral Hashing [26], Iterative Quantization (ITQ) [7], Shift-invariant Kernel LSH (SKLSH) [20],
and Spherical Hashing (SPH) [9]. Although these methods are designed to work with the Euclidean
distance, they can be directly applied here since all the vectors have unit norm. We use the authors'
publicly available implementations and suggested parameters for all the experiments.
Results on SUN and ImageNet. The precision-recall curves for the SUN dataset are shown in
Fig. 3. For all the code lengths (from 64 to 1000 bits), our method (AQBC) performs better than other
state-of-the-art methods. For a relatively large number of bits, SKLSH works much better than other
⁵ http://people.csail.mit.edu/jrennie/20Newsgroups
[Figure 3 panels: precision-recall curves for ITQ, LSH, SKLSH, SH, SPH, and AQBC on SUN at (a) 64 bits, (b) 256 bits, and (c) 1000 bits; panel (c) also includes AQBC naive.]
Figure 3: Precision-recall curves for different methods on the SUN dataset.
[Figure 4 panels: precision-recall curves for ITQ, LSH, SKLSH, SH, SPH, and AQBC on ImageNet120K at (a) 64 bits, (b) 256 bits, and (c) 1024 bits.]
Figure 4: Precision-recall curves for different methods on the ImageNet120K dataset.
methods, while still being worse than ours. It is interesting to verify how much we gain by using the
learned data-dependent quantization instead of the data-independent naive version (Sec. 2.1). Since
the naive version can only learn a d-bit code (1000 bits in this case), its performance (AQBC naive)
is shown only in Fig. 3 (c). The performance is much worse than that of the learned codes, which
clearly shows that adapting quantization to the data distribution is important in practice. Fig. 4 shows
results on ImageNet. On this dataset, the strongest competing method is ITQ. For a relatively low
number of bits (e.g., 64), AQBC and ITQ are comparable, but AQBC has a clearer advantage as the number of bits increases. This is because for fewer bits, the Hamming weight (m) of the binary codes tends to be small, resulting in larger distortion error as discussed in Sec. 2.1. We also
found the SPH [9] method works well for relatively dense data, while it does not work very well for
high-dimensional sparse data.
Results on 20 Newsgroups. The results on the text features (Fig. 5) are consistent with those on the
image features. Because the text features are the sparsest and have the highest dimensionality, we
would like to verify whether learning the projection R helps in choosing landmarks with larger m as
conjectured in Sec. 2.2. The average empirical distribution over sorted vector elements for this data
is shown in Fig. 6 (a) and the scaled cumulative sum in Fig. 6 (b). It is clear that vector elements
have a rapidly decaying distribution, and the quantization leads to codes with low m, implying higher quantization error. Fig. 6 (c) shows the distribution of entries of the vector R^⊤x, which decays more
slowly than the original distribution in Fig. 6 (a). Fig. 6 (d) shows the scaled cumulative sum for the
projected vectors, indicating a much higher m.
Timing. Table 1 compares the binary code generation time and retrieval speed for different methods.
All results are obtained on a workstation with 64GB RAM and 4-core 3.4GHz CPU. Our method
involves linear projection and quantization using Algorithm 1, while ITQ and LSH only involve
linear projections and thresholding. SPH involves Euclidean distance computation and thresholding.
SH and SKLSH involve linear projection, nonlinear mapping, and thresholding. The results show
that the quantization step (Algorithm 1) of our method is fast, adding very little to the coding time.
The coding speed of our method is comparable to that of LSH, ITQ, SPH, and SKLSH. As shown
[Figure 5 panels: precision-recall curves for ITQ, LSH, SKLSH, SH, SPH, and AQBC on 20 Newsgroups at (a) 64 bits, (b) 256 bits, and (c) 1024 bits.]
Figure 5: Precision-recall curves for different methods on the 20 Newsgroups dataset.
[Figure 6 panels: (a) data value x_(k) vs. sorted index k; (b) θ(x, k), with optimal m = 37; (c) data after the learned projection, (R^⊤x)_(k); (d) θ(R^⊤x, k), with optimal m = 304.]
Figure 6: Effect of projection on Hamming weight m for 20 Newsgroups data. (a) Distribution of sorted vector
entries, (b) scaled cumulative function, (c) distribution over vector elements after learned projection, (d) scaled
cumulative function for the projected data. For (a, b) we show only top 1000 entries for better visualization.
For (c, d), we project the data to 1000 dimensions.
(a) Code generation time (ms per query)

code size | SH    | LSH  | ITQ  | SKLSH | SPH  | AQBC
64 bits   | 2.20  | 0.14 | 0.14 | 0.33  | 0.21 | 0.14 + 0.09 = 0.23
512 bits  | 40.38 | 3.66 | 3.66 | 5.81  | 3.94 | 3.66 + 0.55 = 4.21

(b) Retrieval time (ms per query)

code size | Hamming | Cosine
64 bits   | 2.4     | 3.4
512 bits  | 15.8    | 20.4

Table 1: Timing results. (a) Average binary code generation time per query (milliseconds) on 5000-dimensional LLC features. For the proposed AQBC method, the first number is projection time and the second is quantization time. (b) Average time per query, i.e., exhaustive similarity computation against the 120K ImageNet images. Computation of Euclidean distance on this dataset takes 11580 ms.
in Table 1(b), computation of cosine similarity is slightly slower than that of Hamming distance, but
both are orders of magnitude faster than Euclidean distance.
4 Discussion
In this work, we have introduced a novel method for generating binary codes for non-negative frequency/count data. Retrieval results on high-dimensional image and text datasets have demonstrated that the proposed codes accurately approximate neighbors in the original feature space according to cosine similarity. Note, however, that our experiments have not focused on evaluating the semantic accuracy of the retrieved neighbors (i.e., whether these neighbors tend to belong to the same high-level category as the query). To improve the semantic precision of retrieval, our earlier ITQ method [7] could take advantage of a supervised linear projection learned from labeled data with the help of canonical correlation analysis. For the current AQBC method, it is still not clear how to incorporate supervised label information into learning of the linear projection. We have performed some preliminary evaluations of semantic precision using unsupervised AQBC, and we have found it to work very well for retrieving semantic neighbors for extremely high-dimensional sparse data (like the 20 Newsgroups dataset), while ITQ currently works better for lower-dimensional, denser data. In the future, we plan to investigate how to improve the semantic precision of AQBC using either unsupervised or supervised learning. Additional resources and code are available at http://www.unc.edu/~yunchao/aqbc.htm
Acknowledgments. We thank Henry A. Rowley and Ruiqi Guo for helpful discussions, and the reviewers for
helpful suggestions. Gong and Lazebnik were supported in part by NSF grants IIS 0916829 and IIS 1228082,
and the DARPA Computer Science Study Group (D12AP00305).
References
[1] A. Banerjee, I. S. Dhillon, J. Ghosh, and S. Sra. Clustering on the unit hypersphere using von
Mises-Fisher distributions. JMLR, 2005.
[2] A. Bergamo, L. Torresani, and A. Fitzgibbon. Picodes: Learning a compact code for novel-category recognition. NIPS, 2011.
[3] A. Broder. On the resemblance and containment of documents. Compression and Complexity
of Sequences, 1997.
[4] M. S. Charikar. Similarity estimation techniques from rounding algorithms. STOC, 2002.
[5] X. Chen, B. Bai, Y. Qi, Q. Lin, and J. Carbonell. Sparse latent semantic analysis. SDM, 2011.
[6] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical
image database. CVPR, 2009.
[7] Y. Gong and S. Lazebnik. Iterative quantization: A Procrustean approach to learning binary
codes. CVPR, 2011.
[8] J. He, R. Radhakrishnan, S.-F. Chang, and C. Bauer. Compact hashing with joint optimization
of search accuracy and time. CVPR, 2011.
[9] J.-P. Heo, Y. Lee, J. He, S.-F. Chang, and S.-E. Yoon. Spherical hashing. CVPR, 2012.
[10] P. Jain, B. Kulis, and K. Grauman. Fast image search for learned metrics. CVPR, 2008.
[11] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. NIPS, 2009.
[12] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing for scalable image search. In
ICCV, 2009.
[13] P. Li and C. König. Theory and applications of b-bit minwise hashing. Communications of the
ACM, 2011.
[14] P. Li, A. Shrivastava, J. Moore, and C. König. Hashing algorithms for large-scale learning.
NIPS, 2011.
[15] W. Liu, S. Kumar, and S.-F. Chang. Hashing with graphs. ICML, 2011.
[16] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. CVPR,
2012.
[17] C. D. Manning and H. Schütze. Foundations of statistical natural language processing. MIT
Press, 1999.
[18] M. Norouzi and D. J. Fleet. Minimal loss hashing for compact binary codes. ICML, 2011.
[19] F. Perronnin, J. Sanchez, and Y. Liu. Large-scale image categorization with explicit data
embedding. CVPR, 2010.
[20] M. Raginsky and S. Lazebnik. Locality sensitive binary codes from shift-invariant kernels.
NIPS, 2009.
[21] R. Salakhutdinov and G. Hinton. Semantic hashing. International Journal of Approximate
Reasoning, 2009.
[22] A. Shrivastava and P. Li. Fast near neighbor search in high-dimensional binary data. ECML,
2012.
[23] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition.
CVPR, 2008.
[24] J. Wang, S. Kumar, and S.-F. Chang. Semi-supervised hashing for scalable image retrieval.
CVPR, 2010.
[25] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong. Locality-constrained linear coding for
image classification. CVPR, 2010.
[26] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. NIPS, 2008.
[27] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene
recognition from Abbey to Zoo. CVPR, 2010.
[28] G. K. Zipf. The psychobiology of language. Houghton-Mifflin, 1935.
4,234 | 4,832 | Training sparse natural image models with a fast
Gibbs sampler of an extended state space
Jascha Sohl-Dickstein
Redwood Center
for Theoretical Neuroscience
[email protected]
Lucas Theis
Werner Reichardt Centre
for Integrative Neuroscience
[email protected]
Matthias Bethge
Werner Reichardt Centre
for Integrative Neuroscience
[email protected]
Abstract
We present a new learning strategy based on an efficient blocked Gibbs sampler
for sparse overcomplete linear models. Particular emphasis is placed on statistical
image modeling, where overcomplete models have played an important role in discovering sparse representations. Our Gibbs sampler is faster than general purpose
sampling schemes while also requiring no tuning as it is free of parameters. Using
the Gibbs sampler and a persistent variant of expectation maximization, we are
able to extract highly sparse distributions over latent sources from data. When applied to natural images, our algorithm learns source distributions which resemble
spike-and-slab distributions. We evaluate the likelihood and quantitatively compare the performance of the overcomplete linear model to its complete counterpart
as well as a product of experts model, which represents another overcomplete generalization of the complete linear model. In contrast to previous claims, we find
that overcomplete representations lead to significant improvements, but that the
overcomplete linear model still underperforms other models.
1 Introduction
Here we study learning and inference in the overcomplete linear model given by
$$x = As, \qquad p(s) = \prod_i f_i(s_i), \tag{1}$$
where A ∈ R^{M×N}, N ≥ M, and each marginal source distribution f_i may depend on additional parameters. Our goal is to find parameters which maximize the model's log-likelihood, log p(x), for a given set of observations x.
Most of the literature on overcomplete linear models assumes observations corrupted by additive Gaussian noise, that is, x = As + ε for a Gaussian distributed random variable ε. Note that this is a special case of the model discussed here, as we can always represent this noise by making some of the sources Gaussian.
When the observations are image patches, the source distributions f_i(s_i) are typically assumed to be sparse or leptokurtotic [e.g., 2, 20, 28]. Examples include the Laplace distribution, the Cauchy distribution, and Student's t-distribution. A large family of leptokurtotic distributions which also contains
[Figure 1 panels: A, posteriors p(s|x) and p(z|x) over sources s = (s_1, s_2) for two observations; B, graphical model over λ, s, x, and z with matrices A and B.]
Figure 1: A: In the noiseless overcomplete linear model, the posterior distribution over hidden sources s lives on a linear subspace. The two parallel lines indicate two different subspaces for different values of x. For sparse source distributions, the posterior will generally be heavy-tailed and multimodal, as can be seen on the right. B: A graphical model representation of the overcomplete linear model extended by two sets of auxiliary variables (Equations 2 and 3). We perform blocked Gibbs sampling between λ and z to sample from the posterior distribution over all latent variables given an observation x. For a given λ, the posterior over z becomes Gaussian while for given z, the posterior over λ becomes factorial and is thus easy to sample from.
the aforementioned distributions as a special case is formed by Gaussian scale mixtures (GSMs),
$$f_i(s_i) = \int_0^\infty g_i(\lambda_i)\, \mathcal{N}(s_i; 0, \lambda_i^{-1})\, d\lambda_i, \tag{2}$$
where g_i(λ_i) is a univariate density over precisions λ_i. In the following, we will concentrate on linear models whose marginal source distributions can be represented as GSMs. For a detailed description of the representational power of GSMs, see Andrews and Mallows' paper [1].
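For intuition, a discrete choice of g_i turns eq. (2) into a finite mixture of zero-mean Gaussians. A minimal sketch (our own illustration; lam and w are a hypothetical grid of precisions and their prior weights):

```python
import numpy as np

def gsm_density(s, lam, w):
    """Finite Gaussian scale mixture: sum_k w[k] * N(s; 0, 1/lam[k])."""
    s = np.atleast_1d(s).astype(float)[:, None]
    comp = np.sqrt(lam / (2.0 * np.pi)) * np.exp(-0.5 * lam * s ** 2)
    return comp @ w
```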
Despite the apparent simplicity of the linear model, inference over the latent variables is computationally hard except for a few special cases such as when all sources are Gaussian distributed. In
particular, the posterior distribution over sources p(s | x) is constrained to a linear subspace and can
have multiple modes with heavy tails (Figure 1A).
Inference can be simplified by assuming additive Gaussian noise, constraining the source distributions to be log-concave or making crude approximations to the posterior. Here, however, we would
like to exhaust the full potential of the linear model. On this account, we use Markov chain Monte
Carlo (MCMC) methods to obtain samples with which we represent the posterior distribution. While
computationally more demanding than many other methods, this allows us, at least in principle, to
approximate the posterior to arbitrary precision.
Other approximations often introduce strong biases and preclude learning of meaningful source distributions. Using MCMC, on the other hand, we can study the model's optimal sparseness and overcompleteness level in a more objective fashion as well as evaluate the model's log-likelihood.
However, multiple modes and heavy tails also pose challenges to MCMC methods. General purpose
methods are therefore likely to be slow. In the following, we will describe an efficient blocked Gibbs
sampler which exploits the specific structure of the sparse linear model.
2 Sampling and inference
In this section, we first review the nullspace sampling algorithm of Chen and Wu [4], which solves
the problem of sampling from a linear subspace in the noiseless case of the overcomplete linear
model. We then introduce an additional set of auxiliary variables which leads to an efficient blocked
Gibbs sampler.
2.1 Nullspace sampling
The basic idea behind the nullspace sampling algorithm is to extend the overcomplete linear model by an additional set of variables z which essentially makes it complete (Figure 1B),
$$\begin{bmatrix} x \\ z \end{bmatrix} = \begin{bmatrix} A \\ B \end{bmatrix} s, \tag{3}$$
where B ∈ R^{(N−M)×N} and square brackets denote concatenation. If in addition to our observation x we knew the unobserved variables z, we could perform inference as in the complete case by simply solving the above linear system, provided the concatenation of A and B is invertible. If the rows of A and B are orthogonal, AB^⊤ = 0, or, in other words, B spans the nullspace of A, we have
$$s = A^+ x + B^+ z, \tag{4}$$
where A^+ and B^+ are the pseudoinverses [24] of A and B, respectively. The marginal distributions over x and s do not depend on our choice of B, which means we can choose B freely. An orthogonal basis spanning the nullspace of A can be obtained from A's singular value decomposition [4].
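In NumPy, this construction might be sketched as follows (our own illustration, assuming A has full row rank):

```python
import numpy as np

def nullspace_basis(A):
    """Rows of B are orthonormal and span the nullspace of A (A B^T = 0)."""
    M, N = A.shape                 # overcomplete model: M < N
    _, _, Vt = np.linalg.svd(A)    # Vt has shape N x N
    return Vt[M:]                  # last N - M right singular vectors

# Sources are then recovered from (x, z) as in Equation 4:
#   s = np.linalg.pinv(A) @ x + B.T @ z
# (B+ = B.T because the rows of B are orthonormal.)
```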
Making use of Equation 4, we can equally well try to obtain samples from the posterior p(z | x) instead of p(s | x). In contrast to the latter, this distribution has full support and is not restricted to just a linear subspace,
$$p(z \mid x) \propto p(z, x) \propto p(s) = \prod_i f_i(w_i^\top x + v_i^\top z), \tag{5}$$
where w_i^⊤ and v_i^⊤ are the i-th rows of A^+ and B^+, respectively. Chen and Wu [4] used Metropolis-adjusted Langevin (MALA) sampling [25] to sample from p(z | x).
2.2 Blocked Gibbs sampling
The fact that the marginals f_i(s_i) are expressed as Gaussian mixtures (Equation 2) can be used to derive an efficient blocked Gibbs sampler. The Gibbs sampler alternately samples nullspace representations z and precisions λ of the source marginals. The key observation here is that given the precisions λ, the distribution over x and z becomes Gaussian, which makes sampling from the posterior distribution tractable.
A similar idea was pursued by Olshausen and Millman [21], who modeled the source distributions
with mixtures of Gaussians and conditionally Gibbs sampled precisions one by one. However, a
change in one of the precision variables entails larger computational costs, so that this algorithm is
most efficient if only few Gaussians are used and the probability of changing precisions is small. In
contrast, here we update all precision variables in parallel by conditioning on the nullspace representation z. This makes it feasible to use a large or even infinite number of precisions.
Conditioned on a data point x and a corresponding nullspace representation z, the distribution over precisions λ becomes factorial,
$$p(\lambda \mid x, z) = p(\lambda \mid s) \propto p(s \mid \lambda)\, p(\lambda) = \prod_i \mathcal{N}(s_i; 0, \lambda_i^{-1})\, g_i(\lambda_i), \tag{6}$$
where we have used the fact that we can perfectly recover the sources given x and z (Equation 4). Using a finite number of precisions λ_ik with prior probabilities π_ik, for example, the posterior probability of λ_i being λ_ij becomes
$$p(\lambda_i = \lambda_{ij} \mid x, z) = \frac{\mathcal{N}(s_i; 0, \lambda_{ij}^{-1})\, \pi_{ij}}{\sum_k \mathcal{N}(s_i; 0, \lambda_{ik}^{-1})\, \pi_{ik}}. \tag{7}$$
Conditioned on λ, s is Gaussian distributed with diagonal covariance Λ⁻¹ = diag(λ⁻¹). As a linear transformation of s, the distribution over x and z is also Gaussian with covariance
$$\Sigma = \begin{bmatrix} \Sigma_{xx} & \Sigma_{xz} \\ \Sigma_{xz}^\top & \Sigma_{zz} \end{bmatrix} = \begin{bmatrix} A \Lambda^{-1} A^\top & A \Lambda^{-1} B^\top \\ B \Lambda^{-1} A^\top & B \Lambda^{-1} B^\top \end{bmatrix}. \tag{8}$$
Using standard Gaussian identities, we obtain
$$p(z \mid x, \lambda) = \mathcal{N}(z; \mu_{z|x}, \Sigma_{z|x}), \tag{9}$$
where μ_{z|x} = Σ_xz^⊤ Σ_xx⁻¹ x and Σ_{z|x} = Σ_zz − Σ_xz^⊤ Σ_xx⁻¹ Σ_xz. We use the following computationally efficient method to conditionally sample Gaussian distributions [8, 14]:
$$\begin{bmatrix} x' \\ z' \end{bmatrix} \sim \mathcal{N}(0, \Sigma), \qquad z = z' + \Sigma_{xz}^\top \Sigma_{xx}^{-1} (x - x'). \tag{10}$$
It can easily be shown that z has the desired distribution of Equation 9. Together, Equations 7 and 9 implement a rapidly mixing blocked Gibbs sampler. However, the computational cost of solving Equation 10 is larger than for a single Markov step in other sampling methods such as MALA. We empirically show in the results section that for natural image patches the benefits of blocked Gibbs sampling outweigh its computational costs.
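A sketch of one full blocked Gibbs sweep is given below (our own illustration; it assumes finite GSM marginals with a hypothetical precision grid lam_grid and weights pi_grid, and precomputed pseudoinverses Ap = A+ and Bp = B+):

```python
import numpy as np

def gibbs_step(x, z, A, B, Ap, Bp, lam_grid, pi_grid, rng):
    """One blocked Gibbs sweep over (lambda, z); see eqs. (4), (7), (8), (10)."""
    N = A.shape[1]
    s = Ap @ x + Bp @ z                              # recover sources, eq. (4)
    # (1) Sample precisions given s, eq. (7), for all sources in parallel.
    log_p = (0.5 * np.log(lam_grid)
             - 0.5 * lam_grid * s[:, None] ** 2
             + np.log(pi_grid))
    p = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    idx = (p.cumsum(axis=1) > rng.random((N, 1))).argmax(axis=1)
    lam = lam_grid[np.arange(N), idx]
    # (2) Sample z given lambda: draw s' ~ N(0, diag(1/lam)), map it through
    # [A; B] to get (x', z'), then condition on the observed x, eq. (10).
    s_prime = rng.standard_normal(N) / np.sqrt(lam)
    x_prime, z_prime = A @ s_prime, B @ s_prime
    Sxx = (A / lam) @ A.T                            # A diag(1/lam) A^T
    Sxz = (A / lam) @ B.T                            # A diag(1/lam) B^T
    return z_prime + Sxz.T @ np.linalg.solve(Sxx, x - x_prime)
```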
A closely related sampling algorithm was proposed by Park and Casella [23] for implementing
Bayesian inference in the linear regression model with Laplace prior. The main differences here are
that we also consider the noiseless case by exploiting the nullspace representation, that instead of
using a fixed Laplace prior we will use the sampler to learn the distribution over source variables,
and that we apply the algorithm in the context of image modeling. Related ideas were also discussed
by Papandreou and Yuille [22], Schmidt et al. [27], and others.
3 Learning
In the following, we describe a learning strategy for the overcomplete linear model based on the idea
of persistent Markov chains [26, 32, 36], which already has led to improved learning strategies for
a number of different models [e.g., 6, 12, 29, 32].
Following Girolami [11] and others, we use expectation maximization (EM) [7] to maximize the
likelihood of the overcomplete linear model. Instead of a variational approximation, here we use the
blocked Gibbs sampler to sample a hidden state z for every data point x in the E-step. Each M-step
then reduces to maximum likelihood learning as in the complete case, for which many algorithms
are available. Due to the sampling step, this variant of EM is known as Monte Carlo EM [34].
Despite our efforts to make sampling efficient, running the Markov chain till convergence can still
be a costly operation due to the generally large number of data points and high dimensionality of
posterior samples. To further reduce computational costs, we developed a learning strategy which
makes use of persistent Markov chains and only requires a few sampling steps in every iteration.
Instead of starting the Markov chain anew in every iteration, we initialize the Markov chain with
the samples of the previous iteration. This approach is based on the following intuition. First, if the
model changes only slightly, the posterior will change only slightly. As a result, the samples from
the previous iteration will provide a good initialization and fewer updates of the Markov chain will
be sufficient to reach convergence. Second, if updating the Markov chain has only a small effect
on the posterior samples z, also the distribution of the complete data (x, z) will change very little.
Thus, the optimal parameters of the previous M-step will be close to optimal in the current M-step.
This causes an inefficient Markov chain to automatically slow down the learning process, so that the
posterior samples will always be close to the stationary distribution.
Even updating the Markov chain only once results in a valid EM strategy, which can be seen as follows. EM can be viewed as alternately optimizing a lower bound to the log-likelihood with respect to model parameters θ and an approximating posterior distribution q [18]:
$$F[q, \theta] = \log p(x; \theta) - D_{\mathrm{KL}}[q(z \mid x) \,\|\, p(z \mid x, \theta)]. \tag{11}$$
Each M-step increases F for fixed q while each E-step increases F for fixed θ. This is repeated until a local optimum is reached. Importantly, local maxima of F are also local maxima of the log-likelihood, log p(x; θ).
[Figure 2 panels: A, average posterior energy over time (seconds) for the toy and image models; B, autocorrelation functions over time for MALA, HMC, and Gibbs sampling.]
Figure 2: A: The average energy of posterior samples for different sampling methods after deterministic initialization. Depending on the initialization, the average energy can be initially too low or too high. Gray lines correspond to different hyperparameter choices for the HMC sampler, red and brown lines indicate the manually picked best performing HMC and MALA samplers. The dashed line represents an unbiased estimate of the true average posterior energy. B: Autocorrelation functions for Gibbs sampling and the best HMC and MALA samplers.

Interestingly, improving the lower bound F with respect to q can be accomplished by driving the Markov chain with our Gibbs sampler or some other transition operator [26]. This can be seen
by using the fact that application of a transition operator T to any distribution cannot increase its Kullback-Leibler (KL) divergence to a stationary distribution [5, 15]:
$$D_{\mathrm{KL}}[Tq(z \mid x) \,\|\, p(z \mid x, \theta)] \le D_{\mathrm{KL}}[q(z \mid x) \,\|\, p(z \mid x, \theta)], \tag{12}$$
where Tq(z | x) = ∫ q(z₀ | x) T(z | z₀, x) dz₀ and T(z | z₀, x) is the probability density of making a transition from z₀ to z. Hence, each Gibbs update of the hidden states implicitly increases F. In practice, of course, we only have access to samples from Tq and will never compute it explicitly.
This shows that the algorithm converges provided the log-likelihood is bounded. This stands in contrast to other contexts where persistent Markov chains have been successful but training can diverge [10]. To guarantee not only convergence but convergence to a local optimum of F, we would also have to prove D_KL[Tⁿq(z | x) ‖ p(z | x, θ)] → 0 for n → ∞. Unfortunately, most results on MCMC convergence deal with convergence in total variation, which is weaker than convergence in KL divergence.
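Schematically, the resulting training loop can be sketched as follows (pseudocode-like; init_nullspace_states, gibbs_update, recover_sources, and m_step are hypothetical helpers standing in for the components described above):

```python
def persistent_mcem(X, model, n_epochs=100, n_gibbs=2):
    # Persistent state: one nullspace representation per data point,
    # carried over between EM iterations instead of being resampled.
    Z = model.init_nullspace_states(X)
    for _ in range(n_epochs):
        for _ in range(n_gibbs):          # partial E-step: a few Gibbs updates
            Z = model.gibbs_update(X, Z)
        S = model.recover_sources(X, Z)   # complete the data via Equation 4
        model.m_step(S)                   # complete-case maximum likelihood
    return model
```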
4 Results
We trained several linear models on log-transformed, centered and symmetrically whitened image patches extracted from van Hateren's dataset of natural images [33]. We explicitly modeled the DC component of the whitened image patches using a mixture of Gaussians and constrained the remaining components of the linear basis to be orthogonal to the DC component.
For faster convergence, we initialized the linear basis with the sparse coding algorithm of Olshausen and Field [19], which corresponds to learning with MAP inference and fixed marginal source distributions. After initialization, we optimized the basis using L-BFGS [3] during each M-step and updated the representation of the posterior using 2 steps of Gibbs sampling in each E-step. To represent the source marginals, we used finite GSMs (Equation 8) with 10 precisions λ_ij each and equal prior weights, that is, π_ij = 0.1. The source marginals were initialized by fitting them to samples from the Laplace distribution and later optimized using 10 iterations of standard EM at the beginning of each M-step.
4.1 Performance of the blocked Gibbs sampler
We compared the sampling performance of our Gibbs sampler to MALA sampling, as used by Chen and Wu [4], as well as HMC sampling [9], which is a generalization of MALA. The HMC sampler has two parameters: a step width and a number of so-called leap frog steps. In addition, we slightly randomized the step width to avoid problems with periodicity [17], which added an additional parameter to control the degree of randomization. After manually determining a reasonable range for the parameters of HMC, we picked 40 parameter sets for each model to test against our Gibbs sampler.
[Figure 3 left panel: basis vector norms ‖a_i‖ vs. basis coefficient i for the Laplace 2x and GSM 2x, 3x, and 4x models.]
Figure 3: We trained models with up to four times overcomplete representations using either
Laplace marginals or GSM marginals. A four times overcomplete basis set is shown in the center. Basis vectors were normalized so that the corresponding source distributions had unit variance.
The left plot shows the norms of the learned basis vectors. With fixed Laplace marginals, the algorithm produces a basis which is barely overcomplete. However, with GSM marginals the model
learns bases which are at least three times overcomplete. The right panel shows log-densities of the
source distributions corresponding to basis vectors inside the dashed rectangle. For reference, each
plot also contains a Laplace distribution of equal variance.
The algorithms were tested on one toy model and one two-times overcomplete model trained on 8 × 8 image patches. The toy model employed 1 visible unit and 3 hidden units with exponential power distributions whose exponents were 0.5. The entries of its basis matrix were randomly drawn from a Gaussian distribution with mean 1 and standard deviation 0.2.
Figure 2 shows trace plots and autocorrelation functions for the different sampling methods. The trace plots were generated by measuring the negative log-density (or energy) of posterior samples for a fixed set of visible states over time, −log p(x, z_t), and averaging over data points. Autocorrelation functions were estimated from single Markov chain runs of equal duration for each sampler and data point. All Markov chains were initialized using 100 burn-in steps of Gibbs sampling, independent of the sampler used to generate the autocorrelation functions. Finally, we averaged several autocorrelation functions corresponding to different data points (see Supplementary Section 1 for more information).
For both models we observed faster convergence with Gibbs sampling than with the best MALA
or HMC samplers (Figure 2). The image model in particular benefited from replacing MALA by
HMC. Still, even the best HMC sampler produced more correlated samples than the blocked Gibbs
sampler. While the best HMC sampler reached an autocorrelation of 0.05 after about 64 seconds, it
took only about 26 seconds with the blocked Gibbs sampler (right-hand side of Figure 2B).
All tests were performed on a single core of an AMD Opteron 6174 machine with 2.20 GHz and
implementations written in Python and NumPy.
4.2 Sparsity and overcompleteness
Berkes et al. [2] found that even for very sparse choices of the Student-t prior, the representations
learned by the linear model are barely overcomplete if a variational approximation to the posterior is
used. Similar results and even undercomplete representations were obtained by Seeger [28] with the
Laplace prior. The results of these studies suggest that the optimal basis set is not very overcomplete.
On the other hand, basis sets obtained with other, often more crude approximations are often highly
overcomplete. In the following, we revisit the question of optimal overcompleteness and support our findings with quantitative measurements.
Consistent with the study of Seeger [28], if we fix the source distributions to be Laplacian, our
algorithm learns representations which are only slightly overcomplete (Figure 3). However, much
more overcomplete representations were obtained when the source distributions were learned from
the data. This is in line with the results of Olshausen and Millman [21], who used mixtures of two
[Figure 4 panels: average log-likelihood ± SEM (bit/pixel) on 8 × 8 and 16 × 16 image patches for the Gaussian, GSM, LM, OLM, and PoT models at overcompleteness levels 1x-4x.]
Figure 4: A comparison of different models for natural image patches. While using overcomplete
representations (OLM) yields substantial improvements over the complete linear model (LM), it still
cannot compete with other models of natural image patches. GSM here refers to a single multivariate
Gaussian scale mixture, that is, an elliptically contoured distribution with very few parameters (see
Supplementary Section 3). Log-likelihoods are reported for non-whitened image patches. Average
log-likelihood and standard error of the mean (SEM) were calculated from log-probabilities of 10000
test data points.
and three Gaussians as source distributions and obtained two times overcomplete representations for 8 × 8 image patches.
Figure 3 suggests that with GSMs as source distributions, the model can make use of three and
up to four times overcomplete representations. Our quantitative evaluations confirmed a substantial
improvement of the two-times overcomplete model over the complete model. Beyond this, however,
the improvements quickly become negligible (Figure 4).
The source distributions discovered by our algorithm were extremely sparse and resembled spike-and-slab distributions, generating mostly values close to zero with the occasional outlier. Source distributions of low-frequency components generally had narrower peaks than those of high-frequency components (Figure 3).
4.3 Model comparison
To compare the performance of the overcomplete linear model to the complete linear model and other image models, we would like to evaluate the overcomplete linear model's log-likelihood on a test set of images. However, to do this, we would have to integrate out all hidden units, which we cannot do analytically. One way to nevertheless obtain an unbiased estimate of p(x) is by introducing a tractable distribution as follows:
$$p(x) = \int p(x, z)\, dz = \int q(z \mid x)\, \frac{p(x, z)}{q(z \mid x)}\, dz. \tag{13}$$
We can then estimate the above integral by sampling states z_n from q(z | x) and averaging over p(x, z_n)/q(z_n | x), a technique called importance sampling. The closer q(z | x) is to p(z | x), the more efficient the estimator will be.
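In code, such an estimator might be sketched as follows (our own illustration; log_joint, sample_q, and log_q are hypothetical callables for log p(x, z), sampling from q(z | x), and evaluating log q(z | x)):

```python
import numpy as np

def log_likelihood_estimate(x, log_joint, sample_q, log_q,
                            n_samples=300, rng=None):
    rng = rng or np.random.default_rng(0)
    zs = [sample_q(x, rng) for _ in range(n_samples)]
    log_w = np.array([log_joint(x, z) - log_q(z, x) for z in zs])
    # Log of the sample mean of importance weights, computed stably.
    return log_w.max() + np.log(np.mean(np.exp(log_w - log_w.max())))
```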
A procedure for constructing distributions q(z | x) from transition operators such as our Gibbs sampling operator is annealed importance sampling (AIS) [16]. AIS starts with a simple and tractable
distribution and successively brings it closer to p(z | x). The computational and statistical efficiency
of the estimator depends on the efficiency of the transition operator. Here, we used our Gibbs sampler and constructed intermediate distributions by interpolating between a Gaussian distribution and
the overcomplete linear model. For the four-times overcomplete model, we used 300 intermediate
distributions and 300 importance samples to estimate the density of each data point.
We find that the overcomplete linear model is still worse than, for example, a single multivariate
GSM with separately modeled DC component (Figure 4; see also Supplementary Section 3).
An alternative overcomplete generalization of the complete linear model is the family of products of experts (PoE) [13]. Instead of introducing additional source variables, a PoE can have more factors than visible units,
$$s = Wx, \qquad p(x) \propto \prod_i f_i(s_i), \tag{14}$$
where W ∈ R^{N×M} and each factor is also called an expert. For N = M, the PoE is equivalent to the linear model (Equation 1). In contrast to the overcomplete linear model, the prior over hidden sources s here is in general not factorial.
A popular choice of PoE in the context of natural images is the product of Student-t (PoT) distributions, in which experts have the form f_i(s_i) = (1 + s_i²)^(−α_i) [35]. To train the PoT, we used a persistent variant of minimum probability flow learning [29, 31]. We used AIS in combination with HMC to evaluate each PoT model [30]. We find that the PoT is better suited for modeling the statistics of natural images and takes better advantage of overcomplete representations (Figure 4).
While both the estimator for the PoT and the estimator for the overcomplete linear model are consistent, the former tends to overestimate and the latter tends to underestimate the average log-likelihood. It is thus crucial to test convergence of both estimates if any meaningful comparison is to be made (see Supplementary Section 2).
5 Discussion
We have shown how to efficiently perform inference, training and evaluation in the sparse overcomplete linear model. While general purpose sampling algorithms such as MALA or HMC have the
advantage of being more widely applicable, we showed that blocked Gibbs sampling can be much
faster when the source distributions are sparse, as for natural images.
Another advantage of our sampler is that it is parameter free. Choosing suboptimal parameters
for the HMC sampler can lead to extremely poor performance. Which parameters are optimal can
change from data point to data point and over time as the model is trained. Furthermore, monitoring
the convergence of the Markov chains can be problematic [28]. We showed that by training a model
with a persistent variant of Monte Carlo EM, even the number of sampling steps performed in each
E-step becomes much less crucial for the success of training.
Optimizing and evaluating the likelihood of overcomplete linear models is a challenging problem.
To our knowledge, our study is the first to show a clear advantage of the overcomplete linear model
over its complete counterpart on natural images. At the same time, we demonstrated that with the
assumptions of a factorial prior, the overcomplete linear model underperforms other generalizations
of the complete linear model. Yet it is easy to see how our algorithm could be extended to other,
much better performing models. For instance, models in which multiple sources are modeled jointly
by a multivariate GSM, or bilinear models with two sets of latent variables.
Code for training and evaluating overcomplete linear models is available at
http://bethgelab.org/code/theis2012d/.
Acknowledgments
The authors would like to thank Bruno Olshausen, Nicolas Heess and George Papandreou for helpful
comments. This study was financially supported by the Bernstein award (BMBF; FKZ: 01GQ0601),
the German Research Foundation (DFG; priority program 1527, BE 3848/2-1), and a DFG-NSF
collaboration grant (TO 409/8-1).
References
[1] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. Journal of the Royal Statistical Society, Series B, 36(1):99–102, 1974.
[2] P. Berkes, R. Turner, and M. Sahani. On sparsity and overcompleteness in image models. Advances in Neural Information Processing Systems, 20, 2008.
[3] R. H. Byrd, P. Lu, and J. Nocedal. A limited memory algorithm for bound constrained optimization. SIAM Journal on Scientific and Statistical Computing, 16(5):1190–1208, 1995.
[4] R.-B. Chen and Y. N. Wu. A null space method for over-complete blind source separation. Computational Statistics & Data Analysis, 51(12):5519–5536, 2007.
[5] T. Cover and J. Thomas. Elements of Information Theory. Wiley, 1991.
[6] B. J. Culpepper, J. Sohl-Dickstein, and B. A. Olshausen. Building a better probabilistic model of images by factorization. Proceedings of the International Conference on Computer Vision, 13, 2011.
[7] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[8] A. Doucet. A note on efficient conditional simulation of Gaussian distributions, 2010.
[9] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2):216–222, 1987.
[10] A. Fischer and C. Igel. Empirical analysis of the divergence of Gibbs sampling based learning algorithms for restricted Boltzmann machines. Proceedings of the 20th International Conference on Artificial Neural Networks, 2010.
[11] M. Girolami. A variational method for learning sparse and overcomplete representations. Neural Computation, 13(11):2517–2532, 2001.
[12] N. Heess, N. Le Roux, and J. Winn. Weakly supervised learning of foreground-background segmentation using masked RBMs. International Conference on Artificial Neural Networks, 2011.
[13] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
[14] Y. Hoffman and E. Ribak. Constrained realizations of Gaussian fields: a simple algorithm. The Astrophysical Journal, 380:L5–L8, 1991.
[15] I. Murray and R. Salakhutdinov. Notes on the KL-divergence between a Markov chain and its equilibrium distribution, 2008.
[16] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
[17] R. M. Neal. MCMC using Hamiltonian Dynamics, pages 113–162. Chapman & Hall/CRC Press, 2011.
[18] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants, pages 355–368. MIT Press, 1998.
[19] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[20] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997.
[21] B. A. Olshausen and K. J. Millman. Learning sparse codes with a mixture-of-Gaussians prior. Advances in Neural Information Processing Systems, 12, 2000.
[22] G. Papandreou and A. L. Yuille. Gaussian sampling by local perturbations. Advances in Neural Information Processing Systems, 23, 2010.
[23] T. Park and G. Casella. The Bayesian lasso. Journal of the American Statistical Association, 103(482):681–686, 2008.
[24] R. Penrose. A generalized inverse for matrices. Proceedings of the Cambridge Philosophical Society, 51:406–413, 1955.
[25] G. O. Roberts and R. L. Tweedie. Exponential convergence of Langevin diffusions and their discrete approximations. Bernoulli, 2(4):341–363, 1996.
[26] B. Sallans. A hierarchical community of experts. Master's thesis, University of Toronto, 1998.
[27] U. Schmidt, Q. Gao, and S. Roth. A generative perspective on MRFs in low-level vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[28] M. W. Seeger. Bayesian inference and optimal design for the sparse linear model. Journal of Machine Learning Research, 9:759–813, 2008.
[29] J. Sohl-Dickstein. Persistent minimum probability flow, 2011.
[30] J. Sohl-Dickstein and B. J. Culpepper. Hamiltonian annealed importance sampling for partition function estimation, 2012.
[31] J. Sohl-Dickstein, P. Battaglino, and M. R. DeWeese. Minimum probability flow learning. Proceedings of the 28th International Conference on Machine Learning, 2011.
[32] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. Proceedings of the 25th International Conference on Machine Learning, 2008.
[33] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. of the Royal Society B: Biological Sciences, 265(1394), 1998.
[34] G. C. G. Wei and M. A. Tanner. A Monte Carlo implementation of the EM algorithm and the poor man's data augmentation algorithms. Journal of the American Statistical Association, 85(411):699–704, 1990.
[35] M. Welling, G. Hinton, and S. Osindero. Learning sparse topographic representations with products of Student-t distributions. Advances in Neural Information Processing Systems, 15, 2003.
[36] L. Younes. Parametric inference for imperfectly observed Gibbsian fields. Probability Theory and Related Fields, 1999.
Learning with Partially Absorbing Random Walks
Xiao-Ming Wu¹, Zhenguo Li¹, Anthony Man-Cho So³, John Wright¹ and Shih-Fu Chang¹,²
¹Department of Electrical Engineering, Columbia University
²Department of Computer Science, Columbia University
³Department of SEEM, The Chinese University of Hong Kong
{xmwu, zgli, johnwright, sfchang}@ee.columbia.edu, [email protected]
Abstract
We propose a novel stochastic process that is with probability αi being absorbed at current state i, and with probability 1 − αi follows a random edge out of it.
We analyze its properties and show its potential for exploring graph structures.
We prove that under proper absorption rates, a random walk starting from a set
S of low conductance will be mostly absorbed in S. Moreover, the absorption
probabilities vary slowly inside S, while dropping sharply outside, thus implementing the desirable cluster assumption for graph-based learning. Remarkably,
the partially absorbing process unifies many popular models arising in a variety
of contexts, provides new insights into them, and makes it possible for transferring findings from one paradigm to another. Simulation results demonstrate its
promising applications in retrieval and classification.
1 Introduction
Random walks have been widely used for graph-based learning, leading to a variety of models including PageRank [14] for web page ranking, hitting and commute times [8] for similarity measure
between vertices, harmonic functions [20] for semi-supervised learning, diffusion maps [7] for dimensionality reduction, and normalized cuts [12] for clustering. In graph-based learning one often
adopts the cluster assumption, which states that the semantics usually vary smoothly for vertices
within regions of high density [17], and suggests to place the prediction boundary in regions of
low density [5]. It is thus interesting to ask how the cluster assumption can be realized in terms of
random walks.
Although a random walk appears to explore the graph globally, it converges to a stationary distribution determined solely by vertex degrees regardless of the starting points, a phenomenon well known
as the mixing of random walks [11]. This causes some random walk approaches intended to capture
non-local graph structures to fail, especially when the underlying graph is well connected, i.e., the
random walk has a large mixing rate. For example, it was recently proven in [16] that under some
mild conditions the hitting and commute times on large graphs do not take into account the global
structure of the graph at all, despite the fact that they have integrated all the relevant paths on the
graph. It is also shown in [13] that the "harmonic" walks [20] in high-dimensional spaces converge
to a constant distribution as the data size approaches infinity, which is undesirable for classification
and regression. These findings show that intuitions regarding random walks can sometimes be misleading, and should be taken with caution. A natural question is: can we design a random walk
which implements the cluster assumption with some guarantees?
In this paper, we propose partially absorbing random walks (PARWs), a novel random walk model whose properties can be analyzed theoretically. In PARWs, a random walk is with probability αi being absorbed at current state i, and with probability 1 − αi follows a random edge out of it. PARWs are guaranteed to implement the cluster assumption in the sense that under proper absorp-
Figure 1: A partially absorbing random walk. (a) A flow perspective (see text). (b) A second-order
Markov chain. (c) An equivalent standard Markov chain with additional sinks.
tion rates, a random walk starting from a set S of low conductance will be mostly absorbed in S.
Furthermore, we show that by setting the absorption rates, the absorption probabilities can vary slowly inside S, while dropping sharply outside S. This approximately piecewise constant property
makes PARWs highly desirable and robust for a variety of learning tasks including ranking, clustering, and classification, as demonstrated in Section 4. More interestingly, it turns out that many
existing models including PageRank, hitting and commute times, and label propagation algorithms
in semi-supervised learning, can be unified or related in PARWs, which brings at least two benefits.
On one hand, our theoretical analysis sheds some light on the understanding of existing models; on
the other hand, it enables transferring findings among different paradigms. We present our model in
Section 2, analyze a special case of it in Section 3, and show simulation results in Section 4. Section
5 concludes the paper. Most of our proofs are included in supplementary material.
2 Partially Absorbing Random Walks
Let us consider a simple diffusion process illustrated in Fig. 1(a). At the beginning, a unit flow (blue)
is injected to the graph at a selected vertex. After one step, some of the flow (red) is "stored" at the
vertex while the rest (blue) propagates to its neighbors. Whenever the flow passes a vertex, some
fraction of it is retained at that vertex. As this process continues, the amount of flow stored in each
vertex will accumulate and there will be less and less flow left running on the graph. After a certain
number of steps, there will be almost no flow left running and the flow stored will nearly sum up to
1. The above diffusion process can be made precise in terms of random walks, as shown below.
Consider a discrete-time stochastic process X = {Xt : t ≥ 0} on the state space N = {1, 2, . . . , n},
where the initial state X0 is given, say X0 = i, the next state X1 is determined by the transition
probability P(X1 = j|X0 = i) = pij , and the subsequent states are determined by the transition
probabilities
P(Xt+2 = j | Xt+1 = i, Xt = k) = { 1, if i = j, i = k;  0, if i ≠ j, i = k;  P(Xt+2 = j | Xt+1 = i) = pij, if i ≠ k }    (1)
where t ≥ 0. Note that the process X is time homogeneous, i.e., the transition probabilities in (1)
are independent of t. In other words, if the previous and current states are the same, the process
will remain in the current state forever. Otherwise, the next state is conditionally independent of the
previous state given the current state, i.e., the process behaves like a usual random walk.
To illustrate the above construction, consider Fig. 1(b). Starting from state i, there is some probability pii that the process will stay at i in the next step; and once it stays, the process will be absorbed
into state i. Hence, we shall call the above process a partially absorbing random walk (PARW),
where pii is the absorption rate of state i. If 0 < pii < 1, then we say that i is a partially absorbing
state. If pii = 1, then we say that i is a fully absorbing state. Finally, if pii = 0, then we say that i
is a transient state. Note that if pii ∈ {0, 1} for every state i ∈ N, then the above process reduces to
a standard Markov chain [9].
A PARW is a second-order Markov chain completely specified by its first order transition probabilities {pij }. One can observe that any PARW can be realized as a standard Markov chain by adding
a sink (fully absorbing state) to each vertex in the graph, as illustrated in Fig. 1(c). The transition
probability from i to its sink i′ equals the absorption rate pii in PARWs. One may also notice that
the construction of PARWs can be generalized to the m-th order, i.e., the process is absorbed at a
state only after it has stayed at that state for m-consecutive steps. However, it can be shown that any
m-th order PARW can be realized by a second-order PARW. We will not elaborate on this due to
space constraints.
2.1 PARWs on Graphs
Let G = (V, W) be an undirected weighted graph, where V is a set of n vertices and W = [wij] ∈ Rn×n is a symmetric non-negative matrix of pairwise affinities among vertices. We assume G is connected. Let D = diag(d1, d2, . . . , dn) with di = Σj wij as the degree of vertex i, and define the Laplacian of G by L = D − W [6]. Denote by d(S) := Σ_{i∈S} di the volume of a subset S ⊆ V of vertices. Let α1, α2, . . . , αn ≥ 0 be arbitrary, and set Λ = diag(α1, α2, . . . , αn). Suppose that
we define the first order transition probabilities of a PARW by
pij = { αi/(αi + di), if i = j;  wij/(αi + di), if i ≠ j }    (2)
Then, we see that state i is an absorbing state (either partially or fully) when αi > 0, and is a transient state when αi = 0. In particular, the matrix Λ acts like a regularizer that controls the absorption rate of each state, i.e., the larger αi, the larger pii. In the sequel, we refer to Λ as the regularizer matrix.
Absorption Probabilities. We are interested in the probability aij that a random walk starting from
state i, is absorbed at state j in any finite number of steps. Let A = [aij] ∈ Rn×n be the matrix of
absorption probabilities. The following theorem shows that A has a closed-form.
Theorem 2.1. Suppose αi > 0 for some i. Then A = (Λ + L)⁻¹Λ.
Proof. Since αi > 0 for some i, the matrix Λ + L is positive definite and hence non-singular. Moreover, the matrix Λ + D is non-singular, since D is non-singular. Thus, the matrix I − (Λ + D)⁻¹W = (Λ + D)⁻¹(Λ + L) is also non-singular. Now, observe that the absorbing probabilities
{aij } satisfy the following equations:
aii = αi/(αi + di) · 1 + Σ_{j≠i} wij/(αi + di) aji,    (3)
aij = Σ_{k≠i} wik/(αi + di) akj,  i ≠ j.    (4)
Upon writing equations (3) and (4) in matrix form, we have (I − (Λ + D)⁻¹W)A = (Λ + D)⁻¹Λ, whence A = (I − (Λ + D)⁻¹W)⁻¹(Λ + D)⁻¹Λ = (Λ + D − W)⁻¹Λ = (Λ + L)⁻¹Λ.
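As an illustrative numeric rendering of Theorem 2.1 (a sketch, not code from the paper), absorption probabilities can be computed directly:

    import numpy as np

    def absorption_probabilities(W, alpha):
        """A = (Lambda + L)^{-1} Lambda for a PARW on a weighted graph.

        W     -- (n, n) symmetric non-negative affinity matrix
        alpha -- (n,) non-negative absorption regularizers, some alpha_i > 0
        """
        d = W.sum(axis=1)
        L = np.diag(d) - W  # graph Laplacian L = D - W
        A = np.linalg.solve(np.diag(alpha) + L, np.diag(alpha))
        return A  # rows are non-negative and sum to 1 (Proposition 2.1)

On small graphs, Proposition 2.1 below can be checked by asserting np.allclose(A.sum(axis=1), 1).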
The following result confirms that A is indeed a probability matrix.
Proposition 2.1. Suppose αi > 0 for some i. Then A is a non-negative matrix with each row summing up to 1.
By Proposition 2.1, Σk ajk = 1 for any j. This means that a PARW starting from any vertex will
eventually be absorbed, provided that there is at least one absorbing state in the state space.
2.2 Limits of Absorption Probabilities
By Theorem 2.1, we see that the absorption probabilities (A) are governed by both the structure
of the graph (L) and the regularizer matrix (Λ). It would be interesting to see how A varies with Λ, particularly when the αi's become small, which allows the flow to propagate sufficiently (Fig. 1(a)). The following result shows that as Λ (the αi's) vanishes, each row of A converges to a distribution proportional to (α1, α2, . . . , αn), regardless of graph structure.
Theorem 2.2. Suppose αi > 0 for all i. Then
lim_{ε→0⁺} (εΛ + L)⁻¹ εΛ = 1π̂⊤,    (5)
where (π̂)i = αi/(Σ_{j=1}^{n} αj). In particular, lim_{ε→0⁺} (εI + L)⁻¹ εI = (1/n)11⊤.
Theorem 2.2 tells us that with Λ = αI and as α → 0 a PARW will converge to the constant distribution 1/n, regardless of the starting vertex. At first glance, this limit seems meaningless. However, the following proposition will show that it actually has interesting connections with L⁺, the
pseudo-inverse of the graph Laplacian, a matrix that is widely studied and proven useful for many
learning tasks including recommendation and clustering [8].
Proposition 2.2. Suppose Λ = αI and denote Aα := (Λ + L)⁻¹Λ = (αI + L)⁻¹αI. Then,
lim_{α→0} (Aα − (1/n)11⊤)/α = L⁺.    (6)
Proposition 2.2 gives a novel probabilistic interpretation of L⁺. Note that by Theorem 2.2, A⁰ := lim_{α→0} Aα = (1/n)11⊤. Thus L⁺ is the derivative of Aα w.r.t. α at α = 0, implying that L⁺ reflects the variation of absorption probabilities when the absorption rate is very small. By (6), we see that ranking by L⁺ is essentially the same as ranking by Aα, when α is sufficiently small.
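A quick numeric sanity check of Proposition 2.2 on a random graph (a minimal sketch, illustrative only):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
    L = np.diag(W.sum(axis=1)) - W
    L_pinv = np.linalg.pinv(L)

    alpha = 1e-8  # small absorption rate, Lambda = alpha * I
    A = np.linalg.solve(alpha * np.eye(n) + L, alpha * np.eye(n))
    approx = (A - np.ones((n, n)) / n) / alpha  # (A^alpha - 11^T/n) / alpha
    print(np.allclose(approx, L_pinv, atol=1e-4))  # expected: True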
2.3 Relations with Popular Ranking and Classification Models
Relations with PageRank Vectors. Suppose αj > 0 for all j. Let a be the absorption probability vector of a PARW starting from vertex i. Denote by s the indicator vector of i, i.e., s(i) = 1 and s(j) = 0 for j ≠ i. Then a⊤ = s⊤(Λ + L)⁻¹Λ, which can be rewritten as
a⊤ = s⊤(Λ + D)⁻¹Λ + a⊤Λ⁻¹W(Λ + D)⁻¹Λ.    (7)
By letting Λ = (λ/(1 − λ))D, we have a⊤ = λs⊤ + (1 − λ)a⊤D⁻¹W, which is exactly the equilibrium equation for personalized PageRank [14]. Note that λ is often referred to as the "teleportation" probability in PageRank. This shows that personalized PageRank is a special case of PARWs with absorption rates pii = αi/(αi + di) = λ.
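This equivalence is easy to verify numerically; a minimal sketch, assuming the absorption_probabilities helper sketched after Theorem 2.1:

    import numpy as np

    def personalized_pagerank(W, s, lam, n_iter=1000):
        """Power iteration for a^T = lam * s^T + (1 - lam) * a^T D^{-1} W."""
        P = W / W.sum(axis=1, keepdims=True)  # row-stochastic D^{-1} W
        a = s.copy()
        for _ in range(n_iter):
            a = lam * s + (1 - lam) * a @ P
        return a

    # A PARW with Lambda = (lam / (1 - lam)) D reproduces the same vector:
    # alpha = (lam / (1 - lam)) * W.sum(axis=1)
    # a_parw = absorption_probabilities(W, alpha)[i]  # row for start vertex i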
Relations with Hitting and Commute Times. The hitting time Hij is the expected time that it
takes a random walk starting from i to first arrive at j, and the commute time Cij is the expected
time it takes a random walk starting from i to travel to j and back to i, which can be computed as
Hij = d(G)(L⁺jj − L⁺ij),    Cij = Hij + Hji = d(G)(L⁺ii + L⁺jj − 2L⁺ij),    (8)
where d(G) := Σi di denotes the volume of the graph. By (6), when Λ = αI and α is sufficiently small, ranking with Hij or Cij (say, with respect to i) is the same as ranking by Aαjj − Aαij or Aαii + Aαjj − 2Aαij respectively. This appears to be not particularly meaningful because the term Aαjj
is the self-absorption probability that does not contain any essential information with the starting
vertex i. Accordingly, it should not be included as part of the ranking function with respect to i.
This argument is also supported in a recent study by [16], where the hitting and commute times are
shown to be dominated by the inverse of degrees of vertices. In other words, they do not take into
account the graph structure at all. A remedy they propose is to throw away the diagonal terms of
L+ and only use the off-diagonal terms. This happens to suggest using absorption probabilities for
ranking and as similarity measure, because when α is sufficiently small, ranking by the off-diagonal terms of L⁺ is essentially the same as ranking by Aαij, i.e., the absorption probability of starting
from i and being absorbed at j. Our theoretical analysis in Section 3 and the simulation results in
Section 4 further confirm this argument.
Relations with Semi-supervised Learning. Interestingly, many label propagation algorithms in
semi-supervised learning can be cast in PARWs. The harmonic function method [20] is a PARW when setting αi = ∞ (absorption rate 1) for the labeled vertices while αi = 0 (absorption rate 0) for
the unlabeled. In [19] the authors have made this interpretation in terms of absorbing random walks,
where a random walk arriving at an absorbing state will stay there forever. PARWs can be viewed
as an extension of absorbing random walks. The regularized harmonic function method [5] is also a
PARW when setting αi to a positive constant for the labeled vertices while αi = 0 for the unlabeled. The consistency method [17], if using un-normalized Laplacian instead of normalized Laplacian, is a PARW with Λ = αI. Our analysis in this paper reveals several nice properties of this case (Section 3). A variant of this method is a PARW with Λ = αD, which is the same as PageRank as shown above. If we
add an additional sink to the graph, a variant of harmonic function method [10] and a variant of the
regularized harmonic function method [3] can all be included as instances of PARWs. We omit the
details here due to space constraints.
Benefits of a Unifying View. We have shown that PARWs can unify or relate many models from
different contexts. This brings at least two benefits. First, it sheds some light on existing models. For
instance, hitting and commute times are not suitable for ranking given its interpretation in absorption
probabilities, as discussed above. In the next section, we will show that a special case of PARWs is
better suited for implementing the cluster assumption for graph-based learning. Second, a unifying
view builds bridges between different paradigms thus making it easier to transfer findings between
them. For example, it has been shown in [2, 4] that approximate personalized PageRank vectors can
be computed in O(1/ε) iterations, where ε is a precision tolerance parameter. We indicate here that such a technique is also applicable to PARWs due to PARWs' generalizing nature. Consequently,
most models included in PARWs can be substantially accelerated using the same technique.
3 PARWs with Graph Conductance
In this section, we present results on the properties of the absorption probability vector ai obtained
by a PARW starting from vertex i (i.e., ai⊤ is the row i of A). We show that properties of ai
relate closely to the connectivity between i and the rest of graph, which can be captured by the
conductance of the cluster S where i belongs. We also find that properties of ai depend on the
setting of absorption rates. Our key results can be summarized as follows. In general, the probability
mass of ai is mostly absorbed by S. Under proper absorption rates, ai can vary slowly within S
while dropping sharply outside S. Such properties are highly desirable for learning tasks such as
ranking, clustering, and classification.
The conductance of a subset S ⊆ V of vertices is defined as Φ(S) = w(S, S̄)/min(d(S), d(S̄)), where w(S, S̄) := Σ_{(i,j)∈e(S,S̄)} wij is the cut between S and its complement S̄ [6]. We denote the indicator vector of S by χS such that χS(i) = 1 if i ∈ S and χS(i) = 0 otherwise; and denote the stationary distribution w.r.t. S by πS such that πS(i) = di/d(S) if i ∈ S and πS(i) = 0 otherwise. In terms of the conductance of S, the following theorem gives an upper bound on the expected probability mass escaped from S if the distribution of the starting vertex is πS.
Theorem 3.1. Let S be any set of vertices satisfying d(S) ≤ (1/2)d(G). Let γ1 = min_{i∈S} αi/di and γ2 = max_{i∈S̄} αi/di. Then,
πS⊤ A χS̄ ≤ [γ2(1 + γ1)/((1 + γ2)γ1²)] Φ(S).    (9)
Theorem 3.1 shows that most of the probability mass will be absorbed in S, provided that S is of
small conductance and the random walk starts from S according to πS. In other words, a PARW will be trapped inside the cluster¹ from where it starts, as desired. To identify the entire cluster, what
is more desirable would be that the absorption probabilities vary slowly within the cluster while
dropping sharply outside. As such, the cluster can be identified by detecting the sharp drop. We
show below that such property can be achieved by setting appropriate absorption rates at vertices.
3.1 PARWs with Λ = αI
We will prove that the choice of Λ = αI can fulfill the above goal. Before presenting theoretical
analysis, let us discuss the intuition behind it from both flow (Fig. 1(a)) and random walk perspectives. To vary slowly within the cluster, the flow needs to be distributed evenly within it; while to
drop sharply outside, the flow must be prevented from escaping. This means that the absorption
rates should be small in the interior but large near the boundary area of the cluster. Setting Λ = αI achieves this. It corresponds to the absorption rates pii = αi/(αi + di) = α/(α + di), which decrease monotonically with di. Since the degrees of vertices are usually relatively large in the interior of the cluster
due to denser connections, and small near its boundary area (Fig. 2(a)), the absorption rates are
therefore much larger at its boundary than in its interior (Fig. 2(b)). State differently, a random walk
may move freely inside the cluster, but it will get absorbed with high probability when traveling
near the cluster's boundary. In this way, the absorption rates set up a bounding "wall" around the
cluster to prevent the random walk from escaping, leading to an absorption probability vector that
¹A cluster is understood as a subset of vertices of small conductance.
Figure 2: Absorption rates and absorption probabilities. (a) A data set of three Gaussians with the
degrees of vertices in the underlying graph shown (see Section 4 for the descriptions of the data
and graph construction). A starting vertex is denoted in black circle. (b–c) Absorption rates and absorption probabilities for Λ = αI (α = 10⁻³). (d) Sorted absorption probabilities of (c). For illustration purpose, in (a–b), the degrees of vertices and the absorption rates have been properly
scaled, and in (c), the data are arranged such that points within each Gaussian appear consecutively.
varies slowly within the cluster while dropping sharply outside (Figs. 2(c–d)), thus implementing
the cluster assumption. We make these arguments precise below.
It is worth pointing out that a PARW with Λ = αI is symmetric, i.e., the absorption probability of
starting from i and absorbed at j is equal to the probability of starting from j and absorbed at i. For
simplicity, we use the abbreviated notation a to denote ai , the absorption probability vector for the
PARW starting from vertex i. By (3) and the symmetry property, we immediately see that a has the
following "harmonic" property:
a(i) = αi/(αi + di) + Σ_{k≠i} wik/(αi + di) a(k),    a(j) = Σ_{k≠j} wjk/(αj + dj) a(k),  j ≠ i.    (10)
We will use this property to prove some interesting results. Another desirable property one should
notice for this PARW is that the starting vertex always has the largest absorption probability, as
shown by the following lemma.
Lemma 3.2. Given Λ = αI, then aii > aij for any i ≠ j.
By Lemma 3.2 and without loss of generality, we assume the vertices are sorted so that a(1) > a(2) ≥ · · · ≥ a(n), where vertex 1 is the starting vertex. Let Sk be the set of vertices {1, . . . , k}.
Denote e(Si , Sj ) as the set of edges between Si and Sj .
The following theorem quantifies the drop of the absorption probabilities between Sk and S̄k.
Theorem 3.3. For every S ∈ {Sk | k = 1, 2, . . . , n},
Σ_{(u,v)∈e(S,S̄)} wuv (a(u) − a(v)) = α(1 − Σ_{k∈S} a(k)).    (11)
Theorem 3.3 shows that the weighted difference in absorption probabilities between Sk and S̄k is α(1 − Σ_{j=1}^{k} a(j)), implying that it drops slowly when α is small and as k increases, as expected.
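The identity in Theorem 3.3 can be checked numerically on any small graph whose set S contains the start vertex; a sketch, again assuming the absorption_probabilities helper above:

    import numpy as np

    def check_theorem_3_3(W, alpha_scalar, start, S):
        """Check sum_{(u,v) in e(S,S_bar)} w_uv (a(u)-a(v)) = alpha (1 - sum_{k in S} a(k)).

        S must contain the start vertex for the identity to apply.
        """
        n = W.shape[0]
        a = absorption_probabilities(W, alpha_scalar * np.ones(n))[start]
        S = np.asarray(S)
        S_bar = np.setdiff1d(np.arange(n), S)
        lhs = sum(W[u, v] * (a[u] - a[v]) for u in S for v in S_bar)
        rhs = alpha_scalar * (1 - a[S].sum())
        return np.isclose(lhs, rhs)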
Next we show the variation of absorption probabilities with graph conductance. Without loss of
generality, we consider sets Sj where d(Sj) ≤ (1/2)d(G).
The following lemma says that a(j + 1) will drop little from a(j) if the set Sj has high conductance or if the vertex j is far away from the starting vertex 1 (i.e., j ≫ 1).
Lemma 3.4. If Φ(Sj) = Φ, then
a(j + 1) ≥ a(j) − α(1 − Σ_{k=1}^{j} a(k)) / (Φ d(Sj)).    (12)
The above result can be extended to describe the drop in a much longer range, as stated in the
following theorem.
Figure 3: Absorption probabilities on the three Gaussians in Fig. 2(a) with the starting vertex
denoted in black circle. (a–e) Λ = αI, α = 10⁰, 10⁻², 10⁻⁴, 10⁻⁶, 10⁻⁸; (f–j) Λ = αD, α = 10⁰, 10⁻², 10⁻⁴, 10⁻⁶, 10⁻⁸. For illustration purpose, the data are arranged such that points
within each Gaussian appear consecutively, as in Fig. 2(c).
Table 1: Ranking results (MAP) on USPS
Digits                 0     1     2     3     4     5     6     7     8     9    All
Λ = αI              .981  .988  .876  .893  .646  .778  .940  .919  .746  .730  .850
PageRank            .886  .972  .608  .764  .488  .568  .837  .825  .626  .702  .728
Manifold Ranking    .957  .987  .827  .827  .467  .630  .917  .822  .675  .719  .783
Euclidean Distance  .640  .980  .318  .499  .337  .294  .548  .620  .368  .480  .508
Theorem 3.5. If Φ(Sj) ≥ 2φ, then there exists a k > j such that
d(Sk) ≥ (1 + φ)d(Sj)  and  a(k) ≥ a(j) − α(1 − Σ_{k=1}^{j} a(k)) / (φ d(Sj)).
Theorem 3.5 tells us that if the set Sj has high conductance, then there will be a set Sk much larger
than Sj where the absorption probability a(k) remains large. In other words, a(k) will not drop
much if Sj is closely connected with the rest of graph. Combining Theorems 3.3, 3.5, and 3.1, we
see that the absorption probability vector of the PARW with Λ = αI has the nice property of varying
slowly within the cluster while dropping sharply outside.
We remark that similar analyses have been conducted in [1, 2] on personalized PageRank, for the
local clustering problem [15] whose goal is to find a local cut of low conductance near a specified
starting vertex. As shown in Section 2, personalized PageRank is a special case of PARWs with
Λ = (λ/(1 − λ))D, which corresponds to setting the same absorption rate pii = λ at each vertex. This setting does not take advantage of the cluster assumption. Indeed, despite the significant cluster structure in the three Gaussians (Fig. 2), no clear drop emerges by varying λ (Section 4). This explains the "heuristic" used in [1, 2] where the personalized PageRank vector is divided by the degrees of vertices to generate a sharp drop. In contrast, our choice of Λ = αI appears to be more
justified, without the need of such post-processing while retaining a probabilistic foundation.
4 Simulation
In this section, we demonstrate our theoretical results on both synthetic and real data. For each data
set, a weighted k-NN graph is constructed with k = 20. The similarity between vertices i and j is
computed as wij = exp(−d²ij/σ) if i is within j's k nearest neighbors or vice versa, and wij = 0 otherwise (wii = 0), where σ = 0.2 × r and r denotes the average square distance between each point to its 20th nearest neighbor.
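A minimal sketch of this graph construction (illustrative, not the authors' code):

    import numpy as np

    def knn_graph(X, k=20):
        """Weighted symmetric k-NN graph with the kernel described above."""
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        np.fill_diagonal(d2, np.inf)                          # exclude self-edges
        knn_d2 = np.sort(d2, axis=1)[:, :k]
        sigma = 0.2 * knn_d2[:, -1].mean()   # 0.2 x avg squared dist to k-th neighbor
        W = np.zeros_like(d2)
        idx = np.argsort(d2, axis=1)[:, :k]
        for i in range(len(X)):
            W[i, idx[i]] = np.exp(-d2[i, idx[i]] / sigma)
        W = np.maximum(W, W.T)  # keep edge if i is in j's k-NN or vice versa
        return W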
The first experiment is to examine the absorption probabilities when varying absorption rates. We
use the synthetic three Gaussians in Fig. 2(a), which consists of 900 points from three Gaussians,
with 300 in each. Fig. 3 compares the cases of Λ = αI and Λ = αD (PageRank). We can
Table 2: Classification accuracy on USPS
HMN            LGC            Λ = αD         Λ = αI
.782 ± .068    .792 ± .062    .787 ± .048    .881 ± .039
draw several observations. For Λ = αI, when α is large, most probability mass is absorbed in the cluster of the starting vertex (Fig. 3(a)). As it becomes appropriately small, the probability mass distributes evenly within the cluster, and a sharp drop emerges (Fig. 3(b)). As α → 0, the probability mass distributes more evenly within each cluster and also on the entire graph (Figs. 3(c–e)), but the drops between clusters are still quite significant. In contrast, for Λ = αD, no significant drops show for all α's (Figs. 3(f–j)). This is due to the uniform absorption rates on the graph, which makes the flow favor vertices with denser connections (i.e., of large degrees). These observations support the theoretical arguments in Section 3 for PARWs with Λ = αI and suggest its robustness
in distinguishing between different clusters.
The second experiment is to test the potential of PARWs for information retrieval. We compare
PARWs with Λ = αI to PageRank (i.e., PARWs with Λ = αD), Manifold Ranking [18], and the baseline using Euclidean distance. For parameter selection, we use α = 10⁻⁶ for Λ = αI and λ = 0.15 for PageRank (see Section 2.3) as suggested in [14]. The regularization parameter in Manifold Ranking is set to 0.99, following [18]. The image benchmark USPS² is used for this experiment, which contains 9298 images of handwritten digits from 0 to 9 of size 16 × 16, with
1553, 1269, 929, 824, 852, 716, 834, 792, 708, and 821 instances of each digit respectively. Each
instance is used as a query and the mean average precision (MAP) is reported. The results are shown
in Table 1. We see that the PARW with Λ = αI consistently gives best results for individual digits
as well as the entire data set.
In the last experiment, we test PARWs on classification/semi-supervised learning, also on USPS
with all 9298 images. We randomly sample 20 instances as labeled data and make sure there is
at least one label for each class. For PARWs, we classify each unlabeled instance u to the class
of the labeled vertex v where u is most likely to be absorbed, i.e., v = arg max_{i∈L} aui where L denotes the labeled data and aui is the absorption probability. We compare PARWs with Λ = αI (α = 10⁻⁶) and Λ = αD (λ = 0.15) to the harmonic function method (HMN) [20] coupled
with class mass normalization (CMN) and the local and global consistency (LGC) method [17]. No
parameter in HMN is required, and the regularization parameter in LGC is set to 0.99 following [17].
The classification accuracy averaged over 1000 runs is shown in Table 2. Again, it confirms the
superior performance of the PARW with Λ = αI.
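The classification rule used here is a one-liner given the absorption matrix; a sketch assuming the absorption_probabilities helper above:

    import numpy as np

    def classify_by_absorption(A, labeled_idx, labels, unlabeled_idx):
        """Assign each unlabeled u the label of the labeled vertex most likely to absorb it."""
        # a_{u,i} for u unlabeled, i in the labeled set L
        scores = A[np.ix_(unlabeled_idx, labeled_idx)]
        best = scores.argmax(axis=1)  # v = argmax_{i in L} a_{u,i}
        return labels[best]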
In the second and third experiments, we also tried other parameter settings for methods where appropriate. We found that the performance of PARWs with Λ = αI is quite stable with small α, and
varying parameters in other methods did not lead to significantly better results, which validates our
previous arguments.
5 Conclusions
We have presented partially absorbing random walks (PARWs), a novel stochastic process generalizing ordinary random walks. Surprisingly, it has been shown to unify or relate many popular existing
models and provide new insights. Moreover, a new algorithm developed from PARWs has been
theoretically shown to be able to reveal cluster structure under the cluster assumption. Simulation
results have confirmed our theoretical analysis and suggested its potential for a variety of learning
tasks including retrieval, clustering, and classification. In future work, we plan to apply our model
to real applications.
Acknowledgements
This work is supported in part by Office of Naval Research (ONR) grant #N00014-10-1-0242. The
authors would like to thank the anonymous reviewers for their insightful comments.
²http://www-stat.stanford.edu/~tibs/ElemStatLearn/
References
[1] R. Andersen and F. Chung. Detecting sharp drops in PageRank and a simplified local partitioning algorithm. Theory and Applications of Models of Computation, pages 1–12, 2007.
[2] R. Andersen, F. Chung, and K. Lang. Local graph partitioning using PageRank vectors. In FOCS, pages 475–486, 2006.
[3] Y. Bengio, O. Delalleau, and N. Le Roux. Label propagation and quadratic criterion. Semi-supervised Learning, pages 193–216, 2006.
[4] P. Berkhin. Bookmark-coloring algorithm for personalized PageRank computing. Internet Mathematics, 3(1):41–62, 2006.
[5] O. Chapelle and A. Zien. Semi-supervised classification by low density separation. In AISTATS, 2005.
[6] F. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[7] R. Coifman and S. Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
[8] F. Fouss, A. Pirotte, J. Renders, and M. Saerens. Random-walk computation of similarities between nodes of a graph with application to collaborative recommendation. IEEE Transactions on Knowledge and Data Engineering, 19(3):355–369, 2007.
[9] J. Kemeny and J. Snell. Finite Markov Chains. Springer, 1976.
[10] B. Kveton, M. Valko, A. Rahimi, and L. Huang. Semisupervised learning with max-margin graph cuts. In AISTATS, pages 421–428, 2010.
[11] L. Lovász and M. Simonovits. The mixing rate of Markov chains, an isoperimetric inequality, and computing the volume. In FOCS, pages 346–354, 1990.
[12] M. Meila and J. Shi. A random walks view of spectral segmentation. In AISTATS, 2001.
[13] B. Nadler, N. Srebro, and X. Zhou. Statistical analysis of semi-supervised learning: The limit of infinite unlabelled data. In NIPS, pages 1330–1338, 2009.
[14] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. 1999.
[15] D. A. Spielman and S.-H. Teng. A local clustering algorithm for massive graphs and its application to nearly-linear time graph partitioning. CoRR, abs/0809.3232, 2008.
[16] U. Von Luxburg, A. Radl, and M. Hein. Hitting and commute times in large graphs are often misleading. arXiv preprint arXiv:1003.1266, 2010.
[17] D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, pages 595–602, 2004.
[18] D. Zhou, J. Weston, A. Gretton, O. Bousquet, and B. Schölkopf. Ranking on data manifolds. In NIPS, 2004.
[19] X. Zhu and Z. Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University, 2002.
[20] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, 2003.
Modelling Reciprocating Relationships
with Hawkes Processes
Charles Blundell
Gatsby Computational Neuroscience Unit
University College London
London, United Kingdom
[email protected]
Katherine A. Heller
Duke University
Durham, NC, USA
[email protected]
Jeffrey M. Beck
University of Rochester
Rochester, NY, USA
[email protected]
Abstract
We present a Bayesian nonparametric model that discovers implicit social structure from interaction time-series data. Social groups are often formed implicitly,
through actions among members of groups. Yet many models of social networks
use explicitly declared relationships to infer social structure. We consider a particular class of Hawkes processes, a doubly stochastic point process, that is able
to model reciprocity between groups of individuals. We then extend the Infinite
Relational Model by using these reciprocating Hawkes processes to parameterise
its edges, making events associated with edges co-dependent through time. Our
model outperforms general, unstructured Hawkes processes as well as structured
Poisson process-based models at predicting verbal and email turn-taking, and military conflicts among nations.
1 Introduction
As social animals, people constantly organise themselves into social groups. These social groups can
revolve around particular activities, such as sports teams, particular roles, such as store managers,
or general social alliances, like gang members. Understanding the dynamics of group interactions is
a difficult problem that social scientists strive to address.
One basic problem in understanding group behaviour is that groups are often not explicitly defined,
and the members must be inferred. How might we infer these groups, and from what data? How can
we predict future interactions among individuals based on these inferred groups?
A common approach is to infer groups, or clusters, of people based upon a declared relationship
between pairs of individuals [1, 2, 3, 4]. For example, data from social networks, where two people
declare that they are "friends" or in each others' social "neighbourhood", can potentially be used.
However these declared relationships are not necessarily readily available, truthful, or pertinent to
inferring the social group structure of interest.
In this paper we instead propose an approach to inferring social groups based directly on a set
of real interactions between people. This approach reflects an "actions speak louder than words"
philosophy. If we are interested in capturing groups that best reflect human behaviour we should be
determining the groups from instances of that same behaviour. We develop a model which can learn
social group structure based on interactions data.
In the work that we present, our data will consist of a sequence of many events, each event reflecting
one person, the sender, performing some sort of an action towards another person, the recipient, at
some particular point in time. As examples, the actions we consider are that of one person sending
an email to another, one person speaking to another, or one country engaging in military action
towards another.
The key property that we leverage to infer social groups is reciprocity. Reciprocity is a common
social norm, where one person?s actions towards another increases the probability of the same type
of action being returned. For example, if Bob emails Alice, it increases the probability that Alice
will email Bob in the near future. Reciprocity widely manifests across many cultures, perhaps most
commonly as the golden rule and tit for tat retaliation. When multiple people show a similar pattern
of reciprocity, our model will place these people in their own group.
The Bayesian nonparametric model we use on these time-series data is generative and accounts for
the rate of events between clusters of individuals. It is built upon mutually-exciting point processes,
known as Hawkes processes [5, 6]. Pairs of mutually-exciting Hawkes processes are able to capture
the causal nature of reciprocal interactions. Here the processes excite one another through their
actualised events. Since Poisson processes are a special case of Hawkes processes, our model is also
able to capture simpler one-way, non-reciprocal, relationships as well.
Our model is also related to the Infinite Relational Model (IRM) [1, 2]. The IRM typically assumes
that there is a fixed graph, or social network, which is observed. Here we are interested in inferring
the implicit social structure based only on the occurrences of interactions between vertices in the
graph. We apply our model to reciprocal behaviour in verbal and email conversations and to military
conflicts among nations.
The remainder of the paper is organised as follows: section 2 discusses using Poisson processes
together with the IRM. Section 3 describes our use of self-exciting and pairs of Hawkes processes,
and section 4 specifies how they are used to develop our reciprocity clustering model. Section
5 presents an inference algorithm for our model, section 6 discusses related work, and section 7
presents experimental results using our model on synthetic, email, speech and intercountry conflict
data.
2 Poisson processes with the Infinite Relational Model
The Infinite Relational Model (IRM) [1, 2] was developed to model relationships among entities
as graphs, based upon previously declared relationships. Let V denote the vertices of the graph,
corresponding to individuals, and let euv denote the presence or absence of a relationship between
vertices u and v, corresponding to an edge in the graph. The generative process of the IRM is:
π ∼ CRP(α)   (1)
θ_pq ∼ Beta(a, b)   ∀p, q ∈ range(π)   (2)
e_uv ∼ Bernoulli(θ_{π(u)π(v)})   ∀u, v ∈ V   (3)
where π is a partition of the vertices V, distributed according to the Chinese restaurant process (CRP) with concentration parameter α, with p and q indexing clusters of π. Hence vertex u belongs to the cluster given by π(u), and consequently, the clusters in π are given by range(π). The probability of an edge between vertex u and vertex v is then the parameter θ_pq associated with their pair of clusters.
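To make the generative process concrete, here is a minimal forward sampler for equations (1)-(3). This is our own illustrative sketch, not code from the paper; the function name, the hyperparameter names, and the use of numpy are assumptions.

import numpy as np

def sample_irm_prior(n_vertices, alpha=1.0, a=1.0, b=1.0, seed=None):
    # Draw a partition from the CRP, edge probabilities theta_pq from a Beta
    # prior, then edges from Bernoulli(theta_{pi(u) pi(v)}).
    rng = np.random.default_rng(seed)
    z = [0]                                       # pi: cluster of each vertex
    for _ in range(1, n_vertices):
        counts = np.bincount(z).astype(float)
        probs = np.append(counts, alpha)          # existing clusters vs. a new one
        z.append(int(rng.choice(len(probs), p=probs / probs.sum())))
    z = np.array(z)
    K = z.max() + 1
    theta = rng.beta(a, b, size=(K, K))           # theta_pq for every cluster pair
    edges = rng.random((n_vertices, n_vertices)) < theta[z[:, None], z[None, :]]
    return z, edges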
Often in interaction data there are many instances of interactions between the same pair of
individuals; this cannot be modelled by the IRM. A straightforward way to modify the IRM to account for this is to use a Gamma-Poisson observation model instead of the usual Beta-Bernoulli
model. Unfortunately, a vanilla Gamma-Poisson observation model does not allow us to predict
events into the future, outside the observed time window. Therefore we consider using a Poisson
process instead.
Poisson processes are stochastic counting processes. For an introduction see [7]. We shall consider Poisson processes on [0, ∞), such that the number of events in any interval [s, s′) of the real half-line, denoted N[s, s′), is Poisson distributed with rate λ(s′ − s).
[Figure 1 omitted: the top right panel plots the rates λ_pq(t) between cluster pairs against time t, and the bottom right panel plots the interaction events; legend entries: Alice, Bob → Alice, Bob; Mallory → Alice, Bob; Alice, Bob → Mallory; Mallory → Mallory.]
Figure 1: A simple example. The graph in the top left shows the clusters and edge weights learned by our model from the data in the bottom right plot. The top right plot shows the rates of interaction events between clusters. The bottom right plot shows the interaction events. In the graph, the width and temperature (how red the colour is) denote the expected rate of events between pairs of clusters (using equations (9) and (10)), while in the plots on the right, line colours indicate the identity of cluster pairs, and box colours indicate the originator of the event: Alice (red), Bob (blue), Mallory (black). Alice and Bob interact with each other such that they positively reciprocate each other's actions. Mallory, however, has an asymmetric relationship with both Alice and Bob. Only after many events caused by Mallory do Alice or Bob respond, and when they do respond they both, similarly, respond more sparsely.
With Gamma priors on the rate parameter, the full Poisson process IRM model is:
π ∼ CRP(α)   (4)
λ_pq ∼ Gamma(a, b)   ∀p, q ∈ range(π)   (5)
N_uv(·) ∼ PoissonProcess(λ_{π(u)π(v)})   ∀u, v ∈ V   (6)
where N_uv(·) is the random counting measure of the Poisson process, and a and b are respectively the shape and inverse scale parameters of the Gamma prior on the rate of the Poisson processes, λ_pq. Inference proceeds by conditioning on N_uv[0, T) = n_uv, where n_uv is the total number of events directed from u to v in the given time interval. Since conjugacy can be maintained, due to the superposition property of Poisson processes, inference in this model is possible in much the same way as in the original IRM [2, 1].
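As a sketch of why this conditioning is convenient, the Gamma-Poisson update for one cluster block can be written in closed form. The helper below is hypothetical (our own names and rate bookkeeping), and it assumes every edge in block (p, q) is observed for the whole window [0, T).

def gamma_poisson_posterior(a, b, block_count, n_p, n_q, T):
    # By superposition, the total number of events in cluster block (p, q)
    # is Poisson with mean lambda_pq * n_p * n_q * T, so a Gamma(a, b) prior
    # (shape a, inverse scale b) stays Gamma after conditioning on the counts.
    return a + block_count, b + n_p * n_q * T   # posterior shape, inverse scale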
There are two notable deficiencies of this model: the rate of events on each edge is independent of
every other edge, and conditioned on the time interval containing all observed events, the times of
these events are uniformly distributed. This is not the typical pattern we observe in interaction data.
If I send an email to someone, it is more likely that I will receive an email from them than had I not
sent an email, and the probability of receiving a reply decreases as time advances. In the following
sections we will introduce and utilise mutually-exciting Hawkes processes, which are able to exactly
model these phenomena.
3 Self-Exciting and Pairs of Mutually-Exciting Hawkes Processes
Hawkes [5, 6] introduced a family of self- and mutually-exciting Markov point processes, often
called Hawkes processes. These processes are intuitively similar to Poisson processes, but unlike
Poisson processes, the rates of Hawkes processes depend upon their own historic events and those
of other processes in an excitatory fashion.
We shall consider an array of K ? K Hawkes processes, where K is the number of clusters in a
partition drawn from a CRP restricted to the individuals V . As in the IRM, the CRP allows the
number of processes to grow in an unconstrained manner as the number of individuals in the graph
grows. However, unlike the IRM, these Hawkes processes will be pairwise-dependent: the Hawkes
process governing events from cluster p to cluster q will depend upon the Hawkes process governing
events from cluster q to cluster p.
Let N_pq be the counting measure of the (p, q)th Hawkes process. Each Hawkes process is a point process whose rate at time t is given by:
λ_pq(t) = γ_pq n_p n_q + ∫_{−∞}^t g_pq(t − s) dN_qp(s)   (7)
where γ_pq is the base rate of the counting measure of the Hawkes process N_pq, n_p and n_q are the number of individuals in clusters p and q respectively, and g_pq is a non-negative function such that ∫_0^∞ g_pq(s) ds < 1, ensuring that N_pq is stationary. N_qp is the counting measure of the reciprocating Hawkes process of N_pq. Intuitively, if N_pq governs events from cluster p to cluster q, then N_qp governs events from cluster q to cluster p. Equation (7) shows how the rates of events in these two processes are intimately intertwined.
Since N_qp is an atomic measure, whose atoms correspond to the times of events, we can express the rate of N_pq given in (7), by conditioning on the events of its reciprocating process N_qp, as:
λ_pq(t) = γ_pq n_p n_q + Σ_{i : t_i^{qp} < t} g_pq(t − t_i^{qp})   (8)
where t_i^{qp} denotes the time of the ith event of process N_qp. Thus the rate of the process N_pq at time t is some base rate at which events occur, γ_pq, plus an additional rate of g_pq(t − t_i^{qp}) for each event in the reciprocating process N_qp. Figure 1 (top) shows an example of how λ_pq(t) and λ_qp(t) vary for these pairs of processes.
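In code, equation (8) is a direct sum over the reciprocal events that happened before t. The sketch below is our own (hypothetical names); the kernel g is passed in as a function, since its form is only fixed later, in equation (12).

import numpy as np

def rate_pq(t, recip_times, gamma_pq, n_p, n_q, g):
    # Equation (8): base rate plus one kernel term per reciprocal event before t.
    past = np.asarray(recip_times, dtype=float)
    past = past[past < t]
    return gamma_pq * n_p * n_q + g(t - past).sum()

# Example with an exponential kernel g(d) = beta * exp(-d / tau):
lam = rate_pq(2.0, [0.3, 1.1, 1.9], gamma_pq=0.5, n_p=1, n_q=1,
              g=lambda d: 0.8 * np.exp(-d / 0.5))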
If g_pq(·) = 0 then the process is a Poisson process with rate γ_pq n_p n_q. When p = q, the process is self-exciting: its current rate depends solely on its own previous events. In our application, self-exciting processes model interactions within a social group, as they model cohesion in reciprocity: individual reciprocation within a group is as if towards oneself. In the case of p ≠ q, each pair of processes N_pq and N_qp mutually excite one another. An event in one increases the probability of an event from the other, and so on. Importantly, the type of reciprocation (parameterised by g_pq and g_qp, respectively) differs between events from group p to group q and events from group q to group p. This difference in reciprocity is what we would like our model to leverage to learn about social groups.
Hawkes processes are an example of doubly stochastic point processes. The rate of events is itself
a random variable. By integrating out the events of Nqp we can see that this process is stationary,
as its rate does not depend upon time, and also gain further insight into the role of the functions gpq
and g_qp. For self-exciting Hawkes processes, where p = q, the marginal rate is:
E[λ_pp(t)] = n_p² γ_pp / (1 − G_pp)   (9)
whilst for a pair of mutually-exciting Hawkes processes the marginal rate is:
E[λ_pq(t)] = n_p n_q (γ_pq + γ_qp G_pq) / (1 − G_pq G_qp)   (10)
where G_pq = ∫_{−∞}^t g_pq(t − u) du, which tempers the effect of the rate of events from one process on the rate of the other. The closer G_pq is to zero, the more Poisson-like the Hawkes processes behave, whilst as G_pq approaches one, the rate of events in N_pq is increasingly caused by those in N_qp.
4 Hawkes Processes with the Infinite Relational Model
We combine Hawkes processes with the IRM as follows. We pick the form for the g_pq functions as g_pq(Δ) = β_pq e^{−Δ/τ_pq} [5, 6, 8, 9]. Examples of using this parameterisation are shown in Figure 1 (top).
Due to the memorylessness property of the exponential distribution, inference with Hawkes processes with this parameterisation takes time linear in the number of events [10]. Our generative
model is as follows:
π ∼ CRP(α)   (11)
λ_pq(t) = γ_pq n_p n_q + β_pq ∫_{−∞}^t e^{−(t−s)/τ_pq} dN_qp(s)   ∀p, q ∈ range(π)   (12)
N_pq(·) ∼ HawkesProcess(λ_pq(·))   ∀p, q ∈ range(π)   (13)
N_uv(·) ∼ Thinning(N_{π(u)π(v)}(·))   ∀u, v ∈ V   (14)
where, as before, π is a partition of the individuals, drawn from a Chinese restaurant process (CRP) with concentration parameter α. For each pair of clusters p and q, we associate a time-varying rate λ_pq(t) which dictates the rate of events from individuals in cluster p to individuals in cluster q, and a Hawkes process N_pq. As described in the previous section, this rate depends upon the specific events sent in the opposite direction, from cluster q to cluster p, whose measure is also random and is denoted N_qp(·).
Each random measure N_uv(·) governs events between a particular pair of individuals within clusters p and q respectively. The N_uv(·) are drawn by thinning the cluster random measure N_pq(·) among all of the edges between individuals in clusters p and q. Thinning means distributing the atoms of N_pq(·) among each N_uv(·), such that N_pq = Σ_{u,v} N_uv(·). Constructing the edge measures by thinning means it is sufficient to ensure that ∫_0^∞ g_pq(u) du < 1 for the process to be stationary. This condition, under the chosen parameterisation, implies that β_pq τ_pq < 1. When all β_pq = 0, this model is equivalent to the Poisson process IRM in Section 2. Henceforth we will use uniform thinning (each event in N_pq(·) is assigned uniformly at random among all N_uv(·) where p = π(u) and q = π(v)), but in principle any thinning scheme may be used.
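One way to simulate a single pair of mutually-exciting processes with the exponential kernel of equation (12) is Ogata-style thinning: because each intensity only decays between events, the current total intensity is a valid upper bound until the next event. The sketch below is our own illustration, not the paper's code; the names are hypothetical, and it assumes the stationarity condition β_pq τ_pq < 1 holds for both directions.

import numpy as np

def intensity(t, recip_times, gamma, beta, tau, n_pair):
    # Equation (12): gamma * n_p * n_q + beta * sum_i exp(-(t - t_i)/tau)
    past = recip_times[recip_times <= t]
    return gamma * n_pair + beta * np.exp(-(t - past) / tau).sum()

def simulate_pair(T, pars_pq, pars_qp, n_p=1, n_q=1, seed=None):
    # pars_* = (gamma, beta, tau); Ogata thinning for the pair (N_pq, N_qp).
    rng = np.random.default_rng(seed)
    n_pair = n_p * n_q
    ev_pq, ev_qp = [], []
    t = 0.0
    while True:
        a_pq, a_qp = np.array(ev_pq), np.array(ev_qp)
        # Both intensities decay between events, so their current sum bounds
        # the total intensity until the next accepted event.
        lam_bar = (intensity(t, a_qp, *pars_pq, n_pair)
                   + intensity(t, a_pq, *pars_qp, n_pair))
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            return np.array(ev_pq), np.array(ev_qp)
        lam_pq = intensity(t, a_qp, *pars_pq, n_pair)
        lam_qp = intensity(t, a_pq, *pars_qp, n_pair)
        if rng.random() * lam_bar <= lam_pq + lam_qp:        # accept candidate
            if rng.random() * (lam_pq + lam_qp) <= lam_pq:   # attribute direction
                ev_pq.append(t)
            else:
                ev_qp.append(t)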
For a Hawkes process N_pq, the probability that no events occur in the interval [s, s′) is:
exp(−∫_s^{s′} λ_pq(t) dt)   (15)
Suppose we observe the times of all the events in [0, T), {t_i^{uv}}_{i=1}^{n_uv}, for process N_uv (n_uv being the total number of events from u to v in [0, T)). Suppose that individual u is in cluster p and that individual v is in cluster q. Furthermore, assume there are no events before time 0. The likelihood of each edge between individuals u and v is thus:
p({t_i^{uv}}_{i=1}^{n_uv} | θ_pq, {t_i^{qp}}_{i=1}^{n_qp}) = exp(−(1/(n_p n_q)) ∫_0^T λ_pq(t) dt) ∏_{i=1}^{n_uv} λ_pq(t_i^{uv}) / (n_p n_q)   (16)
where θ_pq = (γ_pq, β_pq, τ_pq) and {t_i^{qp}}_{i=1}^{n_qp} are the times of the reciprocal events. We place proper uniform priors on log α, γ_pq, β_pq, and τ_pq, enforcing the constraint that β_pq τ_pq < 1.
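For the exponential kernel the integral in (16) has a closed form, so the edge log-likelihood can be evaluated directly. The sketch below is our own helper (hypothetical names; a quadratic-time loop for clarity, although [10] gives a linear-time recursion).

import numpy as np

def edge_loglik(uv_times, qp_times, gamma, beta, tau, n_p, n_q, T):
    # log of equation (16), assuming no events before time 0.
    uv = np.sort(np.asarray(uv_times, dtype=float))
    qp = np.sort(np.asarray(qp_times, dtype=float))
    npq = n_p * n_q
    # Closed-form compensator: integral_0^T lambda_pq(t) dt
    integral = gamma * npq * T + beta * tau * (1.0 - np.exp(-(T - qp) / tau)).sum()
    ll = -integral / npq
    for t in uv:                                  # one intensity term per event
        past = qp[qp < t]
        lam = gamma * npq + beta * np.exp(-(t - past) / tau).sum()
        ll += np.log(lam / npq)
    return ll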
5 Inference
We perform posterior inference using Markov chain Monte Carlo. Our model is a departure from
previous IRM-based models as there is no conjugate prior for the likelihood. Thus we cannot simply
integrate out these parameters, and must sample them.
To infer the partition of individuals π, the concentration parameter α, and the parameters of each Hawkes process θ_pq = (γ_pq, β_pq, τ_pq), we use Algorithm 5 [11] adapted to the IRM and slice sampling [12] to draw samples from the posterior. We initialise the chain from the prior. Slice sampling is used for α and each of γ_pq, β_pq, and τ_pq. When setting the bounds of the slice sampler for β_pq (τ_pq) we set the upper bound to 1/τ_pq (1/β_pq) respectively, to ensure that β_pq τ_pq ≤ 1.
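A minimal version of the bounded slice-sampling update used here might look as follows. This is our own sketch of Neal's shrinkage procedure [12], not the paper's code; the names are hypothetical, logp is the log posterior density of the parameter being updated, and x0 is assumed to lie in the support.

import numpy as np

def slice_sample_bounded(logp, x0, lo, hi, seed=None):
    # One update of x on the interval (lo, hi); e.g. hi = 1/tau_pq when
    # sampling beta_pq, which enforces beta_pq * tau_pq <= 1.
    rng = np.random.default_rng(seed)
    log_u = logp(x0) + np.log(rng.random())   # log height of the slice
    left, right = lo, hi
    while True:
        x1 = rng.uniform(left, right)
        if logp(x1) >= log_u:
            return x1
        if x1 < x0:                            # shrink the interval towards x0
            left = x1
        else:
            right = x1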
6 Related work
Several authors have considered modelling occurrence events [13, 14, 15] using piecewise constant rate Markov point processes for a known number of event types. Our work directly models interaction events (where an event is structured to have a sender and recipient) and the number of possible event types is not limited. [16] describes a model of occurrence events as a discrete time-series
using a latent first-order Markov model. Our model differs in that it considers interaction events in
continuous time and requires no first-order assumption.
The model in Section 2 relates the work of [17] to the IRM [1], yielding a version of their model
that learns the number of clusters whilst maintaining conjugacy. However our model does not use a
Poisson process to model event times, instead using processes which have a time-varying rate.
Simma and Jordan [10] describe a cascade of Poisson processes, forming a marked Hawkes process.
Hawkes processes are also the basis of this work, however our work does not use side-channel
information to group individuals by imposing fixed marks on the process; instead we learn structure
among several co-dependent Hawkes processes and use Bayesian inference for the parameters and
structure.
Paninski et al. [18, 19] describe a process similar to a Hawkes process that uses an additional link
function to allow for inhibition amongst neurons. The interest is in modelling the activation and
co-activation of neurons and as such they do not directly model cluster structure among the neurons,
while our model does model this structure. Learning such structure among neurons is a potential
interesting future application of this model.
Our model may also be seen as a probabilistic interpretation of the interaction rank of [20], which
we leverage to discover global clustering structure. An interesting future direction would be to learn
a per-person (i.e., ego-centric) clustering structure.
7 Experiments
In our experiments we compared our model to the Poisson process IRM (Section 2), a single Hawkes
process and a single Poisson process. These latter two models are equivalent to the first two models
where just one cluster is used.
We compared these models quantitatively by comparing their log predictive densities (with respect
to the space of ordered sequences of events) on events falling in the final 10% of the total time of the
data (Table 2). We normalised the times of all events such that the first 90% of total time lay in the
interval [0, 1]. We ran our inference algorithm for 5000 iterations and discarded the first 500 burn-in
samples, repeating each experiment 10 times from different initialisations from the prior.
Synthetic data We generated synthetic data to highlight differences between our model and the alternatives. The data involves three individuals and is plotted in Figure 1. Table 1 shows details of the
fit of the model to the data, and Table 2 shows the predictive results. The Poisson IRM is uncertain
how to cluster individuals as it cannot model the temporal dependence between individuals, while
the Hawkes IRM can and so performs better at prediction as well. A single Hawkes process does
not model the structure among individuals and so performs worse than the Hawkes IRM, although
it is able to model dependence among events.
Enron email threads We took the five longest threads from the Enron 2009 data set [21]. We
identified threads by the set of senders and receivers, and their subject line (after removing common
subject line prefixes such as "Re:", "Fwd:" and so on, removing punctuation and making all letters
lower case). All of these threads involve two different people so there is little scope for learning
much group structure in these data: either both people are in the same cluster, or they are in two
separate clusters. However, as can be seen in Table 2, these data suggest a predictive advantage
to using mutually-exciting Hawkes processes, as automatically determined by our model, instead
of a single self-exciting Hawkes process and of both of these approaches over their corresponding
Poisson processes model. A self-exciting Hawkes process is unable to mark the sender and receiver
of events as differing, whilst Poisson process-based models are unable to model the causal structure
of events.
Santa Barbara Conversation Corpus We took five conversations from the Santa Barbara Conversation Corpus [22] involving the largest number of people. These results are labelled "SB conv" followed by the conversation identifier in this corpus, in the results in Tables 1 and 2. These conversations
[Figure 2 omitted: two cluster graphs, with participant labels (GRAN, ROSE, DAN, PATT, BERN, CHER, PAUL, KARE, DERE, and others) on the left, and country codes (USA, KUW, AFG, TAW, IRQ, RUS, CHN) on the right.]
Figure 2: Graphs of clusters of individuals inferred by our model. Edge width and temperature (how red the colour is) denote the expected rate of events between pairs of clusters (using equations (9) and (10); edges whose marginal rate is below 1 are not included). On the left is the graph inferred on the "SB conv 26" data set. On the right is the graph inferred on the "Small MID" data set.
cover a variety of social situations: questions during a university lecture (12), a book
discussion group (23), a meeting among city officials (26), a family argument/discussion (33), and a
conversation at a family birthday party (49). We modelled the turn-taking behaviour of these groups
by taking the times of when one speaker switched to the next. In Figure 2(left) we show the cluster
graph found by our model for conversation 26, involving city officials discussing a grant application.
The identities of participants in all of these data are anonymised, preventing an exact interpretation.
However, the model captures the discussive to-and-fro of the meeting, where PATT appears to be the
chair of the meeting, and DAN, ROSE and GRAN are the main discussors, all of whom discuss with
the chair, initially in question and an answer format, and among themselves, with other members of
the audience chipping in sporadically.
Correlates of war We use version 3.0 of the Militarized Interstate Disputes (MIDs) data set [23] to
model correlates of war. This data set spans the years 1993 to 2001, and consists of MID incidents,
along with the countries involved in the incidents. Incidents vary from diplomatic threats of military
force to the actual deployment of military force against another state. A detailed description of each
incident is available in [24].
The results of all models on the correlates of war data are given in Table 2, with details of the fits in Table 1 in the rows entitled "Small MID" and "Full MID". The full MID data set consists of 82 countries, yielding a large graph. For exposition purposes, we show the graph (in Figure 2 (right))
on part of the MID data set, by restricting to events among the USA, Kuwait, Afghanistan, Taiwan,
Russia, China, and Iraq. Thicker and redder lines between clusters (computed from equations 9 and
10) reflect a higher rate of incidents directed between the countries along the edge.
The results of the clustering given by our model are in keeping with that discussed in [24]. There
were three main conflicts involving the countries we modelled during the time period this data
covers. These conflicts involve 1) Russia and Afghanistan, 2) Taiwan (sometimes with support from
the USA) and China (sometimes with support from Russia), and 3) Iraq, Kuwait, and the USA. 1)
Revolved mostly around border disputes coming out of the Soviet war in Afghanistan, and incidents
sometimes involved using former Soviet countries as proxies. 2) Reflects conflict between Taiwan
and China over potential Taiwanese independence. Lastly, 3) deals with conflicts between Iraq and
either Kuwait or the USA coming out of the Persian Gulf war. It is interesting to note that groups
involving smaller countries were found to be more likely to initiate incidents with larger countries
in a dispute (e.g. Iraq was almost always the instigator of disputes in their conflict with Kuwait and
the USA). Since the data ends in 2001, relatively few disputes with Afghanistan involve the USA.
Data set     N    T     Hawkes IRM               Poisson IRM
                        E[K]   log probability   E[K]   log probability
Synthetic    3    239    2.00    594.04±0.01     1.36     533.65±0.00
Small MID    7    57     4.30     33.59±0.02     1.02     -63.99±0.03
Full MID     82   412   13.67   -638.25±1.16     3.93   -1412.49±5.38
Enron 0      2    896    2.00   6724.76±0.01     2.00    4516.77±0.00
Enron 1      2    204    2.00   1202.99±0.02     2.00     692.32±0.00
Enron 2      2    122    2.00    616.37±0.02     2.00     336.02±0.00
Enron 3      2    117    2.00    497.53±0.02     2.00     318.38±0.00
Enron 4      2    85     2.00    252.60±0.02     2.00     192.74±0.00
SB conv 23   18   832   11.87   1581.72±0.12     3.01     599.29±0.42
SB conv 26   11   95     4.26    170.34±0.03     2.00     -51.92±0.14
SB conv 12   12   133    4.11    233.41±0.03     2.53     -59.12±0.15
SB conv 49   11   620    8.85   1728.13±0.07     3.40     990.75±0.15
SB conv 33   10   499    8.44    803.22±0.16     2.03     431.59±0.12
Table 1: Details of data sets and fits of the structured models. N denotes the number of individuals in the data
set. T denotes the total number of events in the data set. E[K] is the average number of clusters found in the
posterior. Log probability is the average log probability of the training data.
Data set     Hawkes IRM      Poisson IRM      Hawkes           Poisson
Synthetic      43.00±0.00      -6.76±0.02      39.88±0.01       -3.88±0.00
Small MID      12.69±0.04     -50.88±0.02       6.37±0.01      -50.86±0.00
Full MID     -134.97±2.98    -355.29±5.61    -188.08±0.00     -302.65±0.00
Enron 0       259.20±0.01      39.33±0.00     233.44±0.00       40.11±0.00
Enron 1       436.66±0.01     133.29±0.00     380.27±0.01      105.71±0.00
Enron 2       139.40±0.01      24.14±0.00     118.86±0.00       22.88±0.00
Enron 3       124.22±0.01      21.06±0.00     101.71±0.01       21.03±0.00
Enron 4       127.82±0.02      28.38±0.00     109.62±0.00       22.08±0.00
SB conv 23    132.57±0.27    -198.34±0.23      30.93±0.00     -213.18±0.00
SB conv 26     -5.85±0.02     -16.83±0.09      -6.05±0.00      -14.54±0.00
SB conv 12     96.07±0.03     -97.89±0.10      33.18±0.00     -128.53±0.00
SB conv 49    220.85±0.09    -116.62±0.12     126.94±0.00      -83.62±0.00
SB conv 33     46.19±0.06    -100.83±0.04      21.71±0.00      -83.79±0.00
Table 2: Average log predictive results for each model, with standard errors.
8 Discussion
We have presented a Bayesian nonparametric approach to learning the structure among collections
of co-dependent Hawkes processes, which on several interaction data sets consistently outperforms
both unstructured and Poisson-based models in terms of predictive likelihoods. The intuition behind
why our model works well is that it captures part of the reciprocal nature of interactions among
individuals in social situations, which in turn requires modelling some of the causal relationship of
events. By learning this structure, our model is able to make better predictions.
There are several future directions. For example, individuals might contribute to groups differently
to one another. There may be different kinds of events between individuals and other side-channel
information. Both of these artefacts may be modelled by replacing the uniform thinning scheme
proposed above, with a detailed model of these effects. It would be interesting to consider other parameterisations of g_pq(·) that, for example, include periods of delay between reciprocation; the exponential parameterisation lends itself to efficient computation [10] whilst other parameterisations do not necessarily have this property. But different choices of g_pq(·) may yield better statistical
models. Another interesting avenue is to explore other structure amongst interaction events using
Hawkes processes, beyond reciprocity.
Acknowledgements The authors are grateful for helpful comments from the anonymous reviewers, and the support of Josh Tenenbaum, the Gatsby Charitable Foundation, PASCAL2 NoE, NIH
award P30 DA028803, and an NSF postdoctoral fellowship.
References
[1] Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori
Ueda. Learning systems of concepts with an infinite relational model. AAAI, 2006.
[2] Zhao Xu, Volker Tresp, Kai Yu, and Hans-Peter Kriegel. Infinite hidden relational models.
Uncertainty in Artificial Intelligence (UAI), 2006.
[3] Edoardo M. Airoldi, David M. Blei, Stephen E. Fienberg, and Eric P. Xing. Mixed membership stochastic blockmodels. Journal of Machine Learning Research, 9:1981–2014, 2008.
[4] Konstantina Palla, David A. Knowles, and Zoubin Ghahramani. An infinite latent attribute
model for network data. In Proceedings of the 29th International Conference on Machine
Learning, ICML 2012, July 2012.
[5] Alan G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83–90, 1971.
[6] Alan G. Hawkes. Point spectra of some mutually-exciting point processes. Journal of the Royal Statistical Society, Series B (Methodological), 33(3):438–443, 1971.
[7] John F. C. Kingman. Poisson Processes. Oxford University Press, 1993.
[8] Alan G. Hawkes and David Oakes. A cluster process representation of a self-exciting process. Journal of Applied Probability, 11(3):493–503, 1974.
[9] David Oakes. The Markovian self-exciting process. Journal of Applied Probability, 12(1):69–77, 1975.
[10] Aleskandr Simma and Michael I. Jordan. Modeling events with cascades of poisson processes.
Uncertainty in Artificial Intelligence (UAI), 2010.
[11] Radford M. Neal. Markov chain sampling methods for Dirichlet process mixture models.
Technical Report 9815, University of Toronto, 1998.
[12] Radford M. Neal. Slice sampling. Annals of Statistics, 31(3):705–767, 2003.
[13] Uri Nodelman, Christian R. Shelton, and Daphne Koller. Continuous time Bayesian networks.
Uncertainty in Artificial Intelligence (UAI), 2002.
[14] Shyamsundar Rajaram, Thore Graepel, and Ralf Herbrich. Poisson-networks: A model of
structured point processes. Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics (AISTATS), 2005.
[15] Asela Gunawardana, Christopher Meek, and Puyang Xu. A model for temporal dependencies
in event streams. Neural Information Processing Systems (NIPS), 2011.
[16] David Wingate, Noah D. Goodman, Daniel M. Roy, and Joshua B. Tenenbaum. The infinite
latent events model. Uncertainty in Artificial Intelligence (UAI), 2009.
[17] Christopher DuBois and Padhraic Smyth. Modeling relational events via latent classes. In
Proceedings of the 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining,
2010.
[18] Liam Paninski. Maximum likelihood estimation of cascade point-process neural encoding
models. Network, 2004.
[19] Liam Paninski, Jonathan Pillow, and Jeremy Lewi. Statistical models for neural encoding,
decoding, and optimal stimulus design. In Computational Neuroscience: Theoretical Insights
Into Brain Function. 2007.
[20] Maayan Roth, Assaf Ben-David, David Deutscher, Guy Flysher, Ilan Horn, Ari Leichtberg,
Naty Leiser, Yossi Matias, and Ron Merom. Suggesting friends using the implicit social graph.
In Proceedings of the 16th ACM SIGKDD Conference on Knowledge Discovery and Data
Mining, 2010.
[21] Enron 2009 Data set. http://www.cs.cmu.edu/~enron/.
[22] John W. DuBois, Wallace L. Chafe, Charles Meyer, and Sandra A. Thompson. Santa Barbara
corpus of spoken American English. Linguistic Data Consortium, 2000.
[23] Faten Ghosn, Glenn Palmer, and Stuart Bremer. The MID3 data set, 1993–2001: Procedures, coding rules, and description. Conflict Management and Peace Science, 21:133–154, 2004.
[24] Dispute Narratives. http://www.correlatesofwar.org/cow2%20data/mids/mid v3.0.narratives.pdf.
4,237 | 4,835 | Mixability in Statistical Learning
Tim van Erven
Université Paris-Sud, France
[email protected]
Peter D. Grünwald
CWI and Leiden University, the Netherlands
[email protected]
Mark D. Reid
ANU and NICTA, Australia
[email protected]
Robert C. Williamson
ANU and NICTA, Australia
[email protected]
Abstract
Statistical learning and sequential prediction are two different but related formalisms to study the quality of predictions. Mapping out their relations and transferring ideas is an active area of investigation. We provide another piece of the
puzzle by showing that an important concept in sequential prediction, the mixability of a loss, has a natural counterpart in the statistical setting, which we call
stochastic mixability. Just as ordinary mixability characterizes fast rates for the
worst-case regret in sequential prediction, stochastic mixability characterizes fast
rates in statistical learning. We show that, in the special case of log-loss, stochastic
mixability reduces to a well-known (but usually unnamed) martingale condition,
which is used in existing convergence theorems for minimum description length
and Bayesian inference. In the case of 0/1-loss, it reduces to the margin condition
of Mammen and Tsybakov, and in the case that the model under consideration
contains all possible predictors, it is equivalent to ordinary mixability.
1 Introduction
In statistical learning (also called batch learning) [1] one obtains a random sample (X_1, Y_1), . . . , (X_n, Y_n) of independent pairs of observations, which are all distributed according to the same distribution P*. The goal is to select a function f̂ that maps X to a prediction f̂(X) of Y for a new pair (X, Y) from the same P*. The quality of f̂ is measured by its excess risk, which is the expectation of its loss ℓ(Y, f̂(X)) minus the expected loss of the best prediction function f* in a given class of functions F. Analysis in this setting usually involves giving guarantees about the performance of f̂ in the worst case over the choice of the distribution of the data.
In contrast, the setting of sequential prediction (also called online learning) [2] makes no probabilistic assumptions about the source of the data. Instead, pairs of observations (x_t, y_t) are assumed to become available one at a time, in rounds t = 1, . . . , n, and the goal is to select a function f̂_t just before round t, which maps x_t to a prediction of y_t. The quality of predictions f̂_1, . . . , f̂_n is evaluated by their regret, which is the sum of their losses ℓ(y_1, f̂_1(x_1)), . . . , ℓ(y_n, f̂_n(x_n)) on the actual observations minus the total loss of the best fixed prediction function f* in a class of functions F. In sequential prediction the usual analysis involves giving guarantees about the performance of f̂_1, . . . , f̂_n in the worst case over all possible realisations of the data. When stating rates of convergence, we will divide the worst-case regret by n, which makes the rates comparable to rates in the statistical learning setting.
Mapping out the relations between statistical learning and sequential prediction is an active area of
investigation, and several connections are known. For example, using any of a variety of online-to-batch conversion techniques [3], any sequential predictions f̂_1, . . . , f̂_n may be converted into a single statistical prediction f̂, and the statistical performance of f̂ is bounded by the sequential prediction performance of f̂_1, . . . , f̂_n. Moreover, a deep understanding of the relation between worst-case rates in both settings is provided by Abernethy, Agarwal, Bartlett and Rakhlin [4]. Amongst others, their results imply that for many loss functions the worst-case rate in sequential prediction exceeds the worst-case rate in statistical learning.
Fast Rates In sequential prediction with a finite class F, it is known that the worst-case regret can be bounded by a constant if and only if the loss ℓ has the property of being mixable [5, 6] (subject to mild regularity conditions on the loss). Dividing by n, this corresponds to O(1/n) rates, which is fast compared to the usual O(1/√n) rates.
In statistical learning, there are two kinds of conditions that are associated with fast rates. First, for 0/1-loss, fast rates (faster than O(1/√n)) are associated with Mammen and Tsybakov's margin condition [7, 8], which depends on a parameter κ. In the nicest case, κ = 1 and then O(1/n) rates are possible. Second, for log(arithmic) loss there is a single supermartingale condition that is essential to obtain fast rates in all convergence proofs of two-part minimum description length (MDL) estimators, and in many convergence proofs of Bayesian estimators. This condition, used by e.g. [9, 10, 11, 12, 13, 14], sometimes remains implicit (see Example 1 below) and usually goes unnamed. A special case has been called the "supermartingale property" by Chernov, Kalnishkan, Zhdanov and Vovk [15]. Audibert [16] also introduced a closely related condition, which does seem subtly different however.
Our Contribution We define the notion of stochastic mixability of a loss ℓ, set of predictors F, and distribution P*, which we argue to be the natural analogue of mixability for the statistical setting on two grounds: first, we show that it is closely related to both the supermartingale condition and the margin condition, the two properties that are known to be related to fast rates; second, we show that it shares various essential properties with ordinary mixability and in specific cases is even equivalent to ordinary mixability.
To support the first part of our argument, we show the following: (a) for bounded losses (including 0/1-loss), stochastic mixability is equivalent to the best case (κ = 1) of a generalization of the margin condition; other values of κ may be interpreted in terms of a slightly relaxed version of stochastic mixability; (b) for log-loss, stochastic mixability reduces to the supermartingale condition; (c) in general, stochastic mixability allows uniform O(log |F_n|/n)-statistical learning rates to be achieved, where |F_n| is the size of a sub-model F_n ⊆ F considered at sample size n. Finally, (d) if stochastic mixability does not hold, then in general O(log |F_n|/n)-statistical learning rates cannot be achieved, at least not for 0/1-loss or for log-loss.
To support the second part of our argument, we show: (e) if the set F is "full", i.e. it contains all prediction functions for the given loss, then stochastic mixability turns out to be formally equivalent to ordinary mixability (if F is not full, then either condition may hold without the other). We choose to call our property stochastic mixability rather than, say, "generalized margin condition for κ = 1" or "generalized supermartingale condition", because (f) we also show that the general condition can be formulated in an alternative way (Theorem 2) that directly indicates a strong relation to ordinary mixability, and (g) just like ordinary mixability, it can be interpreted as the requirement that a set of so-called pseudo-likelihoods is (effectively) convex.
We note that special cases of results (a)–(e) already follow from existing work of many other authors; we provide a detailed comparison in Section 7. Our contributions are to generalize these results, and
to relate them to each other, to the notion of mixability from sequential prediction, and to the interpretation in terms of convexity of a set of pseudo-likelihoods. This leads to our central conclusion:
the concept of stochastic mixability is closely related to mixability and plays a fundamental role in
achieving fast rates in the statistical learning setting.
Outline In §2 we define both ordinary mixability and stochastic mixability. We show that two of the standard ways to express mixability have natural analogues that express stochastic mixability (leading to (f)). In Example 1 we specialize the definition to log-loss and explain its importance in the literature on MDL and Bayesian inference, leading to (b). A third interpretation of mixability and standard mixability in terms of sets (g) is described in §3. The equivalence between mixability and stochastic mixability if F is full is presented in §4, where we also show that the equivalence need not hold if F is not full (e). In §5, we turn our attention to a version of the margin condition that does not assume that F contains the Bayes optimal predictor and we show that (a slightly relaxed version of) stochastic mixability is equivalent to the margin condition, taking care of (a). We show (§6) that if stochastic mixability holds, O(log |F_n|/n)-rates can always be achieved (c), and that in some cases in which it does not hold, O(log |F_n|/n)-rates cannot be achieved (d). Finally (§7) we connect our results to previous work in the literature. Proofs omitted from the main body of the paper are in the supplementary material.
2 Mixability and Stochastic Mixability
We now introduce the notions of mixability and stochastic mixability, showing two equivalent formulations of the latter.
2.1 Mixability
A loss function ℓ : Y × A → [0, ∞] is a nonnegative function that measures the quality of a prediction a ∈ A when the true outcome is y ∈ Y by ℓ(y, a). We will assume that all spaces come equipped with appropriate σ-algebras, so we may define distributions on them, and that the loss function ℓ is measurable.
Definition 1 (Mixability). For η > 0, a loss ℓ is called η-mixable if for any distribution π on A there exists a single prediction a* such that
ℓ(y, a*) ≤ −(1/η) ln ∫ e^{−ηℓ(y,a)} π(da)   for all y.   (1)
It is called mixable if there exists an η > 0 such that it is η-mixable.
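For a finite outcome set Y and a finite set of predictions A, inequality (1) can be probed numerically by drawing random mixtures π. The sketch below is our own (hypothetical names, our choice of numpy); a single failure disproves η-mixability on that grid, while passing is only evidence in favour.

import numpy as np

def probe_eta_mixability(loss, eta, Y, A, n_mix=200, seed=None, tol=1e-9):
    rng = np.random.default_rng(seed)
    L = np.array([[loss(y, a) for a in A] for y in Y])  # |Y| x |A| loss matrix
    for _ in range(n_mix):
        pi = rng.dirichlet(np.ones(len(A)))
        # Mix loss from (1): -(1/eta) * ln sum_a pi(a) * exp(-eta * loss(y, a))
        mix = -np.log(np.exp(-eta * L) @ pi) / eta
        # Need a single a* in A with loss(y, a*) <= mix(y) for every y.
        if not np.any((L.T <= mix + tol).all(axis=1)):
            return False
    return True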
Let A be a random variable with distribution π. Then (1) may be rewritten as
E_π[e^{−ηℓ(y,A)} / e^{−ηℓ(y,a*)}] ≤ 1   for all y.   (2)
2.2 Stochastic Mixability
Let F be a set of predictors f : X → A, which are measurable functions that map any input x ∈ X to a prediction f(x). For example, if A = Y = {0, 1} and the loss is the 0/1-loss, ℓ_{0/1}(y, a) = 1{y ≠ a}, then the predictors are classifiers. Let P* be the distribution of a pair of random variables (X, Y) with values in X × Y. Most expectations in the paper are with respect to P*. Whenever this is not the case we will add a subscript to the expectation operator, as in (2).
Definition 2 (Stochastic Mixability). For any η ≥ 0, we say that (ℓ, F, P*) is η-stochastically mixable if there exists an f* ∈ F such that
E[e^{−ηℓ(Y,f(X))} / e^{−ηℓ(Y,f*(X))}] ≤ 1   for all f ∈ F.   (3)
We call (ℓ, F, P*) stochastically mixable if there exists an η > 0 such that it is η-stochastically mixable.
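Given samples from P* and losses for each predictor in a finite F, condition (3) can be checked in its plug-in form. This is our own sketch (hypothetical names); the sample means are only Monte Carlo estimates of the expectations in (3).

import numpy as np

def stochastic_mixability_gap(losses, eta):
    # losses[j, k] = loss of predictor k on sample j; f* is taken to be the
    # empirical risk minimizer. Condition (3) holds in-sample iff gap <= 0.
    losses = np.asarray(losses, dtype=float)
    f_star = losses.mean(axis=0).argmin()
    ratios = np.exp(-eta * (losses - losses[:, [f_star]]))
    return ratios.mean(axis=0).max() - 1.0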
By Jensen's inequality, we see that (3) implies 1 ≥ E[e^{−ηℓ(Y,f(X))} / e^{−ηℓ(Y,f*(X))}] ≥ e^{E[η(ℓ(Y,f*(X)) − ℓ(Y,f(X)))]}, so that
E[ℓ(Y, f*(X))] ≤ E[ℓ(Y, f(X))]   for all f ∈ F,
and hence the definition of stochastic mixability presumes that f* minimizes E[ℓ(Y, f(X))] over all f ∈ F. We will assume throughout the paper that such an f* exists, and that E[ℓ(Y, f*(X))] < ∞.
The larger η, the stronger the requirement of η-stochastic mixability:
Proposition 1. Any triple (ℓ, F, P*) is 0-stochastically mixable. And if 0 < γ < η, then η-stochastic mixability implies γ-stochastic mixability.
Example 1 (Log-loss). Let F be a set of conditional probability densities and let ℓ_log be log-loss, i.e. A is the set of densities on Y, f(x)(y) is written, as usual, as f(y | x), and ℓ_log(y, f(x)) := −ln f(y | x). For log-loss, statistical learning becomes equivalent to conditional density estimation with random design (see, e.g., [14]). Equation (3) now becomes equivalent to
A_η(f‖f*) := E[(f(Y | X) / f*(Y | X))^η] ≤ 1.   (4)
A_η has been called the generalized Hellinger affinity [12] in the literature. If the model is correct, i.e. it contains the true conditional density p*(y | x), then, because the log-loss is a proper loss [17], we must have f* = p* and then, for η = 1, trivially A_η(f‖f*) = 1 for all f ∈ F. Thus if the model F is correct, then the log-loss is η-stochastically mixable for η = 1. In that case, for η = 1/2, A_η turns into the standard definition of Hellinger affinity [10].
Equation (4), which just expresses 1-stochastic mixability for log-loss, is used in all previous convergence theorems for 2-part MDL density estimation [10, 12, 11, 18], and, more implicitly, in various convergence theorems for Bayesian procedures, including the pioneering paper by Doob [9]. All these results assume that the model F is correct, but, if one studies the proofs, one finds that the assumption is only needed to establish that (4) holds for η = 1. For example, as first noted by [12], if F is a convex set of densities, then (4) also holds for η = 1, even if the model is incorrect, and, indeed, two-part MDL converges at fast rates in such cases (see [14] for a precise definition of what this means, as well as a more general treatment of (4)). Kleijn and Van der Vaart [13], in their extensive analysis of Bayesian nonparametric inference if the model is wrong, also use the fact that (4) holds with η = 1 for convex models to show that fast posterior concentration rates hold for such models even if they do not contain the true p*.
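In the log-loss case, the quantity in (4) is easy to estimate from a sample. A small sketch of our own (hypothetical names; f_dens and fstar_dens stand for the conditional densities f(y | x) and f*(y | x)):

import numpy as np

def generalized_affinity(f_dens, fstar_dens, xs, ys, eta=1.0):
    # Monte Carlo estimate of A_eta(f || f*) = E[(f(Y|X) / f*(Y|X))^eta];
    # equation (4) asks that this quantity be at most 1.
    ratios = np.array([f_dens(y, x) / fstar_dens(y, x) for x, y in zip(xs, ys)])
    return np.mean(ratios ** eta)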
The definition of stochastic mixability looks similar to (2), but whereas π is a distribution on predictions, P* is a distribution on outcomes (X, Y). Thus at first sight the resemblance appears to be
only superficial. It is therefore quite surprising that stochastic mixability can also be expressed in a
way that looks like (1), which provides a first hint that the relation goes deeper.
Theorem 2. Let η > 0. Then (ℓ, F, P*) is η-stochastically mixable if and only if for any distribution π on F there exists a single predictor f_π ∈ F such that
E[ℓ(Y, f_π(X))] ≤ E[−(1/η) ln ∫ e^{−ηℓ(Y,f(X))} π(df)].   (5)
Notice that, without loss of generality, we can always choose f_π to be the minimizer of E[ℓ(Y, f(X))]. Then f_π does not depend on π.
3 The Convexity Interpretation
There is a third way to express mixability, as the convexity of a set of so-called pseudo-likelihoods.
We will now show that stochastic mixability can also be interpreted as convexity of the corresponding set in the statistical learning setting.
Following Chernov et al. [15], we first note that the essential feature of a loss ` with corresponding
set of predictions A is the set of achievable losses they induce:
L = {l : Y → [0, ∞] | ∃a ∈ A : l(y) = ℓ(y, a) for all y ∈ Y}.
If we would reparametrize the loss by a different set of predictions A′, while keeping L the same, then essentially nothing would change. For example, for 0/1-loss standard ways to parametrize predictions are by A = {0, 1}, by A = {−1, +1} or by A = R with the interpretation that predicting a ≥ 0 maps to the prediction 1 and a < 0 maps to the prediction 0. Of course these are all equivalent, because L is the same.
It will be convenient to consider the set of functions that lie above the achievable losses in L:
S = S_ℓ = {l : Y → [0, ∞] | ∃l′ ∈ L : l(y) ≥ l′(y) for all y ∈ Y}.
Chernov et al. call this the super prediction set. It plays a role similar to the role of the epigraph
of a function in convex analysis. Let η > 0. Then with each element l ∈ S in the super prediction set, we associate a pseudo-likelihood p(y) = e^{−ηl(y)}. Note that 0 ≤ p(y) ≤ 1, but it is generally not the case that ∫ p(y) μ(dy) = 1 for some reference measure μ on Y, so p(y) is not normalized.
[Figure 1 omitted: two panels, one stochastically mixable and one not, each depicting P*, f*, the set P_F(η) and its convex hull co P_F(η).]
Figure 1: The relation between convexity and stochastic mixability for log-loss, η = 1 and X = {x} a singleton, in which case P* and the elements of P_F(η) can all be interpreted as distributions on Y.
Let e^{−ηS} = {e^{−ηl} | l ∈ S} denote the set of all such pseudo-likelihoods. By multiplying (1) by −η and exponentiating, it can be shown that η-mixability is exactly equivalent to the requirement that e^{−ηS} is convex [2, 15]. And like for the first two expressions of mixability, there is an analogous convexity interpretation for stochastic mixability.
In order to define pseudo-likelihoods in the statistical setting, we need to take into account that the
predictions f (X) of the predictors in F are not deterministic, but depend on X. Hence we define
conditional pseudo-likelihoods p(Y | X) = e^{−ηℓ(Y,f(X))}. (See also Example 1.) There is no need to introduce a conditional analogue of the super prediction set. Instead, let P_F(η) = {e^{−ηℓ(Y,f(X))} | f ∈ F} denote the set of all conditional pseudo-likelihoods. For λ ∈ [0, 1], a convex combination of any two p_0, p_1 ∈ P_F(η) can be defined as p_λ(Y | X) = (1 − λ)p_0(Y | X) + λp_1(Y | X). And consequently, we may speak of the convex hull co P_F(η) = {p_λ | p_0, p_1 ∈ P_F(η), λ ∈ [0, 1]} of P_F(η).
Corollary 3. Let η > 0. Then η-stochastic mixability of (ℓ, F, P*) is equivalent to the requirement that
min_{p ∈ P_F(η)} E[−(1/η) ln p(Y | X)] = min_{p ∈ co P_F(η)} E[−(1/η) ln p(Y | X)].   (6)
Proof. This follows directly from Theorem 2 after rewriting it in terms of conditional pseudo-likelihoods.
Notice that the left-hand side of (6) equals E[ℓ(Y, f*(X))], which does not depend on η.
Equation (6) expresses that the convex hull operator has no effect, which means that P_F(η) looks convex from the perspective of P*. See Figure 1 for an illustration for log-loss. Thus we obtain an interpretation of η-stochastic mixability as effective convexity of the set of pseudo-likelihoods P_F(η) with respect to P*.
Figure 1 suggests that f* should be unique if the loss is stochastically mixable, which is almost right. It is in fact the loss ℓ(Y, f*(X)) of f* that is unique (almost surely):
Corollary 4. If (ℓ, F, P*) is stochastically mixable and there exist f*, g* ∈ F such that E[ℓ(Y, f*(X))] = E[ℓ(Y, g*(X))] = min_{f∈F} E[ℓ(Y, f(X))], then ℓ(Y, f*(X)) = ℓ(Y, g*(X)) almost surely.
Proof. Let π(f*) = π(g*) = 1/2. Then, by Theorem 2 and (strict) convexity of −ln,
min_{f∈F} E[ℓ(Y, f(X))] ≤ E[−(1/η) ln((1/2) e^{−ηℓ(Y,f*(X))} + (1/2) e^{−ηℓ(Y,g*(X))})]
   ≤ E[(1/2) ℓ(Y, f*(X)) + (1/2) ℓ(Y, g*(X))] = min_{f∈F} E[ℓ(Y, f(X))].
Hence both inequalities must hold with equality. For the second inequality this is only the case if ℓ(Y, f*(X)) = ℓ(Y, g*(X)) almost surely, which was to be shown.
4 When Mixability and Stochastic Mixability Are the Same
Having observed that mixability and stochastic mixability of a loss share several common features, we now show that in specific cases the two concepts even coincide. More specifically, Theorem 5 below shows that a loss ℓ (meeting two requirements) is η-mixable if and only if it is η-stochastically mixable relative to F_full, the set of all functions from X to A, and all distributions P*. To avoid measurability issues, we will assume that X is countable throughout this section.
The two conditions we assume of ℓ are both related to its set of pseudo-likelihoods e^{−ηS}, which was defined in Section 3. The first condition is that e^{−ηS} is closed. When Y is infinite, we mean closed relative to the topology for the supremum norm ‖p‖_∞ = sup_{y∈Y} |p(y)|. The second, more technical condition is that e^{−ηS} is pre-supportable. That is, for every pseudo-likelihood p ∈ e^{−ηS}, its pre-image s ∈ S (defined for each y ∈ Y by s(y) := −(1/η) ln p(y)) is supportable. Here, a point s ∈ S is supportable if it is optimal for some distribution P_Y over Y; that is, if there exists a distribution P_Y over Y such that E_{P_Y}[s(Y)] ≤ E_{P_Y}[t(Y)] for all t ∈ S. This is the case, for example, for all proper losses [17].
We say (`, F) is ?-stochastically mixable if (`, F, P ? ) is ?-stochastically mixable for all distributions
P ? on X ? Y.
Theorem 5. Suppose X is countable. Let ? > 0 and suppose ` is a loss such that its pseudolikelihood set e ?S is closed and pre-supportable. Then (`, Ffull ) is ?-stochastically mixable if and
only if ` is ?-mixable.
This result generalizes Theorem 9 and Lemma 11 of Chernov et al. [15] from finite Y to arbitrary continuous Y, which they raised as an open question. In their setting, there are no explanatory variables x, which may be emulated in our framework by letting X contain only a single element. Their conditions also imply (by their Lemma 10) that the loss ℓ is proper, which implies that e^{−ηS} is closed and pre-supportable. We note that for proper losses η-mixability is especially well understood [19].
The proof of Theorem 5 is broken into two lemmas (the proofs of which are in the supplementary material). The first establishes conditions for when mixability implies stochastic mixability, borrowing from a similar result for log-loss by Li [12].
Lemma 6. Let η > 0. Suppose the Bayes optimal predictor f_B*(x) ∈ arg min_{a∈A} E[ℓ(Y, a) | X = x] is in the model: f_B* = f* ∈ F. If ℓ is η-mixable, then (ℓ, F, P*) is η-stochastically mixable.
The second lemma shows that stochastic mixability implies mixability.
Lemma 7. Suppose the conditions of Theorem 5 are satisfied. If (ℓ, F_full) is η-stochastically mixable, then it is η-mixable.
The above two lemmata are sufficient to prove the equivalence of stochastic and ordinary mixability.
Proof of Theorem 5. In order to show that η-mixability of ℓ implies η-stochastic mixability of (ℓ, F_full), we note that the Bayes-optimal predictor f_B* for any ℓ and P* must be in F_full and so Lemma 6 implies that (ℓ, F_full, P*) is η-stochastically mixable for any distribution P*. Conversely, that η-stochastic mixability of (ℓ, F_full) implies the η-mixability of ℓ follows immediately from Lemma 7.
Example 2 (if F is not full). In this case, we can have either stochastic mixability without ordinary mixability or the converse. Consider a loss function ℓ that is not mixable in the ordinary sense, e.g. ℓ = ℓ_{0/1}, the 0/1-loss [6], and a set F consisting of just a single predictor. Then clearly ℓ is stochastically mixable relative to F. This is, of course, a trivial case. We do not know whether we can have stochastic mixability without ordinary mixability in nontrivial cases, and plan to investigate this in future work. For the converse, we know that it does hold in nontrivial cases: consider the log-loss ℓ_log, which is 1-mixable in the standard sense (Example 1). Let Y = {0, 1} and let the model F be a set of conditional probability mass functions {f_θ | θ ∈ Θ}, where Θ is the set of all classifiers, i.e. all functions X → Y, and f_θ(y | x) := e^{−ℓ_{0/1}(y, θ(x))}/(1 + e^{−1}), where ℓ_{0/1}(y, ŷ) = 1{y ≠ ŷ} is the 0/1-loss. Then log-loss becomes an affine function of 0/1-loss: for each θ ∈ Θ, ℓ_log(Y, f_θ(X)) = ℓ_{0/1}(Y, θ(X)) + C with C = ln(1 + e^{−1}) [14]. Because 0/1-loss is not standard mixable, by Theorem 5, 0/1-loss is not stochastically mixable relative to Θ. But then we must also have that log-loss is not stochastically mixable relative to F.
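For completeness, the affine relation used in this example follows in one line from the definition of f_θ given above; this derivation is our own spelling-out of the step cited from [14]:

```latex
\ell_{\log}(y, f_\theta(x))
  = -\ln f_\theta(y \mid x)
  = -\ln \frac{e^{-\ell_{0/1}(y,\,\theta(x))}}{1 + e^{-1}}
  = \ell_{0/1}(y, \theta(x)) + \ln(1 + e^{-1}).
```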
5 Stochastic Mixability and the Margin Condition
The excess risk of any f compared to f* is the mean of the excess loss ℓ(Y, f(X)) − ℓ(Y, f*(X)):

    d(f, f*) = E[ ℓ(Y, f(X)) − ℓ(Y, f*(X)) ].

We also define the expected square of the excess loss, which is closely related to its variance:

    V(f, f*) = E[ ( ℓ(Y, f(X)) − ℓ(Y, f*(X)) )² ].

Note that, for 0/1-loss, V(f, f*) = P*(f(X) ≠ f*(X)) is the probability that f and f* disagree.
The margin condition, introduced by Mammen and Tsybakov [7, 8] for 0/1-loss, is satisfied with constants κ ≥ 1 and c_0 > 0 if

    c_0 V(f, f*)^κ ≤ d(f, f*)    for all f ∈ F.    (7)
Unlike Mammen and Tsybakov, we do not assume that F necessarily contains the Bayes predictor, which minimizes the risk over all possible predictors. The same generalization has been used in the context of model selection by Arlot and Bartlett [20].
Remark 1. In some practical cases, the margin condition only holds for a subset of the model such that V(f, f*) ≤ ε_0 for some ε_0 > 0 [8]. In such cases, the discussion below applies to the same subset.
Stochastic mixability, as we have defined it, is directly related to the margin condition for the case κ = 1. In order to relate it to other values of κ, we need a little more flexibility: for given ε ≥ 0 and (ℓ, F, P*), we define

    F_ε = {f*} ∪ {f ∈ F | d(f, f*) ≥ ε},    (8)

which excludes a band of predictors that approximate the best predictor in the model to within excess risk ε.
Theorem 8. Suppose a loss ℓ takes values in [0, V] for 0 < V < ∞. Fix a model F and distribution P*. Then the margin condition (7) is satisfied if and only if there exists a constant C > 0 such that, for all ε > 0, (ℓ, F_ε, P*) is η-stochastically mixable for η = C ε^{(κ−1)/κ}. In particular, if the margin condition is satisfied with constants κ and c_0, we can take

    C = min{ (V^{−2} c_0)^{1/κ} / (e^V − V − 1), (1/V)^{(κ−1)/κ} }.
This theorem gives a new interpretation of the margin condition as the rate at which η has to go to 0 when the model F is approximated by η-stochastically mixable models F_ε. By the following corollary, proved in the additional material, stochastic mixability of the whole model F is equivalent to the best case of the margin condition.
Corollary 9. Suppose ℓ takes values in [0, V] for 0 < V < ∞. Then (ℓ, F, P*) is stochastically mixable if and only if there exists a constant c_0 > 0 such that the margin condition (7) is satisfied with κ = 1.
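Spelled out using the definitions of d and V above, this best case κ = 1 of (7) is a Bernstein-type condition: the second moment of the excess loss is bounded by a constant multiple of its mean,

```latex
c_0\, \mathbb{E}\!\left[\big(\ell(Y, f(X)) - \ell(Y, f^*(X))\big)^2\right]
\;\le\; \mathbb{E}\!\left[\ell(Y, f(X)) - \ell(Y, f^*(X))\right]
\qquad \text{for all } f \in \mathcal{F}.
```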
6 Connection to Uniform O(log |F_n|/n) Rates
Let ℓ be a bounded loss function. Assume that, at sample size n, an estimator f̂ (statistical learning algorithm) is used based on a finite model F_n, where we allow the size |F_n| to grow with n. Let, for all n, P_n be any set of distributions on X × Y such that for all P* ∈ P_n, the generalized margin condition (7) holds for κ = 1 and a uniform constant c_0 not depending on n, with model F_n. In the case of 0/1-loss, the results of e.g. Tsybakov [8] suggest that there exist estimators f̂_n : (X × Y)^n → F_n that achieve a convergence rate of O(log |F_n|/n), uniformly for all P* ∈ P_n; that is,

    sup_{P* ∈ P_n} E_{P*}[d(f̂_n, f*)] = O(log |F_n|/n).    (9)

This can indeed be proven, for general loss functions, using Theorem 4.2 of Zhang [21] with f̂_n set to Zhang's information-risk-minimization estimator (to see this, at sample size n apply Zhang's result with its tuning parameter set to 0 and a prior π that is uniform on F_n, so that −log π(f) = log |F_n| for any f ∈ F_n). By Theorem 8, this means that, for any bounded loss function ℓ, if, for some η > 0 and all n, we have that (ℓ, F_n, P*) is η-stochastically mixable for all P* ∈ P_n, then Zhang's estimator satisfies (9). Hence, for bounded loss functions, stochastic mixability implies a uniform O(log |F_n|/n) rate.
A connection between stochastic mixability and fast rates is also made by Grünwald [14], who introduces some slack in the definition (allowing the number 1 in (3) to be slightly larger) and uses the convexity interpretation from Section 3 to empirically determine the largest possible value for η. His Theorem 2, applied with the slack set to 0, implies an in-probability version of Zhang's result above.
Example 3. We just explained that, if ℓ is stochastically mixable relative to F_n, then uniform O(log |F_n|/n) rates can be achieved. We now illustrate that if this is not the case, then, at least if ℓ is 0/1-loss or if ℓ is log-loss, uniform O(log |F_n|/n) rates cannot be achieved in general. To see this, let Θ_n be a finite set of classifiers θ : X → Y, Y = {0, 1}, and let ℓ be 0/1-loss. Let, for each n, f̂_n : (X × Y)^n → Θ_n be some arbitrary estimator. It is known from e.g. the work of Vapnik [22] that for every sequence of estimators f̂_1, f̂_2, . . ., there exist a sequence Θ_1, Θ_2, . . ., all finite, and a sequence P*_1, P*_2, . . . such that

    E_{P*_n}[d(f̂_n, f*)] / (log |Θ_n|/n) → ∞.

Clearly then, by Zhang's result above, there cannot be an η such that for all n, (ℓ, Θ_n, P*_n) is η-stochastically mixable. This establishes that if stochastic mixability does not hold, then uniform rates of O(log |F_n|/n) are not achievable in general for 0/1-loss. By the construction of Example 2, we can modify Θ_n into a set of corresponding log-loss predictors F_n so that the log-loss ℓ_log becomes an affine function of the 0/1-loss; thus, there still is no η such that for all n, (ℓ_log, F_n, P*_n) is η-stochastically mixable, and the sequence of estimators still cannot achieve a uniform O(log |F_n|/n) rate with log-loss either.
7 Discussion – Related Work
Let us now return to the summary of our contributions, which we provided as items (a)–(g) in Section 1. We note that slight variations of our formula (3) for stochastic mixability already appear in [14] (but there no connections to ordinary mixability are made) and [15] (but there it is just a tool for the worst-case sequential setting, and no connections to fast rates in statistical learning are made). Equation (3) looks completely different from the margin condition, yet results connecting the two, somewhat similar to (a), albeit very implicitly, already appear in [23] and [24]. Also, the paper by Grünwald [14] contains a connection to the margin condition somewhat similar to Theorem 8, but involving a significantly weaker version of stochastic mixability in which the inequality (3) only holds with some slack. Result (b) is trivial given Definition 2; (c) is a consequence of Theorem 4.2 of [21] when combined with (a) (see Section 6). Result (d) (Theorem 5) is a significant extension of a similar result by Chernov et al. [15]; yet, our proof techniques and interpretation are completely different from those in [15]. Result (e), Example 3, is a direct consequence of (a). Result (f) (Theorem 2) is completely new, but the proof is partly based on ideas which already appear in [12] in a log-loss/MDL context, and (g) is a consequence of (f). Finally, Corollary 3 can be seen as analogous to the results of Lee et al. [25], who showed the role of convexity of F for fast rates in the regression setting with squared loss.
Acknowledgments
This work was supported by the ARC and by NICTA, funded by the Australian Government. It
was also supported in part by the IST Programme of the European Community, under the PASCAL
Network of Excellence, IST-2002-506778, and by NWO Rubicon grant 680-50-1112.
References
[1] O. Bousquet, S. Boucheron, and G. Lugosi. Introduction to statistical learning theory. In O. Bousquet, U. von Luxburg, and G. Rätsch, editors, Advanced Lectures on Machine Learning, volume 3176 of Lecture Notes in Computer Science, pages 169–207. Springer Berlin / Heidelberg, 2004.
[2] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[3] O. Dekel and Y. Singer. Data-driven online to batch conversions. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18 (NIPS), pages 267–274, Cambridge, MA, 2006. MIT Press.
[4] J. Abernethy, A. Agarwal, P. L. Bartlett, and A. Rakhlin. A stochastic view of optimal regret through minimax duality. In Proceedings of the 22nd Conference on Learning Theory (COLT), 2009.
[5] Y. Kalnishkan and M. V. Vyugin. The weak aggregating algorithm and weak mixability. Journal of Computer and System Sciences, 74:1228–1244, 2008.
[6] V. Vovk. A game of prediction with expert advice. In Proceedings of the 8th Conference on Learning Theory (COLT), pages 51–60. ACM, 1995.
[7] E. Mammen and A. B. Tsybakov. Smooth discrimination analysis. The Annals of Statistics, 27(6):1808–1829, 1999.
[8] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135–166, 2004.
[9] J. L. Doob. Application of the theory of martingales. In Le Calcul des Probabilités et ses Applications. Colloques Internationaux du Centre National de la Recherche Scientifique, pages 23–27, Paris, 1949.
[10] A. Barron and T. Cover. Minimum complexity density estimation. IEEE Transactions on Information Theory, 37(4):1034–1054, 1991.
[11] T. Zhang. From ε-entropy to KL entropy: analysis of minimum information complexity density estimation. Annals of Statistics, 34(5):2180–2210, 2006.
[12] J. Li. Estimation of Mixture Models. PhD thesis, Yale University, 1999.
[13] B. Kleijn and A. van der Vaart. Misspecification in infinite-dimensional Bayesian statistics. Annals of Statistics, 34(2), 2006.
[14] P. Grünwald. Safe learning: bridging the gap between Bayes, MDL and statistical learning theory via empirical convexity. In Proceedings of the 24th Conference on Learning Theory (COLT), 2011.
[15] A. Chernov, Y. Kalnishkan, F. Zhdanov, and V. Vovk. Supermartingales in prediction with expert advice. Theoretical Computer Science, 411:2647–2669, 2010.
[16] J.-Y. Audibert. Fast learning rates in statistical inference through aggregation. Annals of Statistics, 37(4):1591–1646, 2009.
[17] E. Vernet, R. C. Williamson, and M. D. Reid. Composite multiclass losses. In Advances in Neural Information Processing Systems 24 (NIPS), 2011.
[18] P. Grünwald. The Minimum Description Length Principle. MIT Press, Cambridge, MA, 2007.
[19] T. van Erven, M. Reid, and R. Williamson. Mixability is Bayes risk curvature relative to log loss. In Proceedings of the 24th Conference on Learning Theory (COLT), 2011.
[20] S. Arlot and P. L. Bartlett. Margin-adaptive model selection in statistical learning. Bernoulli, 17(2):687–713, 2011.
[21] T. Zhang. Information theoretical upper and lower bounds for statistical estimation. IEEE Transactions on Information Theory, 52(4):1307–1321, 2006.
[22] V. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[23] J.-Y. Audibert. PAC-Bayesian statistical learning theory. PhD thesis, Université Paris VI, 2004.
[24] O. Catoni. PAC-Bayesian Supervised Classification. Lecture Notes–Monograph Series. IMS, 2007.
[25] W. Lee, P. Bartlett, and R. Williamson. The importance of convexity in learning with squared loss. IEEE Transactions on Information Theory, 44(5):1974–1980, 1998. Correction: Volume 54(9), 4395 (2008).
[26] A. N. Shiryaev. Probability. Springer-Verlag, 1996.
[27] J.-Y. Audibert. A better variance control for PAC-Bayesian classification. Preprint 905, Laboratoire de Probabilités et Modèles Aléatoires, Universités Paris 6 and Paris 7, 2004.
Spectral learning of linear dynamics from generalised-linear observations
with application to neural population data
Lars Buesing*, Jakob H. Macke*,†, Maneesh Sahani
Gatsby Computational Neuroscience Unit
University College London, London, UK
{lars, jakob, maneesh}@gatsby.ucl.ac.uk
Abstract
Latent linear dynamical systems with generalised-linear observation models arise
in a variety of applications, for instance when modelling the spiking activity of populations of neurons. Here, we show how spectral learning methods
(usually called subspace identification in this context) for linear systems with
linear-Gaussian observations can be extended to estimate the parameters of a
generalised-linear dynamical system model despite a non-linear and non-Gaussian
observation process. We use this approach to obtain estimates of parameters for
a dynamical model of neural population data, where the observed spike-counts
are Poisson-distributed with log-rates determined by the latent dynamical process,
possibly driven by external inputs. We show that the extended subspace identification algorithm is consistent and accurately recovers the correct parameters on large
simulated data sets with a single calculation, avoiding the costly iterative computation of approximate expectation-maximisation (EM). Even on smaller data sets,
it provides an effective initialisation for EM, avoiding local optima and speeding
convergence. These benefits are shown to extend to real neural data.
1 Introduction
Latent linear dynamical system (LDS) models, also known as Kalman-filter models or linear-Gaussian state-space models, provide an important framework for modelling shared temporal structure in multivariate time series. If the observation process is linear with additive Gaussian noise, then
there are many established options for parameter learning. Inference of the dynamical state in such
a model can be performed exactly by Kalman smoothing [1] and so the expectation-maximisation
(EM) algorithm may be used to find a local maximum of the likelihood [2]. An alternative is the
spectral approach known as subspace identification (SSID) in the engineering literature [3, 4, 5].
This is a method-of-moments-based estimation process, which, like other spectral methods, provides estimators that are non-iterative, consistent and do not suffer from the problems of multiple
optima that dog maximum-likelihood (ML) learning in practice. However, they are not as statistically efficient as the true (global) ML estimator. Thus, a combined approach often produces the best
results, with the SSID-based parameter estimates being used to initialise the EM iterations.
Many real-world data sets, however, are not well described by a linear-Gaussian output process. Of
particular interest to us here are multiple neural spike-trains measured simultaneously by arrays of
electrodes [6, 7], which are best treated either as multivariate point-processes or, after binning, as a
time series of vectors of small integers. In either case the event rates must be positive, precluding
a linear mapping from the Gaussian latent process, and the noise distribution cannot accurately be
* These authors contributed equally. † Current affiliation: Max Planck Institute for Biological Cybernetics and Bernstein Center for Computational Neuroscience, Tübingen.
modelled as normal. Similar point-process or count data may arise in many other settings, such as
seismology or text modelling. More generally, we are interested in the broad class of generalised-linear output models (defined by analogy to the generalised-linear regression model [8]), where the
expected value of an observation is given by a monotonic function of the latent Gaussian process,
with an arbitrary (most frequently exponential-family) distribution of observations about this mean.
For such models exact inference, and therefore exact EM, is not possible. Instead, approximate
ML learning relies on either Monte-Carlo or deterministic approximations to the posterior. Such
methods may be computationally intensive, suffer from varying degrees of approximation error, and
are subject to the same concerns about multiple likelihood optima as is the linear-Gaussian case.²
Thus, a consistent spectral method is likely to be of particular value for such models. In this paper
we show how the SSID approach may be extended to yield consistent estimators for generalised-linear-output LDS (gl-LDS) models. In experiments with simulated and real neural data, we show
that these estimators may be better than those provided by approximate EM when given sufficient
data. Even when data are few, the approach provides a valuable initialisation to approximate EM.
2 Theory
We briefly review the Ho-Kalman SSID algorithm [10] for linear-Gaussian LDS models, before
extending it to the gl-LDS case. Using this framework, we derive and then evaluate an algorithm to
fit models of Poisson-distributed count data with log-rates generated by an LDS.
2.1 SSID for LDS models with linear-Gaussian observations
Let q-dimensional observations y_t, t ∈ {1, . . . , T}, depend on a p-dimensional latent state x_t, described by a linear first-order auto-regressive process with Gaussian initial distribution and Gaussian innovations:

    x_1 ∼ N(x_0, Q_0)
    x_{t+1} | x_t ∼ N(A x_t, Q)
    z_t = C x_t + d                                            (1)
    y_t | z_t ∼ N(z_t, R).

Here, x_0 and Q_0 are the mean and covariance of the initial state and Q is the covariance of the innovations. The dynamics matrix A models the temporal dependence of the process x. The variable z_t of dimension q is defined as an affine function of the latent state x_t, parametrised by the loading matrix C and the mean parameter d. Given z_t, observations are independently distributed around this value with covariance R. Furthermore let Π := lim_{t→∞} Cov[x_t] denote the covariance of the stationary marginal distribution if the system is stable (i.e. if the spectral radius of A is < 1).
Provided the generative model is stationary (i.e., x_0 = 0 and Q_0 = Π), SSID algorithms yield consistent estimates of the parameters A, C, Q, R, d without iteration. We adopt an approach to SSID based on the Ho–Kalman method [10, 4]. This algorithm takes as input the empirical estimate of the so-called 'future-past Hankel matrix' H, which is defined as the cross-covariance between time-lagged vectors y_t^+ (the 'future') and y_t^- (the 'past') of the observed data:

    H := Cov[y_t^+, y_t^-],    y_t^+ := [y_t; y_{t+1}; . . . ; y_{t+k-1}],    y_t^- := [y_{t-1}; y_{t-2}; . . . ; y_{t-k}].

The parameter k is called the Hankel size and has to be chosen so that k ≥ p. The key to SSID is that H (which is independent of t, as stationarity is assumed) has rank equal to the dimensionality p of the linear dynamical state. Indeed, it is straightforward to show that the Hankel matrix can be decomposed in terms of the model parameters A, C, Π:

    H = [C⊤ (CA)⊤ . . . (CA^{k-1})⊤]⊤ · [AΠC⊤ A²ΠC⊤ . . . A^kΠC⊤].    (2)
The SSID algorithm first takes the singular value decomposition (SVD) of the empirical estimate Ĥ of H to recover a two-part factorisation as in (2), given a user-defined latent dimensionality p (a suitable p may be estimated by inspection of the singular value spectrum of Ĥ). From this low-rank approximation to Ĥ, the model parameters A, C as well as the covariances Q and R can be found by linear regression and by solving an algebraic Riccati equation; d is given simply by the empirical mean of the data.
² A recent paper [9] has argued that the log-likelihood of a model with Poisson count observations is concave; however, the result therein showed only a necessary condition for concavity of the expected joint log-likelihood optimised in the M-step.
However, this specific procedure works only for linear systems with Gaussian
observations and innovations, and not for models which feature non-linear transformations or non-Gaussian observation models. Indeed, we find that linear SSID methods can yield poor results
when applied directly to count-process data. Although SSID techniques have been developed for
observations that are Gaussian-distributed around a mean that is a nonlinear function of the latent
state [5], we are unaware of SSID methods that address arbitrary observation models.
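To make the procedure of this section concrete, here is a minimal numpy sketch of the Ho–Kalman steps described above (Hankel estimation, SVD, and recovery of C and A); the function name and the least-squares shift-invariance step for A are standard choices of ours, and the Riccati-equation step for Q and R is omitted.

```python
import numpy as np

def ho_kalman_ssid(Y, k, p):
    """Estimate d, C and A of a stationary linear-Gaussian LDS.

    Y : (T, q) array of observations, assumed drawn from a stationary model.
    k : Hankel size (k >= p);  p : user-chosen latent dimensionality.
    """
    T, q = Y.shape
    d = Y.mean(axis=0)               # d is the empirical mean of the data
    Yc = Y - d
    # Stack future vectors (y_t, ..., y_{t+k-1}) and past vectors (y_{t-1}, ..., y_{t-k}).
    ts = np.arange(k, T - k)
    fut = np.hstack([Yc[ts + i] for i in range(k)])       # shape (n, kq)
    past = np.hstack([Yc[ts - 1 - i] for i in range(k)])  # shape (n, kq)
    H = fut.T @ past / len(ts)       # empirical future-past Hankel matrix
    # Rank-p factorisation of H via SVD, mirroring equation (2).
    U, s, _ = np.linalg.svd(H)
    O = U[:, :p] * np.sqrt(s[:p])    # estimate of [C; CA; ...; CA^{k-1}]
    C = O[:q, :]
    # Shift invariance: O[q:, :] = O[:-q, :] @ A, solved in the least-squares sense.
    A = np.linalg.lstsq(O[:-q, :], O[q:, :], rcond=None)[0]
    return d, C, A
```

Inspecting the singular values s is also how a suitable p can be chosen in practice.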
2.2 SSID for gl-LDS models by moment conversion
Consider now the gl-LDS in which the Gaussian observation process of model (1) is replaced by the following more general observation model. We assume y_{t,i} ⊥ y_{t,j} | z_t; i.e. observation dimensions are independent given z_t. Further, let y_{t,i} | z_t be arbitrarily distributed around a (known) monotonic element-wise nonlinear mapping f(·) such that E[y_t | z_t] = f(z_t). Following the theory of generalised linear modelling, we also assume that the variance of the observation distribution is a (known) function V(·) of its mean.³
Our extension to SSID for such models is based on the following idea. The variables z_1, . . . , z_T are jointly normal, so in principle we can apply standard SSID algorithms to z. Although z is unobserved, we can use the fact that the observation model dictates a computable relationship between the moments of y and those of z. This allows us to determine the future-past Hankel matrix of z from the moments of y, which can then be fed into standard SSID algorithms. Consider the covariance matrix Cov[y^±] of the combined 2kq-dimensional future-past vector y^±, which is defined by stacking y^+ and y^- (here and henceforth we drop the subscripts t, as unnecessary given the assumed stationarity of the process). Denote the mean and covariance matrix of the normal distribution of z^± (defined analogously to y^±) by μ and Σ. We then have,
    E[y_i^±] = E_z[f(z_i^±)] =: φ(μ_i, Σ_ii)    (3)
    E[(y_i^±)²] = E_z[E_{y|z}[(y_i^±)²]] = E_z[f(z_i^±)² + V(f(z_i^±))] =: ψ(μ_i, Σ_ii).    (4)

The functions φ(·) and ψ(·) are given by Gaussian integrals with mean μ_i and variance Σ_ii over the functions f(·) and f²(·) + V(f(·)), respectively. For off-diagonal second moments we have (i ≠ j):

    E[y_i^± y_j^±] = E_z[E_{y|z}[y_i^±] · E_{y|z}[y_j^±]] = E_z[f(z_i^±) f(z_j^±)] =: ξ(μ_i, Σ_ii, μ_j, Σ_jj, Σ_ij).    (5)

Equations (3)–(5) are a 4kq + kq(2kq − 1) system of non-linear equations in the 4kq + kq(2kq − 1) unknowns μ, Σ (with symmetric Σ = Σ⊤). The equations above can be solved efficiently by separately solving one 2-dimensional system (equations 3–4) for each pair of unknowns μ_i, Σ_ii, i ∈ {1, . . . , 2kq}. Once the μ_i and Σ_ii are known, equation (5) reduces to a 1-dimensional nonlinear equation for Σ_ij for each pair of indices (i < j). The upper-right block of the covariance matrix Σ then provides an estimate of the future-past Hankel matrix Cov[z^+, z^-], which can be decomposed as in standard Ho–Kalman SSID.
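As an illustration of how one of the per-coordinate systems (3)–(4) might be solved numerically in the general case, the following scipy sketch uses Gauss–Hermite quadrature for the Gaussian integrals and a root-finder for the two unknowns; the quadrature order, parametrisation and initial guess are our own choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import fsolve

# Probabilists' Gauss-Hermite rule: integrates g(x) exp(-x^2/2) dx.
nodes, weights = np.polynomial.hermite_e.hermegauss(40)

def gauss_expect(g, mu, sigma2):
    """E[g(z)] for z ~ N(mu, sigma2), with g vectorized."""
    z = mu + np.sqrt(sigma2) * nodes
    return weights @ g(z) / np.sqrt(2.0 * np.pi)

def solve_marginal(m1, m2, f, V):
    """Solve (3)-(4) for one coordinate: find (mu_i, Sigma_ii) such that
    E_z[f(z)] = m1 and E_z[f(z)^2 + V(f(z))] = m2."""
    def residual(x):
        mu, log_s2 = x
        s2 = np.exp(log_s2)          # parametrisation keeps the variance positive
        return [gauss_expect(f, mu, s2) - m1,
                gauss_expect(lambda z: f(z) ** 2 + V(f(z)), mu, s2) - m2]
    # Initial guess appropriate for f = exp (the Poisson case of Section 2.3).
    mu, log_s2 = fsolve(residual, x0=[np.log(m1 + 1e-3), 0.0])
    return mu, np.exp(log_s2)
```

For the Poisson model of Section 2.3 one would take f = np.exp and V(y) = y; the off-diagonal equations (5) can be handled analogously, one scalar Σ_ij at a time, with a two-dimensional quadrature rule.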
2.3 SSID for Poisson dynamical systems (PLDSID)
We now consider in greater detail a special case of the gl-LDS model, which is of particular interest in neuroscience applications. The observations in this model are (when conditioned on the latent state) Poisson-distributed with a mean that is exponential in the output of the dynamical system,

    y_{t,i} | z_{t,i} ∼ Poisson[exp(z_{t,i})].

We call this model, which is a special case of a Log-Gaussian Cox Process [11], a Poisson Linear Dynamical System (PLDS). PLDS and close variants have recently been applied for modelling multi-electrode recordings [12, 13, 14, 15]. In these applications, y_{t,i} models the spike-count of neuron i in time-bin t and its log-firing-rate (which we will refer to as the 'input to neuron i') is given by z_{t,i}. Estimation of the model parameters θ = (A, C, Q, x_0, Q_0, d) often depends on approximate likelihood maximisation, using EM with an approximate E-step [16, 9]. The exponential nonlinearity ensures that the posterior distribution p(x_{1,...,T} | y_{1,...,T}, θ) is a log-concave function of x_{1,...,T} [17], making its mode easy to find and justifying unimodal approximations (such as that of Laplace). However, the typical data likelihood is nonetheless multimodal and the approximations may introduce bias in estimation [18].
³ Our method readily generalises to models in which each dimension i has different nonlinearities f_i and V_i.
Figure 1: Moment conversion uncovers low-rank structure in artificial data. A) Time-lagged covariance matrix Cov[y_{t+1}, y_t] and the singular value (SV) spectrum of the full Hankel matrix H = Cov[y^+, y^-] computed from the observed count data (artificial data set I). The spectrum decays gradually. B) Same as A) but after moment conversion. The transformed Hankel matrix now exhibits a clear cut-off in the spectrum, indicative of low underlying rank. C) Same as A) and B) but computed from the (ground truth) log-rates z, illustrating the true low-rank structure in the data. D) Summed absolute difference of the eigenvalue spectra of the ground truth dynamics matrix A and the one identified by PLDSID. The difference decreases with increasing data set size, indicating that PLDSID estimates are consistent. E) Same as D) but for the angle between the subspaces spanned by the loading matrices of the ground truth and estimated models. F) SV spectrum of the Hankel matrix of multi-electrode data before (left) and after (right) moment conversion.
Under the PLDS model, the equations (3)–(5) can be solved analytically (see also [19] and the supplementary material for details):

    μ_i = 2 log(m_i) − (1/2) log(S_ii + m_i² − m_i)    (6)
    Σ_ii = log(S_ii + m_i² − m_i) − log(m_i²)    (7)
    Σ_ij = log(S_ij + m_i m_j) − log(m_i m_j),    (8)

where m_i and S_ij denote the empirical estimates of E[y_i^±] and Cov[y_i^±, y_j^±], respectively. (These expressions follow from the log-normal moment identities E[y_i^±] = exp(μ_i + Σ_ii/2), E[(y_i^±)²] = exp(2μ_i + 2Σ_ii) + m_i and, for i ≠ j, E[y_i^± y_j^±] = m_i m_j exp(Σ_ij).) One can see that the above equations do not have solutions if any one of the terms inside the logarithms is non-positive, which may happen with finitely sampled moments or a misspecified model. We therefore scale the matrix S (by left and right multiplication with the same diagonal matrix) such that all Fano factors that are initially smaller than 1 are set to a given threshold (in simulations we used 1 + 10⁻²). This procedure ensures that there exists a unique solution (μ, Σ) to the moment conversion (6)–(8). It is still the case that the resulting matrix Σ might not be positive semidefinite [20], but this can be rectified by finding its eigendecomposition, thresholding the eigenvalues (EVs) and then reconstructing Σ.
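For this Poisson/exponential case, the conversion (6)–(8) together with the Fano-factor rescaling and eigenvalue thresholding just described amounts to only a few lines of numpy; the sketch below, including the threshold values, is our own minimal rendering of these steps rather than the authors' implementation.

```python
import numpy as np

def poisson_moment_conversion(m, S, fano_floor=1.0 + 1e-2):
    """Map empirical count moments to Gaussian log-rate moments via (6)-(8).

    m : (2kq,) empirical mean of the stacked future-past count vector.
    S : (2kq, 2kq) empirical covariance of the same vector.
    """
    # Rescale S (diagonal congruence) so every Fano factor S_ii/m_i >= fano_floor,
    # which makes the diagonal logarithms below well defined.
    fano = np.diag(S) / m
    c = np.sqrt(np.maximum(fano_floor / fano, 1.0))
    S = S * np.outer(c, c)
    # Equations (6)-(8); off-diagonal arguments are clipped to stay positive.
    mu = 2.0 * np.log(m) - 0.5 * np.log(np.diag(S) + m**2 - m)
    Sigma = np.log(np.maximum(S + np.outer(m, m), 1e-12)) - np.log(np.outer(m, m))
    np.fill_diagonal(Sigma, np.log(np.diag(S) + m**2 - m) - np.log(m**2))
    # Rectify Sigma onto the positive semidefinite cone by eigenvalue thresholding.
    w, U = np.linalg.eigh(0.5 * (Sigma + Sigma.T))
    Sigma = (U * np.maximum(w, 1e-8)) @ U.T
    return mu, Sigma
```

The upper-right kq × kq block of the resulting Σ is then used as the Hankel-matrix input to the Ho–Kalman step of Section 2.1.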
For sufficiently large data sets generated from a 'true' PLDS model, observed Fano factors will be greater than one with high probability. In such cases, the moment conversion asymptotically yields the unique correct moments μ and Σ of the Gaussian log-rates z. Assuming stationarity, the Ho–Kalman SSID yields consistent estimates of A, C, Q, d given the true μ and Σ. Hence, the proposed two-stage method yields consistent estimates of the parameters A, C, Q, d of a stationary PLDS. In the remainder, we call this algorithm PLDSID.
It is often of interest to model the conditional distribution of the observables y given some external, observed covariate or 'input' u. In neuroscience, for instance, u might be a sensory stimulus influencing retinal [14] or other sensory spiking activity. Fortunately, provided that the external inputs are Gaussian-distributed and perturb the dynamics linearly, PLDSID can be extended to identify the parameters of this augmented model. Let u_t denote the r-dimensional observed external input at time t, and assume that u_1, . . . , u_T are jointly normal and influence the latent state of the dynamical process linearly and instantaneously (through a p × r matrix B):

    x_{t+1} | x_t, u_t ∼ N(A x_t + B u_t, Q).

The dynamical state x_t is then observed through a generalised-linear process as before, and we define future-past vectors for all relevant time series. In this case, the N4SID algorithm [3] can perform subspace identification based on the joint covariance of u^± and z^±. Although this covariance is not observed directly in the gl-LDS case, our assumptions make u^± and z^± jointly normal and so we can use moment transformation again to estimate the required covariance from the observed covariance of u^± and y^±. For the Poisson model with exponential nonlinearity, this transformation remains closed-form, and in combination with N4SID yields consistent estimates of the PLDS parameters and the input-coupling matrix B.⁴ Further details are provided in the supplementary material.
3 Results
We investigated the properties of the proposed PLDSID algorithm in numerical experiments, using
both artificial data and multi-electrode recordings of neural activity.
3.1 PLDSID infers the correct parameters given sufficiently large synthetic data sets
We used three artificial data sets to evaluate our algorithm, each consisting of 200 time-series ('trials'), with each trial being of length T = 100 time steps. Time-series were generated by sampling from a stationary ground truth PLDS with p = 10 latent and q = 25 observed dimensions. Count averages across time-bins and neurons ranged from 0.15 to 0.2, corresponding to 15–20 Hz if the time-step size dt is taken to be 10 ms (the binning used for the multi-electrode recordings, see below). The dynamics matrices A had eigenvalues corresponding to auto-correlation time constants ranging from < 1 time step (data set III), through 3 dt (data set I), to 20 dt (data set II). The loading matrices C were generated from a matrix with orthonormal columns and by a subsequent scaling with 12.5 (data set I) or 5 (data sets II and III). This resulted in instantaneous correlations that were comparable to (average absolute correlation coefficient, data set I: c̄ = 2 · 10⁻²) or smaller than (data sets II, III: c̄ = 3.5 · 10⁻³) those observed in the cortical multi-electrode recordings used below (c̄ = 2.2 · 10⁻²). Hence, all our artificial data sets either roughly match (data sets I, II) or substantially underestimate (data set III) the correlation structure of typical cortical multi-cell recordings. Additionally, we generated a data set for identifying PLDS models with external input by driving the ground truth PLDS of data set II with a 3-dimensional Gaussian AR(1) process u_t; the coupling matrix B was generated such that B u_t had the same covariance as the innovations Q. A Hankel size of k = 10 was used for all experiments with artificial data.
We first illustrate the moment conversion defined by equations (6)-(8) on artificial data set I. Fig. 1A
shows the time-lagged cross-covariance Cov[y_{t+1}, y_t] as well as the singular value (SV) spectrum of the full future-past Hankel matrix H = Cov[y^+, y^-] (normalised such that the largest SV is 1),
both estimated from 200 trials, with a Hankel size of k = 10. The raw spectrum gradually decays
towards small values but does not show a clear low-rank structure of the future-past Hankel matrix
H. In contrast, Fig. 1B shows the output of the moment transformation yielding an approximation
of the cross-covariance Cov[z_{t+1}, z_t] of the underlying inputs. Further, the SV spectrum of the full transformed future-past Hankel matrix Cov[z^+, z^-] is shown. The latter is dominated by only a few
SVs, whose number matches the dimension of the ground truth system p = 10, clearly indicating
a low-rank structure. On this synthetic data set, we also have access to the underlying inputs. One
can see that the transformed Hankel matrix in Fig. 1B as well as its SV spectrum are close to the ones
computed from the underlying inputs shown in Fig. 1C.
We also evaluated the accuracy of the parameters identified by PLDSID as a function of the training
set size. Fig. 1D shows the difference between the spectra (i.e., the summed absolute differences
between sorted eigenvalues) of the identified and the ground truth dynamics matrix A. The spectrum
of A is an important characteristic of the model, as it determines the time-constants of the underlying
dynamics. It can be seen that the difference between the spectra decreases with increasing data set
size (Fig. 1D), indicating that our method asymptotically identifies the correct dynamics. Furthermore, Fig. 1E shows the subspace angle between the true loading matrix C and the one estimated by PLDSID.
⁴ Again, simply applying SSID to the log of the observed counts does not work, as most counts are 0.
Figure 2: PLDSID is a good initialiser for EM. Cosmoothing performance on the training set as a
function of the number of EM iterations for different initialisers on various data sets. A) Artificial
data set consisting of 200 trials and 25 observed dimensions. EM initialised by PLDSID converges
faster and achieves higher training performance than EM initialised with FA, Gaussian SSID or
random parameter values. B) Same as A) but for data with lower instantaneous correlations and
longer auto-correlation. EM does not improve the performance of PLDSID on this data set. C)
Same as A) but for data with negligible temporal correlations and low instantaneous correlations.
For this weakly structured data set, PLDSID-EM does not work well. D) 100 trials of multi-electrode
recordings with 86 observed dimensions (spike-sorted units). E) Same as D) but of data set size 500
trials, and only using the 40 most active units. F) Same as D) but for 863 trials with all 86 units.
As for the dynamics spectrum, the identified loading matrix approaches the true one with increasing training set size.
Next, we investigated the usefulness of PLDSID as an initialiser for EM. We compared it to 3 different methods, namely initialisation with random parameters (with 20-50 restarts), factor analysis
(FA) and Gaussian SSID. The quality of these initialisers was assessed by monitoring performance
of the identified parameters as a function of EM iterations after initialisation. Good initial parameter
values yield fast convergence of EM in few iterations to high performance values, whereas poor
initialisations are characterised by slow convergence and termination of EM in poor local maxima
(or, potentially, shallow regions of the likelihood). Fast convergence of EM is an important issue
when dealing with large data sets, as EM iterations become computationally expensive (see below).
We monitor performance by evaluating the so-called cosmoothing performance on the training data,
a measure for cross-prediction performance described elsewhere in detail [21, 15]. This measure
yielded more reliable and robust results than computing the likelihood, as the latter cannot be computed exactly and approximations can be numerically unreliable. We evaluated performance on the
training set, as we were interested in comparing fitting-performance of the algorithms for the same
model, and not the generalisation error of the model itself.
Fig. 2A to C show the results of this comparison on three different artificial data sets. On data set
I (Fig. 2A), which was designed to have short auto-correlation time constants but pronounced instantaneous correlations between the observed dimensions, PLDSID initialisation leads to superior
performance compared to competing methods. For the same number of EM iterations (which is
a good proxy of invested computation time, see below), it resulted in better co-smoothing performance. Furthermore, the PLDSID+EM parameters converge to a better local optimum than those
initialised by the other methods. Hence, on this data set, our initialisation yields both faster computation time and better final results. The second artificial data set featured smaller instantaneous
correlations between dimensions but longer auto-correlation time constants. As can be seen in Fig.
2B, the PLDSID initialisation here yields parameters which are not further improved by EM iterations whereas EM with other initialisations becomes stuck in poor local solutions.
By contrast, we found PLDSID not to yield useful parameter values on data sets which do not have
temporal correlations (Fig. 2C), and only very small instantaneous correlations across neurons (average instantaneous absolute correlation c̄ = 3.5 · 10⁻³). For this particular data set, PLDSID and
Gaussian SSID both yielded poor parameters compared to factor analysis. In general, we observed
that PLDSID compares favourably to the other initialisation methods on any data set we investigated, as long as it exhibits shared variability across dimensions and time, and it was observed to
work particularly well when correlations were substantial. Fig. 3 shows results for identification
of a PLDS model driven by external inputs. The proposed PLDSID method identifies better PLDS
parameters, including the coupling matrix B, than alternative methods. Notably, identifying the parameters with the PLDSID-variant that ignores external input (and setting the initial value B = 0
for EM) clearly results in suboptimal parameters.
3.2 Expectation Maximisation initialised by PLDSID identifies better models on neural data
We move now to examine the value of PLDSID in providing initial parameter values for subsequent EM iterations on multi-electrode recordings of neural activity. Such data sets are challenging
for statistical modelling as they are high-dimensional (on the order of 10² observed dimensions),
sparse (on the order of 10 Hz of spiking activity) and show shared variability across time and dimensions. The experimental setup, acquisition and preprocessing of the data set are documented
in detail elsewhere [22]. Briefly, spiking activity was acquired from a 96-channel silicon electrode
array (Blackrock, Salt Lake City, UT) implanted in motor areas of the cortex of a rhesus macaque
performing a delayed center-out reach task. For the analysis presented in this paper, we used data
from a single recording session consisting in total of 863 trials, each truncated to be of length 1 s
with 86 distinct single and multi-units identified by spike sorting. The data had an average firing rate
of 10.7 Hz and it was binned at 10 ms which resulted in 9.9% of bins containing at least one spike.
First, we investigated the SV spectrum of the future-past Hankel matrix computed either from the
count-observations of the data, or from the inferred underlying inputs (using Hankel size k = 30
and all trials, see Fig. 1F). While we did not observe a marked difference between the two spectra,
both spectra indicate that the data can be well described using a small number of singular values.
Based on these spectra, we used a latent dimensionality of p = 10 for subsequent simulations.
Next, we compared PLDSID to FA and Gaussian SSID initialisations for EM on two different subsets as well as the whole multi-electrode recording data set. Fig. 2D shows the performance of EM
with the different initialisations using a training set of modest size (100 trials, Hankel size k = 10).
PLDSID provides the most appropriate initialisation for EM, allowing it to converge rapidly to better parameter values than are found starting from either the FA or SSID estimates. This effect was
still more pronounced for a larger training set of 500 trials, but including only the 40 most active
neurons from the original data (Fig. 2E, Hankel size k = 30). We also applied all of the methods
to the complete data set consisting of 863 trials with all 86 observed neurons (Hankel size k = 30).
The results plotted in Fig. 2F indicate that again PLDSID provided the most useful initialisation
for EM. Interestingly, on this data set EM with random initialisations eventually identifies parameters with performances comparable to PLDSID+EM. However, random initialisation leads to slow
convergence and thus requires substantial computation, as described below. Gaussian SSID yielded
poor values for parameters on all data sets, leading EM to terminate in poor local optima after only
a few iterations. We note that, because of the use of the Laplace approximation during inference (as
well as our non-likelihood performance measure) EM is not guaranteed to increase performance at
each iteration, and, in practice, sometimes terminated after rather few iterations.
3.3 PLDSID improves training time by orders of magnitude compared to conventional EM
The computational time needed to identify PLDS parameters might prove to be an important issue
in practice. For example, when using a PLDS model as part of an algorithm for brain-machine interfacing [12], the parameters must be identified during an experimental session. For multi-electrode
recording data of commonly-encountered size, and using our implementation of EM, inference of
parameters under these time-constraints would be infeasible. Thus, an ideal parameter initialisation
method will not only improve the robustness of the identified parameters, but also reduce the computational time needed for EM convergence. Clearly, the computer time needed will depend on the
implementation, hardware and the properties and size of the data used. We used an EM-algorithm
with a global-Laplace approximation in the E-step [23, 15], and a conjugate-gradient-based optimisation method in the M-step, implemented in Matlab.
Figure 3: Identification of PLDS models with external
inputs. Same as Fig. 2 B) but for an artificial data set
which is generated by sampling from a PLDS with external
input. Using the variant of PLDSID which also identifies
the coupling matrix B yields the best parameters. In
contrast, using the PLDSID variant which does not estimate
B (B is initialised at 0) yields parameters which are of the
same quality as alternative methods.
Alternative methods based on variational approximations or MCMC sampling have been reported to be more costly than Laplace-EM [13, 24].
For all of the data sets used above, one single EM iteration in our implementation was substantially
more costly than parameter initialisation by PLDSID (Fig. 2D: factor 6.4, Fig. 2E: factor 4.0, Fig. 2F:
factor 1.4). In addition, EM started with random initialisation still yielded worse performance than
with PLDSID initialisation even after 50 iterations (see Figure 2). Thus, even with a conservative
estimate, PLDSID initialisation reduces computational cost by at least a factor of 50 compared
to random initialisation. Both PLDSID and EM have a computational time complexity which is proportional to the size NT of the data set (where N is the number of trials and T is the trial length). However, in PLDSID, only the cost O(NTpq²) of calculating the Hankel matrix scales with the data set size (assuming k is of order p). This simple covariance calculation was much cheaper in our experiments than the moment conversion with cost O(pq²) or the SVD with cost O(p³q³), both of which are independent of the data set size NT. In contrast, each iteration of EM requires at least O(NT(p³ + pq)) time. Therefore, the computational advantage of PLDSID is
expected to be especially great for large data sets. This is also the regime where the performance
benefit is most pronounced.
4 Discussion
We investigated parameter estimation for linear-Gaussian state-space models with generalised-linear observations and presented a method for parameter identification in such models which builds
on the extensive subspace-identification literature for fully Gaussian state-space models. In numerical experiments we studied a special case of the proposed algorithm (PLDSID) for linear state-space
models with conditionally Poisson-distributed observations. We showed that PLDSID yields consistent estimates of the model parameters without requiring iterative computation. Although this
method generally makes less efficient use of available training data than do maximum likelihood
methods, we found that it sometimes outperformed likelihood hill-climbing by EM from random
initial conditions in practice (presumably due to optimisation difficulties). Even when this was not
the case, EM initialised with the results of PLDSID converged in fewer iterations, and to a better
parameter estimate than when it was initialised randomly or by other methods, an effect seen with
multiple artificial and multi-electrode recording data sets. As the practical computational difficulties of parameter estimation (slow convergence and shallow optima in parameter estimation with
EM) in this model are substantial, our algorithm facilitates the use of linear state-space models with
non-Gaussian observations in practice.
While proven here in the Poisson case, the underlying moment-transformation algorithm is flexible
and can be applied to a wide range of gl-LDS models. Of particular interest for neural data might be
a dynamical system model which precisely reproduced the marginal distribution of integer observations for each observed dimension (by using a ?Discretised Gaussian? [20] as the observation model).
By contrast, the need for tractability in sampling or deterministic approximations for inference often
limits the range of models in which EM is practical.
Acknowledgements Supported by the Gatsby Charitable Foundation; an EU Marie Curie Fellowship to JHM (hosted by MS); DARPA REPAIR N66001-10-C-2010 and NIH CRCNS R01NS054283 to MS; as well as the Bernstein Center Tübingen funded by the German Ministry of
Education and Research (BMBF; FKZ: 01GQ1002). We would like to thank Krishna V. Shenoy and
members of his laboratory for many useful discussions as well as for generously sharing their data
with us.
References
[1] R. E. Kalman and R. S. Bucy. New results in linear filtering and prediction theory. Trans. Am. Soc. Mech. Eng., Series D, Journal of Basic Engineering, 83:95-108, 1961.
[2] Z. Ghahramani and G. E. Hinton. Parameter estimation for linear dynamical systems. University of Toronto Technical Report, 6(CRG-TR-96-2), 1996.
[3] P. V. Overschee and B. D. Moor. N4SID: Subspace algorithms for the identification of combined deterministic-stochastic systems. Automatica, 30(1):75-93, 1994.
[4] T. Katayama. Subspace methods for system identification. Springer Verlag, 2005.
[5] H. Palanthandalam-Madapusi, S. Lacy, J. Hoagg, and D. Bernstein. Subspace-based identification for linear and nonlinear systems. In Proceedings of the American Control Conference, 2005, pp. 2320-2334, 2005.
[6] E. N. Brown, R. E. Kass, and P. P. Mitra. Multiple neural spike train data analysis: state-of-the-art and future challenges. Nat Neurosci, 7(5):456-61, 2004.
[7] M. M. Churchland, B. M. Yu, M. Sahani, and K. V. Shenoy. Techniques for extracting single-trial activity patterns from large-scale neural recordings. Curr Opin Neurobiol, 17(5):609-618, 2007.
[8] P. McCulloch and J. Nelder. Generalized linear models. Chapman and Hall, London, 1989.
[9] K. Yuan and M. Niranjan. Estimating a state-space model from point process observations: a note on convergence. Neural Comput, 22(8):1993-2001, 2010.
[10] B. L. Ho and R. E. Kalman. Effective construction of linear state-variable models from input/output functions. Regelungstechnik, 14(12):545-548, 1966.
[11] J. Møller, A. Syversveen, and R. Waagepetersen. Log Gaussian Cox processes. Scand J Stat, 25(3):451-482, 1998.
[12] V. Lawhern, W. Wu, N. Hatsopoulos, and L. Paninski. Population decoding of motor cortical activity using a generalized linear model with hidden states. J Neurosci Methods, 189(2):267-280, 2010.
[13] A. Z. Mangion, K. Yuan, V. Kadirkamanathan, M. Niranjan, and G. Sanguinetti. Online variational inference for state-space models with point-process observations. Neural Comput, 23(8):1967-1999, 2011.
[14] M. Vidne, Y. Ahmadian, J. Shlens, J. Pillow, J. Kulkarni, A. Litke, E. Chichilnisky, E. Simoncelli, and L. Paninski. Modeling the impact of common noise inputs on the network activity of retinal ganglion cells. J Comput Neurosci, 2011.
[15] J. H. Macke, L. Büsing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani. Empirical models of spiking in neural populations. In Advances in Neural Information Processing Systems, vol. 24. Curran Associates, Inc., 2012.
[16] J. Kulkarni and L. Paninski. Common-input models for multiple neural spike-train data. Network, 18(4):375-407, 2007.
[17] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network, 15(4):243-262, 2004.
[18] R. E. Turner and M. Sahani. Two problems with variational expectation maximisation for time-series models. In D. Barber, A. T. Cemgil, and S. Chiappa, eds., Inference and Learning in Dynamic Models. Cambridge University Press, 2011.
[19] M. Krumin and S. Shoham. Generation of spike trains with controlled auto- and cross-correlation functions. Neural Comput, pp. 1-23, 2009.
[20] J. Macke, P. Berens, A. Ecker, A. Tolias, and M. Bethge. Generating spike trains with specified correlation coefficients. Neural Comput, 21(2):397-423, 2009.
[21] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J Neurophysiol, 102(1):614-635, 2009.
[22] M. M. Churchland, B. M. Yu, S. Ryu, G. Santhanam, and K. V. Shenoy. Neural variability in premotor cortex provides a signature of motor preparation. J Neurosci, 26(14):3697-3712, 2006.
[23] L. Paninski, Y. Ahmadian, D. Ferreira, S. Koyama, K. Rahnama Rad, M. Vidne, J. Vogelstein, and W. Wu. A new look at state-space models for neural data. J Comput Neurosci, 29:107-126, 2010.
[24] K. Yuan, M. Girolami, and M. Niranjan. Markov chain Monte Carlo methods for state-space models with point process observations. Neural Comput, 24(6):1462-1486, 2012.
Bayesian nonparametric models for bipartite graphs
François Caron
INRIA
IMB - University of Bordeaux
Talence, France
[email protected]
Abstract
We develop a novel Bayesian nonparametric model for random bipartite graphs.
The model is based on the theory of completely random measures and is able
to handle a potentially infinite number of nodes. We show that the model has
appealing properties and in particular it may exhibit a power-law behavior. We
derive a posterior characterization, a generative process for network growth, and
a simple Gibbs sampler for posterior simulation. Our model is shown to be well
fitted to several real-world social networks.
1 Introduction
The last few years have seen a tremendous interest in the study, understanding and statistical modeling of complex networks [14, 6]. A network is a set of items, called vertices, with connections between them, called edges. In this article, we shall focus on bipartite networks, also known as two-mode, affiliation or collaboration networks [16, 17]. In bipartite networks, items are divided into two different types A and B, and only connections between items of different types are allowed. Examples of this kind can be found in movie actors co-starring in the same movie, scientists co-authoring a scientific paper, internet users posting a message on the same forum, people reading the same book or listening to the same song, members of the boards of company directors sitting on the same board, etc. Following the readers-books example, we will refer to items of type A as readers and items of type B as books. An example of a bipartite graph is shown in Figure 1(b). An important summarizing quantity of a bipartite graph is the degree distribution of readers (resp. books) [14]. The degree of a vertex in a network is the number of edges connected to that vertex. Degree distributions of real-world networks are often strongly non-Poissonian and exhibit a power-law behavior [15].
A bipartite graph can be represented by a set of binary variables (z_{ij}) where z_{ij} = 1 if reader i has read book j, and 0 otherwise. In many situations, the number of available books may be very large and potentially unknown. In this case, a Bayesian nonparametric (BNP) approach can be sensible, by assuming that the pool of books is infinite. To formalize this framework, it will then be convenient to represent the bipartite graph by a collection of atomic measures Z_i, i = 1, ..., n, with

Z_i = \sum_{j=1}^{\infty} z_{ij}\, \delta_{\theta_j}    (1)
where {\theta_j} is the set of books and typically Z_i only has a finite set of non-zero z_{ij} corresponding to books reader i has read. Griffiths and Ghahramani [8, 9] have proposed a BNP model for such binary random measures. The so-called Indian buffet process (IBP) is a simple generative process for the conditional distribution of Z_i given Z_1, ..., Z_{i-1}. Such a process can be constructed by considering that the binary measures Z_i are i.i.d. from some random measure drawn from a beta process [19, 10]. It has found several applications for inferring hidden causes [20], choices [7] or features [5]. Teh and Görür [18] proposed a three-parameter extension of the IBP, named the stable IBP, that enables one to model a power-law behavior for the degree distribution of books. Although more flexible, the stable IBP still induces a Poissonian distribution for the degree of readers.

In this paper, we propose a novel Bayesian nonparametric model for bipartite graphs that addresses some of the limitations of the stable IBP, while retaining computational tractability. We assume that each book j is assigned a positive popularity parameter w_j > 0. This parameter measures the popularity of the book, larger weights indicating larger probability to be read. Similarly, each reader i is assigned a positive parameter \theta_i which represents its ability to read books. The higher \theta_i, the more books the reader i is willing to read. Given the weights w_j and \theta_i, reader i reads book j with probability 1 - \exp(-\theta_i w_j). We will consider that the weights w_j and/or \theta_i are the points of a Poisson process with a given Lévy measure. We show that depending on the choice of the Lévy measure, a power-law behavior can be obtained for the degree distribution of books and/or readers. Moreover, using a set of suitably chosen latent variables, we can derive a generative process for network growth, and an efficient Gibbs sampler for approximate inference. We provide illustrations of the fit of the proposed model on several real-world bipartite social networks. Finally, we discuss some potentially useful extensions of our work, in particular to latent factor models.
2 Statistical Model

2.1 Completely Random Measures
We first provide a brief overview of completely random measures (CRMs) [12, 13] before describing the BNP model for bipartite graphs in Section 2.2. Let \Theta be a measurable space. A CRM is a random measure G such that for any collection of disjoint measurable subsets A_1, ..., A_n of \Theta, the random masses of the subsets G(A_1), ..., G(A_n) are independent. A CRM can be decomposed into a sum of three independent parts: a non-random measure, a countable collection of atoms with fixed locations, and a countable collection of atoms with random masses at random locations. In this paper, we will be concerned with models defined by CRMs with random masses at random locations, i.e.

G = \sum_{j=1}^{\infty} w_j\, \delta_{\theta_j}.

The law of G can be characterized in terms of a Poisson process over the point set {(w_j, \theta_j), j = 1, ..., \infty} \subset \mathbb{R}_+ \times \Theta. The mean measure \nu of this Poisson process is known as the Lévy measure. We will assume in the following that the Lévy measure decomposes as a product of two non-atomic densities, i.e. that G is a homogeneous CRM: \nu(dw, d\theta) = \rho(w) h(\theta)\, dw\, d\theta with h : \Theta \to [0, +\infty) and \int_\Theta h(\theta)\, d\theta = 1. This implies that the locations of the atoms in G are independent of the masses, and are i.i.d. from h, while the masses are distributed according to a Poisson process over \mathbb{R}_+ with mean intensity \rho. We will further assume that the total mass G(\Theta) = \sum_{j=1}^{\infty} w_j is positive and finite with probability one, which is guaranteed if the following conditions are satisfied:

\int_0^\infty \rho(w)\, dw = \infty   and   \int_0^\infty (1 - \exp(-w)) \rho(w)\, dw < \infty,    (2)

and we denote by g(x) the probability density function of the total mass G(\Theta) evaluated at x. We will refer to \rho as the Lévy intensity in the following, and to h as the base density of G, and write G ~ CRM(\rho, h). We will also note

\psi_\rho(t) = -\log E[\exp(-t G(\Theta))] = \int_0^\infty (1 - \exp(-tw)) \rho(w)\, dw,    (3)

\tilde\psi_\rho(t, b) = \int_0^\infty (1 - \exp(-tw)) \rho(w) \exp(-bw)\, dw,    (4)

\kappa(n, z) = \int_0^\infty \rho(w)\, w^n e^{-zw}\, dw.    (5)

As a notable particular example of a CRM, we can mention the generalized gamma process (GGP) [1], whose Lévy intensity is given by

\rho(w) = \frac{\alpha}{\Gamma(1-\sigma)} w^{-1-\sigma} e^{-w\tau}.

The GGP encompasses the gamma process (\sigma = 0), the inverse Gaussian process (\sigma = 0.5) and the stable process (\tau = 0) as special cases. A table in the supplementary material provides the expressions of \psi, \kappa and g for these processes.
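For concreteness, the GGP quantities used throughout admit simple closed forms. The following Python fragment is an illustrative sketch of ours (σ = 0 handled as the gamma-process limit; κ requires n ≥ 1):

```python
import numpy as np
from scipy.special import gammaln

def ggp_psi(t, alpha, sigma, tau):
    """psi_rho(t) of Eq. (3) for the GGP; sigma = 0 recovers the gamma process."""
    if sigma == 0.0:
        return alpha * np.log1p(t / tau)
    return (alpha / sigma) * ((t + tau) ** sigma - tau ** sigma)

def ggp_psi_tilde(t, b, alpha, sigma, tau):
    """psi-tilde_rho(t, b) of Eq. (4): exponential tilting by b shifts tau."""
    return ggp_psi(t, alpha, sigma, tau + b)

def ggp_kappa(n, z, alpha, sigma, tau):
    """kappa(n, z) of Eq. (5) for the GGP:
    alpha * Gamma(n - sigma) / Gamma(1 - sigma) * (tau + z)^(-(n - sigma))."""
    return np.exp(np.log(alpha) + gammaln(n - sigma) - gammaln(1.0 - sigma)
                  - (n - sigma) * np.log(tau + z))
```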
2.2 A Bayesian nonparametric model for bipartite graphs

Let G ~ CRM(\rho, h) where \rho satisfies conditions (2). A draw G takes the form

G = \sum_{j=1}^{\infty} w_j\, \delta_{\theta_j}    (6)
where {\theta_j} is the set of books and {w_j} the set of popularity parameters of the books. For i = 1, ..., n, let us consider the latent exponential process

V_i = \sum_{j=1}^{\infty} v_{ij}\, \delta_{\theta_j}    (7)

defined for j = 1, ..., \infty by v_{ij} | w_j ~ Exp(w_j \theta_i), where Exp(a) denotes the exponential distribution of rate a. The higher w_j and/or \theta_i, the lower v_{ij}. We then define the binary process Z_i conditionally on V_i by

Z_i = \sum_{j=1}^{\infty} z_{ij}\, \delta_{\theta_j}   with   z_{ij} = 1 if v_{ij} < 1, and z_{ij} = 0 otherwise.    (8)
By integrating out the latent variables v_{ij} we clearly have p(z_{ij} = 1 | w_j, \theta_i) = 1 - \exp(-\theta_i w_j).

Proposition 1 Z_i is marginally characterized by a Poisson process over the point set {(\theta^*_j), j = 1, ..., \infty} \subset \Theta, of intensity measure \psi_\rho(\theta_i) h(\theta^*). Hence, the total mass Z_i(\Theta) = \sum_{j=1}^{\infty} z_{ij}, which corresponds to the total number of books read by reader i, is finite with probability one and admits a Poisson(\psi_\rho(\theta_i)) distribution, where \psi_\rho(z) is defined in Equation (3), while the locations \theta^*_j are i.i.d. from h.

The proof, which makes use of Campbell's theorem for point processes [13], is given in supplementary material. As an example, for the gamma process we have Z_i(\Theta) ~ Poisson(\alpha \log(1 + \theta_i/\tau)).
It will be useful in the following to introduce a censored version of the latent process V_i, defined by

U_i = \sum_{j=1}^{\infty} u_{ij}\, \delta_{\theta_j}    (9)

where u_{ij} = \min(v_{ij}, 1), for i = 1, ..., n and j = 1, ..., \infty. Note that Z_i can be obtained deterministically from U_i.
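As a sanity check on these definitions, the following Python fragment (a minimal sketch of ours, using a finite truncation of the atoms) simulates the latent exponential variables and the induced binary and censored processes:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_reader(w, theta_i):
    """Simulate v_ij ~ Exp(rate = w_j * theta_i) for one reader, then the
    binary indicators z_ij = 1{v_ij < 1} and the censored u_ij = min(v_ij, 1).
    `w` holds popularity weights of a finite truncation of the books."""
    v = rng.exponential(scale=1.0 / (w * theta_i))  # numpy takes scale = 1/rate
    z = (v < 1.0).astype(int)                        # books actually read
    u = np.minimum(v, 1.0)                           # censored latent variables
    return z, u

w = rng.gamma(shape=0.5, scale=1.0, size=1000)       # toy weights (truncated CRM)
z, u = simulate_reader(w, theta_i=2.0)
print(z.sum(), "books read")                         # marginally Poisson-distributed
```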
2.3 Characterization of the conditional distributions

The conditional distribution of G given Z_1, ..., Z_n cannot be obtained in closed form.¹ We will make use of the latent processes U_i. In this section, we derive the formulas for the conditional laws P(U_1, ..., U_n | G), P(U_1, ..., U_n) and P(G | U_1, ..., U_n). Based on these results, we derive in Section 2.4 a generative process and in Section 2.5 a Gibbs sampler for our model, both of which rely on the introduction of these latent variables.
Assume that K books {\theta_1, ..., \theta_K} have appeared. We write K_i = Z_i(\Theta) = \sum_{j=1}^{\infty} z_{ij} for the degree of reader i (number of books read by reader i), and m_j = \sum_{i=1}^{n} Z_i({\theta_j}) = \sum_{i=1}^{n} z_{ij} for the degree of book j (number of people having read book j). The conditional likelihood of U_1, ..., U_n given G is

P(U_1, \ldots, U_n \mid G) = \prod_{i=1}^n \Big\{ \prod_{j=1}^K \theta_i^{z_{ij}} w_j^{z_{ij}} \exp(-\theta_i w_j u_{ij}) \Big\} \exp\big(-\theta_i\, G(\Theta \setminus \{\theta_1, \ldots, \theta_K\})\big)
= \Big( \prod_{i=1}^n \theta_i^{K_i} \Big) \Big( \prod_{j=1}^K w_j^{m_j} \exp\big(-w_j \textstyle\sum_{i=1}^n \theta_i (u_{ij} - 1)\big) \Big) \exp\Big(-\textstyle\sum_{i=1}^n \theta_i\, G(\Theta)\Big).    (10)

¹In the case where \theta_i = \theta, it is possible to derive P(Z_1, ..., Z_n) and P(Z_{n+1} | Z_1, ..., Z_n) where the random measure G and the latent variables U are marginalized out. This particular case is described in supplementary material.
Proposition 2 The marginal distribution P(U_1, ..., U_n) is given by

P(U_1, \ldots, U_n) = \Big( \prod_{i=1}^n \theta_i^{K_i} \Big) \exp\Big[-\psi_\rho\Big(\sum_{i=1}^n \theta_i\Big)\Big] \prod_{j=1}^K h(\theta_j)\, \kappa\Big(m_j, \sum_{i=1}^n \theta_i u_{ij}\Big),    (11)

where \psi_\rho and \kappa are respectively defined by Eqs. (3) and (5).

Proof. The proof, detailed in supplementary material, is obtained by an application of the Palm formula for CRMs [3, 11], and is the same as that of Theorem 1 in [2].
Proposition 3 The conditional distribution of G given the latent processes U_1, ..., U_n can be expressed as

G = G^* + \sum_{j=1}^K w_j\, \delta_{\theta_j}    (12)

where G^* and (w_j) are mutually independent with

G^* ~ CRM(\rho^*, h),   \rho^*(w) = \rho(w) \exp\Big(-w \sum_{i=1}^n \theta_i\Big),    (13)

and the masses are

P(w_j \mid \text{rest}) = \frac{\rho(w_j)\, w_j^{m_j} \exp\big(-w_j \sum_{i=1}^n \theta_i u_{ij}\big)}{\kappa\big(m_j, \sum_{i=1}^n \theta_i u_{ij}\big)}.    (14)

Proof. The proof, based on the application of the Palm formula and detailed in supplementary material, is the same as that of Theorem 2 in [2].

In the case of the GGP, G^* is still a GGP of parameters (\alpha^* = \alpha, \sigma^* = \sigma, \tau^* = \tau + \sum_{i=1}^n \theta_i), while the w_j's are conditionally gamma distributed, i.e.

w_j \mid \text{rest} ~ Gamma\Big(m_j - \sigma, \tau + \sum_{i=1}^n \theta_i u_{ij}\Big).
Corollary 4 The predictive distribution of Z_{n+1} given the latent processes U_1, ..., U_n is given by

Z_{n+1} = Z^*_{n+1} + \sum_{j=1}^K z_{n+1,j}\, \delta_{\theta_j}

where the z_{n+1,j} are independent of Z^*_{n+1} with

z_{n+1,j} \mid U ~ Ber\Big(1 - \frac{\kappa(m_j, \theta_{n+1} + \sum_{i=1}^n \theta_i u_{ij})}{\kappa(m_j, \sum_{i=1}^n \theta_i u_{ij})}\Big),

where Ber is the Bernoulli distribution and Z^*_{n+1} is a homogeneous Poisson process over \Theta of intensity measure \tilde\psi_\rho(\theta_{n+1}, \sum_{i=1}^n \theta_i)\, h(\theta).

For the GGP, we have

Z^*_{n+1}(\Theta) ~ Poisson\Big( \frac{\alpha}{\sigma} \Big[ \big(\tau + \sum_{i=1}^{n+1} \theta_i\big)^\sigma - \big(\tau + \sum_{i=1}^{n} \theta_i\big)^\sigma \Big] \Big)   if \sigma \ne 0,
Z^*_{n+1}(\Theta) ~ Poisson\Big( \alpha \log\Big(1 + \frac{\theta_{n+1}}{\tau + \sum_{i=1}^{n} \theta_i}\Big) \Big)   if \sigma = 0,

and

z_{n+1,j} \mid U ~ Ber\Big( 1 - \Big(1 + \frac{\theta_{n+1}}{\tau + \sum_{i=1}^n \theta_i u_{ij}}\Big)^{-m_j + \sigma} \Big).

Finally, we consider the distribution of u_{n+1,j} \mid z_{n+1,j} = 1, u_{1:n,j}. This is given by

p(u_{n+1,j} \mid z_{n+1,j} = 1, u_{1:n,j}) \propto \kappa\Big(m_j + 1,\; u_{n+1,j}\theta_{n+1} + \sum_{i=1}^n \theta_i u_{ij}\Big)\, 1_{u_{n+1,j} \in [0,1]}.    (15)

In supplementary material, we show how to sample from this distribution by the inverse cdf method for the GGP.
[Figure 1 appears here: (a) a table of latent scores x_ij assigning readers 1-3 to books A1-A3 and B1-B7, and (b) the corresponding bipartite graph.]

Figure 1: Illustration of the generative process described in Section 2.4.
2.4 A generative process

In this section we describe the generative process for Z_i given (U_1, ..., U_{i-1}), with G integrated out. This reinforcement process, where popular books are more likely to be picked, is reminiscent of the generative process for the beta-Bernoulli process, popularized under the name of the Indian buffet process [8]. Let x_{ij} = -\log(u_{ij}) \ge 0 be latent positive scores.

Consider a set of n readers who successively enter a library with an infinite number of books. Each reader i = 1, ..., n has some interest in reading, quantified by a positive parameter \theta_i > 0. The first reader picks a number K_1 ~ Poisson(\psi_\rho(\theta_1)) of books. Then he assigns a positive score x_{1j} = -\log(u_{1j}) > 0 to each of these books, where u_{1j} is drawn from distribution (15).

Now consider that reader i enters the library, and knows about the books read by previous readers and their scores. Let K be the total number of books chosen by the previous i-1 readers, and m_j the number of times each of the K books has been read. Then for each book j = 1, ..., K, reader i will choose this book with probability

1 - \frac{\kappa(m_j, \theta_i + \sum_{k=1}^{i-1} \theta_k u_{kj})}{\kappa(m_j, \sum_{k=1}^{i-1} \theta_k u_{kj})}

and will then choose an additional number of K_i^+ books, where

K_i^+ ~ Poisson\Big(\tilde\psi_\rho\Big(\theta_i, \sum_{k=1}^{i-1} \theta_k\Big)\Big).

Reader i will then assign a score x_{ij} = -\log u_{ij} > 0 to each book j he has read, where u_{ij} is drawn from (15). Otherwise he will set the default score x_{ij} = 0. This generative process is illustrated in Figure 1 together with the underlying bipartite graph; a minimal simulation sketch is given below. Figure 2 shows draws from this generative process with a GGP with parameters \theta_i = 2 for all i, \tau = 1, and different values of \alpha and \sigma.
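The sketch below implements this process in the gamma-process case (σ = 0), where the κ ratio and ψ̃ are available in closed form and the score distribution (15) can be sampled by a closed-form inverse cdf. It is an illustration of ours, not the authors' code; all variable names are ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_score(m_j, theta_i, c):
    """Inverse-cdf draw of u in [0,1] from p(u) proportional to
    (c + theta_i*u)^(-(m_j + 1)), which is (15) for the gamma process;
    c = tau + sum_k theta_k * u_kj over previous readers."""
    r = rng.random()
    if m_j == 0:  # logarithmic cdf when the exponent is -1
        return c * ((1 + theta_i / c) ** r - 1) / theta_i
    a, b = c, c + theta_i
    val = (1 - r) * a ** (-m_j) + r * b ** (-m_j)
    return (val ** (-1.0 / m_j) - a) / theta_i

def generate(theta, alpha, tau):
    """Readers-books generative process for the gamma process (sigma = 0):
    psi(t) = alpha*log(1 + t/tau); kappa ratio = ((tau+c)/(tau+theta+c))^m."""
    m, S, Z = [], [], []           # book degrees, accumulated theta_k*u_kj, reads
    for i, th in enumerate(theta):
        zi = []
        for j in range(len(m)):    # revisit existing books
            p = 1.0 - ((tau + S[j]) / (tau + th + S[j])) ** m[j]
            if rng.random() < p:
                u = sample_score(m[j], th, tau + S[j])
                m[j] += 1; S[j] += th * u; zi.append(j)
            else:
                S[j] += th         # u_ij = 1 for unread books
        T = sum(theta[:i])
        k_new = rng.poisson(alpha * np.log(1 + th / (tau + T)))
        for _ in range(k_new):     # brand-new books (m_j = 0 before this read)
            u = sample_score(0, th, tau + T)
            m.append(1); S.append(th * u); zi.append(len(m) - 1)
        Z.append(zi)
    return Z, m

Z, m = generate([2.0] * 30, alpha=2.0, tau=1.0)
```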
2.5 Gibbs sampling

From the results derived in Proposition 3, a Gibbs sampler can be easily derived to approximate the posterior distribution P(G, U | Z). The sampler successively updates U given (w, G^*(\Theta)), and then (w, G^*(\Theta)) given U. We present here the conditional distributions in the GGP case. For i = 1, ..., n and j = 1, ..., K, set u_{ij} = 1 if z_{ij} = 0; otherwise sample

u_{ij} \mid z_{ij}, w_j, \theta_i ~ rExp(\theta_i w_j, 1)

where rExp(\lambda, a) is the right-truncated exponential distribution with pdf \lambda \exp(-\lambda x)/(1 - \exp(-\lambda a))\, 1_{x \in [0,a]}, from which we can sample exactly. For j = 1, ..., K, sample

w_j \mid U, \theta ~ Gamma\Big(m_j - \sigma, \tau + \sum_{i=1}^n \theta_i u_{ij}\Big),

and the total mass G^*(\Theta) follows a distribution g^*(w) \propto g(w) \exp(-w \sum_{i=1}^n \theta_i), where g(w) is the distribution of G(\Theta). In the case of the GGP, g^*(w) is an exponentially tilted stable distribution for which exact samplers exist [4]. In the particular case of the gamma process, we have the simple update G^*(\Theta) ~ Gamma(\alpha, \tau + \sum_{i=1}^n \theta_i). A compact sketch of this sampler in the gamma-process case is given below.
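A compact sketch of one sweep of this sampler, again in the gamma-process case, could look as follows (illustrative only; Z is the observed binary matrix restricted to the K observed books, and theta is a numpy array):

```python
import numpy as np

rng = np.random.default_rng(2)

def gibbs_step(Z, theta, alpha, tau, w):
    """One sweep of the Gibbs sampler of Section 2.5 for the gamma process
    (sigma = 0). This is an illustrative sketch, not the authors' code."""
    n, K = Z.shape
    # 1) latent scores: u_ij = 1 if z_ij = 0, else right-truncated exponential
    lam = np.outer(theta, w)                      # rates theta_i * w_j
    r = rng.random((n, K))
    U = np.where(Z == 1,
                 -np.log(1 - r * (1 - np.exp(-lam))) / lam,  # inverse cdf on [0,1]
                 1.0)
    # 2) book weights given U: Gamma(m_j - sigma, tau + sum_i theta_i u_ij)
    m = Z.sum(axis=0)
    w_new = rng.gamma(shape=m, scale=1.0 / (tau + U.T @ theta))
    # 3) total mass of the unobserved books
    G_star = rng.gamma(shape=alpha, scale=1.0 / (tau + theta.sum()))
    return U, w_new, G_star
```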
[Figure 2 appears here: six simulated reader-book incidence matrices, panels (a) α = 1, σ = 0; (b) α = 5, σ = 0; (c) α = 10, σ = 0; (d) α = 2, σ = 0.1; (e) α = 2, σ = 0.5; (f) α = 2, σ = 0.9; readers on the vertical axis and books on the horizontal axis.]

Figure 2: Realisations from the generative process of Section 2.4 with a GGP of parameters \theta_i = 2, \tau = 1 and various values of \alpha and \sigma.
3 Update of \theta_i and other hyperparameters

We may also consider the weight parameters \theta_i to be unknown and estimate them from the graph. We can assign a gamma prior \theta_i ~ Gamma(a_\theta, b_\theta) with parameters (a_\theta > 0, b_\theta > 0) and update it conditionally on the other variables with

\theta_i \mid G, U ~ Gamma\Big(a_\theta + \sum_{j=1}^K z_{ij},\; b_\theta + \sum_{j=1}^K w_j u_{ij} + G^*(\Theta)\Big).

In this case, the marginal distribution of Z_i(\Theta), hence the degree distribution of readers, follows a continuous mixture of Poisson distributions, which offers more flexibility in the modelling.

We may also go a step further and consider that there is an infinite number of readers, with weights \theta_i associated to a given CRM \Gamma ~ CRM(\rho_\theta, h_\theta) on a measurable space \tilde\Theta of readers. We then have \Gamma = \sum_{i=1}^{\infty} \theta_i \delta_{\tilde\theta_i}. This provides a lot of flexibility in the modelling of the distribution of the degree of readers, allowing in particular to obtain a power-law behavior, as shown in Section 5. We focus here on the case where \Gamma is drawn from a generalized gamma process of parameters (\alpha_\theta, \sigma_\theta, \tau_\theta) for simplicity. Conditionally on (w, G^*(\Theta), U), we have \Gamma = \Gamma^* + \sum_{i=1}^n \theta_i \delta_{\tilde\theta_i}, where for i = 1, ..., n,

\theta_i \mid G, U ~ Gamma\Big(\sum_{j=1}^K z_{ij} - \sigma_\theta,\; \tau_\theta + \sum_{j=1}^K w_j u_{ij} + G^*(\Theta)\Big)

and \Gamma^* ~ CRM(\rho^*_\theta, h_\theta) with \rho^*_\theta(\theta) = \rho_\theta(\theta) \exp\big(-\theta\, (\sum_{j=1}^K w_j + G^*(\Theta))\big). The update for (w, G^*) conditional on (U, \Gamma, \Gamma^*(\tilde\Theta)) is now, for j = 1, ..., K,

w_j \mid U, \Gamma ~ Gamma\Big(m_j - \sigma,\; \tau + \sum_{i=1}^n \theta_i u_{ij} + \Gamma^*(\tilde\Theta)\Big)

and G^* ~ CRM(\rho^*, h) with \rho^*(w) = \rho(w) \exp\big(-w\, (\sum_{i=1}^n \theta_i + \Gamma^*(\tilde\Theta))\big). Note that there is now symmetry in the treatment of books and readers. For the scale parameter \tau of the GGP, we can assign a gamma prior \tau ~ Gamma(a_\tau, b_\tau) and update it with \tau \mid \Gamma ~ Gamma\big(a_\tau + K,\; b_\tau + \sum_{i=1}^n \theta_i + \Gamma^*(\tilde\Theta)\big). Other parameters of the GGP can be updated using a Metropolis-Hastings step.
4 Discussion

Power-law behavior. We now discuss some of the properties of the model, in the case of the GGP. The total number of books read by n readers is O(n^\sigma). Moreover, for \sigma > 0, the degree distribution follows a power law: asymptotically, the proportion of books read by m readers is O(m^{-1-\sigma}) (details in supplementary material). These results are similar to those of the stable IBP [18]. However, in our case, a similar behavior can be obtained for the degree distribution of readers when assigning a GGP to it, while it will always be Poisson for the stable IBP.

Connection to IBP. The stable beta process [18] is a particular case of our construction, obtained by setting weights \theta_i = \gamma and Lévy measure

\rho(w) = \alpha\gamma \frac{\Gamma(1 + c)}{\Gamma(1 - \sigma)\Gamma(c + \sigma)} (1 - e^{-\gamma w})^{-\sigma - 1} e^{-\gamma w (c + \sigma)}.    (16)

The proof is obtained by a change of variable from the Lévy measure of the stable beta process.

Extensions to latent factor models. So far, we have assumed that the binary matrix Z was observed. The proposed model can also be used as a prior for latent factor models, similarly to the IBP. As an example of the potential usefulness of our model compared to the IBP, consider the extraction of features from time series of different lengths. Longer time series are more likely to exhibit more features than shorter ones, and it is sensible in this case to assume different weights \theta_i. In a more general setting, we may want \theta_i to depend on a set of metadata associated to reader i. Inference for latent factor models is described in supplementary material.
5 Illustrations on real-world social networks

We now consider estimating the parameters of our model and evaluating its predictive performance on six bipartite social networks of various sizes. We first provide a short description of these networks. The dataset "Boards" contains information about members of the boards of Norwegian companies sitting on the same board in August 2011.² "Forum" is a network of web users contributing to the same forums.³ "Books" concerns data collected from the Book-Crossing community about users providing ratings on books,⁴ from which we extracted the bipartite network. "Citations" is the co-authorship network based on preprints posted to the Condensed Matter section of arXiv between 1995 and 1999 [15]. "Movielens100k" contains information about users rating particular movies,⁵ from which we extracted the bipartite network. Finally, "IMDB" contains information about actors co-starring in a movie.⁶ The sizes of the different networks are given in Table 1.
Dataset         n      K       Edges    S-IBP          SG            IG              GGP
Board           355    5766    1746     9.82 (29.8)    8.3 (30.8)    -145.1 (81.9)   -68.6 (31.9)
Forum           899    552     7089     -6.7e3         -6.7e3        -5.5e3          -5.6e3
Books           5064   36275   49997    83.1           214           4.6e4           4.4e4
Citations       16726  22016   58595    -3.7e4         -3.7e4        -3.1e4          -3.4e4
Movielens100k   943    1682    100000   -6.7e4         -6.7e4        -5.5e4          -5.5e4
IMDB            28088  178074  341313   -1.5e5         -1.5e5        -1.1e5          -1.1e5

Table 1: Size of the different datasets and test log-likelihood of four different models.
We evaluate the fit of four different models on these datasets. First, the stable IBP [18] with parameters (\alpha_{IBP}, \sigma_{IBP}, c_{IBP}) (S-IBP). Second, our model where the parameter \theta is the same over the different readers and is assigned a flat prior (SG). Third, our model where each \theta_i ~ Gamma(a_\theta, b_\theta), where (a_\theta, b_\theta) are unknown parameters with a flat improper prior (IG). Finally, our model with a GGP model for the \theta_i, with parameters (\alpha_\theta, \sigma_\theta, \tau_\theta) (GGP). We divide each dataset between a training set containing 3/4 of the readers and a test set with the remaining readers.

²Data can be downloaded from http://www.boardsandgender.com/data.php
³Data for the forum and citation datasets can be downloaded from http://toreopsahl.com/datasets/
⁴http://www.informatik.uni-freiburg.de/~cziegler/BX/
⁵The dataset can be downloaded from http://www.grouplens.org
⁶The dataset can be downloaded from http://www.cise.ufl.edu/research/sparse/matrices/Pajek/IMDB.html
[Figure 3 appears here: eight log-log degree-distribution panels, (a) S-IBP, (b) GS, (c) IG, (d) GGP for movies and (e) S-IBP, (f) GS, (g) IG, (h) GGP for actors.]

Figure 3: Degree distributions for movies (a-d) and actors (e-h) for the IMDB movie-actor dataset with four different models. Data are represented by red plus and samples from the model by blue crosses.
[Figure 4 appears here: eight log-log degree-distribution panels, (a) S-IBP, (b) GS, (c) IG, (d) GGP for readers and (e) S-IBP, (f) GS, (g) IG, (h) GGP for books.]

Figure 4: Degree distributions for readers (a-d) and books (e-h) for the BX books dataset with four different models. Data are represented by red plus and samples from the model by blue crosses.
For each model, we approximate the posterior mean of the unknown parameters (respectively (\alpha_{IBP}, \sigma_{IBP}, c_{IBP}), \alpha, (a_\theta, b_\theta) and (\alpha_\theta, \sigma_\theta, \tau_\theta) for S-IBP, SG, IG and GGP) given the training network, using a Gibbs sampler run for 10000 burn-in iterations followed by 10000 samples; we then evaluate the log-likelihood of the estimated model on the test data. For GGP, we use \hat\alpha_{\theta,test} = \hat\alpha_\theta/3 to take into account the different sample sizes. For "Boards", we do 10 replications with random permutations given the small sample size, and report the standard deviation together with the mean value. Table 1 shows the results over the different networks for the different models. Typically, S-IBP and SG give very similar results. This is not surprising, as they share the same properties, i.e. a Poissonian degree distribution for readers and a power-law degree distribution for books. Both methods perform better solely on the Board dataset, where the Poisson assumption on the number of people sitting on the same board makes sense. On all the other datasets, IG and GGP perform better and similarly, with slightly better performances for IG. These two models are better able to capture the power-law distribution of the degrees of readers. These properties are shown in Figures 3 and 4, which respectively give the empirical degree distributions of the test network and a draw from the estimated models, for the IMDB dataset and the Books dataset. It is clearly seen that all four models are able to capture the power-law behavior of the degree distribution of actors (Figure 3(e-h)) or books (Figure 4(e-h)). However, only IG and GGP are able to capture the power-law behavior of the degree distribution of movies (Figure 3(a-d)) or readers (Figure 4(a-d)).
References
[1] A. Brix. Generalized gamma measures and shot-noise Cox processes. Advances in Applied Probability, 31(4):929-953, 1999.
[2] F. Caron and Y. W. Teh. Bayesian nonparametric models for ranked data. In Neural Information Processing Systems (NIPS), 2012.
[3] D. J. Daley and D. Vere-Jones. An introduction to the theory of point processes. Springer Verlag, 2008.
[4] L. Devroye. Random variate generation for exponentially and polynomially tilted stable distributions. ACM Transactions on Modeling and Computer Simulation (TOMACS), 19(4):18, 2009.
[5] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. Sharing features among dynamical systems with beta processes. In Advances in Neural Information Processing Systems, volume 22, pages 549-557, 2009.
[6] A. Goldenberg, A. X. Zheng, S. E. Fienberg, and E. M. Airoldi. A survey of statistical network models. Foundations and Trends in Machine Learning, 2(2):129-233, 2010.
[7] D. Görür, F. Jäkel, and C. E. Rasmussen. A choice model with infinitely many latent features. In Proceedings of the 23rd International Conference on Machine Learning, pages 361-368. ACM, 2006.
[8] T. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In NIPS, 2005.
[9] T. Griffiths and Z. Ghahramani. The Indian buffet process: an introduction and review. Journal of Machine Learning Research, 12(April):1185-1224, 2011.
[10] N. L. Hjort. Nonparametric Bayes estimators based on beta processes in models for life history data. The Annals of Statistics, 18(3):1259-1294, 1990.
[11] L. F. James, A. Lijoi, and I. Prünster. Posterior analysis for normalized random measures with independent increments. Scandinavian Journal of Statistics, 36(1):76-97, 2009.
[12] J. F. C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59-78, 1967.
[13] J. F. C. Kingman. Poisson processes, volume 3. Oxford University Press, USA, 1993.
[14] M. E. J. Newman. The structure and function of complex networks. SIAM Review, pages 167-256, 2003.
[15] M. E. J. Newman, S. H. Strogatz, and D. J. Watts. Random graphs with arbitrary degree distributions and their applications. Physical Review E, 64(2):26118, 2001.
[16] M. E. J. Newman, D. J. Watts, and S. H. Strogatz. Random graph models of social networks. Proceedings of the National Academy of Sciences, 99:2566, 2002.
[17] J. J. Ramasco, S. N. Dorogovtsev, and R. Pastor-Satorras. Self-organization of collaboration networks. Physical Review E, 70(3):036106, 2004.
[18] Y. W. Teh and D. Görür. Indian buffet processes with power-law behavior. In NIPS, 2009.
[19] R. Thibaux and M. Jordan. Hierarchical beta processes and the Indian buffet process. In International Conference on Artificial Intelligence and Statistics, volume 11, pages 564-571, 2007.
[20] F. Wood, T. L. Griffiths, and Z. Ghahramani. A non-parametric Bayesian method for inferring hidden causes. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, volume 22, 2006.
Cocktail Party Processing via Structured Prediction
Yuxuan Wang¹, DeLiang Wang¹,²
¹Department of Computer Science and Engineering
²Center for Cognitive Science
The Ohio State University
Columbus, OH 43210
{wangyuxu,dwang}@cse.ohio-state.edu
Abstract
While human listeners excel at selectively attending to a conversation in a cocktail
party, machine performance is still far inferior by comparison. We show that the
cocktail party problem, or the speech separation problem, can be effectively approached via structured prediction. To account for temporal dynamics in speech,
we employ conditional random fields (CRFs) to classify speech dominance within
each time-frequency unit for a sound mixture. To capture complex, nonlinear relationship between input and output, both state and transition feature functions
in CRFs are learned by deep neural networks. The formulation of the problem
as classification allows us to directly optimize a measure that is well correlated
with human speech intelligibility. The proposed system substantially outperforms
existing ones in a variety of noises.
1 Introduction
The cocktail party problem, or the speech separation problem, is one of the central problems in
speech processing. A particularly difficult scenario is monaural speech separation, in which mixtures are recorded by a single microphone and the task is to separate the target speech from its
interference. This is a severely underdetermined figure-ground separation problem, and has been
studied for decades with limited success.
Researchers have attempted to solve the monaural speech separation problem from various angles.
In signal processing, speech enhancement (e.g., [1, 2]) has been extensively studied, and assumptions regarding the statistical properties of noise are crucial to its success. Model-based methods
(e.g., [3]) work well in constrained environments, and source models need to be trained in advance.
Computational auditory scene analysis (CASA) [4] is inspired by how human auditory system functions [5]. CASA has the potential to deal with general acoustic environments but existing systems
have limited performance, particularly in dealing with unvoiced speech.
Recent studies suggest a new formulation to the cocktail party problem, where the focus is to classify whether a time-frequency (T-F) unit is dominated by the target speech [6]. Motivated by this
viewpoint, we propose to approach the monaural speech separation problem via structured prediction. The use of structured predictors, as opposed to binary classifiers, is motivated by temporal
dynamics in speech signal. Our study makes the following contributions: (1) we demonstrate that
modeling temporal dynamics via structured prediction can significantly improve separation; (2) to
capture nonlinearity, we propose a new structured prediction model that makes use of the discriminative feature learning power of deep neural networks; and (3) instead of classification accuracy, we
show how to directly optimize a measure that is well correlated with human speech intelligibility.
1
2 Separation as binary classification

We aim to estimate a time-frequency matrix called the ideal binary mask (IBM). The IBM is a binary matrix constructed from premixed target and interference, where 1 indicates that the target energy exceeds the interference energy by a local signal-to-noise ratio (SNR) criterion (LC) in the corresponding T-F unit, and 0 otherwise. The IBM is defined as:

IBM(t, f) = 1 if SNR(t, f) > LC, and 0 otherwise,

where SNR(t, f) denotes the local SNR (in decibels) within the T-F unit at time t and frequency f. We adopt the common choice of LC = 0 in this paper [7]. Despite its simplicity, adopting the
the auditory masking phenomenon whereby a stronger sound tends to mask a weaker one within a
critical band. Second, unlike other objectives such as maximizing SNR, it is well established that
large human speech intelligibility improvements result from IBM processing, even for very low SNR
mixtures [7?9]. Improving human speech intelligibility is considered as a gold standard for speech
separation. Third, IBM estimation naturally leads to classification, which opens the cocktail party
problem to a plethora of machine learning techniques.
We propose to formulate IBM estimation as binary classification as follows, which is a form of
supervised learning. A sound mixture with the 16 kHz sampling rate is passed through a 64-channel
gammatone filterbank spanning from 50 Hz to 8000 Hz on the equivalent rectangular bandwidth
rate scale. The output from each filter channel is divided into 20-ms frames with 10-ms frame shift,
producing a cochleagram [4]. Due to different spectral properties of speech, a subband classifier
is trained for each filter channel independently, with the IBM providing training labels. Acoustic
features for each subband classifier are extracted from T-F units in the cochleagram. The target
speech is separated by binary weighting of the cochleagram using the estimated IBM [4].
Several recent studies have attempted to directly estimate the IBM via classification. By employing Gaussian mixture models (GMMs) as classifiers and amplitude modulation spectrograms (AMS) as features, Kim et al. [10] show that estimated masks can improve human speech intelligibility in noise. Han and Wang [11] have improved Kim et al.'s system by employing support vector machines (SVMs) as classifiers. Wang et al. [12] propose a set of complementary acoustic features that shows further improvements over previous systems. The complementary feature is a concatenation of AMS, relative spectral transform and perceptual linear prediction (RASTA-PLP), mel-frequency cepstral coefficients (MFCC), and pitch-based features.
Because the ratio of 1's to 0's in the IBM is often skewed, simply using classification accuracy as the evaluation criterion may not be appropriate. Speech intelligibility studies [9, 10] have evaluated the influence of the hit (HIT) and false-alarm (FA) rates on intelligibility scores. The difference, the HIT-FA rate, is found to be well correlated with human speech intelligibility in noise [10]. The HIT rate is the percentage of correctly classified target-dominant T-F units (1's) in the IBM, and the FA rate is the percentage of wrongly classified interference-dominant T-F units (0's). Therefore, it is desirable to design a separation algorithm that maximizes the HIT-FA of the output mask; a minimal sketch of these quantities is given below.
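The following numpy fragment is a minimal sketch (ours, not the authors' code) of the two quantities just defined; it assumes per-unit energies of the premixed target and interference are available:

```python
import numpy as np

def ideal_binary_mask(target_cgram, noise_cgram, lc_db=0.0):
    """IBM from premixed target/interference cochleagrams (energy per T-F unit).
    Unit (t, f) is labeled 1 when the local SNR exceeds the criterion LC."""
    snr_db = 10.0 * np.log10(target_cgram / (noise_cgram + 1e-12) + 1e-12)
    return (snr_db > lc_db).astype(int)

def hit_minus_fa(estimated_mask, ibm):
    """HIT - FA: hit rate over target-dominant units minus false-alarm rate
    over interference-dominant units; well correlated with intelligibility."""
    hit = estimated_mask[ibm == 1].mean()   # fraction of 1s correctly kept
    fa = estimated_mask[ibm == 0].mean()    # fraction of 0s wrongly labeled 1
    return hit - fa
```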
3 Proposed system

Dictated by speech production mechanisms, the IBM exhibits highly structured, rather than random, patterns. Previous systems do not explicitly model such structure. As a result, temporal dynamics, which is a fundamental characteristic of speech, is largely ignored in previous work. Separation systems accounting for temporal dynamics exist. For example, Mysore et al. [13] incorporate temporal dynamics using HMMs. Hershey et al. [14] consider different levels of dynamic constraints. However, these works do not treat separation as classification. Contrary to standard binary classifiers, structured prediction models are able to model correlations in the output. In this paper, we treat unit classification at each filter channel as a sequence labeling problem and employ linear-chain conditional random fields (CRFs) [15] as subband classifiers.
3.1 Conditional random fields

Different from an HMM, a CRF is a discriminative model and does not need independence assumptions on the features, making it more suitable to our task. A CRF models the posterior probability P(y|x) as follows. Denoting y as a label sequence and x as an input sequence,

P(y \mid x) = \frac{\exp\big(\sum_t w^T f(y, x, t)\big)}{Z(x)}.    (1)

Here t indexes time frames, w is the parameter vector to learn, and Z(x) = \sum_{y'} \exp\big(\sum_t w^T f(y', x, t)\big) is the partition function. f is a vector-valued feature function associated with each local site (a T-F unit in our task), and is often categorized into state feature functions s(y_t, x, t) and transition feature functions t(y_{t-1}, y_t, x, t). State feature functions define the local discriminant functions for each T-F unit, and transition feature functions capture the interaction between neighboring labels. We assume a linear-chain setting and the first-order Markovian property, i.e., only interactions between two neighboring units in time are modeled. In our task, we can simply use acoustic feature vectors in each T-F unit as state feature functions and their concatenations as transition feature functions:

s(y_t, x, t) = [1_{(y_t = 0)} x_t,\; 1_{(y_t = 1)} x_t]^T,    (2)
t(y_{t-1}, y_t, x, t) = [1_{(y_{t-1} = y_t)} z_t,\; 1_{(y_{t-1} \ne y_t)} z_t]^T,    (3)

where 1 is the indicator function and z_t = [x_{t-1}, x_t]^T. Equation (3) essentially encodes temporal continuity in the IBM. To simplify notation, all feature functions are written as f(y_{t-1}, y_t, x, t) in the remainder of the paper.

Training estimates w, and is usually done by maximizing the conditional log-likelihood on a training set T = {x^{(m)}, y^{(m)}}, i.e., we seek w by

\max_w \sum_m \log p(y^{(m)} \mid x^{(m)}, w) + R(w),    (4)

where m is the index of a training sample and R(w) is a regularizer on w (we use \ell_2 in this paper). For gradient ascent, a popular choice is the limited-memory BFGS (L-BFGS) algorithm [16]. A small sketch of the feature construction in (2)-(3) is given below.
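The feature construction of Eqs. (2)-(3) can be written down directly; the sketch below is our illustration, with the indicator realized as slot selection:

```python
import numpy as np

def state_features(y_t, x_t):
    """s(y_t, x, t) of Eq. (2): the acoustic vector placed in the slot
    selected by the label; the other slot stays zero."""
    d = len(x_t)
    s = np.zeros(2 * d)
    s[y_t * d:(y_t + 1) * d] = x_t
    return s

def transition_features(y_prev, y_t, x_prev, x_t):
    """t(y_{t-1}, y_t, x, t) of Eq. (3): the concatenated context vector
    z_t routed by whether the label changed between frames."""
    z = np.concatenate([x_prev, x_t])
    t = np.zeros(2 * len(z))
    slot = 0 if y_prev == y_t else 1
    t[slot * len(z):(slot + 1) * len(z)] = z
    return t
```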
3.2 Nonlinear expansion using deep neural networks

A CRF is a log-linear model, which has only linear modeling power. As acoustic features are generally not linearly separable, the direct use of CRFs is unlikely to produce good results. In the following, we propose a method to transform the standard CRF into a nonlinear sequence classifier.

We employ pretrained deep neural networks (DNNs) to capture the nonlinearity between input and output. DNNs have received widespread attention since Hinton et al.'s paper [17]. DNNs can be viewed as hierarchical feature detectors that learn increasingly complex feature mappings as the number of hidden layers increases. To deal with problems such as vanishing gradients, Hinton et al. suggest to first pretrain a DNN using a stack of restricted Boltzmann machines (RBMs) in an unsupervised and layerwise fashion. The resulting network weights are then fine-tuned by supervised backpropagation.

We first train a DNN in the standard way to classify speech dominance in each T-F unit. After pretraining and supervised fine-tuning, we then take the last-hidden-layer representations from the DNN as learned features to train the CRF. In a discriminatively trained DNN, the weights from the last hidden layer to the output layer define a linear classifier, hence the last-hidden-layer representations are more amenable to linear classification. In other words, we replace x by h in equations (1)-(4), where h represents the learned hidden features. This way CRFs greatly benefit from the nonlinear modeling power of deep architectures.

To better encode local contextual information, we could use a window (across both time and frequency) of learned features to label the current T-F unit. A more parsimonious way is to use a window of posteriors estimated by DNNs as features to train the CRF, which can dramatically reduce dimensionality. We note in passing that the correlations across both time and frequency can also be encoded at the model level, e.g., by using grid-structured CRFs; however, the decoding algorithm may substantially increase the computational complexity of the system. A toy sketch of this feature-extraction pipeline is given below.
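The following fragment sketches both variants: extracting last-hidden-layer activations from a trained DNN, and forming a window of DNN posteriors. It is an illustration under assumed network details (sigmoid units, numpy weight matrices), not the authors' implementation:

```python
import numpy as np

def dnn_hidden_features(x, weights, biases):
    """Forward pass of a trained DNN up to its last hidden layer; the
    activations h replace the raw acoustic features x when training the CRF.
    `weights`/`biases` are assumed to come from pretraining + fine-tuning."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):   # stop before the output layer
        h = 1.0 / (1.0 + np.exp(-(h @ W + b)))    # sigmoid hidden units
    return h

def posterior_window(posteriors, t, f, radius=2):
    """Alternative low-dimensional CRF input: a (time, frequency) window of
    DNN posteriors around unit (t, f), zero-padded at the borders."""
    T, F = posteriors.shape
    win = np.zeros((2 * radius + 1, 2 * radius + 1))
    for dt in range(-radius, radius + 1):
        for df in range(-radius, radius + 1):
            if 0 <= t + dt < T and 0 <= f + df < F:
                win[dt + radius, df + radius] = posteriors[t + dt, f + df]
    return win.ravel()
```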
We want to point out that an important advantage of using neural networks for feature learning is their efficiency in the test phase; once trained, the nonlinear feature extraction of a DNN is extremely fast (it only involves a forward pass). This is, however, not always true for other methods. For example, sparse coding may need to solve a new optimization problem to get the features. Test-phase efficiency is crucial for real-time implementation of a speech separation system.
There is related work on developing nonlinear sequence classifiers in the machine learning community. For example, van der Maaten et al. [18] and Morency et al. [19] consider incorporating hidden
variables into the training and inference in CRF. Peng et al. [20] investigate a combination of neural
networks and CRFs. Other related studies include [21] and [22]. The proposed model differs from
the previous methods in that (1) discriminatively trained deep architecture is used, and/or (2) a CRF
instead of a Viterbi decoder is used on top of a neural network for sequence labeling, and/or (3)
nonlinear features are also used in modeling transitions. In addition, the use of a contextual window
and the change of the objective function discussed in the next subsection is specifically tailored to
the speech separation problem.
3.3 Maximizing HIT−FA rate

As argued before, it is desirable to train a classifier to maximize the HIT−FA rate of the estimated
mask. In this subsection, we show how to change the objective function and efficiently calculate the
gradients in the CRF. Since subband classifiers are used, we aim to maximize the channelwise HIT−FA.
Denote the output label as u_t ∈ {0, 1} and the true label as y_t ∈ {0, 1}. The per-utterance HIT−FA
rate can be expressed as Σ_t u_t y_t / Σ_t y_t − Σ_t u_t (1 − y_t) / Σ_t (1 − y_t), where the first term is the
HIT rate and the second the FA rate. To make the objective function differentiable, we replace u_t by
the marginal probability p(y_t = 1|x); hence we seek w by maximizing the HIT−FA on a training
set:

    max_w  [ Σ_m Σ_t p(y_t^{(m)} = 1 | x^{(m)}, w) y_t^{(m)} / Σ_m Σ_t y_t^{(m)} ]
         − [ Σ_m Σ_t p(y_t^{(m)} = 1 | x^{(m)}, w) (1 − y_t^{(m)}) / Σ_m Σ_t (1 − y_t^{(m)}) ].    (5)
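As a concrete illustration, the soft HIT−FA of (5) is straightforward to compute from the marginals; a minimal sketch (array names are illustrative; p holds the marginals p(y_t = 1 | x, w) and y the binary IBM labels, each concatenated over the training utterances):

    import numpy as np

    def soft_hit_minus_fa(p, y):
        # Differentiable surrogate of HIT-FA, Eq. (5): hard decisions u_t
        # are replaced by marginal probabilities.
        hit = np.sum(p * y) / np.sum(y)               # soft HIT rate
        fa = np.sum(p * (1.0 - y)) / np.sum(1.0 - y)  # soft FA rate
        return hit - fa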
Clearly, computing the gradient of (5) boils down to computing the gradient of the marginal. A
speech utterance (sentence) typically spans several hundred time frames, therefore numerical
stability is critically important in our task. As can be seen later, computing the gradient of the
marginal requires the gradient of the forward/backward scores. We adopt Rabiner's scaling trick [23]
used in HMMs to normalize the forward/backward score at each time point. Specifically, define
α(t, u) and β(t, u) as the forward and backward score of label u at time t, respectively. We normalize
the forward score such that Σ_u α(t, u) = 1, and use the resulting scaling to normalize the
backward score. Defining the potential function Φ_t(v, u) = exp(w^T f(v, u, x, t)), the recurrence of the
normalized forward/backward score is written as,

    α(t, u) = Σ_v α(t − 1, v) Φ_t(v, u) / s(t),    (6)
    β(t, u) = Σ_v β(t + 1, v) Φ_t(u, v) / s(t + 1),    (7)

where s(t) = Σ_u Σ_v α(t − 1, v) Φ_t(v, u). It is easy to show that Z(x) = Π_t s(t), and now
the marginal has the simpler form p(y_t | x, w) = α(t, y_t) β(t, y_t). Therefore, the gradient of the
marginal is,

    ∂p(y_t | x, w) / ∂w = G_α(t, y_t) β(t, y_t) + α(t, y_t) G_β(t, y_t),    (8)

where G_α and G_β are the gradients of the normalized forward and backward score, respectively.
Due to score normalization, G_α and G_β will very unlikely overflow. We now show that G_α can be
calculated recursively. Let q(t, u) = Σ_v α(t − 1, v) Φ_t(v, u); we have

    G_α(t, u) = ∂α(t, u)/∂w = [ (∂q(t, u)/∂w) Σ_v q(t, v) − Σ_v (∂q(t, v)/∂w) q(t, u) ] / ( Σ_v q(t, v) )²,    (9)

and,

    ∂q(t, u)/∂w = Σ_v G_α(t − 1, v) Φ_t(v, u) + Σ_v α(t − 1, v) Φ_t(v, u) f(v, u, x, t).    (10)
Figure 1: HIT−FA results. (a)-(c): matched-noise test condition; (d)-(f): unmatched-noise test
condition. [Six panels comparing DNN, DNN*, DNN−CRF and DNN−CRF* at −10, −5 and 0 dB:
(a)/(d) overall, (b)/(e) voiced, (c)/(f) unvoiced; plots omitted.]
Figure 2: Channelwise HIT−FA comparisons on the 0 dB test mixtures. [Three panels over frequency
channels: (a) overall, (b) voiced speech intervals, (c) unvoiced speech intervals; curves for DNN,
DNN*, DNN−CRF and DNN−CRF*; plots omitted.]
The derivation of G_β is similar, thus omitted. The time complexity of calculating G_α and G_β is
O(L|S|²), where L and |S| are the utterance length and the size of the label set, respectively. This
is the same as the forward-backward recursion.

The objective function in (5) is not concave. Since high accuracy correlates with high HIT−FA, a
safe practice is to use a solution from (4) as a warm start for the subsequent optimization of (5). For
feature learning, the DNN is also trained using (5) in the final system. The gradient calculation is much
simpler due to the absence of transition features. We found that L-BFGS performs well and shows
fast and stable convergence for both feature learning and CRF training.
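To make recursions (6)-(10) concrete, the sketch below computes the scaled forward scores and their gradients in NumPy. It is a minimal illustration, assuming a uniform initial forward score; Phi holds the potentials Φ_t(v, u) and F the feature vectors f(v, u, x, t), with illustrative shapes:

    import numpy as np

    def scaled_forward_with_grad(Phi, F):
        # Scaled forward recursion (6) and its gradient via (9)-(10).
        # Phi: (T, S, S) potentials, Phi[t, v, u] = exp(w^T f(v, u, x, t))
        # F:   (T, S, S, D) feature vectors f(v, u, x, t)
        # Returns alpha (T, S), scales s (T,), and G_alpha (T, S, D).
        T, S, _ = Phi.shape
        D = F.shape[-1]
        alpha = np.zeros((T, S))
        G = np.zeros((T, S, D))          # gradient of the *normalized* alpha
        s = np.zeros(T)

        # base case: treat t = 0 as if alpha(-1, v) were uniform (assumption);
        # the constant 1/S cancels in the quotient rule below
        q = Phi[0].sum(axis=0)                         # q(0, u)
        dq = np.einsum('vu,vud->ud', Phi[0], F[0])
        s[0] = q.sum()
        alpha[0] = q / s[0]
        G[0] = (dq * s[0] - q[:, None] * dq.sum(axis=0)) / s[0] ** 2

        for t in range(1, T):
            q = alpha[t - 1] @ Phi[t]    # q(t, u) = sum_v alpha(t-1, v) Phi_t(v, u)
            # dq/dw = sum_v G(t-1, v) Phi + sum_v alpha(t-1, v) Phi f,  Eq. (10)
            dq = np.einsum('vd,vu->ud', G[t - 1], Phi[t]) \
               + np.einsum('v,vu,vud->ud', alpha[t - 1], Phi[t], F[t])
            s[t] = q.sum()
            alpha[t] = q / s[t]
            # quotient rule, Eq. (9)
            G[t] = (dq * s[t] - q[:, None] * dq.sum(axis=0)) / s[t] ** 2
        return alpha, s, G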
4 Experimental results
4.1 Experimental setup
Our training and test sets are primarily created from the IEEE corpus [24] recorded by a single female speaker. This enables us to directly compare with previous intelligibility studies [10], where
the same speaker is used in training and testing. The training set is created by mixing 50 utterances with 12 noises at 0 dB. To create the test set, we choose 20 unseen utterances from the same
speaker. First, the 20 utterances are mixed with the previous 12 noises to create a matched-noise test
condition, then 5 unseen noises to create an unmatched-noise test condition. The test noises¹ cover
a variety of daily noises and most of them are highly non-stationary. In each frequency channel,
there are roughly 150,000 and 82,000 T-F units in the training and test set, respectively. Speaker-independent experiments are presented in Section 4.4.

Figure 3: Masks for a test utterance mixed with an unseen crowd noise at 0 dB. White represents 1's
and black represents 0's. [(a) Ideal binary mask, (b) DNN-CRF*-P mask, (c) DNN mask; panels
omitted.]
The proposed system is called DNN-CRF, or DNN-CRF* if it is trained to maximize HIT−FA. We
use the suffixes R and P to distinguish training features for the CRF, where R stands for learned features without a context window (features are learned from the complementary acoustic feature set mentioned
in Section 2) and P stands for a window of posterior features. We use a two-hidden-layer DNN as
it provides a good trade-off between performance and complexity, and use a context window spanning 5 time frames and 17 frequency channels to construct the posterior feature vector. We use the
cross-entropy objective function for training the standard DNN in comparisons.
4.2 Experiment 1: HIT−FA maximization

In this subsection, we show the effect of directly maximizing the HIT−FA rate. To evaluate the
contribution from the change of the objective alone, we use ideal pitch in the following experiments
to neutralize pitch estimation errors. The models are trained on 0 dB mixtures. In addition to 0 dB,
we also test the trained models on -10 and -5 dB mixtures. Such a test setting not only allows us
to measure the system's generalization to different SNR conditions, but also shows the effects of
HIT−FA maximization on estimating sparse IBMs. We compare DNN-CRF*-R with DNN, DNN*
and DNN-CRF-R, and the results are shown in Figures 1 and 2.
We document HIT−FA rates on three levels: overall, voiced intervals (pitched frames) and unvoiced
intervals (unpitched frames). Voicing boundaries are determined using ideal pitch. Figure 1 shows
the results for both matched-noise and unmatched-noise test conditions. First, comparing the performances of DNN-CRFs and DNNs, we can see that modeling temporal continuity always improves
performance. It also seems very helpful for generalization to different SNRs. In the matched condition, the improvement by directly maximizing HIT−FA is most significant in unvoiced intervals.
The improvement becomes larger when SNR decreases. In the unmatched condition, as classification becomes much harder, direct maximization of HIT−FA offers more improvement in all cases.
The largest HIT−FA improvement of DNN-CRF*-R over DNN is about 10.7% and 21.2% absolute in overall and unvoiced speech intervals, respectively. For a closer inspection, Figure 2 shows
channelwise HIT−FA comparisons on the 0 dB test mixtures in the matched-noise test condition. It
is well known that unvoiced speech is indispensable for speech intelligibility but hard to separate.
Due to the lack of harmonicity and weak energy, frequency channels containing unvoiced speech
often have significantly skewed distributions of target-dominant and interference-dominant units.
Therefore, an accuracy-maximizing classifier tends to output all 0's to attain a high accuracy. As
an illustration, Figure 3 shows two masks for an utterance mixed with an unseen crowd noise at 0
dB using DNN and DNN-CRF*-P, respectively. The two estimated masks achieve similar accuracy,
around 90%. However, it is clear that the DNN mask misses significant portions of unvoiced speech,
e.g., between frames 30-50 and 220-240.
¹Test noises are: babble, bird chirp, crow, cocktail party, yelling, clap, rain, rock music, siren, telephone,
white, wind, crowd, fan, speech-shaped, traffic, and factory noise. The first 12 are used in training.
Table 1: Performance comparisons between different systems. Boldface indicates best result.

System              |      Matched-noise condition          |     Unmatched-noise condition
                    | Accuracy HIT−FA SNR (dB) SegSNR (dB)  | Accuracy HIT−FA SNR (dB) SegSNR (dB)
GMM [10]            |  77.4%   55.4%   10.2      7.3        |  65.9%   31.6%    6.8      1.9
SVM [11]            |  86.6%   68.0%   10.5     10.9        |  91.2%   64.1%    9.7      7.9
DNN                 |  87.7%   71.6%   11.4     11.8        |  91.1%   66.2%    9.9      8.1
CRF                 |  82.3%   59.8%    8.8      8.7        |  90.8%   64.0%    9.3      7.8
SVM-Struct          |  81.7%   58.6%    8.4      8.1        |  90.7%   63.5%    9.1      7.5
CNF                 |  87.8%   71.7%   11.2     12.0        |  91.1%   66.9%    9.8      8.4
LD-CRF              |  86.3%   68.4%    9.7     10.5        |  91.1%   63.6%    8.9      7.8
DNN-CRF*-R          |  89.1%   75.6%   12.1     13.2        |  90.8%   70.2%   10.3      9.0
DNN-CRF*-P          |  89.9%   76.9%   12.0     13.5        |  91.1%   70.7%   10.0      8.9
Hendriks et al. [1] |   n/a     n/a     4.6      0.5        |   n/a     n/a     6.2      1.1
Wiener Filter [2]   |   n/a     n/a     3.7     -0.7        |   n/a     n/a     5.6     -0.6
Table 2: Performance comparisons when tested on different unseen speakers.

System              |      Matched-noise condition          |     Unmatched-noise condition
                    | Accuracy HIT−FA SNR (dB) SegSNR (dB)  | Accuracy HIT−FA SNR (dB) SegSNR (dB)
SVM [11]            |  86.2%   65.0%   10.2      9.9        |  91.1%   60.6%    9.4      7.3
DNN-CRF*-P          |  87.3%   72.0%   12.1     11.2        |  90.9%   68.3%   10.1      8.1
Hendriks et al. [1] |   n/a     n/a     4.5     -2.9        |   n/a     n/a     6.9     -1.0
Wiener Filter [2]   |   n/a     n/a     3.8     -4.5        |   n/a     n/a     6.0     -3.3
In summary, direct maximization of HIT−FA improves HIT−FA performance compared to accuracy maximization, especially for unvoiced speech, and the improvement is more significant when
the system is tested on unseen acoustic environments.
4.3 Experiment 2: system comparisons
We systematically compare the proposed system with three kinds of systems on 0 dB mixtures:
binary-classifier based, structured-predictor based, and speech-enhancement based. In addition to
HIT−FA, we also include classification accuracy, SNR and segmental SNR (segSNR) as alternative evaluation criteria. To compute SNRs, we use the target speech resynthesized from the IBM as
the ground truth signal for all classification-based systems. This way of computing SNRs is commonly adopted in the literature [4, 25], as the IBM represents the ground truth of classification. All
classification-based systems use the same feature set, but with estimated pitch, described in Section
2, except for Kim et al.'s GMM based system which uses AMS features [10]. Note that we fail
to produce reasonable results using the complementary feature set in Kim et al.'s system, possibly
because GMM requires much more training data than discriminative models for high dimensional
features. Results are summarized in Table 1.
We first compare with methods based on binary classifiers. These include two existing systems
[10, 11] and a DNN based system. Due to the variety of noises, classification is challenging even
in the matched-noise condition. It is clear that the proposed system significantly outperforms the
others in terms of all criteria. The improvement of DNN-CRF*s over DNN demonstrates the benefit
of modeling temporal continuity. It is interesting to see that DNN significantly outperforms SVM,
especially for unvoiced speech (not shown), which is important for speech intelligibility. We note
that without RBM pretraining, the DNN performs significantly worse. Classification in the unmatched-noise condition is obviously more difficult, as feature distributions are likely mismatched between
the training and the test set. Kim et al.'s system fails to generalize to different acoustic environments
due to substantially increased FA rates. The proposed system significantly outperforms SVM and
DNN, achieving about 71% overall HIT−FA and 10 dB SNR for unseen noises. Kim et al.'s system
has been shown to improve human speech intelligibility [10]; it is therefore reasonable to project
that the proposed system will provide further speech intelligibility improvements.
We next compare with systems based on structured predictors, including CRF, SVM-Struct [26],
conditional neural fields (CNF) [20] and latent-dynamic CRF (LD-CRF) [19]. For fair comparisons, we use a two-hidden-layer CNF model with the same number of parameters as DNN-CRF*s.
Conventional structured predictors such as CRF and SVM-Struct (linear kernel) are able to explicitly model temporal dynamics, but only with linear modeling capability. Direct use of the CRF turns
out to be much worse than using a kernel SVM. Nevertheless, the performance can be substantially
boosted by adding latent variables (LD-CRF) or by using nonlinear feature functions (CNF and
DNN-CRF*s). With the same network architecture, CNF mainly differs from our model in two aspects. First, CNF does not use unsupervised RBM pretraining. Second, CNF only uses bias units in
building transition features. As a result, the proposed system significantly outperforms CNF, even
though the CRF and neural networks are jointly trained in the CNF model. With its better ability to encode
contextual information, using a window of posteriors as features clearly outperforms single-unit
features in terms of classification. It is worth noting that although SVM achieves slightly higher
accuracy in the unmatched-noise condition, the resulting HIT−FA and SNRs are worse than some
other systems. This is consistent with our analysis in Section 4.2.
Finally, we compare with two representative speech enhancement systems [1, 2]. The algorithm
proposed in [1] represents a recent state-of-the-art method and Wiener filtering [2] is one of the most
widely used speech enhancement algorithms. Since speech enhancement does not aim to estimate
the IBM, we compare SNRs by using clean speech (not the IBM) as the ground truth. As shown in
Table 1, the speech enhancement algorithms are much worse, and this is true of all 17 noises.
Due to temporal continuity modeling and the use of T-F context, the proposed system produces
masks that are smoother than those from the other systems (e.g., Figure 3). As a result, the outputs
seem to contain less musical noise.
4.4 Experiment 3: speaker generalization
Although the training set contains only a single IEEE speaker, the proposed system generalizes
reasonably well to different unseen speakers. To show this, we create a new test set by mixing 20
utterances from the TIMIT corpus [27] at 0 dB. The new test utterances are chosen from 10 different
female TIMIT speakers, each providing 2 utterances. We show the results in Table 2, and it is
clear that the proposed system generalizes better than existing ones to unseen speakers. Note that
significantly better performance and generalization to different genders can be obtained by including
the speaker(s) of interest into the training set.
5 Discussion and conclusion
Listening tests have shown that a high FA rate is more detrimental to speech intelligibility than a
high miss (or low HIT) [9]. The proposed classification framework affords us control over these two
quantities. For example, we could constrain the upper bound of the FA rate while still maximizing
the HIT rate. In this case, a constrained optimization should substitute (5). Our experimental results
(not shown due to lack of space) indicate that this can effectively remove spurious target segments
while still produce intelligible speech.
Being able to efficiently compute the derivative of marginals, in principle one could optimize a
class of objectives other than HIT?FA. These may include objectives concerning either speech intelligibility or quality, as long as the objective of interest can be expressed or approximated by a
combination of marginal probabilities. For example, we have tried to simultaneously minimize two
traditional CASA measures PEL and PN R (see e.g., [25]), where PEL represents the percent of target energy loss and PN R the percent of noise energy residue. Significant reductions in both measures
can be achieved compared to methods that maximize accuracy or conditional log-likelihood.
We have demonstrated that the challenge of the monaural speech separation problem can be effectively approached via structured prediction. Observing that the IBM exhibits highly structured
patterns, we have proposed to use CRF to explicitly model the temporal continuity in the IBM. This
linear sequence classifier is further transformed to a nonlinear one by using state and transition feature functions learned from DNN. Consistent with the results from speech perception, we train the
proposed DNN-CRF model to maximize a measure that is well correlated to human speech intelligibility in noise. Experimental results show that the proposed system significantly outperforms
existing ones and generalizes better to different acoustic environments. Aside from temporal continuity, other ASA principles [5] such as common onset and co-modulation also contribute to the
structure in the IBM, and we will investigate these in future work.
Acknowledgements. This research was supported in part by an AFOSR grant (FA9550-12-1-0130), an STTR
subcontract from Kuzer, and the Ohio Supercomputer Center.
References
[1] R. Hendriks, R. Heusdens, and J. Jensen, "MMSE based noise PSD tracking with low complexity," in ICASSP, 2010.
[2] P. Scalart and J. Filho, "Speech enhancement based on a priori signal to noise estimation," in ICASSP, 1996.
[3] S. Roweis, "One microphone source separation," in NIPS, 2001.
[4] D. Wang and G. Brown, Eds., Computational Auditory Scene Analysis: Principles, Algorithms and Applications. Hoboken, NJ: Wiley-IEEE Press, 2006.
[5] A.S. Bregman, Auditory Scene Analysis: The Perceptual Organization of Sound. The MIT Press, 1994.
[6] D. Wang, "On ideal binary mask as the computational goal of auditory scene analysis," in Speech Separation by Humans and Machines, P. Divenyi, Ed. Kluwer Academic, Norwell, MA, 2005, pp. 181-197.
[7] D. Brungart, P. Chang, B. Simpson, and D. Wang, "Isolating the energetic component of speech-on-speech masking with ideal time-frequency segregation," J. Acoust. Soc. Am., vol. 120, pp. 4007-4018, 2006.
[8] M. Anzalone, L. Calandruccio, K. Doherty, and L. Carney, "Determination of the potential benefit of time-frequency gain manipulation," Ear and Hearing, vol. 27, no. 5, pp. 480-492, 2006.
[9] N. Li and P. Loizou, "Factors influencing intelligibility of ideal binary-masked speech: Implications for noise reduction," J. Acoust. Soc. Am., vol. 123, no. 3, pp. 1673-1682, 2008.
[10] G. Kim, Y. Lu, Y. Hu, and P. Loizou, "An algorithm that improves speech intelligibility in noise for normal-hearing listeners," J. Acoust. Soc. Am., vol. 126, pp. 1486-1494, 2009.
[11] K. Han and D. Wang, "An SVM based classification approach to speech separation," in ICASSP, 2011.
[12] Y. Wang, K. Han, and D. Wang, "Exploring monaural features for classification-based speech segregation," IEEE Trans. Audio, Speech, Lang. Process., in press, 2012.
[13] G. Mysore and P. Smaragdis, "A non-negative approach to semi-supervised separation of speech from noise with the use of temporal dynamics," in ICASSP, 2011.
[14] J. Hershey, T. Kristjansson, S. Rennie, and P. Olsen, "Single channel speech separation using factorial dynamics," in NIPS, 2007.
[15] J. Lafferty, A. McCallum, and F. Pereira, "Conditional random fields: probabilistic models for segmenting and labeling sequence data," in ICML, 2001.
[16] J. Nocedal and S. Wright, Numerical Optimization. Springer Verlag, 1999.
[17] G. Hinton, S. Osindero, and Y. Teh, "A fast learning algorithm for deep belief nets," Neural Computation, vol. 18, no. 7, pp. 1527-1554, 2006.
[18] L. van der Maaten, M. Welling, and L. Saul, "Hidden-unit conditional random fields," in AISTATS, 2011.
[19] L. Morency, A. Quattoni, and T. Darrell, "Latent-dynamic discriminative models for continuous gesture recognition," in CVPR, 2007.
[20] J. Peng, L. Bo, and J. Xu, "Conditional neural fields," in NIPS, 2009.
[21] A. Mohamed, G. Dahl, and G. Hinton, "Deep belief networks for phone recognition," in NIPS workshop on speech recognition and related applications, 2009.
[22] T. Do and T. Artieres, "Neural conditional random fields," in AISTATS, 2010.
[23] L. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proc. IEEE, vol. 77, no. 2, pp. 257-286, 1989.
[24] IEEE, "IEEE recommended practice for speech quality measurements," IEEE Trans. Audio Electroacoust., vol. 17, pp. 225-246, 1969.
[25] G. Hu and D. Wang, "Monaural speech segregation based on pitch tracking and amplitude modulation," IEEE Trans. Neural Networks, vol. 15, no. 5, pp. 1135-1150, 2004.
[26] I. Tsochantaridis, T. Hofmann, and T. Joachims, "Support vector machine for interdependent and structured output spaces," in ICML, 2004.
[27] J. Garofolo, DARPA TIMIT acoustic-phonetic continuous speech corpus, NIST, 1993.
4,241 | 4,839 | Slice sampling normalized kernel-weighted
completely random measure mixture models
Sinead A. Williamson
Department of Machine Learning
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Nicholas J. Foti
Department of Computer Science
Dartmouth College
Hanover, NH 03755
[email protected]
Abstract
A number of dependent nonparametric processes have been proposed to model
non-stationary data with unknown latent dimensionality. However, the inference
algorithms are often slow and unwieldy, and are in general highly specific to a
given model formulation. In this paper, we describe a large class of dependent
nonparametric processes, including several existing models, and present a slice
sampler that allows efficient inference across this class of models.
1 Introduction
Nonparametric mixture models allow us to bypass the issue of model selection, by modeling data
using a random number of mixture components that can grow if we observe more data. However,
such models work on the assumption that data can be considered exchangeable. This assumption
often does not hold in practice as distributions commonly vary with some covariate. For example,
the proportions of different species may vary across geographic regions, and the distribution over
topics discussed on Twitter is likely to evolve over time.
Recently, there has been increasing interest in dependent nonparametric processes [1], that extend
existing nonparametric distributions to non-stationary data. While a nonparametric process is a distribution over a single measure, a dependent nonparametric process is a distribution over a collection
of measures, which may be associated with values in a covariate space. The key property of a dependent nonparametric process is that the measure at each covariate value is marginally distributed
according to a known nonparametric process.
A number of dependent nonparametric processes have been developed in the literature ([2] §6).
For example, the single-p DDP [1] defines a collection of Dirichlet processes with common atom
sizes but variable atom locations. The order-based DDP [3] constructs a collection of Dirichlet
processes using a common set of beta random variables, but permuting the order in which they are
used in a stick-breaking construction. The Spatial Normalized Gamma Process (SNGP) [4] defines
a gamma process on an augmented space, such that at each covariate location a subset of the atoms
are available. This creates a dependent gamma process, that can be normalized to obtain a dependent
Dirichlet process. The kernel beta process (KBP) [5] defines a beta process on an augmented space,
and at each covariate location modulates the atom sizes using a collection of kernels, to create a
collection of dependent beta processes.
Unfortunately, while such models have a number of appealing properties, inference can be challenging. While there are many similarities between existing dependent nonparametric processes, most of
the inference schemes that have been proposed are highly specific, and cannot be generally applied
without significant modification.
The contributions of this paper are twofold. First, in Section 2 we describe a general class of dependent nonparametric processes, based on defining completely random measures on an extended
space. This class of models includes the SNGP and the KBP as special cases. Second, we develop
a slice sampler that is applicable for all the dependent probability measures in this framework. We
compare our slice sampler to existing inference algorithms, and show that we are able to achieve
superior performance. Further, the generality of our algorithm means we are
able to easily modify the assumptions of existing models to better fit the data, without the need to
significantly modify our sampler.
2 Constructing dependent nonparametric models using kernels

In this section, we describe a general class of dependent completely random measures that includes
the kernel beta process as a special case. We then describe the class of dependent normalized random
measures obtained by normalizing these dependent completely random measures, and show that the
SNGP lies in this framework.
2.1 Kernel CRMs

A completely random measure (CRM) [6, 7] is a distribution over discrete¹ measures B on some
measurable space Θ such that, for any disjoint subsets A_k ⊂ Θ, the masses B(A_k) are independent.
Commonly used examples of CRMs include the gamma process, the generalized gamma process, the
beta process, and the stable process. A CRM is uniquely characterized by a Lévy measure ν(dθ, dπ)
on Θ × R⁺, which controls the location and size of the jumps. We can interpret a CRM as a Poisson
process on Θ × R⁺ with mean measure ν(dθ, dπ).

Let Ω = (X × Θ), and let Π = {(μ_k, θ_k, π_k)}_{k=1}^∞ be a Poisson process on the space X × Θ × R⁺ with
associated product σ-algebra. The space has three components: X, a bounded space of covariates;
Θ, a space of parameter values; and R⁺, the space of atom masses. Let the mean measure of Π
be described by the positive Lévy measure ν(dμ, dθ, dπ). While the construction herein applies for
any such Lévy measure, we focus on the class of Lévy measures that factorize as ν(dμ, dθ, dπ) =
R0(dμ) H0(dθ) ν0(dπ). This corresponds to the class of homogeneous CRMs, where the size of an
atom is independent of its location in Θ × X, and covers most CRMs encountered in the literature.
We assume that X is a discrete space with P unique values, μ̃_p, in order to simplify the exposition,
and without loss of generality we assume that R0(X) = 1. Additionally, let K(·, ·) : X × X → [0, 1]
be a bounded kernel function. Though any such kernel may be used, for concreteness we only
consider a box kernel and a square exponential kernel, defined as

• Box kernel: K(x, μ) = 1(||x − μ|| < W), where we call W the width.
• Square exponential kernel: K(x, μ) = exp(−ψ ||x − μ||²), for ||·|| a dissimilarity measure and ψ > 0 a fixed constant.
Using the setup above we define a kernel-weighted CRM (KCRM) at a fixed covariate x ∈ X and
for A measurable as

    B_x(A) = Σ_{m=1}^∞ K(x, μ_m) π_m δ_{θ_m}(A)    (1)

which is seen to be a CRM on Θ by the mapping theorem for Poisson processes [8]. For a fixed set of
observations (x_1, ..., x_G)^T we define B(A) = (B_{x_1}(A), ..., B_{x_G}(A))^T as the vector of measures
of the KCRM at the observed covariates. CRMs are characterized by their characteristic function
(CF) [9], which for the CRM B can be written as

    E[exp(−v^T B(A))] = exp( −∫_{X×A×R⁺} (1 − exp(−v^T K_μ π)) ν(dμ, dθ, dπ) )    (2)

where v ∈ R^G and K_μ = (K(x_1, μ), ..., K(x_G, μ))^T. Equation 2 is easily derived from the general
form of the CF of a Poisson process [8] and by noting that the one-dimensional CFs are exactly those
of the individual B_{x_i}(A). See [5] for a discussion of the dependence structure between B_x and B_{x'}
for x, x' ∈ X.

¹with, possibly, a deterministic continuous component

Taking ν0 to be the Lévy measure of a beta process [10] results in the KBP. Alternatively, taking ν0
as the Lévy measure of a gamma process, ν_GaP [11], and K(·, ·) as the box kernel, we recover the
unnormalized form of the SNGP.
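As a concrete illustration of (1), a minimal sketch assuming a one-dimensional covariate space and the box kernel (function and argument names are illustrative):

    import numpy as np

    def box_kernel(x, mu, W=0.2):
        # K(x, mu) = 1(|x - mu| < W)
        return (np.abs(x - mu) < W).astype(float)

    def kcrm_masses(pi, mu, xs, kernel=box_kernel):
        # Kernelized atom sizes K(x_g, mu_m) * pi_m at each covariate x_g.
        # pi: (M,) raw masses; mu: (M,) covariate locations; xs: (G,) covariates.
        # Row g holds the atom masses of B_{x_g}; normalizing a row by its sum
        # gives the weights of the normalized measure P_{x_g} in Eq. (3) below.
        K = kernel(xs[:, None], mu[None, :])   # (G, M) kernel evaluations
        return K * pi[None, :]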
2.2 Kernel NRMs

A distribution over probability measures can be obtained by starting from a CRM, and normalizing
the resulting random measure. Such distributions are often referred to as normalized random measures (NRMs) [12]. The most commonly used example of an NRM is the Dirichlet process, which
can be obtained as a normalized gamma process [11]. Other CRMs yield NRMs with different properties; for example, a normalized generalized gamma process can have heavier tails than a Dirichlet
process [13].

We can define a class of dependent NRMs in a similar manner, starting from the KCRM defined
above. Since each marginal measure B_x of B is a CRM, we can normalize it by its total mass,
B_x(Θ), to produce an NRM

    P_x(A) = B_x(A)/B_x(Θ) = Σ_{m=1}^∞ [ K(x, μ_m) π_m / Σ_{l=1}^∞ K(x, μ_l) π_l ] δ_{θ_m}(A)    (3)
This formulation of a kernel NRM (KNRM) is similar to that in [14] for Ornstein-Uhlenbeck NRMs
(OUNRMs). While the OUNRM framework allows for arbitrary CRMs, in theory, extending it to arbitrary kernel functions is non-trivial. A fundamental difference between OUNRMs and normalized
KCRMs is that the marginals of an OUNRM follow a specified process, whereas the marginals of a
KCRM may be different from the underlying CRM.

A common use in statistics and machine learning for NRMs is as prior distributions for mixture
models with an unbounded number of components [15]. Analogously, covariate-dependent NRMs
can be used as priors for mixture models where the probability of being associated with a mixture
component varies with the covariate [4, 14]. For concreteness, we limit ourselves to a kernel gamma
process (KGaP) which we denote as B ∼ KGaP(K, R0, H0, ν_GaP), although the slice sampler can
be adapted to any normalized KCRM.

Specifically, we observe data {(x_j, y_j)}_{j=1}^N, where x_j ∈ X denotes the covariate of observation j
and y_j ∈ R^d denotes the quantities we wish to model. Let x*_g denote the gth unique covariate value
among all the x_j, which induces a partition on the observations so that observation j belongs to group
g if x_j = x*_g. We denote the ith observation corresponding to x*_g as y_{g,i}.

Each observation is associated with a mixture component, which we denote as s_{g,i}, which is drawn
according to a normalized KGaP on a parameter space Θ, such that (θ, τ) ∈ Θ, where θ is a mean
and τ a precision. Conditional on s_{g,i}, each observation is then drawn from some density q(·|θ, τ),
which we assume to be N(θ, τ⁻¹). The full model can then be specified as
    P_g(A) | B = B_g(A) / B_g(Θ)
    s_{g,i} | P_g ∼ Σ_{m=1}^∞ [ K(x*_g, μ_m) π_m / Σ_{l=1}^∞ K(x*_g, μ_l) π_l ] δ_m
    (θ*_m, τ*_m) ∼ iid H0(dθ, dτ)                                                    (4)
    y_{g,i} | s_{g,i}, {(θ*, τ*)} ∼ q(y_{g,i} | θ*_{s_{g,i}}, τ*_{s_{g,i}})

If K(·, ·) is a box kernel, Eq. 4 describes an SNGP mixture model [4].
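To illustrate the generative process, the sketch below draws data from (4) after replacing the gamma process with a standard N-atom finite approximation (an assumption made only for this sketch; the slice sampler developed below needs no such truncation, and all hyperparameter values are illustrative):

    import numpy as np
    rng = np.random.default_rng(0)

    def sample_finite_kgap_mixture(xs, n_per_x, alpha=1.0, N=100, W=0.2):
        # Finite approximation: N atoms with Ga(alpha/N, 1) masses approximate
        # a gamma process with concentration alpha as N grows.
        pi = rng.gamma(alpha / N, 1.0, size=N)       # raw atom masses
        mu = rng.choice(xs, size=N)                  # R0: uniform over locations
        theta = rng.normal(0.0, 3.0, size=N)         # component means from H0
        tau = np.full(N, 0.1)                        # component precisions
        data = []
        for x in xs:
            w = (np.abs(x - mu) < W) * pi            # kernelized masses at x
            p = w / w.sum()                          # assumes some atom is active at x
            s = rng.choice(N, size=n_per_x, p=p)     # cluster assignments s_{g,i}
            y = rng.normal(theta[s], 1.0 / np.sqrt(tau[s]))
            data.append((x, y))
        return data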
3 A slice sampler for dependent NRMs

The slice sampler of [16] allows us to perform inference in arbitrary NRMs. We extend this slice
sampler to perform inference in the class of dependent NRMs described in Sec. 2.2. The slice
sampler can be used with any underlying CRM, but for simplicity we concentrate on an underlying
gamma process, as described in Eq. 4. In the supplement we also derive a Rao-Blackwellized
estimator of the predictive density for unobserved data using the output from the slice sampler. We
use this estimator to compute predictive densities in the experiments.
Analogously to [16] we introduce a set of auxiliary slice variables, one for each data point. Each
data point can only belong to clusters corresponding to atoms larger than its slice variable. The set of
slice variables thus defines a minimum atom size that need be represented, ensuring a finite number
of instantiated atoms.

We extend this idea to the KNRM framework. Note that, in this case, an atom will exhibit different
sizes at different covariate locations. We refer to these sizes as the kernelized atom sizes, K(x*_g, μ)π,
obtained by applying a kernel K, evaluated at location x*_g, to the raw atom π. Following [16], we
introduce a local slice variable u_{g,i}. This allows us to write the joint distribution over the data points
y_{g,i}, their cluster allocations s_{g,i} and their slice variables u_{g,i} as

    f(y, u, s | π, μ, θ, τ) = Π_{g=1}^G V_g^{n_g−1} e^{−V_g B_{Tg}} Π_{i=1}^{n_g} 1( u_{g,i} < K(x*_g, μ_{s_{g,i}}) π_{s_{g,i}} ) q(y_{g,i} | θ_{s_{g,i}}, τ_{s_{g,i}})    (5)

where B_{Tg} = B_{x*_g}(Θ) = Σ_{m=1}^∞ K(x*_g, μ_m) π_m and V_g ∼ Ga(n_g, B_{Tg}) is an auxiliary variable².
See the supplement and [16, 17] for a complete derivation.

²We parametrize the gamma distribution so that X ∼ Ga(a, b) has mean a/b and variance a/b².
In order to evaluate Eq. 5, we need to evaluate B_{Tg}, the total mass of the unnormalized CRM at each
covariate value. This involves summing over an infinite number of atoms, which we do not wish to
represent. Define 0 < L = min_{g,i} {u_{g,i}}. This gives the smallest possible (kernelized) atom size to
which data can be attached. Therefore, if we instantiate all atoms with raw size greater than L, we
will include all atoms associated with occupied clusters. For any value of L, there will be a finite
number M of atoms above this threshold. From these M raw atoms, we can obtain the kernelized
atoms above the slice corresponding to a given data point.

We must obtain the remaining mass by marginalizing over all kernelized atoms that are below the
slice (see the supplement). We can split this mass into, a) the mass due to atoms that are not
instantiated (i.e. whose kernelized value is below the slice at all covariate locations) and, b) the
mass due to currently instantiated atoms (i.e. atoms whose kernelized value is above the slice at at
least one covariate location)³. As we show in the supplement, the first term, a, corresponds to atoms
(π, μ) where π < L, the mass of which can be written as

    Σ_{μ̃∈X} R0(μ̃) ∫_0^L ( 1 − exp(−V^T K_{μ̃} π) ) ν0(dπ)    (6)

where V = (V_1, ..., V_G)^T. This can be evaluated numerically for many CRMs including gamma
and generalized gamma processes [16]. The second term, b, consists of realized atoms {(π_k, μ_k)}
such that K(x*_g, μ_k) π_k < L at covariate x*_g. We use a Monte Carlo estimate for b that we describe
in the supplement. For box kernels term b vanishes, and we have found that even for the square
exponential kernel ignoring this term yields good results.

³If X were not bounded there would be a third term consisting of raw atoms > L that when kernelized
fall below the slice everywhere. These can be ignored by a judicious choice of the space X and the allowable
kernel widths.
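As an illustration, for an underlying gamma process with ν0(dπ) = π⁻¹e⁻π dπ, the integral in (6) reduces to one-dimensional quadrature per unique location; a minimal sketch (argument names are illustrative, and any concentration parameter is assumed to be applied outside this function):

    import numpy as np
    from scipy.integrate import quad

    def mass_below_slice(L, V, K_mu, R0):
        # Eq. (6) for a gamma process: sum over unique locations mu_p of
        # R0(mu_p) * int_0^L (1 - exp(-V^T K_mu pi)) pi^{-1} exp(-pi) d pi.
        # V: (G,) auxiliary variables; K_mu: (P, G) kernel evaluations
        # K(x_g, mu_p); R0: (P,) base measure weights over the P locations.
        total = 0.0
        for p in range(len(R0)):
            c = float(V @ K_mu[p])
            val, _ = quad(lambda x, c=c: (1.0 - np.exp(-c * x)) * np.exp(-x) / x,
                          0.0, L)
            total += R0[p] * val
        return total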
3.1 Sampling equations

Having specified the joint distribution in terms of a finite measure with a random truncation point
L, we can now describe a sampler that samples in turn from the conditional distributions for the
auxiliary variables V_g, the gamma process parameter α = H0(Θ), the instantiated raw atom sizes
π_m and corresponding locations in covariate space μ_m and in parameter space (θ_m, τ_m), and the
slice variables u_{g,i}. We define some simplifying notation: K_μ = (K(x*_1, μ), ..., K(x*_G, μ))^T;
B_+ = (B_{+1}, ..., B_{+G})^T, B_− = (B_{−1}, ..., B_{−G})^T, where B_{+g} = Σ_{m=1}^M K(x*_g, μ_m) π_m, B_{−g} =
Σ_{m=M+1}^∞ K(x*_g, μ_m) π_m so that B_{Tg} = B_{+g} + B_{−g}; and n_{g,m} = |{s_{g,i} : s_{g,i} = m, i ∈ 1, ..., n_g}|.

• Auxiliary variables V_g: The full conditional distribution for V_g is given by

    p(V_g | n_g, V_{−g}, B_+, B_−) ∝ V_g^{n_g−1} exp(−V^T B_+) E[exp(−V^T B_−)],   V_g > 0    (7)

  which we sample using Metropolis-Hastings moves, as in [18].

• Gamma process parameter α: The conditional distribution for α is given by

    p(α | K, V, π, μ) ∝ p(α) α^K exp( −α [ ∫_L^∞ ν0(dπ) + ∫_0^L ∫_X (1 − exp(−V^T K_μ π)) R0(dμ) ν0(dπ) ] )    (8)

  If p(α) = Ga(a0, b0) then the posterior is also a gamma distribution, with parameters

    a = a0 + K    (9)
    b = b0 + ∫_L^∞ ν0(dπ) + ∫_0^L ∫_X (1 − exp(−V^T K_μ π)) ν0(dπ) R0(dμ)    (10)

  where the first integral in Eq. 10 can be evaluated for many processes of interest and the
  second integral can be evaluated as in Eq. 6.

• Raw atom sizes π_m: The posterior for atoms associated with occupied clusters is given by

    p(π_m | n_{g,m}, μ_m, V, B_+) ∝ π_m^{Σ_{g=1}^G n_{g,m}} exp( −π_m Σ_{g=1}^G V_g K(x*_g, μ_m) ) ν0(π_m)    (11)

  For an underlying gamma or generalized gamma process, the posterior of π_m will be given
  by a gamma distribution due to conjugacy [16]. There will also be a number of atoms
  with raw size π_m > L that do not have associated data. The number of such atoms is
  Poisson distributed with mean α ∫_A exp(−V^T K_μ π) ν0(dπ) R0(dμ), where A = {(π, μ) :
  K(x*_g, μ) π > L, for some g}, and which can be computed using the approach described
  for Eq. 6.

• Raw atom covariate locations μ_m: Since we assume a finite set of covariate locations, we
  can sample μ_m according to the discrete distribution

    p(μ_m = μ̃_p | n_{g,m}, V, B_+) ∝ ( Π_{g=1}^G K(x*_g, μ̃_p)^{n_{g,m}} ) exp( −π_m Σ_{g=1}^G V_g K(x*_g, μ̃_p) ) R0(μ̃_p)    (12)

• Slice variables u_{g,i}: Sampled as u_{g,i} | {π}, {μ}, s_{g,i} ∼ Un[0, K(x*_g, μ_{s_{g,i}}) π_{s_{g,i}}].

• Cluster allocations s_{g,i}: The prior on s_{g,i} cancels with the prior on u_{g,i}, yielding

    p(s_{g,i} = m | y_{g,i}, u_{g,i}, π_m, μ_m, θ_m) ∝ q(y_{g,i} | θ_m, τ_m) 1( u_{g,i} < K(x*_g, μ_m) π_m )    (13)

  where only a finite number of m need be evaluated (a sketch of this step follows the list).

• Parameter locations: Can be sampled as in a standard mixture model [16].
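The following is a minimal sketch of the slice-restricted allocation step in Eq. 13, reusing the hypothetical box_kernel helper from the Section 2.1 sketch and the Gaussian likelihood of Section 2.2; only atoms whose kernelized size exceeds the slice are evaluated:

    import numpy as np
    rng = np.random.default_rng(1)

    def sample_allocation(y, u, x_g, pi, mu, theta, tau, kernel):
        # Sample s_{g,i} per Eq. (13) for one observation y with slice u at
        # covariate x_g. By construction of u, at least one atom is active.
        kern_sizes = kernel(x_g, mu) * pi            # K(x*_g, mu_m) * pi_m
        active = np.flatnonzero(kern_sizes > u)      # atoms above the slice
        logp = (0.5 * np.log(tau[active])
                - 0.5 * tau[active] * (y - theta[active]) ** 2)
        p = np.exp(logp - logp.max())
        return active[rng.choice(active.size, p=p / p.sum())]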
4 Experiments
We evaluate the performance of the proposed slice sampler in the setting of covariate dependent
density estimation. We assume the statistical model in Eq. 4 and consider a univariate Gaussian
distribution as the data generating distribution. We use both synthetic and real data sets in our
experiments and compare the slice sampler to a Gibbs sampler for a finite approximation to the
model (see the supplement for details of the model and sampler) and to the original SNGP sampler.
We assess the mixing characteristics of the sampler using the integrated autocorrelation time τ of the
number of clusters used by the sampler at each iteration after a burn-in period, and by the predictive
quality of the collective samples on held-out data. The integrated autocorrelation time of samples
drawn from an MCMC algorithm controls the Monte Carlo error inherent in a sample drawn from
the MCMC algorithm. It can be shown that in a set of T samples from the MCMC algorithm, there
are in effect only T/(2τ) "independent" samples. Therefore, lower values of τ are deemed better.
We obtain an estimate τ̂ of the integrated autocorrelation time following [19].
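For reference, a simple initial-sequence estimate of τ sums empirical autocorrelations until the first negative value; this is an illustrative variant, not necessarily the exact estimator of [19]:

    import numpy as np

    def integrated_autocorr_time(chain):
        # Estimate tau so that the effective sample size is T / (2 tau).
        x = np.asarray(chain, dtype=float)
        x = x - x.mean()
        T = len(x)
        acf = np.correlate(x, x, mode='full')[T - 1:] / (x.var() * T)
        tau = 0.5
        for k in range(1, T):
            if acf[k] < 0:
                break
            tau += acf[k]
        return tau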
We assess the predictive performance of the collected samples from the various algorithms by computing a Monte Carlo estimate of the predictive log-likelihood of a held-out data point under the
model. Specifically, for a held-out point y* we have

    log p(y* | y) ≈ (1/T) Σ_{t=1}^T log( Σ_{m=1}^{M(t)} w_m^{(t)} q( y* | θ_m^{(t)}, τ_m^{(t)} ) ).    (14)
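A small helper for this Monte Carlo average, assuming Gaussian components as in Eq. 4 and that samples holds one (weights, means, precisions) triple per retained iteration, with weights already normalized (names are illustrative):

    import numpy as np

    def predictive_loglik(y_star, samples):
        # Eq. (14): log of the mixture density, averaged over T samples.
        vals = []
        for w, theta, tau in samples:
            dens = w * np.sqrt(tau / (2.0 * np.pi)) \
                     * np.exp(-0.5 * tau * (y_star - theta) ** 2)
            vals.append(np.log(dens.sum()))
        return np.mean(vals)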
Table 1: Results of the samplers using different kernels. Entries are of the form "average predictive
density / average number of clusters used / τ̂", where two standard errors are shown in parentheses.
Results are averaged over 5 hold-out data sets.

           | Synthetic                  | CMB                        | Motorcycle
Slice Box  | -2.70 (0.12) / 11.6 / 2442 | -0.15 (0.11) / 14.4 / 2465 | -0.90 (0.28) / 10.3 / 2414
SNGP       | -2.67 (0.12) / 43.3 / 2488 | -0.22 (0.14) / 79.1 / 2495 | NA
Finite Box | -2.78 (0.15) / 11.7 / 2497 | -0.41 (0.14) / 18.2 / 2444 | -1.19 (0.16) / 16.4 / 2352
Slice SE   | NA                         | -0.28 (0.07) / 14.7 / 2447 | -0.87 (0.28) / 8.2 / 2377
Finite SE  | NA                         | -0.29 (0.05) / 9.5 / 2491  | -0.99 (0.19) / 7.3 / 2159
Figure 1: Left: Synthetic data. Middle: Trace plots of the number of clusters used by the three
samplers. Right: Histogram of truncation point L. [Panels omitted; legend: Slice, SNGP, Finite;
axes: Time, Iteration, log(L).]
The weight w_m^{(t)} is the probability of choosing atom m for sample t. We did not use the Rao-Blackwellized estimator to compute Eq. 14 for the slice sampler, to achieve fair comparisons (see
the supplement for the results using the Rao-Blackwellized estimator).
4.1 Synthetic data
We generated synthetic data from a dynamic mixture model with 12 components (Figure 1).
Each component has an associated location, μ_k, that can take the value of any of ten uniformly
spaced time stamps, t_j ∈ [0, 1]. The components are active according to the kernel K(x, μ_k) =
1(|x − μ_k| < .2); i.e. components are active for two time stamps around their location. At each
time stamp, t_j, we generate 60 data points. For each data point we choose a component, k, such that
1(|t_j − μ_k| < .2), and then generate that data point from a Gaussian distribution with mean θ_k and
variance 10. We use 50 of the generated data points per time stamp as a training set and hold out 10
data points for prediction.
Since the SNGP is a special case of the normalized KGaP, we compare the finite and slice samplers,
which are both conditional samplers, to the original marginal sampler proposed in [4]. We use the
basic version of the SNGP that uses fixed-width kernels, as we assume fixed-width kernel functions
for simplicity. The implementation of the SNGP sampler we used also only allows for fixed component variances, so we fix all τ_k = 1/10, the true data-generating precision. We use the true kernel
function that was used to generate the data as the kernel for the normalized KGaP model.

We ran the slice sampler for 10,000 burn-in iterations and subsequently collected 5,000 samples.
We truncated the finite version of the model to 100 atoms and ran the sampler for 5,000 burn-in
iterations and collected 5,000 samples. The SNGP sampler was run for 2,000 burn-in iterations and
5,000 samples were collected⁴. The predictive log-likelihood, mean number of clusters used and τ̂
are shown in the "Synthetic" column in Table 1.

⁴No thinning was performed in any of the experiments in this paper.
We see that all three algorithms find a region of the posterior that gives predictive estimates of a
similar quality. The autocorrelation estimates for the three samplers are also very similar. This might
seem surprising, since the SNGP sampler uses sophisticated split-merge moves to improve mixing,
which have no analogue in the slice sampler. In addition, we note that although the per-iteration
mixing performance is comparable, the average time per 100 iterations for the slice sampler was
≈ 10 seconds, for the SNGP sampler ≈ 30 seconds, and for the finite sampler ≈ 200 seconds. Even
with only 100 atoms the finite sampler is much more expensive than the slice and SNGP⁵ samplers.
We also observe (Figure 1) that both the slice and finite samplers use essentially the true number
of components underlying the data, and that the SNGP sampler uses on average twice as many
components. The finite sampler finds a posterior mode with 13 clusters and rarely makes small
moves from that mode. The slice sampler explores modes with 10-17 clusters, but never makes
large jumps away from this region. The SNGP sampler explores the largest number of used clusters,
ranging from 23-40; however, it has not explored regions that use fewer clusters.
Figure 1 also depicts the distribution of the variable truncation level L over all samples in the slice
sampler. This suggests that a finite model that discards atoms with π_k < 10⁻¹⁸ introduces negligible
truncation error. However, this value of L corresponds to ≈ 10¹⁸ atoms in the finite model, which
is computationally intractable. To keep the computation times reasonable we were only able to use
100 atoms, a far cry from the number implied by L.
In Figure 2 (Left) we plot estimates of the predictive density at each time stamp for the slice (a), finite
(b) and SNGP (c) samplers. All three samplers capture the evolving structure of the distribution.
However, the finite sampler seems unable to discard unneeded components. This is evidenced by
the small mass of probability that spans times [0, 0.8] when the data that the component explains only
exists at times [0.2, 0.5]. The slice and SNGP samplers seem to both provide reasonable explanations
for the distribution, with the slice sampler tending to provide smoother estimates.
4.2 Real data
As well as providing an alternative inference method for existing models, our slice sampler can be
used in a range of models that fall under the general class of KNRMs. To demonstrate this, we
use the finite and slice versions of our sampler to learn two kernel DPs, one using a box kernel,
K(x, μ) = 1(|x − μ| < 0.2) (the setting in the SNGP), and the other using a square exponential
kernel K(x, μ) = exp(−200(x − μ)²), which has support approximately on [μ − .2, μ + .2]. The
kernel was chosen to be somewhat comparable to the box kernel; however, this kernel allows the
influence of an atom to diminish gradually, as opposed to being constant. We compare to the SNGP
sampler for the box kernel model, but note that this sampler is not applicable to the exponential
kernel model.
We compare these approaches on two real-world datasets:
• Cosmic microwave background radiation (CMB) [20]: TT power spectrum measurements from the cosmic microwave background radiation at various "multipole moments",
denoted M. Both variables are considered continuous and exhibit dependence. We rescale
M to be in [0, 1] and standardize the power measurements to have mean 0 and unit variance.
• Motorcycle crash data [21]: This data set records the head acceleration, A, at various
times during a simulated motorcycle crash. We normalize time to [0, 1] and standardize A
to have mean 0 and unit variance.
Both datasets exhibit local heteroskedasticity, which cannot be captured using the SNGP. For the
CMB data, we consider only the first 600 multipole moments, where the variance is approximately
constant, allowing us to compare the SNGP sampler to the other algorithms. For all models we
fixed the observation variance to 0.02, which we estimated from the standardized data. To ease the
computational burden of the samplers we picked 18 time stamps in [0.05, 0.95], equally spaced 0.05
apart, and assigned each observation to the time stamp closest to its associated value of M. This
step is by no means necessary, but the running time of the algorithms improves significantly.
⁵Sampling the cluster means and assignments is the slowest step for the SNGP sampler, taking about 3
seconds. The times reported here performed this step only every 25 iterations, achieving reasonable results. If
this step were performed every iteration the results might improve, but the computation time would explode.
Figure 2: Left: Predictive density at each time stamp for synthetic data using the slice (a), finite (b)
and SNGP (c) samplers. The scales of all three axes are identical. Middle: Mean and 95% CI of
the predictive distribution for all three samplers on CMB data using the box kernel. Right: Mean
and 95% CI of the predictive distribution using the square exponential kernel. [Panels omitted;
axes: multipole moment vs. TT power spectrum; legend: finite, slice, sngp.]
For the motorcycle data, there was no regime of constant variance, so we only compare the slice
and finite truncation samplers⁶.
For each dataset and each model/sampler, the held-out predictive log-likelihood, the mean number
of used clusters and τ̂ are reported in Table 1. The mixing characteristics of the chain are similar to
those obtained for the synthetic data. We see in Table 1 that the box kernel and the square exponential
kernel produce similar results on the CMB data. However, the kernel width was not optimized and
different values may prove to yield superior results. For the motorcycle data we see a noticeable
difference between using the box and square exponential kernels, where using the latter improves the
held-out predictive likelihood and results in both samplers using fewer components on average.
Figure 2 shows the predictive distributions obtained on the CMB data. Looking at the mean and 95%
CI of the predictive distribution (middle), we see that when using the box kernel the SNGP actually
fits the data the best. This is most likely due to the fact that the SNGP is using more atoms than
the slice or finite samplers. We show that the square exponential kernel (right) gives much smoother
estimates and appears to fit the data better, using the same number of atoms as were learned with
the box kernel (see Table 1). We note that the slice sampler took ≈ 20 seconds per 100 iterations
while the finite sampler used ≈ 150 seconds.
5 Conclusion
We presented the class of normalized kernel CRMs, a type of dependent normalized random measure. This class generalizes previous work by allowing more flexibility in the underlying CRM and
kernel function used to induce dependence. We developed a slice sampler to perform inference on
the infinite dimensional measure and compared this method with samplers for a finite approximation and for the SNGP. We found that the slice sampler yields samples with competitive predictive
accuracy at a fraction of the computational cost.
There are many directions for future research. Incorporating reversible-jump moves [22] such as
split-merge proposals should allow the slice sampler to explore larger regions of the parameter space
with a limited decrease in computational efficiency. A similar methodology may yield efficient
inference algorithms for KCRMs such as the KBP, extending the existing slice sampler for the
Indian Buffet Process [23].
Acknowledgments
NF was funded by grant AFOSR FA9550-11-1-0166. SW was funded by grants NIH R01GM087694
and AFOSR FA9550010247.
⁶ The SNGP could still be used to model this data; however, then we would be comparing the models as opposed to the samplers.
References
[1] S.N. MacEachern. Dependent nonparametric processes. In ASA Proceedings of the Section on
Bayesian Statistical Science, 1999.
[2] D. Dunson. Nonparametric Bayes applications to biostatistics. In N. L. Hjort, C. Holmes,
P. Müller, and S. G. Walker, editors, Bayesian Nonparametrics. Cambridge University Press,
2010.
[3] J.E. Griffin and M.F.J. Steel. Order-based dependent Dirichlet processes. JASA, 101(473):179–194, 2006.
[4] V. Rao and Y.W. Teh. Spatial normalized gamma processes. In NIPS, 2009.
[5] L. Ren, Y. Wang, D. Dunson, and L. Carin. The kernel beta process. In NIPS, 2011.
[6] J.F.C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59–78,
1967.
[7] A. Lijoi and I. Prünster. Models beyond the Dirichlet process. Technical Report 129, Collegio
Carlo Alberto, 2009.
[8] J.F.C. Kingman. Poisson processes. OUP, 1993.
[9] B. Fristedt and L.F. Gray. A Modern Approach to Probability Theory. Probability and Its
Applications. Birkhäuser, 1997.
[10] N.L. Hjort. Nonparametric Bayes estimators based on beta processes in models for life history
data. Annals of Statistics, 18:1259–1294, 1990.
[11] T.S. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics,
1(2):209–230, 1973.
[12] E. Regazzini, A. Lijoi, and I. Prünster. Distributional results for means of normalized random
measures with independent increments. Annals of Statistics, 31(2):560–585, 2003.
[13] A. Lijoi, R.H. Mena, and I. Prünster. Controlling the reinforcement in Bayesian non-parametric
mixture models. JRSS B, 69(4):715–740, 2007.
[14] J.E. Griffin. The Ornstein-Uhlenbeck Dirichlet process and other time-varying processes for
Bayesian nonparametric inference. Technical report, Department of Statistics, University of
Warwick, 2007.
[15] S. Favaro and Y.W. Teh. MCMC for normalized random measure mixture models. Submitted,
2012.
[16] J. E. Griffin and S. G. Walker. Posterior simulation of normalized random measure mixtures.
Journal of Computational and Graphical Statistics, 20(1):241–259, 2011.
[17] L.F. James, A. Lijoi, and I. Prünster. Posterior analysis for normalized random measures with
independent increments. Scandinavian Journal of Statistics, 36(1):76–97, 2009.
[18] J.E. Griffin, M. Kolossiatis, and M.F.J. Steel. Comparing distributions using dependent normalized random measure mixtures. Technical report, University of Warwick, 2010.
[19] M. Kalli, J.E. Griffin, and S.G. Walker. Slice sampling mixture models. Statistics and Computing, 21(1):93–105, 2011.
[20] C.L. Bennett et al. First year Wilkinson Microwave Anisotropy Probe (WMAP) observations:
Preliminary maps and basic results. Astrophysics Journal Supplement, 148:1, 2003.
[21] B.W. Silverman. Some aspects of the spline smoothing approach to non-parametric curve
fitting. JRSS B, 47:1–52, 1985.
[22] P.J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model
determination. Biometrika, 82(4):711–732, 1995.
[23] Y.W. Teh, D. Görür, and Z. Ghahramani. Stick-breaking construction for the Indian buffet
process. In AISTATS, volume 11, 2007.
Information Measure Based Skeletonisation
Sowmya Ramachandran
Department of Computer Science
University of Texas at Austin
Austin, TX 78712-1188
Lorien Y. Pratt *
Department of Computer Science
Rutgers University
New Brunswick, NJ 08903
Abstract
Automatic determination of proper neural network topology by trimming
over-sized networks is an important area of study, which has previously
been addressed using a variety of techniques. In this paper, we present
Information Measure Based Skeletonisation (IMBS), a new approach to
this problem where superfluous hidden units are removed based on their
information measure (IM). This measure, borrowed from decision tree induction techniques, reflects the degree to which the hyperplane formed
by a hidden unit discriminates between training data classes. We show
the results of applying IMBS to three classification tasks and demonstrate
that it removes a substantial number of hidden units without significantly
affecting network performance.
1 INTRODUCTION
Neural networks can be evaluated based on their learning speed, the space and time
complexity of the learned network, and generalisation performance. Pruning oversized networks (skeletonisation) has the potential to improve networks along these
dimensions as follows:
• Learning Speed: Empirical observation indicates that networks which have
been constrained to have fewer parameters lack flexibility during search, and
so tend to learn slower. Training a network that is larger than necessary and
trimming it back to a reduced architecture could lead to improved learning
speed.
• Network Complexity: Skeletonisation improves both space and time complexity
by reducing the number of weights and hidden units.
• Generalisation: Skeletonisation could constrain networks to generalise better
by reducing the number of parameters used to fit the data.
* This work was partially supported by DOE #DE-FG02-91ER61129, through subcontract #097P753 from the University of Wisconsin.
Various techniques have been proposed for skeletonisation. One approach [Hanson
and Pratt, 1989, Chauvin, 1989, Weigend et al., 1991] is to add a cost term or
bias to the objective function. This causes weights to decay to zero unless they
are reinforced. Another technique is to measure the increase in error caused by
removing a parameter or a unit, as in [Mozer and Smolensky, 1989, Le Cun et al.,
1990]. Parameters that have the least effect on the error may be pruned from the
network.
In this paper, we present Information Measure Based Skeletonisation (IMBS), an
alternate approach to this problem, in which superfluous hidden units in a single
hidden-layer network are removed based on their information measure (IM). This
idea is somewhat related to that presented in [Siestma and Dow, 1991], though we
use a different algorithm for detecting superfluous hidden units.
We also demonstrate that when IMBS is applied to a vowel recognition task, to
a subset of the Peterson-Barney 10-vowel classification problem, and to a heart
disease diagnosis problem, it removes a substantial number of hidden units without
significantly affecting network performance.
2 IM AND THE HIDDEN LAYER
Several decision tree induction schemes use a particular information-theoretic measure, called IM, of the degree to which an attribute separates (discriminates between
the classes of) a given set of training data [Quinlan, 1986]. IM is a measure of the
information gained by knowing the value of an attribute for the purpose of classification. The higher the IM of an attribute, the greater the uniformity of class data
in the subsets of feature space it creates.
A useful simplification of the sigmoidal activation function used in back-propagation
networks [Rumelhart et al., 1986] is to reduce this function to a threshold by mapping activations greater than 0.5 to 1 and less than 0.5 to 0. In this simplified
model, the hidden units form hyperplanes in the feature space which separate data.
Thus, they can be considered analogous to binary-valued attributes, and the IM of
each hidden unit can be calculated as in decision tree induction [Quinlan, 1986].
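As an illustration, the IM of one thresholded hidden unit can be computed as the information gain of [Quinlan, 1986]; the following Python sketch (our own rendering, not the authors' code) thresholds the sigmoid output at 0.5 exactly as described above:

    import numpy as np

    def entropy(labels):
        """Shannon entropy of a label vector."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    def information_measure(activations, labels):
        """IM (information gain) of a hidden unit treated as a binary attribute."""
        on = activations > 0.5           # threshold the sigmoidal activation
        gain = entropy(labels)           # entropy before the split
        for side in (on, ~on):
            if side.any():               # weighted entropy of each half-space
                gain -= side.mean() * entropy(labels[side])
        return gain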
Figure 1 shows the training data for a fabricated two-feature, two-class problem and
a possible configuration of the hyperplanes formed by each hidden unit at the end
of training. Hyperplane h1's higher IM corresponds to the fact that it separates the
two classes better than h2.
[Figure 1 plot: two hyperplanes over the two-class (0/1) training data, each annotated with its IM; one visible value reads IM = .0115.]
Figure 1: Hyperplanes and their IM. Arrows indicate regions where hidden units have
activations > 0.5.
3 IM TO DETECT SUPERFLUOUS HIDDEN UNITS
One of the important goals of training is to adjust the set of hyperplanes formed
by the hidden layer so that they separate the training data.¹ We define superfluous
units as those whose corresponding hyperplanes are not necessary for the proper
separation of training data. For example, in Figure 1, hyperplane h2 is superfluous
because:
1. h1 separates the data better than h2, and
2. h2 does not separate the data in either of the two regions created by h1.
The IMBS algorithm to identify superfluous hidden units, shown in Figure 2, recursively finds hidden units that are necessary to separate the data and classifies
the rest as superfluous. It is similar to the decision tree induction algorithm in
[Quinlan, 1986].
The hidden layer is skeletonised by removing the superfluous hidden units. Since
the removal of these units perturbs the inputs to the output layer, the network will
have to be trained further after skeletonisation to recover lost performance.
4 RESULTS
We have tested IMBS on three classification problems, as follows:
1. Train a network to an acceptable level of performance.
2. Identify and remove superfluous hidden units.
3. Train the skeletonised network further to an acceptable level of performance.
We will refer to the stopping point of training at step 1 as the skeletonisation point
(SP); further training will be referred to in terms of SP + number of training epochs.
¹ This again is not strictly true for hidden units with sigmoidal activation, but holds for
the approximate model.
Input:
Training data
Hidden unit activations for each training data pattern.
Output:
List of superfluous hidden units.
Method:
main ident-superfluous-hu
begin
    data-set ← training data
    useful-hu-list ← nil
    pick-best-hu(data-set, useful-hu-list)
    output hidden units that are not in useful-hu-list
end

procedure pick-best-hu(data-set, useful-hu-list)
begin
    if all the data in data-set belong to the same class then return
    Calculate IM of each hidden unit.
    hl ← hidden unit with best IM.
    add hl to useful-hu-list
    ds1 ← all the data in data-set for which hl has an activation of > 0.5
    ds2 ← all the data in data-set for which hl has an activation of <= 0.5
    pick-best-hu(ds1, useful-hu-list)
    pick-best-hu(ds2, useful-hu-list)
end
Figure 2: IMBS: An Algorithm for Identifying Superfluous Hidden Units
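A runnable Python rendering of Figure 2 is given below (a sketch under our own naming; entropy and information_measure are as in the sketch of Section 2, and the degenerate-split guard is our addition):

    import numpy as np

    def identify_superfluous(activations, labels):
        """Return the indices of superfluous hidden units.

        activations: (n_patterns, n_hidden) hidden-unit outputs.
        labels:      (n_patterns,) class labels.
        """
        useful = set()

        def pick_best(idx):
            if len(np.unique(labels[idx])) <= 1:   # region is pure: stop
                return
            ims = [information_measure(activations[idx, h], labels[idx])
                   for h in range(activations.shape[1])]
            best = int(np.argmax(ims))
            useful.add(best)
            on = activations[idx, best] > 0.5
            if on.all() or not on.any():           # split separates nothing: stop
                return
            pick_best(idx[on])                     # recurse on both half-spaces
            pick_best(idx[~on])

        pick_best(np.arange(len(labels)))
        return sorted(set(range(activations.shape[1])) - useful)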
For each problem, data was divided into a training set and a test set. Several
networks were run for a few epochs with different back-propagation parameters η
(learning rate) and α (momentum) to determine their locally optimal values.
For each problem, we chose an initial architecture and trained 10 networks with different random initial weights for the same number of epochs. The performances of
the original (i.e. the network before skeletonisation) and the skeletonised networks,
measured as the number of correct classifications of the training and test sets, were measured both at SP and after further training. The retrained skeletonised network was
compared with the original network at SP as well as the original network that had
been trained further for the same number of weight updates.² All training was via
the standard back-propagation algorithm with a sigmoidal activation function and
updates after every pattern presentation [Rumelhart et al., 1986]. A paired T-test
[Siegel, 1988] was used to measure the significance of the difference in performance
between the skeletonised and original networks. Our experimental results are summarised in Figure 3, and Tables 1 and 2; detailed experimental conditions are given
below.
² This was ensured by adjusting the number of epochs a network was trained after
skeletonisation according to the number of hidden units in the network. Thus, a network
with 10 hidden units was trained on twice as many epochs as one with 20 hidden units.
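The paired T-test itself could be reproduced with standard tooling; a minimal sketch follows (SciPy; the counts below are made-up placeholders, not the paper's data):

    from scipy.stats import ttest_rel

    # Correct-classification counts of the original and skeletonised networks
    # over the 10 random restarts (hypothetical numbers for illustration).
    original     = [278, 281, 279, 280, 277, 282, 279, 278, 280, 281]
    skeletonised = [276, 280, 279, 279, 278, 281, 278, 277, 279, 280]

    t_stat, p_value = ttest_rel(original, skeletonised)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")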
[Figure 3 plot panels: PB Vowel, Robinson vowel, and Heart disease; x-axis: weight updates.]
Figure 3: Summary of experimental results. Circles represent skeletonised networks;
triangles represent unskeletonised networks for comparison. Note that when performance
drops upon skeletonisation, the original performance level is recovered within a few weight
updates. In all cases, hidden unit count is reduced.
4.1 PETERSON-BARNEY DATA
IMBS was first evaluated on a 3-class subset of the Peterson-Barney 10-vowel classification data set, originally described in [Peterson and Barney, 1952], and recreated
by [Watrous, 1991]. This data consists of the formant values F1 and F2 for each of
two repetitions of each of ten vowels by 76 speakers (1520 utterances). The vowels
were pronounced in isolated words consisting of the consonant "h", followed by a
vowel, followed by "d". This set was randomly divided into a 2/3, 1/3 training/test
split, with 298 and 150 patterns, respectively.
Our initial architecture was a fully connected network with 2 input units, one hidden
layer with 20 units, and 3 output units. We trained the networks with η = 1.0 and
α = 0.001 until the TSS (total sum of squared error) scores seemed to reach a
plateau. The networks were trained for 2000 epochs and then skeletonised.
The skeletonisation procedure removed an average of 10.1 (50.5%) hidden units.
Though the average performance of the skeletonised networks was worse than that
of the original, this difference was not statistically significant (p = 0.001).
4.2 ROBINSON VOWEL RECOGNITION
Using data from [Robinson, 1989], we trained networks to perform speaker independent recognition of the 11 steady-state vowels of British English using a training set
of LPC-derived log area ratios. Training and test sets were as used by [Robinson,
1989], with 528 and 462 patterns, respectively.
The initial network architecture was fully connected, with 10 input units, 11 output
units, and 30 hidden units. Networks were trained with η = 1.0 and α = 0.01,
until the performance on the training set exceeded 95%. The networks were trained
for 1500 epochs and then skeletonised. The skeletonisation procedure removed
an average of 5.8 (19.3%) hidden units. The difference in performance was not
statistically significant (p = 0.001).
Table 1: Performance of unskeletonised networks
Table 2: Mean difference in the number of correct classifications between the original and
skeletonised networks. Positive differences indicate that the original network did better
after further training. The numbers in parentheses indicate the 99.9% confidence intervals
for the mean.
comparison points                       mean difference
Original        Skeletonised            Training set              Test set

Peterson-Barney
SP              SP                      3.10 [-0.83, 7.03]        -0.10 [-2.05, 1.84]
SP              SP+1010                 -0.10 [-1.76, 1.56]       0.70 [-0.73, 2.13]
SP+500          SP+1010                 0.20 [-1.52, 1.91]        0.30 [-1.30, 1.90]

Robinson Vowel
SP              SP                      1.70 [-2.40, 5.80]        2.40 [-2.39, 7.19]
SP              SP+620                  -8.20 [-20.33, 3.93]      -4.40 [-18.26, 9.46]
SP+500          SP+620                  -0.30 [-3.15, 2.55]       -0.30 [-8.36, 7.76]

Heart Disease
SP              SP                      20.80 [-5.66, 47.26]      12.20 [-1.65, 26.05]
SP              SP+33                   0.00 [-4.28, +4.28]       0.00 [-2.85, 2.85]
SP+14           SP+33                   0.60 [-4.55, 5.75]        0.40 [-3.03, 3.83]
4.3 HEART DISEASE DATA
Using a 14-attribute set of diagnosis information, we trained networks on a heart
disease diagnosis problem [Detrano et al., 1989]. Training and test data were chosen
randomly in a 2/3, 1/3 split of 820 and 410 patterns, respectively. The initial networks
were fully connected, with 25 input units, one hidden layer with 20 units, and 2
output units. The networks were trained with α = 1.25 and η = 0.005. Training
was stopped when the TSS scores seemed to reach a plateau. The networks were
trained for 300 epochs and then skeletonised.
The skeletonisation procedure removed an average of 9.6 (48%) hidden units. Here,
removing superfluous units degraded the performance by an average of 2.5% on
the training set and 3.0% on the test set. However, after being trained further for
only 30 epochs, the skeletonised networks recovered to do as well as the original
networks.
5 CONCLUSION AND EXTENSIONS
We have introduced an algorithm, called IMBS, which uses an information measure borrowed from decision tree induction schemes to skeletonise over-sized backpropagation networks. Empirical tests showed that IMBS removed a substantial
percentage of hidden units without significantly affecting the network performance.
Potential extensions to this work include:
• Using decision tree reduction schemes to allow for trimming not only superfluous hyperplanes, but also those responsible for overfitting the training data, in
an effort to improve generalisation.
• Extending IMBS to better identify superfluous hidden units under conditions
of less than 100% performance on the training data.
• Extending IMBS to work for networks with more than one hidden layer.
• Performing more rigorous empirical evaluation.
• Making IMBS less sensitive to the hyperplane-as-threshold assumption. In particular, a model with variable-width hyperplanes (depending on the sigmoidal
gain) may be effective.
Acknowledgements
Our thanks to Haym Hirsh and Tom Lee for insightful comments on earlier drafts of
this paper, to Christian Roehr for an update to the IMBS algorithm, and to Vince
Sgro, David Lubinsky, David Loewenstern and Jack Mostow for feedback on later
drafts. Matthias Pfister, M.D., of University Hospital in Zurich, Switzerland was
responsible for collection of the heart disease data. We used software distributed
with [McClelland and Rumelhart, 1988] for many of our simulations.
References
[Chauvin, 1989] Chauvin, Y. 1989. A back-propagation algorithm with optimal use
of hidden units. In Touretzky, D. S., editor 1989, Advances in Neural Information
Processing Systems 1. Morgan Kaufmann, San Mateo, CA. 519-526.
[Detrano et al., 1989] Detrano, R.; Janosi, A.; Steinbrunn, W.; Pfisterer, M.;
Schmid, J.; Sandhu, S.; Guppy, K.; Lee, S.; and Froelicher, V. 1989. International application of a new probability algorithm for the diagnosis of coronary
artery disease. American Journal of Cardiology 64:304-310.
[Hanson and Pratt, 1989] Hanson, Stephen Jose and Pratt, Lorien Y. 1989. Comparing biases for minimal network construction with back-propagation. In Touretzky, D. S., editor 1989, Advances in Neural Information Processing Systems 1.
Morgan Kaufmann, San Mateo, CA. 177-185.
[Le Cun et al., 1990] Le Cun, Yann; Denker, John; Solla, Sara A.; Howard,
Richard E.; and Jackel, Lawrence D. 1990. Optimal brain damage. In Touretzky, D. S., editor 1990, Advances in Neural Information Processing Systems 2.
Morgan Kaufmann, San Mateo, CA.
[McClelland and Rumelhart, 1988] McClelland, James L. and Rumelhart, David E.
1988. Explorations in Parallel Distributed Processing: A Handbook of Models,
Programs, and Exercises. Cambridge, MA, The MIT Press.
[Mozer and Smolensky, 1989] Mozer, Michael C. and Smolensky, Paul 1989. Skeletonization: A technique for trimming the fat from a network via relevance assessment. In Touretzky, D. S., editor 1989, Advances in Neural Information
Processing Systems 1. Morgan Kaufmann, San Mateo, CA. 107-115.
[Peterson and Barney, 1952] Peterson, G. E. and Barney, H. L. 1952. Control methods used in
a study of the vowels. J. Acoust. Soc. Am. 24(2):175-184.
[Quinlan, 1986] Quinlan, J. R. 1986. Induction of decision trees. Machine Learning
1(1):81-106.
[Robinson, 1989] Robinson, Anthony John 1989. Dynamic Error Propagation Networks. Ph.D. Dissertation, Cambridge University, Engineering Department.
[Rumelhart et al., 1986] Rumelhart, D.; Hinton, G.; and Williams, R. 1986. Learning representations by back-propagating errors. Nature 323:533-536.
[Siegel, 1988] Siegel, Andrew F. 1988. Statistics and data analysis: An Introduction.
John Wiley and Sons. chapter 15, 336-339.
[Siestma and Dow, 1991] Siestma, Jocelyn and Dow, Robert J. F. 1991. Creating
artificial neural networks that generalize. Neural Networks 4:67-79.
[Watrous, 1991] Watrous, Raymond L. 1991. Current status of peterson-barney
vowel formant data. Journal of the Acoustical Society of America 89(3):2459-60.
[Weigend et al., 1991] Weigend, Andreas S.; Rumelhart, David E.; and Huberman,
Bernardo A. 1991. Generalization by weight-elimination with application to forecasting. In Lippmann, R. P.; Moody, J. E.; and Touretzky, D. S., editors 1991,
Advances in Neural Information Processing Systems 3. Morgan Kaufmann, San
Mateo, CA. 875-882.
Non-linear Metric Learning
Dor Kedem, Stephen Tyree, Kilian Q. Weinberger
Dept. of Comp. Sci. & Engi.
Washington U.
St. Louis, MO 63130
kedem.dor,swtyree,[email protected]
Fei Sha
Dept. of Comp. Sci.
U. of Southern California
Los Angeles, CA 90089
[email protected]
Gert Lanckriet
Dept. of Elec. & Comp. Engineering
U. of California
La Jolla, CA 92093
[email protected]
Abstract
In this paper, we introduce two novel metric learning algorithms, χ²-LMNN and
GB-LMNN, which are explicitly designed to be non-linear and easy-to-use. The
two approaches achieve this goal in fundamentally different ways: χ²-LMNN
inherits the computational benefits of a linear mapping from linear metric learning, but uses a non-linear χ²-distance to explicitly capture similarities within histogram data sets; GB-LMNN applies gradient boosting to learn non-linear mappings directly in function space and takes advantage of this approach's robustness, speed, parallelizability and insensitivity towards the single additional hyperparameter. On various benchmark data sets, we demonstrate these methods not
only match the current state-of-the-art in terms of kNN classification error, but in
the case of χ²-LMNN, obtain best results in 19 out of 20 learning settings.
1 Introduction
How to compare examples is a fundamental question in machine learning. If an algorithm could
perfectly determine whether two examples were semantically similar or dissimilar, most subsequent
machine learning tasks would become trivial (i.e., a nearest neighbor classifier will achieve perfect
results). Guided by this motivation, a surge of recent research [10, 13, 15, 24, 31, 32] has focused on
Mahalanobis metric learning. The resulting methods greatly improve the performance of metric dependent algorithms, such as k-means clustering and kNN classification, and have gained popularity
in many research areas and applications within and beyond machine learning.
One reason for this success is the out-of-the-box usability and robustness of several popular methods
to learn these linear metrics. So far, non-linear approaches [6, 18, 26, 30] to metric learning have
not managed to replicate this success. Although more expressive, the optimization problems are
often expensive to solve and plagued by sensitivity to many hyper-parameters. Ideally, we would
like to develop easy-to-use black-box algorithms that learn new data representations for the use
of established metrics. Further, non-linear transformations should be applied depending on the
specifics of a given data set.
In this paper, we introduce two novel extensions to the popular Large Margin Nearest Neighbors
(LMNN) framework [31] which provide non-linear capabilities and are applicable out-of-the-box.
The two algorithms follow different approaches to achieve this goal:
(i) Our first algorithm, χ²-LMNN, is specialized for histogram data. It generalizes the non-linear
χ²-distance and learns a metric that strictly preserves the histogram properties of input data on a
probability simplex. It successfully combines the simplicity and elegance of the LMNN objective
and the domain-specific expressiveness of the χ²-distance.
(ii) Our second algorithm, gradient boosted LMNN (GB-LMNN) employs a non-linear mapping
combined with a traditional Euclidean distance function. It is a natural extension of LMNN from
linear to non-linear mappings. By training the non-linear transformation directly in function space
with gradient-boosted regression trees (GBRT) [11], the resulting algorithm inherits the positive
aspects of GBRT: its insensitivity to hyper-parameters, robustness against overfitting, speed and
natural parallelizability [28].
Both approaches scale naturally to medium-sized data sets, can be optimized using standard techniques and only introduce a single additional hyper-parameter. We demonstrate the efficacy of both
algorithms on several real-world data sets and observe two noticeable trends: i) GB-LMNN (with
default settings) achieves state-of-the-art k-nearest neighbor classification errors with high consistency across all our data sets. For learning tasks where non-linearity is not required, it reduces to
LMNN as a special case. On more complex data sets it reliably improves over linear metrics and
matches or out-performs previous work on non-linear metric learning. ii) For data sampled from a
simplex, χ²-LMNN is strongly superior to alternative approaches that do not explicitly incorporate
the histogram aspect of the data; in fact, it obtains best results in 19/20 learning settings.
2 Background and Notation
Let {(x_1, y_1), . . . , (x_n, y_n)} ⊂ R^d × C be labeled training data with discrete labels C = {1, . . . , c}.
Large margin nearest neighbors (LMNN) [30, 31] is an algorithm to learn a Mahalanobis metric
specifically to improve the classification error of k-nearest neighbors (kNN) [7] classification. As
the kNN rule relies heavily on the underlying metric (a test input is classified by a majority vote
amongst its k nearest neighbors), it is a good indicator for the quality of the metric in use. The
Mahalanobis metric can be viewed as a straight-forward generalization of the Euclidean metric,
    D_L(x_i, x_j) = ‖L(x_i − x_j)‖₂ ,        (1)

parameterized by a matrix L ∈ R^{d×d}, which in the case of LMNN is learned such that the linear
transformation x → Lx better represents similarity in the target domain. In the remainder of this
section we briefly review the necessary terminology and basic framework behind LMNN and refer
the interested reader to [31] for more details.
Local neighborhoods. LMNN identifies two types of neighborhood relations between an input
xi and other inputs in the data set: For each xi , as a first step, k dedicated target neighbors are
identified prior to learning. These are the inputs that should ideally be the actual nearest neighbors
after applying the transformation (we use the notation j ⇝ i to indicate that xj is a target neighbor
of xi ). A common heuristic for choosing target neighbors is picking the k closest inputs (according
to the Euclidean distance) to a given xi within the same class. The second type of neighbors are
impostors. These are inputs that should not be among the k-nearest neighbors, defined to be all
inputs from a different class that are within the local neighborhood of xi.
LMNN optimization. The LMNN objective has two terms, one for each neighborhood objective:
First, it reduces the distance between an instance and its target neighbors, thus pulling them closer
and making the input's local neighborhood smaller. Second, it moves impostor neighbors (i.e.,
differently labeled inputs) farther away so that the distances to impostors should exceed the distances
to target neighbors by a large margin. Weinberger et al. [31] combine these two objectives into a
single unconstrained optimization problem:
    min_L  Σ_{i,j: j⇝i} D_L(x_i, x_j)²  +  μ Σ_{i,j: j⇝i} Σ_{k: y_i≠y_k} [ 1 + D_L(x_i, x_j)² − D_L(x_i, x_k)² ]_+        (2)

Here the first sum pulls each target neighbor x_j closer, while each hinge term pushes the impostor x_k away, beyond target neighbor x_j by a large margin.
The parameter μ defines a trade-off between the two objectives and [x]_+ is defined as the hinge loss
[x]_+ = max(0, x). The optimization (2) can be transformed into a semidefinite program (SDP) [31]
for which a global solution can be found efficiently. The large margin in (2) is set to 1 as its exact
value only impacts the scale of L and not the kNN classifier.
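A direct NumPy transcription of objective (2) may make the two terms concrete (a sketch; how target pairs and impostors are stored here is our assumption):

    import numpy as np

    def lmnn_loss(L, X, y, target_pairs, mu=0.5):
        """Evaluate objective (2) for a linear map L.

        target_pairs: iterable of (i, j) with x_j a target neighbor of x_i.
        """
        Z = X @ L.T                                  # rows are L x_i
        pull, push = 0.0, 0.0
        for i, j in target_pairs:
            d_ij = np.sum((Z[i] - Z[j]) ** 2)
            pull += d_ij                             # pull term
            for k in np.flatnonzero(y != y[i]):      # candidate impostors
                d_ik = np.sum((Z[i] - Z[k]) ** 2)
                push += max(0.0, 1.0 + d_ij - d_ik)  # hinge with unit margin
        return pull + mu * push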
Dimensionality reduction. As an extension to the original LMNN formulation, [26, 30] show that
with L ∈ R^{r×d}, where r < d, LMNN learns a projection into a lower-dimensional space R^r that still
represents domain specific similarities. While this low-rank constraint breaks the convexity of the
optimization problem, significant speed-ups [30] can be obtained when the kNN classifier is applied
in the r-dimensional space, especially when combined with special-purpose data structures [33].
3 χ²-LMNN: Non-linear Distance Functions on the Probability Simplex
The original LMNN algorithm learns a linear transformation L ∈ R^{d×d} that captures semantic similarity for kNN
classification on data in some Euclidean vector space
R^d. In this section we extend this formulation to settings in which data are sampled from a probability simplex S^d = {x ∈ R^d | x ≥ 0, xᵀ1 = 1}, where 1 ∈ R^d denotes the vector of all-ones. Each input x_i ∈ S^d can be
interpreted as a histogram over d buckets. Such data are
ubiquitous in computer vision where the histograms can
be distributions over visual codebooks [27] or colors [25],
in text-data as normalized bag-of-words or topic assignments [3], and many other fields [9, 17, 21].

Figure 1: A schematic illustration of the χ²-LMNN optimization. The mapping is constrained to preserve all inputs on the simplex S³ (grey surface). The arrows indicate the push (red and yellow) and pull (blue) forces from the χ²-LMNN objective.

Histogram distances. The abundance of such data has sparked the development of several specialized distance metrics designed to compare histograms. Examples are the Quadratic-Form distance [16], Earth Mover's Distance [21], the Quadratic-Chi distance family [20] and the χ² histogram distance [16]. We focus explicitly on the latter. Transforming the inputs with a linear
transformation learned with LMNN will almost certainly result in a loss of their histogram properties, and with them the ability to use such distances. In this section, we introduce our first non-linear
extension for LMNN to address this issue. In particular, we propose two significant changes to the
original LMNN formulation: i) we learn a constrained mapping that keeps the transformed data on
the simplex (illustrated in Figure 1), and ii) we optimize the kNN classification performance with
respect to the non-linear χ² histogram distance directly.
χ² histogram distance. We focus on the χ² histogram distance, whose origin is the χ² statistical
hypothesis test [19], and which has successfully been applied in many domains [8, 27, 29]. The χ²
distance is a bin-to-bin distance measurement, which takes into account the size of the bins and their
differences. Formally, the χ² distance is a well-defined metric χ² : S^d × S^d → [0, 1] defined as [20]

    χ²(x_i, x_j) = (1/2) Σ_{f=1}^{d} ( [x_i]_f − [x_j]_f )² / ( [x_i]_f + [x_j]_f ),        (3)

where [x_i]_f indicates the f-th feature value of the vector x_i.
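In code, (3) is essentially a one-liner; a sketch follows (the small eps guarding empty bins is our addition and is not part of Eq. (3)):

    import numpy as np

    def chi2_distance(xi, xj, eps=1e-12):
        """Chi-squared histogram distance of Eq. (3) for inputs on the simplex."""
        return 0.5 * np.sum((xi - xj) ** 2 / (xi + xj + eps))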
Generalized χ² distance. First, analogous to the generalized Euclidean metric in (1), we generalize
the χ² distance with a linear transformation and introduce the pseudo-metric χ²_L(x_i, x_j), defined as

    χ²_L(x_i, x_j) = χ²(L x_i, L x_j).        (4)

The χ² distance is only a well-defined metric within the simplex S^d and therefore we constrain
L to map any x onto S^d. We define the set of such simplex-preserving linear transformations as
P = {L ∈ R^{d×d} : ∀x ∈ S^d, Lx ∈ S^d}.
χ²-LMNN Objective. To optimize the transformation L with respect to the χ² histogram distance
directly, we replace the Mahalanobis distance D_L in (2) with χ²_L and obtain the following:

    min_{L∈P}  Σ_{i,j: j⇝i} χ²_L(x_i, x_j)  +  μ Σ_{i,j: j⇝i} Σ_{k: y_i≠y_k} [ ℓ + χ²_L(x_i, x_j) − χ²_L(x_i, x_k) ]_+ .        (5)
Besides the substituted distance function, there are two important changes in the optimization problem (5) compared to (2). First, as mentioned before, we have an additional constraint L ∈ P. Second,
because (4) is not linear in LᵀL, different values for the margin parameter ℓ lead to truly different
solutions (which differ not just up to a scaling factor as before). We therefore can no longer arbitrarily set ℓ = 1. Instead, ℓ becomes an additional hyper-parameter of the model. We refer to this
algorithm as χ²-LMNN.
Optimization. To learn (5), it can be shown that L ∈ P if and only if L is element-wise non-negative, i.e.,
L ≥ 0, and each column is normalized, i.e., Σ_i L_ij = 1, ∀j. These constraints are linear with respect
to L and we can optimize (5) efficiently with a projected sub-gradient method [2]. As an even faster
optimization method, we propose a simple change of variables to generate an unconstrained version
of (5). Let us define f : R^{d×d} → P to be the column-wise soft-max operator

    [f(A)]_{ij} = e^{A_{ij}} / Σ_k e^{A_{kj}} .        (6)

By design, all columns of f(A) are normalized and every matrix entry is non-negative. The function
f(·) is continuous and differentiable. By defining L = f(A) we obtain L ∈ P for any choice of A ∈
R^{d×d}. This allows us to minimize (5) with respect to A using unconstrained sub-gradient descent¹.
We initialize the optimization with A = 10·I + 0.01·11ᵀ (where I denotes the identity matrix) to
approximate the non-transformed χ² histogram distance after the change of variable (f(A) ≈ I).
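A sketch of the reparameterization (6) and the stated initialization (the max-subtraction is a standard numerical stabilization, not part of the definition):

    import numpy as np

    def simplex_preserving_map(A):
        """Column-wise soft-max of Eq. (6): every column of f(A) sums to one."""
        E = np.exp(A - A.max(axis=0, keepdims=True))   # stabilized exponentials
        return E / E.sum(axis=0, keepdims=True)

    d = 5
    A0 = 10.0 * np.eye(d) + 0.01 * np.ones((d, d))     # initialization from the text
    L0 = simplex_preserving_map(A0)                    # f(A0) is close to I
    assert np.allclose(L0.sum(axis=0), 1.0)            # columns are normalized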
Dimensionality Reduction. Analogous to the original LMNN formulation (described in Section 2),
we can restrict from a square matrix to L ∈ R^{r×d} with r < d. In this case χ²-LMNN learns a
projection into a lower-dimensional simplex, L : S^d → S^r. All other parts of the algorithm change
analogously. This extension can be very valuable to enable faster nearest neighbor search [33]
especially for time-sensitive applications, e.g., object recognition tasks in computer vision [27]. In
section 6 we evaluate this version of χ²-LMNN under a range of settings for r.
4 GB-LMNN: Non-linear Transformations with Gradient Boosting
Whereas section 3 focuses on the learning scenario where a linear transformation is too general, in
this section we explore the opposite case where it is too restrictive. Affine transformations preserve
collinearity and ratios of distances along lines, i.e., inputs on a straight line remain on a straight
line and their relative distances are preserved. This can be too restrictive for data where similarities
change locally (e.g., because similar data lie on non-linear sub-manifolds). Chopra et al. [6] pioneered non-linear metric learning, using convolutional neural networks to learn embeddings for face-verification tasks. Inspired by their work, we propose to optimize the LMNN objective (2) directly
in function space with gradient boosted CART trees [11]. Combining the learned transformation
φ(x) : R^d → R^d with a Euclidean distance function has the capability to capture highly non-linear
similarity relations. It can be optimized using standard techniques, naturally scales to large data sets
while only introducing a single additional hyper-parameter in comparison with LMNN.
Generalized LMNN. To generalize the LMNN objective (2) to a non-linear transformation φ(·), we
denote the Euclidean distance after the transformation as

    D_φ(x_i, x_j) = ‖φ(x_i) − φ(x_j)‖₂ ,        (7)

which satisfies all properties of a well-defined pseudo-metric in the original input space. To optimize
the LMNN objective directly with respect to D_φ, we follow the same steps as in Section 3 and
substitute D_φ for D_L in (2). The resulting unconstrained loss function becomes

    L(φ) = Σ_{i,j: j⇝i} ‖φ(x_i) − φ(x_j)‖₂²  +  μ Σ_{i,j: j⇝i} Σ_{k: y_i≠y_k} [ 1 + ‖φ(x_i) − φ(x_j)‖₂² − ‖φ(x_i) − φ(x_k)‖₂² ]_+ .        (8)
In its most general form, with an unspecified mapping φ, (8) unifies most of the existing variations of
LMNN metric learning. The original linear LMNN mapping [31] is a special case where φ(x) = Lx.
Kernelized versions [5, 12, 26] are captured by φ(x) = Lψ(x), producing the kernel K(x_i, x_j) =
φ(x_i)ᵀφ(x_j) = ψ(x_i)ᵀ LᵀL ψ(x_j). The embedding of Globerson and Roweis [14] corresponds to
the most expressive mapping function φ(x_i) = z_i, where each input x_i is transformed independently
to a new location z_i to satisfy similarity constraints, without out-of-sample extensions.
GB-LMNN. The previous examples vary widely in expressiveness, scalability, and generalization,
largely as a consequence of the mapping function φ. It is important to find the right non-linear form
for φ, and we believe an elegant solution lies in gradient boosted regression trees.
Our method, termed GB-LMNN, learns a global non-linear mapping. The construction of the mapping, an ensemble of multivariate regression trees selected by gradient boosting [11], minimizes the
general LMNN objective (8) directly in function space. Formally, the GB-LMNN transformation
¹ The set of all possible matrices f(A) is slightly more restricted than P, as it reaches zero entries only in
the limit. However, given finite computational precision, this does not seem to be a problem in practice.
[Figure 2 plot panels: True Gradient (top row) and Approximated Gradient (bottom row) at iterations 1, 10, 20, 40, and 100.]
Figure 2: GB-LMNN illustrated on a toy data set sampled from two concentric circles of different
classes (blue and red dots). The figure depicts the true gradient (top row) with respect to each input
and its least squares approximation (bottom row) with a multi-variate regression tree (depth, p = 4).
is an additive function φ = φ₀ + α Σ_{t=1}^{T} h_t, initialized by φ₀ and constructed by iteratively adding
regression trees h_t of limited depth p [4], each weighted by a learning rate α. Individually, the trees
are weak learners and are capable of learning only simple functions, but additively they form powerful ensembles with good generalization to out-of-sample data. In iteration t, the tree h_t is selected
greedily to best minimize the objective upon its addition to the ensemble,

    φ_t(·) = φ_{t−1}(·) + α h_t(·),   where   h_t ∈ argmin_{h∈T^p} L(φ_{t−1} + α h).        (9)
Here, T^p denotes the set of all regression trees of depth p. The (approximately) optimal tree h_t is
found by a first-order Taylor approximation of L. This makes the optimization akin to a steepest
descent step in function space, where h_t is selected to approximate the negative gradient g_t of the
objective L(φ_{t−1}) with respect to the transformation learned at the previous iteration φ_{t−1}. Since
we learn an approximation of g_t as a function of the training data, sub-gradients are computed with
respect to each training input x_i, and approximated by the tree h_t(·) in the least-squares sense,
    h_t(·) = argmin_{h∈T^p} Σ_{i=1}^{n} ( g_t(x_i) − h(x_i) )² ,   where   g_t(x_i) = −∂L(φ_{t−1}) / ∂φ_{t−1}(x_i).        (10)
Intuitively, at each iteration, the tree h_t(·) of depth p splits the input space into 2^p axis-aligned
regions. All inputs that fall into one region are translated by a constant vector; consequently,
the inputs in different regions are shifted in different directions. We learn the trees greedily with a
modified version of the public-domain CART implementation pGBRT [28].
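One boosting iteration of (9) and (10) can be sketched with an off-the-shelf multivariate regression tree (scikit-learn here, standing in for the pGBRT implementation the paper actually uses; the gradient computation is assumed to be supplied):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def gb_lmnn_step(X, phi_X, g, depth=6, alpha=0.01):
        """One GB-LMNN iteration.

        X:     original inputs, shape (n, d); the tree splits on these.
        phi_X: current embedding phi_{t-1}(x_i), shape (n, r).
        g:     negative sub-gradients g_t(x_i) of L w.r.t. phi_{t-1}(x_i), (n, r).
        """
        tree = DecisionTreeRegressor(max_depth=depth)  # depth-limited tree, Eq. (10)
        tree.fit(X, g)                                 # least-squares fit to g_t
        return phi_X + alpha * tree.predict(X), tree   # descent step, Eq. (9)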
Optimization details. Since (8) is non-convex with respect to φ, we initialize with the linear
transformation learned by LMNN, φ₀ = Lx, making our method a non-linear refinement of LMNN.
The only additional hyperparameter to the optimization is the maximum tree depth p, to which the
algorithm is not particularly sensitive (we set p = 6).²
Figure 2 depicts a simple toy-example with concentric circles of inputs from two different classes.
By design, the inputs are sampled such that the nearest neighbor for any given input is from the
other class. A linear transformation is incapable of separating the two classes. However, GB-LMNN
produces a mapping with the desired separation. The figure illustrates the actual gradient (top row)
and its approximation (bottom row). The limited-depth regression trees are unable to capture the
gradient for all inputs in a single iteration. But by greedily focusing on inputs with the largest
gradients or groups of inputs with the most easily encoded gradients, the gradient boosting process
additively constructs the transformation function. At iteration 100, the gradients with respect to
most inputs vanish, indicating that a local minimum of L(φ) is almost reached: the inputs from
the two classes are separated by a large margin.
² Here, we set the step-size, a common hyper-parameter across all variations of LMNN, to α = 0.01.
Dimensionality reduction. Like linear LMNN and χ²-LMNN, it is possible to learn a non-linear
transformation to a lower-dimensional space, φ(x) : R^d → R^r, r ≤ d. Initialization is made with
the rectangular matrix output of the dimensionality-reduced LMNN transformation, φ₀ = Lx with
L ∈ R^{r×d}. Training proceeds by learning trees with r- rather than d-dimensional outputs.
5 Related Work
There have been previous attempts to generalize learning linear distances to nonlinear metrics. The
nonlinear mapping φ(x) of eq. (7) can be implemented with kernels [5, 12, 18, 26]. These extensions have the advantages of maintaining computational tractability as convex optimization problems. However, their utility is limited inherently by the sizes of kernel matrices. Weinberger et al.
[30] propose M²-LMNN, a locally linear extension to LMNN. They partition the space into multiple
regions, and jointly learn a separate metric for each region; however, these local metrics do not give
rise to a global metric and distances between inputs within different regions are not well-defined.
Neural network-based approaches offer the flexibility of learning arbitrarily complex nonlinear mappings [6]. However, they often demand high computational expense, not only in parameter fitting but
also in model selection and hyper-parameter tuning. Of particular relevance to our GB-LMNN work
is the use of boosting ensembles to learn distances between bit-vectors [1, 23]. Note that their goals
are to preserve distances computed by locality sensitive hashing to enable fast search and retrieval.
Ours are very different: we alter the distances discriminatively to minimize classification error.
Our work on χ²-LMNN echoes the recent interest in learning the earth mover's distance (EMD), which
is also frequently used in measuring similarities between histogram-type data [9]. Despite its name,
EMD is not necessarily a metric [20]. Investigating the link between our work and those new advances is a subject for future work.
6 Experimental Results
We evaluate our non-linear metric learning algorithms against several competitive methods. The effectiveness of learned metrics is assessed by kNN classification error. Our open-source implementations are available for download at http://www.cse.wustl.edu/~kilian/code/code.html.
GB-LMNN. We compare the non-linear global metric learned by GB-LMNN to three linear metrics:
the Euclidean metric and metrics learned by LMNN [31] and Information-Theoretic Metric Learning
(ITML) [10]. Both optimize similar discriminative loss functions. We also compare to the metrics
learned by Multi-Metric LMNN (M²-LMNN) [30]. M²-LMNN learns |C| linear metrics, one for
each of the input labels.
We evaluate these methods and our GB-LMNN on several medium-sized data sets: ISOLET, USPS
and Letters from the UCI repository. ISOLET and USPS have predefined test sets, otherwise results
are averaged over 5 train/test splits (80%/20%). A hold-out set of 25% of the training set³ is used to
assign hyper-parameters and to determine feature pre-processing (i.e., feature-wise normalization).
We set k = 3 for kNN classification, following [31]. Table 1 reports the means and standard errors
of each approach (standard error is omitted for data with pre-defined test sets), with numbers in bold
font indicating the best results up to one standard error.
On all three datasets, GB-LMNN outperforms methods of learning linear metrics. This shows the
benefit of learning nonlinear metrics. On Letters, GB-LMNN outperforms the second-best method
M²-LMNN by significant margins. On the other two, GB-LMNN is as good as M²-LMNN.
We also apply GB-LMNN to four datasets with histogram data, setting the stage for an interesting
comparison to χ²-LMNN below. The results are displayed on the right side of the table. These
datasets are popularly used in computer vision for object recognition [22]. Data instances are 800-bin histograms of visual codebook entries. There are ten common categories to the four datasets and
we use them for multiway classification with kNN.
Neither method evaluated so far is specifically adapted to histogram features. Especially linear
models, such as LMNN and ITML, are expected to fumble over the intricate similarities that such
³ In the case of ISOLET, which consists of audio signals of spoken letters by different individuals, the hold-out set consisted of one speaker.
[Table 1 body: the numeric results grid did not survive extraction and is omitted here.]
Table 1: kNN classification error (in %, ± standard error where applicable), for general methods
(top section) and histogram methods (bottom section). Best results up to one standard error in bold.
Best results among general methods for simplex data in red italics.
As shown in the table, GB-LMNN consistently outperforms the linear methods and M²-LMNN.
χ²-LMNN In Table 1, we compare χ²-LMNN to other methods for computing distances on histogram features: the χ² distance without transformation (equivalent to our parameterized χ²_L distance with the transformation L being the identity matrix), and the Quadratic-Chi-Squared (QCS) and Quadratic-Chi-Normalized (QCN) distances, defined in [20]. For QCS and QCN, we use histogram intersection as the ground distance. Unlike our approach, none of these is discriminatively learned from data. χ²-LMNN outperforms all other methods significantly.
It is also instructive to compare the results to the performance of non-histogram-specific methods. We observe that LMNN performs better than the standard χ² distance on Amazon and Caltech. This seems to suggest that for those two datasets, linear metrics may be adequate and GB-LMNN's nonlinear mapping might not be able to provide extra expressiveness and benefits. This is confirmed in Table 1: GB-LMNN improves performance less significantly for Amazon and Caltech than for the other two datasets, DSLR and Webcam. For the latter two, on the contrary, LMNN performs worse than the χ² distance. In such cases, GB-LMNN's nonlinear mapping seems more beneficial. It provides a significant performance boost, and matches the performance of the χ² distance (up to one standard error). Nonetheless, despite learning a nonlinear mapping, GB-LMNN still underperforms χ²-LMNN. In other words, it is possible that no matter how flexible a nonlinear mapping could be, it is still best to use metrics that respect the semantic features of the data.
Dimensionality reduction. GB-LMNN and χ²-LMNN are both capable of performing dimensionality reduction. We compare these with three dimensionality reduction methods (PCA, LMNN, and M²-LMNN) on the histogram datasets and the larger UCI datasets. Each dataset is reduced to an output dimensionality of r = 10, 20, 40, 80 features. As we can see from the results in Table 2, it is fair to say that GB-LMNN performs comparably with LMNN and M²-LMNN, whereas χ²-LMNN obtains at times phenomenally low kNN error rates on the histogram datasets (e.g., Webcam). This suggests that dimensionality reduction of histogram data can be highly effective if the data properties are carefully incorporated in the process. We do not apply dimensionality reduction to Letters, as it already lies in a low-dimensional space (d = 16).
Sensitivity to parameters. One of the most compelling aspects of our methods is that each introduces only a single new hyper-parameter to the LMNN framework. During our experiments, ℓ was selected by cross-validation and p was fixed to p = 6. We found very little sensitivity in GB-LMNN to regression tree depth, while the large-margin size was an important but well-behaved parameter for χ²-LMNN. Additional graphs are included in the supplementary material.
7 Conclusion and Future Work
In this paper we introduced two non-linear extensions to LMNN, χ²-LMNN and GB-LMNN. Although based on fundamentally different approaches, both algorithms lead to significant improvements over the original (linear) LMNN metrics and match or outperform existing non-linear algorithms. The non-convexity of our proposed methods does not seem to impact their performance, indicating that convex algorithms (LMNN) as initialization for more expressive non-convex methods can be a winning combination.
[Table 2 body: the numeric entries were garbled beyond recovery during extraction. The table reports kNN error for PCA, LMNN, M²-LMNN, GB-LMNN, and χ²-LMNN at output dimensionalities r = 10, 20, 40, and 80 on the same datasets as Table 1.]
Table 2: kNN classification error (in %, ± standard error where applicable) with dimensionality reduction to output dimensionality r. Best results up to one standard error in bold.
The strong results obtained with χ²-LMNN show that the incorporation of data-specific constraints can be highly beneficial, indicating that there is great potential for future research in specialized metric learning algorithms for specific data types. Further, the ability of χ²-LMNN to reduce the dimensionality of data sampled from probability simplexes is highly encouraging and might lead to interesting applications in computer vision and other fields, where histogram data is ubiquitous. Here, it might be possible to reduce the running time of time-critical algorithms drastically by shrinking the data dimensionality, while strictly maintaining its histogram properties.
The high consistency with which GB-LMNN obtains state-of-the-art results across diverse data sets
is highly encouraging. In fact, the use of ensembles of CART trees [4] not only inherits all positive
aspects of gradient boosting (robustness, speed and insensitivity to hyper-parameters) but is also a
natural match for metric learning. Each tree splits the space into different regions and in contrast to
prior work [30], this splitting is fully automated, results in new (discriminatively learned) Euclidean
representations of the data and gives rise to well-defined pseudo-metrics.
8 Acknowledgements
KQW, DK and ST would like to thank NIH for their support through grant U01 1U01NS073457-01
and NSF for grants 1149882 and 1137211. FS would like to thank DARPA for its support with grant
D11AP00278 and ONR for grant N00014-12-1-0066. GL was supported in part by the NSF under
Grants CCF-0830535 and IIS-1054960, and by the Sloan Foundation. DK would also like to thank
the McDonnell International Scholars Academy for their support.
References
[1] B. Babenko, S. Branson, and S. Belongie. Similarity metrics for categorization: from monolithic to category specific. In ICCV '09, pages 293–300. IEEE, 2009.
[2] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent dirichlet allocation. The Journal of Machine Learning Research, 3:993–1022, 2003.
[4] L. Breiman. Classification and regression trees. Chapman & Hall/CRC, 1984.
[5] R. Chatpatanasiri, T. Korsrilabutr, P. Tangchanachaianan, and B. Kijsirikul. A new kernelization framework for mahalanobis distance learning algorithms. Neurocomputing, 73(10-12):1570–1579, 2010.
[6] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In CVPR '05, pages 539–546. IEEE, 2005.
[7] T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21–27, 1967.
[8] O. G. Cula and K. J. Dana. 3D texture recognition using bidirectional feature histograms. International Journal of Computer Vision, 59(1):33–60, 2004.
[9] M. Cuturi and D. Avis. Ground metric learning. arXiv preprint, arXiv:1110.2306, 2011.
[10] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In ICML '07, pages 209–216. ACM, 2007.
[11] J. H. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, pages 1189–1232, 2001.
[12] C. Galleguillos, B. McFee, S. Belongie, and G. Lanckriet. Multi-class object localization by combining local contextual interactions. CVPR '10, pages 113–120, 2010.
[13] A. Globerson and S. Roweis. Metric learning by collapsing classes. In NIPS '06, pages 451–458. MIT Press, 2006.
[14] A. Globerson and S. Roweis. Visualizing pairwise similarity via semidefinite programming. In AISTATS '07, pages 139–146, 2007.
[15] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In NIPS '05, pages 513–520. MIT Press, 2005.
[16] J. Hafner, H. S. Sawhney, W. Equitz, M. Flickner, and W. Niblack. Efficient color histogram indexing for quadratic form distance functions. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 17(7):729–736, 1995.
[17] M. Hoffman, D. Blei, and P. Cook. Easy as CBA: A simple probabilistic model for tagging music. In ISMIR '09, pages 369–374, 2009.
[18] P. Jain, B. Kulis, J. V. Davis, and I. S. Dhillon. Metric and kernel learning using a linear transformation. Journal of Machine Learning Research, 13:519–547, 2012.
[19] A. M. Mood, F. A. Graybill, and D. C. Boes. Introduction to the theory of statistics. McGraw-Hill International Book Company, 1963.
[20] O. Pele and M. Werman. The quadratic-chi histogram distance family. ECCV '10, pages 749–762, 2010.
[21] Y. Rubner, C. Tomasi, and L. J. Guibas. The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40(2):99–121, 2000.
[22] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. Computer Vision–ECCV 2010, pages 213–226, 2010.
[23] G. Shakhnarovich. Learning task-specific similarity. PhD thesis, MIT, 2005.
[24] N. Shental, T. Hertz, D. Weinshall, and M. Pavel. Adjustment learning and relevant component analysis. In ECCV '02, volume 4, pages 776–792. Springer-Verlag, 2002.
[25] M. Stricker and M. Orengo. Similarity of color images. In Storage and Retrieval for Image and Video Databases, volume 2420, pages 381–392, 1995.
[26] L. Torresani and K. Lee. Large margin component analysis. NIPS '07, pages 1385–1392, 2007.
[27] T. Tuytelaars and K. Mikolajczyk. Local invariant feature detectors: a survey. Foundations and Trends in Computer Graphics and Vision, 3(3):177–280, 2008.
[28] S. Tyree, K. Q. Weinberger, K. Agrawal, and J. Paykin. Parallel boosted regression trees for web search ranking. In WWW '11, pages 387–396. ACM, 2011.
[29] M. Varma and A. Zisserman. A statistical approach to material classification using image patch exemplars. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(11):2032–2047, 2009.
[30] K. Q. Weinberger and L. K. Saul. Fast solvers and efficient implementations for distance metric learning. In ICML '08, pages 1160–1167. ACM, 2008.
[31] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research, 10:207–244, 2009.
[32] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In NIPS '02, pages 505–512. MIT Press, 2002.
[33] P. N. Yianilos. Data structures and algorithms for nearest neighbor search in general metric spaces. In ACM-SIAM Symposium on Discrete Algorithms '93, pages 311–321, 1993.
4,244 | 4,841 | The representer theorem for Hilbert spaces: a necessary and sufficient condition
Francesco Dinuzzo and Bernhard Schölkopf
Max Planck Institute for Intelligent Systems
Spemannstrasse 38, 72076 Tübingen
Germany
Abstract
The representer theorem is a property that lies at the foundation of regularization
theory and kernel methods. A class of regularization functionals is said to admit
a linear representer theorem if every member of the class admits minimizers that
lie in the finite dimensional subspace spanned by the representers of the data.
A recent characterization states that certain classes of regularization functionals
with differentiable regularization term admit a linear representer theorem for any
choice of the data if and only if the regularization term is a radial nondecreasing
function. In this paper, we extend such result by weakening the assumptions on
the regularization term. In particular, the main result of this paper implies that,
for a sufficiently large family of regularization functionals, radial nondecreasing
functions are the only lower semicontinuous regularization terms that guarantee
existence of a representer theorem for any choice of the data.
1 Introduction
Regularization [1] is a popular and well-studied methodology for addressing ill-posed estimation problems [2] and learning from examples [3]. In this paper, we focus on regularization problems defined over a real Hilbert space H. A Hilbert space is a vector space endowed with an inner product and a norm that is complete.¹ Such a setting is general enough to take into account a broad family of finite-dimensional regularization techniques such as regularized least squares or support vector machines (SVM) for classification or regression, kernel principal component analysis, as well as a variety of methods based on regularization over reproducing kernel Hilbert spaces (RKHS).
The focus of our study is the general problem of minimizing an extended real-valued regularization
functional $J : H \to \mathbb{R} \cup \{+\infty\}$ of the form
$$J(w) = f(L_1 w, \ldots, L_\ell w) + \Omega(w), \qquad (1)$$
where $L_1, \ldots, L_\ell$ are bounded linear functionals on $H$. The functional $J$ is the sum of an error term $f$, which typically depends on empirical data, and a regularization term $\Omega$ that enforces certain desirable properties on the solution. By allowing the error term $f$ to take the value $+\infty$, problems with hard constraints on the values $L_i w$ (for instance, interpolation problems) are included in the framework. Moreover, by allowing $\Omega$ to take the value $+\infty$, regularization problems of the Ivanov type are also taken into account.
In machine learning, the most common class of regularization problems concerns a situation where a set of data pairs $(x_i, y_i)$ is available, $H$ is a space of real-valued functions, and the objective functional to be minimized is of the form
$$J(w) = c\big((x_1, y_1, w(x_1)), \ldots, (x_\ell, y_\ell, w(x_\ell))\big) + \Omega(w).$$
¹ Meaning that Cauchy sequences are convergent.
It is easy to see that this setting is a particular case of (1), where the dependence on the data pairs $(x_i, y_i)$ can be absorbed into the definition of $f$, and the $L_i$ are point-wise evaluation functionals, i.e. such that $L_i w = w(x_i)$. Several popular techniques can be cast in such a regularization framework.
Example 1 (Regularized least squares). Also known as ridge regression when $H$ is finite-dimensional. Corresponds to the choice
$$c\big((x_1, y_1, w(x_1)), \ldots, (x_\ell, y_\ell, w(x_\ell))\big) = \gamma \sum_{i=1}^{\ell} (y_i - w(x_i))^2,$$
and $\Omega(w) = \|w\|^2$, where the complexity parameter $\gamma \ge 0$ controls the trade-off between fitting of training data and regularity of the solution.
Example 2 (Support vector machine). Given binary labels $y_i = \pm 1$, the SVM classifier (without bias) can be interpreted as a regularization method corresponding to the choice
$$c\big((x_1, y_1, w(x_1)), \ldots, (x_\ell, y_\ell, w(x_\ell))\big) = \gamma \sum_{i=1}^{\ell} \max\{0, 1 - y_i w(x_i)\},$$
and $\Omega(w) = \|w\|^2$. The hard-margin SVM can be recovered by letting $\gamma \to +\infty$.
Example 3 (Kernel principal component analysis). Kernel PCA can be shown to be equivalent to a regularization problem where
$$c\big((x_1, y_1, w(x_1)), \ldots, (x_\ell, y_\ell, w(x_\ell))\big) = \begin{cases} 0, & \frac{1}{\ell} \sum_{i=1}^{\ell} \Big( w(x_i) - \frac{1}{\ell} \sum_{j=1}^{\ell} w(x_j) \Big)^2 = 1, \\ +\infty, & \text{otherwise,} \end{cases}$$
and $\Omega$ is any strictly monotonically increasing function of the norm $\|w\|$; see [4]. In this problem, there are no labels $y_i$, but the feature extractor function $w$ is constrained to produce outputs with unit empirical variance.
The possibility of choosing general continuous linear functionals $L_i$ in (1) allows us to consider a much broader class of regularization problems. Some examples are the following.
Example 4 (Tikhonov deconvolution). Given an "input signal" $u : \mathcal{X} \to \mathbb{R}$, assume that the convolution $u * w$ is well-defined for any $w \in H$, and the point-wise evaluated convolution functionals
$$L_i w = (u * w)(x_i) = \int_{\mathcal{X}} u(s)\, w(x_i - s)\, ds$$
are continuous. A possible way to recover $w$ from noisy measurements $y_i$ of the "output signal" is to solve regularization problems such as
$$\min_{w \in H} \left( \gamma \sum_{i=1}^{\ell} \big(y_i - (u * w)(x_i)\big)^2 + \|w\|^2 \right),$$
where the objective functional is of the form (1).
where the objective functional is of the form (1).
Example 5 (Learning from probability measures). In certain learning problems, it may be appropriate to represent input data as probability distributions. Given a finite set of probability measures $P_i$ on a measurable space $(\mathcal{X}, \mathcal{A})$, where $\mathcal{A}$ is a $\sigma$-algebra of subsets of $\mathcal{X}$, introduce the expectations
$$L_i w = E_{P_i}(w) = \int_{\mathcal{X}} w(x)\, dP_i(x).$$
Then, given output labels $y_i$, one can learn an input-output relationship by solving regularization problems of the form
$$\min_{w \in H} \ c\big((y_1, E_{P_1}(w)), \ldots, (y_\ell, E_{P_\ell}(w))\big) + \|w\|^2.$$
If the expectations are bounded linear functionals, such a regularization functional is of the form (1).
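Anticipating the RKHS discussion in Section 1.1, the representer of such an expectation functional is the kernel mean $\mu_{P_i}(x) = E_{P_i}[K(x, \cdot)]$, which can be estimated from samples. A minimal sketch (Python with NumPy; the Gaussian kernel and the synthetic samples standing in for $P_i$ are our illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, sigma=1.0):
    # Gaussian (RBF) reproducing kernel on R^2
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

X_P = rng.standard_normal((500, 2))   # samples standing in for P_i

def kernel_mean(x):
    """Empirical representer of L w = E_P(w): mu_P(x) = mean_j K(x_j, x)."""
    return np.mean([rbf(xj, x) for xj in X_P])

print(kernel_mean(np.zeros(2)))       # evaluate the representer at a point
```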
Example 6 (Ivanov regularization). By allowing the regularization term $\Omega$ to take the value $+\infty$, we can also take into account the whole class of Ivanov-type regularization problems of the form
$$\min_{w \in H} f(L_1 w, \ldots, L_\ell w), \quad \text{subject to} \quad \theta(w) \le 1,$$
by reformulating them as the minimization of a functional of the type (1), where
$$\Omega(w) = \begin{cases} 0, & \theta(w) \le 1, \\ +\infty, & \text{otherwise.} \end{cases}$$
2
1.1 The representer theorem
Let's now go back to the general formulation (1). By the Riesz representation theorem [5, 6], $J$ can be rewritten as
$$J(w) = f(\langle w, w_1\rangle, \ldots, \langle w, w_\ell\rangle) + \Omega(w),$$
where $w_i$ is the representer of the linear functional $L_i$ with respect to the inner product. Consider the following definition.
Definition 1. A family $\mathcal{F}$ of regularization functionals of the form (1) is said to admit a linear representer theorem if, for any $J \in \mathcal{F}$ and any choice of bounded linear functionals $L_i$, there exists a minimizer $w^*$ that can be written as a linear combination of the representers:
$$w^* = \sum_{i=1}^{\ell} c_i w_i.$$
If a linear representer theorem holds, the regularization problem under study can be reduced to an $\ell$-dimensional optimization problem on the scalar coefficients $c_i$, independently of the dimension of $H$. This property is fundamental in practice: without a finite-dimensional parametrization, it wouldn't be possible to employ numerical optimization techniques to compute a solution. Sufficient conditions under which a family of functionals admits a representer theorem have been widely studied in the literature of statistics, inverse problems, and machine learning. The theorem also provides the foundations of learning techniques such as regularized kernel methods and support vector machines; see [7, 8, 9] and references therein.
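To make this reduction concrete, the following minimal sketch (Python with NumPy; the data, dimensions, and the symbol $\gamma$ are our own illustrative choices) solves a regularized least-squares problem as in Example 1 with $H = \mathbb{R}^d$ and $L_i w = \langle w, x_i\rangle$, once directly over $w$ and once through the $\ell$ representer coefficients $c_i$, and checks that the two minimizers coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
ell, d, gamma = 20, 5, 10.0           # ell data pairs, H = R^d, trade-off gamma

X = rng.standard_normal((ell, d))     # rows x_i are the representers of L_i
y = rng.standard_normal(ell)

# Direct minimization of gamma * ||y - Xw||^2 + ||w||^2 over w in R^d.
w_direct = np.linalg.solve(X.T @ X + np.eye(d) / gamma, X.T @ y)

# Representer route: w = sum_i c_i x_i reduces the problem to ell coefficients,
# whose closed-form solution involves only the ell x ell Gram matrix.
K = X @ X.T
c = np.linalg.solve(K + np.eye(ell) / gamma, y)
w_repr = X.T @ c

assert np.allclose(w_direct, w_repr)  # both routes give the same minimizer
```

The second route touches only the Gram matrix of the representers, which is what keeps the computation feasible when $H$ is infinite-dimensional, as in the RKHS setting discussed next.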
Representer theorems are of particular interest when $H$ is a reproducing kernel Hilbert space (RKHS) [10]. Given a non-empty set $\mathcal{X}$, an RKHS is a space of functions $w : \mathcal{X} \to \mathbb{R}$ such that point-wise evaluation functionals are bounded, namely, for any $x \in \mathcal{X}$, there exists a non-negative real number $C_x$ such that
$$|w(x)| \le C_x \|w\|, \quad \forall w \in H.$$
It can be shown that an RKHS can be uniquely associated to a positive-semidefinite kernel function $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ (called the reproducing kernel), such that the so-called reproducing property holds:
$$w(x) = \langle w, K_x\rangle, \quad \forall (x, w) \in \mathcal{X} \times H,$$
where the kernel sections $K_x$ are defined as
$$K_x(y) = K(x, y), \quad \forall y \in \mathcal{X}.$$
The reproducing property states that the representers of point-wise evaluation functionals coincide with the kernel sections. Starting from the reproducing property, it is also easy to show that the representer of any bounded linear functional $L$ is given by a function $K_L \in H$ such that
$$K_L(x) = L K_x, \quad \forall x \in \mathcal{X}.$$
Therefore, in an RKHS, the representer of any bounded linear functional can be obtained explicitly in terms of the reproducing kernel.
If the regularization functional (1) admits minimizers, and the regularization term $\Omega$ is a nondecreasing function of the norm, i.e.
$$\Omega(w) = h(\|w\|), \quad \text{with } h : \mathbb{R} \to \mathbb{R} \cup \{+\infty\} \text{ nondecreasing}, \qquad (2)$$
the linear representer theorem follows easily from the Pythagorean identity. A proof that condition (2) is sufficient appeared in [11] in the case where $H$ is an RKHS and the $L_i$ are point-wise evaluation functionals. Earlier instances of representer theorems can be found in [12, 13, 14]. More recently, the question of whether condition (2) is also necessary for the existence of linear representer theorems has been investigated [15]. In particular, [15] shows that, if $\Omega$ is differentiable (and certain technical existence conditions hold), then (2) is a necessary and sufficient condition for certain classes of regularization functionals to admit a representer theorem. The proof of [15] heavily exploits differentiability of $\Omega$, but the authors conjecture that the hypothesis can be relaxed. In the following, we indeed show that (2) is necessary and sufficient for the family of regularization functionals of the form (1) to admit a linear representer theorem, by merely assuming that $\Omega$ is lower semicontinuous and satisfies basic conditions for the existence of minimizers. The proof is based on a characterization of radial nondecreasing functions defined on a Hilbert space.
2 A characterization of radial nondecreasing functions
In this section, we present a characterization of radial nondecreasing functions defined over Hilbert
spaces. We will make use of the following definition.
Definition 2. A subset $S$ of a Hilbert space $H$ is called star-shaped with respect to a point $z \in H$ if
$$(1 - \lambda) z + \lambda x \in S, \quad \forall x \in S, \ \forall \lambda \in [0, 1].$$
It is easy to verify that a convex set is star-shaped with respect to any point of the set, whereas a star-shaped set does not have to be convex.
The following theorem provides a geometric characterization of radial nondecreasing functions defined on a Hilbert space that generalizes the analogous result of [15] for differentiable functions.
Theorem 1. Let $H$ denote a Hilbert space such that $\dim H \ge 2$, and $\Omega : H \to \mathbb{R} \cup \{+\infty\}$ a lower semicontinuous function. Then, (2) holds if and only if
$$\Omega(x + y) \ge \max\{\Omega(x), \Omega(y)\}, \quad \forall x, y \in H : \langle x, y\rangle = 0. \qquad (3)$$
Proof. Assume that (2) holds. Then, for any pair of orthogonal vectors $x, y \in H$, we have
$$\Omega(x + y) = h(\|x + y\|) = h\left(\sqrt{\|x\|^2 + \|y\|^2}\right) \ge \max\{h(\|x\|), h(\|y\|)\} = \max\{\Omega(x), \Omega(y)\}.$$
Conversely, assume that condition (3) holds. Since $\dim H \ge 2$, by fixing a generic vector $x \in H \setminus \{0\}$ and a number $\lambda \in [0, 1]$, there exists a vector $y$ such that $\|y\| = 1$ and
$$\lambda = 1 - \cos^2\theta, \quad \text{where} \quad \cos\theta = \frac{\langle x, y\rangle}{\|x\|\|y\|}.$$
In view of (3), we have
$$\Omega(x) = \Omega(x - \langle x, y\rangle y + \langle x, y\rangle y) \ge \Omega(x - \langle x, y\rangle y) = \Omega\big((x - \cos^2\theta\, x) + (\cos^2\theta\, x - \langle x, y\rangle y)\big) \ge \Omega(\lambda x).$$
Since the last inequality trivially holds also when $x = 0$, we conclude that
$$\Omega(x) \ge \Omega(\lambda x), \quad \forall x \in H, \ \forall \lambda \in [0, 1], \qquad (4)$$
so that $\Omega$ is nondecreasing along all the rays passing through the origin. In particular, the minimum of $\Omega$ is attained at $x = 0$.
Now, for any $c \ge \Omega(0)$, consider the sublevel sets
$$S_c = \{x \in H : \Omega(x) \le c\}.$$
From (4), it follows that $S_c$ is not empty and star-shaped with respect to the origin. In addition, since $\Omega$ is lower semicontinuous, $S_c$ is also closed. We now show that $S_c$ is either a closed ball centered at the origin, or the whole space. To this end, we show that, for any $x \in S_c$, the whole ball
$$B = \{y \in H : \|y\| \le \|x\|\}$$
is contained in $S_c$. First, take any $y \in \mathrm{int}(B) \setminus \mathrm{span}\{x\}$, where $\mathrm{int}$ denotes the interior. Then, $y$ has norm strictly less than $\|x\|$, that is $0 < \|y\| < \|x\|$, and is not aligned with $x$, i.e. $y \ne \lambda x$ for all $\lambda \in \mathbb{R}$.
Let $\theta \in \mathbb{R}$ denote the angle between $x$ and $y$. Now, construct a sequence of points $x_k$ as follows:
$$x_0 = y, \qquad x_{k+1} = x_k + a_k u_k, \quad \text{where} \quad a_k = \|x_k\| \tan\frac{\theta}{n}, \quad n \in \mathbb{N},$$
and $u_k$ is the unique unitary vector that is orthogonal to $x_k$, belongs to the two-dimensional subspace $\mathrm{span}\{x, y\}$, and is such that $\langle u_k, x\rangle > 0$; that is,
$$u_k \in \mathrm{span}\{x, y\}, \quad \|u_k\| = 1, \quad \langle u_k, x_k\rangle = 0, \quad \langle u_k, x\rangle > 0.$$
See Figure 1 for a geometrical illustration of the sequence $x_k$.
By orthogonality, we have
$$\|x_{k+1}\|^2 = \|x_k\|^2 + a_k^2 = \|x_k\|^2 \left(1 + \tan^2\frac{\theta}{n}\right) = \|y\|^2 \left(1 + \tan^2\frac{\theta}{n}\right)^{k+1}. \qquad (5)$$
In addition, the angle between $x_{k+1}$ and $x_k$ is given by
$$\theta_k = \arctan\frac{a_k}{\|x_k\|} = \frac{\theta}{n},$$
so that the total angle between $y$ and $x_n$ is given by
$$\sum_{k=0}^{n-1} \theta_k = \theta.$$
Since all the points $x_k$ belong to the subspace spanned by $x$ and $y$, and the angle between $x$ and $x_n$ is zero, we have that $x_n$ is positively aligned with $x$, that is
$$x_n = \alpha x, \quad \alpha \ge 0.$$
Now, we show that $n$ can be chosen in such a way that $\alpha \le 1$. Indeed, from (5) we have
$$\alpha^2 = \frac{\|x_n\|^2}{\|x\|^2} = \frac{\|y\|^2}{\|x\|^2} \left(1 + \tan^2\frac{\theta}{n}\right)^n,$$
and it can be verified that
$$\lim_{n \to +\infty} \left(1 + \tan^2\frac{\theta}{n}\right)^n = 1,$$
therefore $\alpha \le 1$ for a sufficiently large $n$. Now, write the difference vector in the form
$$\alpha x - y = \sum_{k=0}^{n-1} (x_{k+1} - x_k),$$
and observe that
$$\langle x_{k+1} - x_k, x_k\rangle = 0.$$
By using (4) and proceeding by induction, we have
$$c \ge \Omega(\alpha x) = \Omega(x_n - x_{n-1} + x_{n-1}) \ge \Omega(x_{n-1}) \ge \cdots \ge \Omega(x_0) = \Omega(y),$$
so that $y \in S_c$. Since $S_c$ is closed and the closure of $\mathrm{int}(B) \setminus \mathrm{span}\{x\}$ is the whole ball $B$, every point $y \in B$ is also included in $S_c$. This proves that $S_c$ is either a closed ball centered at the origin, or the whole space $H$.
Finally, for any pair of points such that $\|x\| = \|y\|$, we have $x \in S_{\Omega(y)}$ and $y \in S_{\Omega(x)}$, so that $\Omega(x) = \Omega(y)$.
Figure 1: The sequence $x_k$ constructed in the proof of Theorem 1 is associated with a geometrical construction known as the spiral of Theodorus. Starting from any $y$ in the interior of the ball (excluding points aligned with $x$), a point of the type $\alpha x$ (with $0 \le \alpha \le 1$) can be reached by using a finite number of right triangles.
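The construction is easy to reproduce numerically. The following sketch (Python with NumPy; the test vectors are arbitrary choices of ours) runs the spiral from $y$ toward $x$ and prints the resulting scale factor $\alpha = \|x_n\|/\|x\|$, which stays below 1 and approaches $\|y\|/\|x\|$ as $n$ grows, as equation (5) predicts:

```python
import numpy as np

def spiral_endpoint(x, y, n):
    # Theodorus-like spiral from the proof: each step adds a vector
    # orthogonal to the current point, rotating it by theta/n toward x.
    theta = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
    xk = y.astype(float)
    for _ in range(n):
        u = x - (x @ xk) / (xk @ xk) * xk   # component of x orthogonal to xk
        u /= np.linalg.norm(u)              # unit vector with <u, x> > 0
        xk = xk + np.linalg.norm(xk) * np.tan(theta / n) * u
    return xk                               # after n steps, aligned with x

x = np.array([2.0, 0.0])
y = np.array([0.3, 0.9])                    # ||y|| < ||x||, y not aligned with x
for n in (4, 16, 64, 256):
    alpha = np.linalg.norm(spiral_endpoint(x, y, n)) / np.linalg.norm(x)
    print(n, alpha)                         # decreases toward ||y||/||x|| < 1
```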
3 Representer theorem: a necessary and sufficient condition
In this section, we prove that condition (2) is necessary and sufficient for suitable families of regularization functionals of the type (1) to admit a linear representer theorem.
Theorem 2. Let $H$ denote a Hilbert space of dimension at least 2. Let $\mathcal{F}$ denote a family of functionals $J : H \to \mathbb{R} \cup \{+\infty\}$ of the form (1) that admit minimizers, and assume that $\mathcal{F}$ contains a set of functionals of the form
$$J_p^\gamma(w) = \gamma f(\langle w, p\rangle) + \Omega(w), \quad \forall p \in H, \ \forall \gamma \in \mathbb{R}_+, \qquad (6)$$
where $f(z)$ is uniquely minimized at $z = 1$. Then, for any lower semicontinuous $\Omega$, the family $\mathcal{F}$ admits a linear representer theorem if and only if (2) holds.
Proof. The first part of the theorem (sufficiency) follows from an orthogonality argument. Take any functional $J \in \mathcal{F}$. Let $R = \mathrm{span}\{w_1, \ldots, w_\ell\}$ and let $R^\perp$ denote its orthogonal complement. Any minimizer $w^*$ of $J$ can be uniquely decomposed as
$$w^* = u + v, \quad u \in R, \ v \in R^\perp.$$
If (2) holds, then we have
$$J(w^*) - J(u) = h(\|w^*\|) - h(\|u\|) \ge 0,$$
so that $u \in R$ is also a minimizer.
Now, let's prove the second part of the theorem (necessity). First of all, observe that the functional
$$J_0^\gamma(w) = \gamma f(0) + \Omega(w),$$
obtained by setting $p = 0$ in (6), belongs to $\mathcal{F}$. By hypothesis, $J_0^\gamma$ admits minimizers. In addition, by the representer theorem, the only admissible minimizer of $J_0^\gamma$ is the origin, that is
$$\Omega(y) \ge \Omega(0), \quad \forall y \in H. \qquad (7)$$
Now take any $x \in H \setminus \{0\}$ and let
$$p = \frac{x}{\|x\|^2}.$$
By the representer theorem, the functional $J_p^\gamma$ of the form (6) admits a minimizer of the type
$$w = c(\gamma)\, x$$
for some scalar $c(\gamma)$. Now, take any $y \in H$ such that $\langle x, y\rangle = 0$. By using the fact that $f(z)$ is minimized at $z = 1$, and the linear representer theorem, we have
$$\gamma f(1) + \Omega(c(\gamma) x) \le \gamma f(c(\gamma)) + \Omega(c(\gamma) x) = J_p^\gamma(c(\gamma) x) \le J_p^\gamma(x + y) = \gamma f(1) + \Omega(x + y).$$
By combining this last inequality with (7), we conclude that
$$\Omega(x + y) \ge \Omega(c(\gamma) x), \quad \forall x, y \in H : \langle x, y\rangle = 0, \ \forall \gamma \in \mathbb{R}_+. \qquad (8)$$
Now, there are two cases:
• $\Omega(x + y) = +\infty$;
• $\Omega(x + y) = C < +\infty$.
In the first case, we trivially have $\Omega(x + y) \ge \Omega(x)$. In the second case, using (7) and (8), we obtain
$$0 \le \gamma\big(f(c(\gamma)) - f(1)\big) \le \Omega(x + y) - \Omega(c(\gamma) x) \le C - \Omega(0) < +\infty, \quad \forall \gamma \in \mathbb{R}_+. \qquad (9)$$
Let $\gamma_k$ denote a sequence such that $\lim_{k \to +\infty} \gamma_k = +\infty$, and consider the sequence
$$a_k = \gamma_k\big(f(c(\gamma_k)) - f(1)\big).$$
From (9), it follows that $a_k$ is bounded. Since $z = 1$ is the only minimizer of $f(z)$, the sequence $a_k$ can remain bounded only if
$$\lim_{k \to +\infty} c(\gamma_k) = 1.$$
By taking the limit inferior in (8) for $\gamma \to +\infty$, and using the fact that $\Omega$ is lower semicontinuous, we obtain condition (3). It follows that $\Omega$ satisfies the hypotheses of Theorem 1, therefore (2) holds.
The second part of Theorem 2 states that any lower semicontinuous regularization term $\Omega$ has to be of the form (2) in order for the family $\mathcal{F}$ to admit a linear representer theorem. Observe that $\Omega$ is not required to be differentiable or even continuous. Moreover, it need not have bounded sublevel sets. For the necessary condition to hold, the family $\mathcal{F}$ has to be broad enough to contain at least a set of regularization functionals of the form (6). The following examples show how to apply the necessary condition of Theorem 2 to classes of regularization problems with standard loss functions.
• Let $L : \mathbb{R}^2 \to \mathbb{R} \cup \{+\infty\}$ denote any loss function of the type
$$L(y, z) = \tilde{L}(y - z),$$
such that $\tilde{L}(t)$ is uniquely minimized at $t = 0$. Then, for any lower semicontinuous regularization term $\Omega$, the family of regularization functionals of the form
$$J(w) = \gamma \sum_{i=1}^{\ell} L(y_i, \langle w, w_i\rangle) + \Omega(w)$$
admits a linear representer theorem if and only if (2) holds. To see that the hypotheses of Theorem 2 are satisfied, it is sufficient to consider the subset of functionals with $\ell = 1$, $y_1 = 1$, and $w_1 = p \in H$. These functionals can be written in the form (6) with
$$f(z) = L(1, z).$$
• The class of regularization problems with the hinge (SVM) loss of the form
$$J(w) = \gamma \sum_{i=1}^{\ell} \max\{0, 1 - y_i \langle w, w_i\rangle\} + \Omega(w),$$
with $\Omega$ lower semicontinuous, admits a linear representer theorem if and only if $\Omega$ satisfies (2). For instance, by choosing $\ell = 2$ and
$$(y_1, w_1) = (1, p), \qquad (y_2, w_2) = (-1, p/2),$$
we obtain regularization functionals of the form (6) with
$$f(z) = \max\{0, 1 - z\} + \max\{0, 1 + z/2\},$$
and it is easy to verify that $f$ is uniquely minimized at $z = 1$.
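As a quick numerical sanity check of the last claim (a throwaway Python snippet, purely illustrative): $f$ decreases with slope $-1/2$ for $-2 \le z \le 1$ and increases with slope $1/2$ for $z \ge 1$, so the minimizer is unique.

```python
import numpy as np

z = np.linspace(-4.0, 4.0, 100001)
f = np.maximum(0.0, 1.0 - z) + np.maximum(0.0, 1.0 + z / 2.0)
print(z[np.argmin(f)], f.min())   # -> 1.0 1.5: unique minimizer at z = 1
```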
4 Conclusions
Sufficiently broad families of regularization functionals defined over a Hilbert space with lower
semicontinuous regularization term admit a linear representer theorem if and only if the regularization term is a radial nondecreasing function. More precisely, the main result of this paper (Theorem
2) implies that, for any sufficiently large family of regularization functionals, nondecreasing functions of the norm are the only lower semicontinuous (extended-real valued) regularization terms that
guarantee existence of a representer theorem for any choice of the data functionals Li .
As a concluding remark, it is important to observe that other types of regularization terms are possible if the representer theorem is only required to hold for a restricted subset of the data functionals.
Exploring necessary conditions for the existence of representer theorems under different types of
restrictions on the data functionals is an interesting future research direction.
5 Acknowledgments
The authors would like to thank Andreas Argyriou for useful discussions.
References
[1] A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill Posed Problems. W. H. Winston, Washington, D. C., 1977.
[2] G. Wahba. Spline Models for Observational Data. SIAM, Philadelphia, USA, 1990.
[3] F. Cucker and S. Smale. On the mathematical foundations of learning. Bulletin of the American Mathematical Society, 39:1–49, 2001.
[4] B. Schölkopf, A. J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299–1319, 1998.
[5] F. Riesz. Sur une espèce de géométrie analytique des systèmes de fonctions sommables. Comptes rendus de l'Académie des sciences Paris, 144:1409–1411, 1907.
[6] M. Fréchet. Sur les ensembles de fonctions et les opérations linéaires. Comptes rendus de l'Académie des sciences Paris, 144:1414–1416, 1907.
[7] V. Vapnik. Statistical Learning Theory. Wiley, New York, NY, USA, 1998.
[8] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning). MIT Press, 2001.
[9] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, New York, NY, USA, 2004.
[10] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68:337–404, 1950.
[11] B. Schölkopf, R. Herbrich, and A. J. Smola. A generalized representer theorem. In Proceedings of the Annual Conference on Computational Learning Theory, pages 416–426, 2001.
[12] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33(1):82–95, 1971.
[13] D. Cox and F. O'Sullivan. Asymptotic analysis of penalized likelihood and related estimators. The Annals of Statistics, 18:1676–1695, 1990.
[14] T. Poggio and F. Girosi. Networks for approximation and learning. In Proceedings of the IEEE, volume 78, pages 1481–1497, 1990.
[15] A. Argyriou, C. A. Micchelli, and M. Pontil. When is there a representer theorem? Vector versus matrix regularizers. Journal of Machine Learning Research, 10:2507–2529, 2009.
4,245 | 4,842 | Localizing 3D cuboids in single-view images
Jianxiong Xiao
Bryan C. Russell†
Antonio Torralba
Massachusetts Institute of Technology
†University of Washington
Abstract
In this paper we seek to detect rectangular cuboids and localize their corners in
uncalibrated single-view images depicting everyday scenes. In contrast to recent
approaches that rely on detecting vanishing points of the scene and grouping line
segments to form cuboids, we build a discriminative parts-based detector that
models the appearance of the cuboid corners and internal edges while enforcing
consistency to a 3D cuboid model. Our model copes with different 3D viewpoints
and aspect ratios and is able to detect cuboids across many different object categories. We introduce a database of images with cuboid annotations that spans a
variety of indoor and outdoor scenes and show qualitative and quantitative results
on our collected database. Our model out-performs baseline detectors that use 2D
constraints alone on the task of localizing cuboid corners.
1 Introduction
Extracting a 3D representation from a single-view image depicting a 3D object has been a longstanding goal of computer vision [20]. Traditional approaches have sought to recover 3D properties,
such as creases, folds, and occlusions of surfaces, from a line representation extracted from the
image [18]. Among these are works that have characterized and detected geometric primitives, such
as quadrics (or "geons") and surfaces of revolution, which have been thought to form the components
for many different object types [1]. While these approaches have achieved notable early successes,
they could not be scaled-up due to their dependence on reliable contour extraction from natural
images.
In this work we focus on the task of detecting rectangular cuboids, which are a basic geometric
primitive type and occur often in 3D scenes (e.g. indoor and outdoor man-made scenes [22, 23, 24]).
Moreover, we wish to recover the shape parameters of the detected cuboids. The detection and
recovery of shape parameters yield at least a partial geometric description of the depicted scene,
which allows a system to reason about the affordances of a scene in an object-agnostic fashion [9,
15]. This is especially important when the category of the object is ambiguous or unknown.
There have been several recent efforts that revisit this problem [9, 11, 12, 17, 19, 21, 26, 28, 29].
Although there are many technical differences amongst these works, the main pipeline of these approaches is similar. Most of them estimate the camera parameters using three orthogonal vanishing
points with a Manhattan world assumption of a man-made scene. They detect line segments via
Canny edges and recover surface orientations [13] to form 3D cuboid hypotheses using bottom-up grouping of line and region segments. Then, they score these hypotheses based on the image
evidence for lines and surface orientations [13].
In this paper we look to take a different approach for this problem. As shown in Figure 1, we aim to
build a 3D cuboid detector to detect individual boxy volumetric structures. We build a discriminative
parts-based detector that models the appearance of the corners and internal edges of cuboids while
enforcing spatial consistency of the corners and edges to a 3D cuboid model. Our model is trained
in a similar fashion to recent work that detects articulated human body joints [27].
[Figure 1 panels: Input Image → 3D Cuboid Detector (detect) → Output Detection Result → Synthesized New Views.]
Figure 1: Problem summary. Given a single-view input image, our goal is to detect the 2D corner
locations of the cuboids depicted in the image. With the output part locations we can subsequently
recover information about the camera and 3D shape via camera resectioning.
Our cuboid detector is trained across different 3D viewpoints and aspect ratios. This is in contrast to
view-based approaches for object detection that train separate models for different viewpoints, e.g.
[7]. Moreover, instead of relying on edge detection and grouping to form an initial hypothesis of a
cuboid [9, 17, 26, 29], we use a 2D sliding window approach to exhaustively evaluate all possible
detection windows. Also, our model does not rely on any preprocessing step, such as computing
surface orientations [13]. Instead, we learn the parameters for our model using a structural SVM
framework. This allows the detector to adapt to the training data to identify the relative importance
of corners, edges and 3D shape constraints by learning the weights for these terms. We introduce an
annotated database of images with geometric primitives labeled and validate our model by showing
qualitative and quantitative results on our collected database. We also compare to baseline detectors
that use 2D constraints alone on the tasks of geometric primitive detection and part localization. We
show improved performance on the part localization task.
2 Model for 3D cuboid localization
We represent the appearance of cuboids by a set of parts located at the corners of the cuboid and
a set of internal edges. We enforce spatial consistency among the corners and edges by explicitly
reasoning about its 3D shape. Let I be the image and pi = (xi , yi ) be the 2D image location of the
ith corner on the cuboid. We define an undirected loopy graph G = (V, E) over the corners of the
cuboid, with vertices V and edges E connecting the corners of the cuboid. We illustrate our loopy
graph layout in Figure 2(a). We define a scoring function associated with the corner locations in the
image:
$$S(I, p) = \sum_{i \in V} w_i^H \cdot \mathrm{HOG}(I, p_i) + \sum_{ij \in E} w_{ij}^D \cdot \mathrm{Displacement2D}(p_i, p_j) + \sum_{ij \in E} w_{ij}^E \cdot \mathrm{Edge}(I, p_i, p_j) + w^S \cdot \mathrm{Shape3D}(p) \qquad (1)$$
where $\mathrm{HOG}(I, p_i)$ is a HOG descriptor [4] computed at image location $p_i$, and $\mathrm{Displacement2D}(p_i, p_j) = -[(x_i - x_j)^2,\ x_i - x_j,\ (y_i - y_j)^2,\ y_i - y_j]$ is a 2D corner displacement term that is used in other pictorial parts-based models [7, 27]. By reasoning about the
3D shape, our model handles different 3D viewpoints and aspect ratios, as illustrated in Figure 2.
Notice that our model is linear in the weights w. Moreover, the model is flexible as it adapts to
the training data by automatically learning weights that measure the relative importance of the
appearance and spatial terms. We define the Edge and Shape3D terms as follows.
Edge(I, pi , pj ): The internal edge information on cuboids is informative and provides a salient
feature for the locations of the corners. For this, we include a term that models the appearance of
the internal edges, which is illustrated in Figure 3. For adjacent corners on the cuboid, we identify
the edge between the two corners and calculate the image evidence to support the existence of such
an edge. Given the corner locations pi and pj , we use Chamfer matching to align the straight line
between the two corners to edges extracted from the image. We find image edges using Canny edge
detection [3] and efficiently compute the distance of each pixel along the line segment to the nearest
edge via the truncated distance transform. We use Bresenham?s line algorithm [2] to efficiently find
the 2D image locations on the line between the two points. The final edge term is the negative mean
value of the Chamfer matching score for all pixels on the line. As there are usually 9 visible edges
for a cuboid, we have 9 dimensions for the edge term.
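A minimal sketch of this computation (Python with NumPy/SciPy; the truncation threshold and the dense rasterization standing in for Bresenham's algorithm are our simplifications, not the paper's exact implementation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_term(edge_map, p_i, p_j, trunc=10.0):
    """Negative mean truncated chamfer distance along the segment p_i -> p_j.
    edge_map: boolean Canny output; p_i, p_j: (x, y) corner locations."""
    # distance of every pixel to the nearest detected edge, truncated
    dt = np.minimum(distance_transform_edt(~edge_map), trunc)
    # rasterize the segment (a dense stand-in for Bresenham's algorithm)
    n = int(max(abs(p_j[0] - p_i[0]), abs(p_j[1] - p_i[1]))) + 1
    xs = np.linspace(p_i[0], p_j[0], n).round().astype(int)
    ys = np.linspace(p_i[1], p_j[1], n).round().astype(int)
    return -dt[ys, xs].mean()   # closer to 0 means better edge support
```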
(a) Our Full Model. (b) 2D Tree Model.
(c) Example Part Detections.
Figure 2: Model visualization. Corresponding model parts are colored consistently in the figure.
In (a) and (b) the displayed corner locations are the average 2D locations across all viewpoints and
aspect ratios in our database. In (a) the edge thickness corresponds to the learned weight for the edge
term. We can see that the bottom edge is significantly thicker, which indicates that it is informative
for detection, possibly due to shadows and contact with a supporting plane.
Shape3D (p): The 3D shape of a cuboid constrains the layout of the parts and edges in the image.
We propose to define a shape term that measures how well the configuration of corner locations
respect the 3D shape. In other words, given the 2D locations p of the corners, we define a term
that tells us how likely this configuration of corner locations p can be interpreted as the reprojection
of a valid cuboid in 3D. When combined with the weights wS , we get an overall evaluation of
the goodness of the 2D locations with respect to the 3D shape. Let X (written in homogeneous
coordinates) be the 3D points on the unit cube centered at the world origin. Then, X transforms as
x = PLX, where L is a matrix that transforms the shape of the unit cube and P is a 3 ? 4 camera
matrix. For each corner, we use the other six 2D corner locations to estimate the product PL using
camera resectioning [10]. The estimated matrix is used to predict the corner location. We use the
negative L2 distance to the predicted corner location as a feature for the corner in our model. As we
model 7 corners on the cuboid, there are 7 dimensions in the feature vector. When combined with
the learned weights wS through dot-product, this represents a weighted reprojection error score.
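The resectioning step can be sketched with a standard direct linear transform (Python with NumPy; a generic DLT in the spirit of [10], not the authors' exact code). Given the six remaining corners it estimates $M \approx PL$, and the Shape3D feature for the held-out corner is the negative L2 distance between the prediction and the hypothesized location:

```python
import numpy as np

def estimate_PL(X, x):
    """DLT: fit a 3x4 matrix M mapping homogeneous 3D corners X (4 x n) to
    2D corners x (2 x n), minimizing algebraic error via the SVD."""
    rows = []
    for i in range(X.shape[1]):
        Xi = X[:, i]
        u, v = x[:, i]
        rows.append(np.concatenate([np.zeros(4), -Xi, v * Xi]))
        rows.append(np.concatenate([Xi, np.zeros(4), -u * Xi]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)     # smallest right singular vector as M

def predict_corner(M, Xi):
    p = M @ Xi                      # project and dehomogenize
    return p[:2] / p[2]
```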
2.1 Inference
Our goal is to find the 2D corner locations p over the HOG grid of I that maximize the score given
in Equation (1). Note that exact inference is hard due to the global shape term. Therefore, we
propose a spanning tree approximation to the graph to obtain multiple initial solutions for possible
corner locations. Then we adjust the corner locations using randomized simple hill climbing.
For the initialization, it is important for the computation to be efficient since we need to evaluate all
possible detection windows in the image. Therefore, for simplicity and speed, we use a spanning
tree T to approximate the full graph G, as shown in Figure 2(b). In addition to the HOG feature as
a unary term, we use a popular pairwise spring term along the edges of the tree to establish weak
spatial constraints on the corners. We use the following scoring function for the initialization:
    S_T(I, p) = ∑_{i ∈ V} w_i^H · HOG(I, p_i) + ∑_{ij ∈ T} w_{ij}^D · Displacement2D(p_i, p_j)    (2)
Note that the model used for obtaining initial solutions is similar to [7, 27], which is only able
to handle a fixed viewpoint and 2D aspect ratio. Nonetheless, we use it since it meets our speed
requirement via dynamic programming and the distance transform [8].
With the tree approximation, we pick the top 1000 possible configurations of corner locations from
each image and optimize our scoring function by adjusting the corner locations using randomized
simple hill climbing. Given the initial corner locations for a single configuration, we iteratively
choose a random corner i with the goal of finding a new pixel location p?i that increases the scoring
function given in Equation (1) while holding the other corner locations fixed. We compute the scores
at neighboring pixel locations to the current setting pi . We also consider the pixel location that the
3D rigid model predicts when estimated from the other corner locations. We randomly choose one
of the locations and update pi if it yields a higher score. Otherwise, we choose another random
corner and location. The algorithm terminates when no corner can reach a location that improves
the score, which indicates that we have reached a local maximum.
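A minimal sketch of this refinement loop, with `score` standing in for the full model score of Equation (1) and a stale-move counter as a simplification of the "no corner can improve" termination test (all names here are ours, not the authors'):

    # Sketch of randomized simple hill climbing over corner locations.
    import random

    def hill_climb(p, score, neighbors, predict_3d, max_stale=100):
        """p: list of 2D corner locations. Returns a local maximum of score."""
        best = score(p)
        stale = 0
        while stale < max_stale:             # proxy for "no corner can improve"
            i = random.randrange(len(p))     # pick a random corner
            # candidate moves: pixel neighborhood plus the 3D-model prediction
            cands = neighbors(p[i]) + [predict_3d(p, i)]
            q = list(p)
            q[i] = random.choice(cands)
            s = score(q)
            if s > best:                     # accept only strict improvements
                p, best, stale = q, s, 0
            else:
                stale += 1
        return p, best

    def neighbors(pt):
        x, y = pt
        return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]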
During detection, since the edge and 3D shape terms are non-positive and the weights are constrained
to be positive, this allows us to upper-bound the scoring function and quickly reject candidate
locations without evaluating the entire function. Also, since only one corner can change locations at each
iteration, we can reuse the computed scoring function from previous iterations during hill climbing.
Figure 3: Illustration of the edge term in our model. Given line endpoints, we compute a Chamfer
matching score for pixels that lie on the line using the response from a Canny edge detector.
(Panels: input image; distance-transformed edge map; pixels covered by the line segment. The
dot-product of the pixel responses with the learned weight gives the edge term.)
Finally, we perform non-maximal suppression among the parts and then perform non-maximal suppression over the entire object to get the final detection result.
2.2 Learning
For learning, we first note that our scoring function in Equation (1) is linear in the weights w.
This allows us to use existing structured prediction procedures for learning. To learn the weights,
we adapt the structural SVM framework of [16]. Given positive training images with the 2D corner
locations labeled {I_n, p_n} and negative training images {I_n}, we wish to learn weights and bias
term w = (w^H, w^D, w^E, w^S, b) that minimizes the following structured prediction objective function:
    min_{w, ξ ≥ 0}  (1/2) w^⊤ w + C ∑_n ξ_n                                        (3)
    subject to   ∀ n ∈ pos:            w · Φ(I_n, p_n) ≥ 1 − ξ_n
                 ∀ n ∈ neg, ∀ p ∈ P:   w · Φ(I_n, p) ≤ −1 + ξ_n
where all appearance and spatial feature vectors are concatenated into the vector Φ(I_n, p) and P
is the set of all possible part locations. During training we constrain the weights w^D, w^E, w^S ≥
0.0001. We tried mining negatives from the wrong corner locations in the positive examples but
found that it did not improve the performance. We also tried latent positive mining and empirically
observed that it slightly helps. Since the latent positive mining helped, we also tried an offset
compensation as post-processing to obtain the offset of corner locations introduced during latent
positive mining. For this, we ran the trained detector on the training set to obtain the offsets and
used the mean to compensate for the location changes. However, we observed empirically that it did
not help performance.
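As a minimal sketch of how the objective in Equation (3) is evaluated for fixed weights (actual training uses a cutting-plane solver as in [16]; the feature containers and names below are our own):

    # phi_pos[n] holds Phi(I_n, p_n) for a positive image; phi_neg[n] holds a list
    # of Phi(I_n, p) over all candidate part locations p for a negative image.
    import numpy as np

    def objective(w, C, phi_pos, phi_neg):
        slack_pos = [max(0.0, 1.0 - w @ f) for f in phi_pos]           # margin >= 1
        slack_neg = [max(0.0, 1.0 + max(w @ f for f in fs))            # score <= -1
                     for fs in phi_neg]
        return 0.5 * w @ w + C * (sum(slack_pos) + sum(slack_neg))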
2.3 Discussion
Sliding window object detectors typically use a root filter that covers the entire object [4] or a
combination of root filter and part filters [7]. The use of a root filter is sufficient to capture the
appearance for many object categories since they have canonical 3D viewpoints and aspect ratios.
However, cuboids in general span a large number of object categories and do not have a consistent
3D viewpoint or aspect ratio. The diversity of 3D viewpoints and aspect ratios causes dramatic
changes in the root filter response. However, we have observed that the responses for the part filters
are less affected.
Moreover, we argue that a purely view-based approach that trains separate models for the different
viewpoints and aspect ratios may not capture well this diversity. For example, such a strategy would
require dividing the training data to train each model. In contrast, we train our model for all 3D
viewpoints and aspect ratios. We illustrate this in Figure 2, where detected parts are colored consistently in the figure. As our model handles different viewpoints and aspect ratios, we are able to
make use of the entire database during training.
Due to the diversity of cuboid appearance, our model is designed to capture the most salient features,
namely the corners and edges. While the corners and edges may be occluded (e.g. by self-occlusion,
(Figure 4 graphics: panel (a) shows the labeling tool; panel (b) plots elevation (-45 to 90 degrees)
against azimuth (0 to 45 degrees); panel (c) shows crops at azimuth angles of 1, 9, 18, 26, 37, and
43 degrees.)
Figure 4: Illustration of the labeling tool and 3D viewpoint statistics. (a) A cuboid being labeled
through the tool. A projection of the cuboid model is overlaid on the image and the user must
select and drag anchor points to their corresponding location in the image. (b) Scatter plot of 3D
azimuth and elevation angles for annotated cuboids with zenith angle close to zero. We perform an
image left/right swap to limit the rotation range. (c) Crops of cuboids at different azimuth angles for
a fixed elevation, with the shown examples marked as red points in the scatter plot of (b).
other objects in front, or cropping), for now we do not handle these cases explicitly in our model.
Furthermore, we do not make use of other appearance cues, such as the appearance within the cuboid
faces, since they have a larger variation across the object categories (e.g. dice and fire alarm trigger)
and may not generalize as well. We also take into account the tractability of our model as adding
additional appearance cues will increase the complexity of our model and the detector needs to be
evaluated over a large number of possible sliding windows in an image.
Compared with recent approaches that detect cuboids by reasoning about the shape of the entire
scene [9, 11, 12, 17, 19, 29], one of the key differences is that we detect cuboids directly without
consideration of the global scene geometry. These prior approaches rely heavily on the assumption
that the camera is located inside a cuboid-like room and held at human height, with the parameters
of the room cuboid inferred through vanishing points based on a Manhattan world assumption.
Therefore, they cannot handle outdoor scenes or close-up snapshots of an object (e.g. the boxes on
a shelf in row 1, column 3 of Figure 6). As our detector is agnostic to the scene geometry, we are
able to detect cuboids even when these assumptions are violated.
While previous approaches reason over rigid cuboids, our model is flexible in that it can adapt
to deformations of the 3D shape. We observe that not all cuboid-like objects are perfect cuboids
in practice. Deformations of the shape may arise due to the design of the object (e.g. the printer
in Figure 1), natural deformation or degradation of the object (e.g. a cardboard box), or a global
transformation of the image (e.g. camera radial distortion). We argue that modeling the deformations
is important in practice since a violation of the rigid constraints may make a 3D reconstruction-based approach numerically unstable. In our approach, we model the 3D deformation and allow the
structural SVM to learn based on the training data how to weight the importance of the 3D shape
term. Moreover, a rigid shape requires a perfect 3D reconstruction and it is usually done with nonlinear optimization [17], which is expensive to compute and becomes impractical in an exhaustive
sliding-window search in order to maintain a high recall rate. With our approach, if a rigid cuboid
is needed, we can recover the 3D shape parameters via camera resectioning, as shown in Figure 9.
3 Database of 3D cuboids
To develop and evaluate any models for 3D cuboid detection in real-world environments, it is necessary to have a large database of images depicting everyday scenes with 3D cuboids labeled. In
this work, we seek to build a database by manually labeling point correspondences between images
and 3D cuboids. We have built a labeling tool that allows a user to select and drag key points on
a projected 3D cuboid model to its corresponding location in the image. This is similar to existing
tools, such as Google building maker [14], which has been used to build 3D models of buildings for
maps. Figure 4(a) shows a screenshot of our tool. For the database, we have harvested images from
four sources: (i) a subset of the SUN database [25], which contains images depicting a large variety
of different scene categories, (ii) ImageNet synsets [5] with objects having one or more 3D cuboids
depicted, (iii) images returned from an Internet search using keywords for objects that are wholly or
partially described by 3D cuboids, and (iv) a set of images that we manually collected from our personal photographs. Given the corner correspondences, the parameters for the 3D cuboids and camera
are estimated. The cuboid and camera parameters are estimated up to a similarity transformation via
camera resectioning using Levenberg-Marquardt optimization [10].
Figure 5: Single top 3D cuboid detection in each image. Yellow: ground truth, green: correct
detection, red: false alarm. Bottom row - false positives. The false positives tend to occur when a
part fires on a "cuboid-like" corner region (e.g. row 3, column 5) or finds a smaller cuboid (e.g. the
Rubik's cube depicted in row 3, column 1).
Figure 6: All 3D cuboid detections above a fixed threshold in each image. Notice that our model is
able to detect the presence of multiple cuboids in an image (e.g. row 1, columns 2-5) and handles
partial occlusions (e.g. row 1, column 4), small objects, and a range of 3D viewpoints, aspect ratios,
and object classes. Moreover, the depicted scenes have varying amount of clutter. Yellow - ground
truth. Green - correct prediction. Red - false positive. Line thickness corresponds to detector
confidence.
For our database, we have 785 images with 1269 cuboids annotated. We have also collected a
negative set containing 2746 images that do not contain any cuboid-like objects. We perform an image
left/right swap to limit the rotation range. As a result, the min/max azimuth, elevation, and zenith
angles are 0/45, -90/90, -180/180 degrees respectively. In Figure 4(b) we show a scatter plot of the
azimuth and elevation angles for all of the labeled cuboids with zenith angle close to zero. Notice that
the cuboids cover a large range of azimuth angles for elevation angles between 0 (frontal view) and
45 degrees. We also show a number of cropped examples for a fixed elevation angle in Figure 4(c),
with their corresponding azimuth angles indicated by the red points in the scatter plot. Figure 8(c)
shows the distribution of objects from the SUN database [25] that overlap with our cuboids (there
are 326 objects total from 114 unique classes). Compared with [12], our database covers a larger set
of object and scene categories, with images focusing on both objects and scenes (all images in [12]
are indoor scene images). Moreover, we annotate objects closely resembling a 3D cuboid (in [12]
there are many non-cuboids that are annotated with a bounding cuboid) and overall our cuboids are
more accurately labeled.
4 Evaluation
In this section we show qualitative results of our model on the 3D cuboids database and report
quantitative results on two tasks: (i) 3D cuboid detection and (ii) corner localization accuracy. For
training and testing, we randomly split equally the positive and negative images. As discussed in
Section 3, there is rotational symmetry in the 3D cuboids. During training, we allow the image
(a)
(b)
(c)
Figure 7: Corner localization comparison for detected geometric primitives. (a) Input image and
ground truth annotation. (b) 2D tree-based initialization. (c) Our full model. Notice that our model
is able to better localize cuboid corners over the baseline 2D tree-based model, which corresponds
to 2D parts-based models used in object detection and articulated pose estimation [7, 27]. The last
column shows a failure case where a part fires on a "cuboid-like" corner region in the image.
to mirror left-right and orient the 3D cuboid to minimize the variation in rotational angle. During
testing, we run the detector on left-right mirrors of the image and select the output at each location
with the highest detector response. For the parts we extract HOG features [4] in a window centered at
each corner with scale of 10% of the object bounding box size. Figure 5 shows the single top cuboid
detection in each image and Figure 6 shows all of the most confident detections in the image. Notice
that our model is able to handle partial occlusions (e.g. row 1, column 4 of Figure 6), small objects,
and a range of 3D viewpoints, aspect ratios, and object classes. Moreover, the depicted scenes have
varying amount of clutter. We note that our model fails when a corner fires on a "cuboid-like" corner
region (e.g. row 3, column 5 of Figure 5).
We compare the various components of our model against two baseline approaches. The first baseline is a root HOG template [4] trained over the appearance within a bounding box covering the
entire object. A single model using the root HOG template is trained for all viewpoints and aspect ratios. During detection, output corner locations corresponding to the average training corner
locations relative to the bounding boxes are returned. The second baseline is the 2D tree-based
approximation of Equation (2), which corresponds to existing 2D parts models used in object detection and articulated pose estimation [7, 27]. Figure 7 shows a qualitative comparison of our model
against the 2D tree-based model. Notice that our model localizes well and often provides a tighter
fit to the image data than the baseline model.
We evaluate geometric primitive detection accuracy using the bounding box overlap criteria in the
Pascal VOC [6]. We report precision recall in Figure 8(a). We have observed that all of the cornerbased models achieve almost identical detection accuracy across all recall levels, and out-perform
the root HOG template detector [4]. This is expected as we initialize our full model with the output
of the 2D tree-based model and it generally does not drift too far from this initialization. This in
effect does not allow us to detect additional cuboids but allows for better part localization.
In addition to detection accuracy, we also measure corner localization accuracy for correctly detected
examples for a given model. A corner is deemed correct if its predicted image location is within t
pixels of the ground truth corner location. We set t to be 15% of the square root of the area of the
ground truth bounding box for the object. The reported trends in the corner localization performance
hold for nearby values of t. In Figure 8 we plot corner localization accuracy as a function of recall
and compare our model against the two baselines. Moreover, we report performance when either the
edge term or the 3D shape term is omitted from our model. Notice that our full model out-performs
the other baselines. Also, the additional edge and 3D shape terms provide a gain in performance
over using the appearance and 2D spatial terms alone. The edge term provides a slightly larger gain
in performance over the 3D shape term, but when integrated together consistently provides the best
performance on our database.
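A small helper implementing this localization criterion might look as follows (a sketch with our own names, assuming predicted and ground-truth corners are stored as arrays):

    # Corner localization criterion: a predicted corner counts as correct if it
    # lies within t pixels of ground truth, with t = 15% of sqrt(bbox area).
    import numpy as np

    def corners_correct(pred, gt, bbox, frac=0.15):
        """pred, gt: (7, 2) arrays of corner locations; bbox: (w, h) of the GT box."""
        t = frac * np.sqrt(bbox[0] * bbox[1])
        return np.linalg.norm(pred - gt, axis=1) <= t     # boolean per corner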
(Figure 8 graphics. (a) Cuboid detection, precision vs. recall; legend AUCs: Root Filter [0.16],
2D Tree Approximation [0.23], Full Model-Edge [0.26], Full Model-Shape [0.24], Full Model [0.24].
(b) Corner localization, reprojection accuracy (criterion = 0.150) vs. recall; legend AUCs: Root
Filter [0.25], 2D Tree Approximation [0.30], Full Model-Edge [0.37], Full Model-Shape [0.37],
Full Model [0.38]. (c) Object distribution: building (16/49), bed (15/22), cabinet (28/87), night
table (15/29), 97 categories (168/883), chest of drawers (10/10), box (9/18), desk (8/22), table
(8/26), CPU (7/8), stand (7/11), brick (5/5), cabinets (5/22), kitchen island (5/6), night table
occluded (5/12), refrigerator (5/8), stove (5/13), screen (5/16), others.)
Figure 8: Cuboid detection (precision vs. recall) and corner localization accuracy (accuracy vs.
recall). The area under the curve is reported in the plot legends. Notice that all of the corner-based
models achieve almost identical detection accuracy across all recall levels and out-perform the root
HOG template detector [4]. For the task of corner localization, our full model out-performs the
two baseline detectors, as well as the variants where either the Edge or Shape3D term is omitted from our model. (c)
Distribution of objects from the SUN database [25] that overlap with our cuboids. There are 326
objects total from 114 unique classes. The first number within the parentheses indicates the number
of instances in each object category that overlaps with a labeled cuboid, while the second number is
the total number of labeled instances for the object category within our dataset.
Figure 9: Detected cuboids and subsequent synthesized new views via camera resectioning.
5 Conclusion
We have introduced a novel model that detects 3D cuboids and localizes their corners in single-view
images. Our 3D cuboid detector makes use of both corner and edge information. Moreover, we
have constructed a dataset with ground truth cuboid annotations. Our detector handles different 3D
viewpoints and aspect ratios and, in contrast to recent approaches for 3D cuboid detection, does
not make any assumptions about the scene geometry and allows for deformation of the 3D cuboid
shape. As HOG is not invariant to viewpoint, we believe that part mixtures would allow the model
to be invariant to viewpoint. We believe our approach extends to other shapes, such as cylinders
and pyramids. Our work raises a number of (long-standing) issues that would be interesting to
address. For instance, which objects can be described by one or more geometric primitives and how
to best represent the compositionality of objects in general? By detecting geometric primitives, what
applications and systems can be developed to exploit this? Our dataset and source code is publicly
available at the project webpage: http://SUNprimitive.csail.mit.edu.
Acknowledgments: Jianxiong Xiao is supported by Google U.S./Canada Ph.D. Fellowship in Computer Vision. Bryan Russell was funded by the Intel Science and Technology Center for Pervasive Computing (ISTC-PC). This work is funded by ONR MURI N000141010933 and NSF Career
Award No. 0747120 to Antonio Torralba.
References
[1] I. Biederman. Recognition-by-components: a theory of human image understanding. Psychological Review, 94:115-147, 1987.
[2] J. E. Bresenham. Algorithm for computer control of a digital plotter. IBM Systems Journal, 4(1):25-30, 1965.
[3] J. F. Canny. A computational approach to edge detection. IEEE PAMI, 8(6):679-698, 1986.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[6] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The Pascal visual object classes (VOC) challenge. IJCV, 88(2):303-338, 2010.
[7] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. IEEE PAMI, 32(9), 2010.
[8] P. Felzenszwalb and D. Huttenlocher. Pictorial structures for object recognition. IJCV, 61(1), 2005.
[9] A. Gupta, S. Satkin, A. A. Efros, and M. Hebert. From 3D scene geometry to human workspace. In CVPR, 2011.
[10] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edition, 2004.
[11] V. Hedau, D. Hoiem, and D. Forsyth. Thinking inside the box: Using appearance models and context based on room geometry. In ECCV, 2010.
[12] V. Hedau, D. Hoiem, and D. Forsyth. Recovering free space of indoor scenes from a single image. In CVPR, 2012.
[13] D. Hoiem, A. Efros, and M. Hebert. Geometric context from a single image. In ICCV, 2005.
[14] http://sketchup.google.com, 2012.
[15] K. Ikeuchi and T. Suehiro. Toward an assembly plan from observation: Task recognition with polyhedral objects. In Robotics and Automation, 1994.
[16] T. Joachims, T. Finley, and C.-N. J. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1), 2009.
[17] D. C. Lee, A. Gupta, M. Hebert, and T. Kanade. Estimating spatial layout of rooms using volumetric reasoning about objects and surfaces. In NIPS, 2010.
[18] J. L. Mundy. Object recognition in the geometric era: A retrospective. In Toward Category-Level Object Recognition, volume 4170 of Lecture Notes in Computer Science, pages 3-29. Springer, 2006.
[19] L. D. Pero, J. C. Bowdish, D. Fried, B. D. Kermgard, E. L. Hartley, and K. Barnard. Bayesian geometric modelling of indoor scenes. In CVPR, 2012.
[20] L. Roberts. Machine perception of 3-D solids. PhD thesis, 1965.
[21] H. Wang, S. Gould, and D. Koller. Discriminative learning with latent variables for cluttered indoor scene understanding. In ECCV, 2010.
[22] J. Xiao, T. Fang, P. Tan, P. Zhao, E. Ofek, and L. Quan. Image-based façade modeling. In SIGGRAPH Asia, 2008.
[23] J. Xiao, T. Fang, P. Zhao, M. Lhuillier, and L. Quan. Image-based street-side city modeling. In SIGGRAPH Asia, 2009.
[24] J. Xiao and Y. Furukawa. Reconstructing the world's museums. In ECCV, 2012.
[25] J. Xiao, J. Hays, K. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010.
[26] J. Xiao, B. C. Russell, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. Basic level scene understanding: From labels to structure and beyond. In SIGGRAPH Asia, 2012.
[27] Y. Yang and D. Ramanan. Articulated pose estimation using flexible mixtures of parts. In CVPR, 2011.
[28] S. Yu, H. Zhang, and J. Malik. Inferring spatial layout from a single image via depth-ordered grouping. In IEEE Workshop on Perceptual Organization in Computer Vision, 2008.
[29] Y. Zhao and S.-C. Zhu. Image parsing with stochastic scene grammar. In NIPS, 2011.
4,246 | 4,843 | Nonparametric Reduced Rank Regression
Rina Foygel*,†, Michael Horrell†, Mathias Drton†,‡, John Lafferty†
* Department of Statistics, Stanford University
† Department of Statistics, University of Chicago
‡ Department of Statistics, University of Washington
Abstract
We propose an approach to multivariate nonparametric regression that generalizes
reduced rank regression for linear models. An additive model is estimated for each
dimension of a q-dimensional response, with a shared p-dimensional predictor
variable. To control the complexity of the model, we employ a functional form of
the Ky-Fan or nuclear norm, resulting in a set of function estimates that have low
rank. Backfitting algorithms are derived and justified using a nonparametric form
of the nuclear norm subdifferential. Oracle inequalities on excess risk are derived
that exhibit the scaling behavior of the procedure in the high dimensional setting.
The methods are illustrated on gene expression data.
1 Introduction
In the multivariate regression problem the objective is to estimate the conditional mean E(Y | X) =
m(X) = (m^1(X), ..., m^q(X))^⊤ where Y is a q-dimensional response vector and X is a
p-dimensional covariate vector. This is also referred to as multi-task learning in the machine learning
literature. We are given a sample of n iid pairs {(X_i, Y_i)} from the joint distribution of X and Y.
Under a linear model, the mean is estimated as m(X) = BX where B ∈ R^{q×p} is a q × p matrix
of regression coefficients. When the dimensions p and q are large relative to the sample size n, the
coefficients of B cannot be reliably estimated without further assumptions.
In reduced rank regression the matrix B is estimated under a rank constraint r = rank(B) ≤ C, so
that the rows or columns of B lie in an r-dimensional subspace of R^q or R^p. Intuitively, this implies
that the model is based on a smaller number of features than the ambient dimensionality p would
suggest, or that the tasks representing the components Y^k of the response are closely related. In low
dimensions, the constrained rank model can be computed as an orthogonal projection of the least
squares solution; but in high dimensions this is not well defined.
Recent research has studied the use of the nuclear norm as a convex surrogate for the rank constraint.
The nuclear norm ‖B‖_*, also known as the trace or Ky-Fan norm, is the sum of the singular values
of B. A rank constraint can be thought of as imposing sparsity, but in an unknown basis; the nuclear
norm plays the role of the ℓ1 norm in sparse estimation. Its use for low rank estimation problems
was proposed by Fazel in [2]. More recently, nuclear norm regularization in multivariate linear
regression has been studied by Yuan et al. [10], and by Negahban and Wainwright [4], who analyzed
the scaling properties of the procedure in high dimensions.
In this paper we study nonparametric parallels of reduced rank linear models. We focus our attention
on additive models, so that the regression function m(X) = (m^1(X), ..., m^q(X))^⊤ has each
component m^k(X) = ∑_{j=1}^p m^k_j(X_j) equal to a sum of p functions, one for each covariate. The objective
is then to estimate the q × p matrix of functions M(X) = [m^k_j(X_j)].
The first problem we address, in Section 2, is to determine a replacement for the regularization
penalty ‖B‖_* in the linear model. Because we must estimate a matrix of functions, the analogue of
the nuclear norm is not immediately apparent. We propose two related regularization penalties for
nonparametric low rank regression, and show how they specialize to the linear case. We then study,
in Section 4, the (infinite dimensional) subdifferential of these penalties. In the population setting,
this leads to stationary conditions for the minimizer of the regularized mean squared error. This
subdifferential calculus then justifies penalized backfitting algorithms for carrying out the optimization for a finite sample. Constrained rank additive models (CRAM) for multivariate regression are
analogous to sparse additive models (SpAM) for the case where the response is 1-dimensional [6]
(studied also in the reproducing kernel Hilbert space setting by [5]), but with the goal of recovering
a low-rank matrix rather than an entry-wise sparse vector. The backfitting algorithms we derive in
Section 5 are analogous to the iterative smoothing and soft thresholding backfitting algorithms for
SpAM proposed in [6]. A uniform bound on the excess risk of the estimator relative to an oracle
is given Section 6. This shows the statistical scaling behavior of the methods for prediction. The
analysis requires a concentration result for nonparametric covariance matrices in the spectral norm.
Experiments with gene data are given in Section 7, which are used to illustrate different facets of the
proposed nonparametric reduced rank regression techniques.
2 Nonparametric Nuclear Norm Penalization
We begin by presenting the penalty that we will use to induce nonparametric regression estimates
to be low rank. To motivate our choice of penalty and provide some intuition, suppose
that f^1(x), ..., f^q(x) are q smooth one-dimensional functions with a common domain. What
does it mean for this collection of functions to be low rank? Let x_1, x_2, ..., x_n be a collection
of points in the common domain of the functions. We require that the n × q matrix of function values
F(x_{1:n}) = [f^k(x_i)] is low rank. This matrix is of rank at most r < q for every set {x_i} of arbitrary
size n if and only if the functions {f^k} are r-linearly independent: each function can be written as
a linear combination of r of the other functions.
In the multivariate regression setting, but still assuming the domain is one-dimensional for simplicity
(q > 1 and p = 1), we have a random sample X_1, ..., X_n. Consider the n × q sample matrix
M = [m^k(X_i)] associated with a vector M = (m^1, ..., m^q) of q smooth (regression) functions,
and suppose that n > q. We would like for this to be a low rank matrix. This suggests the penalty
‖M‖_* = ∑_{s=1}^q σ_s(M) = ∑_{s=1}^q √(λ_s(M^⊤M)), where {λ_s(A)} denotes the eigenvalues of a symmetric
matrix A and {σ_s(B)} denotes the singular values of a matrix B. Now, assuming the columns of M
are centered, and E[m^k(X)] = 0 for each k, we recognize (1/n) M^⊤M as the sample covariance Σ̂(M)
of the population covariance Σ(M) := Cov(M(X)) = [E(m^k(X) m^l(X))]. This motivates the
following sample and population penalties, where A^{1/2} denotes the matrix square root:
    population penalty:  ‖Σ(M)^{1/2}‖_* = ‖Cov(M(X))^{1/2}‖_*        (2.1)
    sample penalty:      ‖Σ̂(M)^{1/2}‖_* = (1/√n) ‖M‖_*              (2.2)
With Y denoting the n × q matrix of response values for the sample (X_i, Y_i), this leads to the following population and empirical regularized risk functionals for low rank nonparametric regression:
    population penalized risk:  (1/2) E‖Y − M(X)‖_2^2 + λ‖Σ(M)^{1/2}‖_*          (2.3)
    empirical penalized risk:   (1/(2n)) ‖Y − M‖_F^2 + (λ/√n) ‖M‖_*              (2.4)
We recall that if A ⪰ 0 has spectral decomposition A = UDU^⊤ then A^{1/2} = UD^{1/2}U^⊤.
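A quick numerical check of identity (2.2), as a sketch in NumPy (variable names are ours):

    # For a centered n x q matrix M of function values, the nuclear norm of the
    # sample covariance square root equals (1/sqrt(n)) times the nuclear norm of M.
    import numpy as np

    n, q = 200, 5
    M = np.random.randn(n, q) @ np.random.randn(q, q)
    M -= M.mean(axis=0)                                    # center the columns
    cov = (M.T @ M) / n                                    # sample covariance
    lam, U = np.linalg.eigh(cov)
    sqrt_cov = U @ np.diag(np.sqrt(np.clip(lam, 0, None))) @ U.T
    lhs = np.linalg.svd(sqrt_cov, compute_uv=False).sum()  # ||Sigma-hat^(1/2)||_*
    rhs = np.linalg.svd(M, compute_uv=False).sum() / np.sqrt(n)
    print(np.allclose(lhs, rhs))                           # True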
3 Constrained Rank Additive Models (CRAM)
We now consider the case where X is p-dimensional. Throughout the paper we use superscripts to
denote indices of the q-dimensional response, and subscripts to denote indices of the p-dimensional
covariate. We consider the family of additive models, with regression functions of the form m(X) =
(m^1(X), ..., m^q(X))^⊤ = ∑_{j=1}^p M_j(X_j), where each term M_j(X_j) = (m^1_j(X_j), ..., m^q_j(X_j))^⊤ is
a q-vector of functions evaluated at X_j.
In this setting we propose two different penalties. The first penalty, intuitively, encourages the
vector (m^1_j(X_j), ..., m^q_j(X_j)) to be low rank, for each j. Assume that the functions m^k_j(X_j)
all have mean zero; this is required for identifiability in the additive model. As a shorthand, let
Σ_j = Σ(M_j) = Cov(M_j(X_j)) denote the covariance matrix of the j-th component functions, with
sample version Σ̂_j. The population and sample versions of the first penalty are then given by
    ‖Σ_1^{1/2}‖_* + ‖Σ_2^{1/2}‖_* + ... + ‖Σ_p^{1/2}‖_*                                    (3.1)
    ‖Σ̂_1^{1/2}‖_* + ‖Σ̂_2^{1/2}‖_* + ... + ‖Σ̂_p^{1/2}‖_* = (1/√n) ∑_{j=1}^p ‖M_j‖_*        (3.2)
The second penalty, intuitively, encourages the set of q vector-valued functions (m^k_1, m^k_2, ..., m^k_p)^⊤
to be low rank. This penalty is given by
    ‖(Σ_1^{1/2} ... Σ_p^{1/2})‖_*                                                    (3.3)
    ‖(Σ̂_1^{1/2} ... Σ̂_p^{1/2})‖_* = (1/√n) ‖M_{1:p}‖_*                               (3.4)
where, for convenience of notation, M_{1:p} = (M_1^⊤ ... M_p^⊤)^⊤ is an np × q matrix. The corresponding
population and empirical risk functionals, for the first penalty, are then
    (1/2) E‖Y − ∑_{j=1}^p M_j(X)‖_2^2 + λ ∑_{j=1}^p ‖Σ_j^{1/2}‖_*                  (3.5)
    (1/(2n)) ‖Y − ∑_{j=1}^p M_j‖_F^2 + (λ/√n) ∑_{j=1}^p ‖M_j‖_*                    (3.6)
and similarly for the second penalty.
Now suppose that each X_j is normalized so that E(X_j^2) = 1. In the linear case we have M_j(X_j) =
X_j B_j where B_j ∈ R^q. Let B = (B_1 ... B_p) ∈ R^{q×p}. Some straightforward calculation shows that
the penalties reduce to ∑_{j=1}^p ‖Σ_j^{1/2}‖_* = ∑_{j=1}^p ‖B_j‖_2 for the first penalty, and
‖(Σ_1^{1/2} ... Σ_p^{1/2})‖_* = ‖B‖_* for the second. Thus, in the linear case the first penalty is
encouraging B to be column-wise sparse, so that many of the B_j's are zero, meaning that X_j doesn't
appear in the fit. This is a version of the group lasso [11]. The second penalty reduces to the nuclear
norm regularization ‖B‖_* used for high-dimensional reduced-rank regression.
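These reductions are easy to verify numerically; a NumPy sketch (with our own variable names) follows. The second check uses the fact that stacking the roots Σ_j^{1/2} = B_j B_j^⊤/‖B_j‖_2 side by side gives a matrix G with G G^⊤ = B B^⊤, so G and B share singular values.

    import numpy as np

    q, p = 4, 6
    B = np.random.randn(q, p)
    # Penalty 1: Sigma_j = B_j B_j^T is rank one, so ||Sigma_j^{1/2}||_* = ||B_j||_2
    # and the penalty is the group-lasso sum of column norms.
    pen1 = sum(np.linalg.norm(B[:, j]) for j in range(p))
    print(np.isclose(pen1, np.linalg.norm(B, axis=0).sum()))              # True
    # Penalty 2: stack the roots side by side and take the nuclear norm.
    roots = [np.outer(B[:, j], B[:, j]) / np.linalg.norm(B[:, j]) for j in range(p)]
    G = np.hstack(roots)
    pen2 = np.linalg.svd(G, compute_uv=False).sum()
    print(np.isclose(pen2, np.linalg.svd(B, compute_uv=False).sum()))     # True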
4 Subdifferentials for Functional Matrix Norms
A key to deriving algorithms for functional low-rank regression is computation of the subdifferentials of the penalties. We are interested in (q × p)-dimensional matrices of functions F = [f^k_j]. For
each column index j and row index k, f^k_j is a function of a random variable X_j, and we will take
expectations with respect to X_j implicitly. We write F_j to mean the jth column of F, which is a
q-vector of functions of X_j. We define the inner product between two matrices of functions as
    ⟨F, G⟩ := ∑_{j=1}^p ∑_{k=1}^q E(f^k_j g^k_j) = ∑_{j=1}^p E(F_j^⊤ G_j) = tr(E(F G^⊤)),      (4.1)
and write ‖F‖_F^2 = ⟨F, F⟩. Note that ‖F‖_F = ‖√(E(F F^⊤))‖_F, where E(F F^⊤) = ∑_j E(F_j F_j^⊤) ⪰ 0
is a positive semidefinite q × q matrix.
We define two further norms on a matrix of functions F, namely,
    |||F|||_sp := √(‖E(F F^⊤)‖_sp) = ‖√(E(F F^⊤))‖_sp    and    |||F|||_* := ‖√(E(F F^⊤))‖_*,
where ‖A‖_sp is the spectral norm (operator norm), the largest singular value of A, and it is convenient
to write the matrix square root as √A = A^{1/2}. Each of the norms depends on F only through
E(F F^⊤). In fact, these two norms are dual: for any F,
    |||F|||_* = sup_{|||G|||_sp ≤ 1} ⟨G, F⟩,                                         (4.2)
where the supremum is attained by setting G = (√(E(F F^⊤)))^{−1} F, with A^{−1} denoting the matrix
pseudo-inverse.
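A finite-dimensional sanity check of this duality, replacing expectations by fixed matrices (a NumPy/SciPy sketch with our own names; pinv plays the role of the matrix pseudo-inverse):

    # With F fixed, G = (sqrt(F F^T))^+ F attains <G, F> = ||sqrt(F F^T)||_* and
    # satisfies |||G|||_sp <= 1, since G G^T is an orthogonal projection.
    import numpy as np
    from scipy.linalg import sqrtm

    F = np.random.randn(4, 7)
    S = sqrtm(F @ F.T).real                        # sqrt(E F F^T) analogue
    G = np.linalg.pinv(S) @ F
    print(np.isclose(np.trace(F @ G.T),
                     np.linalg.svd(S, compute_uv=False).sum()))   # True
    print(np.linalg.norm(G @ G.T, 2) <= 1 + 1e-8)  # |||G|||_sp^2 = ||G G^T||_sp <= 1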
Proposition 4.1. The subdifferential of |||F|||_* is the set
    S(F) := {(√(E(F F^⊤)))^{−1} F + H : |||H|||_sp ≤ 1, E(F H^⊤) = 0_{q×q}, E(F F^⊤)H = 0_{q×p} a.e.}.   (4.3)
Proof. The fact that S(F) contains the subdifferential ∂|||F|||_* can be proved by comparing our
setting (matrices of functions) to the ordinary matrix case; see [9, 7]. Here, we show the reverse
inclusion, S(F) ⊆ ∂|||F|||_*. Let D ∈ S(F) and let G be any element of the function space. We need
to show
    |||F + G|||_* ≥ |||F|||_* + ⟨G, D⟩,                                             (4.4)
where D = (√(E(F F^⊤)))^{−1} F + H =: F̃ + H for some H satisfying the conditions in (4.3) above.
Expanding the right-hand side of (4.4), we have
    |||F|||_* + ⟨G, D⟩ = |||F|||_* + ⟨G, F̃ + H⟩ = ⟨F + G, F̃ + H⟩ ≤ |||F + G|||_* |||D|||_sp,
where the second equality follows from |||F|||_* = ⟨F, F̃⟩, and the fact that ⟨F, H⟩ = tr(E(F H^⊤)) =
0. The inequality follows from the duality of the norms.
Finally, we show that |||D|||_sp ≤ 1. We have
    E(D D^⊤) = E(F̃ F̃^⊤) + E(F̃ H^⊤) + E(H F̃^⊤) + E(H H^⊤) = E(F̃ F̃^⊤) + E(H H^⊤),
where we use the fact that E(F H^⊤) = 0_{q×q}, implying E(F̃ H^⊤) = 0_{q×q}. Next, let E(F F^⊤) = V D V^⊤
be a reduced singular value decomposition, where D is a positive diagonal matrix of size q̃ × q̃.
Then E(F̃ F̃^⊤) = V V^⊤, and we have
    E(F F^⊤) · H = 0_{q×p} a.e.  ⟺  V^⊤ H = 0_{q̃×p} a.e.  ⟺  E(F̃ F̃^⊤) H = 0_{q×p} a.e.
This implies that E(F̃ F̃^⊤) · E(H H^⊤) = 0_{q×q} and so these two symmetric matrices have orthogonal
row spans and orthogonal column spans. Therefore,
    ‖E(D D^⊤)‖_sp = ‖E(F̃ F̃^⊤) + E(H H^⊤)‖_sp = max{‖E(F̃ F̃^⊤)‖_sp, ‖E(H H^⊤)‖_sp} ≤ 1,
where the last bound comes from the fact that |||F̃|||_sp, |||H|||_sp ≤ 1. Therefore |||D|||_sp ≤ 1.
This gives the subdifferential of penalty 2, defined in (3.3). We can view the first penalty update as
just a special case of the second penalty update. For penalty 1 in (3.1), if we are updating F_j and fix
all the other functions, we are now penalizing the norm
    |||F_j|||_* = ‖√(E(F_j F_j^⊤))‖_*,                                              (4.5)
which is clearly just a special case of penalty 2 with a single q-vector of functions instead of p
different q-vectors of functions. So, we have
    ∂|||F_j|||_* = {(√(E(F_j F_j^⊤)))^{−1} F_j + H_j : |||H_j|||_sp ≤ 1, E(F_j H_j^⊤) = 0, E(F_j F_j^⊤)H_j = 0 a.e.}.   (4.6)
5 Stationary Conditions and Backfitting Algorithms
Returning to the base case of p = 1 covariate, consider the population regularized risk optimization
    min_M { (1/2) E‖Y − M(X)‖_2^2 + λ|||M|||_* },                                   (5.1)
where M is a vector of q univariate functions. The stationary condition for this optimization is
    E(Y | X) = M(X) + λV(X) a.e. for some V ∈ ∂|||M|||_*.                           (5.2)
Define P(X) := E(Y | X).
CRAM Backfitting Algorithm (First Penalty)
Input: Data (X_i, Y_i), regularization parameter λ.
Initialize M̂_j = 0, for j = 1, ..., p.
Iterate until convergence:
  For each j = 1, ..., p:
    (1) Compute the residual: Z_j = Y − ∑_{k≠j} M̂_k(X_k);
    (2) Estimate P_j = E[Z_j | X_j] by smoothing: P̂_j = S_j Z_j;
    (3) Compute the SVD: (1/n) P̂_j P̂_j^⊤ = U diag(τ) U^⊤;
    (4) Soft threshold: M̂_j = U diag([1 − λ/√τ]_+) U^⊤ P̂_j;
    (5) Center: M̂_j ← M̂_j − mean(M̂_j).
Output: Component functions M̂_j and estimator M̂(X_i) = ∑_j M̂_j(X_{ij}).
Figure 1: The CRAM backfitting algorithm, using the first penalty, which penalizes each component.
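A minimal runnable sketch of this loop, assuming a simple Gaussian-kernel smoother for step (2); the names, the smoother choice, and the matrix orientation (function values as columns of n x q blocks) are our own conventions, not the authors' implementation:

    import numpy as np

    def smoother(x, h=0.3):
        """Row-normalized Gaussian kernel smoother matrix S with S @ Z ~ E[Z | x]."""
        K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        return K / K.sum(axis=1, keepdims=True)

    def soft_threshold(P, lam, n):
        """Steps (3)-(4): shrink the singular values of the q x q matrix P^T P / n."""
        tau, U = np.linalg.eigh(P.T @ P / n)
        shrink = np.clip(1 - lam / np.sqrt(np.clip(tau, 1e-12, None)), 0, None)
        return P @ U @ np.diag(shrink) @ U.T              # an n x q block

    def cram_backfit(X, Y, lam, iters=20):
        n, p = X.shape
        q = Y.shape[1]
        S = [smoother(X[:, j]) for j in range(p)]
        M = [np.zeros((n, q)) for _ in range(p)]
        for _ in range(iters):                            # fixed iteration budget
            for j in range(p):
                Z = Y - sum(M[k] for k in range(p) if k != j)   # step (1)
                P = S[j] @ Z                                    # step (2)
                Mj = soft_threshold(P, lam, n)                  # steps (3)-(4)
                M[j] = Mj - Mj.mean(axis=0)                     # step (5): center
        return M

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, (100, 3))
    Y = np.column_stack([np.sin(3 * X[:, 0]), 0.5 * np.sin(3 * X[:, 0]),
                         X[:, 1] ** 2]) + 0.1 * rng.standard_normal((100, 3))
    M = cram_backfit(X, Y, lam=0.05)
    print(sum(M).shape)                                   # (100, 3) fitted values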
Proposition 5.1. Let E(P P^⊤) = U diag(τ) U^⊤ be the singular value decomposition and define
    M = U diag([1 − λ/√τ]_+) U^⊤ P                                                  (5.3)
where [x]_+ = max(x, 0). Then M satisfies stationary condition (5.2), and is a minimizer of (5.1).
Proof. Assume the singular values are sorted as τ_1 ≥ τ_2 ≥ ... ≥ τ_q, and let r be the largest index such
that √τ_r > λ. Thus, M has rank r. Note that E(M M^⊤) = U diag([√τ − λ]_+^2) U^⊤, and therefore
    λ(√(E(M M^⊤)))^{−1} M = U diag(λ/√τ_{1:r}, 0_{q−r}) U^⊤ P                       (5.4)
where x_{1:k} = (x_1, ..., x_k) and c_k = (c, ..., c). It follows that
    M + λ(√(E(M M^⊤)))^{−1} M = U diag(1_r, 0_{q−r}) U^⊤ P.                         (5.5)
Now define
    H = (1/λ) U diag(0_r, 1_{q−r}) U^⊤ P                                            (5.6)
and take V = (√(E(M M^⊤)))^{−1} M + H. Then we have M + λV = P.
It remains to show that H satisfies the conditions of the subdifferential in (4.3). Since E(H H^⊤) =
U diag(0_r, τ_{r+1}/λ^2, ..., τ_q/λ^2) U^⊤ we have |||H|||_sp ≤ 1. Also, E(M H^⊤) = 0_{q×q} since
    diag(1 − λ/√τ_{1:r}, 0_{q−r}) diag(0_r, 1_{q−r}/λ) = 0_{q×q}.                   (5.7)
Similarly, E(M M^⊤)H = 0 since
    diag((√τ_{1:r} − λ)^2, 0_{q−r}) diag(0_r, 1_{q−r}/λ) = 0_{q×q}.                 (5.8)
It follows that V ∈ ∂|||M|||_*.
The analysis above justifies a backfitting algorithm for estimating a constrained rank additive model
with the first penalty, where the objective is
    min_{M_j} { (1/2) E‖Y − ∑_{j=1}^p M_j(X_j)‖_2^2 + λ ∑_{j=1}^p |||M_j|||_* }.     (5.9)
For a given coordinate j, we form the residual Z_j = Y − ∑_{k≠j} M_k, and then compute the projection
P_j = E(Z_j | X_j), with singular value decomposition E(P_j P_j^⊤) = U diag(τ) U^⊤. We then update
    M_j = U diag([1 − λ/√τ]_+) U^⊤ P_j                                              (5.10)
and proceed to the next variable. This is a Gauss-Seidel procedure that parallels the population
backfitting algorithm for SpAM [6]. In the sample version we replace the conditional expectation
P_j = E(Z_j | X_j) by a nonparametric linear smoother, P̂_j = S_j Z_j. The algorithm is given in Figure 1.
Note that to predict at a point x not included in the training set, the smoother matrices are constructed
using that point; that is, P̂_j(x_j) = S_j(x_j)^⊤ Z_j.
The algorithm for penalty 2 is similar. In step (3) of the algorithm in Figure 1 we compute the SVD
of (1/n) P̂_{1:p} P̂_{1:p}^⊤. Then, in step (4) we soft threshold according to
M̂_{1:p} = U diag([1 − λ/√τ]_+) U^⊤ P̂_{1:p}.
Both algorithms can be viewed as functional projected gradient descent procedures.
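A sketch of the penalty-2 update, assuming smoothed blocks stacked into one matrix (NumPy; our own names and orientation conventions):

    import numpy as np

    def soft_threshold(P, lam, n):
        tau, U = np.linalg.eigh(P.T @ P / n)
        shrink = np.clip(1 - lam / np.sqrt(np.clip(tau, 1e-12, None)), 0, None)
        return P @ U @ np.diag(shrink) @ U.T

    def cram_update_penalty2(P_blocks, lam, n):
        P = np.vstack(P_blocks)                # stack into an (n*p) x q matrix
        M = soft_threshold(P, lam, n)          # one joint singular-value shrinkage
        return np.split(M, len(P_blocks))      # recover the p blocks M-hat_j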
6 Excess Risk Bounds
The population risk of a q × p regression matrix M(X) = [M_1(X_1) ... M_p(X_p)] is
    R(M) = E‖Y − M(X)1_p‖_2^2,
with sample version denoted R̂(M). Consider all models that can be written as
    M(X) = U · D · V(X)^⊤
where U is an orthogonal q × r matrix, D is a positive diagonal matrix, and V(X) = [v_{js}(X_j)]
satisfies E(V^⊤V) = I_r. The population risk can be reexpressed as
    R(M) = tr{ (−I_q; DU^⊤)^⊤ E[(Y; V(X)^⊤1_p)(Y; V(X)^⊤1_p)^⊤] (−I_q; DU^⊤) }
         = tr{ (−I_q; DU^⊤)^⊤ ( Σ_YY  Σ_YV ; Σ_YV^⊤  Σ_VV ) (−I_q; DU^⊤) }
and similarly for the sample risk, with Σ̂_n(V) replacing Σ(V) := Cov((Y, V(X)^⊤1_p)) above. The
"uncontrollable" contribution to the risk, which does not depend on M, is R_u = tr{Σ_YY}. We can
express the remaining "controllable" risk as
    R_c(M) = R(M) − R_u = tr{ (−2I_q; DU^⊤)^⊤ Σ(V) (0_q; DU^⊤) }.
Using the von Neumann trace inequality, tr(AB) ≤ ‖A‖_p ‖B‖_{p'} where 1/p + 1/p' = 1,
    R_c(M) − R̂_c(M) ≤ ‖(−2I_q; DU^⊤)^⊤ (Σ(V) − Σ̂_n(V))‖_sp ‖(0_q; DU^⊤)‖_*
                     ≤ ‖(−2I_q; DU^⊤)‖_sp ‖Σ(V) − Σ̂_n(V)‖_sp ‖D‖_*
                     ≤ C max(2, ‖D‖_sp) ‖Σ(V) − Σ̂_n(V)‖_sp ‖D‖_*
                     ≤ C max{2, ‖D‖_*}^2 ‖Σ(V) − Σ̂_n(V)‖_sp                        (6.1)
where here and in the following C is a generic constant. For the last factor in (6.1), it holds that
    sup_V ‖Σ(V) − Σ̂_n(V)‖_sp ≤ C sup_V sup_{w∈N} w^⊤ (Σ(V) − Σ̂_n(V)) w
where N is a 1/2-covering of the unit (q + r)-sphere, which has size |N| ≤ 6^{q+r} ≤ 36^q; see [8]. We
now assume that the functions v_{sj}(x_j) are uniformly bounded from a Sobolev space of order two.
Specifically, let {φ_{jk} : k = 0, 1, ...} denote a uniformly bounded, orthonormal basis with respect to
L_2[0, 1], and assume that v_{sj} ∈ H_j where
    H_j = { f_j : f_j(x_j) = ∑_{k=0}^∞ a_{jk} φ_{jk}(x_j), ∑_{k=0}^∞ a_{jk}^2 k^4 ≤ K^2 }
for some 0 < K < ∞. The L_∞-covering number of H_j satisfies log N(H_j, ε) ≤ K/√ε.
Suppose that Y − E(Y | X) = W is Gaussian and the true regression function E(Y | X) is bounded.
Then the family of random variables Z_{(V,w)} := √n · w^⊤(Σ(V) − Σ̂_n(V))w is sub-Gaussian and
sample continuous. It follows from a result of Cesa-Bianchi and Lugosi [1] that
    E( sup_V sup_{w∈N} w^⊤(Σ(V) − Σ̂_n(V))w ) ≤ (B/√n) ∫_0^C √(q log(36) + log(pq) + K/√ε) dε
for some constant B. Thus, by Markov's inequality we conclude that
    sup_V ‖Σ(V) − Σ̂_n(V)‖_sp = O_P( √((q + log(pq))/n) ).                          (6.2)
If |||M|||_* = ‖D‖_* = o((n/(q + log(pq)))^{1/4}), then returning to (6.1), this gives us a bound on
R_c(M) − R̂_c(M) that is o_P(1). More precisely, we define a class of matrices of functions:
    M_n = { M : M(X) = U D V(X)^⊤, with E(V^⊤V) = I, v_{sj} ∈ H_j, ‖D‖_* = o((n/(q + log(pq)))^{1/4}) }.
n
?) ? inf R(M ) = R(M
?) ? R(
? M
?) ? (R(M? ) ? R(M
? ? )) + (R(
? M
?) ? R(M
? ? ))
R(M
M ?Mn
?) ? R(
? M
?)] ? [R(M? ) ? R(M
? ? )].
? [R(M
?u from each of the bracketed differences, we obtain that
Subtracting Ru ? R
?) ? inf R(M ) ? [Rc (M
?) ? R
?c (M
?)] ? [Rc (M? ) ? R
?c (M? )]
R(M
M ?Mn
?c (M )}
? 2 sup {Rc (M ) ? R
M ?Mn
by (6.1)
?
(6.2)
2
? n (V )? ) by =
O (?D?? ??(V ) ? ?
oP (1).
sp
This proves the following result.
? minimize the empirical risk 1 ?i ?Yi ? ?j Mj (Xij )?2 over the class Mn .
Proposition 6.1. Let M
2
n
Then
P
?) ? inf R(M ) D?
R(M
0.
M ?Mn
7 Application to Gene Expression Data
To illustrate the proposed nonparametric reduced rank regression techniques, we consider data on
gene expression in E. coli from the "DREAM 5 Network Inference Challenge"¹ [3]. In this challenge
genes were classified as transcription factors (TFs) or target genes (TGs). Transcription factors
regulate the target genes, as well as other TFs.
We focus on predicting the expression levels Y for a particular set of q = 27 TGs, using the expression levels X for p = 6 TFs. Our motivation for analyzing these 33 genes is that, according to the
gold standard gene regulatory network used for the DREAM 5 challenge, the 6 TFs form the parent
set common to two additional TFs, which have the 27 TGs as their child nodes. In fact, the two
intermediate nodes d-separate the 6 TFs and the 27 TGs in a Bayesian network interpretation of this
gold standard. This means that if we treat the gold standard as a causal network, then up to noise, the
functional relationship between X and Y is given by the composition of a map g : R^6 → R^2 and a
map h : R^2 → R^27. If g and h are both linear, their composition h ∘ g is a linear map of rank no more
than 2. As observed in Section 2, such a reduced rank linear model is a special case of an additive
model with reduced rank in the sense of penalty 2. More generally, if g is an additive function and h
is linear, then h ∘ g has rank at most 2 in the sense of penalty 2. Higher rank can in principle occur
¹ http://wiki.c2b2.columbia.edu/dream/index.php/D5c4
Penalty 1, λ = 20
Penalty 2, λ = 5
Figure 2: Left: Penalty 1 with large tuning parameter. Right: Penalty 2 with tuning parameter obtained through 10-fold cross-validation. Plotted points are residuals holding out the given predictor.
under functional composition, since even a univariate additive map h : R → R^q may have rank up to
q under our penalties (recall that penalties 1 and 2 coincide for univariate maps).
The backfitting algorithm of Figure 1 with penalty 1 and a rather aggressive choice of the tuning
parameter ? produces the estimates shown in Figure 2, for which we have selected three of the 27
TGs. Under such strong regularization, the 5th column of functions is rank zero and, thus, identically
zero. The remaining columns have rank one; the estimated fitted values are scalar multiples of one
another. We also see that scalings can be different for different columns. The third plot in the third
row shows a slightly negative slope, indicating a negative scaling for this particular estimate. The
remaining functions in this row are oriented similarly to the other rows, indicating the same, positive
scaling. This property characterizes the difference between penalties 1 and 2; in an application of
penalty 2, the scalings would have been the same across all functions in a given row.
Next, we illustrate a higher-rank solution for penalty 2. Choosing the regularization parameter ? by
ten-fold cross-validation gives a fit of rank 5, considerably lower than 27, the maximum possible
rank. Figure 2 shows a selection of three coordinates of the fitted functions. Under rank five, each
row of functions is a linear combination of up to five other, linearly independent rows. We remark
that the use of cross-validation generally produces somewhat more complex models than is necessary
to capture an underlying low-rank data-generating mechanism. Hence, if the causal relationships for
these data were indeed additive and low rank, then the true low rank might well be smaller than five.
8 Summary
This paper introduced two penalties that induce reduced rank fits in multivariate additive nonparametric regression. Under linearity, the penalties specialize to group lasso and nuclear norm penalties
for classical reduced rank regression. Examining the subdifferentials of each of these penalties, we
developed backfitting algorithms for the two resulting optimization problems that are based on softthresholding of singular values of smoothed residual matrices. The algorithms were demonstrated
on a gene expression data set constructed to have a naturally low-rank structure. We also provided a
persistence analysis that shows error tending to zero under a scaling assumption on the sample size
n and the dimensions q and p of the regression problem.
Acknowledgements
Research supported in part by NSF grants IIS-1116730, DMS-0746265, and DMS-1203762,
AFOSR grant FA9550-09-1-0373, ONR grant N000141210762, and an Alfred P. Sloan Fellowship.
References
[1] Nicolò Cesa-Bianchi and Gábor Lugosi. On prediction of individual sequences. The Annals of Statistics, 27(6):1865-1894, 1999.
[2] Maryam Fazel. Matrix rank minimization with applications. Technical report, Stanford University, 2002. Doctoral Dissertation, Electrical Engineering Department.
[3] D. Marbach, J. C. Costello, R. Küffner, N. Vega, R. J. Prill, D. M. Camacho, K. R. Allison, the DREAM5 Consortium, M. Kellis, J. J. Collins, and G. Stolovitzky. Wisdom of crowds for robust gene network inference. Nature Methods, 9(8):796-804, 2012.
[4] Sahand Negahban and Martin J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Annals of Statistics, 39:1069-1097, 2011.
[5] Garvesh Raskutti, Martin J. Wainwright, and Bin Yu. Minimax-optimal rates for sparse additive models over kernel classes via convex programming. arXiv:1008.3654, 2010.
[6] Pradeep Ravikumar, John Lafferty, Han Liu, and Larry Wasserman. Sparse additive models. Journal of the Royal Statistical Society, Series B, Methodological, 71(5):1009-1030, 2009.
[7] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum rank solutions to linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471-501, 2010.
[8] Roman Vershynin. How close is the sample covariance matrix to the actual covariance matrix? arXiv:1004.3484, 2010.
[9] G. A. Watson. Characterization of the subdifferential of some matrix norms. Linear Algebra and its Applications, 170:1039-1053, 1992.
[10] Ming Yuan, Ali Ekici, Zhaosong Lu, and Renato Monteiro. Dimension reduction and coefficient estimation in multivariate linear regression. J. R. Statist. Soc. B, 69(3):329-346, 2007.
[11] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49-67, 2006.
4,247 | 4,844 | A new metric on the manifold of kernel matrices with
application to matrix geometric means
Suvrit Sra
Max Planck Institute for Intelligent Systems
72076 Tübingen, Germany
[email protected]
Abstract
Symmetric positive definite (spd) matrices pervade numerous scientific disciplines, including machine learning and optimization. We consider the key task
of measuring distances between two spd matrices; a task that is often nontrivial whenever the distance function must respect the non-Euclidean geometry of spd matrices. Typical non-Euclidean distance measures, such as the Riemannian metric $\delta_R(X, Y) = \|\log(Y^{-1/2} X Y^{-1/2})\|_F$, are computationally demanding and also complicated to use. To allay some of these difficulties, we introduce a new metric on spd matrices, which not only respects non-Euclidean geometry but also offers faster computation than $\delta_R$ while being less complicated to use. We support our claims theoretically by listing a set of theorems that relate our metric to $\delta_R(X, Y)$, and experimentally by studying the nonconvex problem of computing matrix geometric means based on squared distances.
1 Introduction
Symmetric positive definite (spd) matrices^1 are remarkably pervasive in a multitude of areas, especially machine learning and optimization. Several applications in these areas require an answer to the fundamental question: how to measure a distance between two spd matrices?
This question arises, for instance, when optimizing over the set of spd matrices. To judge convergence of an optimization procedure or in the design of algorithms we may need to compute distances between spd matrices [1-3]. As a more concrete example, suppose we wish to retrieve from a large database of spd matrices the "closest" spd matrix to an input query. The quality of such a retrieval depends crucially on the distance function used to measure closeness; a choice that also dramatically impacts the actual search algorithm itself [4, 5]. Another familiar setting is that of computing statistical metrics for multivariate Gaussian distributions [6], or more recently, quantum statistics [7]. Several other applications depend on being able to effectively measure distances between spd matrices; see e.g., [8-10] and references therein.
In many of these domains, viewing spd matrices as members of a Euclidean vector space is insufficient, and the non-Euclidean geometry conferred by a suitable metric is of great importance. Indeed, the set of (strict) spd matrices forms a differentiable Riemannian manifold [11, 10] that is perhaps the most studied example of a manifold of nonpositive curvature [12; Ch.10]. These matrices also form a convex cone, and the set of spd matrices in fact serves as a canonical higher-rank symmetric space [13]. The conic view is of great importance in convex optimization [14-16], symmetric spaces are important in algebra and analysis [13, 17], and in optimization [14, 18], while the manifold and other views are also widely important; see e.g., [11; Ch.6] for an overview.
^1 We could equally consider Hermitian matrices, but for simplicity we consider only real matrices.
The starting point for this paper is the manifold view. For space reasons, we limit our discussion to $P_n$ as a Riemannian manifold, noting that most of the discussion could also be set in terms of Finsler manifolds. But before we go further, let us fix basic notation.
Notation. Let $S_n$ denote the set of $n \times n$ real symmetric matrices. A matrix $A \in S_n$ is called positive (we drop the word "definite" for brevity) if
$$\langle x, Ax \rangle > 0 \quad \text{for all } x \neq 0; \quad \text{also denoted as } A > 0. \quad (1)$$
We denote the set of $n \times n$ positive matrices by $P_n$. If only the non-strict inequality $\langle x, Ax \rangle \ge 0$ holds (for all $x \in \mathbb{R}^n$) we say $A$ is positive semidefinite; this is also denoted as $A \succeq 0$. For two matrices $A, B \in S_n$, the operator inequality $A \succeq B$ means that the difference $A - B \succeq 0$. The Frobenius norm of a matrix $X \in \mathbb{R}^{m \times n}$ is defined as $\|X\|_F = \sqrt{\mathrm{tr}(X^T X)}$, while $\|X\|$ denotes the standard operator norm. For an analytic function $f$ on $\mathbb{C}$, and a diagonalizable matrix $A = U \Lambda U^T$, $f(A) := U f(\Lambda) U^T$. Let $\lambda(X)$ denote the vector of eigenvalues of $X$ (in any order) and $\mathrm{Eig}(X)$ a diagonal matrix that has $\lambda(X)$ as its diagonal. We also use $\lambda^\downarrow(X)$ to denote a sorted (in descending order) version of $\lambda(X)$, and $\lambda^\uparrow(X)$ is defined likewise. Finally, we define $\mathrm{Eig}^\downarrow(X)$ and $\mathrm{Eig}^\uparrow(X)$ as the corresponding diagonal matrices.
Background. The set $P_n$ is a canonical higher-rank symmetric space that is actually an open set within $S_n$, and thereby a differentiable manifold of dimension $n(n+1)/2$. The tangent space at a point $A \in P_n$ can be identified with $S_n$, so a suitable inner-product on $S_n$ leads to the Riemannian distance on $P_n$ [11; Ch.6]. At the point $A$ this metric is induced by the differential form
$$ds^2 = \|A^{-1/2}\, dA\, A^{-1/2}\|_F^2 = \mathrm{tr}(A^{-1}\, dA\, A^{-1}\, dA). \quad (2)$$
For $A, B \in P_n$, it is known that there is a unique geodesic joining them, given by [11; Thm.6.1.6]:
$$\gamma(t) := A \sharp_t B := A^{1/2}(A^{-1/2} B A^{-1/2})^t A^{1/2}, \quad 0 \le t \le 1, \quad (3)$$
and its midpoint $\gamma(1/2)$ is the geometric mean of $A$ and $B$. The associated Riemannian metric is
$$\delta_R(A, B) := \|\log(A^{-1/2} B A^{-1/2})\|_F, \quad \text{for } A, B > 0. \quad (4)$$
From definition (4) it is apparent that computing $\delta_R$ will be computationally demanding, and requires care. Indeed, to compute (4) we must essentially compute generalized eigenvalues of $A$ and $B$. For an application that must repeatedly compute distances between numerous pairs of matrices this computational burden can be excessive [4]. Driven by such computational concerns, Cherian et al. [4] introduced a symmetrized "log-det" based matrix divergence:
$$J(A, B) = \log\det\tfrac{A+B}{2} - \tfrac{1}{2}\log\det(AB) \quad \text{for } A, B > 0. \quad (5)$$
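To make the computational contrast concrete, here is a minimal numerical sketch (our own illustration, not from the paper; all function names are ours): $\delta_R$ requires generalized eigenvalues, whereas $\sqrt{J}$ needs only Cholesky-based determinants.

```python
import numpy as np
from scipy.linalg import cholesky, eigh

rng = np.random.default_rng(0)

def random_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

def delta_R(A, B):
    # Riemannian metric: delta_R^2 equals the sum of squared logs of the
    # generalized eigenvalues of the pencil (A, B).
    w = eigh(A, B, eigvals_only=True)
    return np.sqrt(np.sum(np.log(w) ** 2))

def delta_ld(A, B):
    # sqrt of J(A, B) = log det((A+B)/2) - (1/2) log det(AB);
    # only triangular factorizations are needed, no eigenvalues.
    logdet = lambda M: 2.0 * np.sum(np.log(np.diag(cholesky(M))))
    return np.sqrt(logdet((A + B) / 2) - 0.5 * (logdet(A) + logdet(B)))

A, B = random_spd(50), random_spd(50)
print(delta_R(A, B), delta_ld(A, B))
```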
This divergence was used as a proxy for $\delta_R$, and it was observed that $J(A, B)$ offers the same level of performance on a difficult nearest neighbor retrieval task as $\delta_R$, while being many times faster! Among other reasons, a large part of their speedup was attributed to the avoidance of eigenvalue computations for obtaining $J(A, B)$ or its derivatives, a luxury that $\delta_R$ does not permit. Independently, Chebbi and Moahker [2] also introduced a slightly generalized version of (5) and studied some of its properties, especially computation of "centroids" of positive matrices using their matrix divergence.
Interestingly, Cherian et al. [4] claimed that $\sqrt{J(A, B)}$ might not be a metric, whereas Chebbi and Moahker [2] conjectured that $\sqrt{J(A, B)}$ is a metric. We resolve this uncertainty and prove that $\sqrt{J(A, B)}$ is indeed a metric, albeit not one that embeds isometrically into a Hilbert space.
Due to space constraints, we only summarily mention several of the properties that this metric satisfies, primarily to help develop intuition that motivates $\sqrt{J}$ as a good proxy for the Riemannian metric $\delta_R$. We apply these insights to study "matrix geometric means" of a set of positive matrices: a problem also studied in [4, 2]. Both cited papers have some gaps in their claims, which we fill by proving that even though computing the geometric mean is a nonconvex problem, we can still compute it efficiently and optimally.
2 The $\delta_{\ell d}$ metric
The main result of this paper is Theorem 1.
Theorem 1. Let $J$ be as in (5), and define $\delta_{\ell d} := \sqrt{J}$. Then, $\delta_{\ell d}$ is a metric on $P_n$.
Our proof of Theorem 1 depends on several key steps. Due to restrictions on space we cannot include full proofs of all the results, and refer the reader to the longer article [19] instead. We do, however, provide sketches for the crucial steps in our proof.
Proposition 2. Let $A, B, C \in P_n$. Then, (i) $\delta_{\ell d}(I, A) = \delta_{\ell d}(I, \mathrm{Eig}(A))$; (ii) for $P, Q \in \mathrm{GL}(n, \mathbb{C})$, $\delta_{\ell d}(PAQ, PBQ) = \delta_{\ell d}(A, B)$; (iii) for $X \in \mathrm{GL}(n, \mathbb{C})$, $\delta_{\ell d}(X^* A X, X^* B X) = \delta_{\ell d}(A, B)$; (iv) $\delta_{\ell d}(A, B) = \delta_{\ell d}(A^{-1}, B^{-1})$; (v) $\delta_{\ell d}(A \otimes B, A \otimes C) = \sqrt{n}\,\delta_{\ell d}(B, C)$, where $\otimes$ denotes the Kronecker or tensor product.
The first crucial result is that for positive scalars, $\delta_{\ell d}$ is indeed a metric. To prove this, recall the notion of negative definite functions (Def. 3), and a related classical result of Schoenberg (Thm. 4).
Definition 3 ([20; Def. 1.1]). Let $\mathcal{X}$ be a nonempty set. A function $\psi : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is said to be negative definite if for all $x, y \in \mathcal{X}$ it is symmetric ($\psi(x, y) = \psi(y, x)$), and satisfies the inequality
$$\sum\nolimits_{i,j=1}^n c_i c_j \psi(x_i, x_j) \le 0, \quad (6)$$
for all integers $n \ge 2$, and subsets $\{x_i\}_{i=1}^n \subseteq \mathcal{X}$, $\{c_i\}_{i=1}^n \subseteq \mathbb{R}$ with $\sum_{i=1}^n c_i = 0$.
Theorem 4 ([20; Prop. 3.2, Chap. 3]). Let $\psi : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be negative definite. Then, there is a Hilbert space $\mathcal{H} \subseteq \mathbb{R}^{\mathcal{X}}$ and a mapping $x \mapsto \varphi(x)$ from $\mathcal{X} \to \mathcal{H}$ such that we have the equality
$$\|\varphi(x) - \varphi(y)\|_{\mathcal{H}}^2 = \psi(x, y) - \tfrac{1}{2}\big(\psi(x, x) + \psi(y, y)\big). \quad (7)$$
Moreover, negative definiteness of $\psi$ is necessary for such a mapping to exist.
?
Theorem 5 (Scalar case). Define ?s2 (x, y) := log[(x + y)/(2 xy)] for scalars x, y > 0. Then,
?s (x, y) ? ?s (x, z) + ?s (y, z) for all x, y, z > 0.
(8)
Proof. We show that ?(x, y) = log x+y
is negative definite. Since ?s2 (x, y) = ?(x, y) ?
2
1
2 (?(x, x)+?(y, y)), Thm. 4 then implies the triangle inequality (8). To prove ? is negative definite,
by [Thm. 2.2, Chap. 3, 20] we may equivalently show that e???(x,y) = ((x + y)/2)?? is a positive
definite function for ? > 0, and all x, y > 0. To that end, it suffices to show that the matrix
H = [hij ] = (xi + xj )?? , 1 ? i, j ? n,
n
is positive definite for every integer n ? 1, and positive numbers {xi }i=1 . Now, observe that
Z ?
1
1
hij =
=
e?t(xi +xj ) t??1 dt,
(9)
(xi + xj )?
?(?) 0
R?
??1
where ?(?) = 0 e?t t??1 dt is the well-known Gamma function. Thus, with fi (t) = e?txi t 2 ?
L2 ([0, ?)), we see that [hij ] equals the Gram matrix [hfi , fj i], whereby H > 0.
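A quick numerical sanity check of this Gram-matrix argument (our own illustration, not part of the paper): the matrix $[(x_i + x_j)^{-\beta}]$ should have only positive eigenvalues for any $\beta > 0$ and positive scalars $x_i$.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.1, 10.0, size=8)            # positive scalars x_1, ..., x_n
for beta in (0.3, 1.0, 2.5):
    H = (x[:, None] + x[None, :]) ** (-beta)
    # smallest eigenvalue is positive (possibly tiny, up to round-off)
    print(beta, np.linalg.eigvalsh(H).min())
```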
Using Thm. 5 we obtain the following simple but important "Minkowski" inequality for $\delta_s$.
Corollary 6. Let $x, y, z$ be entrywise positive vectors in $\mathbb{R}^n$, and let $p \ge 1$. Then,
$$\Big(\sum_{i=1}^n \delta_s^p(x_i, y_i)\Big)^{1/p} \le \Big(\sum_{i=1}^n \delta_s^p(x_i, z_i)\Big)^{1/p} + \Big(\sum_{i=1}^n \delta_s^p(y_i, z_i)\Big)^{1/p}. \quad (10)$$
Corollary 7. Let $X, Y, Z > 0$ be diagonal matrices. Then,
$$\delta_{\ell d}(X, Y) \le \delta_{\ell d}(X, Z) + \delta_{\ell d}(Y, Z). \quad (11)$$
Next, we recall a fundamental determinantal inequality.
Theorem 8 ([21; Exercise VI.7.2]). Let $A, B \in P_n$. Then,
$$\prod_{i=1}^n \big(\lambda_i^\downarrow(A) + \lambda_i^\downarrow(B)\big) \le \det(A + B) \le \prod_{i=1}^n \big(\lambda_i^\downarrow(A) + \lambda_i^\uparrow(B)\big). \quad (12)$$
Corollary 9. Let $A, B > 0$. Then,
$$\delta_{\ell d}\big(\mathrm{Eig}^\downarrow(A), \mathrm{Eig}^\downarrow(B)\big) \le \delta_{\ell d}(A, B) \le \delta_{\ell d}\big(\mathrm{Eig}^\downarrow(A), \mathrm{Eig}^\uparrow(B)\big).$$
The final result that we need is a well-known fact from linear algebra (our own proof is in [19]).
Lemma 10 ([e.g., 22; p.58]). Let $A > 0$, and let $B$ be Hermitian. There is a matrix $P$ for which
$$P^* A P = I, \quad \text{and} \quad P^* B P = D, \quad \text{where } D \text{ is diagonal.} \quad (13)$$
With all these theorems and lemmas in hand, we are now finally ready to prove Thm. 1.
Proof (Theorem 1). We must prove that $\delta_{\ell d}$ is symmetric, nonnegative, definite, and that it satisfies the triangle inequality. Symmetry is immediate from the definition. Nonnegativity and definiteness follow from the strict log-concavity (on $P_n$) of the determinant, whereby
$$\det\tfrac{X+Y}{2} \ge \det(X)^{1/2} \det(Y)^{1/2},$$
with equality iff $X = Y$, which in turn implies that $\delta_{\ell d}(X, Y) \ge 0$ with equality iff $X = Y$. The only hard part is to prove the triangle inequality, a result that has eluded previous attempts [4, 2].
Let $X, Y, Z > 0$ be arbitrary. From Lemma 10 we know that there is a matrix $P$ such that $P^* X P = I$ and $P^* Y P = D$. Since $Z > 0$ is arbitrary, and congruence preserves positive definiteness, we may write just $Z$ instead of $P^* Z P$. Also, since $\delta_{\ell d}(P^* X P, P^* Y P) = \delta_{\ell d}(X, Y)$ (see Prop. 2), proving the triangle inequality reduces to showing that
$$\delta_{\ell d}(I, D) \le \delta_{\ell d}(I, Z) + \delta_{\ell d}(D, Z). \quad (14)$$
Consider now the diagonal matrices $D^\downarrow$ and $\mathrm{Eig}^\downarrow(Z)$. Corollary 7 asserts the inequality
$$\delta_{\ell d}(I, D^\downarrow) \le \delta_{\ell d}(I, \mathrm{Eig}^\downarrow(Z)) + \delta_{\ell d}(D^\downarrow, \mathrm{Eig}^\downarrow(Z)). \quad (15)$$
Prop. 2(i) implies that $\delta_{\ell d}(I, D) = \delta_{\ell d}(I, D^\downarrow)$ and $\delta_{\ell d}(I, Z) = \delta_{\ell d}(I, \mathrm{Eig}^\downarrow(Z))$, while Cor. 9 shows that $\delta_{\ell d}(D^\downarrow, \mathrm{Eig}^\downarrow(Z)) \le \delta_{\ell d}(D, Z)$. Combining these inequalities, we obtain (14), as desired.
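As an informal cross-check of Theorem 1 (our own sketch, reusing delta_ld from the earlier snippet), the triangle inequality holds on random triples of spd matrices up to round-off:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + np.eye(n)

for _ in range(1000):
    X, Y, Z = random_spd(4), random_spd(4), random_spd(4)
    # delta_ld as defined in the earlier sketch
    assert delta_ld(X, Y) <= delta_ld(X, Z) + delta_ld(Z, Y) + 1e-9
```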
Although the metric space $(P_n, \delta_{\ell d})$ has numerous fascinating properties, due to space concerns we do not discuss it further. Instead we discuss a connection more important to machine learning and related areas: kernel functions arising from $\delta_{\ell d}$. Indeed, some of these connections (e.g., Thm. 11) have already been successfully applied very recently in computer vision [23].
2.1 Hilbert space embedding of $\delta_{\ell d}$
Theorem 1 shows that $\delta_{\ell d}$ is a metric, and Theorem 5 shows that actually for positive scalars, the metric space $(\mathbb{R}_{++}, \delta_s)$ embeds isometrically into a Hilbert space. It is, therefore, natural to ask whether $(P_n, \delta_{\ell d})$ also admits such an embedding?
Theorem 4 says that such a kernel exists if and only if $\delta_{\ell d}^2$ is negative definite; equivalently, iff
$$e^{-\beta \delta_{\ell d}^2(X, Y)} = \frac{\det(XY)^{\beta/2}}{\det((X+Y)/2)^{\beta}} \quad (16)$$
is a positive definite kernel for all $\beta > 0$. To verify this, it suffices to check if the matrix
$$H_\beta = [h_{ij}] := \Big[\frac{1}{\det(X_i + X_j)^{\beta}}\Big], \quad 1 \le i, j \le m, \quad (17)$$
is positive for every integer $m \ge 1$ and arbitrary positive matrices $X_1, \ldots, X_m$.
Unfortunately, a numerical experiment (see [19]) reveals that $H_\beta$ is not always positive. This implies that $(P_n, \delta_{\ell d})$ cannot embed isometrically into a Hilbert space. Undeterred, we still ask: For what choices of $\beta$ is $H_\beta$ positive? Surprisingly, this question admits a complete answer. Theorem 11 characterizes the values of $\beta$ necessary and sufficient for $H_\beta$ to be positive. We note here that the case $\beta = 1$ was essentially treated in [24], in the context of semigroup kernels on measures.
Theorem 11. Let $X_1, \ldots, X_m \in P_n$. The matrix $H_\beta$ defined by (17) is positive, if and only if
$$\beta \in \big\{\tfrac{j}{2} : j \in \mathbb{N},\ 1 \le j \le (n-1)\big\} \cup \big\{\beta : \beta \in \mathbb{R},\ \beta > \tfrac{1}{2}(n-1)\big\}. \quad (18)$$
Proof. We first prove the "if" part. Define the functions $f_i := \pi^{-n/4} e^{-x^T X_i x}$ (for $1 \le i \le m$). Then, $f_i \in L^2(\mathbb{R}^n)$, where the inner-product is given by the Gaussian integral
$$\langle f_i, f_j \rangle := \frac{1}{\pi^{n/2}} \int_{\mathbb{R}^n} e^{-x^T (X_i + X_j) x}\, dx = \frac{1}{\det(X_i + X_j)^{1/2}}. \quad (19)$$
From (19) it follows that $H_{1/2}$ is positive. Since the Schur (elementwise) product of two positive matrices is again positive, it follows that $H_\beta > 0$ whenever $\beta$ is an integer multiple of $1/2$. To extend the result to all $\beta$ covered by (18), we need a more intricate integral representation, namely the multivariate Gamma function, defined as [25; §2.1.2]
$$\Gamma_n(\beta) := \int_{P_n} e^{-\mathrm{tr}(A)} \det(A)^{\beta - (n+1)/2}\, dA, \quad (20)$$
where the integral converges for $\beta > \tfrac{1}{2}(n-1)$. Define for each $i$ the function $f_i := c\, e^{-\mathrm{tr}(A X_i)}$ ($c > 0$ is a constant). Then, $f_i \in L^2(P_n)$, which we equip with the inner product
$$\langle f_i, f_j \rangle := c^2 \int_{P_n} e^{-\mathrm{tr}(A(X_i + X_j))} \det(A)^{\beta - (n+1)/2}\, dA = \det(X_i + X_j)^{-\beta},$$
and it exists whenever $\beta > \tfrac{1}{2}(n-1)$. Consequently, $H_\beta$ is positive for all $\beta$ defined by (18).
The "only if" part follows from deeper results in the rich theory of symmetric spaces.^2 Specifically, since $P_n$ is a symmetric cone, and $1/\det(X)$ is a decreasing function on this cone (i.e., $1/\det(X+Y) \le 1/\det(X)$ for all $X, Y > 0$), an appeal to [26; VII.3.1] grants our claim.
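An empirical illustration of Theorem 11 (our own sketch, not from the paper): for $n = 2$ the admissible set (18) is $[1/2, \infty)$, so $H_\beta$ may lose positive semidefiniteness for $\beta = 0.1$ but not for $\beta \ge 1/2$.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + 0.1 * np.eye(n)

Xs = [random_spd(2) for _ in range(40)]       # n = 2, m = 40
for beta in (0.1, 0.5, 2.0):
    H = np.array([[np.linalg.det(Xi + Xj) ** (-beta) for Xj in Xs]
                  for Xi in Xs])
    print(beta, np.linalg.eigvalsh(H).min())
# beta = 0.1 typically yields a clearly negative smallest eigenvalue,
# while beta = 0.5 and beta = 2.0 stay nonnegative up to round-off.
```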
Remark 12. Readers versed in stochastic processes will recognize that the above result provides a different perspective on a classical result concerning infinite divisibility of Wishart processes [27], where the set (18) also arises as a consequence of Gindikin's theorem [28].
At this point, it is worth mentioning the following "obvious" result.
Theorem 13. Let $\mathcal{X}$ be a set of positive matrices that commute with each other. Then, $(\mathcal{X}, \delta_{\ell d})$ can be isometrically embedded into some Hilbert space.
Proof. The proof follows because a commuting set of matrices can be simultaneously diagonalized, and for diagonal matrices, $\delta_{\ell d}^2(X, Y) = \sum_i \delta_s^2(X_{ii}, Y_{ii})$, which is a nonnegative sum of negative definite kernels and is therefore itself negative definite.
3 Connections between $\delta_{\ell d}$ and $\delta_R$
After showing that $\delta_{\ell d}$ is a metric and studying its relation to kernel functions, let us now return to our original motivation: introducing $\delta_{\ell d}$ as a reasonable alternative to the widely used Riemannian metric $\delta_R$. We note here that Cherian et al. [4; 29] offer strong experimental evidence supporting $\delta_{\ell d}$ as an alternative; we offer more theoretical results.
Our theoretical results are based around showing that $\delta_{\ell d}$ fulfills several properties akin to those displayed by $\delta_R$. Due to lack of space, we present only a summary of our results in Table 1, and cite the corresponding theorems in the longer article [19] for proofs. While the actual proofs are valuable and instructive, the key message worth noting is: both $\delta_R$ and $\delta_{\ell d}$ express the (negatively curved) non-Euclidean geometry of their respective metric spaces by displaying similar properties.
4 Application: computing geometric means
In this section we turn our attention to an object that perhaps connects $\delta_R$ and $\delta_{\ell d}$ most intimately: the operator geometric mean (GM), which is given by the midpoint of the geodesic (3), denoted as
$$A \sharp B := \gamma(1/2) = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}. \quad (21)$$
^2 Specifically, the set (18) is identical to the Wallach set, which is important in the study of Hilbert spaces of holomorphic functions over symmetric domains [26; Ch.XIII].
Riemannian metric | Ref. | $\delta_{\ell d}$-metric | Ref.
$\delta_R(X^* A X, X^* B X) = \delta_R(A, B)$ | [11; Ch.6] | $\delta_{\ell d}(X^* A X, X^* B X) = \delta_{\ell d}(A, B)$ | Prop. 2
$\delta_R(A^{-1}, B^{-1}) = \delta_R(A, B)$ | [11; Ch.6] | $\delta_{\ell d}(A^{-1}, B^{-1}) = \delta_{\ell d}(A, B)$ | Prop. 2
$\delta_R(A^t, B^t) \le t\,\delta_R(A, B)$ | [11; Ex.6.5.4] | $\delta_{\ell d}(A^t, B^t) \le \sqrt{t}\,\delta_{\ell d}(A, B)$ | [19; Th.4.6]
$\delta_R(A^s, B^s) \le (s/u)\,\delta_R(A^u, B^u)$ | [19; Th.4.11] | $\delta_{\ell d}(A^s, B^s) \le \sqrt{s/u}\,\delta_{\ell d}(A^u, B^u)$ | [19; Th.4.11]
$\delta_R(A, A \sharp B) = \delta_R(B, A \sharp B)$ | Trivial | $\delta_{\ell d}(A, A \sharp B) = \delta_{\ell d}(B, A \sharp B)$ | Th.14
$\delta_R(A, A \sharp_t B) = t\,\delta_R(A, B)$ | [11; Th.6.1.6] | $\delta_{\ell d}(A, A \sharp_t B) \le \sqrt{t}\,\delta_{\ell d}(A, B)$ | [19; Th.4.7]
$\delta_R(A \sharp_t B, A \sharp_t C) \le t\,\delta_R(B, C)$ | [11; Th.6.1.2] | $\delta_{\ell d}(A \sharp_t B, A \sharp_t C) \le \sqrt{t}\,\delta_{\ell d}(B, C)$ | [19; Th.4.8]
$\min_X \delta_R^2(X, A) + \delta_R^2(X, B) \mapsto \mathrm{GM}$ | [11; Ch.6] | $\min_X \delta_{\ell d}^2(X, A) + \delta_{\ell d}^2(X, B) \mapsto \mathrm{GM}$ | Th.14
$\delta_R(A + X, A + Y) \le \delta_R(X, Y)$ | [3] | $\delta_{\ell d}(A + X, A + Y) \le \delta_{\ell d}(X, Y)$ | [19; Th.4.9]

Table 1: Some of the similarities between $\delta_R$ and $\delta_{\ell d}$. All matrices are assumed to be in $P_n$. The scalars $t, s, u$ satisfy $0 < t \le 1$, $1 \le s \le u < \infty$.
The GM (21) has numerous attractive properties (see for instance [30]); among these, the following variational characterization is very important [31, 32]:
$$A \sharp B = \operatorname{argmin}_{X > 0}\ \delta_R^2(A, X) + \delta_R^2(B, X), \quad (22)$$
especially because it generalizes the matrix geometric mean to more than two matrices. Specifically, this "natural" generalization is the Karcher mean (Fréchet mean) [31, 32, 11]:
$$\mathrm{GM}(A_1, \ldots, A_m) := \operatorname{argmin}_{X > 0} \sum_{i=1}^m \delta_R^2(X, A_i). \quad (23)$$
This multivariable generalization is in fact a well-studied, difficult problem; see e.g., [33] for information on the state-of-the-art. Indeed, its inordinate computational expenses motivated Cherian et al. [4] to study the alternative mean
$$\mathrm{GM}_{\ell d}(A_1, \ldots, A_m) := \operatorname{argmin}_{X > 0}\ \phi(X) := \sum_{i=1}^m \delta_{\ell d}^2(X, A_i), \quad (24)$$
which has also been more thoroughly studied by Chebbi and Moahker [2].
prove either global or local optimality. Although Chebbi and Moahker [2] showed that (24) has a
unique solution, like [4] they too only proved stationarity, neither global nor local optimality.
We fill these gaps, and we make the following main contributions below:
1. We connect (24) to the Karcher mean more closely, where in Theorem 14 we shows that
for the two matrix case both problems have the same solution;
2. We show that the unique positive solution to (24) is globally optimal; this result is particularly interesting because ?(X) is nonconvex.
We begin by looking at the two variable case of GM`d (24).
Theorem 14. Let A, B > 0. Then,
A]B = argminX>0
2
2
?(X) := ?`d
(X, A) + ?`d
(X, B).
(25)
Moreover, A]B is equidistant from A and B, i.e., ?`d (A, A]B) = ?`d (B, A]B).
Proof. If A = B, then clearly X = A minimizes ?(X). Assume therefore, that A 6= B. Ignoring
the constraint X > 0 momentarily, we see that any stationary point must satisfy ??(X) = 0. Thus,
?1 1
?1
X+B ?1 1
??(X) = X+A
=0
2
2 +
2
2 ?X
=? (X + A)X ?1 (X + B) = 2X + A + B
=?
B = XA?1 X.
(26)
The latter equation is a Riccati equation that is known to have a unique, positive definite solution
given by the matrix GM (21) (see [11; Prop 1.2.13]). All that remains to show is that this GM is in
fact a local minimizer. To that end, we must show that the Hessian ?2 ?(X) > 0 at X = A]B; but
this claim is immediate from Theorem 18. So A]B is a strict local minimum of (8), which is actually
a global minimum because it is the unique positive solution to ?(X) = 0. Finally, the equidistance
property follows after some algebraic manipulations; we omit details for brevity [19].
6
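A numerical check of Theorem 14 (our own sketch, reusing delta_ld from the earlier snippet): the geometric mean of Eq. (21) solves the Riccati equation (26) and is $\delta_{\ell d}$-equidistant from $A$ and $B$.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

rng = np.random.default_rng(4)
M = rng.standard_normal((5, 5)); A = M @ M.T + np.eye(5)
M = rng.standard_normal((5, 5)); B = M @ M.T + np.eye(5)

Ah = np.real(sqrtm(A))
Ahi = inv(Ah)
GM = Ah @ np.real(sqrtm(Ahi @ B @ Ahi)) @ Ah   # A#B, Eq. (21)

print(np.allclose(GM @ inv(A) @ GM, B))        # Riccati equation (26)
print(delta_ld(A, GM), delta_ld(B, GM))        # equal up to round-off
```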
Let us now turn to the general case (24). The first-order optimality condition is
$$\nabla \phi(X) = \sum_{i=1}^m \tfrac{1}{2}\Big(\tfrac{X + A_i}{2}\Big)^{-1} - \tfrac{1}{2} m X^{-1} = 0, \quad X > 0. \quad (27)$$
From (27), using Lemma 15, it can be inferred [see also 2, 4] that any critical point $X$ of (24) lies in a convex, compact set specified by $\big(\tfrac{1}{m}\sum_{i=1}^m A_i^{-1}\big)^{-1} \preceq X \preceq \tfrac{1}{m}\sum_{i=1}^m A_i$.
Lemma 15 ([21; Ch.5]). The map $X^{-1}$ on $P_n$ is order reversing and operator convex. That is, for $X, Y \in P_n$, if $X \succeq Y$, then $X^{-1} \preceq Y^{-1}$; for $t \in [0, 1]$, $(tX + (1-t)Y)^{-1} \preceq t X^{-1} + (1-t) Y^{-1}$.
Lemma 16 ([19]). Let $A, B, C, D \in P_n$, so that $A \succeq B$ and $C \succeq D$. Then, $A \otimes C \succeq B \otimes D$.
Lemma 17 (Uniqueness [2]). The nonlinear equation (27) has a unique positive solution.
Using the above results, we can finally prove the main theorem of this section.
Theorem 18. Let $X$ be a matrix satisfying (27). Then, it is the unique global minimizer of (24).
Proof. The objective function $\phi(X)$ in (24) has only one positive stationary point, which follows from Lemma 17. Let $X$ be this stationary point satisfying (27). We show that $X$ is actually a local minimum; global optimality is immediate from the uniqueness of $X$.
To show local optimality, we prove that the Hessian $\nabla^2 \phi(X) > 0$. Ignoring constants, showing positivity of the Hessian reduces to proving that
$$m X^{-1} \otimes X^{-1} - \sum_{i=1}^m \tfrac{1}{2}\Big(\tfrac{X + A_i}{2}\Big)^{-1} \otimes \Big(\tfrac{X + A_i}{2}\Big)^{-1} > 0. \quad (28)$$
Now replace $m X^{-1}$ in (28) using the condition (27); therewith inequality (28) turns into
$$\sum_{i=1}^m \Big(\tfrac{X + A_i}{2}\Big)^{-1} \otimes X^{-1} > \sum_{i=1}^m \Big(\tfrac{X + A_i}{2}\Big)^{-1} \otimes (X + A_i)^{-1}. \quad (29)$$
From Lemma 15 we know that $X^{-1} > (X + A_i)^{-1}$, so that an application of Lemma 16 shows that $\big(\tfrac{X + A_i}{2}\big)^{-1} \otimes X^{-1} > \big(\tfrac{X + A_i}{2}\big)^{-1} \otimes (X + A_i)^{-1}$ for $1 \le i \le m$. Summing up, we obtain (29), which implies the desired local (and by uniqueness, global) optimality of $X$.
Remark 19. It is worth noting that Theorem 18 establishes that solving (27) yields the global minimum of a nonconvex optimization problem. This result is even more remarkable because, unlike CAT(0)-metrics such as $\delta_R$, the metric $\delta_{\ell d}$ is not geodesically convex.
4.1 Numerical Results
We present a key numerical result to illustrate the large savings in running time when computing with $\delta_{\ell d}$ compared with $\delta_R$. To compute the Karcher mean we downloaded the "Matrix Means Toolbox" of Bini and Iannazzo from http://bezout.dm.unipi.it/software/mmtoolbox/. In particular, we use the file called rich.m, which implements a state-of-the-art method [33].
The first plot in Fig. 1 indicates that $\delta_{\ell d}$ can be around 5 times faster than $\delta_{R2}$ and up to 50 times faster than $\delta_{R1}$. The second plot shows how expensive it can be to compute GM (23) as opposed to $\mathrm{GM}_{\ell d}$ (24): up to 1000 times! The former was computed using the method of [33], while the latter runs the fixed-point iteration proposed in [2] (the iteration was run until $\|\nabla \phi(X)\|$ fell below $10^{-10}$). The key point here is not that the fixed-point iteration is faster, but rather that (24) is a much simpler problem thanks to the convenient eigenvalue-free structure of $\delta_{\ell d}$.
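For completeness, here is a sketch of that fixed-point iteration (our own transcription under the assumption that it iterates the first-order condition (27) rewritten as $X = \big(\tfrac{1}{m}\sum_i ((X + A_i)/2)^{-1}\big)^{-1}$; this is not the authors' code):

```python
import numpy as np

def gm_ld(As, iters=500, tol=1e-12):
    """Fixed-point iteration for GM_ld of a list of spd matrices."""
    m = len(As)
    X = sum(As) / m                            # arithmetic mean as start
    for _ in range(iters):
        S = sum(np.linalg.inv((X + A) / 2) for A in As) / m
        X_new = np.linalg.inv(S)
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X):
            return X_new
        X = X_new
    return X
```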
5 Conclusions and future work
We presented a new metric on the manifold of positive definite matrices, and related it to the classical Riemannian metric on this manifold. Empirically, our new metric was shown to lead to large computational gains, while theoretically, a series of theorems demonstrated how it expresses the negatively curved non-Euclidean geometry in a manner analogous to the Riemannian metric.
[Figure 1 consists of two log-scale plots of running time (in seconds) against the dimensionality n of the matrices used. Left panel: "Time taken to compute $\delta_R$ and $\delta_S$", with curves $\delta_{R1}$, $\delta_{R2}$, and $\delta_S$, for n up to 2000. Right panel: "Time taken to compute GM and GM_ld for 10 matrices", with curves GM and GM_ld, for n up to 200.]
Figure 1: Running time comparisons between $\delta_R$ and $\delta_{\ell d}$. The left panel shows time (in seconds) taken to compute $\delta_R$ and $\delta_{\ell d}$, averaged over 10 runs to reduce variance. In the plot, $\delta_{R1}$ refers to the implementation of $\delta_R$ in the matrix means toolbox [33], while $\delta_{R2}$ is our own implementation.
At this point, there are several directions of future work opened by our paper. We mention some of the most relevant ones below. (i) Study further geometric properties of the metric space $(P_n, \delta_{\ell d})$; (ii) further enrich the connections to $\delta_R$, and to other (Finsler) metrics on $P_n$; (iii) study properties of the geometric mean $\mathrm{GM}_{\ell d}$ (24), including faster algorithms to compute it; (iv) akin to [4], apply $\delta_{\ell d}$ in areas where $\delta_R$ has so far been dominant. We plan to tackle some of these problems, and hope that our paper encourages other researchers in machine learning and optimization to also study them.
References
[1] H. Lee and Y. Lim. Invariant metrics, contractions and nonlinear matrix equations. Nonlinearity, 21:857-878, 2008.
[2] Z. Chebbi and M. Moahker. Means of Hermitian positive-definite matrices based on the log-determinant alpha-divergence function. Linear Algebra and its Applications, 436:1872-1889, 2012.
[3] P. Bougerol. Kalman Filtering with Random Coefficients and Contractions. SIAM J. Control Optim., 31(4):942-959, 1993.
[4] A. Cherian, S. Sra, A. Banerjee, and N. Papanikolopoulos. Efficient Similarity Search for Covariance Matrices via the Jensen-Bregman LogDet Divergence. In International Conference on Computer Vision (ICCV), Nov. 2011.
[5] F. Porikli, O. Tuzel, and P. Meer. Covariance Tracking using Model Update Based on Lie Algebra. In IEEE CVPR, 2006.
[6] L. T. Skovgaard. A Riemannian Geometry of the Multivariate Normal Model. Scandinavian Journal of Statistics, 11(4):211-223, 1984.
[7] D. Petz. Quantum Information Theory and Quantum Statistics. Springer, 2008.
[8] I. Dryden, A. Koloydenko, and D. Zhou. Non-Euclidean statistics for covariance matrices, with applications to diffusion tensor imaging. Annals of Applied Statistics, 3(3):1102-1123, 2009.
[9] H. Zhu, H. Zhang, J. G. Ibrahim, and B. S. Peterson. Statistical Analysis of Diffusion Tensors in Diffusion-Weighted Magnetic Resonance Imaging Data. Journal of the American Statistical Association, 102(480):1085-1102, 2007.
[10] F. Hiai and D. Petz. Riemannian metrics on positive definite matrices related to means. Linear Algebra and its Applications, 430:3105-3130, 2009.
[11] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2007.
[12] M. R. Bridson and A. Haeflinger. Metric Spaces of Non-Positive Curvature. Springer, 1999.
[13] A. Terras. Harmonic Analysis on Symmetric Spaces and Applications, volume II. Springer, 1988.
[14] Yu. Nesterov and A. Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. SIAM, 1987.
[15] A. Ben-Tal and A. Nemirovksii. Lectures on modern convex optimization: Analysis, algorithms, and engineering applications. SIAM, 2001.
[16] Yu. Nesterov and M. J. Todd. On the Riemannian geometry defined for self-concordant barriers and interior point methods. Found. Comput. Math., 2:333-361, 2002.
[17] S. Helgason. Geometric Analysis on Symmetric Spaces. Number 39 in Mathematical Surveys and Monographs. AMS, second edition, 2008.
[18] H. Wolkowicz, R. Saigal, and L. Vandenberghe, editors. Handbook of Semidefinite Programming: Theory, Algorithms, and Applications. Kluwer Academic, 2000.
[19] S. Sra. Positive definite matrices and the Symmetric Stein Divergence. arXiv:1110.1773, October 2012.
[20] C. Berg, J. P. R. Christensen, and P. Ressel. Harmonic analysis on semigroups: theory of positive definite and related functions, volume 100 of GTM. Springer, 1984.
[21] R. Bhatia. Matrix Analysis. Springer, 1997.
[22] R. Bellman. Introduction to Matrix Analysis. SIAM, second edition, 1970.
[23] M. Harandi, C. Sanderson, R. Hartley, and B. Lovell. Sparse Coding and Dictionary Learning for Symmetric Positive Definite Matrices: A Kernel Approach. In European Conference on Computer Vision (ECCV), 2012.
[24] M. Cuturi, K. Fukumizu, and J. P. Vert. Semigroup kernels on measures. JMLR, 6:1169-1198, 2005.
[25] R. J. Muirhead. Aspects of multivariate statistical theory. Wiley Interscience, 1982.
[26] J. Faraut and A. Korányi. Analysis on Symmetric Cones. Clarendon Press, 1994.
[27] M.-F. Bru. Wishart Processes. J. Theoretical Probability, 4(4), 1991.
[28] S. G. Gindikin. Invariant generalized functions in homogeneous domains. Functional Analysis and its Applications, 9:50-52, 1975.
[29] A. Cherian, S. Sra, A. Banerjee, and N. Papanikolopoulos. Jensen-Bregman LogDet Divergence with Application to Efficient Similarity Search for Covariance Matrices. IEEE TPAMI, 2012. Submitted.
[30] T. Ando. Concavity of certain maps on positive definite matrices and applications to Hadamard products. Linear Algebra and its Applications, 26(0):203-241, 1979.
[31] R. Bhatia and J. A. R. Holbrook. Riemannian geometry and matrix geometric means. Linear Algebra Appl., 413:594-618, 2006.
[32] M. Moakher. A differential geometric approach to the geometric mean of symmetric positive-definite matrices. SIAM J. Matrix Anal. Appl. (SIMAX), 26:735-747, 2005.
[33] D. A. Bini and B. Iannazzo. Computing the Karcher mean of symmetric positive definite matrices. Linear Algebra and its Applications, Oct. 2011. Available online.
4,248 | 4,845 | Clustering by Nonnegative Matrix Factorization
Using Graph Random Walk
Zhirong Yang, Tele Hao, Onur Dikmen, Xi Chen and Erkki Oja
Department of Information and Computer Science
Aalto University, 00076, Finland
{zhirong.yang,tele.hao,onur.dikmen,xi.chen,erkki.oja}@aalto.fi
Abstract
Nonnegative Matrix Factorization (NMF) is a promising relaxation technique for
clustering analysis. However, conventional NMF methods that directly approximate the pairwise similarities using the least square error often yield mediocre
performance for data in curved manifolds because they can capture only the immediate similarities between data samples. Here we propose a new NMF clustering
method which replaces the approximated matrix with its smoothed version using
random walk. Our method can thus accommodate farther relationships between
data samples. Furthermore, we introduce a novel regularization in the proposed
objective function in order to improve over spectral clustering. The new learning
objective is optimized by a multiplicative Majorization-Minimization algorithm
with a scalable implementation for learning the factorizing matrix. Extensive experimental results on real-world datasets show that our method has strong performance in terms of cluster purity.
1 Introduction
Clustering analysis as a discrete optimization problem is usually NP-hard. Nonnegative Matrix Factorization (NMF) as a relaxation technique for clustering has shown remarkable progress in the past
decade (see e.g. [9, 4, 2, 26]). In general, NMF finds a low-rank approximating matrix to the input
nonnegative data matrix, where the most popular approximation criterion or divergence in NMF is
the Least Square Error (LSE). It has been shown that certain NMF variants with this divergence
measure are equivalent to k-means, kernel k-means, or spectral graph cuts [7]. In addition, NMF
with LSE can be implemented efficiently by existing optimization methods (see e.g. [16]).
Although popularly used, previous NMF methods based on LSE often yield mediocre performance
for clustering, especially for data that lie in a curved manifold. In clustering analysis, the cluster assignment is often inferred from pairwise similarities between data samples. Commonly the
similarities are calculated based on Euclidean distances. For data in a curved manifold, only local
Euclidean distances are reliable and similarities between non-neighboring samples are usually set
to zero, which yields a sparse input matrix to NMF. If the LSE is directly used in approximation
to such a similarity matrix, a lot of learning effort will be wasted due to the large majority of zero
entries. The same problem occurs for clustering nodes of a sparse network.
In this paper we propose a new NMF method for clustering such manifold data or sparse network
data. Previous NMF clustering methods based on LSE used an approximated matrix that takes only
similarities within immediate neighborhood into account. Here we consider multi-step similarities
between data samples using graph random walk, which has shown to be an effective smoothing
approach for finding global data structures such as clusters. In NMF the smoothing can reduce the
sparsity gap in the approximation and thus ease cluster analysis. We name the new method NMF
using graph Random walk (NMFR).
In implementation, we face two obstacles when the input matrix is replaced by its random walk version: (1) the performance of unconstrained NMFR remains similar to classical spectral clustering, because smoothing only manipulates the eigenvalues of the Laplacian of the similarity graph and does not change the eigensubspace; (2) the similarities by random walk require inverting an $n \times n$ matrix for $n$ data samples. Explicit matrix inversion is infeasible for large datasets. To overcome the above obstacles, we employ (1) a regularization technique that supplements the orthogonality constraint for better clustering, and (2) a more scalable fixed-point algorithm to calculate the product of the inverted matrix and the factorizing matrix.
We have conducted extensive experiments to evaluate the new method. The proposed algorithm is compared with nine other state-of-the-art clustering approaches on a large variety of real-world datasets. Experimental results show that, with only simple initialization, NMFR performs robustly across 46 clustering tasks. The new method achieves the best clustering purity for 36 of the selected datasets, and nearly the best for the rest. In particular, NMFR is remarkably superior to the other methods for large-scale manifold data from various domains.
In the remainder, we briefly review related work on clustering by NMF in Section 2. In Section
3 we point out a major drawback in previous NMF methods with least square error and present our
solution. Experimental settings and results are given in Section 4. Section 5 concludes the paper
and discusses potential future work.
2 Pairwise Clustering by NMF
Cluster analysis or clustering is the task of assigning a set of data samples into groups (called clusters) so that the objects in the same cluster are more similar to each other than to those in other clusters. Denote by $\mathbb{R}_+$ the set of nonnegative reals. The pairwise similarities between $n$ data samples can be encoded in an undirected graph with adjacency matrix $S \in \mathbb{R}_+^{n \times n}$. Because clustered data tend to have higher similarities within clusters and lower similarities between clusters, the similarity matrix in visualization looks nearly diagonal blockwise if we sort the rows and columns by clusters. Such structure motivates approximative low-rank factorization of $S$ by the cluster indicator matrix $U \in \{0, 1\}^{n \times r}$ for $r$ clusters: $S \approx UU^T$, where $U_{ik} = 1$ if the $i$-th sample is assigned to the $k$-th cluster and $0$ otherwise. Moreover, clusters of balanced sizes are desired in most clustering tasks. This can be achieved by suitable normalization of the approximating matrix. A common way is to normalize $U_{ik}$ by $M_{ik} = U_{ik}/\sqrt{\sum_j U_{jk}}$ such that $M^T M = I$ and $\sum_i (MM^T)_{ij} = 1$ (see e.g. [6, 7, 27]).
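A small sketch (our own illustration) of this normalized indicator matrix $M$ built from a toy assignment; it verifies that $M^T M = I$ and that each column of $MM^T$ sums to one:

```python
import numpy as np

labels = np.array([0, 0, 1, 1, 1, 2])          # toy cluster assignment
n, r = len(labels), 3
U = np.zeros((n, r))
U[np.arange(n), labels] = 1.0                  # U_ik = 1 iff sample i in cluster k
M = U / np.sqrt(U.sum(axis=0, keepdims=True))  # M_ik = U_ik / sqrt(sum_j U_jk)

print(np.allclose(M.T @ M, np.eye(r)))         # True
print((M @ M.T).sum(axis=0))                   # all ones
```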
However, directly optimizing over $U$ or $M$ is difficult due to the discrete solution space, which usually leads to an NP-hard problem. Continuous relaxation is thus needed to ease the optimization. One of the popular choices is the combination of nonnegativity and orthogonality constraints [11, 23]. That is, we replace $M$ with $W$ where $W_{ik} \ge 0$ and $W^T W = I$. In this way, each row of $W$ has only one non-zero entry, because the non-zero parts of two nonnegative and orthogonal vectors do not overlap. Some other Nonnegative Matrix Factorization (NMF) relaxations exist, for example, the kernel Convex NMF [9] and its special case Projective NMF [23], as well as the relaxation by using a left-stochastic matrix [2].
A commonly used divergence that measures the approximation error is the squared Euclidean distance or Frobenius norm [15, 13]. The NMF objective to be minimized thus becomes
$$\|S - WW^T\|_F^2 = \sum_{ij}\big[S_{ij} - (WW^T)_{ij}\big]^2. \quad (1)$$
The above least square error objective is widely used because we have a better understanding of its algebraic and geometric properties. For example, Zhao et al. [13] showed that the multiplicative optimization algorithm for the above Symmetric NMF (SNMF) problem is guaranteed to converge to a local minimum if $S$ is positive semi-definite. Furthermore, SNMF with orthogonality has a tight connection to classical objectives such as kernel k-means and normalized cuts [7, 23]. In this paper, we choose this divergence also because it is the sole one in the alpha-beta-divergence family [5] that involves only the product $SW$ instead of $S$ itself in the gradient. As we shall see in Section 3.2, this property enables a scalable implementation of a gradient-based optimization algorithm.
Figure 1: Illustration of clustering the SEMEION handwritten digit dataset by NMF based on LSE: (a) the symmetrized 5-NN graph, (b) the correct clusters to be found, (c) the ideally assumed data that suits the least square error, (d) the smoothed input by using graph random walk. The matrix entries are visualized as image pixels. Darker pixels represent higher similarities. For clarity we show only the subset of digits "2" and "3". In this paper we show that because (d) is "closer" to (c) than (a), it is easier to find correct clusters using (d) $\approx$ (b) instead of (a) $\approx$ (b) by NMF with LSE.
3 NMF Using Graph Random Walk
There is a serious drawback in previous NMF clustering methods using least square errors. When minimizing $\|S - \hat{S}\|_F^2$ for given $S$, the approximating matrix $\hat{S}$ should be diagonal blockwise for clustering analysis, as shown in Figure 1 (b). Correspondingly, the ideal input $S$ for LSE should look like Figure 1 (c), because the underlying distribution of LSE is Gaussian.
However, the similarity matrix of real-world data often differs from the ideal case. In many clustering tasks, the raw features of data are usually weak. That is, the given distance measure between data points, such as the Euclidean distance, is only valid in a small neighborhood. The similarities calculated from such distances are thus sparse, where the similarities between non-neighboring samples are usually set to zero. For example, the symmetrized K-nearest-neighbor (K-NN) graph is a popularly used similarity input. Therefore, similarity matrices in real-world clustering tasks often look like Figure 1 (a), where the non-zero entries are much sparser than in the ideal case.
It is a mismatch to approximate a sparse similarity matrix by a dense diagonal blockwise matrix using LSE. Because squared Euclidean distance is a symmetric metric, the learning objective can be dominated by the approximation to the majority of zero entries, which is undesired for finding correct cluster assignments. Although various matrix factorization schemes and factorizing matrix constraints have been proposed for NMF, little research effort has been made to overcome the above mismatch.
In this work we present a different way to formalize NMF for clustering that reduces the sparsity gap between input and output matrices. Instead of approximating the sparse input $S$, which only encodes the immediate similarities between data samples, we propose to approximate a smoothed version of $S$ which takes farther relationships between data samples into account. Graph random walk is a common way to implement multi-step similarities. Denote by $Q = D^{-1/2} S D^{-1/2}$ the normalized similarity matrix, where $D$ is a diagonal matrix with $D_{ii} = \sum_j S_{ij}$. The similarities between data nodes using $j$ steps are given by $(\alpha Q)^j$, where $\alpha \in (0, 1)$ is a decay parameter controlling the random walk extent. Summing over all possible numbers of steps gives $\sum_{j=0}^{\infty} (\alpha Q)^j = (I - \alpha Q)^{-1}$. We thus propose to replace $S$ in Eq. (1) with
$$A = c^{-1}(I - \alpha Q)^{-1}, \quad (2)$$
where $c = \sum_{ij}\big[(I - \alpha Q)^{-1}\big]_{ij}$ is a normalizing factor. Here the parameter $\alpha$ controls the smoothness: a larger $\alpha$ tends to produce a smoother $A$, while a smaller one makes $A$ concentrate on its diagonal. A smoothed approximated matrix $A$ is shown in Figure 1 (d), from which we can see that the sparsity gap to the approximating matrix is reduced.
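For small graphs, Eq. (2) can be formed directly; the dense sketch below (our own illustration) is only for intuition, since for large n one never inverts explicitly (see Algorithm 1 below).

```python
import numpy as np

def smoothed_similarity(S, alpha=0.8):
    """Dense computation of A = c^{-1} (I - alpha*Q)^{-1}, Q = D^{-1/2} S D^{-1/2}."""
    d = S.sum(axis=1)
    Q = S / np.sqrt(np.outer(d, d))
    R = np.linalg.inv(np.eye(len(S)) - alpha * Q)   # sums all walk lengths
    return R / R.sum()                              # normalize by c
```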
Just smoothing the input matrix by random walk is not enough, as we are presented with two difficulties. First, random walk only alters the spectrum of $Q$, while the eigensubspaces of $A$ and $Q$ are the same. Smoothing therefore does not change the result of clustering algorithms that operate on the eigenvectors (e.g. [20]). If we simply replace $S$ by $A$ in Eq. (1), the resulting $W$ is often the same as the leading eigenvectors of $Q$ up to an $r \times r$ rotation. That is, smoothing by random walk itself can bring little improvement unless we impose extra constraints or regularization. Second, explicitly calculating $A$ is infeasible because when $S$ is large and sparse, $A$ is also large but dense. This requires a more careful design of a scalable optimization algorithm. Below we present solutions to overcome these two difficulties, in Sections 3.1 and 3.2 respectively.
3.1 Learning Objective
Minimizing kA ? W W T k2F over W subject to W T W = I is equivalent to maximizing
Tr W T AW . To improve over spectral clustering, we propose to regularize the trace maximization
by an extra penalty term on W . The new optimization problem for pairwise clustering is:
!2
X X
T
2
minimize J (W ) = ?Tr W AW + ?
Wik
(3)
W ?0
i
k
subject to W T W = I,
(4)
where ? > 0 is the tradeoff parameter. We find that ? =
1
2r
works well in this work.
The extra penalty term collaborates with the orthogonality constraint for pairwise clustering, which is justified by two interpretations.
- It emphasizes off-diagonal correlation in the trace. Because $\sum_i \big(\sum_k W_{ik}^2\big)^2 = \sum_i \big[(WW^T)_{ii}\big]^2$, the minimization tends to reduce the diagonal magnitude in the approximating matrix. This is desired because self-similarities usually give little information for grouping data samples. Given the constraints $W \ge 0$ and $W^T W = I$, it is beneficial to push the magnitudes in $WW^T$ off-diagonal for maximizing the correlation to similarities between different data samples.
- It tends to equalize the norms of the $W$ rows. To see this, let us write $a_i \equiv \sum_k W_{ik}^2$ for brevity. Because $\sum_i a_i = r$ is constant, minimizing $\sum_i a_i^2$ actually maximizes $\sum_{i \ne j} a_i a_j$. The maximum is achieved when $\{a_i\}_{i=1}^n$ are equal. Originally, the nonnegativity and orthogonality constraint combination only guarantees that each row of $W$ has one non-zero entry, though the norms of different $W$ rows can be diverse. The equalization by the proposed penalty term thus well supplements the nonnegativity and orthogonality constraints and, as a whole, provides a closer relaxation to the normalized cluster indicator matrix $M$.
Algorithm 1 Large-Scale Relaxed Majorization and Minimization Algorithm for W
Input: similarity matrix S, random walk extent alpha in (0, 1), number of clusters r, nonnegative initial guess of W.
repeat
    Calculate c = IterativeTracer(Q, alpha, e).
    Calculate G = IterativeSolver(Q, alpha, W).
    Update W by Eq. (5), using c^{-1} G in place of AW.
until W converges
Discretize W to the cluster indicator matrix U.
Output: U.

function IterativeTracer(Q, alpha, W)
    F = IterativeSolver(Q, alpha, W)
    return Tr(W^T F)
end function

function IterativeSolver(Q, alpha, W)
    Initialize F = W
    repeat
        Update F <- alpha * Q * F + (1 - alpha) * W
    until F converges
    return F / (1 - alpha)
end function
3.2 Optimization
The optimization algorithm is developed by following the procedure in [24, 26]. Introducing Lagrangian multipliers $\{\Lambda_{kl}\}$ for the orthogonality constraint, we have the augmented objective $\mathcal{L}(W, \Lambda) = \mathcal{J}(W) + \mathrm{Tr}\big(\Lambda (W^T W - I)\big)$. Using the Majorization-Minimization development procedure in [24, 26], we can obtain a preliminary multiplicative update rule. We then use the orthogonality constraint to solve for the multipliers. Substituting the multipliers into the preliminary update rule, we obtain an optimization algorithm which iterates the following multiplicative update rule:
$$W_{ik}^{\mathrm{new}} = W_{ik} \left[\frac{\big(AW + 2\lambda W W^T V W\big)_{ik}}{\big(2\lambda V W + W W^T A W\big)_{ik}}\right]^{1/4} \quad (5)$$
where $V$ is a diagonal matrix with $V_{ii} = \sum_l W_{il}^2$.
Theorem 1. $\mathcal{L}(W^{\mathrm{new}}, \Lambda) \le \mathcal{L}(W, \Lambda)$ for $\Lambda = \tfrac{1}{2} W^T \tfrac{\partial \mathcal{J}}{\partial W}$.
The proof is given in the appendix. Note that $\mathcal{J}(W)$ does not necessarily decrease after each iteration. Instead, the monotonicity stated in the theorem justifies that the above algorithm jointly minimizes $\mathcal{J}(W)$ and drives $W$ towards the manifold defined by the orthogonality constraint. After $W$ converges, we discretize it and obtain the cluster indicator matrix $U$.
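One iteration of Eq. (5) in NumPy (our own sketch; in the scalable version of Algorithm 1, the product $AW$ is supplied as $c^{-1}G$ rather than formed from a dense $A$):

```python
import numpy as np

def update_W(W, AW, lam):
    """One multiplicative update of Eq. (5); AW is the precomputed product A @ W."""
    v = (W ** 2).sum(axis=1)                        # v_i = sum_l W_il^2, i.e. diag(V)
    num = AW + 2 * lam * (W @ (W.T @ (v[:, None] * W)))
    den = 2 * lam * (v[:, None] * W) + W @ (W.T @ AW)
    return W * (num / den) ** 0.25
```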
It is a crucial observation that the update rule Eq. (5) requires only the product of $(I - \alpha Q)^{-1}$ with a low-rank matrix, instead of $A$ itself. We can thus avoid expensive computation and storage of the large smoothed similarity matrix. There is an iterative and more scalable way to calculate $F = (I - \alpha Q)^{-1} W$ [29]; see the IterativeSolver function in Algorithm 1. In practice, the calculation of $F$ usually converges nicely within 100 iterations. The same technique can be applied to calculating the normalizing factor $c$ in Eq. (2), using $e = [1, 1, \ldots, 1]$ instead of $W$. The resulting algorithm for optimizing $W$ is summarized in Algorithm 1. Matlab codes can be found in [1].
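A NumPy transcription of the IterativeSolver routine of Algorithm 1 (a sketch; the authors' own implementation is the Matlab code in [1]). It computes $(I - \alpha Q)^{-1} W$ using only (possibly sparse) matrix products:

```python
import numpy as np

def iterative_solver(Q, alpha, W, max_iters=100, tol=1e-8):
    F = W.copy()
    for _ in range(max_iters):
        F_new = alpha * (Q @ F) + (1 - alpha) * W
        if np.linalg.norm(F_new - F) <= tol * np.linalg.norm(F_new):
            F = F_new
            break
        F = F_new
    # the fixed point satisfies F = (1 - alpha) (I - alpha*Q)^{-1} W
    return F / (1 - alpha)
```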
3.3 Initialization
Most state-of-the-art clustering methods involve non-convex optimization objectives and thus only return local optima in general. This is also the case for our algorithm. To achieve a better local optimum, a clustering algorithm should start from one or more relatively considerate initial guesses.
Different strategies for choosing the starting point can be classified into the following levels, sorted
by their computational cost:
Level-0: (random-init) The starting relaxed indicator matrix is filled by randomly generated numbers.
Level-1: (simple-init) The starting matrix is the result of a cheap clustering method, e.g. Normalized Cut or k-means, plus a small perturbation.
Level-2: (family-init) The initial guesses are results of the methods in a parameterized family. Typical examples include various regularization extents or Bayesian priors with different hyperparameters (see e.g. [25]).
Level-3: (meta-init) The initial guesses can come from methods of various principles. Each initialization method runs only once.
Level-4: (meta-co-init) Same as Level-3 except that clustering methods provide initialization for
each other. A method can serve initialization multiple times if it finds a better local minimum. The whole procedure stops when each of the involved methods fails to find better
local optimum (see e.g. [10]).
Some methods are not sensitive to initializations but tend to return less accurate clustering. On the
other hand, some other methods can find more accurate results but require comprehensive initialization. A preferable clustering method should achieve high accuracy with cheap initialization. As we
shall see, the proposed NMFR algorithm can attain satisfactory clustering accuracy with only simple
initialization (Level-1).
4 Experiments
We have compared our method against a variety of state-of-the-art clustering methods, including
Projective NMF [23], Nonnegative Spectral Cut (NSC) [8], (symmetric) Orthogonal NMF (ONMF)
[11], Left-Stochastic matrix Decomposition (LSD) [2], Data-Cluster-Data decomposition (DCD) [25], as well as the classical Normalized Cut (Ncut) [21]. We also selected two recent clustering methods beyond NMF: 1-Spectral (1Spec) [14], which uses balanced graph cut, and the Interaction Component Model (ICM) [22], which is the symmetric version of the topic model [3].
We used default settings in the compared methods. For 1Spec, we used the ratio Cheeger cut. For ICM, the hyper-parameters of the Dirichlet process prior are updated by Minka's learning method [19]. The other NMF-type methods that use multiplicative updates were run with 10,000 iterations to guarantee convergence. For our method, we trained W by using Algorithm 1 for each candidate α ∈ {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.99} when n ≤ 8000. The best α and the corresponding clustering result were then obtained by minimizing ‖A − bWW^T‖_F^2 with a suitable positive scalar b. Here we set b = 2λ, using the heuristic that the penalty term in the gradient can be interpreted as removal of the diagonal effect of the approximating matrix. When λ = 1/(2r), we obtain b = 1/r. The new clustering method is not very sensitive to the choice of α for large-scale datasets. We simply used α = 0.8 in experiments when n > 8000. All methods except Ncut, 1Spec, and ICM were initialized by Normalized Cut. That is, their starting point was the Ncut cluster indicator matrix plus a small constant 0.2 added to all entries.
We have compared the above methods on clustering various datasets. The domains of the datasets
range from networks and text to biology and images. All datasets are publicly available on the Internet. The
data sources and statistics are given in the supplemental document. We constructed symmetrized
K-NN graphs from the multivariate data, where K = 5 for the 30 smallest datasets, text datasets,
PROTEIN and SEISMIC datasets, while K = 10 for the remaining datasets. Following [25], we
extract the scattering features [18] for images before calculating the K-NN graph. We used Tf-Idf
features for text data. The adjacency matrices of network data were symmetrized. The clustering performance is evaluated by cluster purity = (1/n) Σ_{k=1}^{r} max_{1≤l≤r} n_k^l, where n_k^l is the number of data samples in cluster k that belong to ground-truth class l. A larger purity in general corresponds to a better clustering result.
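For reference, purity can be computed from predicted cluster labels and ground-truth class labels as in this small sketch (our own helper, not from the paper):

```python
import numpy as np

def cluster_purity(pred, truth):
    """purity = (1/n) * sum over clusters k of max_l n_k^l, where n_k^l
    counts samples of ground-truth class l inside predicted cluster k."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    total = 0
    for k in np.unique(pred):
        _, counts = np.unique(truth[pred == k], return_counts=True)
        total += counts.max()
    return total / len(pred)
```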
The resulting purities are shown in Table 1, where the rows are ordered by dataset size. We can
see that our method has much better performance than the other methods. NMFR wins 36 out of 46
Table 1: Clustering purities for the compared methods on various datasets (rows ordered by dataset size).

Dataset      Size   Ncut  PNMF  NSC   ONMF  PLSI  LSD   1Spec  ICM   DCD   NMFR
STRIKE       24     0.96  1.00  0.96  1.00  0.96  1.00  1.00   0.58  0.96  0.96
KOREA        35     1.00  0.94  0.71  1.00  1.00  1.00  0.71   0.66  0.97  1.00
AMLALL       38     0.92  0.92  0.92  0.92  0.92  0.92  0.92   0.50  0.92  0.89
DUKE         44     0.52  0.52  0.52  0.52  0.70  0.70  0.52   0.52  0.52  0.70
HIGHSCHOOL   60     0.83  0.82  0.83  0.82  0.83  0.82  0.82   0.82  0.83  0.95
KHAN         83     0.57  0.60  0.55  0.60  0.55  0.52  0.58   0.49  0.55  0.51
POLBOOKS     105    0.78  0.78  0.81  0.77  0.78  0.78  0.83   0.78  0.79  0.79
FOOTBALL     115    0.93  0.93  0.93  0.93  0.93  0.93  0.90   0.93  0.93  0.93
IRIS         150    0.90  0.93  0.90  0.92  0.91  0.75  0.91   0.53  0.91  0.91
CANCER       198    0.53  0.54  0.53  0.53  0.54  0.53  0.51   0.53  0.54  0.52
SPECT        267    0.79  0.79  0.79  0.79  0.79  0.79  0.79   0.79  0.79  0.79
ROSETTA      300    0.77  0.77  0.77  0.77  0.77  0.77  0.77   0.77  0.77  0.77
ECOLI        327    0.79  0.78  0.79  0.78  0.80  0.68  0.83   0.78  0.80  0.79
IONOSPHERE   351    0.69  0.69  0.70  0.69  0.69  0.64  0.69   0.69  0.69  0.68
ORL          400    0.81  0.82  0.82  0.82  0.83  0.81  0.80   0.19  0.83  0.83
UMIST        575    0.68  0.64  0.68  0.66  0.69  0.68  0.74   0.15  0.69  0.72
WDBC         683    0.65  0.65  0.65  0.65  0.65  0.65  0.65   0.65  0.65  0.65
DIABETES     768    0.65  0.65  0.65  0.65  0.65  0.65  0.65   0.65  0.65  0.65
VOWEL        1.0K   0.36  0.35  0.36  0.30  0.36  0.34  0.20   0.15  0.36  0.37
MED          1.0K   0.53  0.54  0.54  0.54  0.54  0.55  0.50   0.33  0.55  0.56
PIE          1.2K   0.67  0.66  0.68  0.66  0.68  0.69  0.64   0.12  0.68  0.74
YALEB        1.3K   0.45  0.42  0.46  0.41  0.51  0.50  0.37   0.10  0.51  0.51
TERROR       1.3K   0.45  0.45  0.46  0.46  0.46  0.45  0.44   0.34  0.45  0.49
ALPHADIGS    1.4K   0.49  0.45  0.49  0.44  0.49  0.49  0.48   0.10  0.50  0.51
COIL-20      1.4K   0.79  0.71  0.79  0.65  0.79  0.75  0.77   0.11  0.79  0.81
YEAST        1.5K   0.53  0.53  0.54  0.52  0.53  0.52  0.54   0.34  0.52  0.55
SEMEION      1.6K   0.83  0.87  0.83  0.85  0.85  0.89  0.82   0.13  0.85  0.94
FAULTS       1.9K   0.40  0.39  0.40  0.39  0.40  0.40  0.38   0.38  0.41  0.39
SEG          2.3K   0.61  0.51  0.61  0.53  0.61  0.64  0.55   0.32  0.61  0.73
ADS          2.4K   0.84  0.84  0.84  0.84  0.84  0.84  0.84   0.84  0.84  0.84
CORA         2.7K   0.38  0.37  0.37  0.37  0.44  0.46  0.36   0.30  0.44  0.47
MIREX        3.1K   0.41  0.40  0.42  0.38  0.41  0.38  0.12   0.27  0.18  0.43
CITESEER     3.3K   0.24  0.31  0.23  0.31  0.36  0.36  0.22   0.41  0.35  0.44
WEBKB4       4.2K   0.40  0.39  0.40  0.39  0.49  0.51  0.39   0.48  0.51  0.63
7SECTORS     4.6K   0.25  0.27  0.25  0.25  0.29  0.26  0.25   0.28  0.28  0.34
SPAM         4.6K   0.61  0.61  0.61  0.61  0.65  0.68  0.61   0.61  0.67  0.69
CURETGREY    5.6K   0.26  0.22  0.26  0.21  0.26  0.21  0.22   0.11  0.27  0.28
OPTDIGITS    5.6K   0.92  0.90  0.92  0.90  0.93  0.92  0.87   0.90  0.92  0.98
GISETTE      7.0K   0.90  0.52  0.93  0.51  0.93  0.93  0.93   0.62  0.93  0.94
REUTERS      8.3K   0.77  0.74  0.76  0.72  0.76  0.75  0.63   0.71  0.76  0.77
RCV1         9.6K   0.33  0.35  0.32  0.31  0.37  0.48  0.31   0.38  0.36  0.54
PENDIGITS    11K    0.80  0.82  0.80  0.77  0.80  0.86  0.82   0.52  0.80  0.87
PROTEIN      18K    0.46  0.46  0.46  0.46  0.46  0.46  0.46   0.46  0.46  0.50
20NEWS       20K    0.25  0.33  0.21  0.31  0.31  0.32  0.07   0.23  0.31  0.63
MNIST        70K    0.77  0.87  0.79  0.73  0.79  0.76  0.88   0.95  0.82  0.97
SEISMIC      99K    0.52  0.50  0.51  0.50  0.52  0.54  0.51   0.50  0.52  0.59
selected clustering tasks. Our method is especially superior for large-scale data in a curved manifold, for example, OPTDIGITS and MNIST. Note that cluster purity can be regarded as classification accuracy if we have a few labeled data samples to remove the ambiguity between clusters and classes. In this sense, the resulting purities for such manifold data are even comparable to state-of-the-art supervised classification results. Compared with the DCD results, which require Level-2 family initialization (see [25]), NMFR needs only Level-1 simple initialization. In addition, NMFR also brings remarkable improvement for datasets beyond digit or letter recognition, for example, the text data RCV1 and 20NEWS, the protein data PROTEIN, and the sensor data SEISMIC. Furthermore, it is worth noticing that our method performs more robustly across the various datasets than the other approaches. Even for some small datasets where NMFR is not the winner, its cluster purities are still close to the best.
5 Conclusions
We have presented a new NMF method using random walk for clustering. Our work includes two
major contributions: (1) we have shown that NMF approximation using least square error should be
applied on smoothed similarities; the smoothing accompanied with a novel regularization can often
significantly outperform spectral clustering; (2) the smoothing is realized in an implicit and scalable
way. Extensive empirical study has shown that our method can often improve clustering accuracy
remarkably given simple initialization.
Some issues could be included in the future work. Here we only discuss a certain type of smoothing
by random walk, while the proposed method could be extended by using other types of smoothing,
e.g. diffusion kernels, where scalable optimization could also be developed by using a similar iterative subroutine. Moreover, the smoothing brings improved clustering accuracy but at the cost of
increased running time. Algorithms that are more efficient in both time and space should be further
investigated. In addition, the approximated matrix could also be learnable. In current experiments,
we used constant K-NN graphs as input for fair comparison, which could be replaced by a more
comprehensive graph construction method (e.g. [28, 12, 17]).
6 Acknowledgement
This work was financially supported by the Academy of Finland (Finnish Center of Excellence in
Computational Inference Research COIN, grant no 251170; Zhirong Yang additionally by decision
number 140398).
Appendix: proof of Theorem 1
The proof follows the Majorization-Minimization development procedure in [26]. We use W and W̃ to distinguish the current estimate and the variable, respectively.

Given a real-valued matrix B, we can always decompose it into two nonnegative parts such that B = B^+ − B^−, where B^+_{ij} = (|B_{ij}| + B_{ij})/2 and B^−_{ij} = (|B_{ij}| − B_{ij})/2. In this way we decompose Λ = Λ^+ − Λ^− and ∂J(W̃)/∂W̃ |_{W̃=W} = ∇^+ − ∇^−, where ∇^+ = 4λVW and ∇^− = 2AW.
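This positive/negative split is mechanical; a tiny NumPy illustration (our own helper, not part of the proof):

```python
import numpy as np

def pos_neg_split(B):
    """B = B_plus - B_minus with both parts nonnegative."""
    B_plus = (np.abs(B) + B) / 2.0
    B_minus = (np.abs(B) - B) / 2.0
    return B_plus, B_minus   # np.allclose(B_plus - B_minus, B) holds
```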
(Majorization) Up to some additive constant,

J̃(W̃, Λ) ≤ −2 Tr(W̃^T A W) + λ Σ_{ik} (Σ_l W_{il}^2) (W̃_{ik}^4 / W_{ik}^2) + Σ_{ik} (Λ^+ W)_{ik} (W̃_{ik}^2 / W_{ik}) − 2 Tr(W̃^T Λ^− W)
          ≤ −2 Tr(W̃^T A W) + λ Σ_{ik} (Σ_l W_{il}^2) (W̃_{ik}^4 / W_{ik}^2) + Σ_{ik} (W_{ik} (Λ^+ W)_{ik} / 2) (W̃_{ik} / W_{ik})^4 − 2 Tr(W̃^T Λ^− W)
          ≜ G(W̃, W),

where the first inequality is by the standard convex-concave procedure, and the second upper bound is due to the inequality (z^a − 1)/a ≤ (z^b − 1)/b for z > 0 and a < b.

(Minimization) Setting ∂G(W̃, Λ)/∂W̃_{ik} = 0 gives

W^{new}_{ik} = W_{ik} [ (∇^− + 2WΛ^+)_{ik} / (∇^+ + 2WΛ^−)_{ik} ]^{1/4}.    (6)

Zeroing ∂L(W, Λ)/∂W gives 2WΛ = ∇^+ − ∇^−. Using W^T W = I, we obtain Λ = (1/2) W^T (∇^+ − ∇^−), i.e., 2WΛ^+ = WW^T ∇^+ and 2WΛ^− = WW^T ∇^−. Inserting these into Eq. (6), we obtain the update rule in Eq. (5).
References
[1] http://users.ics.aalto.fi/rozyang/nmfr/index.shtml.
[2] R. Arora, M. Gupta, A. Kapila, and M. Fazel. Clustering by left-stochastic matrix factorization. In ICML, 2011.
[3] D. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[4] Deng Cai, Xiaofei He, Jiawei Han, and Thomas S. Huang. Graph regularized non-negative matrix factorization for data representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8):1548–1560, 2011.
[5] A. Cichocki, S. Cruces, and S. Amari. Generalized alpha-beta divergences and their application to robust nonnegative matrix factorization. Entropy, 13:134–170, 2011.
[6] I. Dhillon, Y. Guan, and B. Kulis. Kernel k-means, spectral clustering and normalized cuts. In KDD, 2004.
[7] C. Ding, X. He, and H. D. Simon. On the equivalence of nonnegative matrix factorization and spectral clustering. In ICDM, 2005.
[8] C. Ding, T. Li, and M. I. Jordan. Nonnegative matrix factorization for combinatorial optimization: Spectral clustering, graph matching, and clique finding. In ICDM, 2008.
[9] C. Ding, T. Li, and M. I. Jordan. Convex and semi-nonnegative matrix factorizations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(1):45–55, 2010.
[10] C. Ding, T. Li, and W. Peng. On the equivalence between non-negative matrix factorization and probabilistic latent semantic indexing. Computational Statistics and Data Analysis, 52(8):3913–3927, 2008.
[11] C. Ding, T. Li, W. Peng, and H. Park. Orthogonal nonnegative matrix t-factorizations for clustering. In SIGKDD, 2006.
[12] E. Elhamifar and R. Vidal. Sparse manifold clustering and embedding. In NIPS, 2011.
[13] Z. He, S. Xie, R. Zdunek, G. Zhou, and A. Cichocki. Symmetric nonnegative matrix factorization: Algorithms and applications to probabilistic clustering. IEEE Transactions on Neural Networks, 22(12):2117–2131, 2011.
[14] M. Hein and T. Bühler. An inverse power method for nonlinear eigenproblems with applications in 1-spectral clustering and sparse PCA. In NIPS, 2010.
[15] D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In NIPS, 2000.
[16] C.-J. Lin. Projected gradient methods for non-negative matrix factorization. Neural Computation, 19:2756–2779, 2007.
[17] M. Maier, U. von Luxburg, and M. Hein. How the result of graph clustering methods depends on the construction of the graph. ESAIM: Probability & Statistics, 2012. In press.
[18] S. Mallat. Group invariant scattering. ArXiv e-prints, 2011.
[19] T. Minka. Estimating a Dirichlet distribution, 2000.
[20] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, 2001.
[21] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
[22] J. Sinkkonen, J. Aukia, and S. Kaski. Component models for large networks. ArXiv e-prints, 2008.
[23] Z. Yang and E. Oja. Linear and nonlinear projective nonnegative matrix factorization. IEEE Transactions on Neural Networks, 21(5):734–749, 2010.
[24] Z. Yang and E. Oja. Unified development of multiplicative algorithms for linear and quadratic nonnegative matrix factorization. IEEE Transactions on Neural Networks, 22(12):1878–1891, 2011.
[25] Z. Yang and E. Oja. Clustering by low-rank doubly stochastic matrix decomposition. In ICML, 2012.
[26] Z. Yang and E. Oja. Quadratic nonnegative matrix factorization. Pattern Recognition, 45(4):1500–1510, 2012.
[27] R. Zass and A. Shashua. A unifying approach to hard and probabilistic clustering. In ICCV, 2005.
[28] L. Zelnik-Manor and P. Perona. Self-tuning spectral clustering. In NIPS, 2004.
[29] D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, 2003.
Isotropic Hashing
Weihao Kong, Wu-Jun Li
Shanghai Key Laboratory of Scalable Computing and Systems
Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
{kongweihao,liwujun}@cs.sjtu.edu.cn
Abstract
Most existing hashing methods adopt some projection functions to project the original data into several dimensions of real values, and then each of these projected
dimensions is quantized into one bit (zero or one) by thresholding. Typically, the
variances of different projected dimensions are different for existing projection
functions such as principal component analysis (PCA). Using the same number
of bits for different projected dimensions is unreasonable because larger-variance
dimensions will carry more information. Although this viewpoint has been widely
accepted by many researchers, it is still not verified by either theory or experiment
because no methods have been proposed to find a projection with equal variances
for different dimensions. In this paper, we propose a novel method, called isotropic hashing (IsoHash), to learn projection functions which can produce projected
dimensions with isotropic variances (equal variances). Experimental results on
real data sets show that IsoHash can outperform its counterpart with different variances for different dimensions, which verifies the viewpoint that projections with
isotropic variances will be better than those with anisotropic variances.
1 Introduction
Due to its fast query speed and low storage cost, hashing [1, 5] has been successfully used for
approximate nearest neighbor (ANN) search [28]. The basic idea of hashing is to learn similarity-preserving binary codes for data representation. More specifically, each data point will be hashed
into a compact binary string, and similar points in the original feature space should be hashed into
close points in the hashcode space. Compared with the original feature representation, hashing has
two advantages. One is the reduced storage cost, and the other is the constant or sub-linear query
time complexity [28]. These two advantages make hashing become a promising choice for efficient
ANN search in massive data sets [1, 5, 6, 9, 10, 14, 15, 17, 20, 21, 23, 26, 29, 30, 31, 32, 33, 34].
Most existing hashing methods adopt some projection functions to project the original data into
several dimensions of real values, and then each of these projected dimensions is quantized into
one bit (zero or one) by thresholding. Locality-sensitive hashing (LSH) [1, 5] and its extensions [4, 18, 19, 22, 25] use simple random projections for hash functions. These methods are called
data-independent methods because the projection functions are independent of training data. Another class of methods is called data-dependent methods, whose projection functions are learned from
training data. Representative data-dependent methods include spectral hashing (SH) [31], anchor
graph hashing (AGH) [21], sequential projection learning (SPL) [29], principal component analysis [13] based hashing (PCAH) [7], and iterative quantization (ITQ) [7, 8]. SH learns the hashing
functions based on spectral graph partitioning. AGH adopts anchor graphs to speed up the computation of graph Laplacian eigenvectors, based on which the Nyström method is used to compute projection functions. SPL learns the projection functions in a sequential way, such that each function is designed to correct the errors caused by the previous one. PCAH adopts principal component analysis (PCA) to learn the projection functions. ITQ tries to learn an orthogonal rotation matrix to
refine the initial projection matrix learned by PCA so that the quantization error of mapping the data
1
to the vertices of binary hypercube is minimized. Compared to the data-dependent methods, the
data-independent methods need longer codes to achieve satisfactory performance [7].
For most existing projection functions such as those mentioned above, the variances of different
projected dimensions are different. Many researchers [7, 12, 21] have argued that using the same
number of bits for different projected dimensions with unequal variances is unreasonable because
larger-variance dimensions will carry more information. Some methods [7, 12] apply an orthogonal transformation to the PCA-projected data with the expectation of balancing the variances of different
PCA dimensions, and achieve better performance than the original PCA based hashing. However,
to the best of our knowledge, there exist no methods which can guarantee to learn a projection with
equal variances for different dimensions. Hence, the viewpoint that using the same number of bits for different projected dimensions is unreasonable has still not been verified by either theory or
experiment.
In this paper, a novel hashing method, called isotropic hashing (IsoHash), is proposed to learn a projection function which can produce projected dimensions with isotropic variances (equal variances).
To the best of our knowledge, this is the first work which can learn projections with isotropic variances for hashing. Experimental results on real data sets show that IsoHash can outperform its
counterpart with anisotropic variances for different dimensions, which verifies the intuitive viewpoint that projections with isotropic variances will be better than those with anisotropic variances.
Furthermore, the performance of IsoHash is also comparable, if not superior, to the state-of-the-art
methods.
2 Isotropic Hashing
2.1 Problem Statement
Assume we are given n data points {x_1, x_2, ..., x_n} with x_i ∈ R^d, which form the columns of the data matrix X ∈ R^{d×n}. Without loss of generality, in this paper the data are assumed to be zero-centered, which means Σ_{i=1}^n x_i = 0. The basic idea of hashing is to map each point x_i into a binary string y_i ∈ {0, 1}^m with m denoting the code size. Furthermore, close points in the original space R^d should be hashed into similar binary codes in the code space {0, 1}^m to preserve the similarity structure in the original space. In general, we compute the binary code of x_i as y_i = [h_1(x_i), h_2(x_i), ..., h_m(x_i)]^T with m binary hash functions {h_k(·)}_{k=1}^m.
Because it is NP hard to directly compute the best binary functions h_k(·) for a given data set [31], most hashing methods adopt a two-stage strategy to learn h_k(·). In the projection stage, m real-valued projection functions {f_k(x)}_{k=1}^m are learned, and each function generates one real value. Hence, we have m projected dimensions, each of which corresponds to one projection function. In the quantization stage, the real values are quantized into a binary string by thresholding.
Currently, most methods use one bit to quantize each projected dimension. More specifically, h_k(x_i) = sgn(f_k(x_i)), where sgn(x) = 1 if x ≥ 0 and 0 otherwise. The exceptions among the quantization methods only contain AGH [21], DBQ [14] and MH [15], which use two bits to quantize each dimension. In sum, all of these methods adopt the same number (either one or two) of bits for different projected dimensions. However, the variances of different projected dimensions are unequal, and larger-variance dimensions typically carry more information. Hence, using the same number of bits for different projected dimensions with unequal variances is unreasonable, which has also been argued by many researchers [7, 12, 21]. Unfortunately, there exist no methods which can learn projection functions with equal variances for different dimensions. In the following content of this section, we present a novel model to learn projections with isotropic variances.
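In code, the two-stage strategy is only a few lines. The sketch below is ours, with a generic projection matrix P standing in for whatever the projection stage learns; it shows projection followed by one-bit thresholding:

```python
import numpy as np

def binary_codes(X, P):
    """Project and quantize: f_k(x) = p_k^T x, then h_k(x) = 1 if
    f_k(x) >= 0, else 0.  X: d x n zero-centered data, P: d x m."""
    projections = P.T @ X                       # m real values per point
    return (projections >= 0).astype(np.uint8)  # m x n binary codes
```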
2.2 Model Formulation
The idea of our IsoHash method is to learn an orthogonal matrix to rotate the PCA projection matrix.
To generate a code of m bits, PCAH performs PCA on X, and then uses the top m eigenvectors of the covariance matrix XX^T as columns of the projection matrix W ∈ R^{d×m}. Here, the top m eigenvectors are those corresponding to the m largest eigenvalues {λ_k}_{k=1}^m, generally arranged in the non-increasing order λ_1 ≥ λ_2 ≥ ... ≥ λ_m. Hence, the projection functions of PCAH are defined as follows: f_k(x) = w_k^T x, where w_k is the kth column of W.
Let λ = [λ_1, λ_2, ..., λ_m]^T and Λ = diag(λ), where diag(λ) denotes the diagonal matrix whose diagonal entries are formed from the vector λ. It is easy to prove that W^T XX^T W = Λ. Hence, the variance of the values {f_k(x_i)}_{i=1}^n on the kth projected dimension, which corresponds to the kth row of W^T X, is λ_k. Obviously, the variances for different PCA dimensions are anisotropic.
To get isotropic projection functions, the idea of our IsoHash method is to learn an orthogonal matrix Q ∈ R^{m×m} which makes Q^T W^T XX^T WQ become a matrix with equal diagonal values, i.e., [Q^T W^T XX^T WQ]_{11} = [Q^T W^T XX^T WQ]_{22} = ... = [Q^T W^T XX^T WQ]_{mm}. Here, A_{ii} denotes the ith diagonal entry of a square matrix A, and a matrix Q is said to be orthogonal if Q^T Q = I, where I is an identity matrix whose dimensionality depends on the context. The effect of the orthogonal matrix Q is to rotate the coordinate axes while keeping the Euclidean distances between any two points unchanged. It is easy to prove that the new projection functions of IsoHash are f_k(x) = (WQ)_k^T x, which all have the same (isotropic) variance. Here (WQ)_k denotes the kth column of WQ.
If we use tr(A) to denote the trace of a symmetric matrix A, we have the following Lemma 1.
Lemma 1. If Q^T Q = I, then tr(Q^T AQ) = tr(A).
Based on Lemma 1, we have tr(Q^T W^T XX^T WQ) = tr(W^T XX^T W) = tr(Λ) = Σ_{i=1}^m λ_i if Q^T Q = I. Hence, to make Q^T W^T XX^T WQ become a matrix with equal diagonal values, we should set this diagonal value to a = (Σ_{i=1}^m λ_i)/m.
Let

a = [a_1, a_2, ..., a_m] with a_i = a = (Σ_{i=1}^m λ_i)/m,    (1)

and

T(z) = {T ∈ R^{m×m} | diag(T) = diag(z)},

where z is a vector of length m, and diag(T) is overloaded to denote a diagonal matrix with the same diagonal entries as the matrix T.
Based on our motivation of IsoHash, we can define the problem of IsoHash as follows:
Problem 1. The problem of IsoHash is to find an orthogonal matrix Q making Q^T W^T XX^T WQ ∈ T(a), where a is defined in (1).
Then, we have the following Theorem 1:
Theorem 1. Assume Q^T Q = I and T ∈ T(a). If Q^T ΛQ = T, Q will be a solution to the problem of IsoHash.
Proof. Because W^T XX^T W = Λ, we have Q^T ΛQ = Q^T [W^T XX^T W]Q. It is obvious that Q will be a solution to the problem of IsoHash.
As in [2], we define

M(Λ) = {Q^T ΛQ | Q ∈ O(m)},    (2)

where O(m) is the set of all orthogonal matrices in R^{m×m}, i.e., Q^T Q = I.
According to Theorem 1, the problem of IsoHash is equivalent to finding an orthogonal matrix Q for the following equation [2]:

‖T − Z‖_F = 0,    (3)

where T ∈ T(a), Z ∈ M(Λ), and ‖·‖_F denotes the Frobenius norm. Please note that for ease of understanding, we use the same notations as those in [2].
In the following content, we will use the Schur-Horn lemma [11] to prove that we can always find a
solution to problem (3).
Lemma 2 (Schur-Horn Lemma). Let c = {c_i} ∈ R^m and b = {b_i} ∈ R^m be real vectors in non-increasing order, respectively¹, i.e., c_1 ≥ c_2 ≥ ... ≥ c_m and b_1 ≥ b_2 ≥ ... ≥ b_m. There exists a Hermitian matrix H with eigenvalues c and diagonal values b if and only if

Σ_{i=1}^k b_i ≤ Σ_{i=1}^k c_i, for any k = 1, 2, ..., m,
Σ_{i=1}^m b_i = Σ_{i=1}^m c_i.

Proof. Please refer to Horn's article [11].
Based on Lemma 2, we have the following Theorem 2.
Theorem 2. There exists a solution to the IsoHash problem in (3). And this solution is in the intersection of T(a) and M(Λ).
Proof. Because λ_1 ≥ λ_2 ≥ ... ≥ λ_m and a_1 = a_2 = ... = a_m = (Σ_{i=1}^m λ_i)/m, it is easy to prove that (Σ_{i=1}^k λ_i)/k ≥ (Σ_{i=1}^m λ_i)/m for any k. Hence, Σ_{i=1}^k λ_i ≥ k(Σ_{i=1}^m λ_i)/m = Σ_{i=1}^k a_i for any k. Furthermore, we can prove that Σ_{i=1}^m λ_i = Σ_{i=1}^m a_i. According to Lemma 2, there exists a Hermitian matrix H with eigenvalues λ and diagonal values a.
Moreover, we can prove that H is in the intersection of T(a) and M(Λ), i.e., H ∈ T(a) and H ∈ M(Λ).
According to Theorem 2, finding a Q solving the problem in (3) is equivalent to finding the intersection point of T(a) and M(Λ), which is just an inverse eigenvalue problem called SHIEP in [2].
2.3 Learning
The problem in (3) can be reformulated as the following optimization problem:

argmin_{Q: T∈T(a), Z∈M(Λ)} ‖T − Z‖_F.    (4)

As in [2], we propose two algorithms to learn Q: one is called lift and projection (LP), and the other is called gradient flow (GF). For ease of understanding, we use the same notations as those in [2], and some proofs of theorems are omitted. The readers can refer to [2] for the details.
2.3.1 Lift and Projection
The main idea of the lift and projection (LP) algorithm is to alternate between the following two steps:
- Lift step: given T^(k) ∈ T(a), we find the point Z^(k) ∈ M(Λ) such that ‖T^(k) − Z^(k)‖_F = dist(T^(k), M(Λ)), where dist(T^(k), M(Λ)) denotes the minimum distance between T^(k) and the points in M(Λ).
- Projection step: given Z^(k), we find T^(k+1) ∈ T(a) such that ‖T^(k+1) − Z^(k)‖_F = dist(T(a), Z^(k)), where dist(T(a), Z^(k)) denotes the minimum distance between Z^(k) and the points in T(a).
¹ Please note that in [2] the values are in increasing order. It is easy to prove that our presentation of the Schur-Horn lemma is equivalent to that in [2]. The non-increasing order is chosen here just because it facilitates our presentation, due to the non-increasing order of the eigenvalues in Λ.
We call Z^(k) a lift of T^(k) onto M(Λ) and T^(k+1) a projection of Z^(k) onto T(a). The projection operation is easy to complete. Suppose Z^(k) = [z_{ij}]; then T^(k+1) = [t_{ij}] must be given by

t_{ij} = z_{ij} if i ≠ j;  t_{ij} = a_i if i = j.    (5)
For the lift operation, we have the following Theorem 3.
Theorem 3. Suppose T = Q^T DQ is an eigen-decomposition of T, where D = diag(d) with d = [d_1, d_2, ..., d_m]^T being T's eigenvalues, ordered as d_1 ≥ d_2 ≥ ... ≥ d_m. Then the nearest neighbor of T in M(Λ) is given by

Z = Q^T ΛQ.    (6)

Proof. See Theorem 4.1 in [3].
Since in each step we minimize the distance between T and Z, we have

‖T^(k) − Z^(k)‖_F ≥ ‖T^(k+1) − Z^(k)‖_F ≥ ‖T^(k+1) − Z^(k+1)‖_F.

It is easy to see that (T^(k), Z^(k)) will converge to a stationary point. The whole IsoHash algorithm based on LP, abbreviated as IsoHash-LP, is briefly summarized in Algorithm 1.
Algorithm 1 Lift and projection based IsoHash (IsoHash-LP)
Input: X ∈ R^{d×n}, m ∈ N+, t ∈ N+
  [Λ, W] = PCA(X, m), as stated in Section 2.2.
  Generate a random orthogonal matrix Q_0 ∈ R^{m×m}.
  Z^(0) ← Q_0^T Λ Q_0.
  for k = 1 → t do
    Calculate T^(k) from Z^(k-1) by equation (5).
    Perform eigen-decomposition of T^(k) to get Q_k^T D Q_k = T^(k).
    Calculate Z^(k) from Q_k and Λ by equation (6).
  end for
  Y = sgn(Q_t^T W^T X).
Output: Y
Because M(Λ) is not a convex set, the stationary point we find is not necessarily inside the intersection of T(a) and M(Λ). For example, if we set Z^(0) = Λ, the lift and projection learning algorithm would make no progress because Z and T are already at a stationary point. To avoid such degenerate solutions, we initialize Z as Λ transformed by some random orthogonal matrix Q_0, which is illustrated in Algorithm 1.
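For concreteness, a compact NumPy sketch of Algorithm 1 is given below. It follows the projection and lift steps literally; the function name is ours and the dense eigendecomposition of XX^T is a simplifying assumption (fine for moderate d), so this is an illustration of the procedure rather than the authors' implementation:

```python
import numpy as np

def isohash_lp(X, m, t=100, seed=0):
    """Sketch of IsoHash-LP. X: d x n zero-centered data; m: code length."""
    rng = np.random.default_rng(seed)
    # PCA stage: top-m eigenpairs of the covariance matrix X X^T.
    eigvals, eigvecs = np.linalg.eigh(X @ X.T)
    idx = np.argsort(eigvals)[::-1][:m]
    lam, W = eigvals[idx], eigvecs[:, idx]      # lam is non-increasing
    a = np.full(m, lam.sum() / m)               # target diagonal, Eq. (1)

    # Random orthogonal start, to avoid the degenerate point Z = Lambda.
    Q0, _ = np.linalg.qr(rng.standard_normal((m, m)))
    U = Q0.T
    Z = U @ np.diag(lam) @ U.T
    for _ in range(t):
        T = Z.copy()
        np.fill_diagonal(T, a)                  # projection step, Eq. (5)
        d, U = np.linalg.eigh(T)                # lift step, Theorem 3
        U = U[:, np.argsort(d)[::-1]]           # pair largest d_i with largest lam_i
        Z = U @ np.diag(lam) @ U.T              # nearest point in M(Lambda)
    # The rotated projections now have (approximately) equal variances,
    # since their covariance is U Lambda U^T = Z with diag(Z) close to a.
    return (U @ (W.T @ X) >= 0).astype(np.uint8)
```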
2.3.2 Gradient Flow
Another learning algorithm is a continuous one based on the construction of a gradient flow (GF) on the surface M(Λ) that moves towards the desired intersection point. Because there always exists a solution to the problem in (3) according to Theorem 2, the objective function in (4) can be reformulated as follows [2]:

min_{Q∈O(m)} F(Q) = (1/2) ‖diag(Q^T ΛQ) − diag(a)‖_F^2.    (7)
The details about how to optimize (7) can be found in [2]. We just show some key steps of the learning algorithm in the following content.
The gradient ∇F at Q can be calculated as

∇F(Q) = 2ΛQβ(Q),    (8)

where β(Q) = diag(Q^T ΛQ) − diag(a). Once we have computed the gradient of F, it can be projected onto the manifold O(m) according to the following Theorem 4.
Theorem 4. The projection of ∇F(Q) onto O(m) is given by

g(Q) = Q[Q^T ΛQ, β(Q)],    (9)

where [A, B] = AB − BA is the Lie bracket.
Proof. See formulas (20), (21) and (22) in [3].
The vector field Q̇ = −g(Q) defines a steepest descent flow on the manifold O(m) for the function F(Q). Letting Z = Q^T ΛQ and β(Z) = β(Q), we get

Ż = [Z, [β(Z), Z]],    (10)

where Ż is an isospectral flow that moves to reduce the objective function F(Q).
As stated by Theorems 3.3 and 3.4 in [2], a stable equilibrium point of (10) must satisfy β(Q) = 0, which means that F(Q) has decreased to zero. Hence, the gradient flow method can always find an intersection point as the solution. The whole IsoHash algorithm based on GF, abbreviated as IsoHash-GF, is briefly summarized in Algorithm 2.
Algorithm 2 Gradient flow based IsoHash (IsoHash-GF)
Input: X ∈ R^{d×n}, m ∈ N+
  [Λ, W] = PCA(X, m), as stated in Section 2.2.
  Generate a random orthogonal matrix Q_0 ∈ R^{m×m}.
  Z^(0) ← Q_0^T Λ Q_0.
  Start integration from Z = Z^(0) with the gradient computed from equation (10).
  Stop integration when reaching a stable equilibrium point.
  Perform eigen-decomposition of Z to get Q^T ΛQ = Z.
  Y = sgn(Q^T W^T X).
Output: Y
We now discuss some implementation details of IsoHash-GF. Since all diagonal matrices in M(Λ) result in Ż = 0, one should not use Λ as the starting point. In our implementation, we use the same method as in IsoHash-LP to avoid this degenerate case, i.e., a random orthogonal transformation matrix Q_0 is used to rotate Λ. To integrate Z with the gradient in (10), we use the Adams-Bashforth-Moulton PECE solver in [27], where the parameter RelTol is set to 10^{-3}. The relative error of the algorithm is computed by comparing the diagonal entries of Z to the target diag(a). The whole integration process is terminated when the relative error falls below 10^{-7}.
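A minimal sketch of the flow integration is shown below; it substitutes SciPy's general-purpose solve_ivp for the Adams-Bashforth-Moulton PECE solver used by the authors, so the solver choice and stopping rule here are simplifying assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

def integrate_isospectral_flow(a, Z0, t_max=100.0):
    """Integrate Z' = [Z, [beta(Z), Z]] (Eq. (10)) starting from Z0,
    where beta(Z) = diag(diag(Z) - a). The spectrum of Z is preserved."""
    m = Z0.shape[0]

    def rhs(_, z):
        Z = z.reshape(m, m)
        beta = np.diag(np.diagonal(Z) - a)
        inner = beta @ Z - Z @ beta        # [beta(Z), Z]
        return (Z @ inner - inner @ Z).ravel()

    sol = solve_ivp(rhs, (0.0, t_max), Z0.ravel(), rtol=1e-3, atol=1e-6)
    Z_final = sol.y[:, -1].reshape(m, m)
    # After convergence, eigendecompose Z_final = Q^T Lambda Q to read off Q.
    return Z_final
```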
2.4 Complexity Analysis
The learning of our IsoHash method contains two phases: the first phase is PCA and the second phase is LP or GF. The time complexity of PCA is O(min(n^2 d, nd^2)). The time complexity of LP after PCA is O(m^3 t), and that of GF after PCA is O(m^3). In our experiments, t is set to 100 because good performance can be achieved at this setting. Because m is typically set to be a very small number like 64 or 128, the main time complexity of IsoHash is from the PCA phase. In general, the training of IsoHash-GF will be faster than IsoHash-LP in our experiments.
One promising property of both LP and GF is that the time complexity after PCA is independent of the number of training data, which makes them scalable to large-scale data sets.
3 Relation to Existing Works
The most related method of IsoHash is ITQ [7], because both ITQ and IsoHash have to learn an
orthogonal matrix. However, IsoHash is different from ITQ in many aspects: firstly, the goal of
IsoHash is to learn a projection with isotropic variances, but the results of ITQ cannot necessarily guarantee isotropic variances; secondly, IsoHash directly learns the orthogonal matrix from the
eigenvalues and eigenvectors of PCA, but ITQ first quantizes the PCA results to get some binary
codes, and then learns the orthogonal matrix based on the resulting binary codes; thirdly, IsoHash
has an explicit objective function to optimize, but ITQ uses a two-step heuristic strategy whose
goal cannot be formulated by a single objective function; fourthly, to learn the orthogonal matrix,
IsoHash uses Lift and Projection or Gradient Flow, but ITQ uses a Procrustes method, which is much
slower than IsoHash. From the experimental results which will be presented in the next section, we
can find that IsoHash can achieve accuracy comparable to ITQ with much faster training speed.
4 Experiment
4.1 Data Sets
We evaluate our methods on two widely used data sets, CIFAR [16] and LabelMe [28].
The first data set is CIFAR-10 [16], which consists of 60,000 images. These images are manually labeled into 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. The size of each image is 32×32 pixels. We represent them with 256-dimensional gray-scale GIST descriptors [24].
The second data set is 22K LabelMe used in [23, 28], which contains 22,019 images sampled from the large LabelMe data set. As in [28], the images are scaled to 32×32 pixels and then represented by 512-dimensional GIST descriptors [24].
4.2 Evaluation Protocols and Baselines
Following the protocols widely used in recent papers [7, 23, 25, 31], Euclidean neighbors in the original space are considered as ground truth. More specifically, a threshold of the average distance to the 50th
nearest neighbor is used to define whether a point is a true positive or not. Based on the Euclidean
ground truth, we compute the precision-recall curve and mean average precision (mAP) [7, 21]. For
all experiments, we randomly select 1000 points as queries, and leave the rest as training set to learn
the hash functions. All the experimental results are averaged over 10 random training/test partitions.
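As an illustration of this protocol, the ground-truth threshold can be computed as below (a naive O(n^2) sketch of our own; a KD-tree or similar structure would be used at scale):

```python
import numpy as np

def ground_truth_threshold(X, k=50):
    """Average Euclidean distance to the k-th nearest neighbor,
    used as the radius that defines true neighbors. X: n x d."""
    sq = np.sum(X**2, axis=1)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * (X @ X.T), 0.0)
    kth = np.sort(np.sqrt(D2), axis=1)[:, k]   # column 0 is the point itself
    return float(kth.mean())
```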
Although a lot of hashing methods have been proposed, some of them are either supervised [23]
or semi-supervised [29]. Our IsoHash method is essentially an unsupervised one. Hence, for fair
comparison, we select the most representative unsupervised methods for evaluation, which contain
PCAH [7], ITQ [7], SH [31], LSH [1], and SIKH [25]. Among these methods, PCAH, ITQ and SH
are data-dependent methods, while SIKH and LSH are data-independent methods.
All experiments are conducted on our workstation with Intel(R) Xeon(R) CPU [email protected]
and 64G memory.
4.3 Accuracy
Table 1 shows the Hamming ranking performance measured by mAP on LabelMe and CIFAR. It
is clear that our IsoHash methods, including both IsoHash-GF and IsoHash-LP, achieve far better
performance than PCAH. The main difference between IsoHash and PCAH is that the PCAH dimensions have anisotropic variances while IsoHash dimensions have isotropic variances. Hence,
the intuitive viewpoint that using the same number of bits for different projected dimensions with
anisotropic variances is unreasonable has been successfully verified by our experiments. Furthermore, the performance of IsoHash is also comparable, if not superior, to the state-of-the-art methods,
such as ITQ.
Figure 1 illustrates the precision-recall curves on LabelMe data set with different code sizes. The
relative performance in the precision-recall curves on CIFAR is similar to that on LabelMe. We omit
the results on CIFAR due to space limitation. Once again, we can find that our IsoHash methods can
achieve performance which is far better than PCAH and comparable to the state-of-the-art.
4.4 Computational Cost
Table 2 shows the training time on CIFAR. We can see that our IsoHash methods are much faster
than ITQ. The time complexity of ITQ also contains two parts: the first part is PCA which is the same
7
Table 1: mAP on LabelMe and CIFAR data sets.
32
0.2580
0.2534
0.0516
0.2786
0.0826
0.0590
0.1549
64
0.3269
0.3223
0.0401
0.3328
0.1034
0.1482
0.2574
0.6
0.4
0.2
0.4
0.6
0.4
0.6
0.8
1
0
0
Recall
0.2
0.4
0.6
0.8
0.6
IsoHash?GF
IsoHash?LP
ITQ
SH
SIKH
LSH
PCAH
0.6
0.4
0.2
0.2
0.4
0.6
0.8
1
0
0
0.2
0.4
Recall
(b) 64 bits
256
0.3600
0.3651
0.0168
0.3436
0.1535
0.3614
0.3432
0.8
0.4
0
0
1
128
0.3357
0.3223
0.0216
0.3319
0.1121
0.1909
0.2776
1
IsoHash?GF
IsoHash?LP
ITQ
SH
SIKH
LSH
PCAH
0.2
Recall
(a) 32 bits
CIFAR
96
0.3256
0.3027
0.0241
0.3238
0.0802
0.1245
0.2396
64
0.2969
0.2624
0.0274
0.3051
0.0589
0.0902
0.1907
0.8
0.2
0.2
32
0.2249
0.1907
0.0319
0.2490
0.0510
0.0353
0.1052
1
IsoHash?GF
IsoHash?LP
ITQ
SH
SIKH
LSH
PCAH
0.8
Precision
Precision
256
0.3889
0.4274
0.0232
0.3728
0.2080
0.4488
0.4034
1
IsoHash?GF
IsoHash?LP
ITQ
SH
SIKH
LSH
PCAH
0.8
0
0
128
0.3662
0.3826
0.0307
0.3615
0.1653
0.2526
0.3375
Precision
1
LabelMe
96
0.3528
0.3577
0.0341
0.3504
0.1447
0.2074
0.3147
Precision
Method
# bits
IsoHash-GF
IsoHash-LP
PCAH
ITQ
SH
SIKH
LSH
0.6
0.8
1
Recall
(c) 96 bits
(d) 256 bits
Figure 1: Precision-recall curves on LabelMe data set.
as that in IsoHash, and the second part is an iteration algorithm to rotate the original PCA matrix with time complexity O(nm^2), where n is the number of training points and m is the number of bits in the binary code. Hence, as the number of training data increases, the second-part time complexity of ITQ will increase linearly with the number of training points. But the time complexity of IsoHash after PCA is independent of the number of training points. Hence, IsoHash will be much faster than ITQ, particularly in the case with a large number of training points. This is clearly shown in Figure 2, which illustrates the training time as the number of training data is varied.
Table 2: Training time (in seconds) on CIFAR.

# bits        32     64     96     128     256
IsoHash-GF    2.48   2.45   2.70   3.00    5.55
IsoHash-LP    2.14   2.43   2.94   3.47    8.83
PCAH          1.84   2.14   2.23   2.36    2.92
ITQ           4.35   6.33   9.73   12.40   29.25
SH            1.60   3.41   8.37   13.66   49.44
SIKH          1.30   1.44   1.57   1.55    2.20
LSH           0.05   0.08   0.11   0.19    0.31

Figure 2: Training time on CIFAR as the number of training data grows (horizontal axis in units of 10^4 points), comparing IsoHash-GF, IsoHash-LP, ITQ, SH, SIKH, LSH, and PCAH.
5 Conclusion
Although many researchers have intuitively argued that using the same number of bits for different
projected dimensions with anisotropic variances is unreasonable, this viewpoint has still not been
verified by either theory or experiment because no methods have been proposed to find projection
functions with isotropic variances for different dimensions. The proposed IsoHash method in this
paper is the first work to learn projection functions which can produce projected dimensions with
isotropic variances (equal variances). Experimental results on real data sets have successfully verified the viewpoint that projections with isotropic variances will be better than those with anisotropic
variances. Furthermore, IsoHash can achieve accuracy comparable to the state-of-the-art methods
with faster training speed.
6 Acknowledgments
This work is supported by the NSFC (No. 61100125), the 863 Program of China (No. 2011AA01A202, No. 2012AA011003), and the Program
for Changjiang Scholars and Innovative Research Team in University of China (IRT1158, PCSIRT).
References
[1] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Commun. ACM, 51(1):117–122, 2008.
[2] M. T. Chu. Constructing a Hermitian matrix from its diagonal entries and eigenvalues. SIAM Journal on Matrix Analysis and Applications, 16(1):207–217, 1995.
[3] M. T. Chu and K. R. Driessel. The projected gradient method for least squares matrix approximations with spectral constraints. SIAM Journal on Numerical Analysis, pages 1050–1060, 1990.
[4] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In Proceedings of the ACM Symposium on Computational Geometry, 2004.
[5] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In VLDB, 1999.
[6] Y. Gong, S. Kumar, V. Verma, and S. Lazebnik. Angular quantization based binary codes for fast similarity search. In NIPS, 2012.
[7] Y. Gong and S. Lazebnik. Iterative quantization: A Procrustean approach to learning binary codes. In CVPR, 2011.
[8] Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A Procrustean approach to learning binary codes for large-scale image retrieval. IEEE Trans. Pattern Anal. Mach. Intell., 2012.
[9] J. He, W. Liu, and S.-F. Chang. Scalable similarity search with optimized kernel hashing. In KDD, 2010.
[10] J.-P. Heo, Y. Lee, J. He, S.-F. Chang, and S.-E. Yoon. Spherical hashing. In CVPR, 2012.
[11] A. Horn. Doubly stochastic matrices and the diagonal of a rotation matrix. American Journal of Mathematics, 76(3):620–630, 1954.
[12] H. Jegou, M. Douze, C. Schmid, and P. Pérez. Aggregating local descriptors into a compact image representation. In CVPR, 2010.
[13] I. Jolliffe. Principal Component Analysis. Springer, 2002.
[14] W. Kong and W.-J. Li. Double-bit quantization for hashing. In AAAI, 2012.
[15] W. Kong, W.-J. Li, and M. Guo. Manhattan hashing for large-scale image retrieval. In SIGIR, 2012.
[16] A. Krizhevsky. Learning multiple layers of features from tiny images. Tech report, University of Toronto, 2009.
[17] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. In NIPS, 2009.
[18] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing for scalable image search. In ICCV, 2009.
[19] B. Kulis, P. Jain, and K. Grauman. Fast similarity search for learned metrics. IEEE Trans. Pattern Anal. Mach. Intell., 31(12):2143–2157, 2009.
[20] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. In CVPR, 2012.
[21] W. Liu, J. Wang, S. Kumar, and S.-F. Chang. Hashing with graphs. In ICML, 2011.
[22] Y. Mu and S. Yan. Non-metric locality-sensitive hashing. In AAAI, 2010.
[23] M. Norouzi and D. J. Fleet. Minimal loss hashing for compact binary codes. In ICML, 2011.
[24] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial envelope. International Journal of Computer Vision, 42(3):145–175, 2001.
[25] M. Raginsky and S. Lazebnik. Locality-sensitive binary codes from shift-invariant kernels. In NIPS, 2009.
[26] R. Salakhutdinov and G. E. Hinton. Semantic hashing. Int. J. Approx. Reasoning, 50(7):969–978, 2009.
[27] L. F. Shampine and M. K. Gordon. Computer solution of ordinary differential equations: the initial value problem. Freeman, San Francisco, California, 1975.
[28] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large image databases for recognition. In CVPR, 2008.
[29] J. Wang, S. Kumar, and S.-F. Chang. Sequential projection learning for hashing with compact codes. In ICML, 2010.
[30] J. Wang, S. Kumar, and S.-F. Chang. Semi-supervised hashing for large-scale search. IEEE Trans. Pattern Anal. Mach. Intell., 34(12):2393–2406, 2012.
[31] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, 2008.
[32] H. Xu, J. Wang, Z. Li, G. Zeng, S. Li, and N. Yu. Complementary hashing for approximate nearest neighbor search. In ICCV, 2011.
[33] D. Zhang, F. Wang, and L. Si. Composite hashing with multiple information sources. In SIGIR, 2011.
[34] Y. Zhen and D.-Y. Yeung. A probabilistic model for multimodal hash function learning. In KDD, 2012.
envelope:1 rest:1 wkt:1 flow:8 schur:3 call:1 near:1 easy:6 embeddings:1 reduce:1 idea:5 cn:1 airplane:1 shift:1 fleet:1 whether:1 pca:20 reformulated:2 generally:1 tij:2 clear:1 eigenvectors:4 liwujun:1 reduced:1 generate:4 outperform:2 exist:2 key:2 threshold:1 verified:5 graph:5 sum:1 raginsky:1 inverse:1 reader:1 spl:2 wu:1 comparable:5 bit:20 bashforth:1 layer:1 refine:1 truck:1 constraint:1 x2:1 scene:1 aspect:1 speed:4 min:2 innovative:1 kumar:4 department:1 according:5 alternate:1 lp:18 making:1 intuitively:1 iccv:2 invariant:1 equation:5 abbreviated:2 discus:1 jolliffe:1 initiate:1 letting:1 end:1 operation:2 unreasonable:6 spectral:4 eigen:3 slower:1 similaritypreserving:1 original:9 top:2 denotes:6 include:1 hypercube:1 unchanged:1 move:2 objective:4 already:1 strategy:2 mirrokni:1 diagonal:14 said:1 qt0:2 gradient:11 kth:4 distance:5 manifold:2 sjtu:1 code:18 length:1 dqk:1 unfortunately:1 statement:1 trace:1 stated:3 ba:1 implementation:2 anal:3 perform:2 descent:1 hinton:1 team:1 qtk:1 overloaded:1 dog:1 optimized:1 california:1 unequal:3 learned:4 nm2:1 nip:4 trans:3 below:1 pattern:3 program:2 including:1 memory:1 scheme:1 realvalued:1 hm:1 jun:1 zhen:1 schmid:1 gf:18 understanding:2 relative:3 manhattan:1 loss:2 limitation:1 h2:1 integrate:1 article:1 thresholding:3 viewpoint:7 dq:1 verma:1 pi:1 balancing:1 tiny:1 row:1 changed:1 supported:1 keeping:1 neighbor:6 ghz:1 curve:4 dimension:36 xn:1 calculated:1 adopts:2 projected:21 san:1 bm:1 far:2 approximate:3 compact:4 anchor:2 b1:1 assumed:1 quantizes:1 francisco:1 xi:10 fergus:2 search:9 iterative:3 continuous:1 table:4 promising:2 learn:18 ca:2 quantize:2 automobile:1 necessarily:2 constructing:1 protocol:2 diag:11 pk:3 main:3 linearly:1 terminated:1 motivation:1 whole:3 n2:1 verifies:2 fair:1 complementary:1 x1:1 xu:1 representative:2 intel:1 tong:1 precision:9 sub:1 explicit:1 lie:1 learns:3 theorem:14 formula:1 exists:4 quantization:8 andoni:1 sequential:3 ci:3 illustrates:2 locality:5 intersection:6 ordered:1 chang:6 springer:1 corresponds:2 truth:2 acm:2 identity:1 presentation:2 goal:2 ann:2 formulated:1 towards:1 labelme:9 content:3 hard:1 specifically:3 principal:4 lemma:9 called:7 accepted:1 experimental:5 m3:2 exception:1 select:2 immorlica:1 guo:1 rotate:4 evaluate:1 d1:2 |
4,250 | 4,847 | Super-Bit Locality-Sensitive Hashing
Jianqiu Ji†, Jianmin Li†, Shuicheng Yan‡, Bo Zhang†, Qi Tian§
†State Key Laboratory of Intelligent Technology and Systems,
Tsinghua National Laboratory for Information Science and Technology (TNList),
Department of Computer Science and Technology,
Tsinghua University, Beijing 100084, China
[email protected],
{lijianmin, dcszb}@mail.tsinghua.edu.cn
‡Department of Electrical and Computer Engineering,
National University of Singapore, Singapore, 117576
[email protected]
§Department of Computer Science, University of Texas at San Antonio,
One UTSA Circle, University of Texas at San Antonio, San Antonio, TX 78249-1644
[email protected]
Abstract
Sign-random-projection locality-sensitive hashing (SRP-LSH) is a probabilistic
dimension reduction method which provides an unbiased estimate of angular similarity, yet suffers from the large variance of its estimation. In this work, we propose the Super-Bit locality-sensitive hashing (SBLSH). It is easy to implement,
which orthogonalizes the random projection vectors in batches, and it is theoretically guaranteed that SBLSH also provides an unbiased estimate of angular similarity, yet with a smaller variance when the angle to estimate is within (0, π/2].
The extensive experiments on real data well validate that given the same length
of binary code, SBLSH may achieve significant mean squared error reduction in
estimating pairwise angular similarity. Moreover, SBLSH shows the superiority
over SRP-LSH in approximate nearest neighbor (ANN) retrieval experiments.
1 Introduction
The locality-sensitive hashing (LSH) method aims to hash similar data samples to the same hash code
with high probability [7, 9]. There exist various kinds of LSH for approximating different distances
or similarities, e.g., bit-sampling LSH [9, 7] for Hamming distance and ℓ1-distance, min-hash [2, 5]
for Jaccard coefficient. Among them are some binary LSH schemes, which generate binary codes.
Binary LSH approximates a certain distance or similarity of two data samples by computing the
Hamming distance between the corresponding compact binary codes. Since computing Hamming
distance involves mainly bitwise operations, it is much faster than directly computing other distances, e.g. Euclidean, cosine, which require many arithmetic operations. On the other hand, the
storage is substantially reduced due to the use of compact binary codes. In large-scale applications
[22, 11, 5, 17], e.g. near-duplicate image detection, object and scene recognition, etc., we are often
confronted with the intensive computing of distances or similarities between samples, then binary
LSH may act as a scalable solution.
1.1 Locality-Sensitive Hashing for Angular Similarity
For many data representations, the natural pairwise similarity is only related to the angle between
the data, e.g., the normalized bag-of-words representation for documents, images, and videos, and
the normalized histogram-based local features like SIFT [20]. In these cases, angular similarity
can serve as a similarity measurement, which is defined as sim(a, b) = 1 − cos⁻¹(⟨a, b⟩/(‖a‖‖b‖))/π. Here ⟨a, b⟩ denotes the inner product of a and b, and ‖·‖ denotes the ℓ2-norm of a vector.
One popular LSH for approximating angular similarity is the sign-random-projection LSH (SRP-LSH) [3], which provides an unbiased estimate of angular similarity and is a binary LSH method. Formally, in a d-dimensional data space, let v denote a random vector sampled from the normal distribution N(0, I_d), and x denote a data sample; then an SRP-LSH function is defined as h_v(x) = sgn(v^T x), where the sign function sgn(·) is defined as

sgn(z) = 1 if z ≥ 0, and 0 if z < 0.
Given two data samples a, b, let θ_{a,b} = cos⁻¹(⟨a, b⟩/(‖a‖‖b‖)); then it can be proven that [8]

Pr[h_v(a) ≠ h_v(b)] = θ_{a,b}/π.

This property well explains the essence of locality-sensitivity, and also reveals the relation between Hamming distance and angular similarity.
By independently sampling K d-dimensional vectors v_1, ..., v_K from the normal distribution N(0, I_d), we may define a function h(x) = (h_{v_1}(x), h_{v_2}(x), ..., h_{v_K}(x)), which consists of K SRP-LSH functions and thus produces K-bit codes. Then it is easy to prove that

E[d_Hamming(h(a), h(b))] = Kθ_{a,b}/π = Cθ_{a,b}.

That is, the expectation of the Hamming distance between the binary hash codes of two given data samples a and b is an unbiased estimate of their angle θ_{a,b}, up to a constant scale factor C = K/π. Thus SRP-LSH provides an unbiased estimate of angular similarity.
Since d_Hamming(h(a), h(b)) follows a binomial distribution, i.e., d_Hamming(h(a), h(b)) ~ B(K, θ_{a,b}/π), its variance is K(θ_{a,b}/π)(1 − θ_{a,b}/π). This implies that the variance of d_Hamming(h(a), h(b))/K, i.e., Var[d_Hamming(h(a), h(b))/K], satisfies

Var[d_Hamming(h(a), h(b))/K] = (θ_{a,b}/(Kπ))(1 − θ_{a,b}/π).
Though being widely used, SRP-LSH suffers from the large variance of its estimation, which leads
to large estimation error. Generally we need a substantially long code to accurately approximate
the angular similarity [24, 12, 23]. Since any two of the random vectors may be close to being
linearly dependent, the resulting binary code may be less informative than it seems, and even contain
many redundant bits. An intuitive idea would be to orthogonalize the random vectors. However,
once being orthogonalized, the random vectors can no longer be viewed as independently sampled.
Moreover, it remains unclear whether the resulting Hamming distance is still an unbiased estimate
of the angle θ_{a,b} multiplied by a constant, and what its variance will be. Later we will give answers
with theoretical justifications to these two questions.
In the next section, based on the above intuitive idea, we propose the so-called Super-Bit locality-sensitive hashing (SBLSH) method. We provide theoretical guarantees that after orthogonalizing the random projection vectors in batches, we still get an unbiased estimate of angular similarity, yet with a smaller variance when θ_{a,b} ∈ (0, π/2], and thus the resulting binary code is more informative. Experiments on real data show the effectiveness of SBLSH, which with the same length of binary code may achieve as much as 30% mean squared error (MSE) reduction compared with SRP-LSH in
may achieve as much as 30% mean squared error (MSE) reduction compared with the SRP-LSH in
estimating angular similarity on real data. Moreover, SBLSH performs best among several widely
used data-independent LSH methods in approximate nearest neighbor (ANN) retrieval experiments.
2 Super-Bit Locality-Sensitive Hashing
The proposed SBLSH is founded on SRP-LSH. When the code length K satisfies 1 < K ≤ d, where d is the dimension of the data space, we can orthogonalize N (1 ≤ N ≤ min(K, d) = K) of the random vectors sampled from the normal distribution N(0, I_d). The orthogonalization procedure is the Gram-Schmidt process, which projects the current vector orthogonally onto the orthogonal
complement of the subspace spanned by the previous vectors. After orthogonalization, these N
random vectors can no longer be viewed as independently sampled, thus we group their resulting
bits together as an N-Super-Bit. We call N the Super-Bit depth.
However, when the code length K > d, it is impossible to orthogonalize all K vectors. Assume that K = N × L without loss of generality, with 1 ≤ N ≤ d; then we can perform the Gram-Schmidt process to orthogonalize them in L batches. Formally, K random vectors {v_1, v_2, ..., v_K} are independently sampled from the normal distribution N(0, I_d), and then divided into L batches with N vectors each. By performing the Gram-Schmidt process on these L batches of N vectors respectively, we get K = N × L projection vectors {w_1, w_2, ..., w_K}. This results in K SBLSH functions (h_{w_1}, h_{w_2}, ..., h_{w_K}), where h_{w_i}(x) = sgn(w_i^T x). These K functions produce L N-Super-Bits and altogether produce binary codes of length K. Figure 1 shows an example of generating 12 SBLSH projection vectors. Algorithm 1 lists the algorithm for generating SBLSH projection vectors. Note that when the Super-Bit depth N = 1, SBLSH becomes SRP-LSH. In other words, SRP-LSH is a special case of SBLSH. The algorithm can be easily extended to the case when the code length K is not a multiple of the Super-Bit depth N. In fact one can even use variable Super-Bit depths N_i as long as 1 ≤ N_i ≤ d. With the same code length, SBLSH has the same running time O(Kd) as SRP-LSH in on-line processing, i.e., generating binary codes when applied to data.
[Figure 1: An illustration of 12 SBLSH projection vectors {w_i} generated by orthogonalizing {v_i} in 4 batches. The left panel shows random projection vectors sampled from N(0, I); the right panel shows the resulting SBLSH projection vectors.]
Algorithm 1 Generating Super-Bit Locality-Sensitive Hashing Projection Vectors
Input: Data space dimension d, Super-Bit depth 1 ≤ N ≤ d, number of Super-Bits L ≥ 1, resulting code length K = N × L.
Generate a random matrix H with each element sampled independently from the normal distribution N(0, 1), with each column normalized to unit length. Denote H = [v_1, v_2, ..., v_K].
for i = 0 to L − 1 do
  for j = 1 to N do
    w_{iN+j} = v_{iN+j}
    for k = 1 to j − 1 do
      w_{iN+j} = w_{iN+j} − w_{iN+k} w_{iN+k}^T v_{iN+j}
    end for
    w_{iN+j} = w_{iN+j} / ‖w_{iN+j}‖
  end for
end for
H̃ = [w_1, w_2, ..., w_K]
Output: H̃
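A direct NumPy transcription of Algorithm 1 might look as follows (a sketch under our own naming; per batch, the Gram-Schmidt loop could equivalently be replaced by a QR decomposition):

```python
import numpy as np

def super_bit_projections(d, N, L, rng=None):
    """Generate K = N*L SBLSH projection vectors as the columns of a d x K matrix
    (Algorithm 1): each batch of N Gaussian vectors is orthonormalized with the
    Gram-Schmidt process. Requires 1 <= N <= d."""
    assert 1 <= N <= d
    rng = np.random.default_rng() if rng is None else rng
    H = rng.standard_normal((d, N * L))
    H /= np.linalg.norm(H, axis=0)                 # normalize each column v_i
    W = np.empty_like(H)
    for i in range(L):                             # batch i holds columns iN .. iN+N-1
        for j in range(N):
            w = H[:, i * N + j].copy()
            for k in range(j):                     # subtract projections onto earlier w's
                wk = W[:, i * N + k]
                w -= wk * (wk @ H[:, i * N + j])
            W[:, i * N + j] = w / np.linalg.norm(w)
    return W
```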
2.1 Unbiased Estimate
In this subsection we prove that SBLSH provides an unbiased estimate of θ_{a,b} for a, b ∈ R^d.
Lemma 1. ([8], Lemma 3.2) Let S^{d−1} denote the unit sphere in R^d. Given a random vector v uniformly sampled from S^{d−1}, we have Pr[h_v(a) ≠ h_v(b)] = θ_{a,b}/π.
Lemma 2. If v ∈ R^d follows an isotropic distribution, then v̂ = v/‖v‖ is uniformly distributed on S^{d−1}.
This lemma can be proven by the definition of isotropic distribution, and we omit the details here.
Lemma 3. Given k vectors v_1, ..., v_k ∈ R^d, which are sampled i.i.d. from the normal distribution N(0, I_d), and span a subspace S_k, let P_{S_k} denote the orthogonal projection onto S_k; then P_{S_k} is a random matrix uniformly distributed on the Grassmann manifold G_{k,d−k}.
This lemma can be proven by applying Theorem 2.2.1(iii) and Theorem 2.2.2(iii) in [4].
Lemma 4. If P is a random matrix uniformly distributed on the Grassmann manifold G_{k,d−k}, 1 ≤ k ≤ d, and v ~ N(0, I_d) is independent of P, then the random vector v̂ = Pv follows an isotropic distribution.
From the uniformity of P on the Grassmann manifold and the property of the normal distribution N(0, I_d), we can get this result directly. We give a sketch of the proof below.
Proof. We can write P = UU^T, where the columns of U = [u_1, u_2, ..., u_k] constitute an orthonormal basis of a random k-dimensional subspace. Since the standard normal distribution is 2-stable [6], ṽ = U^T v = [ṽ_1, ṽ_2, ..., ṽ_k]^T is an N(0, I_k)-distributed vector, where each ṽ_i ~ N(0, 1), and it is easy to verify that ṽ is independent of U. Therefore v̂ = Pv = Uṽ = Σ_{i=1}^k ṽ_i u_i. Since u_1, ..., u_k can be any orthonormal basis of any k-dimensional subspace with equal probability density, and {ṽ_1, ṽ_2, ..., ṽ_k} are i.i.d. N(0, 1) random variables, v̂ follows an isotropic distribution.
Theorem 1. Given N i.i.d. random vectors v_1, v_2, ..., v_N ∈ R^d sampled from the normal distribution N(0, I_d), where 1 ≤ N ≤ d, perform the Gram-Schmidt process on them and produce N orthogonalized vectors w_1, w_2, ..., w_N. Then for any two data vectors a, b ∈ R^d, defining N indicator random variables X_1, X_2, ..., X_N as

X_i = 1 if h_{w_i}(a) ≠ h_{w_i}(b), and 0 if h_{w_i}(a) = h_{w_i}(b),

we have E[X_i] = θ_{a,b}/π, for any 1 ≤ i ≤ N.
Proof. Denote by S_{i−1} the subspace spanned by {w_1, ..., w_{i−1}}, and the orthogonal projection onto its orthogonal complement by P_{S_{i−1}^⊥}. Then w_i = P_{S_{i−1}^⊥} v_i. Denote ŵ = w_i/‖w_i‖.
For any 1 ≤ i ≤ N, E[X_i] = Pr[X_i = 1] = Pr[h_{w_i}(a) ≠ h_{w_i}(b)] = Pr[h_ŵ(a) ≠ h_ŵ(b)]. For i = 1, by Lemma 2 and Lemma 1, we have Pr[X_1 = 1] = θ_{a,b}/π.
For any 1 < i ≤ N, consider the distribution of w_i. By Lemma 3, P_{S_{i−1}} is a random matrix uniformly distributed on the Grassmann manifold G_{i−1,d−i+1}, thus P_{S_{i−1}^⊥} = I − P_{S_{i−1}} is uniformly distributed on G_{d−i+1,i−1}. Since v_i ~ N(0, I_d) is independent of v_1, v_2, ..., v_{i−1}, v_i is independent of P_{S_{i−1}^⊥}. By Lemma 4, we have that w_i = P_{S_{i−1}^⊥} v_i follows an isotropic distribution. By Lemma 2, ŵ = w_i/‖w_i‖ is uniformly distributed on the unit sphere in R^d. By Lemma 1, Pr[h_ŵ(a) ≠ h_ŵ(b)] = θ_{a,b}/π.
Corollary 1. For any Super-Bit depth N, 1 ≤ N ≤ d, assuming that the code length K = N × L, the Hamming distance d_Hamming(h(a), h(b)) is an unbiased estimate of θ_{a,b}, for any two data vectors a and b ∈ R^d, up to a constant scale factor C = K/π.
Proof. Applying Theorem 1 we get E[d_Hamming(h(a), h(b))] = L · E[Σ_{i=1}^N X_i] = L · Σ_{i=1}^N E[X_i] = L · N · θ_{a,b}/π = Kθ_{a,b}/π = Cθ_{a,b}.
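Corollary 1 is easy to sanity-check numerically. The sketch below (our own test harness, reusing super_bit_projections from the previous sketch) averages the SBLSH angle estimate over repeated draws of the projection vectors and compares it with the true angle:

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, L = 32, 32, 4                       # K = N*L = 128 bits, Super-Bit depth N = d
a, b = rng.standard_normal(d), rng.standard_normal(d)
theta = np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

estimates = []
for _ in range(500):
    W = super_bit_projections(d, N, L, rng)
    dist = np.sum((a @ W >= 0) != (b @ W >= 0))   # Hamming distance of the codes
    estimates.append(np.pi * dist / (N * L))      # divide by C = K / pi

print(np.mean(estimates), theta)                  # the two values should agree closely
```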
2.2 Variance
In this subsection we prove that when the angle θ_{a,b} ∈ (0, π/2], the variance of SBLSH is strictly smaller than that of SRP-LSH.
Lemma 5. For the random variables {X_i} defined in Theorem 1, we have the following equality: Pr[X_i = 1 | X_j = 1] = Pr[X_i = 1 | X_1 = 1], for 1 ≤ j < i ≤ N ≤ d.
Proof. Pr[X_i = 1 | X_j = 1] = Pr[h_{w_i}(a) ≠ h_{w_i}(b) | X_j = 1] = Pr[h_{v_i − Σ_{k=1}^{i−1} w_k w_k^T v_i}(a) ≠ h_{v_i − Σ_{k=1}^{i−1} w_k w_k^T v_i}(b) | h_{w_j}(a) ≠ h_{w_j}(b)]. Since {w_1, ..., w_{i−1}} is a uniformly random orthonormal basis of a random subspace uniformly distributed on the Grassmann manifold, by exchanging the indices j and 1 we have that this equals Pr[h_{v_i − Σ_{k=1}^{i−1} w_k w_k^T v_i}(a) ≠ h_{v_i − Σ_{k=1}^{i−1} w_k w_k^T v_i}(b) | h_{w_1}(a) ≠ h_{w_1}(b)] = Pr[X_i = 1 | X_1 = 1].
Lemma 6. For {X_i} defined in Theorem 1, we have Pr[X_i = 1 | X_j = 1] = Pr[X_2 = 1 | X_1 = 1], for 1 ≤ j < i ≤ N ≤ d. Given θ_{a,b} ∈ (0, π/2], we have Pr[X_2 = 1 | X_1 = 1] < θ_{a,b}/π.
The proof of this lemma is long, thus we provide it in the Appendix (in supplementary file).
Theorem 2. Given two vectors a, b ∈ R^d and random variables {X_i} defined as in Theorem 1, denote p_{2,1} = Pr[X_2 = 1 | X_1 = 1], and S_X = Σ_{i=1}^N X_i, which is the Hamming distance between the N-Super-Bits of a and b. For 1 < N ≤ d,

Var[S_X] = Nθ_{a,b}/π + N(N − 1) p_{2,1} θ_{a,b}/π − (Nθ_{a,b}/π)².
Proof. By Lemma 6, Pr[X_i = 1 | X_j = 1] = Pr[X_2 = 1 | X_1 = 1] = p_{2,1} when 1 ≤ j < i ≤ N. Therefore Pr[X_i = 1, X_j = 1] = Pr[X_i = 1 | X_j = 1] Pr[X_j = 1] = p_{2,1} θ_{a,b}/π, for any 1 ≤ j < i ≤ N. Therefore Var[S_X] = E[S_X²] − E[S_X]² = Σ_{i=1}^N E[X_i] + 2Σ_{j<i} E[X_i X_j] − N² E[X_1]² = Nθ_{a,b}/π + 2Σ_{j<i} Pr[X_i = 1, X_j = 1] − (Nθ_{a,b}/π)² = Nθ_{a,b}/π + N(N − 1) p_{2,1} θ_{a,b}/π − (Nθ_{a,b}/π)².
Corollary 2. Denote by Var[SBLSH_{N,K}] the variance of the Hamming distance produced by SBLSH, where 1 ≤ N ≤ d is the Super-Bit depth and K = N × L is the code length. Then Var[SBLSH_{N,K}] = L · Var[SBLSH_{N,N}]. Furthermore, given θ_{a,b} ∈ (0, π/2], if K = N_1 × L_1 = N_2 × L_2 and 1 ≤ N_2 < N_1 ≤ d, then Var[SBLSH_{N_1,K}] < Var[SBLSH_{N_2,K}].
Proof. Since v_1, v_2, ..., v_K are independently sampled, and w_1, w_2, ..., w_K are produced by orthogonalizing every N vectors, the Hamming distances produced by different N-Super-Bits are independent; thus Var[SBLSH_{N,K}] = L · Var[SBLSH_{N,N}].
Therefore Var[SBLSH_{N_1,K}] = L_1 · (N_1θ_{a,b}/π + N_1(N_1 − 1) p_{2,1} θ_{a,b}/π − (N_1θ_{a,b}/π)²) = Kθ_{a,b}/π + K(N_1 − 1) p_{2,1} θ_{a,b}/π − KN_1(θ_{a,b}/π)². By Lemma 6, when θ_{a,b} ∈ (0, π/2], for N_1 > N_2 > 1, 0 ≤ p_{2,1} < θ_{a,b}/π. Therefore Var[SBLSH_{N_1,K}] − Var[SBLSH_{N_2,K}] = (Kθ_{a,b}/π)(N_1 − N_2)(p_{2,1} − θ_{a,b}/π) < 0. For N_1 > N_2 = 1, Var[SBLSH_{N_1,K}] − Var[SBLSH_{N_2,K}] = (Kθ_{a,b}/π)(N_1 − 1)(p_{2,1} − θ_{a,b}/π) < 0.
Corollary 3. Denote by Var[SRP-LSH_K] the variance of the Hamming distance produced by SRP-LSH, where K = N × L is the code length and L is a positive integer, 1 < N ≤ d. If θ_{a,b} ∈ (0, π/2], then Var[SRP-LSH_K] > Var[SBLSH_{N,K}].
Proof. By Corollary 2, Var[SRP-LSH_K] = Var[SBLSH_{1,K}] > Var[SBLSH_{N,K}].
2.2.1 Numerical verification
[Figure 2: The variances of SRP-LSH and SBLSH against the angle θ_{a,b} to estimate.]
In this subsection we verify numerically the behavior of the variances of both SRP-LSH and SBLSH for different angles θ_{a,b} ∈ (0, π]. By Theorem 2, the variance of SBLSH is closely related to p_{2,1} defined in Theorem 2. We randomly generate 30 points in R^{10}, which involves 435 angles. For each angle, we numerically approximate p_{2,1} using a sampling method, where the sample number is 1000. We fix K = N = d, and plot the variances Var[SRP-LSH_N] and Var[SBLSH_{N,N}] against the various angles θ_{a,b}. Figure 2 shows that when θ_{a,b} ∈ (0, π/2], SBLSH has a much smaller variance than SRP-LSH, which verifies the correctness of Corollary 3 to some extent. Furthermore, Figure 2 shows that even when θ_{a,b} ∈ (π/2, π], SBLSH still has a smaller variance.
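The comparison can be reproduced qualitatively with the sketch below (ours, not the authors' code, and again reusing super_bit_projections from the earlier sketch), which estimates the variance of the Hamming distance for a pair at 45 degrees under SRP-LSH (N = 1) and under SBLSH with maximal Super-Bit depth (N = d):

```python
import numpy as np

def hamming_variance(a, b, d, N, L, trials=2000, seed=2):
    rng = np.random.default_rng(seed)
    dists = []
    for _ in range(trials):
        W = super_bit_projections(d, N, L, rng)
        dists.append(np.sum((a @ W >= 0) != (b @ W >= 0)))
    return np.var(dists)

d = 10
a = np.eye(d)[0]
b = (np.eye(d)[0] + np.eye(d)[1]) / np.sqrt(2)    # theta_{a,b} = pi/4
print(hamming_variance(a, b, d, N=1, L=d))        # SRP-LSH: K = d independent bits
print(hamming_variance(a, b, d, N=d, L=1))        # SBLSH: expected to be smaller
```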
2.3 Discussion
From Corollary 1, SBLSH provides an unbiased estimate of angular similarity. From Corollary 3, when θ_{a,b} ∈ (0, π/2], with the same length of binary code, the variance of SBLSH is strictly smaller than that of SRP-LSH. In real applications, many vector representations are limited to the non-negative orthant, with all vector entries being non-negative, e.g., the bag-of-words representation of documents and images, and histogram-based representations like the SIFT local descriptor [20]. Usually they are normalized to unit length, with only their orientations maintained. For this kind of data, the angle between any two different samples is limited to (0, π/2], and thus SBLSH will provide more accurate estimation than SRP-LSH on such data. In fact, our later experiments show that even when θ_{a,b} is not constrained to (0, π/2], SBLSH still gives a much more accurate estimate of angular similarity.
3 Experimental Results
We conduct two sets of experiments, angular similarity estimation and approximate nearest neighbor
(ANN) retrieval, to evaluate the effectiveness of our proposed SBLSH method. In the first set of
experiments we directly measure the accuracy in estimating pairwise angular similarity. The second
set of experiments then tests the performance of SBLSH in real retrieval applications.
3.1 Angular Similarity Estimation
In this experiment, we evaluate the accuracy of estimating pairwise angular similarity on several datasets. Specifically, we test the effect on the estimation accuracy when the Super-Bit depth N varies and the code length K is fixed, and vice versa. For each preprocessed dataset D, we get D_LSH after performing SRP-LSH, and get D_SBLSH after performing the proposed SBLSH. We compute the angles between each pair of samples in D, and the corresponding Hamming distances in D_LSH and D_SBLSH. We compute the mean squared error between the true angle and the approximated angles from D_LSH and D_SBLSH respectively. Note that after computing the Hamming distance, we divide the result by C = K/π to get the approximated angle.
3.1.1 Datasets and Preprocessing
We conduct the experiment on the following datasets:
1) the Photo Tourism patch dataset¹ [26], Notre Dame, which contains 104,106 patches, each of which is represented by a 128D SIFT descriptor (Photo Tourism SIFT); and 2) MIR-Flickr², which contains 25,000 images, each of which is represented by a 3125D bag-of-SIFT-feature histogram.
For each dataset, we further conduct a simple preprocessing step as in [12], i.e., mean-centering each data sample, so as to obtain additional mean-centered versions of the above datasets, Photo Tourism SIFT (mean) and MIR-Flickr (mean). The experiment on these mean-centered datasets will test the performance of SBLSH when the angles of data pairs are not constrained to (0, π/2].
3.1.2 The Effect of Super-Bit Depth N and Code Length K
[Figure 3: The effect of Super-Bit depth N (1 < N ≤ min(d, K)) with fixed code length K (K = N × L), and the effect of code length K with fixed Super-Bit depth N. Each panel compares SRP-LSH, SBLSH, Mean+SRP-LSH and Mean+SBLSH.]
¹http://phototour.cs.washington.edu/patches/default.htm
²http://users.ecs.soton.ac.uk/jsh2/mirflickr/
Table 1: ANN retrieval results, measured by the proportion of good neighbors within the query's Hamming ball of radius 3. Note that the code length K = 30.

Data         E2LSH             SRP-LSH           SBLSH
Notre Dame   0.4675 ± 0.0900   0.7500 ± 0.0525   0.7845 ± 0.0352
Half Dome    0.4503 ± 0.0712   0.7137 ± 0.0413   0.7535 ± 0.0276
Trevi        0.4661 ± 0.0849   0.7591 ± 0.0464   0.7891 ± 0.0329
In each dataset, for each (N, K) pair, i.e., Super-Bit depth N and code length K, we randomly sample 10,000 data points, which involve about 50,000,000 data pairs, and randomly generate SRP-LSH functions, together with SBLSH functions obtained by orthogonalizing the generated SRP vectors in batches. We repeat the test 10 times, and compute the mean squared error (MSE) of the estimation.
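For reference, the MSE we report here reduces to the following computation (variable names ours), given the true pairwise angles and the Hamming distances of the corresponding codes:

```python
import numpy as np

def angle_mse(true_angles, hamming_dists, K):
    """MSE between true angles and the angles recovered from K-bit codes."""
    est = np.pi * np.asarray(hamming_dists, dtype=float) / K  # divide by C = K / pi
    return np.mean((np.asarray(true_angles) - est) ** 2)
```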
To test the effect of Super-Bit depth N , we fix K = 120 for Photo Tourism SIFT and K = 3000 for
MIR-Flickr respectively, and to test the effect of code length K, we fix N = 120 for Photo Tourism
SIFT and N = 3000 for MIR-Flickr. We repeat the experiment on the mean-centered versions of
these datasets, and denote the methods by Mean+SRP-LSH and Mean+SBLSH respectively.
Figure 3 shows that when using a fixed code length K, as the Super-Bit depth N gets larger (1 < N ≤ min(d, K)), the MSE of SBLSH gets smaller, and the gap between SBLSH and SRP-LSH gets larger. Particularly, when N = K, over 30% MSE reduction can be observed on all the datasets. This verifies Corollary 2: when applying SBLSH, the best strategy is to set the Super-Bit depth N as large as possible, i.e., min(d, K). An informal explanation of this interesting phenomenon is that as the degree of orthogonality of the random projections gets higher, the code becomes more and more informative, and thus provides a better estimate. On the other hand, it can be observed that the performances on the mean-centered datasets are similar to those on the original datasets. This shows that even when the angle between each data pair is not constrained to (0, π/2], SBLSH still gives a much more accurate estimation.
Figure 3 also shows that with a fixed Super-Bit depth N, SBLSH significantly outperforms SRP-LSH. When increasing the code length K, the accuracies of SBLSH and SRP-LSH both increase. The performances on the mean-centered datasets are similar to those on the original datasets.
3.2 Approximate Nearest Neighbor Retrieval
In this subsection, we conduct an ANN retrieval experiment, which compares SBLSH with two other widely used data-independent binary LSH methods: SRP-LSH and E2LSH (we use the binary version in [25, 1]). We use the datasets Notre Dame, Half Dome and Trevi from the Photo Tourism patch dataset [26], which is also used in [12, 10, 13] for ANN retrieval. We use the 128D SIFT representation and normalize the vectors to unit norm. For each dataset, we randomly pick 1,000 samples as queries, and the rest of the samples (around 100,000) serve as the corpus for the retrieval task. We define the good neighbors to a query q as the samples within the top 5% nearest neighbors (measured in Euclidean distance) to q. We adopt the evaluation criterion used in [12, 25], i.e., the proportion of good neighbors in returned samples that are within the query's Hamming ball of radius r. We set r = 3. Using code length K = 30, we repeat the experiment 10 times and take the mean of the results. For SBLSH, we fix the Super-Bit depth N = K = 30. Table 1 shows that SBLSH performs best among all these data-independent hashing methods.
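The evaluation criterion can be sketched as follows (our own code, with hypothetical inputs: codes holds the binary codes of the corpus and good_mask marks a query's top-5% Euclidean neighbors):

```python
import numpy as np

def hamming_ball_proportion(query_code, codes, good_mask, r=3):
    """Proportion of a query's good neighbors that fall inside its Hamming ball
    of radius r. query_code: (K,) boolean; codes: (n, K) boolean; good_mask: (n,)."""
    in_ball = np.sum(codes != query_code, axis=1) <= r
    return np.sum(in_ball & good_mask) / max(np.sum(good_mask), 1)
```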
4 Relations to Other Hashing Methods
There exist different kinds of LSH methods, e.g., bit-sampling LSH [9, 7] for Hamming distance and ℓ1-distance, min-hash [2] for Jaccard coefficient, and p-stable-distribution LSH [6] for ℓp-distance when p ∈ (0, 2]. These data-independent methods are simple, and thus easy to integrate as a module
in more complicated algorithms involving pairwise distance or similarity computation, e.g. nearest
neighbor search. New data-independent methods for improving these original LSH methods have
been proposed recently. [1] proposed a near-optimal LSH method for Euclidean distance. Li et al.
[16] proposed b-bit minwise hash which improves the original min-hash in terms of compactness.
[17] shows that b-bit minwise hash can be integrated in linear learning algorithms for large-scale
learning tasks. [14] reduces the variance of random projections by taking advantage of marginal
norms, and compares the variance of SRP with regular random projections considering the margins.
[15] proposed very sparse random projections for accelerating random projections and SRP.
Prior to SBLSH, SRP-LSH [3] was the only hashing method proven to provide an unbiased estimate of angular similarity. The proposed SBLSH method is the first data-independent method that outperforms SRP-LSH in terms of higher accuracy in estimating angular similarity.
On the other hand, data-dependent hashing methods have been extensively studied. For example,
spectral hashing [25] and anchor graph hashing [19] are data-dependent unsupervised methods.
Kulis et al. [13] proposed kernelized locality-sensitive hashing (KLSH), which is based on SRP-LSH, to approximate the angular similarity in very high or even infinite-dimensional spaces induced
by any given kernel, with access to data only via kernels. There are also a bunch of works devoted
to semi-supervised or supervised hashing methods [10, 21, 23, 24, 18], which try to capture not only
the geometry of the original data, but also the semantic relations.
5 Discussion
Instead of the Gram-Schmidt process, we can use other methods to orthogonalize the projection vectors, e.g., the Householder transformation, which is numerically more stable. The advantage of the Gram-Schmidt process is its simplicity in describing the algorithm.
In the paper we did not test the method on data of very high dimension. When the dimension is high,
and N is not small, the Gram-Schmidt process will be computationally expensive. In fact, when the
dimension of the data is very high, the random normal projection vectors {v_i}_{i=1,2,...,K} will tend to be
orthogonal to each other, thus it may not be very necessary to orthogonalize the vectors deliberately.
From Corollary 2 and the results in Section 3.1.2, we can see that the closer the Super-Bit depth N
is to the data dimension d, the larger the variance reduction SBLSH achieves over SRP-LSH.
A technical report³ (Li et al.) shows that b-bit minwise hashing almost always has a smaller variance than SRP in estimating the Jaccard coefficient on binary data. The comparison of SBLSH with b-bit minwise hashing for the Jaccard coefficient is left for future work.
6 Conclusion and Future Work
The proposed SBLSH is a data-independent hashing method which significantly outperforms SRP-LSH. We have theoretically proven that SBLSH provides an unbiased estimate of angular similarity, and has a smaller variance than SRP-LSH when the angle to estimate is in (0, π/2]. The algorithm is simple, easy to implement, and can be integrated as a basic module in more complicated algorithms. Experiments show that with the same length of binary code, SBLSH achieves over 30% mean squared error reduction over SRP-LSH in estimating angular similarity, when the Super-Bit depth N is close to the data dimension d. Moreover, SBLSH performs best among several widely used data-independent LSH methods in approximate nearest neighbor retrieval experiments. Theoretically exploring the variance of SBLSH when the angle is in (π/2, π] is left for future work.
Acknowledgments
This work was supported by the National Basic Research Program (973 Program) of China (Grant
Nos. 2013CB329403 and 2012CB316301), National Natural Science Foundation of China (Grant
Nos. 91120011 and 61273023), and Tsinghua University Initiative Scientific Research Program
No.20121088071, and NExT Research Center funded under the research grant WBS. R-252-300001-490 by MDA, Singapore. And it was supported in part to Dr. Qi Tian by ARO grant W911BF12-1-0057, NSF IIS 1052851, Faculty Research Awards by Google, FXPAL, and NEC Laboratories
of America, respectively.
³www.stat.cornell.edu/~li/hashing/RP_minwise.pdf
References
[1] Alexandr Andoni and Piotr Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in
high dimensions. In Annual IEEE Symposium on Foundations of Computer Science, 2006.
[2] Andrei Z. Broder, Steven C. Glassman, Mark S. Manasse, and Geoffrey Zweig. Syntactic clustering of
the web. Computer Networks, 29(8-13):1157–1166, 1997.
[3] Moses Charikar. Similarity estimation techniques from rounding algorithms. In ACM Symposium on
Theory of Computing, 2002.
[4] Yasuko Chikuse. Statistics on Special Manifolds. Springer, February 2003.
[5] Ondrej Chum, James Philbin, and Andrew Zisserman. Near duplicate image detection: min-hash and
tf-idf weighting. In British Machine Vision Conference, 2008.
[6] Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S. Mirrokni. Locality-sensitive hashing scheme
based on p-stable distributions. In Symposium on Computational Geometry, 2004.
[7] Aristides Gionis, Piotr Indyk, and Rajeev Motwani. Similarity search in high dimensions via hashing. In
International Conference on Very Large Databases, 1999.
[8] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and
satisfiability problems using semidefinite programming. Journal of the ACM, 42(6):1115–1145, 1995.
[9] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In ACM Symposium on Theory of Computing, 1998.
[10] Prateek Jain, Brian Kulis, and Kristen Grauman. Fast image search for learned metrics. In IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[11] Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE Trans. Pattern Anal. Mach. Intell., 33(1):117–128, 2011.
[12] Brian Kulis and Trevor Darrell. Learning to hash with binary reconstructive embeddings. In Advances in
Neural Information Processing Systems, 2009.
[13] Brian Kulis and Kristen Grauman. Kernelized locality-sensitive hashing for scalable image search. In
IEEE International Conference on Computer Vision, 2009.
[14] Ping Li, Trevor Hastie, and Kenneth Ward Church. Improving random projections using marginal information. In COLT, pages 635–649, 2006.
[15] Ping Li, Trevor Hastie, and Kenneth Ward Church. Very sparse random projections. In KDD, pages 287–296, 2006.
[16] Ping Li and Arnd Christian König. b-bit minwise hashing. In International World Wide Web Conference, 2010.
[17] Ping Li, Anshumali Shrivastava, Joshua L. Moore, and Arnd Christian König. Hashing algorithms for large-scale learning. In Advances in Neural Information Processing Systems, 2011.
[18] Wei Liu, Jun Wang, Rongrong Ji, Yu-Gang Jiang, and Shih-Fu Chang. Supervised hashing with kernels. In CVPR, pages 2074–2081, 2012.
[19] Wei Liu, Jun Wang, Sanjiv Kumar, and Shih-Fu Chang. Hashing with graphs. In ICML, pages 1–8, 2011.
[20] David G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
[21] Yadong Mu, Jialie Shen, and Shuicheng Yan. Weakly-supervised hashing in kernel space. In IEEE
Conference on Computer Vision and Pattern Recognition, 2010.
[22] Antonio Torralba, Robert Fergus, and William T. Freeman. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE Trans. Pattern Anal. Mach. Intell., 30(11):1958–1970, 2008.
[23] Jun Wang, Sanjiv Kumar, and Shih-Fu Chang. Sequential projection learning for hashing with compact
codes. In International Conference on Machine Learning, 2010.
[24] Jun Wang, Sanjiv Kumar, and Shih-Fu Chang. Semi-supervised hashing for large scale search. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 99(PrePrints), 2012.
[25] Yair Weiss, Antonio Torralba, and Robert Fergus. Spectral hashing. In Advances in Neural Information
Processing Systems, 2008.
[26] Simon A. J. Winder and Matthew Brown. Learning local image descriptors. In IEEE Conference on
Computer Vision and Pattern Recognition, 2007.
4,251 | 4,848 | Learning Image Descriptors with the Boosting-Trick
Tomasz Trzcinski, Mario Christoudias, Vincent Lepetit and Pascal Fua
CVLab, EPFL, Lausanne, Switzerland
[email protected]
Abstract
In this paper we apply boosting to learn complex non-linear local visual feature
representations, drawing inspiration from its successful application to visual object detection. The main goal of local feature descriptors is to distinctively represent a salient image region while remaining invariant to viewpoint and illumination changes. This representation can be improved using machine learning; however, past approaches have been mostly limited to learning linear feature mappings
in either the original input or a kernelized input feature space. While kernelized
methods have proven somewhat effective for learning non-linear local feature descriptors, they rely heavily on the choice of an appropriate kernel function whose
selection is often difficult and non-intuitive. We propose to use the boosting-trick
to obtain a non-linear mapping of the input to a high-dimensional feature space.
The non-linear feature mapping obtained with the boosting-trick is highly intuitive. We employ gradient-based weak learners resulting in a learned descriptor
that closely resembles the well-known SIFT. As demonstrated in our experiments,
the resulting descriptor can be learned directly from intensity patches achieving
state-of-the-art performance.
1 Introduction
Representing salient image regions in a way that is invariant to unwanted image transformations is
a crucial Computer Vision task. Well-known local feature descriptors, such as the Scale Invariant
Feature Transform (SIFT) [1] or Speeded Up Robust Features (SURF) [2], address this problem
by using a set of hand-crafted filters and non-linear operations. These descriptors have become
prevalent, even though they are not truly invariant with respect to various viewpoint and illumination changes, which limits their applicability.
In an effort to address these limitations, a fair amount of work has focused on learning local feature
descriptors [3, 4, 5] that leverage labeled training image patches to learn invariant feature representations based on local image statistics. Although significant progress has been made, these approaches
are either built on top of hand-crafted representations [5] or still require significant parameter tuning
as in [4] which relies on a non-analytical objective that is difficult to optimize.
Learning an invariant feature representation is strongly related to learning an appropriate similarity
measure or metric over intensity patches that is invariant to unwanted image transformations, and
work on descriptor learning has been predominantly focused in this area [3, 6, 5]. Methods for metric learning that have been applied to image data have largely focused on learning a linear feature
mapping in either the original input or a kernelized input feature space [7, 8]. This includes previous
boosting-based metric learning methods that thus far have been limited to learning linear feature
transformations [3, 7, 9]. In this way, non-linearities are modeled using a predefined similarity or
kernel function that implicitly maps the input features to a high-dimensional feature space where the
transformation is assumed to be linear. While these methods have proven somewhat effective for
learning non-linear local feature mappings, choosing an appropriate kernel function is often non-intuitive and remains a challenging and largely open problem. Additionally, kernel methods involve an optimization whose problem complexity grows quadratically with the number of training examples, making them difficult to apply to the large problems that are typical of local descriptor learning.
In this paper, we apply boosting to learn complex non-linear local visual feature representations
drawing inspiration from its successful application to visual object detection [10]. Image patch
appearance is modeled using local non-linear filters evaluated within the image patch that are effectively selected with boosting. Analogous to the kernel-trick, our approach can be seen as applying a
boosting-trick [11] to obtain a non-linear mapping of the input to a high-dimensional feature space.
Unlike kernel methods, the boosting-trick allows for the definition of intuitive non-linear feature
mappings. Also, our learning approach scales linearly with the number of training examples, making it more easily amenable to large-scale problems, and results in highly accurate descriptor matching.
We build upon [3] that also relies on boosting to compute a descriptor, and show how we can use it
as a way to efficiently select features, from which we compute a compact representation. We also
replace the simple weak learners of [3] by non-linear filters more adapted to the problem. In particular, we employ image gradient-based weak learners similar to [12] that share a close connection
with the non-linear filters used in proven image descriptors such as SIFT and Histogram-of-Oriented
Gradients (HOG) [13]. Our approach can be seen as a generalization of these methods cast within
a principled learning framework. As seen in our experiments, our descriptor can be learned directly from intensity patches and results in state-of-the-art performance rivaling its hand-designed
equivalents.
To evaluate our approach we consider the image patch dataset of [4] containing several hundreds
of thousands of image patches under varying viewpoint and illumination conditions. As baselines
we compare against leading contemporary hand-designed and learned local feature descriptors [1,
2, 3, 5]. We demonstrate the effectiveness of our approach on this challenging dataset, significantly
outperforming the baseline methods.
2 Related work
Machine learning has been applied to improve both matching efficiency and accuracy of image
descriptors [3, 4, 5, 8, 14, 15]. Feature hashing methods improve the storage and computational
requirements of image-based features [16, 14, 15]. Salakhutdinov and Hinton [16, 17] develop
a semantic hashing approach based on Restricted Boltzmann Machines (RBMs) applied to binary
images of digits. Similarly, Weiss et al. [14] present a spectral hashing approach that learns compact
binary codes for efficient image indexing and matching. Kulis and Darrell [15] extend this idea
to explicitly minimize the error between the original Euclidean and computed Hamming distances.
Many of these approaches presume a given distance or similarity measure over a pre-defined input
feature space. Although they result in efficient description and indexing, in many cases they are
limited to the matching accuracy of the original input space. In contrast, our approach learns a nonlinear feature mapping that is specifically optimized to result in highly accurate descriptor matching.
Methods for metric learning learn feature spaces tailored to a particular matching task [5, 8]. These
methods assume the presence of annotated label pairs or triplets that encode the desired proximity
relationships of the learned feature embedding. Jain et al. [8] learn a Mahalanobis distance metric
defined using either the original input or a kernelized input feature space applied to image classification and matching. Alternatively, Strecha et al. [5] employ Linear Discriminant Analysis to learn
a linear feature mapping from binary-labeled example pairs. Both of these methods are closely related, offering different optimization strategies for learning a Mahalanobis-based distance metric.
While these methods improve matching accuracy through a learned feature space, they require the
presence of a pre-selected kernel function to encode non-linearities. Such approaches are well suited
for certain image indexing and classification tasks where task-specific kernel functions have been
proposed (e.g., [18]). However, they are less applicable to local image feature matching, for which
the appropriate choice of kernel function is less understood.
Boosting has also been applied for learning Mahalanobis-based distance metrics involving highdimensional input spaces overcoming the large computational complexity of conventional positive
semi-definite (PSD) solvers based on the interior point method [7, 9]. Shen et al. [19] proposed
a PSD solver using column generation techniques based on AdaBoost, that was later extended to
involve closed-form iterative updates [7]. More recently, Bi et al. [9] devised a similar method
exhibiting even further improvements in computational complexity with application to bio-medical
imagery. While these methods also use boosting to learn a feature mapping, they have emphasized computational efficiency, only considering linear feature embeddings. Our approach exhibits similar computational advantages; however, it has the ability to learn non-linear feature mappings beyond
what these methods have proposed.
Similar to our work, Brown et al. [4] also consider different feature pooling and selection strategies of gradient-based features, resulting in a descriptor which is both short and discriminant. In [4], however, they optimize over a combination of hand-crafted blocks and their parameters. The criterion they consider, the area below the ROC curve, is not analytical and thus difficult to optimize,
representations. Moreover, the form of our descriptor is much simpler. Simultaneous to this work,
similar ideas were explored in [20, 21]. While these approaches assume a sub-sampled or course
set of pooling regions to mitigate tractability, we allow for the discovery of more generic pooling
configurations with boosting.
Our work on boosted feature learning can be traced back to the work of Dollár et al. [22] where they
apply boosting across a range of different features for pedestrian detection. Our approach is probably
most similar to the boosted Similarity Sensitive Coding (SSC) method of Shakhnarovich [3] that
learns a boosted similarity function from a family of weak learners, a method that was later extended
in [23] to be used with a Hamming distance. In [3], only linear projection based weak-learners were
considered. Also, Boosted SSC can often yield fairly high-dimensional embeddings. Our approach
can be seen as an extension of Boosted SSC to form low-dimensional feature mappings. We also
show that the image gradient-based weak learners of [24] are well adapted to the problem. As seen
in our experiments, our approach significantly outperforms Boosted SSC when applied to image
intensity patches.
3 Method
Given an image intensity patch x ∈ R^D we look for a descriptor of x as a non-linear mapping H(x) into the space spanned by {h_i}_{i=1}^M, a collection of thresholded non-linear response functions h_i(x): R^D → {−1, 1}. The number of response functions M is generally large and possibly infinite.
This mapping can be learned by minimizing the exponential loss with respect to a desired similarity function f(x, y) defined over image patch pairs

L = Σ_{i=1}^N exp(−l_i f(x_i, y_i))    (1)

where x_i, y_i ∈ R^D are training intensity patches and l_i ∈ {−1, 1} is a label indicating whether it is a similar (+1) or dissimilar (−1) pair.
The Boosted SSC method proposed in [3] considers a similarity function defined by a simple weighted sum of thresholded response functions

f(x, y) = Σ_{i=1}^M α_i h_i(x) h_i(y).    (2)

This defines a weighted hash function with the importance of each dimension i given by α_i.
Substituting this expression into Equation (1) gives

L_SSC = Σ_{i=1}^N exp(−l_i Σ_{j=1}^M α_j h_j(x_i) h_j(y_i)).    (3)
In practice M is large, and in general the number of possible h_i's can be infinite, making the explicit optimization of L_SSC difficult; this constitutes a problem for which boosting is particularly well suited [25]. Although boosting is a greedy optimization scheme, it is a provably effective method for constructing a highly accurate predictor from a collection of weak predictors h_i.
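For concreteness, the similarity of Equation (2) and the loss of Equation (3) amount to the following computation (a NumPy sketch with names of our own choosing; the weak responses are stored as ±1 values):

```python
import numpy as np

def ssc_similarity(hx, hy, alpha):
    """f(x, y) = sum_i alpha_i h_i(x) h_i(y), with hx, hy in {-1, +1}^M."""
    return np.sum(alpha * hx * hy, axis=-1)

def ssc_loss(HX, HY, labels, alpha):
    """L_SSC of Eq. (3): HX, HY are (N, M) response matrices for the patch pairs,
    labels is in {-1, +1}^N."""
    return np.sum(np.exp(-labels * ssc_similarity(HX, HY, alpha)))
```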
Similar to the kernel trick, the resulting boosting-trick also maps each observation to a high-dimensional feature space; however, it computes an explicit mapping for which the α_i's that define f(x, y) are assumed to be sparse [11]. In fact, Rosset et al. [26] have shown that under certain settings boosting can be interpreted as imposing an L1 sparsity constraint over the response function weights α_i. As will be seen below, unlike the kernel trick, this allows for the definition of
high-dimensional embeddings well suited to the descriptor matching task whose features have an
intuitive explanation.
Boosted SSC employs linear response weak predictors based on a linear projection of the input. In
contrast, we consider non-linear response functions more suitable for the descriptor matching task
as discussed in Section 3.3. In addition, the greedy optimization can often yield embeddings that, although accurate, are fairly redundant and inefficient.
In what follows, we will present our approach for learning compact boosted feature descriptors
called Low-Dimensional Boosted Gradient Maps (L-BGM). First, we present a modified similarity
function well suited for learning low-dimensional, discriminative embeddings with boosting. Next,
we show how we can factorize the learned embedding to form a compact feature descriptor. Finally,
the gradient-based weak learners utilized by our approach are detailed.
3.1 Similarity measure
To mitigate the potentially redundant embeddings found by boosting we propose an alternative similarity function that models the correlation between weak response functions,

f_LBGM(x, y) = Σ_{i,j} α_{i,j} h_i(x) h_j(y) = h(x)^T A h(y),    (4)

where h(x) = [h_1(x), · · · , h_M(x)] and A is an M × M matrix of coefficients α_{i,j}. This similarity measure is a generalization of Equation (2). In particular, f_LBGM is equivalent to the Boosted SSC similarity measure in the restricted case of a diagonal A.
Substituting the above expression into Equation (1) gives

L_LBGM = Σ_{k=1}^N exp(−l_k Σ_{i,j} α_{i,j} h_i(x_k) h_j(y_k)).    (5)
Although it can be shown that L_LBGM can be jointly optimized for A and the h_i's using boosting, this involves a fairly complex procedure. Instead, we propose a two-step learning strategy whereby we first apply AdaBoost to find the h_i's as in [3]. As shown by our experiments, this provides an effective way to select relevant h_i's. We then apply stochastic gradient descent to find an optimal weighting over the selected features that minimizes L_LBGM.
More formally, let P be the number of relevant response functions found with AdaBoost, with P ≪ M. We define A_P ∈ R^{P×P} to be the sub-matrix corresponding to the non-zero entries of A, explicitly optimized by our approach. Note that as the loss function is convex in A, A_P can be found optimally with respect to the selected h_i's. In addition, we constrain α_{i,j} = α_{j,i} during optimization, restricting the solution to the set of symmetric P × P matrices and yielding a symmetric similarity measure f_LBGM. We also experimented with more restrictive forms of regularization, e.g., constraining A_P to be positive semi-definite; however, this is more costly and gave similar results.
We use a simple implementation of stochastic gradient descent with a constant step size, initialized using the diagonal matrix found by Boosted SSC, and iterate until convergence or a maximum number of iterations is reached. Note that because the weak learners are binary, we can precompute the exponential terms involved in the derivatives for all the data samples, as they are constant with respect to A_P. This significantly speeds up the optimization process.
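A minimal version of this second step could look as follows (a sketch under our own simplifications, e.g., mini-batch updates with a fixed step size and explicit symmetrization; HX and HY hold the ±1 responses of the P selected weak learners on the training pairs):

```python
import numpy as np

def optimize_A(HX, HY, labels, A0, lr=1e-3, epochs=50, batch=256, seed=0):
    """Minimize L_LBGM of Eq. (5) over symmetric A_P with stochastic gradient descent."""
    A, n = A0.copy(), len(labels)
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(n // batch, 1)):
            hx, hy, l = HX[idx], HY[idx], labels[idx]
            f = np.einsum('ki,ij,kj->k', hx, A, hy)   # f_LBGM per pair
            w = -l * np.exp(-l * f)                   # d(exp loss)/d f per pair
            grad = np.einsum('k,ki,kj->ij', w, hx, hy)
            A -= lr * 0.5 * (grad + grad.T)           # keep A symmetric
    return A
```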
3.2 Embedding factorization
The similarity function of Equation (4) defines an implicit feature mapping over example pairs. We now show how the A_P matrix in f_LBGM can be factorized to result in compact feature descriptors computed independently over each input.
Assuming A_P to be a symmetric P × P matrix, it can be factorized into the following form,

A_P = BWB^T = Σ_{k=1}^d w_k b_k b_k^T    (6)
[Figure 1: A specialized configuration of weak response functions φ corresponding to a regular gridding within the image patch. In addition, assuming a Gaussian weighting of the φ's results in a descriptor that closely resembles SIFT [1] and is one of the many solutions afforded by our learning framework.]
where W = diag([w_1, · · · , w_d]), w_k ∈ {−1, 1}, B = [b_1, · · · , b_d], b_k ∈ R^P, and d ≤ P.
Equation (4) can then be re-expressed as

f_LBGM(x, y) = Σ_{k=1}^d w_k (Σ_{i=1}^P b_{k,i} h_i(x)) (Σ_{j=1}^P b_{k,j} h_j(y)).    (7)
This factorization defines a signed inner product between the embedded feature vectors and provides increased efficiency with respect to the original similarity measure¹. For d < P (i.e., the effective rank of A_P is d < P) the factorization represents a smoothed version of A_P, discarding the low-energy dimensions that typically correlate with noise and leading to further performance improvements.
The final embedding found with our approach is therefore

H_LBGM(x) = B^T h(x),    (8)

and H_LBGM(x): R^D → R^d.
The projection matrix B defines a discriminative dimensionality reduction optimized with respect to the exponential loss objective of Equation (5). As seen in our experiments, in the case of redundant h_i this results in a considerable feature compression, also offering a more compact description than the original input patch.
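Since A_P is symmetric, the factorization of Equation (6) follows directly from its eigendecomposition. A sketch (ours) that also truncates to the d largest-magnitude eigenvalues and applies the embedding of Equation (8):

```python
import numpy as np

def factorize(A, d):
    """Factor a symmetric A into B W B^T with W = diag(w), w_k in {-1, +1}."""
    evals, evecs = np.linalg.eigh(A)
    keep = np.argsort(-np.abs(evals))[:d]                 # d largest-magnitude terms
    w = np.sign(evals[keep])                              # signs go into W
    B = evecs[:, keep] * np.sqrt(np.abs(evals[keep]))     # scale absorbed into b_k
    return B, w

def embed(hx, B):
    """H_LBGM(x) = B^T h(x): maps P weak responses to a d-dimensional descriptor."""
    return B.T @ hx
```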
3.3 Weak learners
The boosting-trick allows for a variety of non-linear embeddings parameterized by the chosen weak
learner family. We employ the gradient-based response functions of [12] to form our feature descriptor. In [12], the usefulness of these features was demonstrated for visual object detection. In what
follows, we extend these features to the descriptor matching task, illustrating their close connection
with the well-known SIFT descriptor.
Following the notation of [12], our weak learners are defined as
1
if ?R,e (x) ? T
h(x; R, e, T ) =
,
?1 otherwise
where
?R,e (x) =
X
?e (x, m) /
X
?ek (x, m) ,
(9)
(10)
ek ??,m?R
m?R
with region ?e (x, m) being the gradient energy along an orientation e at location m within x, and
R defining a rectangular extent within the patch. The gradient energy is computed based on the dot
product between e and the gradient orientation at pixel m [12]. The orientation e ranges between
[−π, π] and is quantized to take values Φ = {0, 2π/q, 4π/q, . . . , (q − 1) · 2π/q}, with q the number of quantization bins.
¹ Matching two sets of descriptors, each of size N, is O(N²P²) under the original measure and O(NPd + N²d) provided the factorization, resulting in significant savings for reasonably sized N and P, and d ≪ P.
Figure 2: Learned spatial weighting obtained with Boosted Gradient Maps (BGM) trained on the (a)
Liberty, (b) Notre Dame and (c) Yosemite datasets. The learned weighting closely resembles the
Gaussian weighting employed by SIFT (white circles indicate σ/2 and σ as used by SIFT).
As noted in [12], this representation can be computed efficiently using integral images.
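For illustration, a minimal Python sketch of Eqs. (9)–(10) follows: per-orientation gradient-energy maps are pooled over a rectangle R using integral images and the normalized response is thresholded. The orientation binning and helper names here are our own simplification of [12], not its exact recipe.

import numpy as np

def orientation_energy(patch, q=8):
    # Per-pixel energy xi_{e_k}(x, m) for q quantized orientations.
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    bins = ((np.arctan2(gy, gx) + np.pi) / (2 * np.pi) * q).astype(int) % q
    xi = np.zeros((q,) + patch.shape)
    for k in range(q):
        xi[k][bins == k] = mag[bins == k]
    return xi

def region_sum(ii, r0, c0, r1, c1):
    # Sum over rows [r0, r1), cols [c0, c1) from an integral image.
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def weak_response(patch, R, e, T, q=8):
    # h(x; R, e, T) of Eq. (9): threshold the pooled, normalized
    # orientation-e energy phi_{R,e}(x) of Eq. (10), R = (r0, c0, r1, c1).
    xi = orientation_energy(patch, q)
    ii = np.cumsum(np.cumsum(np.pad(xi, ((0, 0), (1, 0), (1, 0))), 1), 2)
    num = region_sum(ii[e], *R)
    den = sum(region_sum(ii[k], *R) for k in range(q)) + 1e-12
    return 1 if num / den <= T else -1

patch = np.random.default_rng(2).random((64, 64))
print(weak_response(patch, R=(8, 8, 24, 24), e=3, T=0.2))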
The non-linear gradient response functions φ_{R,e}, along with their thresholds T, define the parameterization of the weak learner family optimized with our approach. Consider the specialized configuration illustrated in Figure 1. This corresponds to a selection of weak learners whose R and e
values are parameterized such that they lie along a regular grid, equally sampling each edge orientation within each grid cell. In addition, if we assume a Gaussian weighting centered about the
patch, the resulting descriptor closely resembles SIFT² [1]. In fact, this configuration and weighting
corresponds to one of the many solutions afforded by our approach. In [4], the authors note the importance
of allowing for alternative pooling and feature selection strategies, both of which are effectively optimized within our framework. As seen in our experiments, this results in a significant performance
gain over hand-designed SIFT.
4 Results
In this section, we first present an overview of our evaluation framework. We then show the results
obtained using Boosted SSC combined with gradient-based weak learners described in Sec. 3.3.
We continue with the results generated when applying the factorized embedding of the matrix A.
Finally, we present a comparison of our final descriptor with the state of the art.
4.1 Evaluation framework
We evaluate the performance of our methods using three publicly available datasets: Liberty, Notre
Dame and Yosemite [4]. Each of them contains over 400k scale- and rotation-normalized 64 × 64
patches. These patches are sampled around interest points detected using Difference of Gaussians, and the correspondences between patches are found using a multi-view stereo algorithm. The
datasets created this way exhibit substantial perspective distortion and various lighting conditions.
The ground truth available for each of these datasets describes 100k, 200k and 500k pairs of patches,
where 50% correspond to match pairs and 50% to non-match pairs. In our evaluation, we separately
consider each dataset for training and use the held-out datasets for testing. We report the results of
the evaluation in terms of ROC curves and 95% error rate, as is done in [4].
4.2 Boosted Gradient Maps
To show the performance boost we get by using gradient-based weak learners in our boosting
scheme, we plot the results for the original Boosted SSC method [3], which relies on thresholded
pixel intensities as weak learners, and for the same method using gradient-based weak learners
instead (referred to as Boosted Gradient Maps (BGM)), with q = 24 quantized orientation bins used
throughout our experiments. As we can see in Fig. 3(a), a 128-dimensional Boosted SSC descriptor
can be easily outperformed by a 32-dimensional BGM descriptor. When comparing descriptors of
the same dimensionality, the improvement measured in terms of 95% error rate reaches over 50%.
Furthermore, it is worth noticing that with 128 dimensions BGM performs similarly to SIFT, and
when we increase the dimensionality to 512, it outperforms SIFT by 14% in terms of 95% error
rate. When comparing the 256-dimensional SIFT (obtained by increasing the granularity of the orientation bins) with the 256-dimensional BGM, the extended SIFT descriptor performs much worse
² SIFT additionally normalizes each descriptor to be unit norm; however, the underlying representation is otherwise quite similar.
[Figure 3: ROC curves of true positive rate vs. false positive rate; train: Liberty (200k), test: Notre Dame (100k). (a) SIFT (128, 28.09%), Boosted SSC (128, 72.95%), BGM (32, 37.03%), BGM (64, 29.60%), BGM (128, 21.93%), BGM (256, 15.99%), BGM (512, 14.36%). (b) SIFT (128, 28.09%), Boosted SSC (128, 72.95%), BGM-PCA (32, 25.73%), L-BGM-Diag (32, 34.71%), L-BGM (32, 16.20%), L-BGM (64, 14.15%), L-BGM (128, 13.76%), L-BGM (256, 13.38%), L-BGM (512, 16.33%).]
Figure 3: (a) Boosted SSC using thresholded pixel intensities in comparison with our Boosted
Gradient Maps (BGM) approach. (b) Results after optimization of the correlation matrix A. Performance is evaluated with respect to factorization dimensionality d. In parentheses: the number of
dimensions and the 95% error rate.
(34.22% error rate vs. 15.99% for BGM-256). This indicates that boosting with a similar number of
non-linear classifiers adds to the performance, and shows how well tuned the SIFT descriptor is.
Visualizations of the learned weighting obtained with BGM trained on the Liberty, Notre Dame and
Yosemite datasets are displayed in Figure 2. To plot the visualizations, we sum the α's across
orientations within the rectangular regions of the corresponding weak learners. Note that although
there are some differences, interestingly this weighting closely resembles the Gaussian weighting
employed by SIFT.
4.3 Low-Dimensional Boosted Gradient Maps
To further improve performance, we optimize over the correlation matrix of the weak learners' responses, as explained in Sec. 3.1, and apply the embedding from Sec. 3.2. The results of this method
are shown in Fig. 3(b). In these experiments, we learn our L-BGM descriptor using the responses of
512 gradient-based weak learners selected with boosting. We first optimize over the weak learners'
correlation matrix which is constrained to be diagonal. This corresponds to a global optimization
of the weights of the weak learners. The resulting 32-dimensional L-BGM-Diag descriptor performs only slightly better than the corresponding 32-dimensional BGM. Interestingly, the additional
degrees of freedom obtained by optimizing over the full correlation matrix boost the results significantly and allow us to outperform SIFT with as few as 32 dimensions. When we compare our
128-dimensional descriptor, i.e., the descriptor of the same length as SIFT, we observe a 15% improvement in terms of 95% error rate. However, when we increase the descriptor length from 256 to
512 we see a slight performance drop, since we begin to include the "noisy" dimensions of our
embedding which correspond to the eigenvalues of low magnitude, a trend typical of many dimensionality reduction techniques. Hence, as our final descriptor, we select the 64-dimensional L-BGM
descriptor, as it provides a decent trade-off between performance and descriptor length.
Figure 3(b) also shows the results obtained by applying PCA to the responses of 512 gradient-based
weak learners (BGM-PCA). The descriptor generated this way performs similarly to SIFT; however,
our method still provides better results even for the same dimensionality, which shows the advantage
of optimizing the exponential loss of Eq. 5.
4.4 Comparison with the state of the art
Here we compare our approach against the following baselines: sum of squared differences of pixel
intensities (SSD), the state-of-the-art SIFT descriptor [1], SURF descriptor [2], binary LDAHash
descriptor [5], a real-valued descriptor computed by applying LDE projections on bias-gain normalized patches (LDA-int) [4] and the original Boosted SSC [3]. We have also tested recent binary
descriptors such as BRIEF [27], ORB [28] or BRISK [29]; however, they performed much worse
than the baselines presented in the paper. For SIFT, we use the publicly available implementation of
A. Vedaldi [30]. For SURF and LDAHash, we use the implementation available from the websites
of the authors. For the other methods, we use our own implementation. For LDA-int we choose
the dimensionality which was reported to perform the best on a given dataset according to [4]. For
Boosted SSC, we use 128 dimensions, as this obtained the best performance.
[Figure 4: ROC curves of true positive rate vs. false positive rate. (a) Train: Notre Dame (200k), test: Liberty (100k): SSD (1024, 69.11%), SIFT (128, 36.27%), SURF (64, 54.01%), LDAHash (128, 49.66%), LDA-int (27, 53.93%), Boosted SSC (128, 70.35%), BGM (256, 21.62%), L-BGM (64, 18.05%). (b) Train: Yosemite (200k), test: Notre Dame (100k): SSD (1024, 76.13%), SIFT (128, 28.09%), SURF (64, 45.51%), LDAHash (128, 51.58%), LDA-int (14, 49.14%), Boosted SSC (128, 72.20%), BGM (256, 14.69%), L-BGM (64, 13.73%).]
Figure 4: Comparison to the state of the art. In parentheses: the number of dimensions and the 95%
error rate. Our L-BGM approach outperforms SIFT by up to 18% in terms of 95% error rate using
half as many dimensions.
In Fig. 4 we plot the recognition curves for all the baselines and our method. BGM and L-BGM
outperform the baseline methods across all FP rates. The maximal performance boost is obtained
by using our 64-dimensional L-BGM descriptor, which results in an up to 18% improvement in terms
of 95% error rate with respect to the state-of-the-art SIFT descriptor. Descriptors derived from
patch intensities, i.e., SSD, Boosted SSC and LDA-int, perform much worse than the gradient-based
ones. Finally, our BGM and L-BGM descriptors far outperform SIFT, which relies on hand-crafted
filters applied to gradient maps. Moreover, with BGM and L-BGM we are able to reduce the 95%
error rate by over 3 times with respect to the other state-of-the-art descriptors, namely SURF and
LDAHash. We have computed the results for all the configurations of training and testing datasets
without observing any significant differences; thus we show here only a representative set of the
curves. More results can be found in the supplementary material.
Interestingly, the results we obtain are comparable with "the best of the best" results reported in [4].
However, since the code for their compact descriptors is not publicly available, we can only compare the performance in terms of the 95% error rates. Only the composite descriptors of [4] provide
some advantage over our compact L-BGM, as their average 95% error rate is 2% lower than that of
L-BGM. Nevertheless, we outperform their non-parametric descriptors by 12% and perform slightly
better than the parametric ones, while using descriptors that are an order of magnitude shorter. This comparison indicates that even though our approach does not require any complex pipeline optimization
and parameter tuning, we perform similarly to the finely optimized descriptors presented in [4].
5 Conclusions
In this paper we presented a new method for learning image descriptors by using Low-Dimensional
Boosted Gradient Maps (L-BGM). L-BGM offers an attractive alternative to traditional descriptor
learning techniques that model non-linearities based on the kernel-trick, relying on a pre-specified
kernel function whose selection can be difficult and unintuitive. In contrast, we have shown that
for the descriptor matching problem the boosting-trick leads to non-linear feature mappings whose
features have an intuitive explanation. We demonstrated the use of gradient-based weak learner
functions for learning descriptors within our framework, illustrating their close connection with the
well-known SIFT descriptor. A discriminative embedding technique was also presented, yielding
fairly compact and discriminative feature descriptions compared to the baseline methods. We evaluated our approach on benchmark datasets where L-BGM was shown to outperform leading contemporary hand-designed and learned feature descriptors. Unlike previous approaches, our L-BGM
descriptor can be learned directly from raw intensity patches achieving state-of-the-art performance.
Interesting avenues of future work include the exploration of other weak learner families for descriptor learning, e.g., SURF-like Haar features, and extensions to binary feature embeddings.
Acknowledgments
We would like to thank Karim Ali for sharing his feature code and his insightful feedback and discussions.
References
[1] Lowe, D.: Distinctive Image Features from Scale-Invariant Keypoints. IJCV 20(2) (2004) 91–110
[2] Bay, H., Tuytelaars, T., Van Gool, L.: SURF: Speeded Up Robust Features. In: ECCV'06
[3] Shakhnarovich, G.: Learning Task-Specific Similarity. PhD thesis, MIT (2006)
[4] Brown, M., Hua, G., Winder, S.: Discriminative Learning of Local Image Descriptors. PAMI
(2011)
[5] Strecha, C., Bronstein, A., Bronstein, M., Fua, P.: LDAHash: Improved Matching with Smaller
Descriptors. PAMI 34(1) (2012)
[6] Kulis, B., Jain, P., Grauman, K.: Fast Similarity Search for Learned Metrics. PAMI (2009) 2143–2157
[7] Shen, C., Kim, J., Wang, L., van den Hengel, A.: Positive Semidefinite Metric Learning with
Boosting. In: NIPS. (2009)
[8] Jain, P., Kulis, B., Davis, J., Dhillon, I.: Metric and Kernel Learning using a Linear Transformation. JMLR (2012)
[9] Bi, J., Wu, D., Lu, L., Liu, M., Tao, Y., Wolf, M.: AdaBoost on Low-Rank PSD Matrices for
Metric Learning. In: CVPR. (2011)
[10] Viola, P., Jones, M.: Rapid Object Detection Using a Boosted Cascade of Simple Features. In: CVPR'01
[11] Chapelle, O., Shivaswamy, P., Vadrevu, S., Weinberger, K., Zhang, Y., Tseng, B.: Boosted
Multi-Task Learning. Machine Learning (2010)
[12] Ali, K., Fleuret, F., Hasler, D., Fua, P.: A Real-Time Deformable Detector. PAMI 34(2) (2012) 225–239
[13] Dalal, N., Triggs, B.: Histograms of Oriented Gradients for Human Detection. In: CVPR'05
[14] Weiss, Y., Torralba, A., Fergus, R.: Spectral Hashing. NIPS 21 (2009) 1753–1760
[15] Kulis, B., Darrell, T.: Learning to Hash with Binary Reconstructive Embeddings. In: NIPS'09
[16] Salakhutdinov, R., Hinton, G.: Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure. In: International Conference on Artificial Intelligence and Statistics.
(2007)
[17] Salakhutdinov, R., Hinton, G.: Semantic Hashing. International Journal of Approximate
Reasoning (2009)
[18] Grauman, K., Darrell, T.: The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features. In: ICCV'05
[19] Shen, C., Welsh, A., Wang, L.: PSDBoost: Matrix Generation Linear Programming for Positive Semidefinite Matrices Learning. In: NIPS. (2008)
[20] Jia, Y., Huang, C., Darrell, T.: Beyond Spatial Pyramids: Receptive Field Learning for Pooled Image Features. In: CVPR'12
[21] Simonyan, K., Vedaldi, A., Zisserman, A.: Descriptor Learning Using Convex Optimisation. In: ECCV'12
[22] Dollár, P., Tu, Z., Perona, P., Belongie, S.: Integral Channel Features. In: BMVC'09
[23] Torralba, A., Fergus, R., Weiss, Y.: Small Codes and Large Databases for Recognition. In: CVPR'08
[24] Ali, K., Fleuret, F., Hasler, D., Fua, P.: A Real-Time Deformable Detector. PAMI (2011)
[25] Freund, Y., Schapire, R.: A Decision-Theoretic Generalization of On-Line Learning and an
Application to Boosting. In: European Conference on Computational Learning Theory. (1995)
[26] Rosset, S., Zhu, J., Hastie, T.: Boosting as a Regularized Path to a Maximum Margin Classifier.
JMLR (2004)
[27] Calonder, M., Lepetit, V., Ozuysal, M., Trzcinski, T., Strecha, C., Fua, P.: BRIEF: Computing a Local Binary Descriptor Very Fast. PAMI 34(7) (2012) 1281–1298
[28] Rublee, E., Rabaud, V., Konolige, K., Bradski, G.: ORB: An Efficient Alternative to SIFT or SURF. In: ICCV'11
[29] Leutenegger, S., Chli, M., Siegwart, R.: BRISK: Binary Robust Invariant Scalable Keypoints. In: ICCV'11
[30] Vedaldi, A.: http://www.vlfeat.org/~vedaldi/code/siftpp.html
Learning with Target Prior
Siwei Lyu
Computer Science, Univ. at Albany, SUNY
Albany, NY 12222
[email protected]
Zuoguan Wang
Dept. of ECSE, Rensselaer Polytechnic Inst.
Troy, NY 12180
[email protected]
Qiang Ji
Dept. of ECSE, Rensselaer Polytechnic Inst.
Troy, NY 12180
[email protected]
Gerwin Schalk
Wadsworth Center, NYS Dept. of Health
Albany, NY, 12201
[email protected]
Abstract
In the conventional approaches for supervised parametric learning, relations between data and target variables are provided through training sets consisting of
pairs of corresponded data and target variables. In this work, we describe a
new learning scheme for parametric learning, in which the target variables y can
be modeled with a prior model p(y) and the relations between data and target
variables are estimated with p(y) and a set of uncorresponded data X in training. We term this method learning with target priors (LTP). Specifically,
LTP learning seeks the parameter θ that maximizes the log-likelihood of f_θ(X) on
an uncorresponded training set with regard to p(y). Compared to the conventional
(semi)supervised learning approach, LTP can make efficient use of prior knowledge of the target variables in the form of probabilistic distributions, and thus removes/reduces the reliance on training data in learning. Compared to the Bayesian
approach, the learned parametric regressor in LTP can be more efficiently implemented and deployed in tasks where running efficiency is critical. We demonstrate
the effectiveness of the proposed approach on parametric regression tasks for BCI
signal decoding and pose estimation from video.
1 Introduction
One of the central problems in machine learning is prediction/inference, where given an input datum X, we would like to predict or infer the value of a target variable of interest, y, assuming X
and y have some intrinsic relationship. The prediction/inference task in many practical applications
involves high-dimensional and structured data and target variables. Depending on the form of knowledge about X and y and their relationship available to us, there are several different methodologies
to solve the prediction/inference problem.
In the Bayesian approach, our knowledge about the input and target variables, as well as their relationships, is represented as probability distributions. Correspondingly, the prediction/inference task
is solved with optimizations based on the posterior distribution p(y|X), a common choice of which
is the maximum a posteriori objective: max_y p(y|X). The posterior distribution can be explicitly
constructed from the target prior, p(y), which encodes our knowledge on the internal structure of
the target y, and the likelihood, p(X|y), which summarizes the process of generating X from y, as
p(y|X) ∝ p(X|y)p(y). Or it can be directly handled, as in conditional random fields [9], without
referring to the target prior or the likelihood. The advantage of the Bayesian approach is that it
incorporates prior knowledge about data and target variables into the prediction/inference task in a
principled manner. The main downside is that, in many practical problems, the relationship between
X and y could be complicated and defy straightforward modeling. Furthermore, except for a few
special cases (e.g., Gaussian models), the Bayesian prediction/inference of y from data X usually
requires expensive numerical optimization or Monte-Carlo sampling.
An alternative approach to prediction/inference is supervised parametric learning, where the information about X and y and their relationship is described in the form of a set of corresponding examples, {X_i, y_i}_{i=1}^m, and the goal of learning is to choose an optimal member from a parametric family f_θ(X) that minimizes the average prediction error using a loss function: min_θ (1/m) ∑_{i=1}^{m} L(y_i − f_θ(X_i)). Usually, the optimization may also include a regularization penalty
on θ to reduce over-fitting. The most significant drawback of the supervised parametric learning
approach is that the learning performance relies heavily on the quality and quantity of the training data. This problem is somewhat alleviated in semi-supervised learning [28], where the training
data include unlabeled examples of X. However, unlike the Bayesian approach, it is usually difficult to incorporate prior knowledge in the form of probabilistic distributions into (semi)supervised
parametric learning.
In this work, we describe a new approach to learning a parametric regressor f_θ(X), which we term
learning with target prior (LTP). In many practical applications, the target variables y follow
some regular spatial and temporal patterns that can be described probabilistically, and the observed
target variables are samples of such distributions. For instance, to perform an activity like grasping
a cup, the traces of finger movements tend to have similar patterns that are caused by many factors,
such as the underlying physiological, anatomical and dynamic constraints. Such regular patterns
can benefit the task of decoding the finger movements from ECoG signals in a brain-computer
interface (BCI) system (Fig. 1), as they regularize the decoder to produce similar patterns. Similarly,
the skeleton structure and the dynamic dependencies constrain the body pose to have similar spatial
and temporal patterns for the same activity (e.g., walking, running and jumping), which can be used
for body pose estimation in computer vision.
In LTP learning, we incorporate such spatial and temporal regular patterns of the target variables
into the learning framework. Specifically, we learn a probability distribution p(y) that captures the
spatial and temporal regularities of the target variable y; then we estimate the function parameters
θ by maximizing the log-likelihood of the output y = f_θ(X) with respect to the prior distribution. LTP learning can be applied to both unsupervised learning, in which no corresponded inputs
and outputs are available, and semi-supervised learning, in which part of the corresponding outputs are
available. We demonstrate the effectiveness of LTP learning on two problems: BCI decoding and
pose estimation.
The rest of the paper is organized as the following: Section 2 discusses the related work. Section
3 describes the general framework for our method and compares it with other existing methodologies.
In Sections 4 and 5, details on deployment and experimental evaluation of this general framework
in two applications, namely BCI decoding and pose estimation from video, are described. Section 6
concludes the paper with discussion and future works.
2 Related Work
LTP learning is related to several existing learning schemes. The prior knowledge about the target
variables in classification problems is exploited in recent works as learning with uncertain labels,
in which the distribution over the target class labels for each data example is used in place of corresponding pairs of data/target variables [10]. A similar idea in semi-supervised learning uses the
proportion of different classes [16, 28] to predict the class labels on the uncorresponded training
data examples. The knowledge about class proportions conditioned on certain input features is captured by generalized expectation (GE) [12, 13]. Several works directly embed domain
constraints about the target variables in learning. For instance, constraint driven learning (CODL)
[3] enforces task specific constraints on the target labels by appending a penalty term in the objective function. Posterior regularization [5] directly imposes regularization on the posterior of the
latent target variables, of which CODL can be seen as a special case with MAP approximation. A
general framework, which incorporates prior information as measurements in the Bayesian framework, is proposed in [11]. However, all these approaches have only been applied to problems with
discrete outputs (classification or labeling) and may be difficult to extend to incorporate complex
dependencies in high-dimensional continuous target variables.
LTP learning is also related to learning with structured outputs. Dependencies in the target variables
can be directly modeled in conditional random fields (CRF) [9], as a probabilistic graphical model
between the output components. However, the learned regressor is usually not in closed form and
predictions have to be obtained by numerical optimization or Monte-Carlo sampling. Some of the
recent supervised parametric learning methods can take advantage of some structure constraints over
the target variables. The max margin Markov network [21] trains an SVM classifier with outputs
whose structures are described by graphs. The structured SVM was further extended with high-order loss functions [20] or models with latent variables [27]. These methods can be viewed as special cases of LTP learning, where general probabilistic models for the target variables can be incorporated.

Figure 1: Experiment setup for this study.
3 General Framework
In this section, we describe the general framework of learning with target priors. Specifically, our
task is to learn the parameter θ in a parametric family of functions of X, f_θ(X), to best predict
the corresponding target variable y. Both the data and target variables can be of high dimension.
Knowledge about the target variable is provided through a target prior model in the form of a parametric
probability distribution, p_γ(y), with model parameter γ. The specific form of p_γ(y) is determined
based on different applications, ranging from simple distributions to more complex models such
as Markov random fields. The function parameter θ is estimated by maximizing the log-likelihood
log p_γ(f_θ(X)). In the following, we apply LTP learning to unsupervised learning, in which no
corresponded inputs and outputs are available, as well as semi-supervised learning, in which part of the
corresponding outputs are available.
For unsupervised learning, assume we are given a set of outputs y ∈ R^{Y×m}, as well as a set of
uncorresponded inputs X ∈ R^{X×n}, where Y and X are the dimensionalities and m and n are the
temporal lengths of y and X, respectively. This is applicable to the case of BCI, where it is easier to
gather inputs X or structured targets y than it is to gather corresponded inputs and targets (X, y).
In many real BCI applications the input brain signals X are collected only under thoughts, without
actual body movements y. The body movements can be easily collected when the brain signals
are not being recorded. In the problem of pose estimation, it is tedious work to label poses y on
the input images X. In both finger movement decoding and pose estimation, y and X could be
extracted from different subjects. A prior model p_γ(y) is learned from {y_i}_{i=1}^m, where y_i ∈ R^{Y×1}
and γ is the parameter of the prior model. Then the function parameter θ is estimated by maximizing
max_θ (1/n) ∑_{i=1}^{n} log p_γ(f_θ(X_i)),    (1)
where X_i ∈ R^{X×1}. The parameter θ is chosen such that the outputs on {X_i}_{i=1}^n are maximally
consistent with the prior distribution p_γ(y). The setting of semi-supervised learning is slightly different
from unsupervised learning, in that the corresponding inputs {X_i}_{i=1}^m of the outputs {y_i}_{i=1}^m are
also given. Then the learning becomes a combination of supervised and unsupervised learning:
min_θ (1/m) ∑_{i=1}^{m} L(y_i − f_θ(X_i)) − (λ/n) ∑_{i=1}^{n} log p_γ(f_θ(X_i)),    (2)
where L is the loss function and λ is a constant representing the tradeoff between the two terms. In
Eq. (2), the parameter θ is chosen such that the outputs not only minimize the loss function on
the training data, but also make the predicted targets on the unlabeled data comply with the target prior.
Next, we adapt unsupervised/semi-supervised learning with LTP to the prediction/inference tasks in two
applications, namely, decoding ECoG signals to predict finger movements in BCI and estimating
body poses from videos, where state-of-the-art performances are achieved.
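Before specializing to these applications, the semi-supervised objective of Eq. (2) can be sketched generically for a linear regressor f_θ(X) = X^T θ with squared loss. The callable prior_loglik_grad, which must return ∂ log p_γ(ŷ)/∂ŷ for the chosen target prior, is an assumption of this illustration.

import numpy as np

def ltp_gradient(theta, X_lab, y_lab, X_unlab, prior_loglik_grad, lam=1.0):
    # Gradient of Eq. (2) for f_theta(X) = X^T theta with squared loss.
    m, n = y_lab.shape[0], X_unlab.shape[1]
    g_sup = -(2.0 / m) * (X_lab @ (y_lab - X_lab.T @ theta))
    yhat = X_unlab.T @ theta
    g_prior = (lam / n) * (X_unlab @ prior_loglik_grad(yhat))
    return g_sup - g_prior   # descend this to decrease Eq. (2)

# toy usage with a standard-normal prior: d log p(yhat) / d yhat = -yhat
rng = np.random.default_rng(1)
theta = np.zeros(5)
X_lab, y_lab = rng.standard_normal((5, 20)), rng.standard_normal(20)
X_unlab = rng.standard_normal((5, 30))
for _ in range(200):
    theta -= 0.01 * ltp_gradient(theta, X_lab, y_lab, X_unlab,
                                 prior_loglik_grad=lambda yh: -yh)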
4 Finger Movement Decoding in ECoG based BCI
The main task in brain-computer interface (BCI) systems is to convert electronic signals recorded
from the human brain into control commands for patients with motor disabilities (e.g., paralysis). Many recent studies in neurobiology have suggested that electrocorticographic (ECoG) signals
recorded near the brain surface show strong correlations with limb motions [2, 8]. ECoG signal
decoding is the critical step in ECoG based BCI systems, the goal of which is to obtain a functional
mapping between the ECoG signals and the kinematic variables (e.g., spatial locations and movement velocities of fingers recorded by a digital glove) [8]. The ECoG decoding problem has been
widely addressed with supervised parametric learning [26, 8, 25], where corresponded ECoG signals
and target kinematic variables are collected from one subject and used to train a parametric regressor. However, the decoder learned from data collected from one subject in a controlled experiment
usually has trouble generalizing for the same subject over time and in an open environment (temporal generalization) [18], or decoding signals from other subjects (cross-subject generalization)
[24]. The former is due to the strong variances in ECoG signals that are caused by other concurrent
brain activities, and the latter is due to the difference in shape and volume of the brains for different
subjects. These limitations are regarded as the most challenging issues in current BCI systems [7].
There have been several works addressing these issues. For instance, to improve the generalization
performance across trials, several adaptive classification methods have been proposed [18], e.g., updating
the LDA with labeled feedback data. To generalize better across subjects, a collaborative paradigm
was proposed to integrate information from multiple subjects [24]. It is shown in [17] that
certain spectral features of ECoG signals can be used across subjects to classify movements. However, these methods do not provide satisfactory solutions, since the central challenge in extending the
parametric decoder across time and subjects is that the conventional parametric learning approach,
on which all these methods are based, relies on training data to obtain information for learning the
regressor, which in these cases is difficult to collect. At the same time, in BCI it is typically much
easier to gather samples of uncorresponded target variables, i.e., traces of finger movements recorded
by digital gloves, than it is to gather corresponding pairs of training samples.
Thus, in this work, we propose to improve the temporal and cross-subject generalization of BCI
decoders with the learning with target priors framework. In the first step, we obtain a parametric
target prior model using uncorresponded samples of the target data, in this case the traces of finger
positions. In the second step, we estimate a linear decoding function using the general method
described in Section 3. Let us first define the notation to be used subsequently: we use a linear
decoding function, f_θ(X) = X^T θ, to predict the traces of finger movements y as target variables.
Specifically, we define y ∈ R^Y, where Y corresponds to the number of samples in the finger traces.
X ∈ R^{L×Y} is a matrix whose columns are a subset of ECoG signal features of length L. The model
parameter θ ∈ R^L is a vector. Linear decoding functions are widely used in BCI decoding [1] for their
simplicity and run-time efficiency in constructing hardware-based BCI systems.
4.1 Target Prior Model
We use the Gaussian-Bernoulli restricted Boltzmann machine (GB-RBM) [14], p_γ(y) = (1/Z) ∑_h e^{−E_γ(y,h)}, where Z is the normalizing constant and h ∈ {0, 1}^H are binary hidden variables, as the parametric target prior model. The pdf is defined in terms of the joint energy function over y and h, as:
E_γ(y, h) = ∑_{i=1}^{Y} (y_i − c_i)²/2 − ∑_{j=1}^{H} ∑_{i=1}^{Y} W_{ji} h_j y_i − ∑_{j=1}^{H} b_j h_j,
where W_{ij} is the interaction strength between the hidden node h_i and the visible node y_j, and c and b are
the biases for the visible and hidden layers, respectively. The target variable y is normalized to
have zero mean and unit standard deviation. The parameters in this model, (W, c, b), are collectively represented by γ. Direct maximum likelihood training of the GB-RBM is intractable due to the
normalizing factor Z, so we use contrastive divergence [6] to estimate γ from data.
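A minimal sketch of one contrastive-divergence (CD-1) update for a unit-variance GB-RBM is given below; the exact training schedule (momentum, weight decay, number of Gibbs steps) used in the paper is not specified, so this is an assumption-laden illustration.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_step(Yb, W, b, c, lr=1e-3, rng=np.random.default_rng(0)):
    # One CD-1 update for a unit-variance Gaussian-Bernoulli RBM.
    # Yb: batch x Y visible vectors; W: H x Y; b: hidden bias; c: visible bias.
    ph = sigmoid(Yb @ W.T + b)                      # p(h = 1 | y)
    h = (rng.random(ph.shape) < ph).astype(float)   # sampled hidden states
    y_rec = h @ W + c                               # mean-field reconstruction
    ph_rec = sigmoid(y_rec @ W.T + b)
    n = Yb.shape[0]
    W += lr * (ph.T @ Yb - ph_rec.T @ y_rec) / n
    b += lr * (ph - ph_rec).mean(axis=0)
    c += lr * (Yb - y_rec).mean(axis=0)
    return W, b, c

# toy usage on hypothetical normalized 12-sample trace segments
rng = np.random.default_rng(0)
Yb = rng.standard_normal((25, 12))                  # mini-batch of 25
W = 0.01 * rng.standard_normal((64, 12))            # 64 hidden units
b, c = np.zeros(64), np.zeros(12)
W, b, c = cd1_step(Yb, W, b, c)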
4.2 Learning Regressor Parameter θ
With the training data and the GB-RBM as the target prior model, we optimize the objective function of
LTP in Eq. (1) or (2) for the parameters θ. With the linear decoding function and squared loss function,
the gradient of the first term of Eq. (2) can be computed as −(2/m) ∑_{i=1}^{m} X_i (y_i − X_i^T θ). The derivative
over θ of the log-likelihood of X^T θ with regard to the prior model can be computed as

∂ log p_γ(X^T θ) / ∂θ = ∑_h p_γ(h | X^T θ) ∂(−E(X^T θ, h)) / ∂θ.    (3)
Plugging the energy function E into Eq. (3), we can simplify it to

∂ log p_γ(X^T θ) / ∂θ = −X(X^T θ − c) + XW^T ∑_h p_γ(h | X^T θ) h,    (4)
where ∑_h p_γ(h | X^T θ) h can be computed using the property of the GB-RBM that the elements of h
are conditionally independent given X^T θ. Specifically, assume g = ∑_h p_γ(h | X^T θ) h; then
g_i = σ(W_i X^T θ + b_i), where W_i is the ith row of W and σ is the logistic function σ(x) = 1/(1 + exp(−x)). The expectation of
the derivative over all sequences, composed of Y successive samples in the training data, can be
expressed as ⟨∂ log p_γ(X^T θ)/∂θ⟩_data, where ⟨·⟩_data stands for the expectation over the data.
4.3 Experimental Settings
The ECoG data and target finger movement variables were collected in a clinical setting from
five subjects (A–E) who underwent brain surgeries [8]. Each subject had a 48- or 64-electrode grid
placed over the cortex. During the experiment, the subjects were required to repeatedly flex and extend
specific individual fingers according to visual cues on a video screen. The experimental setup is shown
in Fig. 1. The data collection for each subject lasted 10 minutes, which yielded an average of 30
trials for each finger. The flexion of each finger was measured by a data glove. For each channel,
features are extracted based on signal power in three bands (1–60 Hz, 60–100 Hz, 100–200 Hz) [2],
which results in 144 or 204 features for subjects with 48 or 64 channels, respectively.
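A plausible band-power feature extractor is sketched below using Welch's method; the exact spectral estimator, windowing, and sampling rate used in [2, 8] are assumptions here.

import numpy as np
from scipy.signal import welch

def band_power_features(ecog, fs=1000,
                        bands=((1, 60), (60, 100), (100, 200))):
    # ecog: channels x time samples; returns 3 * channels features.
    f, pxx = welch(ecog, fs=fs, nperseg=min(256, ecog.shape[1]), axis=-1)
    feats = [pxx[:, (f >= lo) & (f < hi)].mean(axis=-1) for lo, hi in bands]
    return np.concatenate(feats)

ecog = np.random.default_rng(4).standard_normal((48, 1000))
print(band_power_features(ecog).shape)              # (144,) for 48 channels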
4.4 Learning Target Prior Model and Decoding Function
The training data for the prior model p_γ(y) are either from other subjects or from the same subject
but collected at a different time, and they do not have correspondence with the training input data.
Here we consider finger movement traces composed only of flexion and extension, as in Fig. 2(A).
This simplified model is still practically useful, since we can first classify the trace into a movement
state or a rest state and then apply the corresponding regressor for each state [4]. Each subject has
around 1400 samples. We model the finger movement trace using a GB-RBM with 64 hidden
nodes and 12 visible nodes, which is approximately the length of one round of extension and flexion.
Then, all segments of 12 successive samples in the data are used to train the prior model.
The GB-RBM is trained with stochastic gradient descent with a mini-batch size of 25 sub-sequences.
We run 5000 epochs with a fixed learning rate of 0.001. We first validate the prior model by drawing
samples from the learned GB-RBM. Figure 2(B) shows 9 samples, which seem to capture some
important properties of the temporal dynamics of the finger trace.
Figure 2: (A) Original trace; (B) samples from the GB-RBM. Each sample is a segment of length 12.
With the prior model, the paired features/target variables, if they exist, and the unpaired features, on
which the expectation of Eq. (4) is calculated, are used to learn the parameter θ. θ is randomly
initialized and learned with stochastic gradient descent with the same batch size of 25. We run 2000
epochs with a fixed learning rate of 10^{−4}.
4.5 Generalization Across Subjects
We learn the decoding function for new subjects by deploying the unsupervised LTP learning of Section 3. Even though it is difficult to get corresponded samples from new subjects, we always have
the input ECoG signals, whose features are used as the input of the unsupervised LTP learning.
We compare the unsupervised LTP learning with linear regression [2] in two ways: 1) linear
regression (intra subject), in which the corresponded data and target variables are available; the
accuracy of linear regression is calculated based on five-fold cross-validation, that is, 4/5 of the
trials (25 trials) are used for training and 1/5 of the trials (5 trials) are used for testing; 2) linear
regression (inter subject), trained on one subject and tested on the other subjects.
Table 1: Results on the thumb of five subjects based on 2-fold cross-validation (correlation coefficient).

Subject              |   A   |   B   |   C   |   D   |   E
Linear Regression    |  0.29 |  0.26 |  0.06 |  0.10 |  0.11
Semi-supervised LTP  |  0.38 |  0.42 |  0.13 |  0.15 |  0.12
The results for the inter-subject setting are calculated based on 5-fold cross-validation (each time, one subject is used for training and the model
is tested on the other four subjects). Linear regression is trained on pairs of features and targets, while
LTP only uses the targets to train the prior model. For the linear regression trained and tested on
different subjects, the channels across subjects are aligned by the 3-D positions of the sensors.
Figure 3(A) shows the performance comparison of the three models. Note that the performance of
the unsupervised LTP learning is on par with that of linear regression (intra subject) on subjects A, B, C
and D, which suggests that the decoder learned by unsupervised LTP learning can generalize across
subjects. Figures 3(B) and (C) show two examples of prediction results from the unsupervised
LTP learning. On the other hand, not surprisingly, the performance of linear regression (inter
subject) suggests that it cannot be extended across subjects, which is due to brain differences between
subjects, as stated above. The generalization ability gained by unsupervised LTP learning
is mainly because it directly learns decoding functions on the new subject without using brain signals
from existing subjects, which are believed to change dramatically among subjects. One thing we
noticed is that the unsupervised LTP learning does not work well on subject E, which is because the
thumb movement speed of subject E is much slower than that of subject A, on which the prior model is
trained. This suggests that the quality of the target prior model is critical for the performance.
[Figure 3(A): bar chart of correlation coefficients on subjects A–E for linear regression (intra subject), unsupervised LTP (inter subject), and linear regression (inter subject); (B) and (C): predicted vs. ground-truth thumb traces over roughly 300 samples.]
Figure 3: (A) Comparison among three models across subjects; (B) sample results for subject A;
(C) sample results for subject B. The dotted line is the ground truth and the solid line is the prediction.
4.6 Online Learning for Decoding Functions
In the next set of experiments, we use the learning with target priors framework for learning decoding
functions that generalize over time. This experiment is performed for each subject individually. For
each subject, let {X_i, y_i}_{i=1}^m be the training data in the current trial and {X_j}_{j=1}^n be the new
samples unseen in training. We first train the prior model on {y_i}_{i=1}^m. Then the parameter θ is learned
using the semi-supervised learning of Section 3.
The new samples come sequentially, and thus we want the decoding function to be updated online.
The parameter θ is not updated for every new coming sample, but for every batch of data X ∈ R^{L×Y}.
Here we give a brief description of the online batch updating method. At the start, the parameter
θ is learned from the corresponding pairs of samples {X_i, y_i}_{i=1}^m. Then the decoding function
with parameter θ is used to decode the first batch {X_j}_{j=1}^Y. After the batch {X_j}_{j=1}^Y is decoded,
{X_j}_{j=1}^Y, not including the predicted target variables, is included as part of the unlabeled training
data to update the parameter θ by the semi-supervised learning of Section 3. Then the updated θ
is used to decode the second batch {X_j}_{j=Y+1}^{2Y}, and the process loops. In summary, after each new
coming batch is decoded using the current parameter θ, it is included as training data to update
the parameter θ. Generally, we are trying to maximally use the "seen" data to get the decoding function
prepared for the "unseen" coming samples.
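The loop can be sketched as follows; decode_fn and update_fn stand in for the linear decoder and for re-solving the semi-supervised objective of Eq. (2), respectively, and are assumptions of this illustration.

import numpy as np

def online_decode(theta, batches, update_fn, decode_fn):
    # Decode each incoming batch with the current theta, then fold the
    # batch into the unlabeled pool and re-estimate theta (Sec. 4.6).
    unlabeled, predictions = [], []
    for Xb in batches:                              # Xb: L x Y features
        predictions.append(decode_fn(theta, Xb))
        unlabeled.append(Xb)
        theta = update_fn(theta, np.concatenate(unlabeled, axis=1))
    return theta, predictions

decode = lambda th, X: X.T @ th
# update_fn would re-run the semi-supervised learning of Eq. (2);
# a no-op is used here purely to keep the sketch runnable.
theta, preds = online_decode(np.zeros(144), [np.ones((144, 12))] * 3,
                             update_fn=lambda th, X: th, decode_fn=decode)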
The batch size Y is chosen to be 12. The model is tested on the thumb of five subjects based on 2-fold
cross-validation; that is, we treat the first 15 trials as the paired data/target variables and then
test online on the remaining trials. After that, in turn, we treat the last 15 trials as the paired data/target
variables and use the first 15 trials for online testing. The results in Table 1 show that the proposed model
with online batch updating can significantly improve the results. This means that, by regularizing
the new features with the target prior, the semi-supervised learning of Section 3 successfully obtains
information from the new features and adapts the decoders well for new coming samples.
5 Pose Estimation from Videos
In this section, we apply learning with target priors to the pose estimation problem,
the goal of which is to extract 3-D human poses from images or video sequences. We demonstrate
LTP by applying it to learn a linear mapping from image features to poses, although LTP could be used
to learn more sophisticated models. We will show that the algorithms learned by LTP are more
generalizable both across subjects and over time on the same subject.
In this experiment, we use six walking sequences from the CMU MoCap database
(http://mocap.cs.cmu.edu). The data are from 3 subjects, with sequences 1 & 2 from the first
subject, sequences 3 & 4 from the second subject, and sequences 5 & 6 from the third subject. Each
sequence consists of about 70 frames. Our task is to estimate the 3-D pose from videos, which is
described by 59-dimensional joint angles. The image feature is extracted from the silhouette image
at the side view. For each silhouette image we take 10-dimensional moment features [23].
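One plausible reading of such features is a set of normalized central moments of the binary silhouette, sketched below; the specific ten orders are our guess, as [23] may define the 10-dimensional set differently.

import numpy as np

def central_moments(sil, orders=((2, 0), (1, 1), (0, 2), (3, 0), (2, 1),
                                 (1, 2), (0, 3), (4, 0), (2, 2), (0, 4))):
    # Scale-normalized central moments of a binary silhouette image.
    ys, xs = np.nonzero(sil)
    m00 = float(len(xs))
    xc, yc = xs.mean(), ys.mean()
    feats = [((xs - xc) ** p * (ys - yc) ** q).sum() / m00 ** (1 + (p + q) / 2)
             for p, q in orders]
    return np.array(feats)

sil = np.zeros((64, 64)); sil[20:44, 24:40] = 1
print(central_moments(sil))                          # 10-D feature vector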
We represent the video sequence as {X_i, y_i}_{i=1}^n, where X ∈ R^{10×n} are the image features and
y ∈ R^{59×n} are the joint angles; n is the length of the sequence and could differ across
sequences. Instead of directly mapping features to the 59-dimensional joint angles, we learn
a function which maps the features to the 3-dimensional subspace of joint angles obtained through
PCA. Then the original space of joint angles is recovered from the low-dimensional subspace.
Algorithm 1 Learning with target priors
Input: joint angles {y_i}_{i=1}^n, test features X*
Output: y* corresponding to X*
Steps:
1: PCA: y ≈ EZ, where E ∈ R^{59×3}, Z ∈ R^{3×n}
2: learn the prior model p_γ on Z
3: learn the mapping Z* = f_θ(X*) using the unsupervised LTP learning of Section 3
4: recover the original space: y* = EZ*
All possible segments composed of 60 successive frames in the sequence are used to train the GB-RBM,
so the length of the input vector to the GB-RBM is 180 (the subspace is 3-dimensional).
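A runnable sketch of Algorithm 1 is given below; learn_prior and ltp_learn stand in for the GB-RBM training and the unsupervised LTP step of Section 3, and are supplied as callables since their internals are described elsewhere in the paper.

import numpy as np

def ltp_pose_pipeline(Y_train, X_test, learn_prior, ltp_learn):
    # Y_train: 59 x n joint angles; X_test: 10 x n' moment features.
    mu = Y_train.mean(axis=1, keepdims=True)
    U, _, _ = np.linalg.svd(Y_train - mu, full_matrices=False)
    E = U[:, :3]                              # step 1: 59 x 3 PCA basis
    Z = E.T @ (Y_train - mu)                  # 3 x n subspace trajectories
    prior = learn_prior(Z)                    # step 2: GB-RBM on Z
    Z_star = ltp_learn(prior, X_test)         # step 3: unsupervised LTP
    return E @ Z_star + mu                    # step 4: recover joint angles

Yt = np.random.default_rng(5).standard_normal((59, 70))
Xt = np.random.default_rng(6).standard_normal((10, 70))
Y_star = ltp_pose_pipeline(Yt, Xt, learn_prior=lambda Z: None,
                           ltp_learn=lambda p, X: np.zeros((3, X.shape[1])))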
Many methods have been proposed to address the pose estimation problem, among which sGPLVM
[19], FOLS-GPLVM [15] and imCRBM [22] are three very competitive ones. sGPLVM models
a latent space shared by pose and image features through the GPLVM, while FOLS-GPLVM models a
shared latent space and a private latent space for each part. imCRBM constructs a pose prior for the
Bayesian model using an implicit mixture of CRBMs. However, Taylor's work [22] is not comparable
to our method, because it requires a generative model to directly map a pose to a silhouette, while
our method explicitly uses the extracted moment features, and the comparison here focuses on algorithms instead of features. So we compare with sGPLVM and FOLS-GPLVM using the same
image features. The training of both sGPLVM and FOLS-GPLVM requires corresponded images and
poses (X, y), while LTP does not require this.
For the unsupervised LTP learning, the target prior model is trained on the subspace of the joint
angles {y_i}_{i=1}^n of sequence 1 and tested on the features of all 6 sequences. The implementation
details are shown in Algorithm 1. Besides sGPLVM and FOLS-GPLVM, the results are also compared with ridge regression. Ridge regression, sGPLVM and FOLS-GPLVM are trained on the first
sequence with paired samples {X_i, y_i}_{i=1}^n and tested on all 6 sequences. The implementation
of ridge regression is similar to that in Algorithm 1; the only difference is that the mapping from
features to the PCA subspace is through ridge regression.
The results are measured in terms of mean absolute joint angle error and are shown in Table 2.
We can see that when testing on the sequence from the same subject (sequence 2), unsupervised
LTP learning is not the best. In contrast, when testing on the sequences from subjects B and C,
unsupervised LTP learning achieves the best results, slightly better than sGPLVM. Considering that only a linear dimension reduction and a linear function are assumed for unsupervised
LTP learning, and that paired samples are not required, unsupervised LTP learning is even more competitive. FOLS-GPLVM does not perform well on this data set, which is probably due to the limited
training samples. Thus the experiments demonstrate that the algorithm learned by the unsupervised LTP learning of Section 3 can generalize well across subjects.
Table 2: Train the prior model on the first sequence and test on all sequences. Results are measured with mean absolute joint angle error.

Subject             |   A   |   A   |   B   |   B   |   C   |   C
Sequence Num        |   1   |   2   |   3   |   4   |   5   |   6
Ridge Regression    |  2.1  |  4.8  |  8.3  |  8.5  | 10.7  | 10.7
sGPLVM              |   –   |  3.1  |  5.6  |  6.1  |  3.0  |  3.1
FOLS-GPLVM          |   –   |  5.3  |  6.5  |  6.4  |  3.3  |  4.0
Unsupervised LTP    |  3.0  |  4.8  |  5.3  |  6.1  |  2.9  |  2.9
Table 3: For each subject, train on the first sequence and test on the second sequence. Results are
measured with absolute joint angle error.
Subject              |   A   |   B   |   C
Ridge Regression     |  4.8  |  5.3  |  3.1
sGPLVM               |  3.1  |  5.3  |  3.0
FOLS-GPLVM           |  5.3  |  5.8  |  3.8
Semi-supervised LTP  |  2.87 |  3.97 |  2.33
The reason that ridge regression, sGPLVM and FOLS-GPLVM do not generalize well is that the relations between poses and images
are learned solely from corresponded poses and images, and these relations may not
hold for new subjects due to many factors (e.g., the video for the new subject being recorded from a
slightly different angle). LTP avoids this problem by learning the relations using the generalizable
prior distribution over the targets and the images from the new subjects.
We further demonstrate that the algorithm learned through the semi-supervised learning of Section 3
generalizes well across time for the same subject. In this experiment, for each subject we treat the
first sequence as the paired samples {X_i, y_i}_{i=1}^m and estimate the 3-D poses of the second sequence
{X_j}_{j=1}^n. The prior model is trained on the joint angles of the first sequence, {y_i}_{i=1}^m. The algorithm
is similar to Algorithm 1, except that the unsupervised LTP learning is replaced with semi-supervised
learning. The results in Table 3 show that the semi-supervised learning of Section 3 significantly
outperforms the three other methods.
6 Conclusion and Discussion
In this work, we describe a new learning scheme for parametric learning, known as learning with
target priors, that uses a prior model over the target variables and a set of uncorresponded data in
training. Compared to the conventional (semi)supervised learning approach, LTP can make efficient use of prior knowledge of the target variables in the form of probabilistic distributions, and
thus removes/reduces the reliance on training data in learning. Compared to the Bayesian approach,
the learned parametric regressor in LTP can be more efficiently implemented and deployed in tasks
where running efficiency is critical, such as on-line BCI signal decoding. We demonstrate the effectiveness of the proposed approach in terms of generalization on parametric regression tasks for BCI
signal decoding and pose estimation from video.
There are several extensions of this work we would like to further pursue. First, in the current work
we only use a simple target prior model in the form of GB-RBM. There are, however, more flexible
probabilistic models, such as Markov random fields or dynamic Bayesian networks, that can better
represent statistical properties in the target variables. Therefore, we would like to incorporate such
models into LTP learning to further improve the performance. Second, we would like to investigate
the connection between conventional capacity control methods (e.g., max margin or regularization)
with LTP learning. This has the potential to unify and shed light on the deeper relation among
different learning methodologies. Last, we would also like to use LTP learning with nonlinear
decoding functions.
Acknowledgement The authors would like to thank Jixu Chen for providing the motion capture
data and feature extraction code. Zuoguan Wang and Qiang Ji are supported in part by a grant from
US Army Research Office (W911NF-08-1-0216 (GS)) through Albany Medical College. Gerwin
Schalk is supported by US Army Research Office grants W911NF-08-1-0216 (GS) and W911NF-07-1-0415 (GS), and the NIH (EB006356 (GS) and EB000856 (GS)). Siwei Lyu is supported by an NSF
CAREER Award (IIS-0953373).
References
[1] Bashashati, Ali, Fatourechi, Mehrdad, Ward, Rabab K., and Birch, Gary E. A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals. J. Neural Eng., 4, June 2007.
[2] Bougrain, Laurent and Liang, Nanying. Band-specific features improve finger flexion prediction from ECoG. In Jornadas Argentinas sobre Interfaces Cerebro Computadora - JAICC, Paraná, Argentina, 2009.
[3] Chang, Mingwei, Ratinov, Lev, and Roth, Dan. Guiding semi-supervision with constraint-driven learning. In Proc. of the Annual Meeting of the ACL, 2007.
[4] Flamary, Rémi and Rakotomamonjy, Alain. Decoding finger movements from ECoG signals using switching linear models. Technical report, September 2009.
[5] Ganchev, Kuzman, Graça, João, Gillenwater, Jennifer, and Taskar, Ben. Posterior regularization for structured latent variable models. JMLR, 11(July):2001-2049, 2010.
[6] Hinton, Geoffrey. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.
[7] Krusienski, Dean J, Grosse-Wentrup, Moritz, Galán, Ferran, Coyle, Damien, Miller, Kai J, Forney, Elliott, and Anderson, Charles W. Critical issues in state-of-the-art brain-computer interface signal processing. Journal of Neural Engineering, 8(2):025002, 2011.
[8] Kubánek, J, Miller, K J, Ojemann, J G, Wolpaw, J R, and Schalk, G. Decoding flexion of individual fingers using electrocorticographic signals in humans. J Neural Eng, 6(6):066001, Dec 2009.
[9] Lafferty, John. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pp. 282-289. Morgan Kaufmann, 2001.
[10] Lefort, Riwal, Fablet, Ronan, and Boucher, Jean-Marc. Weakly supervised classification of objects in images using soft random forests. In ECCV, pp. 185-198, 2010.
[11] Liang, Percy, Jordan, Michael I., and Klein, Dan. Learning from measurements in exponential families. In ICML '09, pp. 641-648, New York, NY, USA, 2009. ACM.
[12] Mann, Gideon S. and McCallum, Andrew. Simple, robust, scalable semi-supervised learning via expectation regularization. In ICML, pp. 593-600, 2007.
[13] Mann, Gideon S. and McCallum, Andrew. Generalized expectation criteria for semi-supervised learning of conditional random fields. In ACL '08, pp. 870-878, 2008.
[14] Mohamed, A., Dahl, G., and Hinton, G. Acoustic modeling using deep belief networks. IEEE Transactions on Audio, Speech, and Language Processing, PP(99):1, 2011.
[15] Salzmann, Mathieu, Ek, Carl Henrik, Urtasun, Raquel, and Darrell, Trevor. Factorized orthogonal latent spaces. JMLR, 9:701-708, 2010.
[16] Schapire, Robert E., Rochery, Marie, Rahim, Mazin G., and Gupta, Narendra. Incorporating prior knowledge into boosting. In ICML, 2002.
[17] Shenoy, P., Miller, K.J., Ojemann, J.G., and Rao, R.P.N. Generalized features for electrocorticographic BCIs. IEEE Transactions on Biomedical Engineering, 55(1), Jan. 2008.
[18] Shenoy, Pradeep, Krauledat, Matthias, Blankertz, Benjamin, Rao, Rajesh P. N., and Müller, Klaus-Robert. Towards adaptive classification for BCI. Journal of Neural Engineering, 2006.
[19] Shon, Aaron P., Grochow, Keith, Hertzmann, Aaron, and Rao, Rajesh P. N. Learning shared latent structure for image synthesis and robotic imitation. In NIPS, pp. 1233-1240, 2006.
[20] Tarlow, Daniel and Zemel, Richard S. Structured output learning with high order loss functions. In AISTATS, 2012.
[21] Taskar, Ben, Guestrin, Carlos, and Koller, Daphne. Max-margin Markov networks. In NIPS. MIT Press, 2003.
[22] Taylor, G.W., Sigal, L., Fleet, D.J., and Hinton, G.E. Dynamical binary latent variable models for 3D human pose tracking. In CVPR, pp. 631-638, June 2010.
[23] Tian, Tai-Peng, Li, Rui, and Sclaroff, S. Articulated pose estimation in a learned smooth space of feasible solutions. In CVPR, pp. 50, June 2005.
[24] Wang, Yijun and Jung, Tzyy-Ping. A collaborative brain-computer interface for improving human performance. PLoS ONE, 6(5):e20422, May 2011.
[25] Wang, Zuoguan, Ji, Qiang, Miller, Kai J., and Schalk, Gerwin. Decoding finger flexion from electrocorticographic signals using a sparse Gaussian process. In ICPR, pp. 3756-3759, 2010.
[26] Wang, Zuoguan, Schalk, Gerwin, and Ji, Qiang. Anatomically constrained decoding of finger flexion from electrocorticographic signals. In NIPS, 2011.
[27] Yu, C.-N. and Joachims, T. Learning structural SVMs with latent variables. In ICML, 2009.
[28] Zhu, Xiaojin. Semi-supervised learning literature survey, 2006. URL http://pages.cs.wisc.edu/~jerryzhu/pub/ssl_survey.pdf.
| 4849 |@word trial:11 private:1 sgplvm:10 proportion:2 tedious:1 open:1 seek:1 fatourechi:1 eng:2 contrastive:2 solid:1 reduction:1 moment:2 pub:1 salzmann:1 daniel:1 outperforms:1 existing:3 current:4 recovered:1 rpi:2 john:1 numerical:2 visible:3 ronan:1 shape:1 motor:1 remove:2 update:2 cue:1 generative:1 mccallum:2 ith:1 imcrbm:2 num:1 tarlow:1 boosting:1 node:4 location:1 successive:3 org:1 daphne:1 five:3 constructed:1 direct:1 consists:1 fitting:1 dan:2 manner:1 peng:1 inter:5 crbm:1 ry:2 brain:15 actual:1 considering:1 becomes:1 provided:2 underlying:1 notation:1 maximizes:1 joao:1 factorized:1 minimizes:1 pursue:1 generalizable:2 argentina:1 grochow:1 nj:1 lsw:1 temporal:8 every:2 graca:1 shed:1 rahim:1 classifier:1 control:1 unit:1 grant:1 medical:1 segmenting:1 shenoy:2 engineering:3 treat:3 switching:1 lev:1 laurent:1 solely:1 approximately:1 acl:2 collect:1 challenging:1 suggests:2 deployment:1 limited:1 tian:1 practical:3 enforces:1 yj:1 flex:1 testing:4 wolpaw:1 jan:1 thought:1 significantly:2 alleviated:1 integrating:1 regular:3 suggest:1 get:2 cannot:1 unlabeled:3 krusienski:1 applying:1 optimize:1 conventional:5 map:3 dean:1 center:1 maximizing:3 roth:1 straightforward:1 survey:2 unify:1 simplicity:1 regarded:1 updated:3 target:69 controlling:1 heavily:1 decode:3 carl:1 us:4 velocity:1 element:1 expensive:1 walking:2 updating:3 labeled:1 electrocorticographic:5 observed:1 database:1 taskar:2 wang:5 solved:2 capture:3 electrical:1 wentrup:1 grasping:1 movement:16 plo:1 yijun:1 principled:1 benjamin:1 environment:1 skeleton:1 hertzmann:1 ojemann:2 dynamic:4 trained:8 weakly:1 segment:3 ali:1 efficiency:3 easily:1 joint:12 represented:2 finger:22 train:8 univ:1 articulated:1 describe:4 monte:2 zemel:1 corresponded:9 labeling:2 klaus:1 whose:3 jean:1 widely:2 solve:1 kai:2 cvpr:2 drawing:1 tested:6 bci:18 ability:1 gi:1 unseen:1 ward:1 online:6 advantage:2 sequence:28 matthias:1 propose:1 interaction:1 coming:4 product:1 aligned:1 loop:1 adapts:1 flamary:1 description:1 validate:1 regularity:1 electrode:1 extending:1 darrell:1 produce:1 generating:1 ben:2 object:1 depending:1 andrew:2 damien:1 pose:25 measured:4 keith:1 aug:1 eq:5 strong:2 implemented:2 c:3 involves:1 predicted:2 come:1 drawback:1 subsequently:1 stochastic:2 human:5 mann:2 require:2 generalization:7 ecog:16 extension:3 hold:1 practically:1 around:1 ground:1 exp:1 lyu:2 predict:5 mapping:5 bj:1 narendra:1 achieves:1 estimation:12 albany:5 proc:1 applicable:1 label:5 individually:1 concurrent:1 successfully:1 ganchev:1 ferran:1 ecse:2 uller:1 mit:1 sensor:1 gaussian:3 always:1 hj:2 command:1 probabilistically:1 office:2 focus:1 june:3 joachim:1 bernoulli:1 likelihood:7 mainly:1 lasted:1 contrast:1 inst:2 inference:8 posteriori:1 typically:1 hidden:4 relation:6 koller:1 wij:2 issue:3 classification:5 among:4 flexible:1 spatial:5 special:3 art:2 wadsworth:2 constrained:1 field:6 construct:1 extraction:1 sampling:2 qiang:4 yu:1 unsupervised:24 icml:4 coyle:1 future:1 report:1 zuoguan:4 richard:1 few:1 simplify:1 randomly:1 composed:3 divergence:2 individual:2 consisting:1 interest:1 investigate:1 kinematic:2 intra:3 evaluation:1 mixture:1 pradeep:1 light:1 rajesh:2 jumping:1 orthogonal:1 taylor:2 initialized:1 uncertain:1 instance:3 classify:2 soft:1 modeling:2 downside:1 column:1 rao:3 w911nf:3 jerryzhu:1 rakotomamonjy:1 addressing:1 subset:1 dependency:3 referring:1 probabilistic:7 decoding:28 regressor:8 michael:1 synthesis:1 squared:1 central:2 recorded:6 choose:1 expert:1 derivative:2 ek:1 li:1 potential:1 
coefficient:2 explicitly:2 caused:2 performed:1 view:1 closed:1 start:1 recover:1 competitive:2 complicated:1 carlos:1 collaborative:2 minimize:1 ni:5 accuracy:1 variance:2 who:1 efficiently:2 miller:4 kaufmann:1 generalize:6 bayesian:9 thumb:3 carlo:2 rx:2 uncorresponded:7 ping:1 siwei:2 deploying:1 trevor:1 energy:2 pp:10 mohamed:1 rbm:11 birch:1 knowledge:11 dimensionality:1 organized:1 amplitude:2 sophisticated:1 back:1 feed:1 supervised:26 follow:1 methodology:3 maximally:2 though:1 anderson:1 furthermore:1 implicit:1 biomedical:1 correlation:3 hand:1 replacing:1 nonlinear:1 logistic:1 boucher:1 quality:2 lda:1 bcis:1 semisupervised:1 usa:1 normalized:3 former:1 regularization:6 moritz:1 satisfactory:1 round:1 during:1 criterion:1 generalized:3 trying:1 pdf:2 crf:1 demonstrate:6 ridge:7 motion:2 interface:6 percy:1 ranging:1 image:15 tzyy:1 charles:1 nih:1 common:1 functional:1 ji:4 rl:3 volume:1 extend:2 significant:1 measurement:2 cup:1 grid:1 pm:1 similarly:1 gillenwater:1 language:1 had:1 dot:1 moving:1 cortex:1 surface:1 supervision:1 posterior:5 recent:3 driven:1 certain:2 binary:2 meeting:1 yi:18 exploited:1 captured:1 seen:2 kub:1 somewhat:1 morgan:1 guestrin:1 paradigm:1 mocap:2 signal:26 semi:20 ii:1 multiple:1 july:1 reduces:2 infer:1 smooth:1 technical:1 adapt:1 cross:6 clinical:1 believed:1 jiq:1 award:1 plugging:1 controlled:1 paired:6 prediction:13 scalable:1 regression:19 vision:1 expectation:6 patient:1 cmu:2 represent:2 achieved:1 dec:1 want:1 rest:2 unlike:1 probably:1 subject:70 tend:1 ltp:45 hz:3 thing:1 member:1 incorporates:2 lafferty:1 effectiveness:3 seem:1 jordan:1 structural:1 near:1 decent:2 xj:5 reduce:1 idea:1 tradeoff:1 fleet:1 six:1 handled:1 pca:3 gb:11 url:1 penalty:2 speech:1 york:1 repeatedly:1 deep:1 dramatically:1 useful:1 generally:1 krauledat:1 prepared:1 band:2 hardware:1 svms:1 unpaired:1 http:2 schapire:1 mazin:1 exist:1 nsf:1 estimated:3 klein:1 anatomical:1 discrete:1 four:1 reliance:2 suny:1 idata:1 wisc:1 marie:1 r10:1 dahl:1 graph:1 convert:1 ratinov:1 run:3 angle:12 raquel:1 place:1 family:3 electronic:1 summarizes:1 forney:1 comparable:1 layer:2 hi:1 datum:1 correspondence:1 fold:4 yielded:1 activity:3 g:5 strength:1 annual:1 constraint:6 encodes:1 speed:1 ables:1 min:2 emi:1 flexion:7 structured:6 according:1 icpr:1 combination:1 anek:1 describes:1 slightly:3 across:12 wi:2 maxy:1 fols:10 anatomically:1 restricted:1 tai:1 jennifer:1 discus:1 turn:1 r3:1 ge:1 available:6 generalizes:1 apply:3 polytechnic:2 limb:1 spectral:1 lefort:1 appending:1 alternative:1 batch:10 slower:1 original:3 running:3 include:2 trouble:1 remaining:1 graphical:1 schalk:6 surgery:1 objective:3 noticed:1 quantity:1 parametric:21 mehrdad:1 disability:1 september:1 gradient:3 subspace:5 thank:1 capacity:1 decoder:6 collected:6 urtasun:1 reason:1 assuming:1 length:6 code:1 modeled:2 relationship:5 mini:1 providing:1 minimizing:1 kuzman:1 liang:2 difficult:4 setup:2 robert:2 troy:2 trace:10 stated:1 implementation:2 boltzmann:1 unknown:1 perform:2 markov:4 gplvm:11 regularizes:1 extended:2 incorporated:1 neurobiology:1 hinton:3 frame:2 pair:5 namely:2 required:2 connection:1 acoustic:1 learned:15 nip:4 address:1 suggested:1 usually:5 pattern:6 dynamical:1 gideon:2 challenge:1 max:4 including:1 video:10 belief:1 power:1 critical:5 difficulty:1 zhu:1 representing:1 scheme:3 improve:5 blankertz:1 brief:1 mathieu:1 concludes:1 extract:1 health:1 xiaojin:1 prior:48 comply:1 epoch:2 acknowledgement:1 literature:1 loss:6 par:1 limitation:1 geoffrey:1 digital:2 
validation:4 gather:4 elliott:1 consistent:1 imposes:1 xwt:1 sigal:1 row:1 eccv:1 summary:1 jung:1 placed:1 surprisingly:1 last:2 supported:3 alain:1 bias:1 side:1 deeper:1 cerebro:1 underwent:1 correspondingly:1 absolute:3 sparse:1 benefit:1 regard:2 dimension:4 calculated:3 gerwin:4 stand:1 avoids:1 author:1 collection:1 adaptive:2 simplified:1 transaction:2 obtains:1 silhouette:3 sequentially:1 paralysis:1 robotic:1 assumed:1 xi:16 imitation:1 continuous:1 rensselaer:2 latent:10 table:6 learn:9 channel:3 defy:1 robust:1 career:1 forest:1 improving:1 investigated:1 complex:2 constructing:1 domain:1 marc:1 aistats:1 main:2 body:5 fig:3 screen:1 deployed:2 ny:6 grosse:1 henrik:1 sub:1 position:2 decoded:2 guiding:1 exponential:1 jmlr:2 third:1 learns:1 minute:1 bashashati:1 embed:1 specific:4 xt:12 physiological:1 svm:2 gupta:1 normalizing:2 intrinsic:1 intractable:1 incorporating:1 gained:1 ci:1 conditioned:1 margin:3 rui:1 chen:1 easier:2 sclaroff:1 army:2 ez:2 visual:1 expressed:1 tracking:1 shon:1 chang:1 collectively:1 corresponds:1 truth:1 gary:1 relies:2 extracted:4 acm:1 conditional:4 goal:3 viewed:1 towards:1 shared:3 feasible:1 change:1 included:2 specifically:5 except:3 determined:1 glove:3 experimental:2 aaron:2 college:1 internal:1 latter:1 incorporate:4 dept:3 audio:1 regularizing:1 |
4,253 | 485 | A Weighted Probabilistic Neural Network
David Montana
Bolt Beranek and Newman Inc.
10 Moulton Street
Cambridge, MA 02138
Abstract
The Probabilistic Neural Network (PNN) algorithm represents the likelihood function of a given class as the sum of identical, isotropic Gaussians.
In practice, PNN is often an excellent pattern classifier, outperforming
other classifiers including backpropagation. However, it is not robust with
respect to affine transformations of feature space, and this can lead to
poor performance on certain data. We have derived an extension of PNN
called Weighted PNN (WPNN) which compensates for this flaw by allowing anisotropic Gaussians, i.e. Gaussians whose covariance is not a multiple of the identity matrix. The covariance is optimized using a genetic
algorithm, some interesting features of which are its redundant, logarithmic encoding and large population size. Experimental results validate our
claims.
1 INTRODUCTION
1.1 PROBABILISTIC NEURAL NETWORKS (PNN)
PNN (Specht 1990) is a pattern classification algorithm which falls into the broad
class of "nearest-neighbor-like" algorithms. It is called a "neural network" because
of its natural mapping onto a two-layer feedforward network. It works as follows.
Let the exemplars from class $i$ be the $k$-vectors $\vec{x}^i_j$ for $j = 1, \ldots, N_i$. Then, the likelihood function for class $i$ is

$$L_i(\vec{x}) = \frac{1}{N_i\,(2\pi\sigma)^{k/2}} \sum_{j=1}^{N_i} e^{-(\vec{x}-\vec{x}^i_j)^2/\sigma} \qquad (1)$$
Figure 1: PNN is not robust with respect to affine transformations of feature space. Originally (a), A2 is closer to its classmate A1 than to B1; however, after a simple affine transformation (b), A2 is closer to B1.
and the conditional probability for class $i$ is

$$P_i(\vec{x}) = L_i(\vec{x}) \Big/ \sum_{j=1}^{M} L_j(\vec{x}) \qquad (2)$$
Note that the class likelihood functions are sums of identical isotropic Gaussians
centered at the exemplars.
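As a concrete illustration, Equations (1) and (2) translate directly into the following NumPy sketch (our own code; the variable names are ours, not the paper's):

```python
import numpy as np

def pnn_posteriors(x, exemplars, sigma):
    """Class posteriors of PNN at a test point x.

    x         : (k,) test vector
    exemplars : list of (N_i, k) arrays, one per class
    sigma     : variance of the isotropic Gaussians (the single free parameter)
    """
    k = x.shape[0]
    likelihoods = []
    for Xi in exemplars:
        sq = np.sum((x - Xi) ** 2, axis=1)            # squared distances
        Li = np.exp(-sq / sigma).sum()
        Li /= Xi.shape[0] * (2 * np.pi * sigma) ** (k / 2)   # Equation (1)
        likelihoods.append(Li)
    L = np.array(likelihoods)
    return L / L.sum()                                 # Equation (2)
```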
The single free parameter of this algorithm is $\sigma$, the variance of the Gaussians (the rest of the terms in the likelihood functions are determined directly from the training data). Hence, training a PNN consists of optimizing $\sigma$ relative to some evaluation
criterion, typically the number of classification errors during cross-validation (see
Sections 2.1 and 3). Since the search space is one-dimensional, the search procedure
is trivial and is often performed by hand.
1.2 THE PROBLEM WITH PNN
The main drawback of PNN and other "nearest-neighbor-like" algorithms is that
they are not robust with respect to affine transformations (i.e., transformations of the form $\vec{x} \mapsto A\vec{x} + \vec{b}$) of feature space. (Note that in theory affine transformations
should not affect the performance of backpropagation, but the results of Section 3
show that this is not true in practice.) Figures 1 and 2 depict examples of how
affine transformations of feature space affect classification performance. In Figures
1a and 2a, the point A2 is closer (using Euclidean distance) to point A1, which is also from class A, than to point B1, which is from class B. Hence, with a training set consisting of the exemplars A1 and B1, PNN would classify A2 correctly. Figures 1b and 2b depict the feature space after affine transformations. In both cases, A2 is closer to B1 than to A1 and would hence be classified incorrectly. For the example of Figure 2, the transformation matrix A is not diagonal (i.e., the principal axes of the transformation are not the coordinate axes), and the adverse effects of this
transformation cannot be undone by any affine transformation with diagonal A.
This problem has motivated us to generalize the PNN algorithm in such a way that
it is robust with respect to affine transformations of the feature space.
Figure 2: The principal axes of the affine transformation do not necessarily correspond with the coordinate axes.
1.3 A SOLUTION: WEIGHTED PNN (WPNN)
This flaw of nearest-neighbor-like algorithms has been recognized before, and there
have been a few proposed solutions. They all use what Dasarathy (1991) calls
"modified metrics", which are non-Euclidean distance measures in feature space.
All the approaches to modified metrics define criteria which the chosen metric
should optimize. Some criteria allow explicit derivation of the new metrics (Short
and Fukunaga 1981; Fukunaga and Flick 1984). However, the validity of these
derivations relies on there being a very large number of exemplars in the training
set. A more recent set of approaches (Atkeson 1991; Kelly and Davis 1991) (i)
use criteria which measure the performance on the training set using leaving-one-out cross-validation (see (Stone 1974) and Section 2.1), (ii) restrict the number of
parameters of the metric to increase statistical significance, and (iii) optimize the
parameters of the metric using non-linear search techniques. For his technique of
"locally weighted regression", Atkeson (1991) uses an evaluation criterion which is
the sum of the squares of the error using leaving-one-out. His metric has the form
$d^2 = w_1(x_1-y_1)^2 + \cdots + w_k(x_k-y_k)^2$, and hence has $k$ free parameters $w_1, \ldots, w_k$. He
uses Levenberg-Marquardt to optimize these parameters with respect to the evaluation criterion. For their Weighted K-Nearest Neighbors (WKNN) algorithm, Kelly
and Davis (1991) use an evaluation criterion which is the total number of incorrect
classifications under leaving-one-out. Their metric is the same as Atkeson's, and
their optimization is done with a genetic algorithm.
We use an approach similar to that of Atkeson (1991) and Kelly and Davis (1991)
to make PNN more robust with respect to affine transformations. Our approach,
called Weighted PNN (WPNN), works by using anisotropic Gaussians rather than
the isotropic Gaussians used by PNN. An anisotropic Gaussian has the form
$$\frac{1}{(2\pi)^{k/2}(\det \Sigma)^{1/2}} \, e^{-(\vec{x}-\vec{x}_0)^T \Sigma^{-1} (\vec{x}-\vec{x}_0)}$$

The covariance $\Sigma$ is a nonnegative-definite $k \times k$ symmetric matrix. Note that $\Sigma$ enters into the exponent of the Gaussian so as to define a new distance metric, and hence the use of anisotropic Gaussians to extend PNN is analogous to the use of modified metrics to extend other nearest-neighbor-like algorithms.
The likelihood function for class i is
$$L_i(\vec{x}) = \frac{1}{N_i\,(2\pi)^{k/2}(\det \Sigma)^{1/2}} \sum_{j=1}^{N_i} e^{-(\vec{x}-\vec{x}^i_j)^T \Sigma^{-1} (\vec{x}-\vec{x}^i_j)} \qquad (3)$$
and the conditional probability is still as given in Equation 2. Note that when $\Sigma$ is a multiple of the identity, i.e. $\Sigma = \sigma I$, Equation 3 reduces to Equation 1. Section 2 describes how we select the value of $\Sigma$.
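With a diagonal $\Sigma$, the exponent in Equation 3 becomes a weighted squared distance, as in this illustrative sketch (our own code, not the paper's implementation):

```python
import numpy as np

def wpnn_likelihood(x, Xi, sigma_diag):
    """Equation (3) with a diagonal covariance Sigma = diag(sigma_diag).

    x          : (k,) test vector
    Xi         : (N_i, k) exemplars of one class
    sigma_diag : (k,) positive diagonal entries of Sigma
    """
    k = x.shape[0]
    inv = 1.0 / sigma_diag
    # Mahalanobis-style exponent (x - xj)^T Sigma^{-1} (x - xj) per exemplar
    quad = np.sum(((x - Xi) ** 2) * inv, axis=1)
    norm = Xi.shape[0] * (2 * np.pi) ** (k / 2) * np.sqrt(np.prod(sigma_diag))
    return np.exp(-quad).sum() / norm
```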
To ensure good generalization, we have so far restricted ourselves to diagonal covariances (and thus metrics of the form used by Atkeson (1991) and Kelly and Davis (1991)). This reduces the number of degrees of freedom of the covariance from $k(k+1)/2$ to $k$. However, this restricted set of covariances is not sufficiently general
to solve all the problems of PNN (as demonstrated in Section 3), and we therefore
in Section 2 hint at some modifications which would allow us to use arbitrary covariances.
2 OPTIMIZING THE COVARIANCE
We have used a genetic algorithm (Goldberg 1988) to optimize the covariance of the
Gaussians. The code we used was a non-object-oriented C translation of the OOGA
(Object-Oriented Genetic Algorithm) code (Davis 1991). This code preserves the
features of OOGA including arbitrary encodings, exponential fitness, steady-state
replacement, and adaptive operator probabilities. We now describe the distinguishing features of our genetic algorithm: (1) the evaluation function (Section 2.1), (2)
the genetic encoding (Section 2.2), and (3) the population size (Section 2.3).
2.1 THE EVALUATION FUNCTION
To evaluate the performance of a particular covariance matrix on the training set, we
use a technique called "leaving-one-out", which is a special form of cross-validation
(Stone 1974). One exemplar at a time is withheld from the training set, and we
then determine how well WPNN with that covariance matrix classifies the withheld exemplar. The full evaluation is the sum of the evaluations on the individual
exemplars.
For the exemplar $\vec{x}^i_j$, let $\hat{L}_q(\vec{x}^i_j)$ for $q = 1, \ldots, M$ denote the class likelihoods obtained upon withholding this exemplar and applying Equation 3, and let $\hat{P}_q(\cdot)$ be the probabilities obtained from these likelihoods via Equation 2. Then, we define the performance as

$$E = \sum_{i=1}^{M} \sum_{j=1}^{N_i} \Big( (1 - \hat{P}_i(\vec{x}^i_j))^2 + \sum_{q \neq i} (\hat{P}_q(\vec{x}^i_j))^2 \Big) \qquad (4)$$
We have incorporated two heuristics to quickly identify covariances which are clearly
bad and give them a value of $\infty$, the worst possible score. This greatly speeds up the
optimization process because many of the generated covariances can be eliminated
this way (see Section 2.3) . The first heuristic identifies covariances which are too
"small" based on the condition that, for some exemplar x} and all q = 1, ... M,
lq (x}) = 0 to within the precision of IEEE double-precision floating-point format.
In this case, the probabilities Pq (X1) are not well-defined. (When E is this "small" ,
WPNN is approximately equivalent to WKNN with k = 1, and if such a small E is
indeed required, then the WKNN algorithm should be used instead.)
The second heuristic identifies covariances which are too "big" in the sense that too many exemplars contribute significantly to the likelihood functions. Empirical observations and theoretical arguments show that PNN (and WPNN) work best when only a small fraction of the exemplars contribute significantly. Hence, we reject a particular $\Sigma$ if, for any exemplar $\vec{x}^i_j$,

(5)
Here, P is a parameter which we chose for our experiments to equal four.
Note: If we wish to improve the generalization by discarding some of the degrees
of freedom of the covariance (which we will need to do when we allow non-diagonal
covariances), we should modify the evaluation function by subtracting off a term
which is monotonically increasing with the number of degrees of freedom discarded.
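A direct, unoptimized transcription of this leaving-one-out evaluation might look as follows (illustrative only; the two $\infty$-scoring heuristics are omitted, and `likelihood` is assumed to implement Equation 3, e.g. the `wpnn_likelihood` sketch above):

```python
import numpy as np

def loo_score(exemplars, sigma_diag, likelihood):
    """Leaving-one-out evaluation E of Equation (4).

    exemplars  : list of (N_i, k) arrays, one per class
    likelihood : function (x, Xi, sigma_diag) -> class likelihood (Eq. 3)
    """
    E = 0.0
    for i, Xi in enumerate(exemplars):
        for j in range(Xi.shape[0]):
            x = Xi[j]
            # withhold x from its own class before scoring it
            held_out = [np.delete(Xc, j, axis=0) if c == i else Xc
                        for c, Xc in enumerate(exemplars)]
            L = np.array([likelihood(x, Xc, sigma_diag) for Xc in held_out])
            P = L / L.sum()                     # Equation (2)
            others = np.arange(len(P)) != i
            E += (1 - P[i]) ** 2 + np.sum(P[others] ** 2)
    return E
```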
2.2 THE GENETIC ENCODING
Recall from Section 1.3 that we have presently restricted the covariance to be diagonal. Hence, the set of all possible covariances is $k$-dimensional, where $k$ is the dimension of the feature space. We encode the covariances as $k+1$ integers $(a_0, \ldots, a_k)$, where the $a_i$'s are in the ranges $(a_0)_{\min} \le a_0 \le (a_0)_{\max}$ and $0 \le a_i \le a_{\max}$ for $i = 1, \ldots, k$. The decoding map is

(6)
We observe the following about this encoding. First, it is a "logarithmic encoding", i.e. the encoded parameters are related logarithmically to the original parameters. This provides a large dynamic range without the sacrifice of sufficient resolution at any scale and without making the search space unmanageably large. The constants $C_1$ and $C_2$ determine the resolution, while the constants $(a_0)_{\min}$, $(a_0)_{\max}$, and $a_{\max}$ determine the range. Second, it is possibly a "redundant" encoding, i.e. there may be multiple encodings of a single covariance. We use this redundant encoding, despite the seeming paradox, to reduce the size of the search space. The $a_0$ term encodes the size of the Gaussian, roughly equivalent to $\sigma$ in PNN. The other $a_j$'s encode the relative weighting of the various dimensions. If we dropped the $a_0$ term, the other $a_j$ terms would have to have larger ranges to compensate, thus making the search space larger.
Note: If we wish to improve the generalization by discarding some of the degrees
of freedom of the covariance, we need to allow all the entries besides $a_0$ to take on the value of $\infty$ in addition to the range of values defined above. When $a_j = \infty$, its corresponding entry in the covariance matrix is zero and is hence discarded.
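Although the exact decoding map of Equation 6 is not reproduced here, a logarithmic decoding consistent with the description above could look like the following sketch; the constants $C_1$, $C_2$ and the precise functional form are our own assumptions, not the paper's:

```python
import numpy as np

def decode(a, C1=1.0, C2=0.5):
    """Hypothetical logarithmic decoding of integer genes (a0, ..., ak)
    into the diagonal of a covariance matrix. a[0] sets the overall
    scale; a[1:] set the per-dimension relative weights."""
    a = np.asarray(a, dtype=float)
    return C1 * np.exp(C2 * (a[0] + a[1:]))   # diagonal entries of Sigma
```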
2.3 POPULATION SIZE
For their success, genetic algorithms rely on having multiple individuals with partial
information in the population. The problem we have encountered is that the ratio of
the area of the search space with partial information to the entire search space
is small. In fact, with our very loose heuristics, on Dataset 1 (see Section 3) about
90% of the randomly generated individuals of the initial population evaluated to $\infty$.
In fact, we estimate very roughly that only 1 in 50 or 1 in 100 randomly generated
individuals contain partial information. To ensure that the initial population has
multiple individuals with partial information requires a population size of many
hundreds, and we conservatively used a population size of 1600. Note that with
such a large population it is essential to use a steady-state genetic algorithm (Davis
1991) rather than generational replacement.
3 EXPERIMENTAL RESULTS
We have performed a series of experiments to verify our claims about WPNN. To
do so, we have constructed a sequence of four datasets designed to illustrate the
shortcomings of PNN and how WPNN in its present form can fix some of these
shortcomings but not others. Dataset 1 is a training set we generated during an
effort to classify simulated sonar signals. It has ten features, five classes, and 516
total exemplars. Dataset 2 is the same as Dataset 1 except that we supplemented the
ten features of Dataset 1 with five additional features, which were random numbers
uniformly distributed between zero and one (and hence contained no information
relevant to classification), thus giving a total of 15 features. Dataset 3 is the same
as Dataset 2 except with ten (rather than five) irrelevant features added and hence
a total of 20 features. Like Dataset 3, Dataset 4 has 20 features. It is obtained
from Dataset 3 as follows. Pair each of the true features with one of the irrelevant
features. Call the feature values of the $i$th pair $f_i$ and $g_i$. Then, replace these feature values with the values $0.5(f_i + g_i)$ and $0.5(f_i - g_i + 1)$, thus mixing up the relevant features with the irrelevant features via linear combinations.
To evaluate the performance of different pattern classification algorithms on these
four datasets, we have used 10-fold cross-validation (Stone 1974). This involves
splitting each dataset into ten disjoint subsets of similar size and similar distribution
of exemplars by class. To evaluate a particular algorithm on a dataset requires ten
training and test runs, where each subset is used as the test set for the algorithm
trained on a training set consisting of the other nine subsets.
The pattern classification algorithms we have evaluated are backpropagation (with
four hidden nodes), PNN (with $\sigma = 0.05$), WPNN and CART. The results of the
experiments are shown in Figure 3. Note that the parenthesized quantities denote
errors on the training data and are not compensated for the fact that each exemplar
of the original dataset is in nine of the ten training sets used for cross-validation.
We can draw a number of conclusions from these results. First, the performance of
PNN on Datasets 2-4 clearly demonstrates the problems which arise from its lack
of robustness with respect to affine transformations of feature space. In each case,
there exists an affine transformation which makes the problem essentially equivalent to Dataset 1 from the viewpoint of Euclidean distance, but the performance
is clearly very different. Second, WPNN clearly eliminates this problem with PNN
for Datasets 2 and 3 but not for Dataset 4. This points out both the progress we
have made so far in using WPNN to make PNN more robust and the importance
of extending the WPNN algorithm to allow non-diagonal covariances. Third, although backpropagation is in theory transparent to affine transformations of feature
space (because the first layer of weights and biases implements an arbitrary affine
Dataset            1        2        3        4
Backprop           11 (69)  16 (51)  20 (27)  13 (64)
PNN                9        94       109      29
WPNN               10       11       11       25
CART               14       17       18       53
Figure 3: Performance on the four datasets of backprop, CART, PNN and WPNN
(parenthesized quantities are training set errors).
transformation), in practice affine transformations affect its performance. Indeed,
Dataset 4 is obtained from Dataset 3 by an affine transformation, yet backpropagation performs very differently on them. Backpropagation does better on the
training sets for Dataset 3 than on the training sets for Dataset 4 but does better
on the test sets of Dataset 4 than the test sets of Dataset 3. This implies that for
Dataset 4 during the training procedure backpropagation is not finding the globally
optimum set of weights and biases but is missing in such a way that improves its
generalization.
4 CONCLUSIONS AND FUTURE WORK
We have demonstrated through both theoretical arguments and experiments an
inherent flaw of PNN: its lack of robustness with respect to affine transformations
of feature space. To correct this flaw, we have proposed an extension of PNN, called
WPNN, which uses anisotropic Gaussians rather than the isotropic Gaussians used
by PNN. Under the assumption that the covariance of the Gaussians is diagonal,
we have described how to use a genetic algorithm to optimize the covariance for
optimal performance on the training set. Experiments have shown that WPNN can
partially remedy the flaw with PNN.
What remains to be done is to modify the optimization procedure to allow arbitrary
(i.e., non-diagonal) covariances. The main difficulty here is that the covariance
matrix has a large number of degrees offreedom (k(k+l)/2, where k is the dimension
of feature space), and we therefore need to ensure that the choice of covariance is
not overfit to the data. We have presented some general ideas on how to approach
this problem, but a true solution still needs to be developed.
Acknowledgements
This work was partially supported by DARPA via ONR under Contract N00014-89-C-0264 as part of the Artificial Neural Networks Initiative.
Thanks to Ken Theriault for his useful comments.
References
C.G. Atkeson. (1991) Using locally weighted regression for robot learning. Proceedings of the 1991 IEEE Conference on Robotics and Automation, pp. 958-963. Los
Alamitos, CA: IEEE Computer Society Press.
B.V. Dasarathy. (1991) Nearest Neighbor (NN) Norms: NN Pattern Classification
Techniques. Los Alamitos, CA: IEEE Computer Society Press.
L. Davis. (1991) Handbook of Genetic Algorithms. New York: Van Nostrand Reinhold.
K. Fukunaga and T.T. Flick. (1984) An optimal global nearest neighbor metric.
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-6,
No.3, pp. 314-318.
D. Goldberg. (1988) Genetic Algorithms in Machine Learning, Optimization and
Search. Redwood City, CA: Addison-Wesley.
J.D. Kelly, Jr. and L. Davis. (1991) Hybridizing the genetic algorithm and the k
nearest neighbors classification algorithm. Proceedings of the Fourth International
Conference on Genetic Algorithms, pp. 377-383. San Mateo, CA: Morgan Kaufmann.
R.D. Short and K. Fukunaga. (1981) The optimal distance measure for nearest
neighbor classification. IEEE Transactions on Information Theory, Vol. IT-27, No.
5, pp. 622-627.
D.F. Specht. (1990) Probabilistic neural networks. Neural Networks, vol. 3, no. 1,
pp.109-118.
M. Stone. (1974) Cross-validatory choice and assessment of statistical predictions.
Journal of the Royal Statistical Society, vol. 36, pp. 111-147.
| 485 |@word norm:1 d2:1 covariance:29 initial:2 series:1 score:1 genetic:14 marquardt:1 yet:1 designed:1 depict:2 intelligence:1 isotropic:4 xk:1 ith:1 short:2 provides:1 node:1 contribute:2 five:3 constructed:1 initiative:1 incorrect:1 consists:1 sacrifice:1 indeed:2 roughly:2 globally:1 increasing:1 classifies:1 what:2 developed:1 finding:1 transformation:22 classifier:2 demonstrates:1 before:1 dropped:1 modify:2 despite:1 encoding:9 ak:1 dasarathy:2 approximately:1 pami:1 chose:1 mateo:1 montana:4 range:5 practice:3 implement:1 backpropagation:7 procedure:3 area:1 empirical:1 undone:1 significantly:2 reject:1 onto:1 cannot:1 operator:1 applying:1 optimize:5 equivalent:3 map:1 demonstrated:2 compensated:1 missing:1 resolution:2 splitting:1 amax:2 his:3 population:9 coordinate:2 analogous:1 us:3 goldberg:2 distinguishing:1 logarithmically:1 enters:1 worst:1 yk:1 dynamic:1 trained:1 upon:1 darpa:1 differently:1 various:1 derivation:2 describe:1 shortcoming:2 newman:1 whose:1 heuristic:4 encoded:1 ive:1 solve:1 larger:2 compensates:1 withholding:1 gi:2 sequence:1 subtracting:1 relevant:2 mixing:1 validate:1 los:2 double:1 optimum:1 extending:1 object:2 illustrate:1 exemplar:16 nearest:9 progress:1 involves:1 implies:1 drawback:1 correct:1 centered:1 backprop:1 offreedom:1 ao:9 generalization:4 fix:1 transparent:1 extension:2 sufficiently:1 mapping:1 claim:2 a2:6 wl:1 city:1 weighted:11 clearly:4 gaussian:3 modified:3 rather:4 encode:2 derived:1 ax:5 likelihood:8 greatly:1 sense:1 flaw:5 nn:2 lj:2 typically:1 entire:1 hidden:1 classification:10 exponent:1 special:1 equal:1 validatory:1 having:1 eliminated:1 identical:2 represents:1 broad:1 future:1 others:1 hint:1 few:1 inherent:1 oriented:2 randomly:2 preserve:1 individual:5 fitness:1 floating:1 consisting:2 ourselves:1 replacement:2 freedom:4 unmanageably:1 evaluation:9 closer:4 partial:4 euclidean:3 theoretical:2 classify:2 entry:2 subset:3 hundred:1 too:3 gd:1 thanks:1 probabilistic:8 yl:1 off:1 decoding:1 contract:1 quickly:1 possibly:1 li:2 seeming:1 wk:2 automation:1 inc:1 jc:1 performed:2 square:1 ni:3 variance:1 kaufmann:1 correspond:1 identify:1 ofthe:1 generalize:1 classified:1 pp:6 dataset:22 recall:1 improves:1 wesley:1 originally:1 done:2 evaluated:2 overfit:1 hand:1 assessment:1 lack:2 aj:3 effect:2 validity:1 contain:1 true:3 verify:1 remedy:1 hence:10 symmetric:1 during:3 davis:8 levenberg:1 steady:2 criterion:7 stone:4 performs:1 fi:1 anisotropic:5 extend:2 he:4 cambridge:1 ai:2 pq:3 bolt:1 robot:1 recent:1 optimizing:2 irrelevant:3 certain:1 nostrand:1 outperforming:1 success:1 onr:1 morgan:1 additional:1 recognized:1 determine:2 redundant:3 signal:1 ii:2 multiple:5 full:1 reduces:2 cross:6 compensate:1 prediction:1 regression:2 moulton:1 essentially:1 metric:12 pnn:31 robotics:1 addition:1 leaving:4 rest:1 eliminates:1 comment:1 cart:3 call:2 integer:1 feedforward:1 iii:1 xj:2 affect:2 restrict:1 reduce:1 idea:1 det:3 motivated:1 effort:1 york:1 nine:2 flick:2 useful:1 locally:2 ten:6 ken:1 disjoint:1 correctly:1 vol:4 four:5 fraction:1 sum:4 run:1 fourth:1 draw:1 layer:2 fold:1 encountered:1 encodes:1 speed:1 argument:2 min:2 fukunaga:2 format:1 combination:1 poor:1 jr:1 describes:1 wi:1 modification:1 making:2 presently:1 restricted:3 equation:5 remains:1 loose:1 addison:1 specht:2 gaussians:12 observe:1 robustness:2 original:2 ensure:3 giving:1 society:3 bl:2 added:1 quantity:2 alamitos:2 diagonal:8 distance:5 simulated:1 street:1 trivial:1 code:3 besides:1 ratio:1 oneout:1 allowing:1 observation:1 datasets:5 
discarded:2 withheld:2 incorrectly:1 incorporated:1 paradox:1 redwood:1 arbitrary:4 reinhold:1 david:1 pair:2 required:1 optimized:1 pattern:6 including:2 max:2 royal:1 natural:1 rely:1 difficulty:1 improve:2 identifies:2 kelly:5 acknowledgement:1 relative:2 interesting:1 validation:5 degree:5 affine:18 sufficient:1 principle:2 viewpoint:1 pi:2 translation:1 lo:1 ki2:1 supported:1 free:2 bias:2 allow:6 fall:1 neighbor:8 distributed:1 van:1 dimension:3 conservatively:1 made:1 adaptive:1 san:1 atkeson:6 far:2 transaction:2 global:1 ermine:1 handbook:1 search:9 sonar:1 robust:6 parenthesized:2 ca:4 excellent:1 necessarily:1 significance:1 main:2 big:1 arise:1 x1:1 precision:2 explicit:1 wish:2 exponential:1 xl:1 lq:2 ib:1 weighting:1 third:1 bad:1 discarding:2 supplemented:1 essential:1 exists:1 importance:1 te:1 logarithmic:2 contained:1 partially:2 relies:1 ma:1 conditional:2 identity:2 internation:1 replace:1 adverse:1 determined:1 except:2 uniformly:1 called:5 total:4 experimental:2 la:1 select:1 artifical:1 evaluate:3 |
4,254 | 4,850 | A Marginalized Particle Gaussian Process Regression
Yali Wang and Brahim Chaib-draa
Department of Computer Science
Laval University
Quebec, Quebec G1V0A6
{wang,chaib}@damas.ift.ulaval.ca
Abstract
We present a novel marginalized particle Gaussian process (MPGP) regression,
which provides a fast, accurate online Bayesian filtering framework to model the
latent function. Using a state space model established by the data construction
procedure, our MPGP recursively filters out the estimation of hidden function
values by a Gaussian mixture. Meanwhile, it provides a new online method for
training hyperparameters with a number of weighted particles. We demonstrate
the estimation performance of our MPGP on both simulated and real large data sets. The results show that our MPGP is a robust estimation algorithm with high computational efficiency, which outperforms other state-of-the-art sparse GP methods.
1 Introduction
The Gaussian process (GP) is a popular nonparametric Bayesian method for nonlinear regression.
However, the O(n3 ) computational load for training the GP model would severely limit its applicability in practice when the number of training points n is larger than a few thousand [1]. A number
of attempts have been made to handle it with a small computational load. One typical method is a
sparse pseudo-input Gaussian process (SPGP) [2] that uses a pseudo-input data set with m inputs
(m n) to parameterize the GP predictive distribution to reduce the computational burden. Then
a sparse spectrum Gaussian process (SSGP) [3] was proposed to further improve the performance
of SPGP while retaining the computational efficiency by using a stationary trigonometric Bayesian
model with m basis functions. However, both SPGP and SSGP learn hyperparameters offline by
maximizing the marginal likelihood before making the inference. They would take a risk to fall in
the local optimum. Another recent model is a Kalman filter Gaussian process (KFGP) [4] which reduces computation load by correlating function values of data subsets at each Kalman filter iteration.
But it still causes underfitting or overfitting if the hyperparameters are badly learned offline.
On the contrary, we propose in this paper an online marginalized particle filter to simultaneously
learn the hyperparameters and hidden function values. By collecting small data subsets sequentially,
we establish a novel state space model which allows us to estimate the marginal posterior distribution
(not the marginal likelihood) of hyperparameters online with a number of weighted particles. For
each particle, a Kalman filter is applied to estimate the posterior distribution of hidden function
values. We will later explain it in detail and show its validity via experiments on large datasets.
2 Data Construction
In practice, the whole training data set is usually constructed by gathering small subsets several times. For the $t$th collection, the training subset $(X_t, y_t)$ consists of $n_t$ input-output pairs $\{(x_t^1, y_t^1), \ldots, (x_t^{n_t}, y_t^{n_t})\}$. Each scalar output $y_t^i$ is generated from a nonlinear function $f(x_t^i)$ of a $d$-dimensional input vector $x_t^i$ with an additive Gaussian noise $\mathcal{N}(0, a_0^2)$. All the pairs are separately organized as an input matrix $X_t$ and output vector $y_t$. For simplicity, the whole training data with
$T$ collections is symbolized as $(X_{1:T}, y_{1:T})$. The goal refers to a regression issue: estimating the function value of $f(x)$ at $m$ test inputs $X_* = [x_*^1, \ldots, x_*^m]$ given $(X_{1:T}, y_{1:T})$.
3 Gaussian Process Regression
A Gaussian process (GP) represents a distribution over functions, which is a generalization of the
Gaussian distribution to an infinite dimensional function space. Formally, it is a collection of random
variables, any finite number of which have a joint Gaussian distribution [1]. Similar to a Gaussian
distribution specified by a mean vector and covariance matrix, a GP is fully defined by a mean function $m(x) = E[f(x)]$ and covariance function $k(x, x') = E[(f(x) - m(x))(f(x') - m(x'))]$. Here we follow the practical choice that $m(x)$ is set to be zero. Moreover, due to the spatial nonstationary phenomena in the real world, we choose $k(x, x')$ as $k_{SE}(x, x') + k_{NN}(x, x')$, where $k_{SE} = a_1^2 \exp[-0.5\, a_2^{-2} (x - x')^T (x - x')]$ is the stationary squared exponential covariance function, and $k_{NN} = a_3^2 \sin^{-1}\!\big[a_4^{-2}\, \tilde{x}^T \tilde{x}' \,\big((1 + a_4^{-2}\, \tilde{x}^T \tilde{x})(1 + a_4^{-2}\, \tilde{x}'^T \tilde{x}')\big)^{-0.5}\big]$ is the nonstationary neural network covariance function with the augmented input $\tilde{x} = [1\; x^T]^T$. For simplicity, all the hyperparameters are collected into a vector $\theta = [a_0\; a_1\; a_2\; a_3\; a_4]^T$.
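For reference, these two covariance functions transcribe directly into NumPy as follows (a minimal sketch using the formulas above; the function names are ours):

```python
import numpy as np

def k_se(x, xp, a1, a2):
    # stationary squared exponential covariance
    d = x - xp
    return a1**2 * np.exp(-0.5 * np.dot(d, d) / a2**2)

def k_nn(x, xp, a3, a4):
    # nonstationary neural network covariance with augmented inputs
    xa, xpa = np.concatenate(([1.0], x)), np.concatenate(([1.0], xp))
    s = np.dot(xa, xpa) / a4**2
    n = (1 + np.dot(xa, xa) / a4**2) * (1 + np.dot(xpa, xpa) / a4**2)
    return a3**2 * np.arcsin(s / np.sqrt(n))

def k_total(x, xp, theta):
    a0, a1, a2, a3, a4 = theta
    return k_se(x, xp, a1, a2) + k_nn(x, xp, a3, a4)
```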
The regression problem could be solved by the standard GP with the following two steps. First, learning $\theta$ given $(X_{1:T}, y_{1:T})$: one technique is to draw samples from $p(\theta|X_{1:T}, y_{1:T})$ using Markov Chain Monte Carlo (MCMC) [5, 6]; another popular way is to maximize the log evidence $p(y_{1:T}|X_{1:T}, \theta)$ via a gradient based optimizer [1]. Second, estimating the distribution of the function value $p(f(X_*)|X_{1:T}, y_{1:T}, X_*, \theta)$. From the perspective of GP, a function $f(x)$ could be loosely considered as an infinitely long vector in which each random variable is the function value at an input $x$, and any finite set of function values is jointly Gaussian distributed. Hence, the joint distribution $p(y_{1:T}, f(X_*)|X_{1:T}, X_*, \theta)$ is a multivariate Gaussian distribution. Then according to the conditional property of Gaussian distribution, $p(f(X_*)|X_{1:T}, y_{1:T}, X_*, \theta)$ is also Gaussian distributed with the following mean vector $\bar{f}(X_*)$ and covariance matrix $P(X_*, X_*)$ [1, 7]:

$$\bar{f}(X_*) = K_\theta(X_*, X_{1:T})\,[K_\theta(X_{1:T}, X_{1:T}) + a_0^2 I]^{-1}\, y_{1:T}$$
$$P(X_*, X_*) = K_\theta(X_*, X_*) - K_\theta(X_*, X_{1:T})\,[K_\theta(X_{1:T}, X_{1:T}) + a_0^2 I]^{-1}\, K_\theta(X_*, X_{1:T})^T$$

If there are $n$ training inputs and $m$ test inputs, then $K_\theta(X_*, X_{1:T})$ denotes an $m \times n$ covariance matrix in which each entry is calculated by the covariance function $k(x, x')$ with the learned $\theta$; $K_\theta(X_{1:T}, X_{1:T})$ and $K_\theta(X_*, X_*)$ are constructed similarly.
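Assuming the kernel matrices have already been computed with the learned $\theta$, this second step is a few lines of linear algebra (an illustrative sketch; a Cholesky solve is used instead of an explicit inverse):

```python
import numpy as np

def gp_predict(K, Ks, Kss, y, a0):
    """Standard GP predictive mean and covariance.

    K   : (n, n) kernel matrix on the training inputs
    Ks  : (m, n) cross-kernel between test and training inputs
    Kss : (m, m) kernel matrix on the test inputs
    a0  : observation noise standard deviation
    """
    A = K + a0**2 * np.eye(K.shape[0])
    L = np.linalg.cholesky(A)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha                 # predictive mean
    V = np.linalg.solve(L, Ks.T)
    cov = Kss - V.T @ V               # predictive covariance
    return mean, cov
```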
4 Marginalized Particle Gaussian Process Regression
Even though GP is an elegant nonparametric method for Bayesian regression, it is commonly infeasible for large data sets due to an $O(n^3)$ scaling for learning the model. In order to derive a computationally tractable GP model which preserves the estimation accuracy, we firstly explore a
state space model from the data construction procedure, then propose a marginalized particle filter
to estimate the hidden f (X? ) and ? in an online Bayesian filtering framework.
4.1 State Space Model
The standard state space model (SSM) consists of the state equation and observation equation. The
state equation reflects the Markovian evolution of hidden states (the hyperparamters and function
values). For the hidden static hyperparameter ?, a popular method in filtering techniques is to add an
artificial evolution using kernel smoothing which guarantees the estimation convergence [8, 9, 10]:
$$\theta_t = b\,\theta_{t-1} + (1-b)\,\bar{\theta}_{t-1} + s_{t-1} \qquad (1)$$

where $b = (3\delta - 1)/(2\delta)$, $\delta$ is a discount factor which is typically around 0.95-0.99, $\bar{\theta}_{t-1}$ is the Monte Carlo mean of $\theta$ at $t-1$, and $s_{t-1} \sim \mathcal{N}(0, r^2 \Sigma_{t-1})$ with $r^2 = 1 - b^2$, where $\Sigma_{t-1}$ is the Monte Carlo variance matrix of $\theta$ at $t-1$. For hidden function values, we attempt to explore the relation between the $(t-1)$th and $t$th data subsets. For simplicity, we denote $X_t^c = X_t \cup X_*$ and $f_t^c = f(X_t^c)$. If $f(x) \sim GP(0, k(x, x'))$, then the prior distribution $p(f_t^c, f_{t-1}^c | X_{t-1}^c, X_t^c, \theta_t)$ is jointly Gaussian:
$$p(f_t^c, f_{t-1}^c | X_{t-1}^c, X_t^c, \theta_t) = \mathcal{N}\!\left(0,\; \begin{bmatrix} K_{\theta_t}(X_t^c, X_t^c) & K_{\theta_t}(X_t^c, X_{t-1}^c) \\ K_{\theta_t}(X_t^c, X_{t-1}^c)^T & K_{\theta_t}(X_{t-1}^c, X_{t-1}^c) \end{bmatrix}\right)$$
Then according to the conditional property of Gaussian distribution, we could get

$$p(f_t^c | f_{t-1}^c, X_{t-1}^c, X_t^c, \theta_t) = \mathcal{N}(G(\theta_t)\, f_{t-1}^c,\; Q(\theta_t)) \qquad (2)$$

where

$$G(\theta_t) = K_{\theta_t}(X_t^c, X_{t-1}^c)\, K_{\theta_t}^{-1}(X_{t-1}^c, X_{t-1}^c) \qquad (3)$$

$$Q(\theta_t) = K_{\theta_t}(X_t^c, X_t^c) - K_{\theta_t}(X_t^c, X_{t-1}^c)\, K_{\theta_t}^{-1}(X_{t-1}^c, X_{t-1}^c)\, K_{\theta_t}(X_t^c, X_{t-1}^c)^T \qquad (4)$$
This conditional density (2) could be transformed into a linear equation of the function value with an additive Gaussian noise $v_t^f \sim \mathcal{N}(0, Q(\theta_t))$:

$$f_t^c = G(\theta_t)\, f_{t-1}^c + v_t^f \qquad (5)$$
Finally, the observation (output) equation could be directly obtained from the $t$th data collection:

$$y_t = H_t f_t^c + v_t^y \qquad (6)$$

where $H_t = [I_{n_t}\; 0]$ is an index matrix which makes $H_t f_t^c = f(X_t)$, since $y_t$ is only obtained from the $t$th training inputs $X_t$. The noise $v_t^y \sim \mathcal{N}(0, R(\theta_t))$ comes from section 2, where $R(\theta_t) = a_{0,t}^2 I$. Note that $a_0$ is a fixed unknown hyperparameter; we use the symbol $a_{0,t}$ just for consistency with the artificial evolution of $\theta$. To sum up, our SSM is fully specified by (1), (5), (6).
4.2 Bayesian Inference by Marginalized Particle Filter
In contrast to the GP regression with a two-step offline inference in section 3, we propose an online filtering framework to simultaneously learn hyperparameters and estimate hidden function values. According to the SSM above, the inference problem refers to computing the posterior distribution $p(f_t^c, \theta_{1:t} | X_{1:t}, X_*, y_{1:t})$. One technique is MCMC, but MCMC usually suffers from a long convergence time. Hence we choose another popular technique: the particle filter. However, for our SSM, the traditional sampling importance resampling (SIR) particle filter would introduce unnecessary computational load due to the fact that (5) in the SSM is a linear structure given $\theta_t$. This inspires us to apply a more efficient marginalized particle filter (also called Rao-Blackwellised particle filter) [9, 11, 12, 13] to deal with the estimation problem by combining a Kalman filter into the particle filter.

Using Bayes rule, the posterior could be factorized as

$$p(f_t^c, \theta_{1:t} | X_{1:t}, X_*, y_{1:t}) = p(\theta_{1:t} | X_{1:t}, X_*, y_{1:t})\; p(f_t^c | \theta_{1:t}, X_{1:t}, X_*, y_{1:t})$$

Here $p(\theta_{1:t} | X_{1:t}, X_*, y_{1:t})$ is a marginal posterior which could be solved by the particle filter. After obtaining the estimation of $\theta_{1:t}$, the second term $p(f_t^c | \theta_{1:t}, X_{1:t}, X_*, y_{1:t})$ could be computed by a Kalman filter, since $f_t^c$ is the hidden state in the linear substructure (equation (5)) of the SSM.
The detailed inference procedure is as follows. First, $p(\theta_{1:t} | X_{1:t}, X_*, y_{1:t})$ should be factorized in a recursive form so that it can be applied in the sequential importance sampling framework:

$$p(\theta_{1:t} | X_{1:t}, X_*, y_{1:t}) \propto p(y_t | y_{1:t-1}, \theta_{1:t}, X_{1:t}, X_*)\; p(\theta_t | \theta_{t-1})\; p(\theta_{1:t-1} | X_{1:t-1}, X_*, y_{1:t-1})$$

At each iteration of the sequential importance sampling, the particles for the hyperparameter vector are drawn from the proposal distribution $p(\theta_t | \theta_{t-1})$ (easily obtained from equation (1)); then the importance weight for each particle at $t$ could be computed according to $p(y_t | y_{1:t-1}, \theta_{1:t}, X_{1:t}, X_*)$. This distribution could be solved analytically:

$$p(y_t | y_{1:t-1}, \theta_{1:t}, X_{1:t}, X_*) = \int p(y_t, f_t^c | y_{1:t-1}, \theta_{1:t}, X_{1:t}, X_*)\, df_t^c$$
$$= \int p(y_t | f_t^c, \theta_t, X_t, X_*)\; p(f_t^c | y_{1:t-1}, \theta_{1:t}, X_{1:t}, X_*)\, df_t^c$$
$$= \int \mathcal{N}(H_t f_t^c, R(\theta_t))\; \mathcal{N}(f_{t|t-1}^c, P_{t|t-1}^c)\, df_t^c$$
$$= \mathcal{N}(H_t f_{t|t-1}^c,\; H_t P_{t|t-1}^c H_t^T + R(\theta_t)) \qquad (7)$$

where $p(y_t | f_t^c, \theta_t, X_t, X_*)$ follows a Gaussian distribution $\mathcal{N}(H_t f_t^c, R(\theta_t))$ (refer to equation (6)), and $p(f_t^c | y_{1:t-1}, \theta_{1:t}, X_{1:t}, X_*) = \mathcal{N}(f_{t|t-1}^c, P_{t|t-1}^c)$ is the prediction step of the Kalman filter for $f_t^c$, which is also Gaussian distributed with predictive mean $f_{t|t-1}^c$ and covariance $P_{t|t-1}^c$.
Second, we explain how to compute $p(f_t^c | \theta_{1:t}, X_{1:t}, X_*, y_{1:t})$ using the prediction-update Kalman filter. According to recursive Bayesian filtering, this posterior could be factorized as

$$p(f_t^c | \theta_{1:t}, X_{1:t}, X_*, y_{1:t}) = \frac{p(y_t | f_t^c, \theta_t, X_t, X_*)\; p(f_t^c | y_{1:t-1}, \theta_{1:t}, X_{1:t}, X_*)}{p(y_t | y_{1:t-1}, \theta_{1:t}, X_{1:t}, X_*)} \qquad (8)$$

In the prediction step, the goal is to compute $p(f_t^c | y_{1:t-1}, \theta_{1:t}, X_{1:t}, X_*)$, which is an integral:

$$p(f_t^c | y_{1:t-1}, \theta_{1:t}, X_{1:t}, X_*) = \int p(f_t^c, f_{t-1}^c | y_{1:t-1}, \theta_{1:t}, X_{1:t}, X_*)\, df_{t-1}^c$$
$$= \int p(f_t^c | f_{t-1}^c, \theta_t, X_{t-1:t}, X_*)\; p(f_{t-1}^c | y_{1:t-1}, \theta_{1:t-1}, X_{1:t-1}, X_*)\, df_{t-1}^c$$
$$= \int \mathcal{N}(G(\theta_t) f_{t-1}^c, Q(\theta_t))\; \mathcal{N}(f_{t-1|t-1}^c, P_{t-1|t-1}^c)\, df_{t-1}^c$$
$$= \mathcal{N}(G(\theta_t) f_{t-1|t-1}^c,\; G(\theta_t) P_{t-1|t-1}^c G(\theta_t)^T + Q(\theta_t)) \qquad (9)$$

where $p(f_t^c | f_{t-1}^c, \theta_t, X_{t-1:t}, X_*)$ is directly from (2), and $p(f_{t-1}^c | y_{1:t-1}, \theta_{1:t-1}, X_{1:t-1}, X_*) = \mathcal{N}(f_{t-1|t-1}^c, P_{t-1|t-1}^c)$ is the posterior estimation for $f_{t-1}^c$. Since $p(f_t^c | y_{1:t-1}, \theta_{1:t}, X_{1:t}, X_*)$ could also be expressed as $\mathcal{N}(f_{t|t-1}^c, P_{t|t-1}^c)$, the prediction step is summarized as:

$$f_{t|t-1}^c = G(\theta_t)\, f_{t-1|t-1}^c, \qquad P_{t|t-1}^c = G(\theta_t)\, P_{t-1|t-1}^c\, G(\theta_t)^T + Q(\theta_t) \qquad (10)$$
In the update step, the current observation density $p(y_t | f_t^c, \theta_t, X_t, X_*) = \mathcal{N}(H_t f_t^c, R(\theta_t))$ is used to correct the prediction. Putting (7) and (9) into (8), $p(f_t^c | \theta_{1:t}, X_{1:t}, X_*, y_{1:t}) = \mathcal{N}(f_{t|t}^c, P_{t|t}^c)$ is actually Gaussian distributed with the Kalman gain $\Gamma_t$, where:

$$\Gamma_t = P_{t|t-1}^c H_t^T\, (H_t P_{t|t-1}^c H_t^T + R(\theta_t))^{-1} \qquad (11)$$
$$f_{t|t}^c = f_{t|t-1}^c + \Gamma_t (y_t - H_t f_{t|t-1}^c), \qquad P_{t|t}^c = P_{t|t-1}^c - \Gamma_t H_t P_{t|t-1}^c \qquad (12)$$
Finally, the whole algorithm ($t = 1, 2, 3, \ldots$) is summarized as follows:

- For $i = 1, 2, \ldots, N$:
  - Draw $\theta_t^i \sim p(\theta_t | \tilde{\theta}_{t-1}^i)$ according to (1).
  - Use $\theta_t^i$ to specify $k(x, x')$ in the GP and construct $G(\theta_t^i)$, $Q(\theta_t^i)$, $R(\theta_t^i)$ in (3)-(4) and (6).
  - Kalman predict: plug $\tilde{f}_{t-1|t-1}^{c,i}$, $\tilde{P}_{t-1|t-1}^{c,i}$ into (10) to compute $f_{t|t-1}^{c,i}$, $P_{t|t-1}^{c,i}$.
  - Kalman update: plug $f_{t|t-1}^{c,i}$ and $P_{t|t-1}^{c,i}$ into (11) and (12) to compute $f_{t|t}^{c,i}$ and $P_{t|t}^{c,i}$.
  - Plug $f_{t|t-1}^{c,i}$, $P_{t|t-1}^{c,i}$, $R(\theta_t^i)$ into (7) to compute the importance weight $\tilde{w}_t^i$.
- Normalize the weights: $w_t^i = \tilde{w}_t^i / \sum_{i=1}^{N} \tilde{w}_t^i$, $i = 1, \ldots, N$.
- Hyperparameter and hidden function value estimation:
  $\bar{\theta}_t = \sum_{i=1}^{N} w_t^i \theta_t^i$, $\quad \bar{f}_{t|t}^c = \sum_{i=1}^{N} w_t^i f_{t|t}^{c,i}$, $\quad \bar{f}_{t|t}^* = H_t^* \bar{f}_{t|t}^c$,
  $\bar{P}_{t|t}^c = \sum_{i=1}^{N} w_t^i \big(P_{t|t}^{c,i} + (f_{t|t}^{c,i} - \bar{f}_{t|t}^c)(f_{t|t}^{c,i} - \bar{f}_{t|t}^c)^T\big)$, $\quad \bar{P}_{t|t}^* = H_t^* \bar{P}_{t|t}^c (H_t^*)^T$,
  where $H_t^* = [0\; I_m]$ is an index matrix to get the function value estimation at $X_*$.
- Resampling: for $i = 1, \ldots, N$, resample $\theta_t^i$, $f_{t|t}^{c,i}$, $P_{t|t}^{c,i}$ with respect to the importance weight $w_t^i$ to obtain $\tilde{\theta}_t^i$, $\tilde{f}_{t|t}^{c,i}$, $\tilde{P}_{t|t}^{c,i}$ for the next step.
At each iteration, our marginalized particle Gaussian process (MPGP) uses a small training subset to estimate $f(X_*)$ by Kalman filters, and learns hyperparameters online with weighted particles. The computational cost of the marginalized particle filter is governed by $O(NTS^3)$ [10], where $N$ is the number of particles, $T$ is the number of data collections, and $S$ is the size of each collection. This largely reduces the computational load. Moreover, the MPGP propagates the previous estimation to improve the current accuracy in the recursive filtering framework. From the algorithm above, we also find that $f(X_*)$ is estimated as a Gaussian mixture at each iteration, since each hyperparameter particle is accompanied by a Kalman filter for $f(X_*)$. Hence the MPGP accelerates computation while preserving accuracy.
Figure 1: Estimation result comparison. (a-b) show the estimation for $f_1$ at $t = 10$ by SE-KFGP (blue line with blue dashed interval in (a)), SE-MPGP (red line with red dashed interval in (a)), SENN-KFGP (blue line with blue dashed interval in (b)), and SENN-MPGP (red line with red dashed interval in (b)). The black crosses are the training outputs at $t = 10$; the black line is the true $f(X_*)$. The notation of (c-d), (e-f), (g-h) is the same as (a-b), except that (c-d) are for $f_2$ at $t = 10$, (e-f) are for $f_1$ at $t = 100$, and (g-h) are for $f_2$ at $t = 50$. (i-m) and (n-r) show the estimation of the log hyperparameters ($\log(a_0)$ to $\log(a_4)$) for $f_1$, $f_2$ over time.
Additionally, it is worth mentioning that the Kalman filter GP (KFGP) [4] is a special case of our MPGP, since the KFGP first trains the hyperparameter vector offline and uses it to specify the SSM, then estimates $p(f_t^c | \theta_{1:t}, X_{1:t}, X_*, y_{1:t})$ by a Kalman filter. But the offline learning procedure in KFGP will either take a long time using a large extra training set or fall into an unsatisfactory local optimum using a small extra training set. In our MPGP, the local optimum could be used as the initial setting of hyperparameters; then the underlying $\theta$ could be learned online by the marginalized particle filter to improve the performance. Finally, to avoid confusion, we should clarify the difference between our MPGP and the GP-modeled Bayesian filters [14, 15]. The goal of GP-modeled Bayesian filters is to use GP modeling for Bayesian filtering; on the contrary, our MPGP uses Bayesian filtering for GP modeling.
5 Experiments
Two Synthetic Datasets: The proposed MPGP is firstly evaluated on two simulated one-dimensional datasets. One is a function with a sharp peak which is spatially inhomogeneously smooth [16]: $f_1(x) = \sin(x) + 2\exp(-30x^2)$. For $f_1(x)$, we gather the training data with 100 collections. For each collection, we randomly select 30 inputs from $[-2, 2]$, then calculate their outputs by adding a Gaussian noise $\mathcal{N}(0, 0.3^2)$ to their function values. The test inputs run from -2 to 2 with interval 0.05. The other function has a discontinuity [17]: if $0 \le x \le 0.3$, $f_2(x) = N(x; 0.6, 0.2^2) + N(x; 0.15, 0.05^2)$; if $0.3 < x \le 1$, $f_2(x) = N(x; 0.6, 0.2^2) + N(x; 0.15, 0.05^2) + 4$. For $f_2(x)$, we gather the training data with 50 collections. For each collection, we randomly select 60 inputs from $[0, 1]$, then calculate their outputs by adding a Gaussian noise $\mathcal{N}(0, 0.8^2)$ to their function values. The test inputs run from 0 to 1 with interval 0.02.
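For reproducibility, the two synthetic generators can be sketched as follows (our own transcription of the formulas above; `norm.pdf` gives the Gaussian density $N(x; \mu, s^2)$):

```python
import numpy as np
from scipy.stats import norm

def f1(x):
    return np.sin(x) + 2 * np.exp(-30 * x**2)

def f2(x):
    base = norm.pdf(x, 0.6, 0.2) + norm.pdf(x, 0.15, 0.05)
    return np.where(x <= 0.3, base, base + 4)

def collection_f1(n=30, noise=0.3):
    x = np.random.uniform(-2, 2, n)
    return x, f1(x) + noise * np.random.randn(n)

def collection_f2(n=60, noise=0.8):
    x = np.random.uniform(0, 1, n)
    return x, f2(x) + noise * np.random.randn(n)
```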
The first experiment aims to evaluate the estimation performance in comparison with the KFGP of [4]. We denote by SE-KFGP and SENN-KFGP the KFGP with covariance function $k_{SE}$ and the KFGP with covariance function $k_{SE} + k_{NN}$, respectively. Similarly, SE-MPGP and SENN-MPGP are the
Figure 2: The NMSE and MNLP of KFGP and MPGP for $f_1$, $f_2$ over time.
Figure 3: The NMSE and MNLP of MPGP as a function of the number of particles. The first row is for f1, the second row for f2.
The number of particles in MPGP is set to 10. The evaluation criteria are the test Normalized Mean Square Error (NMSE) and the test Mean Negative Log Probability (MNLP), as suggested in [3]. First, Figure 1 shows that the estimation performance of both KFGP and MPGP improves and tends to converge over time (panels (a-h)), since the previous estimate is incorporated into the current estimation by the recursive Bayesian filtering. Second, for both f1 and f2, the estimation of MPGP is better than that of KFGP, as shown by the NMSE and MNLP comparison in Figure 2. KFGP uses the offline-learned hyperparameters at all times; on the contrary, MPGP initializes the hyperparameters with those found by KFGP and then learns the true hyperparameters online (panels (i-r) in Figure 1). Hence the MNLP of MPGP is much lower than that of KFGP. Finally, restricting attention to our MPGP, SENN-MPGP is better than SE-MPGP since it takes the spatial nonstationary phenomenon into account.
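For reference, the two criteria can be computed as in the following sketch (our own rendering of the standard definitions, cf. [3]; the exact normalization used in the paper may differ), where mu and var are the predictive means and variances at the test inputs:

import numpy as np

def nmse(y_true, mu):
    # Normalized mean square error: MSE divided by the target variance.
    return np.mean((y_true - mu) ** 2) / np.var(y_true)

def mnlp(y_true, mu, var):
    # Mean negative log probability under Gaussian predictive marginals.
    return np.mean(0.5 * np.log(2 * np.pi * var)
                   + 0.5 * (y_true - mu) ** 2 / var)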
The second experiment illustrates the average performance of SE-MPGP and SENN-MPGP as the number of particles increases. For each number of particles, we run SE-MPGP and SENN-MPGP 5 times and compute the average NMSE and MNLP. From Figure 3, we find: First, as the number of particles increases, the NMSE and MNLP of SE-MPGP and SENN-MPGP decrease at the beginning and then level off, while the running time keeps growing. The reason is that both the estimation accuracy and the computational load of particle filters increase with the number of particles. Second, the average performance of SENN-MPGP is better than that of SE-MPGP since it captures the spatial nonstationarity, but SENN-MPGP needs more running time since the hyperparameter vector to be inferred is larger.
The third experiment compares our MPGP with benchmark methods. The state-of-the-art sparse GP methods we choose are: sparse pseudo-input Gaussian process (SPGP) [2] and sparse spectrum Gaussian process (SSGP) [3]. Moreover, we also want to examine the robustness of our MPGP, i.e., to check whether the good estimation of our MPGP depends heavily on the order in which the training data are collected. Hence, we randomly permute the order of the training subsets used before, then run SPGP with 5 pseudo-inputs (5-SPGP), SSGP with 10 basis functions (10-SSGP), SE-MPGP with 5 particles (5-SE-MPGP), and SENN-MPGP with 5 particles (5-SENN-MPGP).
Table 1: Benchmarks comparison for the synthetic datasets. NMSEi, MNLPi, RTimei represent the NMSE, MNLP and running time for the function fi (i = 1, 2).

Method        NMSE1    MNLP1    RTime1      NMSE2    MNLP2    RTime2
5-SPGP        0.2243   0.5409   28.6418s    0.5445   1.5950   30.3578s
10-SSGP       0.0887   0.1606   18.8605s    0.1144   1.1208   10.2025s
5-SE-MPGP     0.0880   1.6318   12.5737s    0.1687   1.3524   12.4801s
5-SENN-MPGP   0.0881   0.1820   18.7513s    0.1289   1.1782   11.5909s
Table 2: Benchmarks comparison. Data1 is the temperature dataset; Data2 is the pendulum dataset.

Data1         NMSE   MNLP   RTime        Data2          NMSE   MNLP    RTime
5-SPGP        0.48   1.62   181.3s       10-SPGP        0.61   1.98    16.54s
10-SSGP       0.27   1.33   97.16s       10-SSGP        1.04   10.85   23.59s
5-SE-MPGP     0.11   1.05   50.99s       20-SE-MPGP     0.63   2.20    7.04s
5-SENN-MPGP   0.10   1.16   59.25s       20-SENN-MPGP   0.58   2.12    8.60s
In Table 1, our 5-SE-MPGP mainly outperforms SPGP, except that its MNLP1 is worse than that of SPGP. The reason is that the synthetic functions are nonstationary while SE-MPGP uses a stationary SE kernel. Hence we run 5-SENN-MPGP with a nonstationary kernel to show that our MPGP is competitive with SSGP, and much better, with a shorter running time, than SPGP.
Global Surface Temperature Dataset: We present here a preliminary analysis of the Global Surface Temperature Dataset for January 2011 (http://data.giss.nasa.gov/gistemp/). We first gather the training data over 100 collections. For each collection, we randomly select 90 data points, where the input vector is the longitude and latitude location and the output is the temperature (°C). There are two test sets: the first is a grid of test inputs (longitude: -180:40:180, latitude: -90:20:90) that is used to show the estimated surface temperature; the second test input set (100 points) is randomly selected from the data website after all the training data have been obtained.
The first experiment shows the predicted surface temperature at the grid test inputs. We set the number of particles in SE-MPGP and SENN-MPGP to 20. From Figure 4, the KFGP methods get stuck in local optima: SE-KFGP seems to underfit, since it does not model the cold region around the location (100, 50); SENN-KFGP seems to overfit, since it unexpectedly models a cold region around (-100, -50). On the contrary, SE-MPGP and SENN-MPGP fit the data set well thanks to the online learning of the hyperparameters.
The second experiment evaluates the estimation error of our MPGP on the second test set. We first run all the methods and compute the NMSE and MNLP over iterations. From the first row of Figure 5, the NMSE and MNLP of MPGP are lower than those of KFGP. Moreover, SENN-MPGP is much lower than SE-MPGP, which shows that SENN-MPGP successfully models the spatial nonstationarity of the temperature data. Then we vary the number of particles. For each number, we run SE-MPGP and SENN-MPGP 3 times to evaluate the average NMSE, MNLP and running time. It shows that SENN-MPGP fits the data better than SE-MPGP, the trade-off being a longer running time.
The third experiment compares our MPGP with the benchmarks. All notations are the same as in the third experiment on the simulated data. We also randomly permute the order of the training subsets as a robustness check. From Table 2, the comparison results show that our MPGP achieves a better estimation performance with a shorter running time than SPGP and SSGP.
Pendulum Dataset: This is a small data set which contains 315 training points. In [3], it is mentioned that the SSGP model seems to overfit this data due to the gradient ascent optimization. We are interested in whether our method can successfully capture the nonlinear property of this pendulum data. We collect the training data over 9 collections, with 35 training points per collection. Then, 100 test points are randomly selected for evaluating the performance. From Table 2, our SENN-MPGP obtains the estimate with the fastest speed and the smallest NMSE among all the methods, and its MNLP is competitive with SPGP.
Figure 4: The temperature estimation at t = 100. The first row (from left to right): the temperature value bar, the full training observation plot, and the grid test output estimation by SE-KFGP, SENN-KFGP, SE-MPGP, SENN-MPGP. The black crosses are the observations at t = 100. The second row (from left to right) is the estimation of the log hyperparameters (log(a0) to log(a4)).
Figure 5: The NMSE and MNLP evaluation. The first row: the NMSE and MNLP over iterations. The second row: the average NMSE, MNLP and running time as a function of the number of particles.
6 Conclusion
We have proposed a novel Bayesian filtering framework for GP regression, which is a fast and accurate online method. Our MPGP framework not only estimates the function values successfully, but also provides a new technique for learning the unknown static hyperparameters, by estimating the marginal posterior of the hyperparameters online. The small training set at each iteration greatly reduces the computational load, while the estimation performance improves over iterations because the recursive filtering propagates the previous estimate to enhance the current one. In comparison with other benchmarks, we have shown that our MPGP provides a robust estimation at a competitive computational speed. In the future, it would be interesting to explore time-varying function estimation with our MPGP.
References
[1] C. E. Rasmussen, C. K. I. Williams, Gaussian Processes for Machine Learning, MIT Press, Cambridge, MA, 2006.
[2] E. Snelson, Z. Ghahramani, Sparse Gaussian processes using pseudo-inputs, in: NIPS, 2006, pp. 1257–1264.
[3] M. Lázaro-Gredilla, J. Quiñonero-Candela, C. E. Rasmussen, A. R. Figueiras-Vidal, Sparse spectrum Gaussian process regression, Journal of Machine Learning Research 11 (2010) 1865–1881.
[4] S. Reece, S. Roberts, An introduction to Gaussian processes for the Kalman filter expert, in: FUSION, 2010.
[5] R. M. Neal, Monte Carlo implementation of Gaussian process models for Bayesian regression and classification, Tech. rep., Department of Statistics, University of Toronto (1997).
[6] D. J. C. MacKay, Introduction to Gaussian processes, in: Neural Networks and Machine Learning, 1998, pp. 133–165.
[7] M. P. Deisenroth, Efficient reinforcement learning using Gaussian processes, Ph.D. thesis, Karlsruhe Institute of Technology (2010).
[8] J. Liu, M. West, Combined parameter and state estimation in simulation-based filtering, in: Sequential Monte Carlo Methods in Practice, 2001, pp. 197–223.
[9] P. Li, R. Goodall, V. Kadirkamanathan, Estimation of parameters in a linear state space model using a Rao-Blackwellised particle filter, IEE Proceedings on Control Theory and Applications 151 (2004) 727–738.
[10] N. Kantas, A. Doucet, S. S. Singh, J. M. Maciejowski, An overview of sequential Monte Carlo methods for parameter estimation in general state space models, in: 15th IFAC Symposium on System Identification, 2009.
[11] A. Doucet, N. de Freitas, K. Murphy, S. Russell, Rao-Blackwellised particle filtering for dynamic Bayesian networks, in: UAI, 2000, pp. 176–183.
[12] N. de Freitas, Rao-Blackwellised particle filtering for fault diagnosis, in: IEEE Aerospace Conference Proceedings, 2002, pp. 1767–1772.
[13] T. Schön, F. Gustafsson, P.-J. Nordlund, Marginalized particle filters for mixed linear/nonlinear state-space models, IEEE Transactions on Signal Processing 53 (2005) 2279–2289.
[14] J. Ko, D. Fox, GP-BayesFilters: Bayesian filtering using Gaussian process prediction and observation models, in: IROS, 2008, pp. 3471–3476.
[15] M. P. Deisenroth, R. Turner, M. F. Huber, U. D. Hanebeck, C. E. Rasmussen, Robust filtering and smoothing with Gaussian processes, IEEE Transactions on Automatic Control.
[16] I. DiMatteo, C. R. Genovese, R. E. Kass, Bayesian curve fitting with free-knot splines, Biometrika 88 (2001) 1055–1071.
[17] S. A. Wood, Bayesian mixture of splines for spatially adaptive nonparametric regression, Biometrika 89 (2002) 513–528.
4,255 | 4,851 | Learning Mixtures of Tree Graphical Models
Daniel Hsu
Microsoft Research New England
[email protected]
Animashree Anandkumar
UC Irvine
[email protected]
Furong Huang
UC Irvine
[email protected]
Sham M. Kakade
Microsoft Research New England
[email protected]
Abstract
We consider unsupervised estimation of mixtures of discrete graphical models,
where the class variable is hidden and each mixture component can have a potentially different Markov graph structure and parameters over the observed variables.
We propose a novel method for estimating the mixture components with provable
guarantees. Our output is a tree-mixture model which serves as a good approximation to the underlying graphical model mixture. The sample and computational
requirements for our method scale as poly(p, r), for an r-component mixture of p-variate graphical models, for a wide class of models which includes tree mixtures
and mixtures over bounded degree graphs.
Keywords: Graphical models, mixture models, spectral methods, tree approximation.
1 Introduction
The framework of graphical models allows for parsimonious representation of high-dimensional
data by encoding statistical relationships among the given set of variables through a graph, known
as the Markov graph. Recent works have shown that a wide class of graphical models can be
estimated efficiently in high dimensions [1?3]. However, frequently, graphical models may not
suffice to explain all the characteristics of the observed data. For instance, there may be latent or
hidden variables, which can influence the observed data in myriad ways.
In this paper, we consider latent variable models, where a latent variable can alter the relationships
(both structural and parametric) among the observed variables. In other words, we posit the observed
data as being generated from a mixture of graphical models, where each mixture component has a
potentially different Markov graph structure and parameters. The choice variable corresponding
to the selection of the mixture component is hidden. Such a class of graphical model mixtures
can incorporate context-specific dependencies, and employs multiple graph structures to model the
observed data. This leads to a significantly richer class of models, compared to graphical models.
Learning graphical model mixtures is, however, far more challenging than learning graphical models. State-of-the-art theoretical guarantees are mostly limited to mixtures of product distributions, also known as latent class models or naïve Bayes models. These models are restrictive since they do not
allow for dependencies to exist among the observed variables in each mixture component. Our work
significantly generalizes this class and allows for general Markov dependencies among the observed
variables in each mixture component.
The output of our method is a tree mixture model, which is a good approximation for the underlying
graphical model mixture. The motivation behind fitting the observed data to a tree mixture is clear:
inference can be performed efficiently via belief propagation in each of the mixture components.
1
See [4] for a detailed discussion. Thus, a tree mixture model offers a good tradeoff between using
single-tree models, which are too simplistic, and general graphical model mixtures, where inference
is not tractable.
1.1 Summary of Results
We propose a novel method with provable guarantees for unsupervised estimation of discrete graphical model mixtures. Our method has mainly three stages: graph structure estimation, parameter
estimation, and tree approximation. The first stage involves estimation of the union graph structure
G∪ := ∪h Gh, which is the union of the Markov graphs {Gh} of the respective mixture components.
Our method is based on a series of rank tests, and can be viewed as a generalization of conditional-independence tests for graphical model selection (e.g. [1, 5, 6]). We establish that our method is
efficient (in terms of computational and sample complexities), when the underlying union graph has
sparse vertex separators. This includes tree mixtures and mixtures with bounded degree graphs. The
second stage of our algorithm involves parameter estimation of the mixture components. In general,
this problem is NP-hard. We provide conditions for tractable estimation of pairwise marginals of the
mixture components. Roughly, we exploit the conditional-independence relationships to convert the
given model to a series of mixtures of product distributions. Parameter estimation for product distribution mixtures has been well studied (e.g. [7–9]), and is based on spectral decompositions of the observed moments. We leverage these techniques to obtain estimates of the pairwise marginals
for each mixture component. The final stage for obtaining tree approximations involves running the
standard Chow-Liu algorithm [10] on each component using the estimated pairwise marginals of the
mixture components.
We prove that our method correctly recovers the union graph structure and the tree structures corresponding to maximum-likelihood tree approximations of the mixture components. Note that if
the underlying model is a tree mixture, we correctly recover the tree structures of the mixture components. The sample and computational complexities of our method scale as poly(p, r), for an
r-component mixture of p-variate graphical models, when the union graph has sparse vertex separators between any node pair. This includes tree mixtures and mixtures with bounded degree graphs.
To the best of our knowledge, this is the first work to provide provable learning guarantees for
graphical model mixtures. Our algorithm is also efficient for practical implementation and some
preliminary experiments suggest an advantage over EM with respect to running times and accuracy
of structure estimation of the mixture components. Thus, our approach for learning graphical model
mixtures has both theoretical and practical implications.
1.2 Related Work
Graphical Model Selection: Graphical model selection is a well studied problem starting from
the seminal work of Chow and Liu [10] for finding the maximum-likelihood tree approximation of a
graphical model. Works on high-dimensional loopy graphical model selection are more recent. They
can be classified into mainly two groups: non-convex local approaches [1, 2, 6] and those based on
convex optimization [3, 11]. However, these works are not directly applicable for learning mixtures
of graphical models. Moreover, our proposed method also provides a new approach for graphical
model selection, in the special case when there is only one mixture component.
Learning Mixture Models: Mixture models have been extensively studied, and there are a number of recent works on learning high-dimensional mixtures, e.g. [12,13]. These works provide guarantees on recovery under various separation constraints between the mixture components and/or
have computational and sample complexities growing exponentially in the number of mixture components r. In contrast, the so-called spectral methods have both computational and sample complexities scaling only polynomially in the number of components, and do not impose stringent separation
constraints. Spectral methods are applicable for parameter estimation in mixtures of discrete product
distributions [7] and more generally for latent trees [8] and general linear multiview mixtures [9].
We leverage on these techniques for parameter estimation in models beyond product distribution
mixtures.
2
2 Graphical Models and their Mixtures
A graphical model is a family of multivariate distributions Markov on a given undirected graph [14]. In a discrete graphical model, each node in the graph v ∈ V is associated with a random variable Yv taking values in a finite set Y. Let d := |Y| denote the cardinality of the set and p := |V| denote the number of variables. A vector of random variables Y := (Y1, . . . , Yp) with a joint probability mass function (pmf) P is Markov on the graph G if P satisfies the global Markov property for all disjoint sets A, B ⊂ V:

P(yA, yB | yS) = P(yA | yS) P(yB | yS),  ∀A, B ⊂ V : N[A] ∩ N[B] = ∅,

where S := S(A, B; G) is a node separator¹ between A and B, and N[A] denotes the closed neighborhood of A (i.e., including A).
We consider mixtures of discrete graphical models. Let H denote the discrete hidden choice variable corresponding to the selection of a mixture component, taking values in [r] := {1, . . . , r}, and let Y denote the observed random vector. Denote by πH := [P(H = h)]h the probability vector of the mixing weights and by Gh the Markov graph of the distribution P(y|H = h) of each mixture component. Given n i.i.d. samples y^n = [y1, . . . , yn] from P(y), our goal is to find a tree approximation for each mixture component {P(y|H = h)}h. We do not assume any knowledge of the mixing weights πH, the Markov graphs {Gh}h, or the parameters of the mixture components {P(y|H = h)}h. Moreover, since the variable H is latent, we do not know a priori the mixture component from which a sample is drawn. Thus, a major challenge is the decomposition of the observed statistics into the component models, and we tackle this in three main stages. First, we estimate the union graph G∪ := ∪_{h=1}^r Gh, which is the union of the Markov graphs of the components. We then use this graph estimate Ĝ∪ to obtain the pairwise marginals of the respective mixture components {P(y|H = h)}h. Finally, the Chow-Liu algorithm provides tree approximations {Th}h of the mixture components.
3 Estimation of the Union of Component Graphs
We propose a novel method for learning graphical model mixtures by first estimating the union graph G∪ = ∪_{h=1}^r Gh, which is the union of the graphs of the components. In the special case when Gh ≡ G∪, this gives the graph estimate of the components. However, the union graph G∪ appears to have no direct relationship with the marginalized model P(y). We first provide intuitions on how G∪ relates to the observed statistics.

Intuitions: We first establish the simple result that the union graph G∪ satisfies the Markov property in each mixture component. Recall that S(u, v; G∪) denotes a vertex separator between nodes u and v in G∪.

Fact 1 (Markov Property of G∪) For any two nodes u, v ∈ V such that (u, v) ∉ G∪,

Yu ⊥⊥ Yv | YS, H,  S := S(u, v; G∪).  (1)

Proof: The separator set in G∪, denoted by S := S(u, v; G∪), is also a vertex separator for u and v in each of the component graphs Gh. This is because the removal of S disconnects u and v in each Gh. Thus, we have the Markov property in each component: Yu ⊥⊥ Yv | YS, {H = h}, for each h ∈ [r], and the above result follows. □
The above result can be exploited to obtain a union graph estimate as follows: two nodes u, v are not neighbors in G∪ if a separator set S can be found which results in conditional independence, as in (1). The main challenge is that the variable H is not observed, and thus conditional independence cannot be directly inferred from observed statistics. However, the effect of H on the observed statistics can be quantified as follows:

Lemma 1 (Rank Property) Given an r-component mixture of graphical models with G∪ = ∪_{h=1}^r Gh, for any u, v ∈ V such that (u, v) ∉ G∪ and S := S(u, v; G∪), the probability matrix Mu,v,{S;k} := [P(Yu = i, Yv = j, YS = k)]i,j has rank at most r for any k ∈ Y^{|S|}.

¹A set S(A, B; G) ⊂ V is a separator of sets A and B if the removal of the nodes in S(A, B; G) separates A and B into distinct components.
The proof is given in [15]. Thus, the effect of marginalizing the choice variable H is seen in the rank of the observed probability matrices Mu,v,{S;k}. When u and v are non-neighbors in G∪, a separator set S can be found such that the rank of Mu,v,{S;k} is at most r. In order to use this result as a criterion for inferring neighbors in G∪, we require that the rank of Mu,v,{S;k} for any neighbors (u, v) ∈ G∪ be strictly larger than r. This requires the dimension of each node variable to satisfy d > r. We discuss in detail the set of sufficient conditions for correctly recovering G∪ in Section 3.1. An empirical version of this rank quantity is sketched below.
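To make the rank criterion concrete, the following sketch (ours; the function names are our own) estimates Mu,v,{S;k} from samples and counts its singular values above a threshold, which is exactly the quantity thresholded by the rank test introduced next:

import numpy as np

def empirical_joint(samples, u, v, S, k, d):
    # Empirical M_{u,v,{S;k}}[i, j] = P-hat(Yu = i, Yv = j, YS = k), where
    # samples is an (n, p) integer array with entries in {0, ..., d-1},
    # S is a list of node indices and k the conditioning configuration.
    mask = np.all(samples[:, S] == np.asarray(k), axis=1)
    M = np.zeros((d, d))
    for row in samples[mask]:
        M[row[u], row[v]] += 1.0
    return M / len(samples)

def effective_rank(M, xi):
    # Rank(M; xi): number of singular values exceeding the threshold xi.
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > xi))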
Tractable Graph Families: Another obstacle in using Lemma 1 to estimate the graph G∪ is computational: the search for separators S for any node pair u, v ∈ V is exponential in |V| := p if no further constraints are imposed. We consider graph families where a vertex separator of size at most η can be found for any (u, v) ∉ G∪. Under our framework, the hardness of learning a union graph is parameterized by η. Similar observations have been made before for graphical model selection [1]. There are many natural families where η is small:

1. If G∪ is trivial (i.e., has no edges), then η = 0 and we have a mixture of product distributions.
2. When G∪ is a tree, i.e., we have a mixture model Markov on the same tree, then η = 1, since there is a unique path between any two nodes on a tree.
3. For an arbitrary r-component tree mixture, G∪ = ∪h Th where each component is a tree distribution, we have η ≤ r (since for any node pair, there is a unique path in each of the r trees {Th}, and separating the node pair in each Th also separates it on G∪).
4. For an arbitrary mixture of bounded-degree graphs, we have η ≤ Σ_{h∈[r]} Δh, where Δh is the maximum degree in Gh, i.e., the Markov graph corresponding to component {H = h}.

In general, η depends on the respective degree bounds Δh for the component graphs Gh, as well as on the extent of their overlap. In the worst case, η can be as high as Σ_{h∈[r]} Δh, while in the special case when Gh ≡ G∪, the bound remains Δh ≡ Δ. Note that for a general graph G∪ with treewidth tw(G∪) and maximum degree Δ(G∪), we have η ≤ min(Δ(G∪), tw(G∪)). A brute-force check of this separator bound on a known graph is sketched below.
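As a small illustration (ours, not part of the paper; it uses networkx for the graph utilities), the following brute-force check verifies, for a known union graph, that every non-adjacent pair admits a vertex separator of size at most η; it is only practical for small p and η:

import itertools
import networkx as nx

def separator_bound_holds(G, eta):
    # Check that every non-edge (u, v) of G has a vertex separator of
    # size at most eta, by exhaustive search over candidate sets S.
    nodes = list(G.nodes)
    for u, v in itertools.combinations(nodes, 2):
        if G.has_edge(u, v):
            continue
        rest = [w for w in nodes if w not in (u, v)]
        if not any(not nx.has_path(nx.restricted_view(G, S, []), u, v)
                   for size in range(eta + 1)
                   for S in itertools.combinations(rest, size)):
            return False
    return True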
Algorithm 1 Ĝ∪ = RankTest(y^n; ξn,p, η, r) for estimating G∪ := ∪_{h=1}^r Gh of an r-component mixture using the samples y^n, where η is the bound on the size of vertex separators between any node pair in G∪ and ξn,p is a threshold on the singular values.

Rank(A; ξ) denotes the effective rank of a matrix A, i.e., the number of singular values larger than ξ. M̂^n_{u,v,{S;k}} := [P̂(Yu = i, Yv = j, YS = k)]i,j is the empirical estimate computed using the n i.i.d. samples y^n.

Initialize Ĝ^n_∪ = (V, ∅). For each u, v ∈ V, estimate M̂^n_{u,v,{S;k}} from y^n for some configuration k ∈ Y^{|S|}; if

min_{S⊂V\{u,v}, |S|≤η} Rank(M̂^n_{u,v,{S;k}}; ξn,p) > r,  (2)

then add (u, v) to Ĝ^n_∪.
Rank Test: Based on the above observations, we propose a rank test, Algorithm 1, to estimate the union graph G∪ := ∪_{h∈[r]} Gh. The method searches for potential separators S between any two given nodes u, v ∈ V, based on the effective rank of M̂^n_{u,v,{S;k}}: if the effective rank is r or less, then u and v are declared non-neighbors (with S as their separator). If no such set is found, they are declared neighbors. Thus, the method involves searching for separators for each node pair u, v ∈ V by considering all sets S ⊂ V \ {u, v} satisfying |S| ≤ η. The computational complexity of this procedure is O(p^{η+2} d³), where d is the dimension of each node variable Yi, i ∈ V, and p is the number of nodes. This is because the number of rank tests performed is O(p^{η+2}) over all node pairs and conditioning sets, and each rank test has O(d³) complexity, since it involves the singular value decomposition (SVD) of a d × d matrix. A sketch of the procedure is given below.
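As an illustration, here is a direct (unoptimized) rendering of Algorithm 1 in Python, reusing empirical_joint and effective_rank from the sketch above; picking the most frequent configuration k of YS is our own simplification of the "some configuration" in the algorithm:

import itertools
import numpy as np

def rank_test(samples, d, r, eta, xi):
    # Estimate the union graph: (u, v) is declared a non-edge iff some S
    # with |S| <= eta makes the effective rank of M-hat at most r.
    p = samples.shape[1]
    edges = set()
    for u, v in itertools.combinations(range(p), 2):
        is_edge = True
        rest = [w for w in range(p) if w not in (u, v)]
        for size in range(eta + 1):
            for S in itertools.combinations(rest, size):
                if size == 0:
                    k = ()
                else:
                    cfg, cnt = np.unique(samples[:, S], axis=0,
                                         return_counts=True)
                    k = tuple(cfg[np.argmax(cnt)])
                M = empirical_joint(samples, u, v, list(S), k, d)
                if effective_rank(M, xi) <= r:
                    is_edge = False
                    break
            if not is_edge:
                break
        if is_edge:
            edges.add((u, v))
    return edges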
3.1 Analysis of the Rank Test
We now provide guarantees for the success of the rank tests in estimating G∪. As noted before, we require that the number of components r and the dimension d of each node variable satisfy d > r. Moreover, we assume a bound on the size of the separator sets, η = O(1). This includes tree mixtures and mixtures over bounded-degree graphs. In addition, the following parameters determine the success of the rank tests.

(A1) Rank condition for neighbors: Let Mu,v,{S;k} := [P(Yu = i, Yv = j, YS = k)]i,j and

ρmin := min_{(u,v)∈G∪, S⊂V\{u,v}, |S|≤η}  max_{k∈Y^{|S|}}  σr+1(Mu,v,{S;k}) > 0,  (3)

where σr+1(·) denotes the (r + 1)-th singular value, when the singular values are arranged in descending order σ1(·) ≥ σ2(·) ≥ . . . ≥ σd(·). This ensures that the probability matrices for neighbors (u, v) ∈ G∪ have (effective) rank at least r + 1, so that the rank test can correctly distinguish neighbors from non-neighbors. It rules out the presence of spurious low-rank matrices between neighboring nodes in G∪ (for instance, when the nodes are marginally independent or when the distribution is degenerate).

(A2) Choice of threshold ξ: The threshold ξ on the singular values is chosen as ξ := ρmin/2.

(A3) Number of samples: Given δ ∈ (0, 1), the number of samples n satisfies

n > nRank(δ; p) := max( (2/t²) (2 log p + log δ⁻¹ + log 2), (1/(ρmin − t))² ),  (4)

for some t ∈ (0, ρmin) (e.g., t = ρmin/2), where p is the number of nodes.
We now provide the result on the success of recovering the union graph G∪ := ∪_{h=1}^r Gh.

Theorem 1 (Success of Rank Tests) The RankTest(y^n; ξ, η, r) recovers the correct graph G∪, which is the union of the component Markov graphs, under (A1)–(A3), with probability at least 1 − δ.

A special case of the above result is graphical model selection, where there is a single graphical model (r = 1) and we are interested in estimating its graph structure.

Corollary 1 (Application to Graphical Model Selection) Given n i.i.d. samples y^n, RankTest(y^n; ξ, η, 1) is structurally consistent under (A1)–(A3), with probability at least 1 − δ.
Remarks: Thus, the rank test is also applicable to graphical model selection. Previous works (see Section 1.2) have proposed tests based on conditional independence, using either conditional mutual information or conditional variation distances; see [1, 6]. The rank test above is thus an alternative test for conditional independence in graphical models, resulting in graph structure estimation. In addition, it extends naturally to the estimation of the union graph structure of mixture components. Our result above establishes that our method is also efficient in high dimensions, since it only requires a logarithmic number of samples for structural consistency (n = Ω(log p)).
4 Parameter Estimation of Mixture Components
Having obtained an estimate of the union graph G∪, we now describe a procedure for estimating the parameters of the mixture components {P(y|H = h)}. Our method is based on spectral decomposition, proposed previously for mixtures of product distributions [7–9]. We recap it briefly below and then describe how it can be adapted to the more general setting of graphical model mixtures.

Recap of Spectral Decomposition for Mixtures of Product Distributions: Consider the case where V = {u, v, w} and Yu ⊥⊥ Yv ⊥⊥ Yw | H. For simplicity, assume that d = r, i.e., the hidden and observed variables have the same dimension; this assumption will be removed subsequently. Denote Mu|H := [P(Yu = i | H = j)]i,j, and similarly for Mv|H and Mw|H, and assume that they are full rank. Denote the probability matrices Mu,v := [P(Yu = i, Yv = j)]i,j and Mu,v,{w;k} := [P(Yu = i, Yv = j, Yw = k)]i,j. The parameters (i.e., the matrices Mu|H, Mv|H, Mw|H) can be estimated as follows:
Lemma 2 (Mixture of Product Distributions) Given the above model, let λ^(k) = [λ1^(k), . . . , λd^(k)]^⊤ be the column vector of the d eigenvalues given by

λ^(k) := Eigenvalues( Mu,v,{w;k} Mu,v^{−1} ),  k ∈ Y.  (5)

Let Λ := [λ^(1) | λ^(2) | . . . | λ^(d)] be the matrix whose k-th column is λ^(k). We have

Mw|H := [P(Yw = i | H = j)]i,j = Λ^⊤.  (6)
For the proof of the above result and for the general algorithm (when d ≥ r), see [9]. Thus, if we have a general product distribution mixture over the nodes in V, we can learn the parameters by performing the above spectral decomposition over different triplets {u, v, w}. However, an obstacle remains: spectral decompositions over different triplets {u, v, w} result in different permutations of the labels of the hidden variable H. To overcome this, note that any two triplets (u, v, w) and (u, v′, w′) share the same set of eigenvectors in (5) when the "left" node u is the same. Thus, if we consider a fixed node u∗ ∈ V as the "left" node and use a fixed matrix to diagonalize (5) for all triplets, we obtain a consistent ordering of the hidden labels over all triplet decompositions.
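The following sketch (ours, with simplifying assumptions: d = r, exact diagonalizability, and a common eigenvector basis computed from one reference slice) illustrates the computation in Lemma 2 together with the label-alignment trick:

import numpy as np

def estimate_Mw_given_H(samples, u, v, w, d):
    # Sketch of Lemma 2 (assumes d = r): returns an estimate of M_{w|H},
    # up to one common permutation of the hidden labels, from the
    # eigenvalues of M_{u,v,{w;k}} M_{u,v}^{-1}, k = 0, ..., d-1.
    n = len(samples)
    Muv = np.zeros((d, d))
    Muvw = np.zeros((d, d, d))          # Muvw[:, :, k] = M_{u,v,{w;k}}
    for row in samples:
        Muv[row[u], row[v]] += 1.0 / n
        Muvw[row[u], row[v], row[w]] += 1.0 / n
    Muv_inv = np.linalg.inv(Muv)
    # Diagonalize one reference slice; reusing its eigenvector basis for
    # all k keeps the hidden-label ordering consistent across slices.
    _, R = np.linalg.eig(Muvw[:, :, 0] @ Muv_inv)
    Rinv = np.linalg.inv(R)
    Lam = np.column_stack([np.diag(Rinv @ Muvw[:, :, k] @ Muv_inv @ R).real
                           for k in range(d)])
    return Lam.T    # entry (i, j) estimates P(Yw = i | H = j)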
Parameter Estimation in Graphical Model Mixtures: We now adapt the above procedure for estimating the components of a general graphical model mixture. We first make a simple observation on how to obtain mixtures of product distributions by considering separators on the union graph G∪. For any three nodes u, v, w ∈ V which are pairwise non-neighbors in G∪, let Suvw denote a multiway vertex separator, i.e., the removal of the nodes in Suvw disconnects u, v and w in G∪. Along the lines of Fact 1,

Yu ⊥⊥ Yv ⊥⊥ Yw | YSuvw, H,  ∀u, v, w : (u, v), (v, w), (w, u) ∉ G∪.  (7)

Thus, by fixing the configuration of the nodes in Suvw, we obtain a product distribution mixture over {u, v, w}. If the previously proposed rank test succeeds in estimating G∪, then we possess correct knowledge of the separators Suvw. In this case, we can obtain the estimates {P(Yw | YSuvw = k, H = h)}h by fixing the nodes in Suvw and using the spectral decomposition described in Lemma 2, and the procedure can be repeated over different triplets {u, v, w}.
An obstacle remains, viz., the permutation of the hidden labels over different triplet decompositions {u, v, w}. In the case of product distribution mixtures, as discussed previously, this is resolved by fixing the "left" node in the triplet to some u∗ ∈ V and using the same matrix for diagonalization over the different triplets. However, an additional complication arises when we consider graphical model mixtures, where conditioning over separators is required. We require that the permutation of the hidden labels be unchanged upon conditioning over different values of the variables in the separator set Su∗vw. This holds when the separator set Su∗vw has no effect on node u∗, i.e., we require that

Yu∗ ⊥⊥ YV\{u∗} | H,  for some u∗ ∈ V,  (8)

which implies that u∗ is isolated from all other nodes in the graph G∪.

Condition (8) is required for identifiability if we only operate on statistics over different triplets (along with their separator sets). In other words, if we resort to operations over only low-order statistics, we require additional conditions such as (8) for identifiability. However, our setting is a significant generalization of mixtures of product distributions, where (8) is required to hold for all nodes.

Finally, since our goal is to estimate pairwise marginals of the mixture components, in place of the node w in the triplet {u, v, w} of Lemma 2, we need to consider a node pair a, b ∈ V. The general algorithm allows the variables in the triplet to have different dimensions; see [9] for details. Thus, we obtain estimates of the pairwise marginals of the mixture components. For details on the implementation, refer to [15].
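Putting the pieces together, a minimal outer step (ours; it reuses the construction of the previous sketch, conditions on a single separator configuration, and again assumes d = r) estimates a pairwise marginal by letting the grouped pair (a, b), of dimension d², play the role of w:

import numpy as np

def pairwise_marginal(samples, u_star, v, a, b, sep, k_sep, d):
    # Estimate P(Ya, Yb | H = h): condition on Y_sep = k_sep, group
    # (Ya, Yb) into one symbol of dimension d**2 and apply Lemma 2.
    cond = samples[np.all(samples[:, sep] == np.asarray(k_sep), axis=1)]
    n = len(cond)
    w = cond[:, a] * d + cond[:, b]
    Muv = np.zeros((d, d))
    Muvw = np.zeros((d, d, d * d))
    for row, wk in zip(cond, w):
        Muv[row[u_star], row[v]] += 1.0 / n
        Muvw[row[u_star], row[v], wk] += 1.0 / n
    Muv_inv = np.linalg.inv(Muv)
    _, R = np.linalg.eig(Muvw[:, :, 0] @ Muv_inv)   # basis fixed by u_star
    Rinv = np.linalg.inv(R)
    Lam = np.column_stack([np.diag(Rinv @ Muvw[:, :, k] @ Muv_inv @ R).real
                           for k in range(d * d)])
    # Column h of Lam.T, reshaped to (d, d), estimates P(Ya, Yb | H = h).
    return [Lam.T[:, h].reshape(d, d) for h in range(d)]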
4.1 Analysis and Guarantees
In addition to (A1)–(A3) in Section 3.1, which guarantee correct recovery of G∪, and the conditions discussed above, the success of parameter estimation depends on the following quantities:
(A4) Non-degeneracy: For each node pair a, b ∈ V and any subset S ⊂ V \ {a, b} with |S| ≤ 2η and k ∈ Y^{|S|}, the probability matrix M(a,b)|H,{S;k} := [P(Ya,b = i | H = j, YS = k)]i,j ∈ R^{d²×r} has rank r.

(A5) Spectral Bounds and Number of Samples: Refer to the various spectral bounds used to obtain K(δ; p, d, r) in [15], where δ ∈ (0, 1) is fixed. Given any fixed ε ∈ (0, 1), assume that the number of samples satisfies

n > nspect(δ, ε; p, d, r) := 4K²(δ; p, d, r) / ε².  (9)

Note that (A4) is a natural condition required for the success of spectral decomposition, and it has been previously imposed for learning product distribution mixtures [7–9]. Moreover, when (A4) does not hold, i.e., when the matrices are not full rank, parameter estimation is computationally at least as hard as learning parity with noise, which is conjectured to be computationally hard [8]. Condition (A5) is required for learning product distribution mixtures [9], and we inherit it here.
We now provide guarantees for the estimation of the pairwise marginals of the mixture components. Let ‖·‖₂ denote the ℓ2 norm of a vector.

Theorem 2 (Parameter Estimation of Mixture Components) Under the assumptions (A1)–(A5), the spectral decomposition method outputs P̂spect(Ya, Yb | H = h), for each a, b ∈ V, such that for all h ∈ [r] there exists a permutation τ(h) ∈ [r] with

‖P̂spect(Ya, Yb | H = h) − P(Ya, Yb | H = τ(h))‖₂ ≤ ε,  (10)

with probability at least 1 − 4δ.
Remark: Recall that p denotes the number of variables, r the number of mixture components, d the dimension of each node variable, and η the bound on the size of separator sets between any node pair in the union graph. We establish in [15] that K(δ; p, d, r) is O(p^{2η+2} d^{2η} r⁵ δ⁻¹ polylog(p, d, r, δ⁻¹)). Thus, we require the number of samples in (9) to scale as n = Ω(p^{4η+4} d^{4η} r¹⁰ ε⁻² δ⁻² polylog(p, d, r, δ⁻¹)). Since we consider models where η = O(1) is a small constant, this implies a polynomial sample complexity in p, d, r.
Tree Approximation of Mixture Components: The final step involves using the estimated pairwise marginals of each component {P̂spect(Ya, Yb | H = h)} to obtain a tree approximation of each component via the Chow-Liu algorithm [10] (a sketch of this step is given after Theorem 3 below). We now impose a standard non-degeneracy condition on each mixture component, to guarantee the existence of a unique tree structure corresponding to the maximum-likelihood tree approximation of the mixture component.
(A6) Separation of Mutual Information: Let Th denote the maximum-likelihood tree approximation corresponding to the model P(y|H = h) when exact statistics are input, and let

ϑ := min_{h∈[r]}  min_{(a,b)∉Th}  min_{(u,v)∈Path(a,b;Th)}  ( I(Yu, Yv | H = h) − I(Ya, Yb | H = h) ),  (11)

where Path(a, b; Th) denotes the edges along the path connecting a and b in Th. Intuitively, ϑ quantifies the "bottleneck" where errors are most likely to occur in tree structure estimation. See [16] for a detailed discussion.

(A7) Number of Samples: Given ε_tree defined in [15], we require

n > nspect(δ, ε_tree; p, d, r),  (12)

where nspect is given by (9). Intuitively, ε_tree provides the bound on the distortion of the estimated pairwise marginals of the mixture components that is required for correct estimation of the tree approximations, and it depends on ϑ in (11).

Theorem 3 (Tree Approximations of Mixture Components) Under (A1)–(A7), the Chow-Liu algorithm outputs the correct tree structures corresponding to the maximum-likelihood tree approximations of the mixture components {P(y|H = h)}, with probability at least 1 − 4δ, when the estimates of the pairwise marginals {P̂spect(Ya, Yb | H = h)} from the spectral decomposition method are input.
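For completeness, here is a compact sketch (ours; it uses networkx for the spanning tree) of the Chow-Liu step applied to one component: the edge weights are the pairwise mutual informations computed from the estimated pairwise marginals, and the output is the maximum-weight spanning tree:

import numpy as np
import networkx as nx

def chow_liu_tree(pairwise):
    # pairwise[(a, b)]: estimated joint pmf matrix of (Ya, Yb) for a < b.
    G = nx.Graph()
    for (a, b), P in pairwise.items():
        Pa, Pb = P.sum(axis=1), P.sum(axis=0)     # marginal pmfs
        mask = P > 0
        mi = np.sum(P[mask] * np.log(P[mask] / np.outer(Pa, Pb)[mask]))
        G.add_edge(a, b, weight=mi)
    return nx.maximum_spanning_tree(G)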
Figure 1: Performance of the proposed method, EM, and EM initialized with the proposed method's output, on a tree mixture with two components. (a) Overall likelihood of the mixture; (b) conditional likelihood of the strong component; (c) conditional likelihood of the weak component.
Figure 2: Classification error and normalized edit distances of the proposed method, EM, and EM initialized with the proposed method's output, on the tree mixture. (a) Classification error; (b) strong component edit distance; (c) weak component edit distance.
5 Experiments
Experimental results are presented on synthetic data. We estimate the graphs using the proposed algorithm and compare the performance of our method with EM [4]. Comprehensive results based on the normalized edit distances and log-likelihood scores between the estimated and the true graphs are presented. We generate samples from a mixture over two different trees (r = 2) with mixing weights π = [0.7, 0.3], using Gibbs sampling. Each mixture component is generated from the standard Potts model on p = 60 nodes, where the node variables are ternary (d = 3), and the number of samples is n ∈ [2.5 × 10³, 10⁴]. The joint distribution of the nodes in each mixture component is given by

P(y | H = h) ∝ exp( Σ_{(i,j)∈Gh} J_{i,j;h} (I(Yi = Yj) − 1) + Σ_{i∈V} K_{i;h} Yi ),
where I is the indicator function and Jh := {J_{i,j;h}} are the edge potentials of the model. For the first component (H = 1), the edge potentials J1 are chosen uniformly from [5, 5.05], while for the second component (H = 2), J2 are chosen from [0.5, 0.55]. We refer to the first component as strong and to the second as weak, since the correlations vary widely between the two models due to this choice of parameters. The node potentials are all set to zero (K_{i;h} = 0), except at the isolated node u∗ in the union graph. A sketch of this sampling setup is given below.
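The following rough sketch (ours; a full experiment would also need burn-in, thinning, and the tree-generation details, which we elide) shows single-site Gibbs sampling from one Potts component with the edge potentials described above:

import numpy as np

def gibbs_potts(edges, J, p, d=3, sweeps=500, rng=None):
    # Single-site Gibbs sampler for P(y) ∝ exp(sum_{(i,j)} J_ij (I(yi=yj)-1))
    # on a given edge list; returns one sample after `sweeps` full sweeps.
    rng = rng or np.random.default_rng()
    nbrs = {i: [] for i in range(p)}
    for (i, j), w in zip(edges, J):
        nbrs[i].append((j, w))
        nbrs[j].append((i, w))
    y = rng.integers(0, d, size=p)
    for _ in range(sweeps):
        for i in range(p):
            logp = np.array([sum(w * ((y[j] == s) - 1.0) for j, w in nbrs[i])
                             for s in range(d)])
            probs = np.exp(logp - logp.max())
            y[i] = rng.choice(d, p=probs / probs.sum())
    return y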
The performance of the proposed method is compared with that of EM. We consider 10 random initializations of EM and run each to convergence. We also evaluate EM initialized with the output of the proposed method (referred to as Proposed+EM in the figures). We observe in Fig 1a that the overall likelihood under our method is comparable with EM; intuitively, this is because EM attempts to maximize the overall likelihood. However, our algorithm has significantly superior performance with respect to the edit distance, i.e., the error in estimating the tree structures of the two components, as seen in Fig 2. In fact, EM never manages to recover the structure of the weak component (the component with weak correlations). Intuitively, this is because EM uses the overall likelihood as the criterion for tree selection; under the above choice of parameters, the weak component has a much lower contribution to the overall likelihood, and thus EM is unable to recover it. We also observe, in Fig 1b and Fig 1c, that our proposed method has superior performance in terms of the conditional likelihood of both components. Classification error is evaluated in Fig 2a: we obtain smaller classification errors than the EM method.
The above experimental results confirm our theoretical analysis and suggest the advantages of our basic technique over more common approaches. Our method provides a point of tractability in the spectrum of probabilistic models, and extending beyond the class we consider here is a promising direction for future research.
Acknowledgements: The first author is supported in part by the NSF Award CCF-1219234, AFOSR Award
FA9550-10-1-0310, ARO Award W911NF-12-1-0404, and setup funds at UCI. The third author is supported
by the NSF Award 1028394 and AFOSR Award FA9550-10-1-0310.
References
[1] A. Anandkumar, V. Y. F. Tan, F. Huang, and A. S. Willsky. High-Dimensional Structure Learning of Ising Models: Local Separation Criterion. Accepted to Annals of Statistics, Jan. 2012.
[2] A. Jalali, C. Johnson, and P. Ravikumar. On learning discrete graphical models using greedy methods. In Proc. of NIPS, 2011.
[3] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional Ising Model Selection Using l1-Regularized Logistic Regression. Annals of Statistics, 2008.
[4] M. Meila and M. I. Jordan. Learning with mixtures of trees. J. of Machine Learning Research, 1:1–48, 2001.
[5] P. Spirtes and C. Meek. Learning Bayesian networks with discrete variables from data. In Proc. of Intl. Conf. on Knowledge Discovery and Data Mining, pages 294–299, 1995.
[6] G. Bresler, E. Mossel, and A. Sly. Reconstruction of Markov Random Fields from Samples: Some Observations and Algorithms. In Intl. workshop APPROX Approximation, Randomization and Combinatorial Optimization, pages 343–356. Springer, 2008.
[7] J. T. Chang. Full reconstruction of Markov models on evolutionary trees: identifiability and consistency. Mathematical Biosciences, 137(1):51–73, 1996.
[8] E. Mossel and S. Roch. Learning nonsingular phylogenies and hidden Markov models. The Annals of Applied Probability, 16(2):583–614, 2006.
[9] A. Anandkumar, D. Hsu, and S. M. Kakade. A Method of Moments for Mixture Models and Hidden Markov Models. In Proc. of Conf. on Learning Theory, June 2012.
[10] C. Chow and C. Liu. Approximating Discrete Probability Distributions with Dependence Trees. IEEE Tran. on Information Theory, 14(3):462–467, 1968.
[11] N. Meinshausen and P. Bühlmann. High Dimensional Graphs and Variable Selection With the Lasso. Annals of Statistics, 34(3):1436–1462, 2006.
[12] M. Belkin and K. Sinha. Polynomial learning of distribution families. In IEEE Annual Symposium on Foundations of Computer Science, pages 103–112, 2010.
[13] A. Moitra and G. Valiant. Settling the polynomial learnability of mixtures of Gaussians. In IEEE Annual Symposium on Foundations of Computer Science, 2010.
[14] S. L. Lauritzen. Graphical Models. Clarendon Press, 1996.
[15] A. Anandkumar, D. Hsu, and S. M. Kakade. Learning High-Dimensional Mixtures of Graphical Models. Preprint, available on ArXiv:1203.0697, Feb. 2012.
[16] V. Y. F. Tan, A. Anandkumar, and A. Willsky. A Large-Deviation Analysis for the Maximum Likelihood Learning of Tree Structures. IEEE Tran. on Information Theory, 57(3):1714–1735, March 2011.
4,256 | 4,852 | Link Prediction in Graphs with Autoregressive
Features
Emile Richard
CMLA UMR CNRS 8536,
ENS Cachan, France
Stéphane Gaïffas
CMAP - École Polytechnique
& LSTA - Université Paris 6
Nicolas Vayatis
CMLA UMR CNRS 8536,
ENS Cachan, France
Abstract
In the paper, we consider the problem of link prediction in time-evolving graphs.
We assume that certain graph features, such as the node degree, follow a vector
autoregressive (VAR) model and we propose to use this information to improve
the accuracy of prediction. Our strategy involves a joint optimization procedure
over the space of adjacency matrices and VAR matrices which takes into account
both sparsity and low rank properties of the matrices. Oracle inequalities are derived and illustrate the trade-offs in the choice of smoothing parameters when modeling the joint effect of the sparsity and low rank properties. The estimate is computed efficiently using proximal methods through a generalized forward-backward algorithm.
1 Introduction
Forecasting systems behavior with multiple responses has been a challenging issue in many contexts
of applications such as collaborative filtering, financial markets, or bioinformatics, where responses
can be, respectively, movie ratings, stock prices, or activity of genes within a cell. Statistical modeling techniques have been widely investigated in the context of multivariate time series either in the
multiple linear regression setup [4] or with autoregressive models [23]. More recently, kernel-based
regularized methods have been developed for multitask learning [7, 2]. These approaches share the
use of the correlation structure among input variables to enrich the prediction on every single output.
Often, the correlation structure is assumed to be given or it is estimated separately. A discrete encoding of correlations between variables can be modeled as a graph so that learning the dependence
structure amounts to performing graph inference through the discovery of uncovered edges on the
graph. The latter problem is interesting per se and it is known as the problem of link prediction
where it is assumed that only a part of the graph is actually observed [15, 9]. This situation occurs
in various applications such as recommender systems, social networks, or proteomics, and the appropriate tools can be found among matrix completion techniques [21, 5, 1]. In the realistic setup
of a time-evolving graph, matrix completion was also used and adapted to take into account the
dynamics of the features of the graph [18]. In this paper, we study the prediction problem where the observation is a sequence of graph adjacency matrices $(A_t)_{0 \le t \le T}$ and the goal is to predict $A_{T+1}$.
This type of problem arises in applications such as recommender systems where, given information on purchases made by some users, one would like to predict future purchases. In this context,
users and products can be modeled as the nodes of a bipartite graph, while purchases or clicks are
modeled as edges. In functional genomics and systems biology, estimating regulatory networks in
gene expression can be performed by modeling the data as graphs and fitting predictive models is
a natural way for estimating evolving networks in these contexts. A large variety of methods for
link prediction only consider predicting from a single static snapshot of the graph - this includes
heuristics [15, 20], matrix factorization [13], diffusion [16], or probabilistic methods [22]. More
recently, some works have investigated using sequences of observations of the graph to improve the
prediction, such as using regression on features extracted from the graphs [18], using matrix factorization [14], continuous-time regression [25]. Our main assumption is that the network effect is a
cause and a symptom at the same time, and therefore, the edges and the graph features should be
estimated simultaneously. We propose a regularized approach to predict the uncovered links and the
evolution of the graph features simultaneously. We provide oracle bounds under the assumption that
the noise sequence has subgaussian tails and we prove that our procedure achieves a trade-off in the
calibration of smoothing parameters which adjust with the sparsity and the rank of the unknown adjacency matrix. The rest of this paper is organized as follows. In Section 2, we describe the general
setup of our work with the main assumptions and we formulate a regularized optimization problem
which aims at jointly estimating the autoregression parameters and predicting the graph. In Section
3, we provide technical results with oracle inequalities and other theoretical guarantees on the joint
estimation-prediction. Section 4 is devoted to the description of the numerical simulations which
illustrate our approach. We also provide an efficient algorithm for solving the optimization problem and show empirical results. The proof of the theoretical results are provided as supplementary
material in a separate document.
2 Estimation of low-rank graphs with autoregressive features
Our approach is based on the assumption that features can explain most of the information contained
in the graph, and that these features are evolving with time. We make the following assumptions
about the sequence $(A_t)_{t \ge 0}$ of adjacency matrices of the graph sequence.
Low-Rank. We assume that the matrices $A_t$ have low rank. This reflects the presence of highly connected groups of nodes such as communities in social networks, or product categories and groups of loyal/fan users in marketplace data, and is sometimes motivated by the small number of factors that explain node interactions.
Autoregressive linear features. We assume to be given a linear map $\omega : \mathbb{R}^{n\times n} \to \mathbb{R}^d$ defined by
$$\omega(A) = \big(\langle \Omega_1, A\rangle, \cdots, \langle \Omega_d, A\rangle\big)^\top, \qquad (1)$$
where $(\Omega_i)_{1\le i\le d}$ is a set of $n \times n$ matrices. These matrices can be either deterministic or random in our theoretical analysis, but we take them deterministic for the sake of simplicity. The vector time series $(\omega(A_t))_{t\ge 0}$ has autoregressive dynamics, given by a VAR (Vector Auto-Regressive) model:
$$\omega(A_{t+1}) = W_0^\top \omega(A_t) + N_{t+1}, \qquad (2)$$
where $W_0 \in \mathbb{R}^{d\times d}$ is an unknown sparse matrix and $(N_t)_{t\ge 0}$ is a sequence of noise vectors in $\mathbb{R}^d$. An example of linear features is the degree (i.e. the number of edges connected to each node, or the sum of their weights if the edges are weighted), which is a measure of popularity in social and commerce networks. Introducing
$$X_{T-1} = (\omega(A_0), \ldots, \omega(A_{T-1}))^\top \quad \text{and} \quad X_T = (\omega(A_1), \ldots, \omega(A_T))^\top,$$
which are both $T \times d$ matrices, we can write this model in a matrix form:
$$X_T = X_{T-1} W_0 + N_T, \qquad (3)$$
where $N_T = (N_1, \ldots, N_T)^\top$.
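To make the dynamics concrete, the following short Python sketch (our own illustration; the dimensions, sparsity pattern and noise scale are assumed, not taken from the paper) simulates the VAR recursion (2) and stacks the features into the matrices $X_{T-1}$ and $X_T$ of (3).

import numpy as np

rng = np.random.default_rng(0)
d, T = 10, 50                                   # feature dimension and horizon (assumed)
W0 = np.zeros((d, d))                           # sparse VAR matrix W_0
support = rng.choice(d * d, size=3 * d, replace=False)
W0.flat[support] = rng.normal(scale=0.2, size=3 * d)

features = [rng.normal(size=d)]                 # omega(A_0), drawn at random here
for t in range(T):
    N = 0.1 * rng.normal(size=d)                # noise vector N_{t+1}
    features.append(W0.T @ features[-1] + N)    # omega(A_{t+1}) = W0^T omega(A_t) + N_{t+1}

X_prev = np.stack(features[:-1])                # X_{T-1}, a T x d matrix
X_next = np.stack(features[1:])                 # X_T, a T x d matrix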
This assumes that the noise is driven by time-series dynamics (a martingale increment), where the coordinates are independent (meaning that features are independently corrupted by noise), with sub-gaussian tails and variance uniformly bounded by a constant $\sigma^2$. In particular, no independence assumption between the $N_t$ is required here.
Notations. The notations $\|\cdot\|_F$, $\|\cdot\|_p$, $\|\cdot\|_\infty$, $\|\cdot\|_*$ and $\|\cdot\|_{\mathrm{op}}$ stand, respectively, for the Frobenius norm, entry-wise $\ell_p$ norm, entry-wise $\ell_\infty$ norm, trace-norm (or nuclear norm, given by the sum of the singular values) and operator norm (the largest singular value). We denote by $\langle A, B\rangle = \mathrm{tr}(A^\top B)$ the Euclidean matrix product. A vector in $\mathbb{R}^d$ is always understood as a $d \times 1$ matrix. We denote by $\|A\|_0$ the number of non-zero elements of $A$. The product $A \circ B$ between two matrices with matching dimensions stands for the Hadamard or entry-wise product between $A$ and $B$. The matrix $|A|$ contains the absolute values of the entries of $A$. The matrix $(M)_+$ is the componentwise positive part of the matrix $M$, and $\mathrm{sign}(M)$ is the sign matrix associated to $M$ with the convention $\mathrm{sign}(0) = 0$.
If $A$ is an $n \times n$ matrix with rank $r$, we write its SVD as $A = U \Sigma V^\top = \sum_{j=1}^r \sigma_j u_j v_j^\top$ where $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$ is an $r \times r$ diagonal matrix containing the non-zero singular values of $A$ in decreasing order, and $U = [u_1, \ldots, u_r]$, $V = [v_1, \ldots, v_r]$ are $n \times r$ matrices with columns given by the left and right singular vectors of $A$. The projection matrix onto the space spanned by the columns (resp. rows) of $A$ is given by $P_U = UU^\top$ (resp. $P_V = VV^\top$). The operator $\mathcal{P}_A : \mathbb{R}^{n\times n} \to \mathbb{R}^{n\times n}$ given by $\mathcal{P}_A(B) = P_U B + B P_V - P_U B P_V$ is the projector onto the linear space spanned by the matrices $u_k x^\top$ and $y v_k^\top$ for $1 \le j, k \le r$ and $x, y \in \mathbb{R}^n$. The projector onto the orthogonal space is given by $\mathcal{P}_A^\perp(B) = (I - P_U) B (I - P_V)$. We also use the notation $a \vee b = \max(a, b)$.
2.1 Joint prediction-estimation through penalized optimization
In order to reflect the autoregressive dynamics of the features, we use a least-squares goodness-of-fit criterion that encourages the similarity between two feature vectors at successive time steps. In order to induce sparsity in the estimator of $W_0$, we penalize this criterion using the $\ell_1$ norm. This leads to the following penalized objective function:
$$J_1(W) = \frac{1}{T}\|X_T - X_{T-1}W\|_F^2 + \kappa \|W\|_1,$$
where $\kappa > 0$ is a smoothing parameter.
Now, for the prediction of $A_{T+1}$, we propose to minimize a least-squares criterion penalized by the combination of an $\ell_1$ norm and a trace-norm. This mixture of norms induces sparsity and a low rank of the adjacency matrix. Such a combination of $\ell_1$ and trace-norm was already studied in [8] for the matrix regression model, and in [19] for the prediction of an adjacency matrix.
The objective function defined below exploits the fact that if $W$ is close to $W_0$, then the features of the next graph $\omega(A_{T+1})$ should be close to $W^\top \omega(A_T)$. Therefore, we consider
$$J_2(A, W) = \frac{1}{d}\|\omega(A) - W^\top \omega(A_T)\|_2^2 + \tau \|A\|_* + \gamma \|A\|_1,$$
where $\tau, \gamma > 0$ are smoothing parameters. The overall objective function is the sum of the two partial objectives $J_1$ and $J_2$, which is jointly convex with respect to $A$ and $W$:
$$L(A, W) = \frac{1}{T}\|X_T - X_{T-1}W\|_F^2 + \kappa \|W\|_1 + \frac{1}{d}\|\omega(A) - W^\top \omega(A_T)\|_2^2 + \tau \|A\|_* + \gamma \|A\|_1. \qquad (4)$$
If we choose convex cones $\mathcal{A} \subset \mathbb{R}^{n\times n}$ and $\mathcal{W} \subset \mathbb{R}^{d\times d}$, our joint estimation-prediction procedure is defined by
$$(\hat{A}, \hat{W}) \in \arg\min_{(A,W) \in \mathcal{A} \times \mathcal{W}} L(A, W). \qquad (5)$$
It is natural to take $\mathcal{W} = \mathbb{R}^{d\times d}$ and $\mathcal{A} = (\mathbb{R}_+)^{n\times n}$ since there is no a priori on the values of the feature matrix $W_0$, while the entries of the matrix $A_{T+1}$ must be positive.
In the next section we propose oracle inequalities which prove that this procedure can estimate $W_0$ and predict $A_{T+1}$ at the same time.
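As a sanity check on the formulation, a minimal Python evaluation of the objective $L(A, W)$ of (4) can be written as follows; the helper omega_map realizes the feature map through a list of matrices Omega_j, and all names are ours rather than the authors'.

import numpy as np

def omega_map(A, Omegas):
    # omega(A) = (<Omega_1, A>, ..., <Omega_d, A>)
    return np.array([np.sum(Om * A) for Om in Omegas])

def joint_objective(A, W, X_next, X_prev, omega_AT, Omegas, kappa, tau, gamma):
    T, d = X_next.shape
    var_fit = np.linalg.norm(X_next - X_prev @ W, 'fro') ** 2 / T
    graph_fit = np.linalg.norm(omega_map(A, Omegas) - W.T @ omega_AT) ** 2 / d
    return (var_fit + kappa * np.abs(W).sum()
            + graph_fit + tau * np.linalg.norm(A, 'nuc') + gamma * np.abs(A).sum())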
2.2 Main result
The central contribution of our work is to bound the prediction error with high probability under the
following natural hypothesis on the noise process.
Assumption 1. We assume that $(N_t)_{t\ge 0}$ satisfies $\mathbb{E}[N_t \mid \mathcal{F}_{t-1}] = 0$ for any $t \ge 1$ and that there is $\sigma > 0$ such that for any $\lambda \in \mathbb{R}$, $j = 1, \ldots, d$ and $t \ge 0$:
$$\mathbb{E}[e^{\lambda (N_t)_j} \mid \mathcal{F}_{t-1}] \le e^{\sigma^2 \lambda^2 / 2}.$$
Moreover, we assume that for each $t \ge 0$, the coordinates $(N_t)_1, \ldots, (N_t)_d$ are independent.
The main result can be summarized as follows. The prediction error and the estimation error can be
simultaneously bounded by the sum of three terms that involve homogeneously (a) the sparsity, (b)
the rank of the adjacency matrix AT +1 , and (c) the sparsity of the VAR model matrix W0 . The tight
bounds we obtain are similar to the bounds of the Lasso and are upper bounded by:
$$C_1 \frac{\log d}{T}\|W_0\|_0 + C_2 \frac{\log n}{d}\|A_{T+1}\|_0 + C_3 \frac{\log n}{d}\,\mathrm{rank}\, A_{T+1}.$$
The positive constants $C_1, C_2, C_3$ are proportional to the noise level $\sigma$. The interplay between the rank and sparsity constraints on $A_{T+1}$ is reflected in the observation that the values of $C_2$ and $C_3$ can be changed as long as their sum remains constant.
3 Oracle inequalities
In this section we give oracle inequalities for the mixed prediction-estimation error which is given, for any $A \in \mathbb{R}^{n\times n}$ and $W \in \mathbb{R}^{d\times d}$, by
$$\mathcal{E}(A, W)^2 = \frac{1}{d}\|(W - W_0)^\top \omega(A_T) - \omega(A - A_{T+1})\|_2^2 + \frac{1}{T}\|X_{T-1}(W - W_0)\|_F^2. \qquad (6)$$
It is important to have in mind that an upper-bound on $\mathcal{E}$ implies upper-bounds on each of its two components. It entails in particular an upper-bound on the feature estimation error $\|X_{T-1}(\hat{W} - W_0)\|_F$ that makes $\|(\hat{W} - W_0)^\top \omega(A_T)\|_2$ smaller and consequently controls the prediction error over the graph edges through $\|\omega(\hat{A} - A_{T+1})\|_2$.
The upper bounds on $\mathcal{E}$ given below exhibit the dependence of the accuracy of estimation and prediction on the number of features $d$, the number of edges $n$ and the number $T$ of observed graphs in the sequence.
Let us recall $N_T = (N_1, \ldots, N_T)^\top$ and introduce the noise processes
$$M = -\frac{1}{d}\sum_{j=1}^d (N_{T+1})_j \Omega_j \quad \text{and} \quad \Xi = \frac{1}{T}\sum_{t=1}^T \omega(A_{t-1}) N_t^\top + \frac{1}{d}\omega(A_T) N_{T+1}^\top,$$
which are, respectively, $n \times n$ and $d \times d$ random matrices. The source of randomness comes from the noise sequence $(N_t)_{t\ge 0}$, see Assumption 1. If these noise processes are controlled correctly, we can prove the following oracle inequalities for procedure (5). The next result is an oracle inequality of slow type (see for instance [3]), that holds in full generality.
Theorem 1. Under Assumption 1, let $(\hat{A}, \hat{W})$ be given by (5) and suppose that
$$\tau \ge 2\alpha \|M\|_{\mathrm{op}}, \quad \gamma \ge 2(1-\alpha)\|M\|_\infty \quad \text{and} \quad \kappa \ge 2\|\Xi\|_\infty \qquad (7)$$
for some $\alpha \in (0, 1)$. Then, we have
$$\mathcal{E}(\hat{A}, \hat{W})^2 \le \inf_{(A,W) \in \mathcal{A} \times \mathcal{W}} \Big\{ \mathcal{E}(A, W)^2 + 2\tau \|A\|_* + 2\gamma \|A\|_1 + 2\kappa \|W\|_1 \Big\}.$$
For the proof of oracle inequalities of fast type, the restricted eigenvalue (RE) condition introduced
in [3] and [10, 11] is of importance. Restricted eigenvalue conditions are implied by, and in general weaker than, the so-called incoherence or RIP (Restricted isometry property, [6]) assumptions,
which exclude, for instance, strong correlations between covariates in a linear regression model.
This condition is acknowledged to be one of the weakest to derive fast rates for the Lasso (see [24]
for a comparison of conditions).
Matrix versions of these assumptions are introduced in [12]. Below is a version of the RE assumption that fits in our context. First, we need to introduce the two restriction cones.
The first cone is related to the $\|W\|_1$ term used in procedure (5). If $W \in \mathbb{R}^{d\times d}$, we denote by $\Theta_W = \mathrm{sign}(W) \in \{0, \pm 1\}^{d\times d}$ the signed sparsity pattern of $W$ and by $\Theta_W^\perp \in \{0, 1\}^{d\times d}$ the orthogonal sparsity pattern. For a fixed matrix $W \in \mathbb{R}^{d\times d}$ and $c > 0$, we introduce the cone
$$\mathcal{C}_1(W, c) = \Big\{ W' \in \mathcal{W} : \|\Theta_W^\perp \circ W'\|_1 \le c \|\Theta_W \circ W'\|_1 \Big\}.$$
This cone contains the matrices $W'$ that have their largest entries in the sparsity pattern of $W$.
The second cone is related to the mixture of the terms $\|A\|_*$ and $\|A\|_1$ in procedure (5). Before defining it, we need further notations and definitions.
For a fixed $A \in \mathbb{R}^{n\times n}$ and $c, \beta > 0$, we introduce the cone
$$\mathcal{C}_2(A, c, \beta) = \Big\{ A' \in \mathcal{A} : \|\mathcal{P}_A^\perp(A')\|_* + \beta \|\Theta_A^\perp \circ A'\|_1 \le c \big( \|\mathcal{P}_A(A')\|_* + \beta \|\Theta_A \circ A'\|_1 \big) \Big\}.$$
This cone consists of the matrices $A'$ with large entries close to those of $A$ and that are "almost aligned" with the row and column spaces of $A$. The parameter $\beta$ quantifies the interplay between these two notions.
Assumption 2 (Restricted Eigenvalue (RE)). For $W \in \mathcal{W}$ and $c > 0$, we have
$$\mu_1(W, c) = \inf \Big\{ \mu > 0 : \|\Theta_W \circ W'\|_F \le \frac{\mu}{\sqrt{T}} \|X_{T-1} W'\|_F, \; \forall W' \in \mathcal{C}_1(W, c) \Big\}.$$
For $A \in \mathcal{A}$ and $c, \beta > 0$, we introduce
$$\mu_2(A, W, c, \beta) = \inf \Big\{ \mu > 0 : \|\mathcal{P}_A(A')\|_F \vee \|\Theta_A \circ A'\|_F \le \frac{\mu}{\sqrt{d}} \|W'^\top \omega(A_T) - \omega(A')\|_2, \; \forall W' \in \mathcal{C}_1(W, c), \; \forall A' \in \mathcal{C}_2(A, c, \beta) \Big\}. \qquad (8)$$
The RE assumption consists of assuming that the constants $\mu_1$ and $\mu_2$ are finite. Now we can state the following Theorem that gives a fast oracle inequality for our procedure using RE.
Theorem 2. Under Assumptions 1 and 2, let $(\hat{A}, \hat{W})$ be given by (5) and suppose that
$$\tau \ge 3\alpha \|M\|_{\mathrm{op}}, \quad \gamma \ge 3(1-\alpha)\|M\|_\infty \quad \text{and} \quad \kappa \ge 3\|\Xi\|_\infty \qquad (9)$$
for some $\alpha \in (0, 1)$. Then, we have
$$\mathcal{E}(\hat{A}, \hat{W})^2 \le \inf_{(A,W) \in \mathcal{A} \times \mathcal{W}} \Big\{ \mathcal{E}(A, W)^2 + \frac{25}{18}\mu_2(A, W)^2 \big( \tau^2\,\mathrm{rank}(A) + \gamma^2 \|A\|_0 \big) + \frac{25}{36}\kappa^2 \mu_1(W)^2 \|W\|_0 \Big\},$$
where $\mu_1(W) = \mu_1(W, 5)$ and $\mu_2(A, W) = \mu_2(A, W, 5, \gamma/\tau)$ (see Assumption 2).
The proofs of Theorems 1 and 2 use tools introduced in [12] and [3].
Note that the residual term from this oracle inequality mixes the notions of sparsity of $A$ and $W$ via the terms $\mathrm{rank}(A)$, $\|A\|_0$ and $\|W\|_0$. It says that our mixed penalization procedure provides an optimal trade-off between fitting the data and complexity, measured by both sparsity and low rank. This is the first result of this nature to be found in the literature.
In the next Theorem 3, we obtain convergence rates for the procedure (5) by combining Theorem 2 with controls on the noise processes. We introduce
with controls on the noise processes. We introduce
d
d
d
1 X
1 X
1 X
2
>
2
v?,op
=
?>
?
?
?
?
,
v
=
?j ? ?j
,
j
j j
j
?,?
d j=1
d j=1
d j=1
op
op
?
2
??2 = max ??,j
,
j=1,...,d
2
where ??,j
=
T
1 X
T
?j (At?1 )2 + ?j (AT )2 ,
t=1
which are the (observable) variance terms that naturally appear in the controls of the noise processes.
We introduce also
1
2
`T = 2 max log log ??,j
? 2 ?e ,
j=1,...,d
??,j
which is a small (observable) technical term that comes out of our analysis of the noise process ?.
This term is a small price to pay for the fact that no independence assumption is required on the
noise sequence (Nt )t?0 , but only a martingale increment structure with sub-gaussian tails.
Theorem 3. Consider the procedure $(\hat{A}, \hat{W})$ given by (5) with smoothing parameters given by
$$\tau = 3\alpha\sigma v_{\Omega,\mathrm{op}} \sqrt{\frac{2(x + \log(2n))}{d}}, \qquad \gamma = 3(1-\alpha)\sigma v_{\Omega,\infty} \sqrt{\frac{2(x + 2\log n)}{d}},$$
$$\kappa = 6\sigma\sigma_\omega \Big( \sqrt{\frac{2e(x + 2\log d + \ell_T)}{T}} + \frac{\sqrt{2e(x + 2\log d + \ell_T)}}{d} \Big),$$
for some $\alpha \in (0, 1)$, and fix a confidence level $x > 0$. Then, we have
$$\mathcal{E}(\hat{A}, \hat{W})^2 \le \inf_{(A,W) \in \mathcal{A} \times \mathcal{W}} \Big\{ \mathcal{E}(A, W)^2 + C_1 \|W\|_0 (x + 2\log d + \ell_T)\Big(\frac{1}{T} + \frac{1}{d^2}\Big) + C_2 \|A\|_0 \frac{2(x + 2\log n)}{d} + C_3\,\mathrm{rank}(A) \frac{2(x + \log(2n))}{d} \Big\},$$
where
$$C_1 = 100 e\, \mu_1(W)^2 \sigma^2 \sigma_\omega^2, \quad C_2 = 25 \mu_2(A, W)^2 (1-\alpha)^2 \sigma^2 v_{\Omega,\infty}^2, \quad C_3 = 25 \mu_2(A, W)^2 \alpha^2 \sigma^2 v_{\Omega,\mathrm{op}}^2,$$
with a probability larger than $1 - 17e^{-x}$, where $\mu_1$ and $\mu_2$ are the same as in Theorem 2.
The proof of Theorem 3 follows directly from Theorem 2 and basic noise control results. In the next Theorem, we propose more explicit upper bounds for both the individual estimation of $W_0$ and the prediction of $A_{T+1}$.
Theorem 4. Under the same assumptions as in Theorem 3 and the same choice of smoothing parameters, for any $x > 0$ the following inequalities hold with probability larger than $1 - 17e^{-x}$:
- Feature prediction error:
$$\frac{1}{T}\|X_T(\hat{W} - W_0)\|_F^2 \le \frac{25}{36}\kappa^2 \mu_1(W_0)^2 \|W_0\|_0 + \inf_{A \in \mathcal{A}} \Big\{ \frac{1}{d}\|\omega(A) - \omega(A_{T+1})\|_2^2 + \frac{25}{18}\mu_2(A, W_0)^2 \big( \tau^2\,\mathrm{rank}(A) + \gamma^2 \|A\|_0 \big) \Big\} \qquad (10)$$
- VAR parameter estimation error:
$$\|\hat{W} - W_0\|_1 \le 5\kappa \mu_1(W_0)^2 \|W_0\|_0 + 6\sqrt{\|W_0\|_0}\, \mu_1(W_0) \sqrt{ \inf_{A \in \mathcal{A}} \Big\{ \frac{1}{d}\|\omega(A) - \omega(A_{T+1})\|_2^2 + \frac{25}{18}\mu_2(A, W_0)^2 \big( \tau^2\,\mathrm{rank}(A) + \gamma^2 \|A\|_0 \big) \Big\} } \qquad (11)$$
- Link prediction error:
$$\|\hat{A} - A_{T+1}\|_* \le 5\kappa \mu_1(W_0)^2 \|W_0\|_0 + \mu_2(A_{T+1}, W_0) \big( 6\sqrt{\mathrm{rank}\, A_{T+1}} + 5\sqrt{\|A_{T+1}\|_0} \big) \times \sqrt{ \inf_{A \in \mathcal{A}} \Big\{ \frac{1}{d}\|\omega(A) - \omega(A_{T+1})\|_2^2 + \frac{25}{18}\mu_2(A, W_0)^2 \big( \tau^2\,\mathrm{rank}(A) + \gamma^2 \|A\|_0 \big) \Big\} } \qquad (12)$$
4 Algorithms and Numerical Experiments
4.1 Generalized forward-backward algorithm for minimizing L
We use the algorithm designed in [17] for minimizing our objective function. Note that this algorithm is preferable to the method introduced in [18] as it directly minimizes $L$ jointly in $(A, W)$ rather than alternately minimizing in $W$ and $A$.
Moreover we use the novel joint penalty from [19] that is more suited for estimating graphs. The proximal operator for the trace norm is given by the shrinkage operation: if $Z = U \mathrm{diag}(\sigma_1, \cdots, \sigma_n) V^\top$ is the singular value decomposition of $Z$,
$$\mathrm{prox}_{\tau \|\cdot\|_*}(Z) = U \mathrm{diag}\big( (\sigma_i - \tau)_+ \big)_i V^\top.$$
Similarly, the proximal operator for the $\ell_1$-norm is the soft thresholding operator defined by using the entry-wise product of matrices denoted by $\circ$:
$$\mathrm{prox}_{\gamma \|\cdot\|_1}(Z) = \mathrm{sign}(Z) \circ (|Z| - \gamma)_+.$$
The algorithm converges under very mild conditions when the step size $\eta$ is smaller than $\frac{2}{L}$, where $L$ is the operator norm of the joint quadratic loss:
$$\Phi : (A, W) \mapsto \frac{1}{T}\|X_T - X_{T-1}W\|_F^2 + \frac{1}{d}\|\omega(A) - W^\top \omega(A_T)\|_F^2.$$
Algorithm 1 Generalized Forward-Backward to Minimize L
Initialize A, Z1, Z2, W
repeat
  Compute (G_A, G_W) = grad_{A,W} Phi(A, W)
  Compute Z1 = prox_{2 eta tau ||.||_*}(2A - Z1 - eta G_A)
  Compute Z2 = prox_{2 eta gamma ||.||_1}(2A - Z2 - eta G_A)
  Set A = (Z1 + Z2) / 2
  Set W = prox_{eta kappa ||.||_1}(W - eta G_W)
until convergence
return (A, W) minimizing L
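The following Python transcription of Algorithm 1 (a sketch under our own choices of step size and stopping rule, not the authors' code) makes the two proximal maps explicit; in practice eta is taken below 2/L as stated above.

import numpy as np

def prox_trace(Z, t):
    # Singular value shrinkage: proximal operator of t * ||.||_*
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def prox_l1(Z, t):
    # Soft thresholding: proximal operator of t * ||.||_1
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def gfb_minimize(grad_phi, A, W, kappa, tau, gamma, eta, n_iter=500):
    # grad_phi(A, W) returns the gradients (G_A, G_W) of the smooth part
    Z1, Z2 = A.copy(), A.copy()
    for _ in range(n_iter):
        GA, GW = grad_phi(A, W)
        Z1 = prox_trace(2 * A - Z1 - eta * GA, 2 * eta * tau)
        Z2 = prox_l1(2 * A - Z2 - eta * GA, 2 * eta * gamma)
        A = 0.5 * (Z1 + Z2)
        W = prox_l1(W - eta * GW, eta * kappa)
    return A, W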
4.2 A generative model for graphs having linearly autoregressive features
Let $V_0 \in \mathbb{R}^{n\times r}$ be a sparse matrix, $V_0^\dagger$ its pseudo-inverse such that $V_0^\dagger V_0 = V_0^\top V_0^{\top\dagger} = I_r$. Fix two sparse matrices $W_0 \in \mathbb{R}^{r\times r}$ and $U_0 \in \mathbb{R}^{n\times r}$. Now define the sequence of matrices $(A_t)_{t\ge 0}$ for $t = 1, 2, \cdots$ by
$$U_t = U_{t-1} W_0 + N_t \quad \text{and} \quad A_t = U_t V_0^\top + M_t$$
for i.i.d. sparse noise matrices $N_t$ and $M_t$, which means that for any pair of indices $(i, j)$, with high probability $(N_t)_{i,j} = 0$ and $(M_t)_{i,j} = 0$. We define the linear feature map $\omega(A) = A V_0^{\top\dagger}$, and point out the following (a small simulation sketch follows the list):
1. The sequence $\omega(A_t)^\top = U_t + M_t V_0^{\top\dagger}$ follows the linear autoregressive relation
$$\omega(A_t)^\top = \omega(A_{t-1})^\top W_0 + N_t + M_t V_0^{\top\dagger}.$$
2. For any time index $t$, the matrix $A_t$ is close to $U_t V_0^\top$, which has rank at most $r$.
3. The matrices $A_t$ and $U_t$ are both sparse by construction.
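A direct transcription of this generative process (with our own, assumed choices for sizes, sparsity levels and noise scales) is sketched below; soft-thresholded Gaussian noise, as used in Section 4.3, would serve equally well for the sparse noise matrices.

import numpy as np

rng = np.random.default_rng(1)
n, r, T = 50, 5, 10

def sparse_normal(shape, density=0.1, scale=1.0):
    # A matrix whose entries are zero with high probability
    return rng.normal(scale=scale, size=shape) * (rng.random(shape) < density)

V0 = sparse_normal((n, r))
W0 = sparse_normal((r, r), scale=0.3)
U = sparse_normal((n, r))

A_seq = []
for t in range(T):
    U = U @ W0 + sparse_normal((n, r), scale=0.1)              # U_t = U_{t-1} W0 + N_t
    A_seq.append(U @ V0.T + sparse_normal((n, n), scale=0.1))  # A_t = U_t V0^T + M_t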
4.3 Empirical evaluation
We tested the presented methods on synthetic data generated as in Section 4.2. In our experiments the noise matrices $M_t$ and $N_t$ were built by soft-thresholding i.i.d. noise $\mathcal{N}(0, \sigma^2)$. We took as input $T = 10$ successive graph snapshots on $n = 50$ nodes graphs of rank $r = 5$. We used $d = 10$ linear features, and finally the noise level was set to $\sigma = .5$. We compare our methods to standard baselines in link prediction. We use the area under the ROC curve as the measure of performance and report empirical results averaged over 50 runs with the corresponding confidence intervals in Figure 1. The competitor methods are the nearest neighbors (NN) and static sparse and low-rank estimation, that is the link prediction algorithm suggested in [19]. The algorithm NN scores pairs of nodes with the number of common friends between them, which is given by $A^2$ when $A$ is the cumulative graph adjacency matrix $\tilde{A}_T = \sum_{t=0}^T A_t$, and the static sparse and low-rank estimation is obtained by minimizing the objective $\|X - \tilde{A}_T\|_F^2 + \tau \|X\|_* + \gamma \|X\|_1$, and can be seen as the closest static version of our method. The two methods autoregressive low-rank and static low-rank are regularized using only the trace-norm (corresponding to forcing $\gamma = 0$) and are slightly inferior to their sparse and low-rank rivals. Since the matrix $V_0$ defining the linear map $\omega$ is unknown we consider the feature map $\omega(A) = AV$, where $\tilde{A}_T = U \Sigma V^\top$ is the SVD of $\tilde{A}_T$. The parameters $\tau$ and $\gamma$ are chosen by 10-fold cross validation for each of the methods separately.
4.4 Discussion
1. Comparison with the baselines. This experiment sharply shows the benefit of using a temporal approach when one can handle the feature extraction task. The left-hand plot shows that if few snapshots are available ($T \le 4$ in these experiments), then static approaches are
[Figure 1 appears here. Left panel: AUC versus T for the methods Autoregressive Sparse and Low-rank, Autoregressive Low-rank, Static Sparse and Low-rank, Static Low-rank, and Nearest-Neighbors. Right panel: phase transition diagram over T and rank A_{T+1}.]
Figure 1: Left: performance of algorithms in terms of Area Under the ROC Curve, average and
confidence intervals over 50 runs. Right: Phase transition diagram.
to be preferred, whereas feature autoregressive approaches outperform as soon as a sufficient number T of graph snapshots is available (see phase transition). The decreasing performance
of static algorithms can be explained by the fact that they use as input a mixture of graphs
observed at different time steps. Knowing that at each time step the nodes have specific
latent factors, despite the slow evolution of the factors, adding the resulting graphs leads to confusing the factors.
2. Phase transition. The right-hand figure is a phase transition diagram showing in which part
of rank and time domain the estimation is accurate and illustrates the interplay between
these two domain parameters.
3. Choice of the feature map $\omega$. In the current work we used the projection onto the vector space of the top-$r$ singular vectors of the cumulative adjacency matrix as the linear map $\omega$,
and this choice has shown empirical superiority to other choices. The question of choosing
the best measurement to summarize graph information as in compress sensing seems to
have both theoretical and application potential. Moreover, a deeper understanding of the
connections of our problem with compressed sensing, for the construction and theoretical
validation of the features mapping, is an important point that needs several developments.
One possible approach is based on multi-kernel learning, that should be considered in a
future work.
4. Generalization of the method. In this paper we consider only an autoregressive process of
order 1. For better prediction accuracy, one could consider more general models, such as
vector ARMA models, and use model-selection techniques for the choice of the orders of
the model. A general modelling based on state-space model could be developed as well.
We presented a procedure for predicting graphs having linear autoregressive features. Our
approach can easily be generalized to non-linear prediction through kernel-based methods.
References
[1] J. Abernethy, F. Bach, Th. Evgeniou, and J.-Ph. Vert. A new approach to collaborative filtering: operator estimation with spectral regularization. JMLR, 10:803-826, 2009.
[2] A. Argyriou, M. Pontil, Ch. Micchelli, and Y. Ying. A spectral regularization framework for multi-task structure learning. Proceedings of Neural Information Processing Systems (NIPS), 2007.
[3] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of lasso and Dantzig selector. Annals of Statistics, 37, 2009.
[4] L. Breiman and J. H. Friedman. Predicting multivariate responses in multiple linear regression. Journal of the Royal Statistical Society (JRSS): Series B (Statistical Methodology), 59:3-54, 1997.
[5] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5), 2009.
[6] E. Candès and T. Tao. Decoding by linear programming. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2005.
[7] Th. Evgeniou, Ch. A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615-637, 2005.
[8] S. Gaiffas and G. Lecue. Sharp oracle inequalities for high-dimensional matrix prediction. Information Theory, IEEE Transactions on, 57(10):6942-6957, Oct. 2011.
[9] M. Kolar and E. P. Xing. On time varying undirected graphs. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS), 2011.
[10] V. Koltchinskii. The Dantzig selector and sparsity oracle inequalities. Bernoulli, 15(3):799-828, 2009.
[11] V. Koltchinskii. Sparsity in penalized empirical risk minimization. Ann. Inst. Henri Poincaré Probab. Stat., 45(1):7-57, 2009.
[12] V. Koltchinskii, K. Lounici, and A. Tsybakov. Nuclear norm penalization and optimal rates for noisy matrix completion. Annals of Statistics, 2011.
[13] Y. Koren. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 426-434. ACM, 2008.
[14] Y. Koren. Collaborative filtering with temporal dynamics. Communications of the ACM, 53(4):89-97, 2010.
[15] D. Liben-Nowell and J. Kleinberg. The link-prediction problem for social networks. Journal of the American Society for Information Science and Technology, 58(7):1019-1031, 2007.
[16] S. A. Myers and J. Leskovec. On the convexity of latent social network inference. In NIPS, 2010.
[17] H. Raguet, J. Fadili, and G. Peyré. Generalized forward-backward splitting. Arxiv preprint arXiv:1108.4404, 2011.
[18] E. Richard, N. Baskiotis, Th. Evgeniou, and N. Vayatis. Link discovery using graph feature tracking. Proceedings of Neural Information Processing Systems (NIPS), 2010.
[19] E. Richard, P.-A. Savalle, and N. Vayatis. Estimation of simultaneously sparse and low-rank matrices. In Proceedings of the 29th Annual International Conference on Machine Learning, 2012.
[20] P. Sarkar, D. Chakrabarti, and A. W. Moore. Theoretical justification of popular link prediction heuristics. In International Conference on Learning Theory (COLT), pages 295-307, 2010.
[21] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Proceedings of Neural Information Processing Systems 17, pages 1329-1336. MIT Press, Cambridge, MA, 2005.
[22] B. Taskar, M. F. Wong, P. Abbeel, and D. Koller. Link prediction in relational data. In Neural Information Processing Systems, volume 15, 2003.
[23] R. S. Tsay. Analysis of Financial Time Series. Wiley-Interscience, 3rd edition, 2005.
[24] S. A. van de Geer and P. Bühlmann. On the conditions used to prove oracle results for the Lasso. Electron. J. Stat., 3:1360-1392, 2009.
[25] D. Q. Vu, A. Asuncion, D. Hunter, and P. Smyth. Continuous-time regression models for longitudinal networks. In Advances in Neural Information Processing Systems. MIT Press, 2011.
4,257 | 4,853 | Small-Variance Asymptotics for Exponential Family
Dirichlet Process Mixture Models
Ke Jiang, Brian Kulis
Department of CSE
The Ohio State University
{jiangk,kulis}@cse.ohio-state.edu
Michael I. Jordan
Departments of EECS and Statistics
University of California at Berkeley
[email protected]
Abstract
Sampling and variational inference techniques are two standard methods for inference in probabilistic models, but for many problems, neither approach scales
effectively to large-scale data. An alternative is to relax the probabilistic model
into a non-probabilistic formulation which has a scalable associated algorithm.
This can often be fulfilled by performing small-variance asymptotics, i.e., letting
the variance of particular distributions in the model go to zero. For instance, in
the context of clustering, such an approach yields connections between the kmeans and EM algorithms. In this paper, we explore small-variance asymptotics
for exponential family Dirichlet process (DP) and hierarchical Dirichlet process
(HDP) mixture models. Utilizing connections between exponential family distributions and Bregman divergences, we derive novel clustering algorithms from the
asymptotic limit of the DP and HDP mixtures that features the scalability of existing hard clustering methods as well as the flexibility of Bayesian nonparametric
models. We focus on special cases of our analysis for discrete-data problems, including topic modeling, and we demonstrate the utility of our results by applying
variants of our algorithms to problems arising in vision and document analysis.
1 Introduction
An enduring challenge for machine learning is in the development of algorithms that scale to truly
large data sets. While probabilistic approaches?particularly Bayesian models?are flexible from
a modeling perspective, lack of scalable inference methods can limit applicability on some data.
For example, in clustering, algorithms such as k-means are often preferred in large-scale settings
over probabilistic approaches such as Gaussian mixtures or Dirichlet process (DP) mixtures, as the
k-means algorithm is easy to implement and scales to large data sets.
In some cases, links between probabilistic and non-probabilistic models can be made by applying
asymptotics to the variance (or covariance) of distributions within the model. For instance, connections between probabilistic and standard PCA can be made by letting the covariance of the data
likelihood in probabilistic PCA tend toward zero [1, 2]; similarly, the k-means algorithm may be
obtained as a limit of the EM algorithm when the covariances of the Gaussians corresponding to
each cluster goes to zero. Besides providing a conceptual link between seemingly quite different
approaches, small-variance asymptotics can yield useful alternatives to probabilistic models when
the data size becomes large, as the non-probabilistic models often exhibit more favorable scaling
properties. The use of such techniques to derive scalable algorithms from rich probabilistic models
is still emerging, but provides a promising approach to developing scalable learning algorithms.
This paper explores such small-variance asymptotics for clustering, focusing on the DP mixture.
Existing work has considered asymptotics over the Gaussian DP mixture [3], leading to k-meanslike algorithms that do not fix the number of clusters upfront. This approach, while an important
first step, raises the question of whether we can perform similar asymptotics over distributions other
than the Gaussian. We answer in the affirmative by showing how such asymptotics may be applied
to the exponential family distributions for DP mixtures; such analysis opens the door to a new class
of scalable clustering algorithms and utilizes connections between Bregman divergences and exponential families. We further extend our approach to hierarchical nonparametric models (specifically,
the hierarchical Dirichlet process (HDP) [4]), and we view a major contribution of our analysis to
be the development of a general hard clustering algorithm for grouped data.
One of the primary advantages of generalizing beyond the Gaussian case is that it opens the door
to novel scalable algorithms for discrete-data problems. For instance, visual bag-of-words [5] have
become a standard representation for images in a variety of computer vision tasks, but many existing
probabilistic models in vision cannot scale to the size of data sets now commonly available. Similarly, text document analysis models (e.g., LDA [6]) are almost exclusively discrete-data problems.
Our analysis covers such problems; for instance, a particular special case of our analysis is a hard
version of HDP topic modeling. We demonstrate the utility of our methods by exploring applications
in text and vision.
Related Work: In the non-Bayesian setting, asymptotics for the expectation-maximization algorithm for exponential family distributions were studied in [7]. The authors showed a connection between EM and a general k-means-like algorithm, where the squared Euclidean distance is replaced
by the Bregman divergence corresponding to exponential family distribution of interest. Our results
may be viewed as generalizing this approach to the Bayesian nonparametric setting. As discussed
above, our results may also be viewed as generalizing the approach of [3], where the asymptotics
were performed for the DP mixture with a Gaussian likelihood, leading to a k-means-like algorithm where the number of clusters is not fixed upfront. Note that our setting is considerably more
involved than either of these previous works, particularly since we will require an appropriate technique for computing an asymptotic marginal likelihood. Other connections between hard clustering
and probabilistic models were explored in [8], which proposes a ?Bayesian k-means? algorithm by
performing a maximization-expectation algorithm.
2 Background
In this section, we briefly review exponential family distributions, Bregman divergences, and the
Dirichlet process mixture model.
2.1 The Exponential Family
Consider the exponential family with natural parameter $\theta = \{\theta_j\}_{j=1}^d \in \mathbb{R}^d$; then the exponential family probability density function can be written as [9]:
$$p(x \mid \theta) = \exp\big( \langle x, \theta \rangle - \psi(\theta) - h(x) \big),$$
where $\psi(\theta) = \log \int \exp(\langle x, \theta \rangle - h(x))\,dx$ is the log-partition function. Here we assume for simplicity that $x$ is a minimal sufficient statistic for the natural parameter $\theta$. $\psi(\theta)$ can be utilized to compute the mean and covariance of $p(x \mid \theta)$; in particular, the expected value is given by $\nabla\psi(\theta)$, and the covariance is $\nabla^2\psi(\theta)$.
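As a concrete instance (a worked example of our own, not taken from the original), the Poisson distribution with rate $e^{\theta}$ fits this template:
$$p(x \mid \theta) = \frac{e^{-e^{\theta}}\, e^{\theta x}}{x!} = \exp\big( x\theta - e^{\theta} - \log x! \big),$$
so that $\psi(\theta) = e^{\theta}$ and $h(x) = \log x!$; the mean $\nabla\psi(\theta) = e^{\theta}$ and the variance $\nabla^2\psi(\theta) = e^{\theta}$ coincide, as expected for the Poisson.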
Conjugate Priors: In a Bayesian setting, we will require a prior distribution over the natural parameter $\theta$. A convenient property of the exponential family is that a conjugate prior distribution of $\theta$ exists; in particular, given any specific distribution in the exponential family, the conjugate prior can be parametrized as [11]:
$$p(\theta \mid \tau, \eta) = \exp\big( \langle \theta, \tau \rangle - \eta\,\psi(\theta) - m(\tau, \eta) \big).$$
Here, the $\psi(\cdot)$ function is the same as that of the likelihood function. Given a data point $x_i$, the posterior distribution of $\theta$ has the same form as the prior, with $\tau \to \tau + x_i$ and $\eta \to \eta + 1$.
Relationship to Bregman Divergences: Let $\phi : S \to \mathbb{R}$ be a differentiable, strictly convex function defined on a convex set $S \subseteq \mathbb{R}^d$. The Bregman divergence for any pair of points $x, y \in S$ is defined as $D_\phi(x, y) = \phi(x) - \phi(y) - \langle x - y, \nabla\phi(y) \rangle$, and can be viewed as a generalized distortion measure. An important result connecting Bregman divergences and exponential families was discussed in [7] (see also [10, 11]), where a bijection between the two was established. A key consequence of this result is that we can equivalently parameterize both $p(x \mid \theta)$ and $p(\theta \mid \tau, \eta)$ in terms of the expectation $\mu$:
$$p(x \mid \theta) = p(x \mid \mu) = \exp(-D_\phi(x, \mu)) f_\phi(x),$$
$$p(\theta \mid \tau, \eta) = p(\mu \mid \tau, \eta) = \exp\Big( -\eta\, D_\phi\Big(\frac{\tau}{\eta}, \mu\Big) \Big) g_\phi(\tau, \eta),$$
where $\phi(\cdot)$ is the Legendre-conjugate function of $\psi(\cdot)$ (denoted as $\phi = \psi^*$), $f_\phi(x) = \exp(\phi(x) - h(x))$, and $\mu$ is the expectation parameter which satisfies $\mu = \nabla\psi(\theta)$ (and also $\theta = \nabla\phi(\mu)$). The Bregman divergence representation provides a natural way to parametrize the exponential family distributions with the expectation parameter and, as we will see, we will find it convenient to work with this form.
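To make the bijection tangible, here is a small Python illustration of ours showing the Bregman divergences generated by two standard choices of $\phi$: the squared Euclidean distance (Gaussian case) and the generalized I-divergence (Poisson case).

import numpy as np

def bregman(phi, grad_phi, x, y):
    # D_phi(x, y) = phi(x) - phi(y) - <x - y, grad phi(y)>
    return phi(x) - phi(y) - np.dot(x - y, grad_phi(y))

# Gaussian: phi(x) = ||x||^2 / 2  gives  D_phi(x, y) = ||x - y||^2 / 2
sq, grad_sq = lambda x: 0.5 * np.dot(x, x), lambda x: x

# Poisson: phi(x) = sum x log x - x  gives  D_phi(x, y) = sum x log(x/y) - x + y
ent = lambda x: np.sum(x * np.log(x) - x)
grad_ent = np.log

x, y = np.array([1.0, 2.0]), np.array([2.0, 1.0])
print(bregman(sq, grad_sq, x, y))    # 1.0, i.e. ||x - y||^2 / 2
print(bregman(ent, grad_ent, x, y))  # generalized I-divergence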
2.2 Dirichlet Process Mixture Models
The Dirichlet Process (DP) mixture model is a Bayesian nonparametric mixture model [12]; unlike most parametric mixture models (Bayesian or otherwise), the number of clusters in a DP mixture is not fixed upfront. Using the exponential family parameterized by the expectation $\mu_c$, the likelihood for a data point can be expressed as the following infinite mixture:
$$p(x) = \sum_{c=1}^\infty \pi_c\, p(x \mid \mu_c) = \sum_{c=1}^\infty \pi_c \exp(-D_\phi(x, \mu_c)) f_\phi(x).$$
Even though there are conceptually an infinite number of clusters, the nonparametric prior over the
mixing weights causes the weights $\pi_c$ to decay exponentially. Moreover, a simple collapsed Gibbs
sampler can be employed for performing inference in this model [13]; this Gibbs sampler will form
the basis of our asymptotic analysis. Given a data set {x1 , ..., xn }, the state of the Markov chain
is the set of cluster indicators {z1 , ..., zn } as well as the cluster means of the currently-occupied
clusters (the mixing weights have been integrated out). The Gibbs updates for zi , (i = 1, . . . , n),
are given by the following conditional probabilities:
$$P(z_i = c \mid z_{-i}, x_i, \mu) = \frac{n_{-i,c}}{Z(n - 1 + \alpha)}\, p(x_i \mid \mu_c),$$
$$P(z_i = c_{\mathrm{new}} \mid z_{-i}, x_i, \mu) = \frac{\alpha}{Z(n - 1 + \alpha)} \int p(x_i \mid \mu)\, dG_0,$$
where $Z$ is the normalizing constant, $n_{-i,c}$ is the number of data points (excluding $x_i$) that are currently assigned to cluster $c$, $G_0$ is a prior over $\mu$, and $\alpha$ is the concentration parameter that
determines how likely we are to start a new cluster. If we choose to start a new cluster during the
Gibbs update, we sample its mean from the posterior distribution obtained from the prior distribution
G0 and the single observation xi . After performing Gibbs moves on the cluster indicators, we update
the cluster means $\mu_c$ by sampling from the posterior of $\mu_c$ given the data points assigned to cluster $c$.
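For concreteness, one sweep of this collapsed Gibbs assignment step can be sketched in Python as below (our own illustration; the new-cluster marginal and the bookkeeping for sampling a fresh mean depend on the chosen family and are left abstract).

import numpy as np

def gibbs_assignment_sweep(X, z, mus, alpha, D_phi, new_cluster_marginal, rng):
    # One pass over the cluster indicators z_1, ..., z_n
    for i in rng.permutation(len(X)):
        counts = np.bincount(np.delete(z, i), minlength=len(mus))
        weights = [counts[c] * np.exp(-D_phi(X[i], mus[c])) for c in range(len(mus))]
        weights.append(alpha * new_cluster_marginal(X[i]))  # integral of p(x_i | mu) dG0
        weights = np.array(weights) / np.sum(weights)
        z[i] = rng.choice(len(weights), p=weights)
        # if z[i] == len(mus), a new cluster is opened and its mean must be
        # sampled from the posterior given x_i (omitted in this sketch)
    return z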
3 Hard Clustering for Exponential Family DP Mixtures
Our goal is to analyze what happens as we perform small-variance asymptotics on the exponential family DP mixture when running the collapsed Gibbs sampler described earlier, and we begin by considering how to scale the covariance in an exponential family distribution. Given an exponential family distribution $p(x \mid \theta)$ with natural parameter $\theta$ and log-partition function $\psi(\theta)$, consider a scaled exponential family distribution whose natural parameter is $\tilde{\theta} = \beta\theta$ and whose log-partition function is $\tilde{\psi}(\tilde{\theta}) = \beta\,\psi(\tilde{\theta}/\beta)$, where $\beta > 0$. The following result characterizes the relationship between the mean and covariance of the original and scaled exponential family distributions.
Lemma 3.1. Denote $\mu(\theta)$ as the mean, and $\mathrm{cov}(\theta)$ as the covariance, of $p(x \mid \theta)$ with log-partition $\psi(\theta)$. Given a scaled exponential family with $\tilde{\theta} = \beta\theta$ and $\tilde{\psi}(\tilde{\theta}) = \beta\,\psi(\tilde{\theta}/\beta)$, the mean $\tilde{\mu}(\tilde{\theta})$ of the scaled distribution is $\mu(\theta)$ and the covariance, $\widetilde{\mathrm{cov}}(\tilde{\theta})$, is $\mathrm{cov}(\theta)/\beta$.
This lemma follows directly from $\tilde{\mu}(\tilde{\theta}) = \nabla_{\tilde{\theta}}\tilde{\psi}(\tilde{\theta}) = \beta\,\nabla_{\tilde{\theta}}\psi(\tilde{\theta}/\beta) = \nabla\psi(\theta) = \mu(\theta)$, and $\widetilde{\mathrm{cov}}(\tilde{\theta}) = \nabla^2_{\tilde{\theta}}\tilde{\psi}(\tilde{\theta}) = \frac{1}{\beta}\nabla^2\psi(\theta) = \mathrm{cov}(\theta)/\beta$. It is perhaps intuitively simpler to observe what happens to the distribution using the
Bregman divergence representation. Recall that the generating function $\phi$ for the Bregman divergence is given by the Legendre-conjugate of $\psi$. Using standard properties of convex conjugates, we see that the conjugate of $\tilde{\psi}$ is simply $\tilde{\phi} = \beta\phi$. The Bregman divergence representation for the scaled distribution is given by
$$p(x \mid \tilde{\theta}) = p(x \mid \tilde{\mu}) = \exp(-D_{\tilde{\phi}}(x, \mu)) f_{\tilde{\phi}}(x) = \exp(-\beta D_\phi(x, \mu)) f_{\tilde{\phi}}(x),$$
where the last equality follows from Lemma 3.1 and the fact that, for a Bregman divergence, $D_{\beta\phi}(\cdot, \cdot) = \beta D_\phi(\cdot, \cdot)$. Thus, as $\beta$ increases under the above scaling, the mean is fixed while the distribution becomes increasingly concentrated around the mean.
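For intuition (our own example), take the spherical Gaussian, for which $\phi(\mu) = \frac{1}{2}\|\mu\|^2$ and $D_\phi(x, \mu) = \frac{1}{2}\|x - \mu\|^2$. The scaled family is then
$$p(x \mid \tilde{\theta}) \propto \exp\Big( -\frac{\beta}{2}\|x - \mu\|^2 \Big),$$
that is, a Gaussian $\mathcal{N}(\mu, \beta^{-1} I)$, so letting $\beta \to \infty$ is exactly the zero-variance limit that recovers k-means from EM.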
Next we consider the prior distribution under the scaled exponential family. When scaling by $\beta$, we also need to scale the hyperparameters $\tau$ and $\eta$, namely $\tau \to \tau/\beta$ and $\eta \to \eta/\beta$. This gives the following prior written using the Bregman divergence, where we are now explicitly conditioning on $\beta$:
$$p(\tilde{\theta} \mid \tau, \eta, \beta) = \exp\Big( -\frac{\eta}{\beta} D_{\tilde{\phi}}\Big( \frac{\tau/\beta}{\eta/\beta}, \mu \Big) \Big) g_{\tilde{\phi}}\Big( \frac{\tau}{\beta}, \frac{\eta}{\beta} \Big) = \exp\Big( -\eta\, D_\phi\Big( \frac{\tau}{\eta}, \mu \Big) \Big) g_{\tilde{\phi}}\Big( \frac{\tau}{\beta}, \frac{\eta}{\beta} \Big).$$
Finally, we compute the marginal likelihood for $x$ by integrating out $\tilde{\theta}$, as it will be necessary for the Gibbs sampler. Standard algebraic manipulations yield the following:
$$p(x \mid \tau, \eta, \beta) = \int p(x \mid \tilde{\theta})\, p(\tilde{\theta} \mid \tau, \eta, \beta)\, d\tilde{\theta}$$
$$= f_{\tilde{\phi}}(x)\, g_{\tilde{\phi}}\Big( \frac{\tau}{\beta}, \frac{\eta}{\beta} \Big) A^{(\beta,\tau,\eta)}(x) \int \exp\Big( -(\beta + \eta) D_\phi\Big( \frac{\beta x + \tau}{\beta + \eta}, \mu(\tilde{\theta}) \Big) \Big) d\tilde{\theta}$$
$$= f_{\tilde{\phi}}(x)\, g_{\tilde{\phi}}\Big( \frac{\tau}{\beta}, \frac{\eta}{\beta} \Big) A^{(\beta,\tau,\eta)}(x)\, \beta^d \int \exp\Big( -(\beta + \eta) D_\phi\Big( \frac{\beta x + \tau}{\beta + \eta}, \mu \Big) \Big) d\mu. \qquad (1)$$
Here, $A^{(\beta,\tau,\eta)}(x) = \exp\big( -(\beta\phi(x) + \eta\,\phi(\tau/\eta) - (\beta + \eta)\,\phi(\frac{\beta x + \tau}{\beta + \eta})) \big)$, which arises when combining the Bregman divergences from the likelihood and the prior.
We can write the integral from the last line above (denoted I below) via Laplace?s method. Since
?x+? ?
? ??
D? ( ?x+?
?+? , ?) has a local minimum (which is global in this case) at ? = ? = ( ?+? ) , we have:
d/2 2
? ?1/2
? D? ( ?x+?
2?
1
?x + ?
?+? , ?)
?
,?
+O
I = exp ? (? + ?)D?
T
?+?
?+?
????
?
?1/2
d/2 2
?x+?
?
? D? ( ?+? , ?)
2?
1
=
+O
(2)
?+?
???? T
?
2
?x+?
?
? D? ( ?+? ,?)
? is the covariance matrix of the likelihood function instantiated at ??
where
= cov(?)
???? T
?
and approaches cov(x ) when ? goes to ?. Note that the exponential term equals one since the
divergence inside is 0.
3.1 Asymptotic Behavior of the Gibbs Sampler
We now have the tools to consider the Gibbs sampler for the exponential family DP mixture as we let $\beta \to \infty$. As we will see, we will obtain a general k-means-like hard clustering algorithm which utilizes the appropriate Bregman divergence in place of the squared Euclidean distance, and also can vary the number of clusters. Recall the conditional probabilities for performing Gibbs moves on the cluster indicators $z_i$, where we now are considering the scaled distributions:
$$P(z_i = c \mid z_{-i}, x_i, \mu, \beta) = \frac{n_{-i,c}}{Z} \exp(-\beta D_\phi(x_i, \mu_c)) f_{\tilde{\phi}}(x_i),$$
$$P(z_i = c_{\mathrm{new}} \mid z_{-i}, x_i, \mu, \beta) = \frac{\alpha}{Z}\, p(x_i \mid \tau, \eta, \beta),$$
where $Z$ is a normalization factor, and the marginal probability $p(x_i \mid \tau, \eta, \beta)$ is given by the derivations in (1) and (2). Now, we consider the asymptotic behavior of these probabilities as $\beta \to \infty$. We
note that
$$\lim_{\beta \to \infty} \frac{\beta x_i + \tau}{\beta + \eta} = x_i, \quad \text{and} \quad \lim_{\beta \to \infty} A^{(\beta,\tau,\eta)}(x_i) = \exp(-\eta(\phi(\tau/\eta) - \phi(x_i))),$$
and that the Laplace approximation error term goes to zero as $\beta \to \infty$. Further, we define $\alpha$ as a function of $\beta$, $\tau$, and $\eta$ (but independent of the data):
$$\alpha = g_{\tilde{\phi}}\Big( \frac{\tau}{\beta}, \frac{\eta}{\beta} \Big)^{-1} \Big( \frac{2\pi}{\beta + \eta} \Big)^{-d/2} \beta^{-d} \exp(-\beta\lambda),$$
for some $\lambda$. After canceling out the $f_{\tilde{\phi}}(x_i)$ terms from all probabilities, we can then write the Gibbs probabilities as
$$P(z_i = c \mid z_{-i}, x_i, \mu, \beta) = \frac{n_{-i,c} \exp(-\beta D_\phi(x_i, \mu_c))}{C_{x_i} \exp(-\beta\lambda) + \sum_{j=1}^k n_{-i,j} \exp(-\beta D_\phi(x_i, \mu_j))},$$
$$P(z_i = c_{\mathrm{new}} \mid z_{-i}, x_i, \mu, \beta) = \frac{C_{x_i} \exp(-\beta\lambda)}{C_{x_i} \exp(-\beta\lambda) + \sum_{j=1}^k n_{-i,j} \exp(-\beta D_\phi(x_i, \mu_j))},$$
where $C_{x_i}$ approaches a positive, finite constant for a given $x_i$ as $\beta \to \infty$. Now, all of the above probabilities will become binary as $\beta \to \infty$. More specifically, all the $k + 1$ values will be increasingly dominated by the smallest value of $\{D_\phi(x_i, \mu_1), \ldots, D_\phi(x_i, \mu_k), \lambda\}$. As $\beta \to \infty$, only the smallest of these values will receive a non-zero probability. That is, the data point $x_i$ will be assigned to the nearest cluster with a divergence at most $\lambda$. If the closest mean has a divergence greater than $\lambda$, we start a new cluster containing only $x_i$.
Next, we show that sampling $\mu_c$ from the posterior distribution is achieved by simply computing the empirical mean of a cluster in the limit. During Gibbs sampling, once we have performed one complete set of Gibbs moves on the cluster assignments, we need to sample the $\mu_c$ conditioned on all assignments and observations. If we let $n_c$ be the number of points assigned to cluster $c$, then the posterior distribution (parameterized by the expectation parameter) for cluster $c$ is
$$p(\mu_c \mid X, z, \beta, \tau, \eta) \propto p(X_c \mid \mu_c, \beta)\, p(\mu_c \mid \tau, \eta, \beta) \propto \exp\Big( -(\beta n_c + \eta)\, D_\phi\Big( \frac{\beta \sum_{i=1}^{n_c} x_i^c + \tau}{\beta n_c + \eta}, \mu \Big) \Big),$$
where $X$ is all the data, $X_c = \{x_1^c, \ldots, x_{n_c}^c\}$ is the set of points currently assigned to cluster $c$, and $z$ is the set of all current assignments. We can see that the mass of the posterior distribution becomes concentrated around the sample mean $\frac{1}{n_c}\sum_{i=1}^{n_c} x_i^c$ as $\beta \to \infty$. In other words, after we determine the assignments of data points to clusters, we update the means as the sample mean of the data points in each cluster. This is equivalent to the standard k-means cluster mean update step.
3.2 Objective function and algorithm
From the above asymptotic analysis of the Gibbs sampler, we obtain a new algorithm which can be utilized for hard clustering. It is as simple as the popular k-means algorithm, but also provides the ability to adapt the number of clusters depending on the data as well as to incorporate different distortion measures. The algorithm description is as follows (a small worked implementation is sketched after the list):
- Initialization: input data $x_1, \ldots, x_n$, $\lambda > 0$, and $\mu_1 = \frac{1}{n}\sum_{i=1}^n x_i$.
- Assignment: for each data point $x_i$, compute the Bregman divergence $D_\phi(x_i, \mu_c)$ to all existing clusters. If $\min_c D_\phi(x_i, \mu_c) \le \lambda$, then $z_{i,c'} = 1$ where $c' = \operatorname{argmin}_c D_\phi(x_i, \mu_c)$; otherwise, start a new cluster and set $z_{i,c_{\mathrm{new}}} = 1$.
- Mean Update: compute the cluster mean for each cluster, $\mu_j = \frac{1}{|l_j|}\sum_{x \in l_j} x$, where $l_j$ is the set of points in the $j$-th cluster.
We iterate between the assignment and mean update steps until local convergence. Note that the initialization used here (placing all data points into a single cluster) is not necessary, but is one natural way to initialize the algorithm. Also note that the algorithm depends heavily on the choice of $\lambda$; heuristics for selecting $\lambda$ were briefly discussed for the Gaussian case in [3], and we will follow this approach (generalized in the obvious way to Bregman divergences) for our experiments.
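A minimal, self-contained Python version of the procedure is given below (our own sketch; the handling of emptied clusters and the convergence test are implementation choices not spelled out in the text). With D_phi(x, mu) = 0.5 * ||x - mu||^2 it reduces to the DP-means algorithm of [3].

import numpy as np

def bregman_hard_clustering(X, lam, D_phi, max_iter=100):
    mus = [X.mean(axis=0)]                 # initialize with one global cluster
    z = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        changed = False
        # Assignment step
        for i, x in enumerate(X):
            dists = np.array([D_phi(x, mu) for mu in mus])
            c = int(np.argmin(dists))
            if dists[c] > lam:             # farther than lam from every mean
                mus.append(x.copy())       # open a new cluster at x
                c = len(mus) - 1
            if z[i] != c:
                z[i], changed = c, True
        # Mean update step (emptied clusters keep their stale mean in this sketch)
        for c in range(len(mus)):
            members = X[z == c]
            if len(members) > 0:
                mus[c] = members.mean(axis=0)
        if not changed:
            break
    return z, mus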
We can easily show that the underlying objective function for our algorithm is quite similar to that in [3], replacing the squared Euclidean distance with an appropriate Bregman divergence. Recall that the squared Euclidean distance is the Bregman divergence corresponding to the Gaussian distribution. Thus, the objective function in [3] can be seen as a special case of our work. The objective function optimized by our derived algorithm is the following:
$$\min_{\{l_c\}_{c=1}^k} \; \sum_{c=1}^k \sum_{x \in l_c} D_\phi(x, \mu_c) + \lambda k, \qquad (3)$$
where $k$ is the total number of clusters, $\phi$ is the conjugate function of the log-partition function of the chosen exponential family distribution, and $\mu_c$ is the sample mean of cluster $c$. The penalty term $\lambda$ controls the tradeoff between the likelihood and the model complexity, where a large $\lambda$ favors small model complexity (i.e., fewer clusters) while a small $\lambda$ favors more clusters. Given the above objective function, our algorithm can be shown to monotonically decrease the objective function value until convergence to some local minimum. We omit the proof here as it is almost identical to the proof of Theorem 3.1 in [3].
4 Extension to Hierarchies
A key benefit of the Bayesian approach is its natural ability to form hierarchical models. In the context of clustering, a hierarchical mixture allows one to cluster multiple groups of data?each group
is clustered into a set of local clusters, but these local clusters are shared among the groups (i.e.,
sets of local clusters across groups form global clusters, with a shared global mean). For Bayesian
nonparametric mixture models, one way of achieving such hierarchies arises via the hierarchical
Dirichlet Process (HDP) [4], which provides a nonparametric approach to allow sharing of clusters
among a set of DP mixtures.
In this section, we will briefly sketch out the extension of our analysis to the HDP mixture, which
yields a natural extension of our methods to groups of data. Given space considerations, and the fact
that the resulting algorithm turns out to reduce to Algorithm 2 from [3] with the squared Euclidean
distance replaced by an appropriate Bregman divergence, we will omit the full specification of the
algorithm here. However, despite the similarity to the existing Gaussian case, we do view the extension to hierarchies as a promising application of our analysis. In particular, our approach opens
the door to hard hierarchical algorithms over discrete data, such as text, and we briefly discuss an
application of our derived algorithm to topic modeling.
We assume that there are J data sets (groups), which we index by j = 1, ..., J. Data point x_ij refers to data point i from set j. The HDP model can be viewed as clustering each data set into local clusters, where each local cluster is associated with a global mean. Global means may be shared across data sets. When performing the asymptotics, we require variables for the global means (μ_1, ..., μ_g), the associations of data points to local clusters, z_ij, and the associations of local clusters to global means, v_jt, where t indexes the local clusters of a data set. A standard Gibbs sampler considers updates on all of these variables and, in the nonparametric setting, does not fix the number of local or global clusters.
The tools from the previous section may be nearly directly applied to the hierarchical case. As opposed to the flat model, the hard HDP requires two parameters: a value λ_top that is incurred when starting a global (top-level) cluster, and a value λ_bottom that is incurred when starting a local cluster. The resulting hard clustering algorithm first performs local assignment moves on the z_ij, then updates the local cluster assignments, and finally updates all global means.
The resulting objective function, monotonically minimized by our algorithm, is:

    min_{{l_c}_{c=1}^k}  Σ_{c=1}^k  Σ_{x_ij ∈ l_c} D_φ(x_ij, μ_c) + λ_bottom t + λ_top k,        (4)
where k is the total number of global clusters and t is the total number of local clusters. The bottom-level penalty term λ_bottom controls both the number of local and top-level clusters: a larger λ_bottom tends to give fewer local clusters and more top-level clusters. Meanwhile, the top-level penalty term λ_top, as in the one-level case, controls the tradeoff between the likelihood and model complexity.
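As a sketch of how objective (4) can be evaluated for a candidate hard-HDP state (the data layout and argument names below are our own illustration; per-group local assignments plus a local-to-global map represent the state):

    def hdp_objective(groups, local_assign, local_to_global, global_means,
                      div, lam_bottom, lam_top):
        """Evaluate Eq. (4). groups[j] holds the points of data set j;
        local_assign[j][i] is the local cluster of point i in group j;
        local_to_global[j][t] maps local cluster t of group j to a global mean."""
        loss, t_total = 0.0, 0
        for j, X in enumerate(groups):
            t_total += len(set(local_assign[j]))      # local clusters in group j
            for i, x in enumerate(X):
                g = local_to_global[j][local_assign[j][i]]
                loss += div(x, global_means[g])
        return loss + lam_bottom * t_total + lam_top * len(global_means)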
Figure 1: (Left) Example images from the ImageNet data (Persian cat and elephant categories). Each
image is represented via a discrete visual-bag-of-words histogram. Clustering via an asymptotic
multinomial DP mixture considerably outperforms the asymptotic Gaussian DP mixture; see text
for details. (Right) Elapsed time per iteration in seconds of our topic modeling algorithm when
running on the NIPS data, as a function of the number of topics.
5 Experiments
We conclude with a brief set of experiments highlighting applications of our analysis to discrete-data
problems, namely image clustering and topic modeling. For all experiments, we randomly permute
the data points at each iteration, as this tends to improve results (as discussed previously, unlike
standard k-means, the order in which the data points are processed impacts the resulting clusters).
Image Clustering. We first explore an application of our techniques to image clustering, focusing on the ImageNet data [14]. We utilize a subset of this data for quantitative experiments, sampling 100 images from each of 10 categories (Persian cat, African elephant, fire engine, motor scooter, wheelchair, park bench, cello, French horn, television, and goblet), for a total of 1000 images. Each image is processed via a standard visual-bag-of-words pipeline: SIFT is applied densely to image patches, and the resulting SIFT vectors are quantized into 1000 visual words. We use the resulting histograms as our discrete representation for an image, as is standard. Some example images from this data set are shown in Figure 1.
We explore whether the discrete version of our hard clustering algorithm, based on a multinomial DP mixture, outperforms the Gaussian mixture version (i.e., DP-means); this validates our generalization beyond the Gaussian setting. For both the Gaussian and multinomial cases, we utilize a farthest-first approach both for selecting λ and for initializing the clusters (see [3] for a discussion of farthest-first for selecting λ).
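A hedged sketch of the farthest-first heuristic as we read it (the exact rule in [3] may differ; here λ is taken as the last max-min divergence after greedily seeding k representatives):

    import random

    def farthest_first(X, k, div, seed=0):
        rng = random.Random(seed)
        centers = [X[rng.randrange(len(X))]]
        lam = 0.0
        for _ in range(k - 1):
            # divergence from each point to its closest chosen center
            dmin = [min(div(x, c) for c in centers) for x in X]
            idx = max(range(len(X)), key=lambda i: dmin[i])
            lam = dmin[idx]          # "radius" at which a new cluster pays off
            centers.append(X[idx])
        return lam, centers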
We compute the normalized mutual information (NMI) between the true clusters and the results of
the two algorithms on this difficult data set. The Gaussian version performs poorly, achieving an
NMI of .06 on this data, whereas the hard multinomial version achieves a score of .27. While the
multinomial version is far from perfect, it performs significantly better than DP-means. Scalability
to large data sets is clearly feasible, given that the method scales linearly in the number of data
points. Note that comparisons between the Gibbs sampler and the corresponding hard clustering
algorithm for the Gaussian case were considered in [3], where experiments on several data sets
showed comparable clustering accuracy results between the sampler and the hard clustering method.
Furthermore, for a fully Bayesian model that places a prior on the concentration parameter, the
sampler was shown to be considerably slower than the corresponding hard clustering method. Given
the similarity of the sampler for the Gaussian and multinomial case, we expect similar behavior
with the multinomial Gibbs sampler.
Illustration: Scalable Hard Topic Models. We also highlight an application to topic modeling,
by providing some qualitative results over two common document collections. Utilizing our general
algorithm for a hard version of the multinomial HDP is straightforward. In order to apply the hard
hierarchical algorithm to topic modeling, we simply utilize the discrete KL-divergence in the hard
exponential family HDP, since topic modeling for text uses a multinomial distribution for the data
likelihood.
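Concretely, the hard assignment step of the resulting topic model reduces to a KL comparison between a document's word histogram and each topic's mean word distribution (a sketch; the smoothing constant is our choice):

    import numpy as np

    def assign_doc(doc_counts, topic_means, eps=1e-12):
        # Pick the topic whose mean word distribution minimizes the discrete
        # KL divergence to the document's empirical word distribution.
        p = doc_counts / doc_counts.sum()
        best, best_kl = 0, float("inf")
        for t, mu in enumerate(topic_means):
            q = mu / mu.sum()
            kl = float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))
            if kl < best_kl:
                best, best_kl = t, kl
        return best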
To test topic modeling using our asymptotic approach, we performed analyses using the NIPS 1-12¹ and the NYTimes [15] datasets. For the NIPS dataset, we use the whole dataset, which contains 1740 total documents, 13649 words in the vocabulary, and 2,301,375 total words.

¹ http://www.cs.nyu.edu/~roweis/data.html
Topic 1
    NIPS: neurons, memory, patterns, activity, response, neuron, stimulus, firing, cortex, recurrent, pattern, spike, stimuli, delay, responses
    NYTimes: team, game, season, play, games, point, player, coach, win, won, guy, played, playing, record, final
Topic 2
    NIPS: neural, networks, state, weight, states, results, synaptic, threshold, large, time, systems, activation, small, work, weights
    NYTimes: percent, campaign, money, fund, quarter, federal, public, pay, cost, according, income, half, term, program, increase
Topic 3
    NIPS: training, hidden, recognition, layer, performance, probability, parameter, error, speech, class, weights, trained, algorithm, approach, order
    NYTimes: president, power, government, country, peace, trial, public, reform, patriot, economic, past, clear, interview, religious, early
Topic 4
    NIPS: cells, visual, cell, orientation, cortical, connection, receptive, field, center, tuning, low, ocular, present, dominance, fields
    NYTimes: family, father, room, line, shares, recount, told, mother, friend, speech, expression, won, offer, card, real
Topic 5
    NIPS: energy, solution, methods, function, solutions, local, equations, minimum, hopfield, temperature, adaptation, term, optimization, computational, procedure
    NYTimes: company, companies, stock, market, business, billion, firm, computer, analyst, industry, internet, chief, technology, customer, number
Topic 6
    NIPS: noise, classifier, classifiers, note, margin, noisy, regularization, generalization, hypothesis, multiclasses, prior, cases, boosting, fig, pattern
    NYTimes: right, human, decision, need, leadership, foundation, number, question, country, strike, set, called, support, law, train

Table 1: Sample topics inferred from the NIPS and NYTimes datasets by our hard multinomial HDP algorithm.
For the NYTimes dataset, we randomly sampled 2971 documents with 10171 vocabulary words and 853,451 words in total; we also eliminated low-frequency words (those with fewer than ten occurrences). The prevailing metric for measuring the goodness of topic models is perplexity; however, this is based on the predictive probability, which has no counterpart in the hard clustering case. Furthermore, ground truth for topic models is difficult to obtain. This makes quantitative comparisons difficult for topic modeling, so we focus on qualitative results. Some sample topics (with the corresponding top 15 terms) discovered by our approach from both the NIPS and NYTimes datasets are given in Table 1; we can see that the topics appear to be quite reasonable. We also highlight the scalability of our approach: the number of iterations needed for convergence on these data sets ranges from 13 to 25, and each iteration completes in under one minute (see the right side of Figure 1). In contrast, for sampling methods it is notoriously difficult to detect convergence, and generally a large number of iterations is required. Thus, we expect this approach to scale favorably to large data sets.
6 Conclusion
We considered a general small-variance asymptotic analysis for the exponential family DP and
HDP mixture model. Crucially, this analysis allows us to move beyond the Gaussian distribution
in such models, and opens the door to new clustering applications, such as those involving discrete
data. Our analysis utilizes connections between Bregman divergences and exponential families,
and results in a simple and scalable hard clustering algorithm which may be viewed as generalizing
existing non-Bayesian Bregman clustering algorithms [7] as well as the DP-means algorithm [3].
Due to the prevalence of discrete data in modern computer vision and information retrieval, we
hope our algorithms will find use for a variety of large-scale data analysis tasks. We plan to
continue to focus on the difficult problem of quantitative evaluations comparing probabilistic and
non-probabilistic methods for clustering, particularly for topic models. We also plan to compare
our algorithms with recent online inference schemes for topic modeling, particularly the online
LDA [16] and online HDP [17] algorithms.
Acknowledgements. This work was supported by NSF award IIS-1217433 and by the ONR under
grant number N00014-11-1-0688.
References
[1] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 21(3):611-622, 1999.
[2] S. Roweis. EM algorithms for PCA and SPCA. In Advances in Neural Information Processing Systems, 1998.
[3] B. Kulis and M. I. Jordan. Revisiting k-means: New algorithms via Bayesian nonparametrics. In Proceedings of the 29th International Conference on Machine Learning, 2012.
[4] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[5] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[6] D. Blei, A. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[7] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, 6:1705-1749, 2005.
[8] K. Kurihara and M. Welling. Bayesian k-means as a "Maximization-Expectation" algorithm. Neural Computation, 21(4):1145-1172, 2008.
[9] O. Barndorff-Nielsen. Information and Exponential Families in Statistical Theory. Wiley Publishers, 1978.
[10] J. Forster and M. K. Warmuth. Relative expected instantaneous loss bounds. In Proceedings of the 13th Conference on Computational Learning Theory, 2000.
[11] A. Agarwal and H. Daume. A geometric view of conjugate priors. Machine Learning, 81(1):99-113, 2010.
[12] N. Hjort, C. Holmes, P. Mueller, and S. Walker. Bayesian Nonparametrics: Principles and Practice. Cambridge University Press, Cambridge, UK, 2010.
[13] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249-265, 2000.
[14] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[15] A. Frank and A. Asuncion. UCI Machine Learning Repository, 2010.
[16] M. D. Hoffman, D. M. Blei, and F. Bach. Online learning for Latent Dirichlet Allocation. In Advances in Neural Information Processing Systems, 2010.
[17] C. Wang, J. Paisley, and D. M. Blei. Online variational inference for the hierarchical Dirichlet process. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, 2011.
Transferring Expectations in Model-based Reinforcement Learning
Trung Thanh Nguyen, Tomi Silander, Tze-Yun Leong
School of Computing
National University of Singapore
Singapore, 117417
{nttrung, silander, leongty}@comp.nus.edu.sg
Abstract
We study how to automatically select and adapt multiple abstractions or representations of the world to support model-based reinforcement learning. We address
the challenges of transfer learning in heterogeneous environments with varying
tasks. We present an efficient, online framework that, through a sequence of tasks,
learns a set of relevant representations to be used in future tasks. Without predefined mapping strategies, we introduce a general approach to support transfer
learning across different state spaces. We demonstrate the potential impact of
our system through improved jumpstart and faster convergence to a near-optimal policy in two benchmark domains.
1 Introduction
In reinforcement learning (RL), an agent autonomously learns how to make optimal sequential decisions by interacting with the world. The agent's learned knowledge, however, is task and environment specific. A small change in the task or the environment may render the agent's accumulated knowledge useless; costly re-learning from scratch is often needed.

Transfer learning addresses this shortcoming by accumulating knowledge in forms that can be reused in new situations. Many existing techniques assume the same state space or state representation in different tasks. While recent efforts have addressed inter-task transfer in different action or state spaces, specific mapping criteria have to be established through policy reuse [7], action correlation [14], state abstraction [22], inter-space relation [16], or other methods. Such mappings are hard to define when the agent operates in complex environments with large state spaces and multiple goal states, with possibly different state feature distributions and world dynamics. To efficiently accomplish varying tasks in heterogeneous environments, the agent has to learn to focus attention on the crucial features of each environment.
We propose a system that tries to transfer old knowledge but, at the same time, evaluates new options to see if they work better. The agent gathers experience during its lifetime and enters a new environment equipped with expectations on how different aspects of the world affect the outcomes of the agent's actions. The main idea is to allow an agent to collect a library of world models or representations, called views, that it can consult to focus its attention in a new task. In this paper, we concentrate on approximating the transition model; the reward model library can be learned in an analogous fashion. Effective utilization of the library of world models allows the agent to capture the transition dynamics of the new environment quickly; this should lead to a jumpstart in learning and faster convergence to a near-optimal policy. A main challenge is learning to select a proper view for a new task in a new environment, without any predefined mapping strategies.

We will next formalize the problem and describe the method of collecting views into a library. We will then present an efficient implementation of the proposed transfer learning technique. After
discussing related work, we will demonstrate the efficacy of our system through a set of experiments
in two different benchmark domains.
2 Method
In RL, a task environment is typically modeled as a Markov decision process (MDP) defined by a tuple (S, A, T, R), where S is a set of states; A is a set of actions; T : S × A × S → [0, 1] is the transition function, such that T(s, a, s') = P(s'|s, a) indicates the probability of transiting to state s' upon taking action a in state s; and R : S × A → R is a reward function indicating the immediate expected reward after action a is taken in state s. The goal is then to find a policy π that specifies an action to perform in each state so that the expected accumulated future reward (possibly giving higher weights to more immediate rewards) for each state is maximized [18]. In model-based RL, the optimal policy is calculated based on estimates of the transition model T and the reward model R, which are obtained by interacting with the environment.
A key idea of this work is that the agent can represent the world dynamics from its sensory state space in different ways. Such different views correspond to the agent's decisions to focus attention on only some features of the state in order to quickly approximate the state transition function.
2.1 Decomposition of transition model
To allow knowledge transfer from one state space to another, we assume that each state s in all the state spaces can be characterized by a d-dimensional feature vector f(s) ∈ R^d. The states themselves may or may not be factored. We use the idea of situation calculus [11] to decompose the transition model T in accordance with the possible action effects. In the RL context, an action stochastically creates an effect that determines how the current state changes to the next one [2, 10, 14]. For example, an attempt to move left in a grid world may cause the agent to move one step left or one step forward, with small probabilities. The relative changes in states, "moved left" and "moved forward", are called effects of the action.
Formally, let us call an MDP with a decomposed transition model a CMDP (situation Calculus MDP). A CMDP is defined by a tuple (S, A, E, τ, δ, f, R), in which the transition model T has been replaced by the terms E, τ, δ, f, where E is an effect set and f is a function from states to their feature vectors. τ : S × A × E → [0, 1] is an action model such that τ(s, a, e) = P(e | f(s), a) indicates the probability of achieving effect e upon performing action a at state s. Notice that the probability of effect e depends on state s only through the features f(s). While the agent needs to learn the effects of the action, it is usually assumed to understand the meaning of the effects, i.e., how the effects turn each state into a next state. This knowledge is captured in a deterministic function δ : S × E → S. Different effects e will change a state s to a different next state s' = δ(s, e). The MDP transition model T can be reconstructed from the CMDP by the equation:

    T(s, a, s'; τ) = P(s' | f(s), a) = τ(s, a, e),        (1)

where e is the effect of action a that takes s to s', if such an e exists; otherwise T(s, a, s'; τ) = 0.
The benefit of this decomposition is that while there may be a large number of states, there is
usually a limited number of definable effects of actions, and those are assumed to depend only
on some features of the states and not on the actual states themselves. We can therefore turn the
learning of the transition model into a supervised online classification problem that can be solved by
any standard online classification method. More specifically, the classification task is to predict the
effect e of an action a in a state s with features f(s).
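In code, Equation (1) amounts to routing a classifier's effect probabilities through δ; a minimal sketch follows (the effect_model and features interfaces are assumptions for illustration, not the paper's released code):

    def transition_prob(s, a, s_next, features, delta, effect_model, effects):
        """Eq. (1): P(s'|s,a) equals the predicted probability of the (unique)
        effect e with delta(s, e) == s_next, and 0 when no effect reaches
        s_next. effect_model(f, a) is assumed to return {effect: probability}."""
        probs = effect_model(features(s), a)
        for e in effects:
            if delta(s, e) == s_next:
                return probs.get(e, 0.0)
        return 0.0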
2.2 A multi-view transfer framework
In our framework, the knowledge gathered and transferred by the agent is collected into a library T of online effect predictors, or views.

A view consists of a structure component f̂ that picks the features which should be focused on, and a quantitative component θ that defines how these features should be combined to approximate the distribution of action effects. Formally, a view is defined as τ = (f̂, θ), such that P(E|S, a; τ) = P(E|f̂(S), a; θ) = τ(S, a, E), in which f̂ is an orthogonal projection of f(s) to some subspace of R^d. Each view τ is specialized in predicting the effects of one action a(τ) ∈ A, and it yields a probability distribution over the effects of the action a in any state. This prediction is based on the features of the state and the parameters θ(τ) of the view, which may be adjusted based on the actual effects observed in the task environment.
We denote the subset of views that specify the effects for action a by T^a ⊆ T. The main challenge is to build and maintain a comprehensive set of views that can be used in new environments likely resembling the old ones, while still allowing adaptation to new tasks with completely new transition dynamics and feature distributions.

At the beginning of every new task, the existing library is copied into a working library, which is also augmented with fresh, uninformed views, one for each action, that are ready to be adapted to the new task. We then select, for each action, a view with a good track record. This view is used to estimate the optimal policy based on the transition model specified in Equation 1, and the policy is used to pick the first action a. The action effect is then used to score all the views in the working library and to adjust their parameters. In each round the selection of views is repeated based on their scores, and the new optimal policy is calculated based on the new selections. At the end of the task, the actual library is updated by possibly recruiting the views that have performed well and retiring those that have not. A more rigorous version of the procedure is described in Algorithm 1.
Algorithm 1 TES: Transferring Expectations using a library of views
Input: T = {τ_1, τ_2, ...}: view library; CMDP_j: a new j-th task; Φ: view goodness evaluator
  Let T_0 be a set of fresh views, one for each action
  T_tmp ← T ∪ T_0                                              /* THE WORKING LIBRARY FOR THE TASK */
  for all a ∈ A do T̂[a] ← argmax_{τ ∈ T^a} Φ(τ, j) end for     /* SELECTING VIEWS */
  for t = 0, 1, 2, ... do
    a_t ← π̂(s_t), where π̂ is obtained by solving the MDP with transition model T̂
    Perform action a_t and observe effect e_t
    for all τ ∈ T_tmp^{a_t} do Score[τ] ← Score[τ] + log τ(s_t, a_t, e_t) end for
    for all τ ∈ T_tmp^{a_t} do update view τ based on (f(s_t), a_t, e_t) end for
    T̂[a_t] ← argmax_{τ ∈ T_tmp^{a_t}} Score[τ]                  /* SELECTING VIEWS */
  end for
  for all a ∈ A do
    τ* ← argmax_{τ ∈ T_tmp^a} Score[τ]
    T^a ← growLibrary(T^a, τ*, Score, j)                        /* UPDATING LIBRARY */
  end for
  if |T| > M then T ← T \ {argmin_{τ ∈ T} Φ(τ, j)} end if       /* PRUNING LIBRARY */
2.2.1 Scoring the views
To assess the quality of a view τ, we measure its predictive performance by a cumulative log-score. This is a proper score [12] that can be effectively calculated online.

Given a sequence D_a = (d_1, d_2, ..., d_N) of observations d_i = (s_i, a, e_i), in which action a has resulted in effect e_i in state s_i, the score for an a-specialized view τ is

    S(τ, D_a) = Σ_{i=1}^{N} log τ(s_i, a, e_i; θ^{:i}(τ)),

where τ(s_i, a, e_i; θ^{:i}(τ)) is the probability of effect e_i given by the predictor τ based on the features of state s_i and the parameters θ^{:i}(τ), which may have been adjusted using the previously observed data (d_1, d_2, ..., d_{i-1}).
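The score is prequential: each observation is first predicted and only then used for updating. A sketch (predict and update are assumed view interfaces; the floor on probabilities is our numerical guard):

    import math

    def cumulative_log_score(view, observations):
        score = 0.0
        for f_s, a, e in observations:       # (state features, action, effect)
            p = view.predict(f_s, a)         # dict: effect -> probability
            score += math.log(max(p.get(e, 0.0), 1e-300))
            view.update(f_s, a, e)           # then adjust the parameters online
        return score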
2.2.2 Growing the library
After completing a task, the highest-scoring new views for each action are considered for recruitment into the actual library. Winning "newbies" (fresh views) are automatically accepted: in this case, the data has most probably come from a distribution that is far from all current models; otherwise one of the current models would have had an advantage to adapt and win.

The winners τ* that are adjusted versions of old views τ̄ are accepted as new members if they score significantly higher than their original versions, based on the logarithm of the prequential likelihood ratio [5], Λ(τ*, τ̄) = S(τ*, D_a) - S(τ̄, D_a). Otherwise, the original versions τ̄ get their parameters updated to the new values. This procedure is just a heuristic, and other inclusion and updating criteria may well be considered. The policy is detailed in Algorithm 2.
Algorithm 2 Grow sub-library T^a
Input: T^a; τ*; Score; j: task index; c: constant; H_{τ*} = {}: empty history record
Output: updated library subset T^a and winning history H_{τ*}
  case τ* ∈ T_0^a do T^a ← T^a ∪ {τ*}                           /* ADD NEWBIE TO LIBRARY */
  otherwise do
    Let τ̄ ∈ T be the original, not adapted version of τ*
    case Score[τ*] - Score[τ̄] > c do T^a ← T^a ∪ {τ*}
    otherwise do T^a ← T^a ∪ {τ*} \ {τ̄}
                 H_{τ*} ← H_{τ̄}                                 /* INHERIT HISTORY */
  H_{τ*} ← H_{τ*} ∪ {j}
2.2.3 Pruning the library
To keep the library relatively compact, a plausible policy is to remove views that have not performed well for a long time, possibly because there are better predictors or they have become obsolete in the new tasks or environments. To implement such a retirement scheme, each view τ maintains a list H_τ of task indices indicating the tasks for which the view has been the best-scoring predictor for its specialty action a(τ). We can then calculate a recency-weighted track record for each view. In practice, we have adopted the procedure of Zhu et al. [27], which defines the recency-weighted score at time T as

    Φ(τ, T) = Σ_{t ∈ H_τ} e^{-γ(T - t)},

where γ controls the speed of decay of past success. Other decay functions could naturally also be used. Pruning can then be done by introducing a threshold on the recency-weighted score or by always maintaining only the top M views.
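The recency-weighted score and the resulting top-M pruning can be written directly (a sketch; views are assumed hashable so they can key a history table):

    import math

    def recency_weighted_score(history, T, gamma):
        # Phi(tau, T) = sum over winning task indices t of exp(-gamma * (T - t))
        return sum(math.exp(-gamma * (T - t)) for t in history)

    def prune(library, histories, T, gamma, M):
        ranked = sorted(library, reverse=True,
                        key=lambda tau: recency_weighted_score(histories[tau], T, gamma))
        return ranked[:M]                    # retire everything below the top M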
3 A view learning algorithm
In TES, a view can be implemented by any probabilistic classification model that can be quickly learned online. A popular choice for representing the transition model in factored domains is the dynamic Bayesian network (DBN), but learning DBNs is computationally very expensive. Recent studies [24, 25] have shown encouraging results in learning the structure of logistic regression models that can serve as local structures of DBNs. While these models cannot capture all conditional distributions, their simplicity allows fast online learning in very high-dimensional spaces.

We introduce an online sparse multinomial logistic regression algorithm to incrementally learn a view. The proposed algorithm is similar to the so-called group lasso [26], which has recently been suggested for feature selection among a very large set of features [25].¹

¹ We report here the details of the method that should allow its replication. A more comprehensive description is available as a separate report in the supplementary material.

Assuming K classes of vectors x ∈ R^d, each class k is represented by a d-dimensional prototype vector W_k. Classification of an input vector x in logistic regression is based on how "similar" it is to the prototype vectors. Similarity is measured by the inner product ⟨W_k, x⟩ = Σ_{i=1}^{d} W_{ki} x_i. The log probability of a class y is defined by log P(y = k | x; W_k) ∝ ⟨W_k, x⟩. The classifier can then be parametrized by stacking the W_k vectors as rows into a matrix W = (W_1, ..., W_K)^T.

An online learning system usually optimizes its probabilistic classification performance by minimizing a total loss function through updating its parameters over time. A typical item-wise loss function of a multinomial logistic regression classifier is l(W) = -log P(y|x; W), where (y, x) denotes the data item observed at time t. To achieve a parsimonious model in a feature-rich domain, we express our a priori belief that most features are superfluous by introducing a regularization term
    Ψ(W) = λ Σ_{i=1}^{d} √K ‖W_{·i}‖_2,

where ‖W_{·i}‖_2 denotes the 2-norm of the i-th column of W, and λ is a positive constant. This regularization is similar to that of the group lasso [26]. It communicates the idea that a whole column of W is likely to be all zeros (especially for large λ); a column of all zeros suggests that the corresponding feature is irrelevant for classification.
The objective function can now be written as Σ_{t=1}^{T} [ l(W^t, d^t) + Ψ(W^t) ], where W^t is the coefficient matrix learned from the t-1 previously observed data items. Inspired by the efficient dual averaging method [24] for solving lasso and group-lasso [25] logistic regression, we extend the results to the multinomial case. Specifically, the loss-minimizing sequence of parameter matrices W^t can be obtained by the following online update scheme.

Let G^t_{ki} be the derivative of the function l_t(W) with respect to W_{ki}, and let Ḡ^t be the matrix of average partial derivatives,

    Ḡ^t = (1/t) Σ_{j=1}^{t} G^j,    where G^j_{ki} = -x^j_i ( I(y^j = k) - P(k | x^j; W^{j-1}) ).
Given the K × d average-gradient matrix Ḡ^t and a regularization parameter λ > 0, the i-th column of the new parameter matrix W^{t+1} is obtained as follows:

    W^{t+1}_{·i} = 0,                                              if ‖Ḡ^t_{·i}‖_2 ≤ λ√K,
    W^{t+1}_{·i} = (√t / α) ( λ√K / ‖Ḡ^t_{·i}‖_2 - 1 ) Ḡ^t_{·i},   otherwise,        (2)

where α > 0 is a constant. The update rule (2) dictates that when the length of an average-gradient column is small enough, the corresponding parameter column is truncated to zero. This introduces feature selection into the model.
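The section condenses into two small routines: the per-item gradient of the multinomial logistic loss and the closed-form dual-averaging update of Eq. (2). The sketch below is our reading of the scheme; note that the factor (λ√K/‖Ḡ‖ - 1) is negative whenever a column survives truncation, so the update moves against the average gradient:

    import numpy as np

    def item_gradient(x, y, W):
        # G_{ki} = -x_i (I(y = k) - P(k | x; W)) for the loss -log P(y|x;W).
        logits = W @ x
        p = np.exp(logits - logits.max())
        p /= p.sum()
        ind = np.zeros(W.shape[0])
        ind[y] = 1.0
        return -np.outer(ind - p, x)

    def dual_averaging_update(G_bar, t, lam, alpha):
        # Eq. (2), column by column: truncate short average-gradient columns
        # to zero, otherwise rescale them; G_bar is the K x d running average
        # of item gradients after t observations.
        K, d = G_bar.shape
        W = np.zeros_like(G_bar)
        thresh = lam * np.sqrt(K)
        for i in range(d):
            norm = np.linalg.norm(G_bar[:, i])
            if norm > thresh:
                W[:, i] = (np.sqrt(t) / alpha) * (thresh / norm - 1.0) * G_bar[:, i]
        return W

Maintaining Ḡ as a running average keeps each update O(Kd), which is what makes the view learner cheap enough to run for every candidate view at every step.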
4 Related work
The survey by Taylor and Stone [20] offers a comprehensive exposition of recent methods for transferring various forms of knowledge in RL. Not much research, however, has focused on transferring transition models. For example, while superficially similar to our framework, the case-based reasoning approaches [4, 13] focus on collecting good decisions instead of building models of world dynamics. Taylor et al. [19] propose TIMBREL to transfer observations from a source to a target task via manually tailored inter-task mappings. Fernandez et al. [7] transfer a library of policies learned in previous tasks to bias exploration in new tasks. The method assumes a common inter-task state space; otherwise a state mapping strategy is needed.

Hester and Stone [8] describe a method to learn a decision tree for predicting relative state changes, which are similar to our action effects. They learn the decision trees online by repeatedly applying batch learning; such a sequence of classifiers forms an effect predictor that could be used as a member of our view library. This work, however, does not directly focus on transfer learning.
Multiple models have previously been used to guide behavior in non-stationary environments [6, 15]. Unlike our work, these studies usually assume a common concrete state space. In representation selection, Konidaris and Barto [9] focus on selecting the best abstraction to assist the agent's skill learning, and Van Seijen et al. [21] study using multiple representations together to solve an RL problem. None of these studies, however, solves the problem of transferring knowledge in heterogeneous environments.
Atkeson and Santamaria introduce a locally weighted transfer learning technique called LWT to
adapt previously learned transition models into a new situation [1]. This study is among the very
few that actually consider transferring the transition model to a new task [20]. While their work is
conducted in continuous state space using a fixed state similarity measure, it can be adapted to a
discrete case. Doing so corresponds to adopting a fixed single view. We will compare our work with
this approach in our experiments. This approach could also be extended to be compatible with our
work by learning a library of state similarity measures and developing a method to choose among
those similarities for each task.
Wilson et al. [23] also address the problem of transfer in heterogeneous environments. They formalize the problem as learning a generative Dirichlet process for MDPs and suggest an approximate solution using Gibbs sampling. Our method can be seen as a structure-learning-enhanced alternative implementation of this generative model. Our online method is computationally more efficient, but the MCMC estimation should eventually yield more accurate estimates. Both models can also be adjusted to deal with non-stationary task sources. The work by Wilson et al. demonstrates the method for reward models, and it is unclear how to extend the approach to transferring transition models. We will also compare our work with this hierarchical Bayes approach in our experiments.
5 Experiments
We examine the performance of our expectation-transfer algorithm TES, which transfers views to speed up the learning process across different environments, in two benchmark domains. We show that TES can efficiently: a) learn the appropriate views online, b) select views using the proposed scoring metric, c) achieve a good jumpstart, and d) perform well in the long run.

To better compare with related work, we evaluate the performance of TES in transferring both transition models and reward models in RL. TES can be adapted to transfer reward models as follows. Assuming that the rewards follow a Gaussian distribution, a view of the expected reward model can be learned in the same spirit as in Section 3, using an online sparse linear regression model instead of the multinomial logistic regression: replacing the matrix W by a vector w and using the squared loss, a coefficient update analogous to Equation 2 can be derived [24]. When studying reward models, the transition models are assumed to be known.
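Under the same dual-averaging scheme, the reward-view update collapses to a scalar (lasso-style) truncation; the sketch below is our reading of that adaptation, following [24]:

    import numpy as np

    def reward_view_update(g_bar, t, lam, alpha):
        # g_bar: running average of squared-loss gradients (w.x - y) * x.
        # Coordinates with |g_bar_i| <= lam are truncated to zero; the rest
        # are rescaled, mirroring Eq. (2) with the group norm reduced to |.|.
        w = np.zeros_like(g_bar)
        keep = np.abs(g_bar) > lam
        w[keep] = (np.sqrt(t) / alpha) * (lam * np.sign(g_bar[keep]) - g_bar[keep])
        return w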
5.1 Learning views for effective transfer
In the first experiment, we compare TES with the locally weighted LWT approach of Atkeson et al. [1] and the non-parametric hierarchical Bayesian approach HB of Wilson et al. [23] for transferring reward models. We adopt the same domain as described in Wilson et al.'s HB paper, but augment each state with 200 random binary features. The objective is to find the optimal route to a known goal state in a color maze. Assuming a deterministic transition model, the highest cumulative reward, determined by the colors around each cell/state, can be achieved on the optimal route.

Experiment set-up: Five different reward models are generated by normal Gaussian distributions, each depending on a different set of features. The start state is random. We run experiments on 15 tasks, each repeated 20 times, and conduct a leave-one-task-out test. The maximum size M of the view library, initially empty, is set to 20; the threshold c for growing the library is set to log 300. The parameters for view learning are λ = 0.05 and α = 2.5.
Table 1: Transfer of reward models: cumulative reward in the first episodes, and time to solve 15 tasks (in minutes), where each task is run for 200 episodes. Map sizes vary from 20 × 20 to 30 × 30.

    Task      1        2       3       4        5        6       7       8        9      10       11      12      13       14      15     Time
    HB     -108.01  -85.26  -67.46  -90.17  -130.11  -95.42  -46.23  -77.10  -83.91  -51.01  -131.44  -97.05  -90.11  -48.91  -92.31   77.2
    LWT     -79.41 -114.28  -83.31  -46.70  -245.11 -156.23  -47.05  -49.52 -105.24  -88.19  -174.15  -85.10  -55.45 -101.24  -86.01   28.6
    TES     -45.01  -78.23  -62.15  -54.46  -119.76 -115.77  -37.15  -58.09 -167.13  -59.11  -102.46  -45.99  -86.12  -67.23  -81.39   31.5
As seen in Table 1, TES on average wins over HB in 11 and over LWT in 12 out of 15 tasks. In the 15 × 20 = 300 runs, TES wins over HB 239 times and over LWT 279 times, both yielding binomial-test p-values less than 0.05. This demonstrates that TES can successfully learn the views and utilize them in novel tasks. Moreover, TES runs much faster than HB, and only slightly slower than LWT. Since HB does not learn the relevant features for model representation, it may overfit, and the knowledge learned cannot be easily generalized. Similarly, the LWT strategy of trying to learn one common model for transfer across various tasks often does not work well.
5.2 Multi-view transfer in complex environments
In the second experiment, we evaluate TES on a more challenging problem: transferring transition models. We consider a grid-based robot navigation domain in which each grid cell has a surface of sand, soil, water, brick, or fire. In addition, there may be walls between cells. The surfaces and walls determine the stochastic dynamics of the world. However, the agent also observes numerous other features in the environment and has to learn to focus on the relevant ones to quickly achieve its goal. The goal is to reach an exit door in the world while consuming as little energy as possible.
Experiment set-up: The agent can perform four actions (move up, down, left, right), which will lead it to one of the four states around it, or leave it in its current state if it bumps into a wall. The agent spends 0.01 units of energy to perform an action. It loses 1 unit if it falls into fire, but gains 1 unit when reaching an exit door. A task ends when the agent reaches an exit door or a fire.

We design fifteen tasks with grid sizes ranging from 20 × 20 to 30 × 30. Each task has a different state space and different terminal states. Each state (cell) also has 200 irrelevant random binary features, besides its surface material and the walls around it. The tasks may have different dynamics as well as different distributions of the surface materials. In our experiments, the environment transition dynamics is generated using three different sets of multinomial logistic regression models, so that every combination of cell surfaces and walls around a cell leads to different transition dynamics at the cell. The probability of going through a wall is rounded to zero, and the freed probability mass is evenly distributed over the other effects. The agent's starting position is randomly picked in each episode.
We represent five effects of the actions: moved up, left, down, right, and did not move. The maximum size M of the view library, initially empty, is set to 20; the threshold c = log 300. In a new environment, the TES agent relies mainly on its transferred knowledge; however, we allow some ε-greedy exploration with ε = 0.05. The parameters for the view learning algorithm are λ = 0.05 and α = 1.5.

We conduct a leave-one-out cross-validation experiment with the fifteen tasks. In each scenario, the agent is first allowed to experience fourteen tasks, over 100 episodes each, and it is then tested on the remaining task. No recency weighting is used to calculate the goodness of the views in the library. We next discuss experimental results averaged over 20 runs, showing 95% confidence intervals (when practical) for some representative tasks.
Transferring expectations between homogeneous tasks. To ensure that TES is capable of basic model transfer, we first evaluate it on a simple setting to verify that the learning algorithm of Section 3 works. We train and test TES on two environments which have the same dynamics and 200 irrelevant binary features that challenge the agent's ability to learn a compact model for transfer. Figure 1a shows how much the other methods lose to TES in terms of accumulated reward on the test task. loreRL is an implementation of TES equipped with the view learning algorithm but without knowledge transfer. fRmax is the factored Rmax [3] in which the network structures of the transition models are provided by an oracle [17]; its parameter m is set to 10 in all the experiments. fEpsG is a heuristic in which the optimistic Rmax exploration of fRmax is replaced by an ε-greedy strategy (ε = 0.1). The results show that these oracle methods still have to spend time learning the model parameters, so they gain less accumulated reward than TES. This also suggests that the transferred view of TES is likely not only compact but also accurate. Figure 1a further shows that loreRL and fEpsG are more effective than fRmax in early episodes.
View selection vs. random views. Figure 1b shows how different views lead to different policies and accumulated rewards over the first 50 episodes of a given task. The Rands curves show the accumulated reward difference to TES when the agent follows random combinations of views from the library; for clarity we show only 5 such random combinations. For all of these, the difference turns negative quickly, indicating less reward in early episodes. We conclude that our view selection criterion outperforms random selection.
Figure 1: Accumulated reward difference to TES over the early episodes in (a) homogeneous and (b) heterogeneous environments (curves: loreRL, fEpsG, fRmax, LWT, Rands), and (c) convergence, plotted as accumulated reward over 500 episodes for TES, fRmax, and Rmax.
Table 2: Cumulative reward after the first episode. For example, in Task 1 TES saves (0.616 - 0.113)/0.01 = 50.3 actions compared to LWT.

    Task      1      2      3      4      5      6      7      8      9     10     11     12     13     14     15
    loreRL -0.681 -0.826 -0.814 -1.068 -0.575 -0.810 -0.529 -0.398 -0.653 -0.518 -0.528 -0.244 -0.173 -1.176 -0.692
    LWT     0.113 -0.966 -0.300  0.024 -1.205 -0.345 -1.104 -1.98  -0.057 -0.664 -0.230 -1.228  0.034  0.244 -0.564
    TES     0.616 -0.369  0.230 -0.044 -0.541 -0.784 -0.265  0.255  0.001 -0.298 -1.184 -0.077  0.209  0.389 -0.407
Multiple views vs. a single view and non-transfer. We compare the multi-view learning TES agent with a non-transfer agent, loreRL, and an LWT agent that tries to learn only one good model for transfer. We also compare with the oracle method fEpsG. As seen in Figure 1b, TES outperforms LWT which, due to differences among the tasks, also performs worse than loreRL. When the earlier training tasks are similar to the test task, the LWT agent performs well. However, the TES agent also quickly picks the correct views; thus we never lose much but often gain a lot. We also notice that TES achieves a higher accumulated reward than loreRL and fEpsG, which are bound to make uninformed decisions in the beginning.

Table 2 shows the average cumulative reward after the first episode (the jumpstart effect) for each test task in the leave-one-out cross-validation. We observe that TES usually outperforms both the non-transfer and the LWT approach. In all 15 × 20 = 300 runs, TES wins over LWT 247 times and over loreRL 263 times, yielding p-values smaller than 0.05.

We also notice that, owing to its fast capture of the world dynamics, TES's running time is only slightly longer than LWT's and loreRL's, which do not perform extra work for view switching but need more time and data to learn the dynamics models.
Convergence. To study the asymptotic performance of TES, we compare with the oracle method fRmax, which is known to converge to a (near) optimal policy. Notice that in this feature-rich domain, fRmax without the pre-defined DBN structure behaves much like Rmax, so we also compare with Rmax. For Rmax, the number of visits to any state before it is considered "known" is set to 5, and the exploration probability for known states starts to decrease from 0.1.

Figure 1c shows the accumulated rewards and their statistical dispersion over episodes. Average performance is reflected by the slopes of the curves. As seen, TES achieves a (near) optimal policy very fast and sustains its good performance over the long run; it is only gradually caught up by fRmax and Rmax. This suggests that TES can successfully learn a good library of views in heterogeneous environments and efficiently utilize those views in novel tasks.
6 Conclusions
We have presented a framework for learning and transferring multiple expectations, or views, about world dynamics in heterogeneous environments. When the environments differ, the combination of learning multiple views and dynamically selecting the most promising ones yields a system that can learn a good policy faster and gain higher accumulated reward than the common strategy of learning a single good model and using it on all occasions.

Utilizing and maintaining multiple models requires additional computation and memory. We have shown that, by a clever decomposition of the transition function, model selection and model updating can be accomplished efficiently using online algorithms. Our experiments demonstrate that performance improvements in multi-dimensional heterogeneous environments can be achieved at a small computational cost.

The current work addresses the question of learning good models, but the problem of learning good policies in large state spaces still remains. Our model learning method is independent of the policy learning task, and can thus be coupled with any scalable approximate policy learning algorithm.
Acknowledgments
This research is supported by Academic Research Grants MOE2010-T2-2-071 and T1 251RES1005 from the Ministry of Education in Singapore.
References
[1] Atkeson, C., Santamaria, J.: A comparison of direct and model-based reinforcement learning. In: ICRA'97. vol. 4, pp. 3557-3564 (1997)
[2] Boutilier, C., Dearden, R., Goldszmidt, M.: Stochastic dynamic programming with factored representations. Journal of Artificial Intelligence 121, 49-107 (2000)
[3] Brafman, R.I., Tennenholtz, M.: R-max - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research 3, 213-231 (2002)
[4] Celiberto, L.A., Matsuura, J.P., de Mántaras, R.L., Bianchi, R.A.C.: Using cases as heuristics in reinforcement learning: A transfer learning application. In: IJCAI'11. pp. 1211-1217 (2011)
[5] Dawid, A.: Statistical theory: The prequential approach. Journal of the Royal Statistical Society A 147, 278-292 (1984)
[6] Doya, K., Samejima, K., Katagiri, K.-i., Kawato, M.: Multiple model-based reinforcement learning. Neural Computation 14, 1347-1369 (June 2002)
[7] Fernández, F., García, J., Veloso, M.: Probabilistic policy reuse for inter-task transfer learning. Robotics and Autonomous Systems 58, 866-871 (July 2010)
[8] Hester, T., Stone, P.: Generalized model learning for reinforcement learning in factored domains. In: AAMAS'09. vol. 2, pp. 717-724 (2009)
[9] Konidaris, G., Barto, A.: Efficient skill learning using abstraction selection. In: IJCAI'09. pp. 1107-1112 (2009)
[10] Leffler, B.R., Littman, M.L., Edmunds, T.: Efficient reinforcement learning with relocatable action models. In: AAAI'07. vol. 1, pp. 572-577 (2007)
[11] McCarthy, J.: Situations, actions, and causal laws. Tech. Rep. Memo 2, Stanford Artificial Intelligence Project, Stanford University (1963)
[12] Savage, L.J.: Elicitation of personal probabilities and expectations. Journal of the American Statistical Association 66(336), 783-801 (1971)
[13] Sharma, M., Holmes, M., Santamaria, J., Irani, A., Isbell, C., Ram, A.: Transfer learning in real-time strategy games using hybrid CBR/RL. In: IJCAI'07. pp. 1041-1046 (2007)
[14] Sherstov, A.A., Stone, P.: Improving action selection in MDPs via knowledge transfer. In: AAAI'05. vol. 2, pp. 1024-1029 (2005)
[15] Silva, B.C.D., Basso, E.W., Bazzan, A.L.C., Engel, P.M.: Dealing with non-stationary environments using context detection. In: ICML'06. pp. 217-224 (2006)
[16] Soni, V., Singh, S.: Using homomorphisms to transfer options across continuous reinforcement learning domains. In: AAAI'06. pp. 494-499 (2006)
[17] Strehl, A.L., Diuk, C., Littman, M.L.: Efficient structure learning in factored-state MDPs. In: AAAI'07. pp. 645-650 (2007)
[18] Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press (1998)
[19] Taylor, M.E., Jong, N.K., Stone, P.: Transferring instances for model-based reinforcement learning. In: Machine Learning and Knowledge Discovery in Databases. LNAI, vol. 5212 (2008)
[20] Taylor, M.E., Stone, P.: Transfer learning for reinforcement learning domains: A survey. Journal of Machine Learning Research 10, 1633-1685 (December 2009)
[21] Van Seijen, H., Bakker, B., Kester, L.: Switching between different state representations in reinforcement learning. In: Proceedings of the 26th IASTED International Conference on Artificial Intelligence and Applications. pp. 226-231 (2008)
[22] Walsh, T.J., Li, L., Littman, M.L.: Transferring state abstractions between MDPs. In: ICML Workshop on Structural Knowledge Transfer for Machine Learning (2006)
[23] Wilson, A., Fern, A., Ray, S., Tadepalli, P.: Multi-task reinforcement learning: A hierarchical Bayesian approach. In: ICML'07. pp. 1015-1023 (2007)
[24] Xiao, L.: Dual averaging methods for regularized stochastic learning and online optimization. In: NIPS'09 (2009)
[25] Yang, H., Xu, Z., King, I., Lyu, M.R.: Online learning for group Lasso. In: ICML'10 (2010)
[26] Yuan, M., Lin, Y.: Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 68(1), 49-67 (2006)
[27] Zhu, X., Ghahramani, Z., Lafferty, J.: Time-sensitive Dirichlet process mixture models. Tech. Rep. CMU-CALD-05-104, School of Computer Science, Carnegie Mellon University (2005)
9
Majorization for CRFs and Latent Likelihoods
Anna Choromanska
Department of Electrical Engineering
Columbia University
[email protected]
Tony Jebara
Department of Computer Science
Columbia University
[email protected]
Abstract
The partition function plays a key role in probabilistic modeling including conditional random fields, graphical models, and maximum likelihood estimation. To
optimize partition functions, this article introduces a quadratic variational upper
bound. This inequality facilitates majorization methods: optimization of complicated functions through the iterative solution of simpler sub-problems. Such
bounds remain efficient to compute even when the partition function involves
a graphical model (with small tree-width) or in latent likelihood settings. For
large-scale problems, low-rank versions of the bound are provided and outperform LBFGS as well as first-order methods. Several learning applications are
shown and reduce to fast and convergent update rules. Experimental results show
advantages over state-of-the-art optimization methods.
1 Introduction
The estimation of probability density functions over sets of random variables is a central problem
in learning. Estimation often requires minimizing the partition function as is the case in conditional
random fields (CRFs) and log-linear models [1, 2]. Training these models was traditionally done
via iterative scaling and bound-majorization methods [3, 4, 5, 6, 1] which achieved monotonic convergence. These approaches were later surpassed by faster first-order methods [7, 8, 9] and then
second-order methods such as LBFGS [10, 11, 12]. This article revisits majorization and repairs
its slow convergence by proposing a tighter bound on the log-partition function. The improved majorization outperforms state-of-the-art optimization tools and admits multiple versatile extensions.
Many decomposition methods for conditional random fields and structured prediction have sought
to render the learning and prediction problems more manageable [13, 14, 15]. Our decomposition, however, hinges on bounding and majorization: decomposing an optimization of complicated
functions through the iterative solution of simpler sub-problems [16, 17]. A tighter bound provides
convergent monotonic minimization while outperforming first- and second-order methods in practice¹. The bound applies to graphical models [18], latent variable situations [17, 19, 20, 21] as well
as high-dimensional settings [10]. It also accommodates convex constraints on the parameter space.
This article is organized as follows. Section 2 presents the bound and Section 3 uses it for majorization in CRFs. Extensions to latent likelihood are shown in Section 4. The bound is extended
to graphical models in Section 5 and high dimensional problems in Section 6. Section 7 provides
experiments and Section 8 concludes. The Supplement contains proofs and additional results.
2 Partition Function Bound
Consider a log-linear density model over discrete $y \in \Omega$
$$p(y|\theta) = \frac{1}{Z(\theta)}\, h(y) \exp\big(\theta^\top f(y)\big)$$
¹ Recall that some second-order methods like Newton-Raphson are not monotonic and may even fail to converge for convex cost functions [4] unless, of course, line searches are used.
which is parametrized by a vector $\theta \in \mathbb{R}^d$ of dimensionality $d \in \mathbb{N}$. Here, $f : \Omega \mapsto \mathbb{R}^d$ is any vector-valued function mapping an input $y$ to some arbitrary vector. The prior $h : \Omega \mapsto \mathbb{R}^+$ is a fixed non-negative measure. The partition function $Z(\theta)$ is a scalar that ensures that $p(y|\theta)$ normalizes, i.e. $Z(\theta) = \sum_y h(y)\exp(\theta^\top f(y))$. Assume that the number of configurations of $y$ is $|\Omega| = n$ and is finite². The partition function is clearly log-convex in $\theta$ and a linear lower-bound is given via Jensen's inequality. This article contributes an analogous quadratic upper-bound on the partition function. Algorithm 1 computes³ the bound's parameters and Theorem 1 shows the precise guarantee it provides.
Algorithm 1 ComputeBound
Input parameters $\tilde\theta$, $f(y)$, $h(y)$ $\forall y \in \Omega$
Init $z \to 0^+$, $\mu = 0$, $\Sigma = zI$
For each $y \in \Omega$ {
    $\alpha = h(y)\exp(\tilde\theta^\top f(y))$
    $l = f(y) - \mu$
    $\Sigma \mathrel{+}= \frac{\tanh(\frac{1}{2}\log(\alpha/z))}{2\log(\alpha/z)}\, l l^\top$
    $\mu \mathrel{+}= \frac{\alpha}{z+\alpha}\, l$
    $z \mathrel{+}= \alpha$
}
Output $z$, $\mu$, $\Sigma$
[Inset figure in Algorithm 1: $\log Z(\theta)$ plotted against $\theta$ together with its bounds; the legend entries correspond to the quadratic upper bound and the linear lower bound, both tight at $\tilde\theta$.]

Theorem 1 Algorithm 1 finds $z$, $\mu$, $\Sigma$ such that $z\exp\big(\tfrac12(\theta-\tilde\theta)^\top\Sigma(\theta-\tilde\theta) + (\theta-\tilde\theta)^\top\mu\big)$ upper-bounds $Z(\theta) = \sum_y h(y)\exp(\theta^\top f(y))$ for any $\theta$, $\tilde\theta$, $f(y) \in \mathbb{R}^d$ and $h(y) \in \mathbb{R}^+$ for all $y \in \Omega$.
Proof 1 (Sketch; see Supplement for formal proof) Recall the bound $\log(e^\gamma + e^{-\gamma}) \le c\gamma^2$ [22]. Obtain a multivariate variant $\log(e^{\lambda^\top 1} + e^{-\lambda^\top 1})$. Tilt the bound to handle $\log(h_1 e^{\lambda^\top f_1} + h_2 e^{\lambda^\top f_2})$. Add an additional exponential term to get $\log(h_1 e^{\lambda^\top f_1} + h_2 e^{\lambda^\top f_2} + h_3 e^{\lambda^\top f_3})$. Iterate the last step to extend to $n$ elements in the summation.
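To make Algorithm 1 concrete, here is a minimal NumPy sketch together with a randomized spot-check of Theorem 1. This is our illustration, not the authors' released implementation; the names (compute_bound, feats, weights) are ours, and a tiny positive constant stands in for the limit $z \to 0^+$.

```python
import numpy as np

def compute_bound(theta_tilde, feats, weights):
    """Algorithm 1 (ComputeBound): returns z, mu, Sigma such that
    Z(theta) <= z * exp(0.5*(theta-tt)' Sigma (theta-tt) + (theta-tt)' mu)
    for every theta, where tt = theta_tilde.

    feats   -- (n, d) array whose rows are f(y) for each y in Omega
    weights -- (n,) array of priors h(y) >= 0
    """
    n, d = feats.shape
    z = 1e-300                      # stands in for z -> 0+
    mu = np.zeros(d)
    Sigma = z * np.eye(d)
    for i in range(n):
        alpha = weights[i] * np.exp(feats[i] @ theta_tilde)
        l = feats[i] - mu
        r = np.log(alpha) - np.log(z)
        # by continuity, tanh(r/2)/(2r) -> 1/4 as r -> 0 (footnote 3)
        coef = 0.25 if abs(r) < 1e-12 else np.tanh(0.5 * r) / (2.0 * r)
        Sigma += coef * np.outer(l, l)
        mu += (alpha / (z + alpha)) * l
        z += alpha
    return z, mu, Sigma

# Randomized spot-check of Theorem 1.
rng = np.random.default_rng(0)
F, h, tt = rng.normal(size=(8, 3)), rng.random(8), rng.normal(size=3)
z, mu, Sigma = compute_bound(tt, F, h)
for _ in range(100):
    th = rng.normal(size=3)
    logZ = np.log(np.sum(h * np.exp(F @ th)))
    dlt = th - tt
    assert logZ <= np.log(z) + 0.5 * dlt @ Sigma @ dlt + dlt @ mu + 1e-9
```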
The bound improves previous inequalities and its proof is in the Supplement. It tightens [4, 19] since it avoids wasteful curvature tests (it uses duality theory to compare the bound and the optimized function rather than compare their Hessians). It generalizes [22] which only holds for $n = 2$ and $h(y)$ constant; it generalizes [23] which only handles a simplified one-dimensional case. The bound is computed using Algorithm 1 by iterating over the $y$ variables ("for each $y \in \Omega$") according to an arbitrary ordering via the bijective function $\pi : \Omega \mapsto \{1,\ldots,n\}$ which defines $i = \pi(y)$. The order in which we enumerate over $\Omega$ slightly varies the $\Sigma$ in the bound (but not the $\mu$ and $z$) when $|\Omega| > 2$. However, we empirically investigated the influence of various orderings on bound performance (in all the experiments presented in Section 7) and noticed no significant effect across ordering schemes. Recall that choosing $\Sigma = \sum_y h(y)\exp(\tilde\theta^\top f(y))(f(y)-\mu)(f(y)-\mu)^\top$ with $\mu$ and $z$ as in Algorithm 1 yields the second-order Taylor approximation (the Hessian) of the log-partition function. Algorithm 1 replaces a sum of log-linear models with a single log-quadratic model which makes monotonic majorization straightforward. The figure inside Algorithm 1 depicts the bound on $\log Z(\theta)$ for various choices of $\tilde\theta$. If there are no constraints on the parameters (i.e. any $\theta \in \mathbb{R}^d$ is admissible), a simple closed-form iterative update rule emerges: $\tilde\theta \leftarrow \tilde\theta - \Sigma^{-1}\mu$. Alternatively, if $\theta$ must satisfy linear (convex) constraints it is straightforward to compute an update by solving a quadratic (convex) program. This update rule is interleaved with the bound computation.
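The closed-form update can be iterated directly; a short sketch of the resulting monotonic majorization loop (reusing the compute_bound sketch above; the stopping rule is our choice, not the paper's):

```python
def minimize_log_partition(feats, weights, d, iters=50, tol=1e-8):
    # Monotonic majorization: rebuild the bound at the current point,
    # then jump to the bound's minimizer theta - Sigma^{-1} mu.
    theta = np.zeros(d)
    for _ in range(iters):
        z, mu, Sigma = compute_bound(theta, feats, weights)
        step = np.linalg.solve(Sigma, mu)
        theta = theta - step
        if np.linalg.norm(step) < tol:
            break
    return theta
```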
3 Conditional Random Fields and Log-Linear Models
The partition function arises naturally in maximum entropy estimation or minimum relative entropy
estimation (cf. Supplement) as well as in conditional extensions of the maximum entropy paradigm
where the model is conditioned on an observed input. Such models are known as conditional random
fields and have been useful for structured prediction problems [1, 24]. CRFs are given a data-set
{(x1 , y1 ), . . . , (xt , yt )} of independent identically-distributed (iid) input-output pairs where yj is
² Here, assume $n$ is enumerable. Later, for larger spaces use $O(n)$ to denote the time to compute $Z$.
³ By continuity, take $\tanh(\frac12\log(1))/(2\log(1)) = \frac14$ and $\lim_{z\to 0^+}\tanh(\frac12\log(\alpha/z))/(2\log(\alpha/z)) = 0$.
the observed sample in a (discrete) space $\Omega_j$ conditioned on the observed input $x_j$. A CRF defines a distribution over all $y \in \Omega_j$ (of which $y_j$ is a single element) as the log-linear model
$$p(y|x_j, \theta) = \frac{1}{Z_{x_j}(\theta)}\, h_{x_j}(y) \exp\big(\theta^\top f_{x_j}(y)\big)$$
where $Z_{x_j}(\theta) = \sum_{y \in \Omega_j} h_{x_j}(y)\exp(\theta^\top f_{x_j}(y))$. For the $j$'th training pair, we are given a non-negative function $h_{x_j}(y) \in \mathbb{R}^+$ and a vector-valued function $f_{x_j}(y) \in \mathbb{R}^d$ defined over the domain $y \in \Omega_j$. In this section, for simplicity, assume $n = \max_{j=1}^t |\Omega_j|$. Each partition function $Z_{x_j}(\theta)$ is a function of $\theta$. The parameter $\theta$ for CRFs is estimated by maximizing the regularized conditional log-likelihood or log-posterior⁴: $\sum_{j=1}^t \log p(y_j|x_j,\theta) - \frac{t\lambda}{2}\|\theta\|^2$ where $\lambda \in \mathbb{R}^+$ is a regularizer set using prior knowledge or cross-validation. Rewriting gives the objective of interest
$$J(\theta) = \sum_{j=1}^t \Big[\log\frac{h_{x_j}(y_j)}{Z_{x_j}(\theta)} + \theta^\top f_{x_j}(y_j)\Big] - \frac{t\lambda}{2}\|\theta\|^2. \qquad (1)$$
If prior knowledge (or constraints) restrict the solution vector to a convex hull $\Lambda$, the maximization problem becomes $\arg\max_{\theta\in\Lambda} J(\theta)$.
Algorithm 2 proposes a method for maximizing the regularized conditional likelihood $J(\theta)$ or, equivalently, minimizing the partition function $Z(\theta)$. It solves the problem in Equation 1 subject to convex constraints by interleaving the quadratic bound with a quadratic programming procedure. Theorem 2 establishes the convergence of the algorithm and the proof is in the Supplement.
Algorithm 2 ConstrainedMaximization
0: Input $x_j$, $y_j$ and functions $h_{x_j}$, $f_{x_j}$ for $j = 1,\ldots,t$, regularizer $\lambda \in \mathbb{R}^+$ and convex hull $\Lambda \subseteq \mathbb{R}^d$
1: Initialize $\theta_0$ anywhere inside $\Lambda$ and set $\tilde\theta = \theta_0$
While not converged
2: For $j = 1,\ldots,t$: get $\mu_j$, $\Sigma_j$ from $h_{x_j}$, $f_{x_j}$, $\tilde\theta$ via Algorithm 1
3: Set $\tilde\theta = \arg\min_{\theta\in\Lambda} \sum_j \tfrac12(\theta-\tilde\theta)^\top(\Sigma_j+\lambda I)(\theta-\tilde\theta) + \theta^\top\sum_j(\mu_j - f_{x_j}(y_j) + \lambda\tilde\theta)$
4: Output $\hat\theta = \tilde\theta$
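For concreteness, here is a hedged sketch of step 3 in the special case where $\Lambda$ is an axis-aligned box, using SciPy's bound-constrained L-BFGS-B as a stand-in for a QP solver (any quadratic programming routine works; constrained_step and its argument names are ours, not the paper's):

```python
import numpy as np
from scipy.optimize import minimize

def constrained_step(theta_t, H, g, bounds):
    """Minimize 0.5*(th - theta_t)' H (th - theta_t) + th' g over a box.
    H = sum_j (Sigma_j + lambda*I); g = sum_j (mu_j - f_{x_j}(y_j) + lambda*theta_t).
    bounds is a list of (lo, hi) pairs, one per coordinate."""
    obj = lambda th: 0.5 * (th - theta_t) @ H @ (th - theta_t) + th @ g
    jac = lambda th: H @ (th - theta_t) + g
    res = minimize(obj, theta_t, jac=jac, method="L-BFGS-B", bounds=bounds)
    return res.x
```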
Theorem 2 For any $\theta_0 \in \Lambda$, all $\|f_{x_j}(y)\| \le r$ and all $|\Omega_j| \le n$, Algorithm 2 outputs a $\hat\theta$ such that $J(\hat\theta) - J(\theta_0) \ge (1-\epsilon)\max_{\theta\in\Lambda}\big(J(\theta) - J(\theta_0)\big)$ in no more than
$$\log(1/\epsilon)\Big/\log\Big(1 + \tfrac{\lambda}{2r^2}\Big(\sum_{i=1}^{n-1}\tfrac{\tanh(\log(i)/2)}{\log(i)}\Big)^{-1}\Big)$$
iterations. The series $\sum_{i=1}^{n-1}\frac{\tanh(\log(i)/2)}{\log(i)} = \sum_{i=1}^{n-1}\frac{i-1}{(i+1)\log(i)}$ is the logarithmic integral, which is $O\big(\tfrac{n}{\log n}\big)$
asymptotically [26]. The next sections show how to handle hidden variables in the learning problem,
exploit graphical modeling, and further accelerate the underlying algorithms.
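In the unconstrained case $\Lambda = \mathbb{R}^d$, step 3 reduces to a single linear solve per outer iteration. A minimal sketch of this special case of Algorithm 2 (building on the compute_bound sketch from Section 2; the data layout and names are our assumptions, and the convergence check is omitted for brevity):

```python
def crf_majorize(data, lam, d, iters=100):
    """Unconstrained Algorithm 2. data is a list of (feats_j, weights_j, y_j)
    with feats_j an (n_j, d) array of f_{x_j}(y), weights_j the h_{x_j}(y),
    and y_j the row index of the observed label in Omega_j."""
    theta = np.zeros(d)
    for _ in range(iters):
        H = lam * len(data) * np.eye(d)     # sum_j lambda*I
        g = lam * len(data) * theta         # sum_j lambda*theta_tilde
        for feats, weights, yj in data:
            z, mu, Sigma = compute_bound(theta, feats, weights)
            H += Sigma
            g += mu - feats[yj]
        # Minimizer of the quadratic surrogate in step 3:
        theta = theta - np.linalg.solve(H, g)
    return theta
```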
4 Latent Conditional Likelihood
Section 3 showed how the partition function is useful for maximum conditional likelihood problems
involving CRFs. In this section, maximum conditional likelihood is extended to the setting where
some variables are latent. Latent models may provide more flexibility than fully observable models
[21, 27, 28]. For instance, hidden conditional random fields were shown to outperform generative
hidden-state and discriminative fully-observable models [21].
Consider the latent setting where we are given $t$ iid samples $x_1,\ldots,x_t$ from some unknown distribution $\tilde p(x)$ and $t$ corresponding samples $y_1,\ldots,y_t$ drawn from identical conditional distributions $\tilde p(y|x_1),\ldots,\tilde p(y|x_t)$ respectively. Assume that the true generating distributions $\tilde p(x)$ and $\tilde p(y|x)$ are unknown. Therefore, we aim to estimate a conditional distribution $p(y|x)$ from some set of hypotheses that achieves high conditional likelihood given the data-set $\mathcal{D} = \{(x_1,y_1),\ldots,(x_t,y_t)\}$.
⁴ Alternatively, variational Bayesian approaches can be used instead of maximum likelihood via expectation propagation (EP) or power EP [25]. These, however, assume Gaussian posterior distributions over parameters, require approximations, are computationally expensive and are not necessarily more efficient than BFGS.
We will select this conditional distribution by assuming it emerges from a conditioned joint distribution over $x$ and $y$ as well as a hidden variable $m$ which is being marginalized as
$$p(y|x,\theta) = \frac{\sum_m p(x,y,m|\theta)}{\sum_{y,m} p(x,y,m|\theta)}.$$
Here $m \in \Omega_m$ represents a discrete hidden variable, $x \in \Omega_x$ is an input and $y \in \Omega_y$ is a discrete output variable. The parameter $\theta$ contains all parameters that explore the function class of such conditional distributions. The latent likelihood of the data $L(\theta) = p(\mathcal{D}|\theta)$ subsumes Equation 1 and is the new objective of interest
$$L(\theta) = \prod_{j=1}^t p(y_j|x_j,\theta) = \prod_{j=1}^t \frac{\sum_m p(x_j,y_j,m|\theta)}{\sum_{y,m} p(x_j,y,m|\theta)}. \qquad (2)$$
A good choice of the parameters is one that achieves a large conditional likelihood value (or posterior) on the data set $\mathcal{D}$. Next, assume that each $p(x|y,m,\theta)$ is an exponential family distribution
$$p(x|y,m,\theta) = h(x)\exp\big(\theta_{y,m}^\top\phi(x) - a(\theta_{y,m})\big)$$
where each conditional is specified by a function $h : \Omega_x \mapsto \mathbb{R}^+$ and a feature mapping $\phi : \Omega_x \mapsto \mathbb{R}^d$ which are then used to derive the normalizer $a : \mathbb{R}^d \mapsto \mathbb{R}^+$. A parameter $\theta_{y,m} \in \mathbb{R}^d$ selects a specific distribution. Multiply each exponential family term by an unknown marginal distribution called the mixing proportions
$$p(y,m|\theta) = \frac{\pi_{y,m}}{\sum_{y,m}\pi_{y,m}}.$$
This is parametrized by an unknown parameter $\pi = \{\pi_{y,m}\}\ \forall y,m$ where $\pi_{y,m} \in [0,\infty)$. Finally, the collection of all parameters is $\theta = \{\theta_{y,m},\pi_{y,m}\}\ \forall y,m$. Thus, we have the complete likelihood
$$p(x,y,m|\theta) = \frac{\pi_{y,m}h(x)}{\sum_{y,m}\pi_{y,m}}\exp\big(\theta_{y,m}^\top\phi(x) - a(\theta_{y,m})\big).$$
Insert this expression into Equation 2 and remove constant factors that appear in both denominator and numerator. Apply the change of variables $\exp(\nu_{y,m}) = \pi_{y,m}\exp(-a(\theta_{y,m}))$ and rewrite the objective as a function⁵ of a vector $\theta$:
$$L(\theta) = \prod_{j=1}^t \frac{\sum_m \exp\big(\theta_{y_j,m}^\top\phi(x_j) + \nu_{y_j,m}\big)}{\sum_{y,m}\exp\big(\theta_{y,m}^\top\phi(x_j) + \nu_{y,m}\big)} = \prod_{j=1}^t \frac{\sum_m \exp(\theta^\top f_{j,y_j,m})}{\sum_{y,m}\exp(\theta^\top f_{j,y,m})}.$$
The last equality emerges by rearranging all parameters as a vector $\theta \in \mathbb{R}^{|\Omega_y||\Omega_m|(d+1)}$ equal to $[\theta_{1,1}^\top\ \nu_{1,1}\ \theta_{1,2}^\top\ \nu_{1,2}\ \cdots\ \theta_{|\Omega_y|,|\Omega_m|}^\top\ \nu_{|\Omega_y|,|\Omega_m|}]^\top$ and introducing $f_{j,\hat y,\hat m} \in \mathbb{R}^{|\Omega_y||\Omega_m|(d+1)}$ defined as $\big[[\phi(x_j)^\top\ 1]\,\mathbb{1}[(\hat y,\hat m)=(1,1)]\ \cdots\ [\phi(x_j)^\top\ 1]\,\mathbb{1}[(\hat y,\hat m)=(|\Omega_y|,|\Omega_m|)]\big]^\top$ (thus the feature vector $[\phi(x_j)^\top\ 1]^\top$ is positioned appropriately in the longer $f_{j,\hat y,\hat m}$ vector which is elsewhere zero). We will now find a variational lower bound on $L(\theta) \ge Q(\theta,\tilde\theta)$ which is tight when $\theta = \tilde\theta$ such that $L(\tilde\theta) = Q(\tilde\theta,\tilde\theta)$.
We proceed by bounding each numerator and each denominator in the product over $j = 1,\ldots,t$. Apply Jensen's inequality to lower bound each numerator term as
$$\sum_m \exp(\theta^\top f_{j,y_j,m}) \ge e^{\sum_m \eta_{j,m}\theta^\top f_{j,y_j,m} - \sum_m \eta_{j,m}\log\eta_{j,m}}$$
where $\eta_{j,m} = e^{\tilde\theta^\top f_{j,y_j,m}}\big/\sum_{m'} e^{\tilde\theta^\top f_{j,y_j,m'}}$. Algorithm 1 then bounds the denominator
$$\sum_{y,m}\exp(\theta^\top f_{j,y,m}) \le z_j\, e^{\frac12(\theta-\tilde\theta)^\top\Sigma_j(\theta-\tilde\theta) + (\theta-\tilde\theta)^\top\mu_j}.$$
The overall lower bound on the likelihood is then
$$Q(\theta,\tilde\theta) = L(\tilde\theta)\, e^{-\frac12(\theta-\tilde\theta)^\top\tilde\Sigma(\theta-\tilde\theta) - (\theta-\tilde\theta)^\top\tilde\mu}$$
where $\tilde\Sigma = \sum_{j=1}^t \Sigma_j$ and $\tilde\mu = \sum_{j=1}^t\big(\mu_j - \sum_m \eta_{j,m} f_{j,y_j,m}\big)$. The right hand side is simply an exponentiated quadratic function in $\theta$ which is easy to maximize. This yields an iterative scheme similar to Algorithm 2 for monotonically maximizing latent conditional likelihood.
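A sketch of one such monotone update, combining the Jensen weights $\eta$ on the numerator with Algorithm 1 on the denominator (the stacked-feature layout is an assumed encoding, and compute_bound is the earlier sketch; this is an illustration, not the authors' code):

```python
def latent_majorize_step(theta, stacked_feats, obs_rows):
    """One update theta <- theta - inv(Sigma_tilde) @ mu_tilde.
    stacked_feats[j] is a (|Y|*|M|, D) array of rows f_{j,y,m};
    obs_rows[j] lists the row indices for the observed label y_j
    (one per hidden value m)."""
    D = theta.shape[0]
    Sig_t = np.zeros((D, D))
    mu_t = np.zeros(D)
    for feats, rows in zip(stacked_feats, obs_rows):
        s = feats[rows] @ theta                      # scores for (y_j, m)
        eta = np.exp(s - s.max()); eta /= eta.sum()  # Jensen weights
        _, mu_j, Sig_j = compute_bound(theta, feats, np.ones(len(feats)))
        Sig_t += Sig_j
        mu_t += mu_j - eta @ feats[rows]
    # Maximizer of the exponentiated quadratic lower bound Q:
    return theta - np.linalg.solve(Sig_t, mu_t)
```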
5 Graphical Models for Large n
The bounds in the previous sections are straightforward to compute when $\Omega$ is small. However, for graphical models, enumerating over $\Omega$ can be daunting. This section provides faster algorithms that recover the bound efficiently for graphical models of bounded tree-width. A graphical model represents the factorization of a probability density function. This article will consider the factor graph notation of a graphical model. A factor graph is a bipartite graph $G = (V, W, E)$ with variable vertices $V = \{1,\ldots,k\}$, factor vertices $W = \{1,\ldots,m\}$ and a set of edges $E$ between $V$ and $W$. In addition, define a set of random variables $Y = \{y_1,\ldots,y_k\}$ each associated with the elements of $V$ and a set of non-negative scalar functions $\psi = \{\psi_1,\ldots,\psi_m\}$ each associated with the elements of $W$. The factor graph implies that $p(Y)$ factorizes as $p(y_1,\ldots,y_k) = \frac{1}{Z}\prod_{c\in W}\psi_c(Y_c)$ where $Z$ is a normalizing partition function (the dependence on parameters is suppressed here) and $Y_c$ is a subset of the random variables that are associated with the neighbors of node $c$. In other words, $Y_c = \{y_i \mid i \in \mathrm{Ne}(c)\}$ where $\mathrm{Ne}(c)$ is the set of vertices that are neighbors of $c$. Inference in graphical models requires the evaluation and the optimization of $Z$. These computations can be NP-hard in general yet are efficient when $G$ satisfies certain properties (low tree-width). Consider a log-linear model (a function class) indexed by a parameter $\theta \in \Lambda$ in a convex hull $\Lambda \subseteq \mathbb{R}^d$ as follows
$$p(Y|\theta) = \frac{1}{Z(\theta)}\prod_{c\in W} h_c(Y_c)\exp\big(\theta^\top f_c(Y_c)\big)$$
where $Z(\theta) = \sum_Y \prod_{c\in W} h_c(Y_c)\exp(\theta^\top f_c(Y_c))$. The model is defined by a set of vector-valued functions $f_c(Y_c) \in \mathbb{R}^d$ and scalar-valued functions $h_c(Y_c) \in \mathbb{R}^+$. Choosing a function from the function class hinges on estimating $\theta$ by optimizing $Z(\theta)$. However, Algorithm 1 may be inapplicable due to the large number of configurations in $Y$. Instead, consider a more efficient surrogate algorithm which computes the same bound parameters by efficiently exploiting the factorization of the graphical model. This is possible since exponentiated quadratics are closed under multiplication and the required bound computations distribute nicely across decomposable graphical models.
⁵ It is now easy to regularize $L(\theta)$ by adding $-\frac{t\lambda}{2}\|\theta\|^2$.
Algorithm 3 JunctionTreeBound
Input reverse-topological tree $T$ with $c = 1,\ldots,m$ factors $h_c(Y_c)\exp(\theta^\top f_c(Y_c))$ and $\tilde\theta \in \mathbb{R}^d$
For $c = 1,\ldots,m$
  If ($c < m$) { $Y_{both} = Y_c \cap Y_{pa(c)}$, $Y_{solo} = Y_c \setminus Y_{pa(c)}$ }
  Else { $Y_{both} = \{\}$, $Y_{solo} = Y_c$ }
  For each $u \in Y_{both}$ {
    Initialize $z_{c|u} \to 0^+$, $\mu_{c|u} = 0$, $\Sigma_{c|u} = z_{c|u} I$
    For each $v \in Y_{solo}$ {
      $w = u \wedge v$
      $\alpha_w = h_c(w)\, e^{\tilde\theta^\top f_c(w)} \prod_{b\in ch(c)} z_{b|w}$;  $l_w = f_c(w) - \mu_{c|u} + \sum_{b\in ch(c)} \mu_{b|w}$
      $\Sigma_{c|u} \mathrel{+}= \sum_{b\in ch(c)} \Sigma_{b|w} + \frac{\tanh(\frac12\log(\alpha_w/z_{c|u}))}{2\log(\alpha_w/z_{c|u})}\, l_w l_w^\top$;  $\mu_{c|u} \mathrel{+}= \frac{\alpha_w}{z_{c|u}+\alpha_w}\, l_w$;  $z_{c|u} \mathrel{+}= \alpha_w$
    }
  }
Output Bound as $z = z_m$, $\mu = \mu_m$, $\Sigma = \Sigma_m$
Begin by assuming that the graphical model in question is a junction tree and satisfies the running intersection property [18]. In Algorithm 3 (the Supplement provides a proof of its correctness), take $ch(c)$ to be the set of children-cliques of clique $c$ and $pa(c)$ to be the parent of $c$. Note that the algorithm enumerates over $u \in Y_{pa(c)} \cap Y_c$ and $v \in Y_c \setminus Y_{pa(c)}$. The algorithm stores a quadratic bound for each configuration of $u$ (where $u$ is the set of variables in common across both clique $c$ and its parent). It then forms the bound by summing over $v \in Y_c \setminus Y_{pa(c)}$, each configuration of each variable a clique $c$ has that is not shared with its parent clique. The algorithm also collects precomputed bounds from children of $c$. Also define $w = u \wedge v \in Y_c$ as the conjunction of both indexing variables $u$ and $v$. Thus, the two inner for loops enumerate over all configurations $w \in Y_c$ of each clique. Note that $w$ is used to query the children $b \in ch(c)$ of a clique $c$ to report their bound parameters $z_{b|w}$, $\mu_{b|w}$, $\Sigma_{b|w}$. This is done for each configuration $w$ of the clique $c$. Note, however, that not every variable in clique $c$ is present in each child $b$ so only the variables in $w$ that intersect $Y_b$ are relevant in indexing the parameters $z_{b|w}$, $\mu_{b|w}$, $\Sigma_{b|w}$ and the remaining variables do not change the values of $z_{b|w}$, $\mu_{b|w}$, $\Sigma_{b|w}$.

Algorithm 3 is efficient in the sense that computations involve enumerating over all configurations of each clique in the junction tree rather than over all configurations of $Y$. This shows that the computation involved is $O(\sum_c |Y_c|)$ rather than $O(|\Omega|)$ as in Algorithm 1. Thus, for estimating the computational efficiency of various algorithms in this article, take $n = \sum_c |Y_c|$ for the graphical model case rather than $n = |\Omega|$. Algorithm 3 is a simple extension of the known recursions that are used to compute the partition function and its gradient vector. Thus, in addition to the $\Sigma$ matrix which represents the curvature of the bound, Algorithm 3 is recovering the partition function value $z$ and the gradient since $\mu = \frac{\partial\log Z(\theta)}{\partial\theta}\big|_{\theta=\tilde\theta}$.
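The last statement is easy to confirm numerically: the $z$ and $\mu$ returned by the bound equal the partition function and the gradient of $\log Z$ at $\tilde\theta$ (a quick spot-check using the compute_bound sketch from Section 2):

```python
rng = np.random.default_rng(1)
F, h, tt = rng.normal(size=(6, 4)), rng.random(6), rng.normal(size=4)
z, mu, _ = compute_bound(tt, F, h)
w = h * np.exp(F @ tt)
assert np.isclose(z, w.sum())              # z = Z(theta_tilde)
assert np.allclose(mu, (w @ F) / w.sum())  # mu = grad of log Z at theta_tilde
```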
6 Low-Rank Bounds for Large d
In many realistic situations, the dimensionality $d$ is large and this prevents the storage and inversion of the matrix $\Sigma$. We next present a low-rank extension that can be applied to any of the algorithms presented so far. As an example, consider Algorithm 4 which is a low-rank incarnation of Algorithm 2. Each iteration of Algorithm 2 requires $O(tnd^2 + d^3)$ time since step 2 computes several $\Sigma_j \in \mathbb{R}^{d\times d}$ matrices and step 3 performs inversion. Instead, the new algorithm provides a low-rank version of the bound which still majorizes the log-partition function but requires only $\tilde O(tnd)$ complexity (putting it on par with LBFGS). First, note that step 3 in Algorithm 2 can be written as $\tilde\theta = \tilde\theta - \Sigma^{-1}u$ where $u = t\lambda\tilde\theta + \sum_{j=1}^t \mu_j - f_{x_j}(y_j)$. Clearly, Algorithm 1 can recover $u$ by only computing $\mu_j$ for $j = 1,\ldots,t$ and skipping all steps involving matrices. This merely requires $O(tnd)$ work. Second, we store $\Sigma$ using a low-rank representation $V^\top S V + D$ where $V \in \mathbb{R}^{k\times d}$ is orthonormal, $S \in \mathbb{R}^{k\times k}$ is positive semi-definite, and $D \in \mathbb{R}^{d\times d}$ is non-negative diagonal. Rather than increment the matrix by a rank one update of the form $\Sigma_i = \Sigma_{i-1} + r_i r_i^\top$ where $r_i = \sqrt{\frac{\tanh(\frac12\log(\alpha/z))}{2\log(\alpha/z)}}\,(f_i - \mu_i)$, simply project $r_i$ onto each eigenvector in $V$ and update the matrices $S$ and $V$ via a singular value decomposition ($O(k^3)$ work). After removing $k$ such projections, the remaining residual from $r_i$ forms a new eigenvector $e_{k+1}$ and its magnitude forms a new singular value. The resulting rank $(k+1)$ system is orthonormal with $(k+1)$ singular values. We discard its smallest singular value and corresponding eigenvector to revert back to an order $k$ eigensystem. However, instead of merely discarding we can absorb the smallest singular value and eigenvector into the $D$ component by bounding the remaining outer-product with a diagonal term. This provides a guaranteed overall upper bound in $\tilde O(tnd)$ ($k$ is assumed to be logarithmic with dimension $d$). Finally, to invert $\Sigma$, we apply the Woodbury formula $\Sigma^{-1} = D^{-1} - D^{-1}V^\top(S^{-1} + VD^{-1}V^\top)^{-1}VD^{-1}$, which only requires $O(k^3)$ work. A proof of correctness for Algorithm 4 can be found in the Supplement.

Algorithm 4 LowRankBound
Input parameter $\tilde\theta$, regularizer $\lambda \in \mathbb{R}^+$, model $f_t(y) \in \mathbb{R}^d$ and $h_t(y) \in \mathbb{R}^+$, and rank $k \in \mathbb{N}$
Initialize $S = 0 \in \mathbb{R}^{k\times k}$, $V$ orthonormal $\in \mathbb{R}^{k\times d}$, $D = t\lambda I \in \mathrm{diag}(\mathbb{R}^{d\times d})$
For each $t$ { Set $z \to 0^+$; $\mu = 0$;
  For each $y$ {
    $\alpha = h_t(y)e^{\tilde\theta^\top f_t(y)}$;  $r = \sqrt{\frac{\tanh(\frac12\log(\alpha/z))}{2\log(\alpha/z)}}\,(f_t(y) - \mu)$
    For $i = 1,\ldots,k$: $p(i) = r^\top V(i,\cdot)^\top$; $r = r - p(i)V(i,\cdot)^\top$
    For $i = 1,\ldots,k$: For $j = 1,\ldots,k$: $S(i,j) \mathrel{+}= p(i)p(j)$
    $Q^\top A Q = \mathrm{svd}(S)$; $S \leftarrow A$; $V \leftarrow QV$
    $s = [S(1,1),\ldots,S(k,k),\|r\|^2]^\top$; $\hat k = \arg\min_{i=1,\ldots,k+1} s(i)$
    if ($\hat k \le k$) { $D \mathrel{+}= S(\hat k,\hat k)\,\|V(\hat k,\cdot)\|_1\,\mathrm{diag}(|V(\hat k,\cdot)|)$; $S(\hat k,\hat k) = \|r\|^2$; $r = \|r\|^{-1}r$; $V(\hat k,\cdot) = r^\top$ }
    else { $D \mathrel{+}= \|r\|_1\,\mathrm{diag}(|r|)$ }
    $\mu \mathrel{+}= \frac{\alpha}{z+\alpha}(f_t(y) - \mu)$; $z \mathrel{+}= \alpha$
  }
}
Output $S \in \mathrm{diag}(\mathbb{R}^{k\times k})$, $V \in \mathbb{R}^{k\times d}$, $D \in \mathrm{diag}(\mathbb{R}^{d\times d})$
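A hedged sketch of the rank-$k$ maintenance step at the core of Algorithm 4. We use a symmetric eigendecomposition in place of the SVD of the (symmetric) $S$, and the diagonal absorption justified in the Supplement; names and array layouts are ours:

```python
def lowrank_rank1_update(S, V, Dd, r):
    """Maintain Sigma-majorizing V' diag(S) V + diag(Dd) after adding r r'.
    S: (k,) singular values; V: (k, d) orthonormal rows; Dd: (d,) diagonal."""
    k, d = V.shape
    p = V @ r                      # projections onto current eigenvectors
    res = r - V.T @ p              # residual orthogonal to span(V)
    # Diagonalize the small k x k system S + p p'.
    M = np.diag(S) + np.outer(p, p)
    A, Q = np.linalg.eigh(M)       # M = Q diag(A) Q'
    V = Q.T @ V
    c = float(res @ res)
    j = int(np.argmin(np.concatenate([A, [c]])))
    if j < k:                      # absorb smallest old direction into D
        v = V[j]
        Dd = Dd + A[j] * np.abs(v).sum() * np.abs(v)
        A[j] = c
        V[j] = res / np.sqrt(c) if c > 0 else V[j]
    else:                          # absorb the residual itself into D
        Dd = Dd + np.abs(res).sum() * np.abs(res)
    return A, V, Dd
```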
7 Experiments
We first focus on the logistic regression task and compare the performance of the bound (using
the low-rank Algorithm 2) with first-order and second order methods such as LBFGS, conjugate
gradient (CG) and steepest descent (SD). We use 4 benchmark data-sets: the SRBCT and Tumors
data-sets from [29] as well as the Text and SecStr data-sets from http://olivier.chapelle.cc/sslbook/benchmarks.html. For all experiments in this section, the setup is as follows. Each data-set is split into training (90%) and testing (10%) parts. All implementations are run on the same hardware with C++ code. The termination criterion for all algorithms is a change in estimated parameter or function values smaller than $10^{-6}$ (with a ceiling on the number of iterations of $10^6$). Results are averaged over 10 random initializations close to 0. The regularization parameter $\lambda$, when used, was chosen through cross-validation. In Table 1 we report times in seconds and the number of iterations for each algorithm (including LBFGS) to achieve the LBFGS termination solution modulo a small constant $\epsilon$ (set to $10^{-4}$). Table 1 also provides data-set sizes and regularization values. The first 4
columns in Table 1 provide results for this experiment.
Data-set    SRBCT           Tumors           Text            SecStr
Size        n=4             n=26             n=2             n=2
            t=83            t=308            t=1500          t=83679
            d=9236          d=390260         d=23922         d=632
            λ=10^1          λ=10^1           λ=10^2          λ=10^1
Algorithm   time    iter    time      iter   time    iter    time     iter
LBFGS       6.10    42      3246.83   8      15.54   7       881.31   47
SD          7.27    43      18749.15  53     153.10  69      1490.51  79
CG          40.61   100     14840.66  42     57.30   23      667.67   36
Bound       3.67    8       1639.93   4      6.18    3       27.97    9

Data-set    CoNLL            PennTree
Size        m=9              m=45
            t=1000           t=1000
            d=33615          d=14175
            λ=10^1           λ=10^1
Algorithm   time      iter   time       iter
LBFGS       25661.54  17     62848.08   7
SD          93821.72  12     156319.31  12
CG          88973.93  23     76332.39   18
Bound       16445.93  4      27073.42   2

Table 1: Time in seconds and iterations required to obtain within $\epsilon$ of the LBFGS solution (where $\epsilon = 10^{-4}$) for logistic regression problems (on SRBCT, Tumors, Text and SecStr data-sets, where $n$ is the number of classes) and Markov CRF problems (on CoNLL and PennTree data-sets, where $m$ is the number of classes). Here, $t$ is the total number of samples (training and testing), $d$ is the dimensionality of the feature vector and $\lambda$ is the cross-validated regularization setting.
Structured prediction problems are explored using two popular data-sets. The first one contains
Spanish news wire articles from a session of the CoNLL 2002 conference. This corpus involves
a named entity recognition problem and consists of sentences where each word is annotated with
one of m = 9 possible labels. The second task is from the PennTree Bank. This corpus involves a
tagging problem and consists of sentences where each word is labeled with one of m = 45 possible
parts-of-speech. A conditional random field is estimated with a Markov chain structure to give
word labels a sequential dependence. The features describing the words are constructed as in [30].
The last two columns of Table 1 provide results for this experiment. We used the low-rank version of
Algorithm 3. In both experiments, the bound always remained fastest as indicated in bold.
Figure 1: Classification boundaries using the bound and EM for a toy latent likelihood problem.
We next performed experiments with maximum latent conditional likelihood problems. We denote
by m the number of hidden variables. Due to the non-concavity of this objective, we are most interested in finding good local maxima. We start with a simple toy experiment from [19] comparing
the bound to the expectation-maximization (EM) algorithm in the binary classification problem presented on the left image of Figure 1. The model incorrectly uses only 2 Gaussians per class while
the data is generated using 8 Gaussians total. On Figure 1 we show the decision boundary obtained
using the bound (with m = 2) and EM. EM performs as well as random chance guessing while the
bound classifies the data very well. The average test log-likelihood obtained by EM was -1.5e+06
while the bound obtained -21.8.
We next compared the algorithms (the bound, Newton-Raphson, BFGS, CG and SD) in maximum
latent conditional likelihood problems on five benchmark data-sets. These included four UCI data-sets⁶ (ion, bupa, hepatitis and wine) and the previously used SRBCT data-set. The feature mapping used was $\phi(x) = x \in \mathbb{R}^d$ which corresponds to a mixture of Gaussian-gated logistic regressions (obtained by conditioning a mixture of $m$ Gaussians per class). We used a value of $\lambda = 0$ throughout the latent experiments. We explored setting $m \in \{1, 2, 3, 4\}$. Table 2 shows the testing latent log-likelihood at convergence for $m$ chosen through cross-validation (the Supplement contains a more
complete table). In bold, we show the algorithm that obtained the highest testing log-likelihood.
The bound is the best performer overall and finds better solutions in less time. Figure 2 depicts the
convergence on ion, hepatitis and SRBCT data sets.
Data-set    ion      bupa      hepatitis   wine     SRBCT
            m=3      m=2       m=2         m=3      m=4
BFGS        -5.88    -21.78    -5.28       -1.79    -6.06
SD          -5.56    -21.74    -5.14       -1.37    -5.61
CG          -5.57    -21.81    -4.84       -0.95    -5.76
Newton      -5.95    -21.85    -5.50       -0.71    -5.54
Bound       -4.18    -19.95    -4.40       -0.48    -0.11
Table 2: Test log-likelihood at convergence for ion, bupa, hepatitis, wine and SRBCT data-sets.
[Figure 2 shows three panels (ion, hepatitis, SRBCT) plotting test latent log-likelihood, $\log(J(\theta))$, against $\log(\mathrm{Time})$ [sec] for the bound, Newton, BFGS, conjugate gradient and steepest descent.]
Figure 2: Convergence of test latent log-likelihood on ion, hepatitis and SRBCT data-sets.
8 Discussion
A simple quadratic upper bound for the partition function of log-linear models was proposed and
makes majorization approaches competitive with state-of-the-art first- and second-order optimization methods. The bound is efficiently recoverable for graphical models and admits low-rank variants for high-dimensional data. It allows faster and monotonically convergent majorization in CRF
learning and maximum latent conditional likelihood problems (where it also finds better local maxima). Future work will explore intractable partition functions where likelihood evaluation is hard but
bound maximization may remain feasible. Furthermore, the majorization approach will be applied
in stochastic [31] and distributed optimization settings.
Acknowledgments
The authors thank A. Smola, M. Collins, D. Kanevsky and the referees for valuable feedback.
References
[1] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data. In ICML, 2001.
[2] A. Globerson, T. Koo, X. Carreras, and M. Collins. Exponentiated gradient algorithms for log-linear
structured prediction. In ICML, 2007.
[3] J. Darroch and D. Ratcliff. Generalized iterative scaling for log-linear models. Annals of Math. Stat.,
43:1470-1480, 1972.
⁶ Downloaded from http://archive.ics.uci.edu/ml/
[4] D. Bohning and B. Lindsay. Monotonicity of quadratic approximation algorithms. Ann. Inst. Statist.
Math., 40:641-663, 1988.
[5] A. Berger. The improved iterative scaling algorithm: A gentle introduction. Technical report, 1997.
[6] S. Della Pietra, V. Della Pietra, and J. Lafferty. Inducing features of random fields. IEEE PAMI, 19(4),
1997.
[7] R. Malouf. A comparison of algorithms for maximum entropy parameter estimation. In CoNLL, 2002.
[8] H. Wallach. Efficient training of conditional random fields. Master?s thesis, University of Edinburgh,
2002.
[9] F. Sha and F. Pereira. Shallow parsing with conditional random fields. In NAACL, 2003.
[10] C. Zhu, R. Byrd, P Lu, and J. Nocedal. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale
bound-constrained optimization. TOMS, 23(4), 1997.
[11] S. Benson and J. More. A limited memory variable metric method for bound constrained optimization.
Technical report, Argonne National Laboratory, 2001.
[12] G. Andrew and J. Gao. Scalable training of ?1-regularized log-linear models. In ICML, 2007.
[13] D. Roth. Integer linear programming inference for conditional random fields. In ICML, 2005.
[14] Y. Mao and G. Lebanon. Generalized isotonic conditional random fields. Machine Learning, 77:225-248,
2009.
[15] C. Sutton and A. McCallum. Piecewise training for structured prediction. Machine Learning, 77:165-194,
2009.
[16] J. De Leeuw and W. Heiser. Convergence of correction matrix algorithms for multidimensional scaling,
chapter Geometric representations of relational data. 1977.
[17] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm.
J. of the Royal Stat. Soc., B-39, 1977.
[18] M. Wainwright and M. Jordan. Graphical models, exponential families and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, 2008.
[19] T. Jebara and A. Pentland. On reversing Jensen?s inequality. In NIPS, 2000.
[20] J. Salojarvi, K Puolamaki, and S. Kaski. Expectation maximization algorithms for conditional likelihoods.
In ICML, 2005.
[21] A. Quattoni, S. Wang, L. P. Morency, M. Collins, and T. Darrell. Hidden conditional random fields. IEEE
PAMI, 29(10):1848-1852, October 2007.
[22] T. Jaakkola and M. Jordan. Bayesian parameter estimation via variational methods. Statistics and Computing, 10:25-37, 2000.
[23] G. Bouchard. Efficient bounds for the softmax and applications to approximate inference in hybrid models. In NIPS AIHM Workshop, 2007.
[24] B. Taskar, C. Guestrin, and D. Koller. Max margin Markov networks. In NIPS, 2004.
[25] Y. Qi, M. Szummer, and T. P. Minka. Bayesian conditional random fields. In AISTATS, 2005.
[26] T. Bromwich and T. MacRobert. An Introduction to the Theory of Infinite Series. Chelsea, 1991.
[27] S. B. Wang, A. Quattoni, L.-P. Morency, and D. Demirdjian. Hidden conditional random fields for gesture
recognition. In CVPR, 2006.
[28] Y. Wang and G. Mori. Max-margin hidden conditional random fields for human action recognition. In
CVPR, pages 872-879. IEEE, 2009.
[29] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization for Machine Learning, chapter Convex
optimization with sparsity-inducing norms. MIT Press, 2011.
[30] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In ICML, 2003.
[31] SVN. Vishwanathan, N. Schraudolph, M. Schmidt, and K. Murphy. Accelerated training of conditional
random fields with stochastic gradient methods. In ICML, 2006.
[32] T. Jebara. Multitask sparsity via maximum entropy discrimination. JMLR, 12:75-110, 2011.
Majorization for CRFs and Latent Likelihoods
(Supplementary Material)
Tony Jebara
Department of Computer Science
Columbia University
[email protected]
Anna Choromanska
Department of Electrical Engineering
Columbia University
[email protected]
Abstract
This supplement presents additional details in support of the full article. These include the application of the majorization method to maximum entropy problems.
It also contains proofs of the various theorems, in particular, a guarantee that the
bound majorizes the partition function. In addition, a proof is provided guaranteeing convergence on (non-latent) maximum conditional likelihood problems. The
supplement also contains supporting lemmas that show the bound remains applicable in constrained optimization problems. The supplement then proves the
soundness of the junction tree implementation of the bound for graphical models with large n. It also proves the soundness of the low-rank implementation of
the bound for problems with large d. Finally, the supplement contains additional
experiments and figures to provide further empirical support for the majorization
methodology.
Supplement for Section 2
Proof of Theorem 1 Rewrite the partition function as a sum over the integer index $j = 1,\ldots,n$ under the random ordering $\pi : \Omega \mapsto \{1,\ldots,n\}$. This defines $j = \pi(y)$ and associates $h$ and $f$ with $h_j = h(\pi^{-1}(j))$ and $f_j = f(\pi^{-1}(j))$. Next, write $Z(\theta) = \sum_{j=1}^n \alpha_j\exp(\lambda^\top f_j)$ by introducing $\lambda = \theta - \tilde\theta$ and $\alpha_j = h_j\exp(\tilde\theta^\top f_j)$. Define the partition function over only the first $i$ components as $Z_i(\lambda) = \sum_{j=1}^i \alpha_j\exp(\lambda^\top f_j)$. When $i = 0$, a trivial quadratic upper bound holds
$$Z_0(\lambda) \le z_0\exp\big(\tfrac12\lambda^\top\Sigma_0\lambda + \lambda^\top\mu_0\big)$$
with the parameters $z_0 \to 0^+$, $\mu_0 = 0$, and $\Sigma_0 = z_0 I$. Next, add one term to the current partition function $Z_1(\lambda) = Z_0(\lambda) + \alpha_1\exp(\lambda^\top f_1)$. Apply the current bound on $Z_0(\lambda)$ to obtain
$$Z_1(\lambda) \le z_0\exp(\tfrac12\lambda^\top\Sigma_0\lambda + \lambda^\top\mu_0) + \alpha_1\exp(\lambda^\top f_1).$$
Consider the following change of variables
$$u = \Sigma_0^{1/2}\lambda - \Sigma_0^{-1/2}(f_1 - \mu_0),\qquad \beta = \frac{\alpha_1}{z_0}\exp\big(\tfrac12(f_1-\mu_0)^\top\Sigma_0^{-1}(f_1-\mu_0)\big)$$
and rewrite the logarithm of the bound as
$$\log Z_1(\lambda) \le \log z_0 - \tfrac12(f_1-\mu_0)^\top\Sigma_0^{-1}(f_1-\mu_0) + \lambda^\top f_1 + \log\big(\exp(\tfrac12\|u\|^2) + \beta\big).$$
Apply Lemma 1 (cf. [32] p. 100) to the last term to get
$$\log Z_1(\lambda) \le \log z_0 - \tfrac12(f_1-\mu_0)^\top\Sigma_0^{-1}(f_1-\mu_0) + \lambda^\top f_1 + \log\big(\exp(\tfrac12\|v\|^2) + \beta\big) + \frac{v^\top(u-v)}{1 + \beta\exp(-\tfrac12\|v\|^2)} + \tfrac12(u-v)^\top\big(I + \Gamma vv^\top\big)(u-v)$$
where $\Gamma = \frac{\tanh(\frac12\log(\beta\exp(-\frac12\|v\|^2)))}{2\log(\beta\exp(-\frac12\|v\|^2))}$. The bound in [32] is tight when $u = v$. To achieve tightness when $\theta = \tilde\theta$ or, equivalently, $\lambda = 0$, we choose $v = \Sigma_0^{-1/2}(\mu_0 - f_1)$ yielding
$$Z_1(\lambda) \le z_1\exp\big(\tfrac12\lambda^\top\Sigma_1\lambda + \lambda^\top\mu_1\big)$$
where we have
$$z_1 = z_0 + \alpha_1,\qquad \mu_1 = \frac{z_0}{z_0+\alpha_1}\mu_0 + \frac{\alpha_1}{z_0+\alpha_1}f_1,\qquad \Sigma_1 = \Sigma_0 + \frac{\tanh(\frac12\log(\alpha_1/z_0))}{2\log(\alpha_1/z_0)}(\mu_0 - f_1)(\mu_0 - f_1)^\top.$$
This rule updates the bound parameters $z_0$, $\mu_0$, $\Sigma_0$ to incorporate an extra term in the sum over $i$ in $Z(\theta)$. The process is iterated $n$ times (replacing 1 with $i$ and 0 with $i-1$) to produce an overall bound on all terms.
Lemma 1 (See [32] p. 100) For all $u \in \mathbb{R}^d$, any $v \in \mathbb{R}^d$ and any $\beta \ge 0$, the bound
$$\log\big(\exp(\tfrac12\|u\|^2) + \beta\big) \le \log\big(\exp(\tfrac12\|v\|^2) + \beta\big) + \frac{v^\top(u-v)}{1 + \beta\exp(-\tfrac12\|v\|^2)} + \tfrac12(u-v)^\top\big(I + \Gamma vv^\top\big)(u-v)$$
holds when the scalar term $\Gamma = \frac{\tanh(\frac12\log(\beta\exp(-\|v\|^2/2)))}{2\log(\beta\exp(-\|v\|^2/2))}$. Equality is achieved when $u = v$.

Proof of Lemma 1 The proof is provided in [32].
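A randomized spot-check of Lemma 1 (trials, not a proof; the helper gamma and the tolerances are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma(beta, v):
    r = np.log(beta) - 0.5 * v @ v     # log(beta * exp(-||v||^2 / 2))
    return 0.25 if abs(r) < 1e-12 else np.tanh(0.5 * r) / (2.0 * r)

for _ in range(1000):
    d = int(rng.integers(1, 6))
    u, v = rng.normal(size=d), rng.normal(size=d)
    beta = rng.random() * 10 + 1e-6
    lhs = np.log(np.exp(0.5 * u @ u) + beta)
    G = gamma(beta, v)
    rhs = (np.log(np.exp(0.5 * v @ v) + beta)
           + (v @ (u - v)) / (1 + beta * np.exp(-0.5 * v @ v))
           + 0.5 * (u - v) @ (np.eye(d) + G * np.outer(v, v)) @ (u - v))
    assert lhs <= rhs + 1e-9
```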
Supplement for Section 3
Maximum entropy problem We show here that partition functions arise naturally in maximum entropy estimation or minimum relative entropy $RE(p\|h) = \sum_y p(y)\log\frac{p(y)}{h(y)}$ estimation. Consider the following problem:
$$\min_p RE(p\|h)\quad \text{s.t.}\quad \sum_y p(y)f(y) = 0,\qquad \sum_y p(y)g(y) \ge 0.$$
Here, assume that $f : \Omega \mapsto \mathbb{R}^d$ and $g : \Omega \mapsto \mathbb{R}^{d'}$ are arbitrary (non-constant) vector-valued functions over the sample space. The solution distribution $p(y) = h(y)\exp\big(\theta^\top f(y) + \vartheta^\top g(y)\big)/Z(\theta,\vartheta)$ is recovered by the dual optimization
$$\theta,\vartheta = \arg\max_{\vartheta \ge 0,\,\theta}\ -\log\sum_y h(y)\exp\big(\theta^\top f(y) + \vartheta^\top g(y)\big)$$
where $\theta \in \mathbb{R}^d$ and $\vartheta \in \mathbb{R}^{d'}$. These are obtained by minimizing $Z(\theta,\vartheta)$ or equivalently by maximizing its negative logarithm. Algorithm 1 permits variational maximization of the dual via the quadratic program
$$\min_{\vartheta\ge 0,\,\theta}\ \tfrac12(\beta - \tilde\beta)^\top\Sigma(\beta - \tilde\beta) + \beta^\top\mu$$
where $\beta^\top = [\theta^\top\ \vartheta^\top]$. Note that any general convex hull of constraints $\beta \in \Lambda \subseteq \mathbb{R}^{d+d'}$ could be imposed without loss of generality.
Proof of Theorem 2 We begin by proving a lemma that will be useful later.
Lemma 2 If $\kappa\Psi \succeq \Phi \succ 0$ for $\Phi,\Psi \in \mathbb{R}^{d\times d}$, then
$$L(\theta) = -\tfrac12(\theta-\tilde\theta)^\top\Phi(\theta-\tilde\theta) - (\theta-\tilde\theta)^\top\mu$$
$$U(\theta) = -\tfrac12(\theta-\tilde\theta)^\top\Psi(\theta-\tilde\theta) - (\theta-\tilde\theta)^\top\mu$$
satisfy $\sup_{\theta\in\Lambda} L(\theta) \ge \frac{1}{\kappa}\sup_{\theta\in\Lambda} U(\theta)$ for any convex $\Lambda \subseteq \mathbb{R}^d$, $\tilde\theta \in \Lambda$, $\mu \in \mathbb{R}^d$ and $\kappa \in \mathbb{R}^+$.
Proof of Lemma 2 Define the primal problems of interest as $P_L = \sup_{\theta\in\Lambda} L(\theta)$ and $P_U = \sup_{\theta\in\Lambda} U(\theta)$. The constraints $\theta \in \Lambda$ can be summarized by a set of linear inequalities $A\theta \le b$ where $A \in \mathbb{R}^{k\times d}$ and $b \in \mathbb{R}^k$ for some (possibly infinite) $k \in \mathbb{Z}$. Apply the change of variables $z = \theta - \tilde\theta$. The constraint $A(z + \tilde\theta) \le b$ simplifies into $Az \le \tilde b$ where $\tilde b = b - A\tilde\theta$. Since $\tilde\theta \in \Lambda$, it is easy to show that $\tilde b \ge 0$. We obtain the equivalent primal problems $P_L = \sup_{Az\le\tilde b} -\tfrac12 z^\top\Phi z - z^\top\mu$ and $P_U = \sup_{Az\le\tilde b} -\tfrac12 z^\top\Psi z - z^\top\mu$. The corresponding dual problems are
$$D_L = \inf_{y\ge 0}\ \frac{y^\top A\Phi^{-1}A^\top y}{2} + y^\top A\Phi^{-1}\mu + y^\top\tilde b + \frac{\mu^\top\Phi^{-1}\mu}{2}$$
$$D_U = \inf_{y\ge 0}\ \frac{y^\top A\Psi^{-1}A^\top y}{2} + y^\top A\Psi^{-1}\mu + y^\top\tilde b + \frac{\mu^\top\Psi^{-1}\mu}{2}.$$
Due to strong duality, $P_L = D_L$ and $P_U = D_U$. Apply the inequalities $\Phi \preceq \kappa\Psi$ and $y^\top\tilde b > 0$ as
$$P_L \ge \sup_{Az\le\tilde b} -\frac{\kappa}{2}z^\top\Psi z - z^\top\mu = \inf_{y\ge 0}\ \frac{y^\top A\Psi^{-1}A^\top y}{2\kappa} + \frac{y^\top A\Psi^{-1}\mu}{\kappa} + y^\top\tilde b + \frac{\mu^\top\Psi^{-1}\mu}{2\kappa} \ge \frac{1}{\kappa} D_U = \frac{1}{\kappa} P_U.$$
This proves that $P_L \ge \frac{1}{\kappa}P_U$.
We will use the above to prove Theorem 2. First, we will upper-bound (in the Loewner ordering sense) the matrices $\Sigma_j$ in Algorithm 2. Since $\|f_{x_j}(y)\| \le r$ for all $y \in \Omega_j$ and since $\mu_j$ in Algorithm 1 is a convex combination of $f_{x_j}(y)$, the outer-product terms in the update for $\Sigma_j$ satisfy
$$(f_{x_j}(y) - \mu)(f_{x_j}(y) - \mu)^\top \preceq 4r^2 I.$$
Thus, $\Sigma_j \preceq F(\alpha_1,\ldots,\alpha_n)\,4r^2 I$ holds where
$$F(\alpha_1,\ldots,\alpha_n) = \sum_{i=2}^n \frac{\tanh\big(\frac12\log(\alpha_i/\sum_{k=1}^{i-1}\alpha_k)\big)}{2\log(\alpha_i/\sum_{k=1}^{i-1}\alpha_k)}$$
using the definition of $\alpha_1,\ldots,\alpha_n$ in the proof of Theorem 1. The formula for $F$ starts at $i = 2$ since $z_0 \to 0^+$. Assume permutation $\pi$ is sampled uniformly at random. The expected value of $F$ is then
$$E_\pi[F(\alpha_1,\ldots,\alpha_n)] = \frac{1}{n!}\sum_\pi\sum_{i=2}^n \frac{\tanh\big(\frac12\log(\alpha_{\pi(i)}/\sum_{k=1}^{i-1}\alpha_{\pi(k)})\big)}{2\log(\alpha_{\pi(i)}/\sum_{k=1}^{i-1}\alpha_{\pi(k)})}.$$
We claim that the expectation is maximized when all $\alpha_i = 1$ or any positive constant. Also, $F$ is invariant under uniform scaling of its arguments. Write the expected value of $F$ as $E$ for short. Next, consider $\frac{\partial E}{\partial\alpha_l}$ at the setting $\alpha_i = 1$, $\forall i$. Due to the expectation over $\pi$, we have $\frac{\partial E}{\partial\alpha_l} = \frac{\partial E}{\partial\alpha_o}$ for any $l, o$. Therefore, the gradient vector is constant when all $\alpha_i = 1$. Since $F(\alpha_1,\ldots,\alpha_n)$ is invariant to scaling, the gradient vector must therefore be the all zeros vector. Thus, the point when all $\alpha_i = 1$ is an extremum or a saddle. Next, consider $\frac{\partial^2 E}{\partial\alpha_o\partial\alpha_l}$ for any $l, o$. At the setting $\alpha_i = 1$, $\frac{\partial^2 E}{\partial\alpha_l^2} = -c(n)$ and $\frac{\partial^2 E}{\partial\alpha_o\partial\alpha_l} = c(n)/(n-1)$ for some non-negative constant function $c(n)$. Thus, the $\alpha_i = 1$ extremum is locally concave and is a maximum. This establishes that $E_\pi[F(\alpha_1,\ldots,\alpha_n)] \le E_\pi[F(1,\ldots,1)]$ and yields the Loewner bound
$$\Sigma_j \preceq \Big(2r^2\sum_{i=1}^{n-1}\frac{\tanh(\log(i)/2)}{\log(i)}\Big) I = \omega I.$$
Apply this bound to each $\Sigma_j$ in the lower bound on $J(\theta)$ and also note a corresponding upper bound
$$J(\theta) \ge J(\tilde\theta) - \frac{t\lambda + t\omega}{2}\|\theta - \tilde\theta\|^2 - (\theta - \tilde\theta)^\top\sum_j(\mu_j - f_{x_j}(y_j) + \lambda\tilde\theta)$$
$$J(\theta) \le J(\tilde\theta) - \frac{t\lambda}{2}\|\theta - \tilde\theta\|^2 - (\theta - \tilde\theta)^\top\sum_j(\mu_j - f_{x_j}(y_j) + \lambda\tilde\theta)$$
which follows from Jensen's inequality. Define the current $\tilde\theta$ at time $\tau$ as $\theta_\tau$ and denote by $L_\tau(\theta)$ the above lower bound and by $U_\tau(\theta)$ the above upper bound at time $\tau$. Clearly, $L_\tau(\theta) \le J(\theta) \le U_\tau(\theta)$ with equality when $\theta = \theta_\tau$. Algorithm 2 maximizes $J(\theta)$ after initializing at $\theta_0$ and performing an update by maximizing a lower bound based on $\Sigma_j$. Since $L_\tau(\theta)$ replaces the definition of $\Sigma_j$ with $\omega I \succeq \Sigma_j$, $L_\tau(\theta)$ is a looser bound than the one used by Algorithm 2. Thus, performing $\theta_{\tau+1} = \arg\max_{\theta\in\Lambda} L_\tau(\theta)$ makes less progress than a step of Algorithm 2. Consider computing the slower update at each iteration $\tau$ and returning $\theta_{\tau+1} = \arg\max_{\theta\in\Lambda} L_\tau(\theta)$. Setting $\Phi = (t\lambda + t\omega)I$, $\Psi = t\lambda I$ and $\kappa = \frac{\lambda+\omega}{\lambda}$ allows us to apply Lemma 2 as follows
$$\sup_{\theta\in\Lambda} L_\tau(\theta) - L_\tau(\theta_\tau) \ge \frac{1}{\kappa}\Big(\sup_{\theta\in\Lambda} U_\tau(\theta) - U_\tau(\theta_\tau)\Big).$$
Since $L_\tau(\theta_\tau) = J(\theta_\tau) = U_\tau(\theta_\tau)$, $J(\theta_{\tau+1}) \ge \sup_{\theta\in\Lambda} L_\tau(\theta)$ and $\sup_{\theta\in\Lambda} U_\tau(\theta) \ge J(\theta^*)$, we obtain
$$J(\theta_{\tau+1}) - J(\theta^*) \ge \Big(1 - \frac{1}{\kappa}\Big)\big(J(\theta_\tau) - J(\theta^*)\big).$$
Iterate the above inequality starting at $\tau = 0$ to obtain
$$J(\theta_\tau) - J(\theta^*) \ge \Big(1 - \frac{1}{\kappa}\Big)^\tau\big(J(\theta_0) - J(\theta^*)\big).$$
A solution within a multiplicative factor of $\epsilon$ implies that $\epsilon = (1 - \frac{1}{\kappa})^\tau$ or $\log(1/\epsilon) = \tau\log\frac{\kappa}{\kappa-1}$. Inserting the definition for $\kappa$ shows that the number of iterations $\tau$ is at most $\frac{\log(1/\epsilon)}{\log\frac{\kappa}{\kappa-1}}$ or $\frac{\log(1/\epsilon)}{\log(1+\lambda/\omega)}$. Inserting the definition for $\omega$ gives the bound.
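Note that $\tanh(\log(i)/2) = \frac{i-1}{i+1}$, which makes the series inside $\omega$ easy to evaluate; a quick numeric confirmation of the identity and the $O(n/\log n)$ growth:

```python
import numpy as np

i = np.arange(2, 10000)
assert np.allclose(np.tanh(np.log(i) / 2), (i - 1) / (i + 1))
series = np.sum((i - 1) / ((i + 1) * np.log(i)))
print(series, len(i) / np.log(len(i)))   # same order of magnitude
```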
[Figure 3 depicts a root clique $Y_1^{2,0}$ with children $Y_1^{1,1}, Y_2^{1,1}, Y_3^{1,1},\ldots,Y_{m_{1,1}}^{1,1}$.]
Figure 3: Junction tree of depth 2.
Algorithm 5 SmallJunctionTree
Input parameters $\tilde\theta$ and $h(u), f(u)\ \forall u \in Y_1^{2,0}$, and $z_i, \mu_i, \Sigma_i\ \forall i = 1,\ldots,m_{1,1}$
Initialize $z \to 0^+$, $\mu = 0$, $\Sigma = zI$
For each configuration $u \in Y_1^{2,0}$ {
  $\alpha = h(u)\big(\prod_{i=1}^{m_{1,1}} z_i\exp(-\tilde\theta^\top\mu_i)\big)\exp\big(\tilde\theta^\top(f(u) + \sum_{i=1}^{m_{1,1}}\mu_i)\big) = h(u)\exp(\tilde\theta^\top f(u))\prod_{i=1}^{m_{1,1}} z_i$
  $l = f(u) + \sum_{i=1}^{m_{1,1}}\mu_i - \mu$
  $\Sigma \mathrel{+}= \sum_{i=1}^{m_{1,1}}\Sigma_i + \frac{\tanh(\frac12\log(\alpha/z))}{2\log(\alpha/z)}\, ll^\top$
  $\mu \mathrel{+}= \frac{\alpha}{z+\alpha}\, l$
  $z \mathrel{+}= \alpha$
}
Output $z$, $\mu$, $\Sigma$
Supplement for Section 5
Proof of correctness for Algorithm 3 Consider a simple junction tree of depth 2 shown on Figure 3. The notation $Y_c^{a,b}$ refers to the $c$th tree node located at tree level $a$ (first level is considered as the one with tree leaves) whose parent is the $b$th from the higher tree level (the root has no parent so $b = 0$). Let $\sum_{Y_{c_1}^{a_1,b_1}}$ refer to the sum over all configurations of variables in $Y_{c_1}^{a_1,b_1}$ and $\sum_{Y_{c_1}^{a_1,b_1}\setminus Y_{c_2}^{a_2,b_2}}$ refer to the sum over all configurations of variables that are in $Y_{c_1}^{a_1,b_1}$ but not in $Y_{c_2}^{a_2,b_2}$. Let $m_{a,b}$ denote the number of children of the $b$th node located at tree level $a + 1$. For short-hand, use $\psi(Y) = h(Y)\exp(\theta^\top f(Y))$. The partition function can be expressed as:
[Figure 4 depicts a root clique $Y_1^{3,0}$, middle-level cliques $Y_1^{2,1},\ldots,Y_{m_{2,1}}^{2,1}$ and, beneath each $Y_i^{2,1}$, leaf cliques $Y_1^{1,i},\ldots,Y_{m_{1,i}}^{1,i}$.]
Figure 4: Junction tree of depth 3.
$$Z(\theta) = \sum_{u\in Y_1^{2,0}}\psi(u)\prod_{i=1}^{m_{1,1}}\Big[\sum_{v\in Y_i^{1,1}\setminus Y_1^{2,0}}\psi(v)\Big] \le \sum_{u\in Y_1^{2,0}}\psi(u)\prod_{i=1}^{m_{1,1}}\Big[z_i\exp\Big(\tfrac12(\theta-\tilde\theta)^\top\Sigma_i(\theta-\tilde\theta) + (\theta-\tilde\theta)^\top\mu_i\Big)\Big]$$
where the upper-bound is obtained by applying Theorem 1 to each of the terms $\sum_{v\in Y_i^{1,1}\setminus Y_1^{2,0}}\psi(v)$. By simply rearranging terms we get:
$$Z(\theta) \le \sum_{u\in Y_1^{2,0}} h(u)\Big(\prod_{i=1}^{m_{1,1}} z_i\exp(-\tilde\theta^\top\mu_i)\Big)\exp\Big(\theta^\top\big(f(u) + \sum_{i=1}^{m_{1,1}}\mu_i\big)\Big)\exp\Big(\tfrac12(\theta-\tilde\theta)^\top\Big(\sum_{i=1}^{m_{1,1}}\Sigma_i\Big)(\theta-\tilde\theta)\Big).$$
One can prove that this expression can be upper-bounded by $z\exp\big(\tfrac12(\theta-\tilde\theta)^\top\Sigma(\theta-\tilde\theta) + (\theta-\tilde\theta)^\top\mu\big)$ where $z$, $\mu$ and $\Sigma$ can be computed using Algorithm 5 (a simplification of Algorithm 3). We will call this result Lemma A. The proof is similar to the proof of Theorem 1 so is not repeated here.
Consider enlarging the tree to a depth 3 as shown on Figure 4. The partition function is now
$$Z(\theta) = \sum_{u\in Y_1^{3,0}}\psi(u)\prod_{i=1}^{m_{2,1}}\Bigg[\sum_{v\in Y_i^{2,1}\setminus Y_1^{3,0}}\psi(v)\prod_{j=1}^{m_{1,i}}\Big(\sum_{w\in Y_j^{1,i}\setminus Y_i^{2,1}}\psi(w)\Big)\Bigg].$$
By Lemma A we can upper bound each $\sum_{v\in Y_i^{2,1}\setminus Y_1^{3,0}}\psi(v)\prod_{j=1}^{m_{1,i}}\big(\sum_{w\in Y_j^{1,i}\setminus Y_i^{2,1}}\psi(w)\big)$ term by the expression $z_i\exp\big(\tfrac12(\theta-\tilde\theta)^\top\Sigma_i(\theta-\tilde\theta) + (\theta-\tilde\theta)^\top\mu_i\big)$. This yields
$$Z(\theta) \le \sum_{u\in Y_1^{3,0}}\psi(u)\prod_{i=1}^{m_{2,1}}\Big[z_i\exp\Big(\tfrac12(\theta-\tilde\theta)^\top\Sigma_i(\theta-\tilde\theta) + (\theta-\tilde\theta)^\top\mu_i\Big)\Big].$$
This process can be viewed as collapsing the sub-trees $S_1^{2,1}, S_2^{2,1},\ldots,S_{m_{2,1}}^{2,1}$ to super-nodes that are represented by bound parameters, $z_i$, $\mu_i$ and $\Sigma_i$, $i = \{1,2,\cdots,m_{2,1}\}$, where the sub-trees are defined as:
$$S_1^{2,1} = \{Y_1^{2,1}, Y_1^{1,1}, Y_2^{1,1}, Y_3^{1,1},\ldots,Y_{m_{1,1}}^{1,1}\}$$
$$S_2^{2,1} = \{Y_2^{2,1}, Y_1^{1,2}, Y_2^{1,2}, Y_3^{1,2},\ldots,Y_{m_{1,2}}^{1,2}\}$$
$$\vdots$$
$$S_{m_{2,1}}^{2,1} = \{Y_{m_{2,1}}^{2,1}, Y_1^{1,m_{2,1}}, Y_2^{1,m_{2,1}}, Y_3^{1,m_{2,1}},\ldots,Y_{m_{1,m_{2,1}}}^{1,m_{2,1}}\}.$$
Notice that the obtained expression can be further upper bounded using again Lemma A (induction) yielding a bound of the form: $z\exp\big(\tfrac12(\theta-\tilde\theta)^\top\Sigma(\theta-\tilde\theta) + (\theta-\tilde\theta)^\top\mu\big)$.

Finally, for a general tree, follow the same steps described above, starting from leaves and collapsing nodes to super-nodes, each represented by bound parameters. This procedure effectively yields Algorithm 3 for the junction tree under consideration.
Supplement for Section 6
Proof of correctness for Algorithm 4 We begin by proving a lemma that will be useful later.
Lemma 3 For all $x \in \mathbb{R}^d$ and for all $l \in \mathbb{R}^d$,
$$\sum_{i=1}^d x(i)^2 l(i)^2 \ge \Bigg(\sum_{i=1}^d x(i)\frac{l(i)^2}{\sqrt{\sum_{j=1}^d l(j)^2}}\Bigg)^2.$$

Proof of Lemma 3 By Jensen's inequality,
$$\sum_{i=1}^d x(i)^2 l(i)^2 = \Big(\sum_{j=1}^d l(j)^2\Big)\sum_{i=1}^d x(i)^2\frac{l(i)^2}{\sum_{j=1}^d l(j)^2} \ge \Big(\sum_{j=1}^d l(j)^2\Big)\Bigg(\sum_{i=1}^d x(i)\frac{l(i)^2}{\sum_{j=1}^d l(j)^2}\Bigg)^2 = \Bigg(\sum_{i=1}^d x(i)\frac{l(i)^2}{\sqrt{\sum_{j=1}^d l(j)^2}}\Bigg)^2.$$
Now we prove the correctness of Algorithm 4. At the $i$th iteration, the algorithm stores $\Sigma_i$ using a low-rank representation $V_i^\top S_i V_i + D_i$ where $V_i \in \mathbb{R}^{k\times d}$ is orthonormal, $S_i \in \mathbb{R}^{k\times k}$ positive semi-definite and $D_i \in \mathbb{R}^{d\times d}$ is non-negative diagonal. The diagonal terms $D$ are initialized to $t\lambda I$ where $\lambda$ is the regularization term. To mimic Algorithm 1 we must increment the $\Sigma$ matrix by a rank one update of the form $\Sigma_i = \Sigma_{i-1} + r_i r_i^\top$. By projecting $r_i$ onto each eigenvector in $V$, we can decompose it as $r_i = \sum_{j=1}^k V_{i-1}(j,\cdot)r_i\, V_{i-1}(j,\cdot)^\top + g = V_{i-1}^\top V_{i-1} r_i + g$ where $g$ is the remaining residue. Thus the update rule can be rewritten as:
$$\Sigma_i = \Sigma_{i-1} + r_i r_i^\top = V_{i-1}^\top S_{i-1} V_{i-1} + D_{i-1} + (V_{i-1}^\top V_{i-1} r_i + g)(V_{i-1}^\top V_{i-1} r_i + g)^\top$$
$$= V_{i-1}^\top(S_{i-1} + V_{i-1} r_i r_i^\top V_{i-1}^\top)V_{i-1} + D_{i-1} + gg^\top = \hat V_{i-1}^\top\hat S_{i-1}\hat V_{i-1} + gg^\top + D_{i-1}$$
where we define $\hat V_{i-1} = Q_{i-1} V_{i-1}$ and defined $Q_{i-1}$ in terms of the singular value decomposition, $Q_{i-1}^\top\hat S_{i-1}Q_{i-1} = \mathrm{svd}(S_{i-1} + V_{i-1} r_i r_i^\top V_{i-1}^\top)$. Note that $\hat S_{i-1}$ is diagonal and non-negative by construction. The current formula for $\Sigma_i$ shows that we have a rank $(k+1)$ system (plus diagonal term) which needs to be converted back to a rank $k$ system (plus diagonal term) which we denote by $\tilde\Sigma_i$. We have two options as follows.

Case 1) Remove $g$ from $\Sigma_i$ to obtain
$$\tilde\Sigma_i = \hat V_{i-1}^\top\hat S_{i-1}\hat V_{i-1} + D_{i-1} = \Sigma_i - gg^\top = \Sigma_i - cvv^\top$$
where $c = \|g\|^2$ and $v = \frac{1}{\|g\|}g$.

Case 2) Remove the $m$th (smallest) eigenvalue in $\hat S_{i-1}$ and its corresponding eigenvector:
$$\tilde\Sigma_i = \hat V_{i-1}^\top\hat S_{i-1}\hat V_{i-1} + D_{i-1} + gg^\top - \hat S(m,m)\hat V(m,\cdot)^\top\hat V(m,\cdot) = \Sigma_i - cvv^\top$$
where $c = \hat S(m,m)$ and $v = \hat V(m,\cdot)^\top$.
Clearly, both cases can be written as an update of the form $\tilde\Sigma_i = \Sigma_i - cvv^\top$ where $c \ge 0$ and $v^\top v = 1$. We choose the case with smaller $c$ value to minimize the change as we drop from a system of order $(k+1)$ to order $k$. Discarding the smallest singular value and its corresponding eigenvector would violate the bound. Instead, consider absorbing this term into the diagonal component to preserve the bound. Formally, we look for a diagonal matrix $F$ such that $\bar\Sigma_i = \tilde\Sigma_i + F$ which also maintains $x^\top\bar\Sigma_i x \ge x^\top\Sigma_i x$ for all $x \in \mathbb{R}^d$. Thus, we want to satisfy:
$$x^\top\bar\Sigma_i x \ge x^\top\Sigma_i x \iff x^\top F x \ge c\,x^\top vv^\top x \iff \sum_{i=1}^d x(i)^2 F(i) \ge c\Big(\sum_{i=1}^d x(i)v(i)\Big)^2$$
where, for ease of notation, we take $F(i) = F(i,i)$.

Define $\bar v = \frac{1}{w}v$ where $w = v^\top\mathbf{1}$. Consider the case where $v \ge 0$ though we will soon get rid of this assumption. We need an $F$ such that $\sum_{i=1}^d x(i)^2 F(i) \ge c\big(\sum_{i=1}^d x(i)v(i)\big)^2$. Equivalently, we need $\sum_{i=1}^d x(i)^2\frac{F(i)}{cw^2} \ge \big(\sum_{i=1}^d x(i)\bar v(i)\big)^2$. Define $\bar F(i) = \frac{F(i)}{cw^2}$ for all $i = 1,\ldots,d$. So, we need an $\bar F$ such that $\sum_{i=1}^d x(i)^2\bar F(i) \ge \big(\sum_{i=1}^d x(i)\bar v(i)\big)^2$. Using Lemma 3 it is easy to show that we may choose $\bar F(i) = \bar v(i)$. Thus, we obtain $F(i) = cw^2\bar F(i) = cw\,v(i)$. Therefore, for all $x \in \mathbb{R}^d$, all $v \ge 0$, and for $F(i) = cv(i)\sum_{j=1}^d v(j)$ we have
$$\sum_{i=1}^d x(i)^2 F(i) \ge c\Big(\sum_{i=1}^d x(i)v(i)\Big)^2. \qquad (3)$$
To generalize the inequality to hold for all vectors $v \in \mathbb{R}^d$ with potentially negative entries, it is sufficient to set $F(i) = c|v(i)|\sum_{j=1}^d|v(j)|$. To verify this, consider flipping the sign of any $v(i)$. The left side of Inequality 3 does not change. For the right side of this inequality, flipping the sign of $v(i)$ is equivalent to flipping the sign of $x(i)$ and not changing the sign of $v(i)$. However, in this case the inequality holds as shown before (it holds for any $x \in \mathbb{R}^d$). Thus for all $x, v \in \mathbb{R}^d$ and for $F(i) = c|v(i)|\sum_{j=1}^d|v(j)|$, Inequality 3 holds.
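The absorption rule can be spot-checked directly: $F(i) = c|v(i)|\sum_j|v(j)|$ makes $F - cvv^\top$ positive semi-definite (random trials, not a proof):

```python
import numpy as np

rng = np.random.default_rng(2)
for _ in range(200):
    d = int(rng.integers(2, 8))
    v = rng.normal(size=d)
    c = float(rng.random())
    F = np.diag(c * np.abs(v) * np.abs(v).sum())
    M = F - c * np.outer(v, v)
    assert np.linalg.eigvalsh(M).min() >= -1e-10
```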
Supplement for Section 7
Small scale experiments In additional small-scale experiments, we compared Algorithm 2 with
steepest descent (SD), conjugate gradient (CG), BFGS and Newton-Raphson. Small-scale problems
may be interesting in real-time learning settings, for example, when a website has to learn from a
user's uploaded labeled data in a split second to perform real-time retrieval. We considered logistic regression on five UCI data sets where missing values were handled via mean-imputation. A range of regularization settings $\lambda \in \{10^0, 10^2, 10^4\}$ was explored and all algorithms were initialized from the
same ten random start-points. Table 3 shows the average number of seconds each algorithm needed
to achieve the same solution that BFGS converged to (all algorithms achieve the same solution due
to concavity). The bound is the fastest algorithm as indicated in bold.
data|λ   a|10^0  a|10^2  a|10^4  b|10^0  b|10^2  b|10^4  c|10^0  c|10^2  c|10^4  d|10^0  d|10^2  d|10^4  e|10^0  e|10^2  e|10^4
BFGS     1.90    0.89    2.45    3.14    2.00    1.60    4.09    1.03    1.90    5.62    2.88    3.28    2.63    2.01    1.49
SD       1.74    0.92    1.60    2.18    6.17    5.83    1.92    0.64    0.56    12.04   1.27    1.94    2.68    2.49    1.54
CG       0.78    0.83    0.85    0.70    0.67    0.83    0.65    0.64    0.72    1.36    1.21    1.23    0.48    0.55    0.43
Newton   0.31    0.25    0.22    0.43    0.37    0.35    0.39    0.34    0.32    0.92    0.63    0.60    0.35    0.26    0.20
Bound    0.01    0.01    0.01    0.07    0.04    0.04    0.07    0.02    0.02    0.16    0.09    0.07    0.03    0.03    0.03
Table 3: Convergence time in seconds under various regularization levels for a) Bupa (t =
345, dim = 7), b) Wine (t = 178, dim = 14), c) Heart (t = 187, dim = 23), d) Ion
(t = 351, dim = 34), and e) Hepatitis (t = 155, dim = 20) data sets.
Influence of rank k on bound performance in large scale experiments We also examined the
influence of k on bound performance and compared it with LBFGS, SD and CG. Several choices
of k were explored. Table 4 shows results for the SRBCT data-set. In general, the bound performs
best but slows down for superfluously large values of k. Steepest descent and conjugate gradient
are slow yet obviously do not vary with k. Note that each iteration takes less time with smaller k
for the bound. However, we are reporting overall runtime which is also a function of the number of
iterations. Therefore, total runtime (a function of both) may not always decrease/increase with k.
k        1      2      4      8      16     32     64
LBFGS    1.37   1.32   1.39   1.35   1.46   1.40   1.54
SD       8.80   8.80   8.80   8.80   8.80   8.80   8.80
CG       4.39   4.39   4.39   4.39   4.39   4.39   4.39
Bound    0.56   0.56   0.67   0.96   1.34   2.11   4.57
Table 4: Convergence time in seconds as a function of k.
Additional latent-likelihood results For completeness, Figure 5 depicts two additional data-sets
to complement Figure 2. Similarly, Table 5 shows all experimental settings explored in order to
provide the summary Table 2 in the main article.
[Figure 5 shows two panels (bupa, wine) plotting test latent log-likelihood, $\log(J(\theta))$, against $\log(\mathrm{Time})$ [sec] for the same five algorithms as Figure 2.]
Figure 5: Convergence of test latent log-likelihood for bupa and wine data-sets.
ion            m=1     m=2     m=3     m=4
BFGS          -4.96   -5.55   -5.88   -5.79
SD           -11.80   -9.92   -5.56   -8.59
CG            -5.47   -5.81   -5.57   -5.22
Newton        -5.95   -5.95   -5.95   -5.95
Bound         -6.08   -4.84   -4.18   -5.17

bupa           m=1     m=2     m=3     m=4
BFGS         -22.07  -21.78  -21.92  -21.87
SD           -21.76  -21.74  -21.73  -21.83
CG           -21.81  -21.81  -21.81  -21.81
Newton       -21.85  -21.85  -21.85  -21.85
Bound        -21.85  -19.95  -20.01  -19.97

wine           m=1     m=2     m=3     m=4
BFGS          -0.90   -0.91   -1.79   -1.35
SD            -1.61   -1.60   -1.37   -1.63
CG            -0.51   -0.78   -0.95   -0.51
Newton        -0.71   -0.71   -0.71   -0.71
Bound         -0.51   -0.51   -0.48   -0.51

hepatitis      m=1     m=2     m=3     m=4
BFGS          -4.42   -5.28   -4.95   -4.93
SD            -4.93   -5.14   -5.01   -5.20
CG            -4.84   -4.84   -4.84   -4.84
Newton        -5.50   -5.50   -5.50   -4.50
Bound         -5.47   -4.40   -4.75   -4.92

SRBCT          m=1     m=2     m=3     m=4
BFGS          -5.99   -6.17   -6.09   -6.06
SD            -5.61   -5.62   -5.62   -5.61
CG            -5.62   -5.49   -5.36   -5.76
Newton        -5.54   -5.54   -5.54   -5.54
Bound         -5.31   -5.31   -4.90   -0.11

Table 5: Test latent log-likelihood at convergence for different values of m ∈ {1, 2, 3, 4} on the ion, bupa, hepatitis, wine and SRBCT data sets.
4,260 | 4,856 | Probabilistic Event Cascades for Alzheimer's disease
Daniel Alexander
University College London
[email protected]
Jonathan Huang
Stanford University
[email protected]
Abstract
Accurate and detailed models of neurodegenerative disease progression are crucially important for reliable early diagnosis and the determination of effective treatments. We introduce the ALPACA (Alzheimer's disease Probabilistic Cascades) model, a generative model linking latent Alzheimer's progression dynamics to observable biomarker data. In contrast with previous works which model disease progression as a fixed event ordering, we explicitly model the variability over such orderings among patients, which is more realistic, particularly for highly detailed progression models. We describe efficient learning algorithms for ALPACA and discuss promising experimental results on a real cohort of Alzheimer's patients from the Alzheimer's Disease Neuroimaging Initiative.
1 Introduction
Models of disease progression are among the core tools of modern medicine for early disease
diagnosis, treatment determination, and for explaining symptoms to patients. In neurological disease, for example, symptoms and pathologies tend to be similar across different conditions; the ordering and severity of those changes, however, discriminates among diseases. Thus progression models are key to early differential diagnosis and hence to drug development
(for finding the right participants in trials) and for eventual deployment of effective treatments.
Despite their utility, traditional models of disease progression [3, 17] have largely been limited to
coarse symptomatic staging which divides patients into a small number of groups by thresholding
a crude clinical score of how far the disease has progressed. The models are thus only as precise
as these crude clinical scores: although providing insight into disease mechanisms, they provide
little benefit for early diagnosis or accurate patient staging. With the growing availability of larger
datasets consisting of measurements from clinical, imaging and pathological sources, however,
more detailed characterizations of disease progression are now becoming feasible and a key hope
in medical science is that such models will provide earlier, more accurate diagnosis, leading to
more effective development and deployment of emerging treatments. The recent availability of
cross-sectional datasets such as the Alzheimer's Disease Neuroimaging Initiative data has generated intense speculation in the neurology community about the nature of the cascade of events in AD and the ordering in which biomarkers show abnormality. Several hypothetical models [12, 5, 1] broadly agree, but differ in some ways. Despite early attempts on limited data sets [13], a data-driven confirmation of those models remains a pressing need.
Beckett [2] was the first, nearly two decades ago, to propose a data-driven model of disease progression using a distribution over orderings of clinical events. This earlier work of [2] considered the
progressive loss of physical abilities in ageing persons such as the ability to do heavy work around
the house, or to climb up stairs. More recently, Fonteijn et al. [8] developed event-based models of
disease progression by analyzing ordered series of much finer-grained clinical and atrophy events
with applications to the study of familial Alzheimer's disease and Huntington's disease, both of
which are well-studied autosomal-dominantly inherited neurodegenerative diseases. Examples of
events in the model of [8] include (but are not limited to) clinical events (such as a transition from
Presymptomatic Alzheimer's to Mild Cognitive Impairment) and the onset of atrophy (reduction of
tissue volume). By assuming a single universal ordering of events within the disease progression,
the method of [8] is able to scale to much larger collections of events, thus achieving much more
detailed characterizations of disease progression compared to that of [2].
The assumption made in [8] of a universal ordering common to all patients within a disease cohort is, however, a major oversimplification of reality: the event ordering can vary considerably
among patients even if it is consistent enough to distinguish different diseases. In practice, the
assumption of a universal ordering within the model means we cannot recover the diversity of
orderings over population groups and can make fitting the model to patient data unstable. To
address the universal ordering problem, our work revisits the original philosophy of [2] by explicitly
modeling a distribution over permutations. By carefully considering computational complexity and
exploiting modern machine learning techniques, however, we are able to overcome many of its
original limitations. For example, where [2] did not model measurement noise, our method can
handle a wide range of measurement models. Additionally, like [8], our method can achieve the
scalability that is required to produce fine-grained disease progression models. The following is a
summary of our main contributions.
• We propose the Alzheimer's disease Probabilistic Cascades (ALPACA) model, a probabilistic model of disease cascades, allowing for patients to have distinct event orderings.
• We develop efficient probabilistic inference and learning algorithms for ALPACA, including a novel patient "staging" method, which predicts a patient's full trajectory through clinical and atrophy events from sparse and noisy measurement data.
• We provide empirical validation of our algorithms on synthetic data in a variety of settings as well as promising preliminary results for a real cohort of Alzheimer's patients.
2 Preliminaries: Snapshots of neurodegenerative disease cascades
We model a neurodegenerative disease cascade as an ordering of a discrete set of N events, {e_1, ..., e_N}. These events represent changes in patient state, such as a sufficiently low score on a memory test for a clinical diagnosis of AD, or the first measurement of tissue pathology, such as significant atrophy in the hippocampus (a memory-related brain area). An ordering over events is represented as a permutation σ which maps events to the positions within the ordering at which they occur. We write σ as σ(1)|σ(2)|...|σ(N), where σ(j) = e_i means that "event i occurs in position j with respect to σ". In practice, the ordering σ for a particular patient can only be observed indirectly via snapshots which probe, at a particular point in time, whether each event has occurred or not. We denote a snapshot by a vector of N measurements z = (z_{e_1}, ..., z_{e_N}), where each z_{e_i} is a real-valued measurement reflecting a noisy diagnosis as to whether event i of the disease progression has occurred prior to measuring z.¹ Were it not for noise within the measurement process, a single snapshot z would partition the event set into two disjoint subsets: events that have occurred already (e.g., {e_{σ(1)}, ..., e_{σ(r)}}) and events which have yet to occur (e.g., {e_{σ(r+1)}, ..., e_{σ(N)}}).

Where prior models [8] considered data in which a patient is only associated with a single snapshot (taken at a single time point), we allow for multiple snapshots of a patient to be taken, spaced throughout that patient's disease cascade. In this more general case of k snapshots, the event set is partitioned into k + 1 disjoint subsets (in the absence of noise). For example, if σ = e_3|e_1|e_4|e_5|e_6|e_2, then k = 2 snapshots might partition the event ordering into sets X_1 = {e_1, e_3}, X_2 = {e_4, e_5}, X_3 = {e_2, e_6}, reflecting that events in X_1 occur before events in X_2, which occur before events in X_3. Such partitions can also be thought of as partial rankings over the events (and indeed, we will exploit recent methods for learning with partial rankings in our own approach [11]). To denote partial rankings, we again use vertical bars, separating the events that occur between snapshots. In the above example, we would write e_1, e_3 | e_4, e_5 | e_2, e_6. This connection between snapshots and partial rankings plays a key role in our inference algorithms in Section 4.1.

Instead of reasoning with continuous snapshot times, we use the fact that many distinct snapshot times can result in the same partial ranking to reason instead with discrete snapshot sets. By snapshot set, we refer to the collection of positions in the full event ordering just before each snapshot is taken. In our running example, the snapshot set is τ = {2, 4}. Given a full ordering σ, the partial ranking which arises from snapshot data (assuming no noise) is fully determined by τ. We denote this resulting partial ranking by σ|_τ. Thus in our running example, σ|_{τ={2,4}} = e_1, e_3 | e_4, e_5 | e_2, e_6.
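To make the notation concrete, the following small Python sketch computes the partial ranking σ|_τ induced by a full ordering σ and a snapshot set τ; the function name is ours, not the paper's.

    def partial_ranking(sigma, tau):
        """Split a full event ordering sigma (a list of event labels) into the
        partial ranking sigma|_tau induced by a snapshot set tau.

        tau holds the positions in sigma just before each snapshot is taken,
        so k snapshots partition the N events into k + 1 consecutive blocks.
        """
        cuts = sorted(tau)
        blocks, start = [], 0
        for c in cuts + [len(sigma)]:
            blocks.append(set(sigma[start:c]))
            start = c
        return blocks

    # Running example from the text: sigma = e3|e1|e4|e5|e6|e2, tau = {2, 4}
    print(partial_ranking(["e3", "e1", "e4", "e5", "e6", "e2"], {2, 4}))
    # -> [{'e3', 'e1'}, {'e4', 'e5'}, {'e6', 'e2'}]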
3 ALPACA: the Alzheimer's disease Probabilistic Cascades model
We now present ALPACA, a generative model of noisy snapshots in which the event ordering for each patient is a latent variable. ALPACA makes two main assumptions: (1) that the measured outcomes for each patient are independent of each other and (2) that, conditioned on the event ordering of each patient and the time at which a snapshot is taken, the measurements for each event are independent. In contrast with [8], we do not assume that multiple snapshot vectors for the same patient are independent of each other. The simplest form of ALPACA is as follows. For each patient j = 1, ..., M:

1. Draw an ordering of the events σ^(j) from a Mallows distribution P(σ; σ_0, λ) over orderings.
2. Draw a snapshot set τ^(j) from a uniform distribution P(τ) over subsets of the event set.
3. For each element of the snapshot set, τ_i^(j) ∈ {τ_1^(j), ..., τ_{K^(j)}^(j)}, and for each event e = e_1, ..., e_N:
   (a) If σ^{-1}(e) ≤ τ_i^(j) (i.e., if event e has occurred prior to time τ_i^(j)), draw z_{i,e}^(j) ∼ N(μ_e^occurred, c_e^occurred); otherwise draw z_{i,e}^(j) ∼ N(μ_e^healthy, c_e^healthy).

¹ For notational simplicity, we assume that measurements corresponding to each event are scalar-valued. However, our model extends trivially to more complicated measurements.
In the above basic model, each entry of a snapshot vector, z_{i,e}^(j), is generated by sampling from a univariate measurement model (assumed in this case to be Gaussian). If event e has already occurred, the observation z_{i,e}^(j) is sampled from the distribution N(μ_e^occurred, c_e^occurred); otherwise z_{i,e}^(j) is sampled from a measurement distribution estimated from a control population of healthy individuals, N(μ_e^healthy, c_e^healthy). For notational simplicity, we denote the collection of snapshots for patient j by z_{·,τ}^(j) = {z_{i,e}^(j)}_{i=1,...,K^(j), e=1,...,N}. We remark that the success of our approach does not hinge on the assumption of normality, and our algorithms can deal with a variety of measurement models.
For example, certain clinical events (such as the loss of the ability to pass a memory test) are more
naturally modeled as discrete observations and can trivially be incorporated into the current model.
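The generative process can be sketched directly in code. Two caveats: the paper does not prescribe a sampler for the Mallows draw, so we use the standard repeated-insertion construction (an exact Mallows sampler); and we assume snapshot positions are drawn uniformly from the K-subsets of {1, ..., N}, which matches the running examples.

    import numpy as np

    def sample_mallows(sigma0, lam, rng):
        """Exact Mallows sample via repeated insertion: item i of sigma0 inserted
        at slot j among i existing items creates (i - j) inversions, weighted
        exp(-lam * (i - j))."""
        order = []
        for i, event in enumerate(sigma0):
            w = np.exp(-lam * (i - np.arange(i + 1)))
            j = rng.choice(i + 1, p=w / w.sum())
            order.insert(int(j), event)
        return order

    def sample_patient(sigma0, lam, K, mu_h, c_h, mu_o, c_o, rng):
        """One patient's snapshots under the basic ALPACA model.
        Events are labelled 0..N-1; mu_*/c_* are per-event means and variances."""
        N = len(sigma0)
        sigma = sample_mallows(sigma0, lam, rng)
        tau = np.sort(rng.choice(np.arange(1, N + 1), size=K, replace=False))
        inv = {e: p + 1 for p, e in enumerate(sigma)}       # sigma^{-1}(e)
        z = np.empty((K, N))
        for i, t in enumerate(tau):
            for e in range(N):
                occurred = inv[e] <= t                      # event e before snapshot i?
                mu, c = (mu_o[e], c_o[e]) if occurred else (mu_h[e], c_h[e])
                z[i, e] = rng.normal(mu, np.sqrt(c))
        return sigma, tau, z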
The prior distribution over possible event orderings is assumed to take the form of the well-known Mallows distribution, which has been used in a number of other application areas such as NLP, social choice, and psychometrics ([6, 15, 18]), and has the following probability mass function over orderings: P(σ = σ; σ_0, λ) ∝ exp(-λ d_K(σ, σ_0)), where d_K(·, ·) is the Kendall's tau distance metric on orderings. The Kendall's tau distance counts the number of inversions, i.e., pairs of events for which σ and σ_0 disagree over relative ordering. Mallows models are analogous to normal distributions in that σ_0 can be interpreted as the mean or central ordering and λ as a measure of the "spread" of the distribution. Both parameters are viewed as fixed quantities to be estimated via the empirical Bayesian approach outlined in Section 4.
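The Kendall's tau distance and the (unnormalized) Mallows probability mass are straightforward to code; the quadratic pairwise-comparison implementation below is deliberately simple.

    import itertools, math

    def kendall_tau(sigma, sigma0):
        """Number of event pairs whose relative order sigma and sigma0 disagree on."""
        pos = {e: i for i, e in enumerate(sigma)}
        pos0 = {e: i for i, e in enumerate(sigma0)}
        return sum(1 for a, b in itertools.combinations(sigma0, 2)
                   if (pos[a] < pos[b]) != (pos0[a] < pos0[b]))

    def mallows_unnormalized(sigma, sigma0, lam):
        """exp(-lam * d_K(sigma, sigma0)); the normalizer has the closed form
        Z(lam) = prod_{j=1}^{N} (1 - exp(-j*lam)) / (1 - exp(-lam))."""
        return math.exp(-lam * kendall_tau(sigma, sigma0))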
The choices of the Mallows model for orderings and the uniform distribution for snapshot sets are
particularly convenient for clinical settings in which the number of subjects may be limited, since
the small number of parameters of the model (which scales linearly in N ) sufficiently constrains
learning, and eases our discussion of inference and learning in Section 4. However, as we discuss in
Section 5, the parametric assumptions made in the most basic form of ALPACA can be considerably
relaxed without impacting the computational complexity of learning. Our algorithms are thus
applicable for more general classes of distributions over orderings as well as snapshot sets.
Application to patient staging. With respect to the event-based characterization of disease progression, a critical problem is that of patient staging: determining the extent to which a disease has progressed for a particular patient given corresponding measurement data. ALPACA offers a simple and natural formulation of the patient staging problem as a probabilistic inference query. In particular, given the measurements corresponding to a particular patient, we perform patient staging by (1) computing a posterior distribution over the event ordering σ^(j), then (2) computing a posterior distribution over the most recent element of the snapshot set τ^(j).

To visualize the posterior distribution over the event ordering σ^(j), we plot a simple "first-order staging diagram", displaying the probability that event e has occurred (or will occur) in position q according to the posterior. Two major features differentiate ALPACA from traditional patient staging approaches, in which patients are binned into a small number of imprecisely defined stages. In particular, our method is more fine-grained, allowing for a detailed picture of what the patient has undergone as well as a prediction of what is to come next. Moreover, ALPACA has well-defined probabilistic semantics, allowing for a rigorous probabilistic characterization of uncertainty.
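Given Gibbs samples of (σ^(j), τ^(j)) for a patient, the staging visualizations reduce to empirical histograms. A possible construction (helper names are ours):

    import numpy as np

    def staging_diagram(sigma_samples, N):
        """D[e, q]: posterior probability that event e occupies position q."""
        D = np.zeros((N, N))
        for sigma in sigma_samples:
            for q, e in enumerate(sigma):
                D[e, q] += 1.0
        return D / len(sigma_samples)

    def latest_snapshot_histogram(tau_samples, N):
        """Posterior over the position of the patient's most recent snapshot."""
        h = np.zeros(N + 1)
        for tau in tau_samples:
            h[max(tau)] += 1.0
        return h / len(tau_samples)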
4 Inference algorithms and parameter estimation
In this section we describe tractable inference and parameter estimation procedures for ALPACA.
4.1 Inference.
Given a collection of K^(j) snapshots for a patient j, the critical inference problem that we must solve is that of computing a posterior distribution over the latent event order and snapshot set for that patient. Despite the fact that all latent variables are discrete, however, computing this posterior distribution can be nontrivial due to the super-exponential size of the state space (which is O(N! · C(N, K^(j)))), for which there exist no tractable exact inference algorithms.
We thus turn to a Gibbs sampling approximation. Directly applying the Gibbs sampler to the model is difficult, however. One reason is that it is not obvious how to tractably sample the event ordering σ conditioned on its Markov blanket, given that the corresponding likelihood function is not conjugate to the Mallows prior. Instead, noting that the snapshots depend on (σ, τ) only through the partial ranking γ ≡ σ|_τ, our Gibbs sampler operates on an augmented model in which the partial ranking γ is first generated (deterministically) from σ and τ, and the snapshots are then generated conditioned on the partial ranking γ. See Fig. 1(a) for a Bayes net representation. This augmented model is equivalent to the original model, but has the advantage that it reduces the sampling step for the event ordering σ to a well-understood problem (described below). Our sampler thus alternates between sampling σ and jointly sampling (γ, τ) from the following conditional distributions:

    σ^(j) ∼ P(σ | γ = γ^(j), τ = τ^(j); σ_0, λ),      (γ^(j), τ^(j)) ∼ P(γ, τ | σ = σ^(j), z_{·,τ}^(j)).      (4.1)

Observe that since the snapshot set τ is fully determined by the partial ranking γ, it is not necessary to condition on τ in Equation 4.1 (left). Similarly, in Equation 4.1 (right), since γ is fully determined given both the event ordering σ and the snapshot set τ, one can sample τ first and deterministically reconstruct γ. Therefore the Gibbs sampling updates are:

    σ^(j) ∼ P(σ | γ = γ^(j); σ_0, λ),      τ^(j) ∼ P(τ | σ = σ^(j), z_{·,τ}^(j)).      (4.2)
While the Gibbs sampling updates here effectively reduce the inference problem to smaller inference problems, the state spaces over σ and τ still remain intractably large (with cardinalities O(N!) and O(C(N, K^(j))), respectively). In the remainder of this section, we show how to exploit even further structure within each of the conditional distributions over σ and τ for efficient inference. As a result, we are able to carry out Gibbs sampling operations efficiently and exactly.
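Structurally, the sampler is an ordinary two-block Gibbs loop. The sketch below shows only the control flow; the two conditional samplers, conditioning a Mallows model on a partial ranking [9] and the grid HMM derived below, are passed in as stubs, and partial_ranking is the helper from the Section 2 sketch.

    import numpy as np

    def gibbs(z, sigma0, lam, n_iters, rng,
              sample_ordering_given_partial, sample_snapshot_set_given_ordering):
        """Two-block Gibbs sampler for one patient (Equation 4.2).

        sample_ordering_given_partial(gamma, sigma0, lam, rng) -> sigma must draw
        from the Mallows posterior conditioned on a partial ranking; a routine for
        sample_snapshot_set_given_ordering(sigma, z, rng) -> tau is sketched below.
        """
        K, N = z.shape
        sigma = list(rng.permutation(N))                       # arbitrary initialization
        tau = sorted(rng.choice(np.arange(1, N + 1), size=K, replace=False).tolist())
        samples = []
        for _ in range(n_iters):
            gamma = partial_ranking(sigma, tau)                # gamma = sigma|_tau
            sigma = sample_ordering_given_partial(gamma, sigma0, lam, rng)
            tau = sample_snapshot_set_given_ordering(sigma, z, rng)
            samples.append((list(sigma), list(tau)))
        return samples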
Sampling event orderings. To sample σ^(j) from the conditional distribution in Equation 4.2, we must condition a Mallows prior on the partial ranking γ = γ^(j). This precise problem has in fact been discussed in a number of works [4, 14, 15, 9]. In our experiments, we use the method of Huang [9], which explicitly computes a representation of the posterior, from which one can efficiently (and exactly) draw independent samples.
Sampling snapshot sets. We now turn to the problem of sampling a snapshot set τ^(j) of size K^(j) from Equation 4.2 (right). Note first that if K^(j) is small (say, less than 3), then one can exhaustively compute the posterior probability of each of the C(N, K^(j)) K^(j)-subsets and draw a sample from a tabular representation of the posterior. For larger K^(j), however, the exhaustive approach is intractable. In the following, we present a dynamic programming algorithm for sampling snapshot sets with running time much lower than the exhaustive setting (even for small K^(j)). Our core insight is to exploit conditional independence relations within the posterior distribution over snapshot sets. That such independence relations exist may not seem surprising due to the simplicity of the uniform prior over snapshot sets; on the other hand, note that the individual times of a snapshot set drawn from the uniform distribution over K^(j)-subsets are not a priori independent of each other (they could not be, as the total number of times is observed and fixed to be K^(j)). As we show in the following, however, we can bijectively associate each snapshot set with a trajectory through a certain grid. With respect to this grid-based representation of snapshot sets, we then show that the posterior distribution can be viewed as that of a particular hidden Markov model (HMM).
We will consider the set G = {(x, y) : 0 ≤ x ≤ K^(j) and 0 ≤ y ≤ N - K^(j)}. G is a grid (depicted in Fig. 1(b)) which we visualize with (K^(j), N - K^(j)) in the upper left corner and (0, 0) in the lower right corner. Let P_G denote the collection of staircase walks (paths which never go up or to the left) through the grid G starting and ending at the corners (K^(j), N - K^(j)) and (0, 0), respectively. An example staircase walk is outlined in blue in Figure 1(b). It is not difficult to verify that every element in P_G has length N (i.e., every staircase walk traverses exactly N edges in the grid).

Given a grid G, we can now state a one-to-one correspondence between the staircase walks in P_G and the K^(j)-subsets of {1, ..., N}. To establish the correspondence, we first associate each edge of the grid with the sum of the indices of the starting node of that edge; hence the edge from (x_1, y_1) to (x_2, y_2) is associated with the number x_1 + y_1. Given any staircase walk p = ((x_0, y_0), (x_1, y_1), ..., (x_N, y_N)) in P_G, we associate p with the subset of events in {1, ..., N} corresponding to the subset of edges of p which point downwards. It is not difficult to show that this association is, in fact, bijective (i.e., given a snapshot set τ, there is a unique staircase walk p_τ mapping to τ).
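The bijection is a few lines of code; the helpers below (our own naming) convert a snapshot set to its staircase walk and back, and on the paper's example (N = 5, K^(j) = 2, τ = {4, 2}) they reproduce the highlighted path of Figure 1(b).

    def walk_from_snapshot_set(tau, N, K):
        """Staircase walk p_tau from (K, N-K) down to (0, 0). The edge leaving a
        node whose coordinates sum to s consumes snapshot position s when s is in
        tau (x decreases); otherwise y decreases."""
        x, y = K, N - K
        path = [(x, y)]
        for s in range(N, 0, -1):
            if s in tau:
                x -= 1
            else:
                y -= 1
            path.append((x, y))
        return path

    def snapshot_set_from_walk(path):
        """Recover tau as the starting-node sums of the x-decreasing edges."""
        return {x0 + y0 for (x0, y0), (x1, _) in zip(path, path[1:]) if x1 < x0}

    # Example from Figure 1(b): N = 5, K = 2, tau = {4, 2}
    assert walk_from_snapshot_set({4, 2}, 5, 2) == \
        [(2, 3), (2, 2), (1, 2), (1, 1), (0, 1), (0, 0)]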
[Figure 1 appears here, with panels (a) and (b).]
Figure 1: (a) Bayesian network representation of our model (augmented by adding the partial ranking γ). (b) Grid-structured state space G for sampling snapshot sets, with edges labeled with transition probabilities according to Equation 4.3. In this example, N = 5 and K^(j) = 2. The example path (highlighted) is p = ((2, 3), (2, 2), (1, 2), (1, 1), (0, 1), (0, 0)), corresponding to the snapshot set τ = {4, 2}.
We now show that our encoding of K^(j)-subsets as staircase walks allows the posterior over τ in Equation 4.2 to factor with respect to a hidden Markov model. Conditioned on σ = σ^(j), we define an HMM over G with the following transition and observation probabilities, respectively:

    P((x_t, y_t) | (x_{t-1}, y_{t-1}) = (x, y)) ∝  x/(x+y)  if (x_t, y_t) = (x - 1, y),
                                                   y/(x+y)  if (x_t, y_t) = (x, y - 1),
                                                   0        otherwise,                                     (4.3)

    L(z_{·,σ(N-t)}^(j) | (x_t, y_t)) ∝ Φ(x_t, x_t + y_t; z_{1,σ(N-t)}^(j), ..., z_{K^(j),σ(N-t)}^(j)),     (4.4)

    where Φ(v, e; z_1, ..., z_K) ∝ ∏_{i=1}^{v-1} P(z_i; μ_e^healthy, c_e^healthy) · ∏_{i=v}^{K} P(z_i; μ_e^occurred, c_e^occurred).

The initial state is set to (x_0, y_0) = (K^(j), N - K^(j)) and the chain terminates when (x, y) = (0, 0). Note that sample trajectories from the above HMM are staircase walks with probability one.

Proposition 1. Conditioned on σ = σ^(j), the posterior probability P(τ = τ^(j) | σ = σ^(j), z_{·,τ}^(j)) is equal to the posterior probability of the staircase walk p_{τ^(j)} under the hidden Markov model defined by Equations 4.3 and 4.4.

To sample a snapshot set from the conditional distribution in Equation 4.2, we therefore sample staircase walks from the above HMM and convert the resulting samples to snapshot sets.
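The snapshot-set sampler is a backward-filtering / forward-sampling pass over the grid HMM. The sketch below follows Eqs. (4.3)-(4.4) but fixes one convention as an assumption: at step t we score the first x_t snapshots of event σ(N - t) under the healthy model and the rest under the occurred model; the exact off-by-one in the definition of Φ should be checked against Eq. (4.4). It exploits the fact that at time t the state is determined by x alone (since x + y = N - t), giving the O(N²) cost discussed below.

    import numpy as np
    from scipy.stats import norm

    def sample_snapshot_set(z, sigma, params, rng):
        """Backward-filter then forward-sample a staircase walk from the grid HMM,
        and return the corresponding snapshot set. z: (K, N); sigma: position ->
        event (0-based); params holds per-event arrays mu_h, c_h, mu_o, c_o."""
        K, N = z.shape

        def log_obs(t, x):
            # Observation for event sigma(N - t): first x snapshots scored as
            # "healthy", remaining K - x as "occurred" (assumed convention).
            e = sigma[N - t]
            z_e = z[:, e]
            return (norm.logpdf(z_e[:x], params["mu_h"][e], np.sqrt(params["c_h"][e])).sum()
                    + norm.logpdf(z_e[x:], params["mu_o"][e], np.sqrt(params["c_o"][e])).sum())

        # beta[t][x] = log P(observations after step t | x_t = x), with y = N - t - x.
        beta = [dict() for _ in range(N + 1)]
        beta[N][0] = 0.0
        for t in range(N - 1, -1, -1):
            for x in range(max(0, K - t), min(K, N - t) + 1):
                y = N - t - x
                opts = []
                if x > 0:
                    opts.append(np.log(x / (x + y)) + log_obs(t + 1, x - 1) + beta[t + 1][x - 1])
                if y > 0:
                    opts.append(np.log(y / (x + y)) + log_obs(t + 1, x) + beta[t + 1][x])
                beta[t][x] = np.logaddexp.reduce(opts)

        x, tau = K, []
        for t in range(1, N + 1):
            y = N - (t - 1) - x                       # current node sums to N - t + 1
            moves, logw = [], []
            if x > 0:
                moves.append(x - 1)
                logw.append(np.log(x / (x + y)) + log_obs(t, x - 1) + beta[t][x - 1])
            if y > 0:
                moves.append(x)
                logw.append(np.log(y / (x + y)) + log_obs(t, x) + beta[t][x])
            w = np.exp(np.array(logw) - max(logw))
            nxt = moves[rng.choice(len(moves), p=w / w.sum())]
            if nxt < x:
                tau.append(x + y)                     # this edge consumes snapshot x + y
            x = nxt
        return sorted(tau)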
Time complexity of a single Gibbs iteration. We now consider the computational complexity of our inference procedures. First observe that the complexity of sampling from the posterior distribution of a Mallows model conditioned on a partial ranking is O(N²) [9]. We claim that the complexity of sampling a snapshot set is also O(N²). To see why, note that the complexity of the backwards algorithm for HMMs is quadratic in the number of states and linear in the number of time steps. In our case, the number of states is K^(j)(N - K^(j)) and the number of time steps is N. Thus in the worst case (where K^(j) ≈ N/2), the complexity of naively sampling a staircase walk is O(N⁵). However, we can exploit additional problem structure. First, since the HMM transition matrix is sparse (each state transitions to at most two states), the backwards algorithm can be performed in O(N · #(states)) time. Second, since the grid coordinates corresponding to the current state at time T are constrained to sum to N - T, the size of the effective state space is reduced to O(N) rather than O(K^(j)(N - K^(j))). Thus in the worst case, the running time can in turn be reduced to O(N²), and even to linear time O(N) when K^(j) ∈ O(1). In conclusion, a single Gibbs iteration requires at most O(N²) operations.
Mixing considerations. Under mild assumptions, it is not difficult to establish ergodicity of our Gibbs sampler, showing that the sampling distribution must eventually converge to the desired posterior. The one exception is when the size of the snapshot set is one less than the number of events (K^(j) = N - 1). In this exceptional case,² the grid G has size N - 1, forcing the Gibbs sampler to be deterministic. As a result, the Markov chain defined by the Gibbs sampler is not irreducible and hence not ergodic. We have:

Proposition 2. The Gibbs sampler is ergodic on its state space if and only if K^(j) < N - 1.

Even when K^(j) < N - 1, mixing times for the chain can be longer for larger snapshot sets (where K^(j) is close to N - 1). For example, when K^(j) = N - 2, it is possible to show that the T-th ordering in the Gibbs chain can differ from the (T + 1)-th ordering by at most an adjacent swap. Consequently, since it requires O(N²) adjacent swaps (in the worst case) to reach the mode of the posterior distribution with nonzero probability, we can lower bound the mixing time in this case by O(N²) steps. For smaller K^(j), the Gibbs sampler is able to make larger jumps in state space and indeed, for these chains, we observe faster mixing times in practice.

² Note that to have so many snapshots for a single patient would be rare indeed.
4.2 Parameter estimation.
Given a snapshot dataset {z^(j)}_{j=1,...,M}, we now discuss how to estimate the ALPACA model parameters (σ_0, λ) by maximizing the marginal log-likelihood: ℓ(σ_0, λ) = Σ_{j=1}^{M} log P(z^(j) | σ_0, λ). Currently we obtain point estimates of model parameters, but fuller Bayesian approaches are also possible. Our approach uses Monte Carlo expectation maximization (EM) to alternate between the following two steps, given an initial setting of model parameters (σ_0^(0), λ^(0)).

E-step. For each patient in the cohort, use the inference algorithm described in Section 4.1 to obtain a draw from the posterior distribution P(σ^(j), τ^(j) | z^(j), σ_0, λ). Note that multiple draws can also be taken to reduce the variance of the E-step.

M-step. Given the draws obtained via the E-step, we can now apply standard Mallows model estimation algorithms (see [7, 16, 15]) to optimize the parameters σ_0 and λ given the sampled ordering for each patient. Optimizing for λ, for example, is a one-dimensional convex optimization [16]. Optimizing for σ_0 (sometimes called the consensus ranking problem) is known to be NP-hard. Our implementation uses the Fligner and Verducci heuristic [7] (which is known to be an unbiased estimator of σ_0) followed by local search, but more sophisticated estimators exist [16]. Note that the sampled snapshot sets ({τ^(j)}) do not play a role in the M-step described here, but can be used to estimate parameters for the more complex snapshot set distributions described in Section 5.

Complexity of EM. A single iteration of our E-step requires O(N² T_Gibbs M) time, where T_Gibbs is the number of Gibbs iterations. The running time of the M-step is O(N² M) (assuming a single sample per patient), and the overall cost is therefore dominated by the E-step.
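A Monte Carlo EM skeleton following the recipe above. Both M-step routines are simplified stand-ins: a Borda-style average-rank consensus replaces the Fligner-Verducci heuristic plus local search, and a grid search replaces a proper solver for the one-dimensional convex problem in λ; kendall_tau is the helper from the Section 3 sketch.

    import numpy as np

    def consensus_ordering(sigma_samples, N):
        """Borda-style heuristic: sort events by average sampled position."""
        avg = np.zeros(N)
        for sigma in sigma_samples:
            for pos, e in enumerate(sigma):
                avg[e] += pos
        return [int(e) for e in np.argsort(avg)]

    def fit_lambda(sigma_samples, sigma0, grid=np.linspace(0.01, 5.0, 200)):
        """Maximize the complete-data Mallows likelihood over lam by grid search,
        using Z(lam) = prod_{j=1}^{N} (1 - exp(-j*lam)) / (1 - exp(-lam))."""
        N = len(sigma0)
        d_bar = np.mean([kendall_tau(s, sigma0) for s in sigma_samples])
        def neg_avg_loglik(lam):
            logZ = sum(np.log1p(-np.exp(-j * lam)) - np.log1p(-np.exp(-lam))
                       for j in range(1, N + 1))
            return lam * d_bar + logZ
        return float(grid[np.argmin([neg_avg_loglik(l) for l in grid])])

    def monte_carlo_em(data, N, n_rounds, rng, e_step):
        """data: per-patient snapshot matrices; e_step(z, sigma0, lam, rng) returns
        one posterior draw (sigma, tau), e.g. via the Gibbs sampler above."""
        sigma0, lam = [int(e) for e in rng.permutation(N)], 1.0
        for _ in range(n_rounds):
            draws = [e_step(z, sigma0, lam, rng)[0] for z in data]
            sigma0 = consensus_ordering(draws, N)
            lam = fit_lambda(draws, sigma0)
        return sigma0, lam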
5 Extensions of the basic model
Generalized ordering models. The classical Mallows model for orderings is often too inflexible for real datasets. One limitation is that the positional variances of all of the events are governed by just a single parameter, λ. In clinical datasets, it is more conceivable
that different biomarkers within a disease cascade change over different timescales, thus leading to
higher positional variance for certain events and lower positional variance for others.
Fortunately our approach applies to any class of distributions for which one can efficiently condition
on partial ranking observations. In our experiments (Section 6), we achieve more flexibility using
the generalized Mallows model [7, 16], which includes the classical Mallows model as a special
case and allows for the positional variance of each event e to be governed by its own corresponding
parameter λ_e. Generalized Mallows models are in turn a special case of the recently introduced hierarchical riffle independent models [10] which allow one to capture dependencies among small subsets of events. Huang et al. [11], in particular, proved that these hierarchical riffle independent
models form a natural conjugate prior family for partial ranking likelihood functions and introduced
efficient algorithms for conditioning on partial ranking observations.
It is finally interesting to note that it would not be trivial to use traditional Markov chains to capture
the dependencies in the event sequence due to the fact that observations come in snapshot form
instead of being indexed by time as they would be in an ordinary hidden Markov model. Thus in
order to properly perform inference, one would have to infer an HMM posterior with respect to
each of the permutations of the event set, which is computationally harder.
Generalized snapshot set models. Going beyond the uniform distribution, ALPACA can also
efficiently handle a more general class of snapshot set distribution by observing that any distribution
parametrizable as a Markov chain over the grid G that generates staircase walks can be substituted
for the uniform distribution with exactly the same time complexity of Gibbs sampling. As a cautionary remark, we note that allowing for these more general models without additional constraints
can sometimes lead to instabilities in parameter estimation. A simple constrained Markov chain that
we have successfully used in experiments parameterizes transition probabilities such that a staircase
walk moves down at node (x, y) in the grid G with probability proportional to θx and to the left with probability proportional to (1 - θ)y. Setting θ = 1/2 recovers the uniform distribution. Setting 0 ≤ θ < 1/2, however, reflects a prior bias for snapshots to have been taken earlier in the disease cascade, while setting 1/2 < θ ≤ 1 reflects a prior bias for snapshots to have been taken later in the disease cascade. Thus θ intuitively allows us to interpolate between early and late detection.

[Figure 2 appears here, with panels (a)-(e).]
Figure 2: Experimental results. (a) Central ranking recovery vs. measurement noise; synthetic data, N = 10 events, M = 250 patients, K^(j) ∈ {1, 2, 3}; the worst-case Kendall's tau score is 45.0. (b) Central ranking recovery vs. size of patient cohort; synthetic data, N = 20 events, K^(j) ∈ {1, ..., 10}; the worst-case Kendall's tau score is 190.0. (c) Illustration of mixing times using a Gibbs trace plot on a synthetic dataset with N = 20 and K^(j) = 4, 8, 12, 16; larger snapshot sets (larger K^(j)) lead to longer mixing times. (d) BIC scores on the ADNI data (lower is better) comparing the ALPACA model (with varying settings of θ) against the single ordering model of [8] (shown in the σ* column). (e) ADNI patient staging: (left) first-order staging diagram, whose (e, q)-th entry is the probability that event e has occurred/will occur in position q; (right) posterior probability distribution over the position in the event ordering at which the patient snapshot was taken.
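Sampling from this biased prior is a one-line change to the staircase-walk machinery. In the sketch below, θ (our reconstruction of the bias parameter's symbol) is assumed to lie strictly inside (0, 1):

    import numpy as np

    def sample_biased_snapshot_set(N, K, theta, rng):
        """Biased staircase-walk prior: at node (x, y), step down (consuming the
        snapshot position x + y) with probability proportional to theta * x, and
        left with probability proportional to (1 - theta) * y. theta = 0.5
        recovers the uniform distribution over K-subsets."""
        x, y, tau = K, N - K, []
        while (x, y) != (0, 0):
            p_down = theta * x / (theta * x + (1.0 - theta) * y)
            if rng.random() < p_down:
                tau.append(x + y)
                x -= 1
            else:
                y -= 1
        return sorted(tau)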
6 Experiments
Synthetic data experiments. We first validate ALPACA on synthetic data. Since we are interested in the ability of the model to recover the true central ranking, we evaluate based on the Kendall's tau distance between the ground truth central ranking and the central rankings learned by our algorithms. To understand how learning is impacted by measurement noise, we simulate data from models in which the means μ^healthy and μ^occurred are fixed to be 0 and 1, respectively, and variances are selected uniformly at random from the interval (0, c_e^MAX); we then learn model parameters from the simulated data. Fig. 2(a) illustrates the results on a problem with N = 10 events and 250 patients (with K^(j) set to be 1, 2, or 3 randomly for each patient) as c_e^MAX varies over [0.2, 1.2]. As shown in the figure, we obtain nearly perfect performance for low measurement noise, with recovery rates degrading gracefully at higher measurement noise levels.
We also show results on a larger problem with N = 20 events, c_e = 0.1, and K^(j) drawn uniformly at random from {1, ..., 10}. Varying the cohort size this time, Fig. 2(b) shows, as expected, that recovery rates for the central ordering improve as the number of patients increases. Note that with 20 events, it would be utterly intractable to use brute-force inference algorithms, but our algorithms can process a patient's measurements in roughly 3 seconds on a laptop.

In both experiments (Figs. 2(a) and 2(b)), we discard the first 200 burn-in iterations of Gibbs sampling, but it is often sufficient to discard far fewer iterations. To illustrate mixing behavior, Fig. 2(c) shows example Gibbs trace plots with N = 20 events and varying sizes of the snapshot set, K^(j). We observe that mixing time increases as K^(j) increases, confirming the discussion of mixing in Sec. 4.1.
The ADNI dataset. We also present a preliminary analysis of a cohort with a total of 347 subjects (including 83 control subjects) from the Alzheimer's Disease Neuroimaging Initiative (ADNI). We derive seven typical biomarkers associated with the onset of Alzheimer's: (1) the total tau level in cerebrospinal fluid (CSF) [tau], (2) the total Aβ42 level in CSF [abeta], (3) the total ADAS cognitive assessment score [adas], (4) brain volume [brainvol], (5) hippocampal volume [hippovol], (6) brain atrophy rate [brainatrophy], and (7) hippocampal atrophy rate [hippoatrophy]. Due to the small number of measured events in the ADNI data, it is possible
to apply the model of Fonteijn et al. [8] (which assumes that all patients follow a single ordering σ*) by searching exhaustively over the collection of all 7! = 5040 orderings. We compare the ALPACA model against the single ordering model via BIC scores (shown in Fig. 2(d)). We fit our model five times, with the bias parameter θ (described in Section 5) set to .1, .3, .5, .7, .9. We use a single Gaussian for each of the healthy and occurred measurement distributions (as described in [8]), assuming that all patients in the control group are healthy.³
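For reference, the model comparison in Fig. 2(d) only needs a marginal log-likelihood estimate per model; a minimal BIC helper (the parameter count and the likelihood estimator are left to the caller):

    import numpy as np

    def bic(log_likelihood, n_params, n_patients):
        """Bayesian information criterion (lower is better)."""
        return -2.0 * log_likelihood + n_params * np.log(n_patients)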
The results show that, by allowing the event ordering σ to vary across patients, the ALPACA model significantly outperforms the single ordering model (shown in the σ* column) in BIC score for all of the tried settings of θ. Further, we observe that setting θ = 0.1 minimizes the BIC, reflecting the fact, we conjecture, that many of the patients in the ADNI cohort are in the earlier stages of Alzheimer's. The optimal central ordering inferred by the Fonteijn model is σ* = adas|hippovol|hippoatrophy|brainatrophy|abeta|tau|brainvol, while ALPACA infers the central ordering σ_0 = adas|hippovol|abeta|hippoatrophy|tau|brainatrophy|brainvol. Observe that the two event orderings are largely in agreement with each other, with the CSF Aβ42 and CSF tau events shifted to being earlier in the event ordering, which is more consistent with current thinking in neurology [12, 5, 1], which places the two CSF events first. Note that adas is first in both orderings as it was used to classify the patients; thus its position is somewhat artificial. It is surprising that the hippocampal volume and atrophy events are inferred in both models to occur before the CSF events [13], but we believe that this may be due to the significant proportion of misdiagnosed patients in the data. These misdiagnosed patients still have heavy atrophy in the hippocampus, which is a common pathology among many neurological conditions (other dementias and psychiatric disorders), whereas a change in CSF Aβ is much more specific to AD. Future work will adapt the model for robustness to these misdiagnoses and other outliers.
Finally, Fig. 2(e) shows the patient staging result for an example patient from the ADNI data.
The left matrix visualizes the probability that each event will occur in each position of the event
ordering given snapshot data from this patient, while the right histogram visualizes where in the
event ordering the patient was situated when the snapshot was taken.
7 Conclusions
We have developed the Alzheimer's disease Probabilistic Cascades model for event ordering within the Alzheimer's disease cascade. In its most basic form, ALPACA is a simple model with
generative semantics, allowing one to learn the central ordering of events that occur within a disease
progression as well as to quantify the variance of this ordering across patients. Our preliminary
results show that relaxing the notion that a single ordering over events exists for all patients allows
ALPACA to achieve a much better fit to snapshot data from a cohort of Alzheimer's patients.
One of our main contributions is to show how the combinatorial structure of event ordering models
can be exploited for algorithmic efficiency. While exact inference remains intractable for ALPACA,
we have presented a simple MCMC-based procedure which uses dynamic programming as a
subroutine for highly efficient inference.
There may exist biomarkers for Alzheimer's which are more effective than those considered in our
current work for the purposes of patient staging. Identifying such biomarker events remains an
open question crucial to the success of data-driven models of disease cascades. Fortunately, one
of the main advantages of ALPACA lies in its extensibility and modularity. We have discussed
several such possible extensions, from more general measurement models to more general riffle
independent ordering models. Additionally, with the ability to scale gracefully with problem size
as well as to handle noise, we believe that the ALPACA model will be applicable to many other
Alzheimer's datasets as well as datasets for other neurodegenerative diseases.
Acknowledgements
J. Huang is supported by an NSF Computing Innovation Fellowship. The EPSRC supports D. Alexander's work on this topic with grant EP/J020990/01. The authors also thank Dr. Jonathan
Schott, UCL Dementia Centre, and Dr. Jonathan Bartlett, London School of Hygiene and Tropical
Medicine, for preparation of the data and help with interpretation of the results.
³ We note that this assumption is a major oversimplification, as some of the control subjects are likely affected
by some non-AD neurodegenerative disease. Due to these difficulties in obtaining ground truth data, however,
estimating accurate measurement models can sometimes be a limitation.
References
[1] Paul S. Aisen, Ronald C. Petersen, Michael C. Donohue, Anthony Gamst, Rema Raman, Ronald G. Thomas, Sarah Walter, John Q. Trojanowski, Leslie M. Shaw, Laurel A. Beckett, Clifford R. Jack, William Jagust, Arthur W. Toga, Andrew J. Saykin, John C. Morris, Robert C. Green, and Michael W. Weiner. The Alzheimer's Disease Neuroimaging Initiative: progress report and future plans. Alzheimer's & Dementia: The Journal of the Alzheimer's Association, 6(3):239-246, 2010.
[2] Laurel Beckett. Maximum likelihood estimation in Mallows's model using partially ranked data, pages 92-107. New York: Springer-Verlag, 1993.
[3] H. Braak and E. Braak. Neuropathological staging of Alzheimer-related changes. Acta Neuropathol., 82:239-259, 1991.
[4] Ludwig M. Busse, Peter Orbanz, and Joachim Buhmann. Cluster analysis of heterogeneous rank data. In The 24th Annual International Conference on Machine Learning, ICML '07, Corvallis, Oregon, June 2007.
[5] A. Caroli and G. B. Frisoni. The dynamics of Alzheimer's disease biomarkers in the Alzheimer's Disease Neuroimaging Initiative cohort. Neurobiology of Aging, 31(8):1263-1274, 2010.
[6] Harr Chen, S. R. K. Branavan, Regina Barzilay, and David R. Karger. Global models of document structure using latent permutations. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '09, pages 371-379, Stroudsburg, PA, USA, 2009. Association for Computational Linguistics.
[7] Michael Fligner and Joseph Verducci. Multistage ranking models. Journal of the American Statistical Association, 83(403):892-901, 1988.
[8] Hubert M. Fonteijn, Marc Modat, Matthew J. Clarkson, Josephine Barnes, Manja Lehmann, Nicola Z. Hobbs, Rachael I. Scahill, Sarah J. Tabrizi, Sebastien Ourselin, Nick C. Fox, and Daniel C. Alexander. An event-based model for disease progression and its application in familial Alzheimer's disease and Huntington's disease. NeuroImage, 60(3):1880-1889, 2012.
[9] Jonathan Huang. Probabilistic Reasoning and Learning on Permutations: Exploiting Structural Decompositions of the Symmetric Group. PhD thesis, Carnegie Mellon University, 2011.
[10] Jonathan Huang and Carlos Guestrin. Learning hierarchical riffle independent groupings from rankings. In International Conference on Machine Learning (ICML 2010), Haifa, Israel, June 2010.
[11] Jonathan Huang, Ashish Kapoor, and Carlos Guestrin. Efficient probabilistic inference with partial ranking queries. In Conference on Uncertainty in Artificial Intelligence, Barcelona, Spain, July 2011.
[12] Clifford R. Jack, David S. Knopman, William J. Jagust, Leslie M. Shaw, Paul S. Aisen, Michael W. Weiner, Ronald C. Petersen, and John Q. Trojanowski. Hypothetical model of dynamic biomarkers of the Alzheimer's pathological cascade. The Lancet Neurology, 9(1):119-128, January 2010.
[13] Clifford R. Jack, Prashanthi Vemuri, Heather J. Wiste, Stephen D. Weigand, Paul S. Aisen, John Q. Trojanowski, Leslie M. Shaw, Matthew A. Bernstein, Ronald C. Petersen, Michael W. Weiner, and David S. Knopman. Evidence for ordering of Alzheimer disease biomarkers. Archives of Neurology, 2011.
[14] Guy Lebanon and Yi Mao. Non-parametric modeling of partially ranked data. In John C. Platt, Daphne Koller, Yoram Singer, and Sam Roweis, editors, Advances in Neural Information Processing Systems 20, NIPS '07, pages 857-864, Cambridge, MA, 2008. MIT Press.
[15] Tyler Lu and Craig Boutilier. Learning Mallows models with pairwise preferences. In The 28th Annual International Conference on Machine Learning, ICML '11, Bellevue, Washington, June 2011.
[16] Marina Meila, Kapil Phadnis, Arthur Patterson, and Jeff Bilmes. Consensus ranking under the exponential model. Technical Report 515, University of Washington, Statistics Department, April 2007.
[17] Rachael I. Scahill, Jonathan M. Schott, John M. Stevens, Martin N. Rossor, and Nick C. Fox. Mapping the evolution of regional atrophy in Alzheimer's disease: Unbiased analysis of fluid-registered serial MRI. Proceedings of the National Academy of Sciences, 99(7):4703-4707, 2002.
[18] Mark Steyvers, Michael Lee, Brent Miller, and Pernille Hemmer. The wisdom of crowds in the recollection of order information. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1785-1793. 2009.
4,261 | 4,857 | Scalable Influence Estimation in
Continuous-Time Diffusion Networks
Nan Du†    Le Song†    Manuel Gomez-Rodriguez‡    Hongyuan Zha†
† Georgia Institute of Technology    ‡ MPI for Intelligent Systems
[email protected]    [email protected]    [email protected]    [email protected]
Abstract
If a piece of information is released from a media site, can we predict whether
it may spread to one million web pages, in a month ? This influence estimation
problem is very challenging since both the time-sensitive nature of the task and
the requirement of scalability need to be addressed simultaneously. In this paper,
we propose a randomized algorithm for influence estimation in continuous-time
diffusion networks. Our algorithm can estimate the influence of every node in
a network with |V| nodes and |E| edges to an accuracy of using n = O(1/2 )
randomizations and up to logarithmic factors O(n|E|+n|V|) computations. When
used as a subroutine in a greedy influence maximization approach, our proposed
algorithm is guaranteed to find a set of C nodes with the influence of at least
(1 ? 1/e) OPT ?2C, where OPT is the optimal value. Experiments on both
synthetic and real-world data show that the proposed algorithm can easily scale
up to networks of millions of nodes while significantly improves over previous
state-of-the-arts in terms of the accuracy of the estimated influence and the quality
of the selected nodes in maximizing the influence.
1
Introduction
Motivated by applications in viral marketing [1], researchers have been studying the influence maximization problem: find a set of nodes whose initial adoption of a certain idea or product can trigger,
in a time window, the largest expected number of follow-ups. For this purpose, it is essential to accurately and efficiently estimate the number of follow-ups of an arbitrary set of source nodes within
the given time window. This is a challenging problem in that we first need to accurately model the
timing information in cascade data and then design a scalable algorithm to deal with large real-world
networks. Most previous work in the literature tackled the influence estimation and maximization
problems for infinite time window [1, 2, 3, 4, 5, 6]. However, in most cases, influence must be
estimated or maximized up to a given time, i.e., a finite time window must be considered [7]. For
example, a marketer would like to have her advertisement viewed by a million people in one month,
rather than in one hundred years. Such a time-sensitive requirement renders those algorithms which
only consider static information, such as network topologies, inappropriate in this context.
A sequence of recent work has argued that modeling cascade data and information diffusion using
continuous-time diffusion networks can provide significantly more accurate models than discrete-time models [8, 9, 10, 11, 12, 13, 14, 15]. There is a twofold rationale behind this modeling choice.
First, since follow-ups occur asynchronously, continuous variables seem more appropriate to represent them. Artificially discretizing the time axis into bins introduces additional tuning parameters,
like the bin size, which are not easy to choose optimally. Second, discrete time models can only
describe transmission times which obey an exponential density, and hence can be too restricted
to capture the rich temporal dynamics in the data. Extensive experimental comparisons on both
synthetic and real world data showed that continuous-time models yield significant improvement
in settings such as recovering hidden diffusion network structures from cascade data [8, 10] and
predicting the timings of future events [9, 14].
However, estimating and maximizing influence based on continuous-time diffusion models also entail many challenges. First, the influence estimation problem in this setting is a difficult graphical
model inference problem, i.e., computing the marginal density of continuous variables in loopy
graphical models. The exact answer can be computed only for very special cases. For example,
Gomez-Rodriguez et al. [7] have shown that the problem can be solved exactly when the transmission functions are exponential densities, by using continuous time Markov processes theory.
However, the computational complexity of such approach, in general, scales exponentially with the
size and density of the network. Moreover, extending the approach to deal with arbitrary transmission functions would require additional nontrivial approximations which would increase the computational complexity even more. Second, it is unclear how to scale up influence estimation and maximization algorithms based on continuous-time diffusion models to millions of nodes. Especially in
the maximization case, even a naive sampling algorithm for approximate inference is not scalable:
n sampling rounds need to be carried out for each node to estimate the influence, which results in an
overall O(n|V||E|) algorithm. Thus, our goal is to design a scalable algorithm which can perform
influence estimation and maximization in this regime of networks with millions of nodes.
In particular, we propose ConTinEst (Continuous-Time Influence Estimation), a scalable randomized algorithm for influence estimation in a continuous-time diffusion network with heterogeneous
edge transmission functions. The key idea of the algorithm is to view the problem from the perspective of graphical model inference, and to reduce it to a neighborhood estimation problem
in graphs. Our algorithm can estimate the influence of every node in a network with |V| nodes and
|E| edges to an accuracy of ε using n = O(1/ε²) randomizations and up to logarithmic factors
O(n|E| + n|V|) computations. When used as a subroutine in a greedy influence maximization algorithm, our proposed algorithm is guaranteed to find a set of nodes with an influence of at least
(1 − 1/e) OPT − 2Cε, where OPT is the optimal value. Finally, we validate ConTinEst on both
influence estimation and maximization problems over large synthetic and real world datasets. In
terms of influence estimation, ConTinEst is much closer to the true influence and much faster
than other state-of-the-art methods. With respect to influence maximization, ConTinEst allows us to find a set of sources with greater influence than other state-of-the-art methods.
2
Continuous-Time Diffusion Networks
First, we revisit the continuous-time generative model for cascade data in social networks introduced
in [10]. The model associates each edge j → i with a transmission function, f_{ji}(τ_{ji}), a density over
time, in contrast to previous discrete-time models which associate each edge with a fixed infection
probability [1]. Moreover, it also differs from discrete-time models in the sense that events in a
cascade are not generated iteratively in rounds, but event timings are sampled directly from the
transmission function in the continuous-time model.
Continuous-Time Independent Cascade Model. Given a directed contact network, G = (V, E),
we use a continuous-time independent cascade model for modeling a diffusion process [10]. The
process begins with a set of infected source nodes, A, initially adopting certain contagion (idea,
meme or product) at time zero. The contagion is transmitted from the sources along their out-going
edges to their direct neighbors. Each transmission through an edge entails a random spreading time,
τ, drawn from a density over time, f_{ji}(τ). We assume transmission times are independent and
possibly distributed differently across edges. Then, the infected neighbors transmit the contagion
to their respective neighbors, and the process continues. We assume that an infected node remains
infected for the entire diffusion process. Thus, if a node i is infected by multiple neighbors, only the
neighbor that first infects node i will be the true parent. As a result, although the contact network
can be an arbitrary directed network, each cascade (a vector of event timing information from the
spread of a contagion) induces a Directed Acyclic Graph (DAG).
Heterogeneous Transmission Functions. Formally, the transmission function f_{ji}(t_i | t_j) for directed edge j → i is the conditional density of node i getting infected at time t_i given that node j
was infected at time t_j. We assume it is shift invariant: f_{ji}(t_i | t_j) = f_{ji}(τ_{ji}), where τ_{ji} := t_i − t_j,
and nonnegative: f_{ji}(τ_{ji}) = 0 if τ_{ji} < 0. Both parametric transmission functions, such as the exponential and Rayleigh functions [10, 16], and nonparametric functions [8] can be used and estimated
from cascade data (see Appendix A for more details).
Shortest-Path property. The independent cascade model has a useful property we will use later:
given a sample of transmission times of all edges, the time t_i taken to infect a node i is the length
of the shortest path in G from the sources to node i, where the edge weights correspond to the
associated transmission times.
3
Graphical Model Perspectives for Continuous-Time Diffusion Networks
The continuous-time independent cascade model is essentially a directed graphical model for a set of
dependent random variables, the infection times t_i of the nodes, where the conditional independence
structure is supported on the contact network G (see Appendix B for more details). More formally,
the joint density of {t_i}_{i∈V} can be expressed as

    p({t_i}_{i∈V}) = ∏_{i∈V} p(t_i | {t_j}_{j∈π_i}),    (1)
where π_i denotes the set of parents of node i in a cascade-induced DAG, and p(t_i | {t_j}_{j∈π_i}) is the
conditional density of infection t_i at node i given the infection times of its parents.
Instead of directly modeling the infection times t_i, we can focus on the set of mutually independent
random transmission times τ_{ji} = t_i − t_j. Interestingly, by switching from a node-centric view to an
edge-centric view, we obtain a fully factorized joint density of the set of transmission times

    p({τ_{ji}}_{(j,i)∈E}) = ∏_{(j,i)∈E} f_{ji}(τ_{ji}),    (2)
Based on the Shortest-Path property of the independent cascade model, each variable t_i can be
viewed as a transformation of the collection of variables {τ_{ji}}_{(j,i)∈E}.
More specifically, let Q_i be the collection of directed paths in G from the source nodes to node i,
where each path q ∈ Q_i contains a sequence of directed edges (j, l). Assuming all source nodes are
infected at time zero, we obtain variable t_i via

    t_i = g_i({τ_{ji}}_{(j,i)∈E}) := min_{q ∈ Q_i} Σ_{(j,l) ∈ q} τ_{jl},    (3)

where the transformation g_i(·) is the value of the shortest-path minimization. As a special case, we
can now compute the probability of node i being infected before T using a set of independent variables:

    Pr{t_i ≤ T} = Pr( g_i({τ_{ji}}_{(j,i)∈E}) ≤ T ).    (4)
The significance of this relation is that it allows us to transform a problem involving a sequence of
dependent variables {t_i}_{i∈V} into one with independent variables {τ_{ji}}_{(j,i)∈E}. Furthermore, the two
perspectives are connected via the shortest path algorithm in a weighted directed graph, a standard
well-studied operation in graph analysis.
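To make this transformation concrete, the following Python sketch (our illustration, not the authors' code) draws one set of transmission times and recovers the infection times of Eq. (3) as multi-source shortest-path distances; the exponential edge samplers in the toy example are an arbitrary illustrative choice.

import heapq
import random
from collections import defaultdict

def sample_infection_times(edge_samplers, nodes, sources):
    # One joint draw of the transmission times {tau_ji}, one per directed edge.
    adj = defaultdict(list)
    for (j, i), sampler in edge_samplers.items():
        adj[j].append((i, sampler()))
    # Infection times t_i are shortest-path distances from the source set (Eq. (3)).
    t = {v: float("inf") for v in nodes}
    heap = []
    for s in sources:            # all sources are infected at time zero
        t[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        ti, i = heapq.heappop(heap)
        if ti > t[i]:
            continue             # stale heap entry
        for k, tau in adj[i]:    # relax edge i -> k with sampled weight tau_ik
            if ti + tau < t[k]:
                t[k] = ti + tau
                heapq.heappush(heap, (t[k], k))
    return t

# Toy usage with exponential transmission densities (an illustrative choice):
edges = {(0, 1): lambda: random.expovariate(1.0),
         (0, 2): lambda: random.expovariate(0.5),
         (1, 2): lambda: random.expovariate(2.0)}
print(sample_infection_times(edges, nodes=[0, 1, 2], sources=[0]))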
4
Influence Estimation Problem in Continuous-Time Diffusion Networks
Intuitively, given a time window, the wider the spread of infection, the more influential the set of
sources. We adopt the definition of influence as the average number of infected nodes given a set of
source nodes and a time window, as in previous work [7]. More formally, consider a set of C source
nodes A ⊆ V which gets infected at time zero. Then, given a time window T, a node i is infected within
the time window if t_i ≤ T. The expected number of infected nodes (or the influence) given the set
of transmission functions {f_{ji}}_{(j,i)∈E} can be computed as

    σ(A, T) = E[ Σ_{i∈V} I{t_i ≤ T} ] = Σ_{i∈V} E[ I{t_i ≤ T} ] = Σ_{i∈V} Pr{t_i ≤ T},    (5)
where I{·} is the indicator function and the expectation is taken over the set of dependent
variables {t_i}_{i∈V}.
Essentially, the influence estimation problem in Eq. (5) is an inference problem for graphical models,
where the probability of the event t_i ≤ T given sources in A can be obtained by summing out the
possible configurations of the other variables {t_j}_{j≠i}. That is,

    Pr{t_i ≤ T} = ∫_0^∞ ··· ∫_{t_i=0}^{T} ··· ∫_0^∞ ∏_{j∈V} p(t_j | {t_l}_{l∈π_j}) ∏_{j∈V} dt_j,    (6)
which is, in general, a very challenging problem. First, the corresponding directed graphical models
can contain nodes with high in-degree and high out-degree. For example, in Twitter, a user can
follow dozens of other users, and another user can have hundreds of 'followees'. The tree-width
corresponding to this directed graphical model can be very high, and we need to perform integration
for functions involving many continuous variables. Second, the integral in general cannot be evaluated analytically for heterogeneous transmission functions, which means that we need to resort to
numerical integration by discretizing the domain [0, ∞). If we use N levels of discretization for each
variable, we would need to enumerate O(N^{|π_i|}) entries, exponential in the number of parents.
Only in very special cases can one derive a closed-form equation for computing Pr{t_i ≤ T} [7].
However, without further heuristic approximation, the computational complexity of the algorithm
is exponential in the size and density of the network. The intrinsic complexity of the problem
entails the utilization of approximation algorithms, such as mean field algorithms or message passing
algorithms. We will design an efficient randomized (or sampling) algorithm in the next section.
5
Efficient Influence Estimation in Continuous-Time Diffusion Networks
Our first key observation is that we can transform the influence estimation problem in Eq. (5) into a
problem with independent variables. Using the relation in Eq. (4), we have

    σ(A, T) = Σ_{i∈V} Pr( g_i({τ_{ji}}_{(j,i)∈E}) ≤ T ) = E[ Σ_{i∈V} I{ g_i({τ_{ji}}_{(j,i)∈E}) ≤ T } ],    (7)

where the expectation is with respect to the set of independent variables {τ_{ji}}_{(j,i)∈E}. This equivalent
formulation suggests a naive sampling (NS) algorithm for approximating σ(A, T): draw n samples
of {τ_{ji}}_{(j,i)∈E}, run a shortest path algorithm for each sample, and finally average the results (see
Appendix C for more details). However, this naive sampling approach has a computational complexity of O(nC|V||E| + nC|V|² log |V|) due to the repeated calls to the shortest path algorithm.
This is quadratic in the network size, and hence not scalable to millions of nodes.
Our second key observation is that for each sample {τ_{ji}}_{(j,i)∈E}, we are only interested in the neighborhood size of the source nodes, i.e., the summation Σ_{i∈V} I{·} in Eq. (7), rather than in the
individual shortest paths. Fortunately, the neighborhood size estimation problem has been studied
in the theoretical computer science literature. Here, we adapt a very efficient randomized algorithm
by Cohen [17] to our influence estimation problem. This randomized algorithm has a computational
complexity of O(|E| log |V| + |V| log² |V|) and it estimates the neighborhood sizes for all possible
single source node locations. Since it needs to run once for each sample of {τ_{ji}}_{(j,i)∈E}, we obtain
an overall influence estimation algorithm with O(n|E| log |V| + n|V| log² |V|) computation, nearly
linear in the network size. Next we revisit Cohen's algorithm for neighborhood estimation.
5.1 Randomized Algorithm for Single-Source Neighborhood Size Estimation
Given a fixed set of edge transmission times {τ_{ji}}_{(j,i)∈E} and a source node s, infected at time 0, the
neighborhood N(s, T) of a source node s given a time window T is the set of nodes within distance
T from s, i.e.,

    N(s, T) = { i | g_i({τ_{ji}}_{(j,i)∈E}) ≤ T, i ∈ V }.    (8)

Instead of estimating N(s, T) directly, the algorithm assigns an exponentially distributed random label r_i to each network node i. Then, it makes use of the fact that the minimum of a set of
exponential random variables {r_i}_{i∈N(s,T)} will also be an exponential random variable, but with its
parameter equal to the number of variables. That is, if each r_i ∼ exp(−r_i), then the smallest label
within distance T from source s, r_* := min_{i∈N(s,T)} r_i, is distributed as r_* ∼ exp{−|N(s, T)| r_*}.
Suppose we randomize over the labeling m times, and obtain m such least labels, {r_*^u}_{u=1}^m. Then
the neighborhood size can be estimated as

    |N(s, T)| ≈ (m − 1) / Σ_{u=1}^m r_*^u,    (9)
which is shown to be an unbiased estimator of |N(s, T)| [17]. This is an interesting relation since
it allows us to transform the counting problem in (8) into the problem of finding the minimum random
label r_*. The key question is whether we can compute the least label r_* efficiently, given random
labels {r_i}_{i∈V} and any source node s.
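A small numerical check of Eq. (9) (our illustration): the minimum of |N| unit-rate exponential labels is itself exponential with rate |N|, so (m − 1) over the sum of m such minima recovers |N| in expectation.

import random

def estimate_set_size(true_size, m):
    # m independent least labels: each is the minimum of |N| unit-rate exponentials,
    # hence itself exponential with rate |N|.
    r_star = [min(random.expovariate(1.0) for _ in range(true_size))
              for _ in range(m)]
    return (m - 1) / sum(r_star)     # the unbiased estimator of Eq. (9)

random.seed(0)
print(estimate_set_size(true_size=1000, m=5))    # fluctuates around 1000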
Cohen [17] designed a modified Dijkstra's algorithm (Algorithm 1) to construct a data structure
r_*(s), called the least label list, for each node s to support such queries. Essentially, the algorithm starts
with the node i with the smallest label r_i, and then it traverses in breadth-first-search fashion along
the reverse direction of the graph edges to find all reachable nodes. For each reachable node s, the
distance d_* between i and s, and r_i, are added to the end of r_*(s). Then the algorithm moves to the
node i′ with the second smallest label r_{i′}, and similarly finds all reachable nodes. For each reachable
node s, the algorithm compares the current distance d_* between i′ and s with the last recorded
distance in r_*(s). If the current distance is smaller, then the current d_* and r_{i′} are added to the end
of r_*(s). Then the algorithm moves to the node with the third smallest label, and so on. The algorithm
is summarized in Algorithm 1 in Appendix D.
Algorithm 1 returns a list r_*(s) per node s ∈ V, which contains information about the distance to the
smallest reachable labels from s. In particular, each list contains pairs of distance and random label,
(d, r), and these pairs are ordered as

    ∞ > d^{(1)} > d^{(2)} > ... > d^{(|r_*(s)|)} = 0,    (10)
    r^{(1)} < r^{(2)} < ... < r^{(|r_*(s)|)},    (11)

where {·}^{(l)} denotes the l-th element in the list (see Appendix D for an example). If we want to
query the smallest reachable random label r_* for a given source s and a time T, we only need to
perform a binary search on the list for node s:

    r_* = r^{(l)}, where d^{(l−1)} > T ≥ d^{(l)}.    (12)

Finally, to estimate |N(s, T)|, we generate m i.i.d. collections of random labels, run Algorithm 1
on each collection, and obtain m values {r_*^u}_{u=1}^m, which we use in Eq. (9) to estimate |N(s, T)|.
The computational complexity of Algorithm 1 is O(|E| log |V| + |V| log² |V|), with the expected size
of each r_*(s) being O(log |V|). The expected time for querying r_* is then O(log log |V|) using
binary search. Since we need to generate m sets of random labels and run Algorithm 1 m times, the
overall computational complexity for estimating the single-source neighborhood size for all s ∈ V
is O(m|E| log |V| + m|V| log² |V| + m|V| log log |V|). For large-scale networks, and when m ≪
min{|V|, |E|}, this randomized algorithm can be much more efficient than approaches based on
directly calculating the shortest paths.
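The sketch below is our reading of this construction (the paper's own pseudocode lives in Appendix D, which is not reproduced here); it builds the least-label lists with the pruned reverse-Dijkstra passes and answers the query of Eq. (12) by binary search.

import bisect
import heapq
import random
from collections import defaultdict

def build_least_label_lists(tau, nodes):
    # tau: dict (j, i) -> sampled transmission time. Traverse reversed edges so a
    # pass started from node i visits every node that can reach i.
    rev = defaultdict(list)
    for (j, i), w in tau.items():
        rev[i].append((j, w))
    labels = {v: random.expovariate(1.0) for v in nodes}
    lists = {v: [] for v in nodes}   # per node: (distance, label), distance decreasing
    for i in sorted(nodes, key=labels.get):   # process nodes by increasing label
        dist, heap = {i: 0.0}, [(0.0, i)]
        while heap:
            d, s = heapq.heappop(heap)
            if d > dist[s]:
                continue
            if lists[s] and d >= lists[s][-1][0]:
                continue             # a smaller label already reaches s this closely
            lists[s].append((d, labels[i]))
            for j, w in rev[s]:      # expand only when the list was improved (pruning)
                if d + w < dist.get(j, float("inf")):
                    dist[j] = d + w
                    heapq.heappush(heap, (d + w, j))
    return lists

def query_least_label(lst, T):
    # Eq. (12): first entry with distance <= T (distances are strictly decreasing).
    neg = [-d for d, _ in lst]
    idx = bisect.bisect_left(neg, -T)
    return lst[idx][1] if idx < len(lst) else None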
5.2
Constructing Estimation for Multiple-Source Neighborhood Size
When we have a set of sources, A, its neighborhood is the union of the neighborhoods of its constituent sources:

    N(A, T) = ∪_{i∈A} N(i, T).    (13)

This is true because each source independently infects its downstream nodes. Furthermore, to calculate the least label list r_* corresponding to N(A, T), we can simply reuse the least label list r_*(i)
of each individual source i ∈ A. More formally,

    r_* = min_{i∈A} min_{j∈N(i,T)} r_j,    (14)

where the inner minimization can be carried out by querying r_*(i). Similarly, after we obtain m
samples of r_*, we can estimate |N(A, T)| using Eq. (9). Importantly, very little additional work is
needed when we want to calculate r_* for a set of sources A, and we can reuse the work done for a single
source. This is very different from a naive sampling approach, where the sampling process needs to
be done completely anew if we increase the source set. In contrast, using the randomized algorithm,
only an additional constant-time minimization over |A| numbers is needed.
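In code, the multiple-source query of Eq. (14) is a one-line reuse of the single-source lists (continuing the sketch above):

def query_set_least_label(lists, sources, T):
    # Eq. (14): the least label within distance T of the set A is the minimum over
    # per-source queries; no new graph traversal is needed. Each source's own list
    # contains an entry at distance 0, so the query below never returns None.
    return min(query_least_label(lists[i], T) for i in sources)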
5.3
Overall Algorithm
So far, we have achieved efficient estimation of the neighborhood size |N(A, T)| with respect to a
given set of transmission times {τ_{ji}}_{(j,i)∈E}. Next, we estimate the influence by averaging over
multiple sets of samples for {τ_{ji}}_{(j,i)∈E}. More specifically, the relation from (7),

    σ(A, T) = E_{{τ_{ji}}_{(j,i)∈E}}[ |N(A, T)| ] = E_{{τ_{ji}}}[ E_{{r_1,...,r_m}|{τ_{ji}}}[ (m − 1) / Σ_{u=1}^m r_*^u ] ],    (15)

suggests the following overall algorithm.

Continuous-Time Influence Estimation (ConTinEst):
1. Sample n sets of random transmission times {τ_{ij}^l}_{(j,i)∈E} ∼ ∏_{(j,i)∈E} f_{ji}(τ_{ji}).
2. Given a set of {τ_{ij}^l}_{(j,i)∈E}, sample m sets of random labels {r_i^u}_{i∈V} ∼ ∏_{i∈V} exp(−r_i).
3. Estimate σ(A, T) by sample averages: σ(A, T) ≈ (1/n) Σ_{l=1}^n (m − 1) / Σ_{u_l=1}^m r_*^{u_l}.
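A straight-line sketch of the full estimator follows (our illustration, composing the helpers above; a real implementation would store the lists for all nodes once per draw and share them across queries rather than rebuild them):

def contin_est(edge_samplers, nodes, sources, T, n=1000, m=5):
    total = 0.0
    for _ in range(n):                         # outer loop: draws of {tau_ji}
        tau = {e: s() for e, s in edge_samplers.items()}
        r_sum = 0.0
        for _ in range(m):                     # inner loop: fresh label sets
            lists = build_least_label_lists(tau, nodes)
            r_sum += query_set_least_label(lists, sources, T)
        total += (m - 1) / r_sum               # the inner estimate of Eq. (9)
    return total / n                           # the outer average of Eq. (15)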
Importantly, the number of random labels, m, does not need to be very large. Since the estimator
for |N(A, T)| is unbiased [17], the outer loop of averaging over n samples of random
transmission times further reduces the variance of the estimator at a rate of O(1/n). In practice,
we can use a very small m (e.g., 5 or 10) and still achieve good results, which is also confirmed
by our later experiments. Compared to [2], the novel application of Cohen's algorithm arises when
estimating the influence of multiple sources, which drastically reduces the computation by cleverly
reusing the least-label lists built for single sources. Moreover, we have the following theoretical guarantee
(see Appendix E for a proof).
Theorem 1 Draw the following number of samples for the set of random transmission times

    n ≥ (CΛ / ε²) log(2|V| / δ),    (16)

where Λ := max_{A:|A|≤C} 2σ(A, T)²/(m − 2) + 2 Var(|N(A, T)|)(m − 1)/(m − 2) + 2aε/3 and
|N(A, T)| ≤ a, and for each set of random transmission times, draw m sets of random labels. Then
|σ̂(A, T) − σ(A, T)| ≤ ε uniformly for all A with |A| ≤ C, with probability at least 1 − δ.
The theorem indicates that the minimum number of samples, n, needed to achieve a certain accuracy
is related to the actual size of the influence σ(A, T) and to the variance of the neighborhood size
|N(A, T)| over the random draw of samples. The number of random labels, m, drawn in the inner
loop of the algorithm monotonically decreases the dependency of n on σ(A, T). It suffices to
draw a small number of random labels, as long as the value of σ(A, T)²/(m − 2) matches that
of Var(|N(A, T)|). Another implication is that influence at a larger time window T is harder to
estimate, since σ(A, T) will generally be larger and hence require more random labels.
6
Influence Maximization
Once we know how to estimate the influence σ(A, T) for any A ⊆ V and time window T efficiently,
we can use the estimates to find the optimal set of C source nodes A* ⊆ V such that the expected number
of infected nodes in G is maximized at T. That is, we seek to solve

    A* = argmax_{|A| ≤ C} σ(A, T),    (17)

where the set A is the variable. The above optimization problem is NP-hard in general. By construction,
σ(A, T) is a non-negative, monotonically nondecreasing function of the set of source nodes, and it can
be shown that σ(A, T) satisfies a diminishing-returns property called submodularity [7].
A well-known approximation algorithm for maximizing monotonic submodular functions is the greedy
algorithm. It adds nodes to the source node set A sequentially. In step k, it adds the node i which
maximizes the marginal gain σ(A_{k−1} ∪ {i}; T) − σ(A_{k−1}; T). The greedy algorithm finds a source
node set which achieves at least a constant fraction (1 − 1/e) of the optimum [18]. Moreover, lazy
evaluation [5] can be employed to reduce the number of marginal-gain evaluations required per iteration. By
using our influence estimation algorithm in each iteration of the greedy algorithm, we gain the
following additional benefits:
First, at each iteration k, we do not need to rerun the full influence estimation algorithm (Section 5.2).
We just need to store the least label list r_*(i) for each node i ∈ V computed for a single source,
which requires an expected storage size of O(|V| log |V|) overall.
Second, our influence estimation algorithm can be easily parallelized. Its two nested sampling loops
can be parallelized in a straightforward way since the variables are independent of each other. However, in practice, we use a small number of random labels, with m ≪ n. Thus we only need to
parallelize the sampling of the sets of random transmission times {τ_{ji}}. The storage of the least-label lists can also be distributed.
However, by using our randomized algorithm for influence estimation, we also introduce a sampling
error into the greedy algorithm due to the approximation of the influence σ(A, T). Fortunately, the
greedy algorithm is tolerant to such sampling noise, and a well-known result provides a guarantee
for this case (following an argument in [19, Th. 7.9]):
Theorem 2 Suppose the influence σ(A, T) for all A with |A| ≤ C is estimated uniformly with
error ε and confidence 1 − δ. Then the greedy algorithm returns a set of sources Â such that σ(Â, T) ≥
(1 − 1/e) OPT − 2Cε with probability at least 1 − δ.
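For completeness, here is a generic lazy-greedy loop in the style of [5] (our illustration; sigma_hat stands for any influence estimator, e.g. the contin_est sketch above, and the guarantee of Theorem 2 applies when it is a uniform ε-approximation):

import heapq

def greedy_influence_max(sigma_hat, candidates, C):
    # Lazy evaluation exploits submodularity: marginal gains can only shrink,
    # so stale upper bounds are re-checked only when they reach the top.
    A, sigma_A = [], 0.0
    heap = [(-float("inf"), v) for v in candidates]
    heapq.heapify(heap)
    while len(A) < C and heap:
        _, v = heapq.heappop(heap)
        gain = sigma_hat(A + [v]) - sigma_A      # re-evaluate the marginal gain
        if not heap or gain >= -heap[0][0]:      # fresh gain still beats the best bound
            A.append(v)
            sigma_A += gain
        else:
            heapq.heappush(heap, (-gain, v))     # re-insert with the updated bound
    return A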
[Figure 1 panels: (a) Influence vs. time; (b) Error vs. #samples; (c) Error vs. #labels; curves compare NS and ConTinEst. Axis data not reproduced.]
Figure 1: For core-periphery networks with 1,024 nodes and 2,048 edges, (a) estimated influence for increasing time window T, and (b) fixing T = 10, relative error for increasing number of samples with 5 random labels, and (c) for increasing number of random labels with 10,000 random samples.
7
Experiments
We evaluate the accuracy of the estimated influence given by ConTinEst and investigate the performance
of influence maximization on synthetic and real networks. We show that our approach
significantly outperforms the state-of-the-art methods in terms of both speed and solution quality.
Synthetic network generation. We generate three types of Kronecker networks [20]: (i) core-periphery networks (parameter matrix: [0.9 0.5; 0.5 0.3]), which mimic the information diffusion traces in real world networks [21], (ii) random networks ([0.5 0.5; 0.5 0.5]), typically
used in physics and graph theory [22], and (iii) hierarchical networks ([0.9 0.1; 0.1 0.9]) [10].
Next, we assign a pairwise transmission function to every directed edge in each type of network and set its parameters at random. In our experiments, we use the Weibull distribution [16],

    f(t; α, β) = (β/α) (t/α)^{β−1} e^{−(t/α)^β},  t ≥ 0,

where α > 0 is a scale parameter and β > 0 is a shape parameter. The Weibull distribution (Wbl) has often been used to model lifetime events in survival
analysis, providing more flexibility than an exponential distribution [16]. We choose α and β from 0
to 10 uniformly at random for each edge in order to obtain heterogeneous temporal dynamics. Finally,
for each type of Kronecker network, we generate 10 sample networks, each of which has different
α and β chosen for every edge.
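One way to draw such heterogeneous Weibull transmission times (our illustration; numpy's weibull generates the unit-scale distribution, so multiplying by alpha supplies the scale):

import numpy as np

rng = np.random.default_rng(0)
n_edges = 2048
alpha = rng.uniform(0.0, 10.0, size=n_edges)   # per-edge scale parameters
beta = rng.uniform(0.0, 10.0, size=n_edges)    # per-edge shape parameters
tau = alpha * rng.weibull(beta)                # one Weibull(alpha, beta) draw per edge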
Accuracy of the estimated influence. To the best of our knowledge, there is no analytical solution to the influence estimation given Weibull transmission functions. Therefore, we compare ConTinEst with the Naive Sampling (NS) approach (see Appendix C) by considering the highest degree
node in a network as the source, and draw 1,000,000 samples for NS to obtain near ground truth.
Figure 1(a) compares ConTinEst with the ground truth provided by NS at different time windows
T, from 0.1 to 10, in core-periphery networks. For ConTinEst, we generate up to 10,000 random samples (or sets of random waiting times), and 5 random labels in the inner loop. In all three
networks, the estimation provided by ConTinEst fits the ground truth accurately, and the relative error decreases quickly as we increase the number of samples and labels (Figures 1(b) and 1(c)). For
10,000 random samples with 5 random labels, the relative error is smaller than 0.01 (see Appendix F
for additional results on the random and hierarchical networks).
Scalability. We compare ConTinEst to the state-of-the-art method Influmax [7] and the Naive
Sampling (NS) method in terms of runtime for continuous-time influence estimation and maximization. For ConTinEst, we draw 10,000 samples in the outer loop, each having 5 random labels
in the inner loop. For NS, we also draw 10,000 samples. The first two experiments are carried out
on a single 2.4 GHz processor. First, we compare the performance of selecting an increasing number of sources
(from 1 to 10) on small core-periphery networks (Figure 2(a)). When the number of selected sources
is 1, the different algorithms essentially spend their time estimating the influence of each node. ConTinEst
outperforms the other methods by an order of magnitude, and for more than one source, it can
efficiently reuse computations for estimating the influence of individual nodes. Dashed lines mean that
a method did not finish in 24 hours, and the estimated run time is plotted. Next, we compare the run
time for selecting 10 sources on core-periphery networks of 128 nodes with increasing densities (or
numbers of edges) (Figure 2(b)). Again, Influmax and NS are orders of magnitude slower due to
their respective exponential and quadratic computational complexity in the network density. In contrast,
the run time of ConTinEst only increases slightly with increasing density since its computational complexity is linear in the number of edges (see Appendix F for additional results on the
random and hierarchical networks). Finally, we evaluate the speed on large core-periphery networks,
ranging from 100 to 1,000,000 nodes with density 1.5, in Figure 2(c). We report the parallel run time
[Figure 2 panels: (a) Run time vs. #sources; (b) Run time vs. network density; (c) Run time vs. #nodes; curves compare ConTinEst, NS and Influmax, with dashed levels marking runs estimated beyond 24 or 48 hours. Axis data not reproduced.]
Figure 2: For core-periphery networks with T = 10, runtime for (a) selecting an increasing number of sources in networks of 128 nodes and 320 edges; (b) selecting 10 sources in networks of 128 nodes with increasing density; and (c) selecting 10 sources with network size increasing from 100 to 1,000,000 at fixed density 1.5.
[Figure 3 panels: (a) Influence estimation error (MAE); (b) Influence vs. #sources; (c) Influence vs. time; curves compare ConTinEst(Wbl), Greedy(IC), Greedy(LT), SP1M and PMIA. Axis data not reproduced.]
Figure 3: In the MemeTracker dataset, (a) comparison of the accuracy of the estimated influence in terms of mean absolute error, (b) comparison of the influence of the selected nodes fixing the observation window T = 5 and varying the number of sources, and (c) comparison of the influence of the selected nodes fixing the number of sources to 50 and varying the time window.
only for ConTinEst and NS (both implemented with MPI running on 192 cores at 2.4 GHz) since
Influmax is not scalable. In contrast to NS, the run time of ConTinEst increases only linearly
with the network size, and the method can easily scale up to one million nodes.
Real-world data. We first quantify how well each method can estimate the true influence in a
real-world dataset. Then, we evaluate the solution quality of the selected sources for influence maximization. We use the MemeTracker dataset [23], which has 10,967 hyperlink cascades among 600
media sites. We repeatedly split all cascades into an 80% training set and a 20% test set at random
five times. On each training set, we learn the continuous-time model using NetRate [10] with
exponential transmission functions. For the discrete-time models, we learn the infection probabilities
using [24] for IC, SP1M and PMIA. Similarly, for LT, we follow the methodology of [1]. Let C(u)
be the set of all cascades where u was the source node. Based on C(u), the total number of distinct
nodes infected before T quantifies the real influence of node u up to time T. In Figure 3(a), we
report the Mean Absolute Error (MAE) between the real and the estimated influence. Clearly, ConTinEst performs the best statistically. Because the lengths of real cascades empirically conform
to a power-law distribution where most cascades are very short (2-4 nodes), the gap in estimation error is relatively small. However, we emphasize that such accuracy improvement is critical
for maximizing long-term influence. The estimation errors for individuals accumulate along the
spreading paths. Hence, any consistent improvement in influence estimation can lead to significant
improvement in the overall influence estimation and maximization task, which is further confirmed
by Figures 3(b) and 3(c), where we evaluate the influence of the selected nodes in the same spirit as
influence estimation: the true influence is calculated as the total number of distinct nodes infected
before T based on C(u) of the selected nodes. The selected sources given by ConTinEst achieve
the best performance as we vary the number of selected sources and the observation time window.
8
Conclusions
We propose a randomized influence estimation algorithm for continuous-time diffusion networks
which can scale up to networks of millions of nodes while significantly improving over previous state-of-the-art methods in terms of the accuracy of the estimated influence and the quality of the selected nodes
in maximizing the influence. In future work, it will be interesting to apply the current algorithm
to other tasks like influence minimization and manipulation, and design scalable algorithms for
continuous-time models other than the independent cascade model.
Acknowledgement: Our work is supported by NSF/NIH BIGDATA 1R01GM108341-01, NSF
IIS1116886, NSF IIS1218749, NSFC 61129001, a DARPA XDATA grant and a Raytheon Faculty Fellowship of Gatech.
References
[1] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In KDD, pages 137-146, 2003.
[2] Wei Chen, Yajun Wang, and Siyu Yang. Efficient influence maximization in social networks. In KDD, pages 199-208, 2009.
[3] Wei Chen, Yifei Yuan, and Li Zhang. Scalable influence maximization in social networks under the linear threshold model. In ICDM, pages 88-97, 2010.
[4] Amit Goyal, Francesco Bonchi, and Laks V. S. Lakshmanan. A data-based approach to social influence maximization. Proc. VLDB Endow., 5, 2011.
[5] Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne M. VanBriesen, and Natalie S. Glance. Cost-effective outbreak detection in networks. In KDD, pages 420-429, 2007.
[6] Matthew Richardson and Pedro Domingos. Mining knowledge-sharing sites for viral marketing. In KDD, pages 61-70, 2002.
[7] Manuel Gomez-Rodriguez and Bernhard Schölkopf. Influence maximization in continuous time diffusion networks. In ICML, 2012.
[8] Nan Du, Le Song, Alexander J. Smola, and Ming Yuan. Learning networks of heterogeneous influence. In NIPS, 2012.
[9] Nan Du, Le Song, Hyenkyun Woo, and Hongyuan Zha. Uncover topic-sensitive information diffusion networks. In AISTATS, 2013.
[10] Manuel Gomez-Rodriguez, David Balduzzi, and Bernhard Schölkopf. Uncovering the temporal dynamics of diffusion networks. In ICML, pages 561-568, 2011.
[11] Manuel Gomez-Rodriguez, Jure Leskovec, and Bernhard Schölkopf. Structure and dynamics of information pathways in on-line media. In WSDM, 2013.
[12] Ke Zhou, Le Song, and Hongyuan Zha. Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. In AISTATS, 2013.
[13] Ke Zhou, Hongyuan Zha, and Le Song. Learning triggering kernels for multi-dimensional Hawkes processes. In ICML, 2013.
[14] Manuel Gomez-Rodriguez, Jure Leskovec, and Bernhard Schölkopf. Modeling information propagation with survival theory. In ICML, 2013.
[15] Shuanghong Yang and Hongyuan Zha. Mixture of mutually exciting processes for viral diffusion. In ICML, 2013.
[16] Jerald F. Lawless. Statistical Models and Methods for Lifetime Data. Wiley-Interscience, 2002.
[17] Edith Cohen. Size-estimation framework with applications to transitive closure and reachability. Journal of Computer and System Sciences, 55(3):441-453, 1997.
[18] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1), 1978.
[19] Andreas Krause. Ph.D. Thesis. CMU, 2008.
[20] Jure Leskovec, Deepayan Chakrabarti, Jon M. Kleinberg, Christos Faloutsos, and Zoubin Ghahramani. Kronecker graphs: An approach to modeling networks. JMLR, 11, 2010.
[21] Manuel Gomez-Rodriguez, Jure Leskovec, and Andreas Krause. Inferring networks of diffusion and influence. In KDD, 2010.
[22] David Easley and Jon Kleinberg. Networks, Crowds, and Markets: Reasoning About a Highly Connected World. Cambridge University Press, 2010.
[23] Jure Leskovec, Lars Backstrom, and Jon M. Kleinberg. Meme-tracking and the dynamics of the news cycle. In KDD, 2009.
[24] Praneeth Netrapalli and Sujay Sanghavi. Learning the graph of epidemic cascades. In SIGMETRICS/PERFORMANCE, pages 211-222. ACM, 2012.
[25] Wei Chen, Chi Wang, and Yajun Wang. Scalable influence maximization for prevalent viral marketing in large-scale social networks. In KDD '10, pages 1029-1038, 2010.
4,262 | 4,858 | Adaptive Anonymity via b-Matching
Krzysztof Choromanski
Columbia University
[email protected]
Tony Jebara
Columbia University
[email protected]
Kui Tang
Columbia University
[email protected]
Abstract
The adaptive anonymity problem is formalized where each individual shares their
data along with an integer value to indicate their personal level of desired privacy.
This problem leads to a generalization of k-anonymity to the b-matching setting.
Novel algorithms and theory are provided to implement this type of anonymity.
The relaxation achieves better utility, admits theoretical privacy guarantees that
are as strong, and, most importantly, accommodates a variable level of anonymity
for each individual. Empirical results confirm improved utility on benchmark and
social data-sets.
1 Introduction
In many situations, individuals wish to share their personal data for machine learning applications
and other exploration purposes. If the data contains sensitive information, it is necessary to protect
it with privacy guarantees while maintaining some notion of data utility [18, 2, 24]. There are
various definitions of privacy. These include k-anonymity [19], l-diversity [16], t-closeness [14]
and differential privacy1 [3, 22]. All these privacy guarantees fundamentally treat each contributed
datum about an individual equally. However, the acceptable anonymity and comfort-level of each
individual in a population can vary widely. This article explores the adaptive anonymity setting and
shows how to generalize the k-anonymity framework to handle it. Other related approaches have
been previously explored [20, 21, 15, 5, 6, 23] yet herein we contribute novel efficient algorithms
and formalize precise privacy guarantees. Note also that there are various definitions of utility. This
article focuses on the use of suppression since it is well-formalized. Therein, we hide certain values
in the data-set by replacing them with a ∗ symbol (fewer ∗ symbols indicate higher utility). The
overall goal is to maximize utility while preserving each individual's level of desired privacy.
This article is organized as follows. §2 formalizes the adaptive anonymity problem and shows
how k-anonymity does not handle it. This leads to a relaxation of k-anonymity into symmetric
and asymmetric bipartite regular compatibility graphs. §3 provides algorithms for maximizing
utility under these relaxed privacy criteria. §4 provides theorems that ensure the privacy of these
relaxed criteria for uniform anonymity as well as for adaptive anonymity. §5 shows experiments on
benchmark and social data-sets. Detailed proofs are provided in the Supplement.
2 Adaptive anonymity and necessary relaxations to k-anonymity
The adaptive anonymity problem considers a data-set X ∈ Z^{n×d} consisting of n ∈ N observations
{x_1, ..., x_n}, each of which is a d-dimensional discrete vector; in other words, x_i ∈ Z^d. Each user
i contributes an observation vector x_i which contains discrete attributes pertaining to that user2.
Furthermore, each user i provides an adaptive anonymity parameter δ_i ∈ N they desire to keep
when the database is released. Given such a data-set and anonymity parameters, we wish to output
an obfuscated data-set denoted by Y ∈ {Z ∪ ∗}^{n×d} which consists of vectors {y_1, ..., y_n} where
2
Differential privacy often requires specifying the data application (e.g. logistic regression) in advance [4].
For instance, a vector can contain a user?s gender, race, height, weight, age, income bracket and so on.
1
yi (k) ? {xi (k), ?}. The star symbol ? indicates that the k?th attribute has been masked in the i?th
user-record. We say that vector xi is compatible with vector yj if xi (k) = yj (k) for all elements of
yj (k) $= ?. The goal of this article is to create a Y which contains a minimal number of ? symbols
such that each entry yi of Y is compatible with at least ?i entries of X and vice-versa.
The most pervasive method for anonymity in the released data is the k-anonymity method [19, 1].
However, it is actually more constraining than the above desiderata. If all users have the same value
?i = k, then k-anonymity suppresses data in the database such that, for each user?s data vector in the
released (or anonymized) database, there are at least k ? 1 identical copies in the released database.
The existence of copies is used by k-anonymity to justify some protection to attack.
We will show that the idea of k − 1 copies can be understood as forming a compatibility graph between the original database and the released database which is composed of several fully-connected
k-cliques. However, rather than guaranteeing copies or cliques, the anonymity problem can be
relaxed into a k-regular compatibility graph to achieve nearly identical resilience to attack. More interestingly, this relaxation naturally allows users to select different δ_i anonymity values, or degrees in
the compatibility graph, and thereby achieve their desired personal protection levels.
Why can't k-anonymity handle heterogeneous anonymity levels δ_i? Consider the case where the
population contains many liberal users with very low anonymity levels yet one single paranoid user
(user i) who wants maximal anonymity with δ_i = n. In the k-anonymity framework, that user
will require n − 1 identical copies of his data in the released database. Thus, a single paranoid user
will destroy all the information in the database, which will merely contain completely redundant
vectors. We propose a b-matching relaxation of k-anonymity which prevents this degeneracy
since it does not handle compatibility queries merely by creating copies in the released data.
While k-anonymity is not the only criterion for privacy, there are situations in which it is sufficient
as illustrated by the following scenario. First assume the data-set X is associated with a set of
identities (or usernames) and Y is associated with a set of keys. A key may be the user?s password
or some secret information (such as their DNA sequence). Represent the usernames and keys using
integers x1 , . . . , xn and y1 , . . . , yn , respectively. Username xi ? Z is associated with entry xi and
key yj ? Z is associated with entry yj . Furthermore, assume that these usernames and keys are
diverse, unique and independent of their corresponding attributes. These x and y values are known
as the sensitive attributes and the entries of X and Y are the non-sensitive attributes [16]. We aim to
release an obfuscated database Y and its keys with the possibility that an adversary may have access
to all or a subset of X and the identities.
The goal is to ensure that the success of an attack (using a username-key pair) is low. In other
words, the attack succeeds with probability no larger than 1/δ_i for a user who specified δ_i ∈ N.
Thus, the attack we seek to protect against is the use of the data to match usernames to keys (rather
than attacks in which additional non-sensitive attributes about a user are discovered). In the uniform
δ_i setting, k-anonymity guarantees that a single one-time attack using a single username-key pair
succeeds with probability at most 1/k. In the extreme case, it is easy to see that replacing all of Y
with ∗ symbols will result in an attack success probability of 1/n if the adversary attempts a single
random attack-pair (username and key). Meanwhile, releasing a database Y = X with keys could
allow the adversary to succeed with an initial attack with probability 1.
We first assume that all degrees δ_i are constant and set to b, and discuss how the proposed b-matching
privacy output subtly differs from standard k-anonymity [19]. First, define quasi-identifiers as sets
of attributes like gender and age that can be linked with external data to uniquely identify an individual in the population. The k-anonymity criterion says that a data-set such as Y is protected against
linking attacks that exploit quasi-identifiers if every element is indistinguishable from at least k − 1
other elements with respect to every set of quasi-identifier attributes. We will instead use a compatibility graph G to more precisely characterize how elements are indistinguishable in the data-sets and
which entries of Y are compatible with entries in the original data-set X. The graph places edges
between entries of X which are compatible with entries of Y. Clearly, G is an undirected bipartite
graph containing two equal-sized partitions (or color-classes) of nodes A and B each of cardinality
n where A = {a1 , . . . , an } and B = {b1 , . . . , bn }. Each element of A is associated with an entry of
X and each element of B is associated with an entry of Y. An edge e = (i, j) ? G that is adjacent
to a node in A and a node in B indicates that the entries xi and yj are compatible. The absence of
an edge means nothing: entries are either compatible or not compatible.
For δi = δ, b-matching produces δ-regular bipartite graphs G while k-anonymity produces δ-regular
clique-bipartite graphs³ defined as follows.
Definition 2.1 Let G(A, B) be a bipartite graph with color classes A, B, where A =
{a1, ..., an}, B = {b1, ..., bn}. We call a k-regular bipartite graph G(A, B) a clique-bipartite
graph if it is a union of pairwise disjoint and nonadjacent complete k-regular bipartite graphs.
Denote by Gb^{n,δ} the family of δ-regular bipartite graphs with n nodes. Similarly, denote by Gk^{n,δ}
the family of δ-regular clique-bipartite graphs. We will also denote by Gs^{n,δ} the family of
symmetric δ-regular graphs using the following definition of symmetry.
Definition 2.2 Let G(A, B) be a bipartite graph with color classes A, B, where A =
{a1, ..., an}, B = {b1, ..., bn}. We say that G(A, B) is symmetric if the existence of an edge (ai, bj)
in G(A, B) implies the existence of an edge (aj, bi), where 1 ≤ i, j ≤ n.
For values of n that are not trivially small, it is easy to see that the graph families satisfy
Gk^{n,δ} ⊆ Gs^{n,δ} ⊆ Gb^{n,δ}. This holds since symmetric δ-regular graphs are δ-regular with the additional
symmetry constraint, while clique-bipartite graphs are δ-regular graphs constrained to be clique-bipartite,
and the latter property automatically yields symmetry.
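These membership conditions are easy to verify computationally. The following minimal Python sketch (the dense 0/1 biadjacency representation and function names are our own illustration, not part of the paper's algorithms) checks δ-regularity and the symmetry of Definition 2.2:

import numpy as np

def delta_regularity(G):
    # Return delta if the 0/1 biadjacency matrix G is delta-regular, else None.
    row, col = G.sum(axis=1), G.sum(axis=0)
    if (row == row[0]).all() and (col == row[0]).all():
        return int(row[0])
    return None

def is_symmetric(G):
    # Definition 2.2: edge (a_i, b_j) implies edge (a_j, b_i), i.e. G equals its transpose.
    return np.array_equal(G, G.T)

# A clique-bipartite delta-regular graph passes both checks; a graph in Gs passes the
# symmetry check; a graph in Gb only needs regularity -- illustrating Gk ⊆ Gs ⊆ Gb.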
This article introduces the graph families Gb^{n,δ} and Gs^{n,δ} to enforce privacy since these are relaxations
of the family Gk^{n,δ} previously explored in k-anonymity research. These relaxations achieve
better utility in the released database. Furthermore, they allow us to permit adaptive anonymity
levels across the users in the database. We will drop the superscripts n and δ whenever the meaning
is clear from the context. Additional properties of these graph families will be formalized in §4, but
we first informally illustrate how they are useful in achieving data privacy.
[Figure 1 graphic: usernames alice–fred with their binary records X, the clique-bipartite compatibility graph, and the anonymized records Y (with ∗ suppressions) and keys ggacta, tacaga, ctagag, tatgaa, caacgc, tgttga.]
Figure 1: Traditional k-anonymity (in Gk) for n = 6, d = 4, δ = 2 achieves #(∗) = 10. Left to
right: usernames with data (x, X), compatibility graph (G) and anonymized data with keys (Y, y).
[Figure 2 graphic: the same usernames and binary records with the b-matching compatibility graph and the anonymized records Y (with ∗ suppressions) and keys.]
Figure 2: The b-matching anonymity (in Gb) for n = 6, d = 4, δ = 2 achieves #(∗) = 8. Left to
right: usernames with data (x, X), compatibility graph (G) and anonymized data with keys (Y, y).
In figure 1, we see an example of k-anonymity with a graph from Gk. Here each entry of the
anonymized data-set Y appears k = 2 times (or δ = 2). The compatibility graph shows 3 fully
connected cliques since each of the k copies in Y has identical entries. By brute force exploration
we find that the minimum number of stars to achieve this type of anonymity is #(∗) = 10. Moreover, since this problem is NP-hard [17], efficient algorithms rarely achieve this best-possible utility
(minimal number of stars).
³Traditional k-anonymity releases an obfuscated database of n rows where there are k copies of each row.
So, each copy has the same neighborhood. Similarly, the entries of the original database all have to be connected
to the same k copies in the obfuscated database. This induces a so-called bipartite clique-connectivity.
Next, consider figure 2 where we have achieved superior utility by introducing only #(∗) = 8 stars
to form Y. The compatibility graph is at least δ = 2-regular. It was possible to find a smaller
number of stars since δ-regular bipartite graphs are a relaxation of k-clique graphs as shown in
figure 1. Another possibility (not shown in the figures) is a symmetric version of figure 2 where
nodes on the left hand side and nodes on the right hand side have a symmetric connectivity. Such an
intermediate solution (since Gk ⊆ Gs ⊆ Gb) should potentially achieve #(∗) between 8 and 10.
It is easy to see why all graphs have to have a minimum degree of at least δ (i.e. must contain a
δ-regular graph). If one of the nodes has a degree of 1, then the adversary will know the key (or the
username) for that node with certainty. If each node has degree δ or larger, then the adversary will
have probability at most 1/δ of choosing the correct key (or username) for any random victim.
We next describe algorithms which accept X and integers δ1, . . . , δn and output Y such that each
entry i in Y is compatible with at least δi entries in X and vice-versa. These algorithms operate by
finding a graph in Gb or Gs and achieve similar protection as k-anonymity (which finds a graph in
the most restrictive family Gk and therefore requires more stars). We provide a theoretical analysis
of the topology of G in these two new families to show resilience to single and sustained attacks
from an all-powerful adversary.
3 Approximation algorithms
While the k-anonymity suppression problem is known to be NP-hard, a polynomial time method
with an approximation guarantee is the forest algorithm [1], which has an approximation ratio of
3k − 3. In practice, though, the forest algorithm is slow and achieves poor utility compared to clustering
methods [10]. We provide an algorithm for the b-matching anonymity problem with approximation
ratio δ and runtime of O(δm√n), where n is the number of users in the data, δ is the largest
anonymity level in {δ1, . . . , δn} and m is the number of edges to explore (in the worst case with
no prior knowledge, we have m = O(n²) edges between all possible users). One algorithm solves
for minimum weight bipartite b-matchings, which is easy to implement using linear programming,
max-flow methods or belief propagation in the bipartite case [9, 11]. The other algorithm uses a
general non-bipartite solver which involves Blossom structures and requires O(δmn log(n)) time [8,
9, 13]. Fortunately, minimum weight general matching has recently been shown to require only
O(mϵ⁻¹ log ϵ⁻¹) time to achieve a (1 − ϵ) approximation [7].
First, we define two quantities of interest. Given a graph with adjacency matrix G ∈ B^{n×n} and a
data-set X, the Hamming error is defined as h(G) = Σ_i Σ_j G_ij Σ_k 1[X_ik ≠ X_jk]. The number of
stars to achieve G is s(G) = nd − Σ_i Σ_k Π_j (1 − G_ij 1[X_ik ≠ X_jk]).
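Both quantities can be computed directly; here is a small numpy sketch (our own vectorization, assuming X is a 0/1 data matrix and G a dense 0/1 adjacency matrix):

import numpy as np

def hamming_error(G, X):
    # h(G): each edge (i, j) pays the number of columns where rows i and j disagree.
    D = (X[:, None, :] != X[None, :, :]).sum(axis=2)
    return int((G * D).sum())

def num_stars(G, X):
    # s(G): entry (i, k) must be starred iff some neighbour j of i disagrees at column k.
    disagree = X[:, None, :] != X[None, :, :]            # shape (n, n, d)
    starred = (G[:, :, None].astype(bool) & disagree).any(axis=1)
    return int(starred.sum())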
Recall Gb is the family of regular bipartite graphs. Let min_{G∈Gb} s(G) be the minimum number of
stars (or suppressions) that one can place in Y while keeping the entries in Y compatible with at
least δ entries in X and vice-versa. We propose the following polynomial time algorithm which,
in its first iteration, minimizes h(G) over the family Gb and then iteratively minimizes a variational
upper bound [12] on s(G) using a weighted version of the Hamming distance.
Algorithm 1 variational bipartite b-matching
Input X ∈ Z^{n×d}, δi ∈ N for i ∈ {1, . . . , n}, ϵ > 0; initialize W ∈ R^{n×d} to the all-ones matrix
While not converged {
    Set Ĝ = argmin_{G∈B^{n×n}} Σ_ij G_ij Σ_k W_ik 1[X_ik ≠ X_jk]   s.t.  Σ_j G_ij = Σ_j G_ji ≥ δi
    For all i and k set W_ik = exp( Σ_j Ĝ_ij 1[X_ik ≠ X_jk] ln(ϵ/(1+ϵ)) )
}
For all i and k set Y_ik = ∗ if Ĝ_ij = 1 and X_jk ≠ X_ik for any j
Choose a random permutation M as matrix M ∈ B^{n×n} and output Y_public = MY
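For small instances, the inner degree-constrained minimization can be illustrated with an off-the-shelf LP relaxation in place of the exact max-flow or belief-propagation solvers cited above. The sketch below (our own, using scipy; it returns a fractional matching rather than an exact integral one) also implements the variational reweighting step:

import numpy as np
from scipy.optimize import linprog

def b_matching_lp(cost, deltas):
    # LP relaxation of: min_G sum_ij G_ij cost_ij
    # s.t. sum_j G_ij = sum_j G_ji >= delta_i and 0 <= G_ij <= 1.
    n = cost.shape[0]
    A_ub = np.zeros((n, n * n))                # -row_degree(i) <= -delta_i
    A_eq = np.zeros((n, n * n))                # row_degree(i) - col_degree(i) = 0
    for i in range(n):
        A_ub[i, i * n:(i + 1) * n] = -1.0
        A_eq[i, i * n:(i + 1) * n] += 1.0
        A_eq[i, i::n] -= 1.0
    res = linprog(cost.ravel(), A_ub=A_ub, b_ub=-np.asarray(deltas, float),
                  A_eq=A_eq, b_eq=np.zeros(n), bounds=(0.0, 1.0), method="highs")
    return res.x.reshape(n, n)

def reweight(G, X, eps):
    # W_ik <- exp( sum_j G_ij 1[X_ik != X_jk] * ln(eps / (1 + eps)) )
    disagree = (X[:, None, :] != X[None, :, :]).astype(float)
    S = np.einsum("ij,ijd->id", G, disagree)
    return np.exp(S * np.log(eps / (1.0 + eps)))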
We can further restrict the b-matching solver such that the graph G is symmetric with respect to both
the original data X and the obfuscated data Y. To do so, we require that G is a symmetric matrix.
This will produce a graph G ∈ Gs. In such a situation, the value of Ĝ is recovered by a general
unipartite b-matching algorithm rather than a bipartite b-matching program. Thus, the set of possible
output solutions is strictly smaller (the bipartite formulation relaxes the symmetric one).
Algorithm 2 variational symmetric b-matching
Input X ∈ Z^{n×d}, δi ∈ N for i ∈ {1, . . . , n}, ϵ > 0; initialize W ∈ R^{n×d} to the all-ones matrix
While not converged {
    Set Ĝ = argmin_{G∈B^{n×n}} Σ_ij G_ij Σ_k W_ik 1[X_ik ≠ X_jk]   s.t.  Σ_j G_ij ≥ δi,  G_ij = G_ji
    For all i and k set W_ik = exp( Σ_j Ĝ_ij 1[X_ik ≠ X_jk] ln(ϵ/(1+ϵ)) )
}
For all i and k set Y_ik = ∗ if Ĝ_ij = 1 and X_jk ≠ X_ik for any j
Choose a random permutation M as matrix M ∈ B^{n×n} and output Y_public = MY
Theorem 1 For δi = δ, iteration #1 of algorithm 1 finds Ĝ such that s(Ĝ) ≤ δ · min_{G∈Gb} s(G).
Theorem 2 Each iteration of algorithm 1 monotonically decreases s(Ĝ).
Theorems 1 and 2 apply to both algorithms. Both algorithms⁴ manipulate a bipartite regular graph
G(A, B) containing the true matching {(a1, b1), . . . , (an, bn)}. However, they ultimately release the
data-set Y_public after randomly shuffling Y according to some matching or permutation M which
hides the true matching. The random permutation or matching M can be represented as a matrix
M ∈ B^{n×n} or as a function σ : {1, . . . , n} → {1, . . . , n}. We now discuss how an adversary can
attack privacy by recovering this matching or parts of it.
4 Privacy guarantees
We now characterize the anonymity provided by a compatibility graph G ∈ Gb (or G ∈ Gs) under
several attack models. The goal of the adversary is to correctly match people to as many records as
possible. In other words, the adversary wishes to find the random matching M used in the algorithms
(or parts of M) to connect the entries of X to the entries of Y_public (assuming the adversary has
stolen X and Y_public or portions of them). More precisely, we have a bipartite graph G(A, B) with
color classes A, B, each of size n. Class A corresponds to n usernames and class B to n keys. Each
username in A is matched to its key in B through some unknown matching M.
We consider the model where the graph G(A, B) is δ-regular, where δ ∈ N is a parameter chosen by
the publisher. The latter is especially important if we are interested in guaranteeing different levels
of privacy for different users and allowing δ to vary with the user's index i.
Sometimes it is the case that the adversary has some additional information and at the very beginning
knows some complete records that belong to some people. In graph-theoretic terms, the adversary
thus knows parts of the hidden matching M in advance. Alternatively, the adversary may have
come across such additional information through sustained attack where previous attempts revealed
the presence or absence of an edge. We are interested in analyzing how this extra knowledge can
help him further reveal other edges of the matching. We aim to show that, for some range of the
parameters of the bipartite graphs, this additional knowledge does not help him much. We will
compare the resilience to attack relative to the resilience of k-anonymity. We say that a person v is
k-anonymous if his or her real data record can be confused with at least k − 1 records from different
people. We first discuss the case of single attacks and then discuss sustained attacks.
4.1 One-Time Attack Guarantees
Assume first that the adversary has no extra information about the matching and performs a one-time
attack. Then, lemma 4.1 holds, which is a direct implication of lemma 4.2.
Lemma 4.1 If G(A, B) is an arbitrary δ-regular graph and the adversary does not know any edges
of the matching he is looking for, then every person is δ-anonymous.
⁴It is straightforward to put a different weight on certain suppressions over others if the utility of the data
is not uniform for each entry or bit. This is done by using an n × d weight matrix in the optimization. It is also
straightforward to handle missing data by allowing initial stars in X before anonymizing.
Lemma 4.2 Let G(A, B) be a δ-regular bipartite graph. Then for every edge e of G(A, B) there
exists a perfect matching in G(A, B) that uses e.
The result does not assume any structure in the graph beyond its δ-regularity. Thus, for a single
attack, b-matching anonymity (symmetric or asymmetric) is equivalent to k-anonymity when b = k.
Corollary 4.1 Assume the bipartite graph G(A, B) is either δ-regular, symmetric δ-regular or
clique-bipartite and δ-regular. An adversary attacking G once succeeds with probability ≤ 1/δ.
4.2 Sustained Attack on k-Cliques
Now consider the situation of sustained attacks or attacks with prior information. Here, the adversary may know c ∈ N edges in M a priori by whatever means (previous attacks or through side
information). We begin by analyzing the resilience of k-anonymity where G is a cliques-structured
graph. In the clique-bipartite graph, even if the adversary knows some edges of the matching (but
not too many) then there still is hope of good anonymity for all people. The anonymity of every
person decreases from δ to at least (δ − c). So, for example, if the adversary knows in advance δ/2
edges of the matching then we get the same type of anonymity for every person as for the model
with two times smaller degree in which the adversary has no extra knowledge. So we will be able to
show the following:
Lemma 4.3 If G(A, B) is a clique-bipartite δ-regular graph and the adversary knows in advance c
edges of the matching, then every person is (δ − c)-anonymous.
The above is simply a consequence of the following lemma.
Lemma 4.4 Assume that G(A, B) is a clique-bipartite δ-regular graph. Denote by M some perfect
matching in G(A, B). Let C be some subset of the edges of M and let c = |C|. Fix some vertex
v ∈ A not matched in C. Then there are at least (δ − c) edges adjacent to v such that, for each of
these edges e, there exists some perfect matching M_e in G(A, B) that uses both e and C.
Corollary 4.2 Assume graph G(A, B) is clique-bipartite and δ-regular. Assume that the adversary knows in advance c edges of the matching. The adversary selects uniformly at random a vertex
the privacy of which he wants to break from the set of vertices he does not know in advance. Then
he succeeds with probability at most 1/(δ − c).
We next show that b-matchings achieve comparable resilience under sustained attack.
4.3 Sustained attack on asymmetric bipartite b-matching
We now consider the case where we do not have a graph G(A, B) which is clique-bipartite but rather
is only δ-regular and potentially asymmetric (as returned by algorithm 1).
Theorem 4.1 Let G(A, B) be a δ-regular bipartite graph with color classes A and B. Assume
that |A| = |B| = n. Denote by M some perfect matching in G(A, B). Let C be some
subset of the edges of M and let c = |C|. Take some ε ≥ c. Denote ñ = n − c. Fix any function
φ : N → R satisfying c√(2δ + 1/4) < φ(δ) < δ. Then for all but at most

    2cδ²ñε (1 + 2δφ(δ)/(δ(1 − c/(2δ)))) / [ φ³(δ)(1 + √(1 − 2δ/φ²(δ)))(φ²(δ) − 2δ²c) ] · ( δ/φ(δ) + φ(δ)/δ + cδ/φ(δ) )

vertices v ∈ A not matched in C the following holds: the size of the set of edges e adjacent to v and
having the additional property that there exists some perfect matching M_v in G(A, B) that uses e
and edges from C is at least (δ − c − φ(δ)).
Essentially, theorem 4.1 says that all but at most a small number of people are (δ − c − φ(δ))-anonymous for every φ satisfying c√(2δ + 1/4) < φ(δ) < δ if the adversary knows in advance c
edges of the matching. For example, take φ(δ) := βδ for β ∈ (0, 1). Fix ε = c and assume that
the adversary knows in advance at most δ^(1/4) edges of the matching. Then, using the formula from
theorem 4.1, we obtain that (for n large enough) all but at most (4ñ/(β³δ^(1/4)))(1 + 1/(βδ^(1/4)))
people from those that the adversary does not know in advance are ((1 − β)δ − δ^(1/4))-anonymous.
So if δ is large enough then all but approximately a small fraction 4/(β³δ^(1/4)) of all people not
known in advance are almost (1 − β)δ-anonymous.
Again take φ(δ) := βδ where β ∈ (0, 1). Take ε = 2c. Next assume that 1 ≤ c ≤ min(δ/4, δ(1 −
β − β²)). Assume that the adversary selects uniformly at random a person to attack. Our goal is to
find an upper bound on the probability he succeeds. Then, using theorem 4.1, we can conclude that
all but at most Fñ people whose records are not known in advance are ((1 − β)δ − c)-anonymous
for F = 33c²/(β²δ). The probability of success is at most F + (1 − F) · 1/((1 − β)δ − c). Using the
expression for F that we have and our assumptions, we can conclude that the probability we are
looking for is at most 34c²/(β²δ). Therefore we have:
Theorem 4.2 Assume graph G(A, B) is δ-regular and the adversary knows in advance c edges of
the matching, where c satisfies 1 ≤ c ≤ min(δ/4, δ(1 − β − β²)). The adversary selects uniformly at
random a vertex the privacy of which he wants to break from those that he does not know in advance.
Then he succeeds with probability at most 34c²/(β²δ).
4.4 Sustained attack on symmetric b-matching with adaptive anonymity
We now consider the case where the graph is not only δ-regular but also symmetric as defined in
definition 2.2 and as recovered by algorithm 2. Furthermore, we consider the case where we have
varying values of δi for each node since some users want higher privacy than others. It turns out
that if the corresponding bipartite graph is symmetric (we define this term below) we can conclude
that each user is (δi − c)-anonymous, where δi is the degree of a vertex associated with the user
in the bipartite matching graph. So we get results completely analogous to those for the much
simpler models described before. We will use a slightly more elaborate definition of symmetric⁵,
however, since this graph has one of its partitions permuted by a random matching (the last step in
both algorithms before releasing the data).
Definition 4.1 Let G(A, B) be a bipartite graph with color classes A, B and matching M =
{(a1, b1), ..., (an, bn)}, where A = {a1, ..., an}, B = {b1, ..., bn}. We say that G(A, B) is symmetric
with respect to M if the existence of an edge (ai, bj) in G(A, B) implies the existence of an edge
(aj, bi), where 1 ≤ i, j ≤ n.
From now on, the matching M with respect to which G(A, B) is symmetric is a canonical matching
of G(A, B). Assume that G(A, B) is symmetric with respect to its canonical matching M (it does
not need to be a clique-bipartite graph). In such a case, we will prove that, if the adversary knows
in advance c edges of the matching, then every person from the class A of degree δi is (δi − c)-anonymous. So we obtain the same type of anonymity as in a clique-bipartite graph (see: lemma 4.3).
Lemma 4.5 Assume that G(A, B) is a bipartite graph, symmetric with respect to its canonical
matching M. Assume furthermore that the adversary knows in advance c edges of the matching.
Then every person that he does not know in advance is (δi − c)-anonymous, where δi is the degree of
the related vertex of the bipartite graph.
As a corollary, we obtain the same privacy guarantees in the symmetric case as in the k-cliques case.
Corollary 4.3 Assume bipartite graph G(A, B) is symmetric with respect to its canonical matching M. Assume that the adversary knows in advance c edges of the matching. The adversary selects
uniformly at random a vertex the privacy of which he wants to break from the set of vertices he does
not know in advance. Then he succeeds with probability at most 1/(δi − c), where δi is the degree of
the vertex of the matching graph associated with the user.
⁵A symmetric graph G(A, B) may not remain symmetric according to definition 2.2 if nodes in B are
shuffled by a permutation M. However, it will still be symmetric with respect to M according to definition 4.1.
In summary, the symmetric case is as resilient to sustained attack as the cliques-bipartite case, the
usual one underlying k-anonymity, if we set δi = δ = k everywhere. The adversary succeeds with
probability at most 1/(δi − c). However, the asymmetric case is potentially weaker and the adversary
can succeed with probability at most 34c²/(β²δ). Interestingly, in the symmetric case with variable δi
degrees, however, we can provide guarantees that are just as good without forcing all individuals to
agree on a common level of anonymity.
[Figure 3 graphic: eight panels plotting utility against anonymity level, comparing b-matching, b-symmetric and k-anonymity on each data-set.]
Figure 3: Utility (1 − #(∗)/(nd)) versus anonymity on (a) Bupa (n = 344, d = 7), Wine (n = 178, d =
14), Heart (n = 186, d = 23), Ecoli (n = 336, d = 8), Hepatitis (n = 154, d = 20) and Forest
Fires (n = 517, d = 44) data-sets and (b) CalTech University Facebook (n = 768, d = 101) and
Reed University Facebook (n = 962, d = 101) data-sets.
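The utility in these plots is simply the fraction of released entries left unsuppressed; a one-line sketch (assuming suppressions are encoded as the string "*"):

import numpy as np

def utility(Y):
    # 1 - #(*)/(n*d): fraction of entries of the released data left unsuppressed.
    return 1.0 - np.mean(np.asarray(Y) == "*")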
5 Experiments
We compared algorithms 1 and 2 against an agglomerative clustering competitor (optimized to minimize stars) which is known to outperform the forest method [10]. Agglomerative clustering starts
with singleton clusters and keeps unifying the two closest clusters with the smallest increase in stars until
clusters grow to a size of at least k. Both algorithms release data with suppressions to achieve a desired constant anonymity level δ. For our algorithms, we swept values of ϵ in {2⁻¹, 2⁻², . . . , 2⁻¹⁰}
from largest to smallest and chose the solution that produced the least number of stars. Furthermore, we warm-started the symmetric algorithm with the star-pattern solution of the asymmetric
algorithm to make it converge more quickly. We first explored six standard data-sets from UCI
http://archive.ics.uci.edu/ml/ in the uniform anonymity setting. Figure 3(a) summarizes the results, where utility is plotted against δ. Fewer stars imply greater utility and larger δ implies higher
anonymity. We discretized each numerical dimension in a data-set into a binary attribute by finding
all elements above and below the median and mapped categorical values in the data-sets into a binary
code (potentially increasing the dimensionality). Algorithm 1 achieved significantly better utility
for any given fixed constant anonymity level δ while algorithm 2 achieved a slight improvement.
We next explored Facebook social data experiments where each user has a different level of desired
anonymity and has 7 discrete profile attributes which were binarized into d = 101 dimensions. We
used the number of friends fi a user has to compute their desired anonymity level (which decreases
as the number of friends increases). We set F = max_{i=1,...,n} ⌈log fi⌉ and, for each value of δ in the
plot, we set user i's privacy level to δi = δ · (F − ⌈log fi⌉). Figure 3(b) summarizes the results,
where utility is plotted against δ. Since the k-anonymity agglomerative clustering method requires
a constant δ for all users, we set k = maxi δi in order to have a privacy guarantee. Algorithms 1
and 2 consistently achieved significantly better utility in the adaptive anonymity setting while also
achieving the desired level of privacy protection.
6 Discussion
We described the adaptive anonymity problem where data is obfuscated to respect each individual
user's privacy settings. We proposed a relaxation of k-anonymity which is straightforward to implement algorithmically. It yields similar privacy protection while offering greater utility and the ability
to handle heterogeneous anonymity levels for each user.
References
[1] G. Aggarwal, T. Feder, K. Kenthapadi, R. Motwani, R. Panigrahy, D. Thomas, and A. Zhu.
Approximation algorithms for k-anonymity. Journal of Privacy Technology, 2005.
[2] M. Allman and V. Paxson. Issues and etiquette concerning use of shared measurement data. In
Proceedings of the 7th ACM SIGCOMM Conference on Internet Measurement, 2007.
[3] M. Bugliesi, B. Preneel, V. Sassone, I. Wegener, and C. Dwork. Lecture Notes in Computer Science - Automata, Languages and Programming, chapter Differential Privacy. Springer Berlin
/ Heidelberg, 2006.
[4] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12:1069–1109, 2011.
[5] G. Cormode, D. Srivastava, S. Bhagat, and B. Krishnamurthy. Class-based graph anonymization for social network data. In PVLDB, volume 2, pages 766–777, 2009.
[6] G. Cormode, D. Srivastava, T. Yu, and Q. Zhang. Anonymizing bipartite graph data using safe
groupings. VLDB J., 19(1):115–139, 2010.
[7] R. Duan and S. Pettie. Approximating maximum weight matching in near-linear time. In
Proceedings of the 51st Symposium on Foundations of Computer Science, 2010.
[8] J. Edmonds. Paths, trees and flowers. Canadian Journal of Mathematics, 17, 1965.
[9] H. N. Gabow. An efficient reduction technique for degree-constrained subgraph and bidirected
network flow problems. In Proceedings of the Fifteenth Annual ACM Symposium on Theory of
Computing, 1983.
[10] A. Gionis, A. Mazza, and T. Tassa. k-anonymization revisited. In ICDE, 2008.
[11] B. Huang and T. Jebara. Fast b-matching via sufficient selection belief propagation. In Artificial
Intelligence and Statistics, 2011.
[12] M. I. Jordan, Z. Ghahramani, T. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[13] V. N. Kolmogorov. Blossom V: A new implementation of a minimum cost perfect matching
algorithm. Mathematical Programming Computation, 1(1):43–67, 2009.
[14] N. Li, T. Li, and S. Venkatasubramanian. t-closeness: Privacy beyond k-anonymity and
l-diversity. In ICDE, 2007.
[15] S. Lodha and D. Thomas. Probabilistic anonymity. In PinKDD, 2007.
[16] A. Machanavajjhala, D. Kifer, J. Gehrke, and M. Venkitasubramaniam. L-diversity: Privacy
beyond k-anonymity. ACM Transactions on Knowledge Discovery from Data (TKDD), 1, 2007.
[17] A. Meyerson and R. Williams. On the complexity of optimal k-anonymity. In PODS, 2004.
[18] P. Samarati and L. Sweeney. Generalizing data to provide anonymity when disclosing information. In PODS, 1998.
[19] L. Sweeney. Achieving k-anonymity privacy protection using generalization and suppression.
International Journal on Uncertainty, Fuzziness and Knowledge-Based Systems, 10(5):571–588, 2002.
[20] Y. Tao and X. Xiao. Personalized privacy preservation. In SIGMOD Conference, 2006.
[21] Y. Tao and X. Xiao. Personalized privacy preservation. In Privacy-Preserving Data Mining,
2008.
[22] O. Williams and F. McSherry. Probabilistic inference and differential privacy. In NIPS, 2010.
[23] M. Xue, P. Karras, C. Raïssi, J. Vaidya, and K.-L. Tan. Anonymizing set-valued data by nonreciprocal recoding. In KDD, 2012.
[24] E. Zheleva and L. Getoor. Preserving the privacy of sensitive relationships in graph data. In
KDD, 2007.
Exact and Stable Recovery of Pairwise Interaction
Tensors
Shouyuan Chen
Michael R. Lyu, Irwin King
The Chinese University of Hong Kong
{sychen,lyu,king}@cse.cuhk.edu.hk
Zenglin Xu
Purdue University
[email protected]
Abstract
Tensor completion from incomplete observations is a problem of significant practical interest. However, it is unlikely that there exists an efficient algorithm with
provable guarantees to recover a general tensor from a limited number of observations. In this paper, we study recovery algorithms for pairwise interaction
tensors, which have recently gained considerable attention for modeling multiple
attribute data due to their simplicity and effectiveness. Specifically, in the absence
of noise, we show that one can exactly recover a pairwise interaction tensor by
solving a constrained convex program which minimizes the weighted sum of nuclear norms of matrices from O(nr log²(n)) observations. For the noisy cases,
we also prove error bounds for a constrained convex program for recovering the
tensors. Our experiments on the synthetic dataset demonstrate that the recovery
performance of our algorithm agrees well with the theory. In addition, we apply
our algorithm on a temporal collaborative filtering task and obtain state-of-the-art
results.
1 Introduction
Many tasks of recommender systems can be formulated as recovering an unknown tensor (multiway array) from a few observations of its entries [17, 26, 25, 21]. Recently, convex optimization
algorithms for recovering a matrix, which is a special case of tensor, have been extensively studied
[7, 22, 6]. Moreover, there are several theoretical developments that guarantee exact recovery of
most low-rank matrices from partial observations using nuclear norm minimization [8, 5]. These
results seem to suggest a promising direction to solve the general problem of tensor recovery.
However, there are inevitable obstacles to generalizing the techniques for matrix completion to tensor
recovery, since a number of fundamental computational problems for matrices are NP-hard in their
tensorial analogues [10]. For instance, Håstad showed that it is NP-hard to compute the rank of a
given tensor [9]; Hillar and Lim proved the NP-hardness of decomposing a given tensor into a sum of
rank-one tensors even if the tensor is fully observed [10]. The existing evidence suggests that it is
very unlikely that there exists an efficient exact recovery algorithm for general tensors with missing
entries. Therefore, it is natural to ask whether it is possible to identify a useful class of tensors for
which we can devise an exact recovery algorithm.
In this paper, we focus on pairwise interaction tensors, which have recently demonstrated strong
performance in several recommendation applications, e.g. tag recommendation [19] and sequential
data analysis [18]. Pairwise interaction tensors are a special class of general tensors, which directly
model the pairwise interactions between different attributes. Take movie recommendation as an example: to model a user's ratings for movies varying over time, a pairwise interaction tensor assumes
that each rating is determined by three factors: the user's inherent preference for the movie, the
movie's trending popularity and the user's varying mood over time. Formally, a pairwise interaction
tensor assumes that each entry T_ijk of a tensor T of size n1 × n2 × n3 is given by¹

    T_ijk = ⟨u_i^(a), v_j^(a)⟩ + ⟨u_j^(b), v_k^(b)⟩ + ⟨u_k^(c), v_i^(c)⟩,  for all (i, j, k) ∈ [n1] × [n2] × [n3],   (1)

where {u_i^(a)}_{i∈[n1]}, {v_j^(a)}_{j∈[n2]} are r1-dimensional vectors, {u_j^(b)}_{j∈[n2]}, {v_k^(b)}_{k∈[n3]} are r2-dimensional vectors and {u_k^(c)}_{k∈[n3]}, {v_i^(c)}_{i∈[n1]} are r3-dimensional vectors, respectively.
The existing recovery algorithms for pairwise interaction tensors use local optimization methods,
which do not guarantee the recovery performance [18, 19]. In this paper, we design efficient recovery algorithms for pairwise interaction tensors with rigorous guarantees. More specifically, in the
absence of noise, we show that one can exactly recover a pairwise interaction tensor by solving a
constrained convex program which minimizes the weighted sum of nuclear norms of matrices from
O(nr log²(n)) observations, where n = max{n1, n2, n3} and r = max{r1, r2, r3}. For noisy
cases, we also prove error bounds for a constrained convex program for recovering the tensors.
In the proof of our main results, we reformulated the recovery problem as a constrained matrix
completion problem with a special observation operator. Previously, Gross et al. [8] showed
that the nuclear norm heuristic can exactly recover a low rank matrix from a sufficient number of
observations of an orthogonal observation operator. We note that the orthogonality is critical to their
argument. However, the observation operator, in our case, turns out to be non-orthogonal, which
becomes a major challenge in our proof. In order to deal with the non-orthogonal operator, we have
substantially extended their technique in our proof. We believe that our technique can be generalized
to handle other matrix completion problems with non-orthogonal observation operators.
Moreover, we extend the existing singular value thresholding method to develop a simple and scalable
algorithm for solving the recovery problem in both exact and noisy cases. Our experiments on the
synthetic dataset demonstrate that the recovery performance of our algorithm agrees well with the
theory. Finally, we apply our algorithm on a temporal collaborative filtering task and obtain state-of-the-art results.
2 Recovering pairwise interaction tensors
In this section, we first introduce the matrix formulation of pairwise interaction tensors and specify
the recovery problem. Then we discuss the sufficient conditions on pairwise interaction tensors
for which an exact recovery would be possible. After that we formulate the convex program for
solving the recovery problem and present our theoretical results on the sample bounds for achieving
an exact recovery. In addition, we also show a quadratically constrained convex program is stable
for the recovery from noisy observations.
A matrix formulation of pairwise interaction tensors. The original formulation of pairwise interaction tensors by Rendle et al. [19] is given by Eq. (1), in which each entry of a tensor is the sum of
inner products of feature vectors. We can reformulate Eq. (1) more concisely using matrix notations.
In particular, we can rewrite Eq. (1) as follows
    T_ijk = A_ij + B_jk + C_ki,  for all (i, j, k) ∈ [n1] × [n2] × [n3],   (2)

where we set A_ij = ⟨u_i^(a), v_j^(a)⟩, B_jk = ⟨u_j^(b), v_k^(b)⟩, and C_ki = ⟨u_k^(c), v_i^(c)⟩ for all (i, j, k).
Clearly, matrices A, B and C are rank r1, r2 and r3 matrices, respectively.
We call a tensor T ∈ R^{n1×n2×n3} a pairwise interaction tensor, which is denoted as T =
Pair(A, B, C), if T obeys Eq. (2). We note that this concise definition is equivalent to the original
one. In the rest of this paper, we will exclusively use the matrix formulation of pairwise interaction
tensors.
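In code, the matrix formulation makes it a one-liner to materialize a pairwise interaction tensor via broadcasting; a sketch of Eq. (2) (the function name is our own):

import numpy as np

def pair(A, B, C):
    # T[i, j, k] = A[i, j] + B[j, k] + C[k, i] (Eq. (2));
    # A is n1 x n2, B is n2 x n3, C is n3 x n1.
    return A[:, :, None] + B[None, :, :] + C.T[:, None, :]

# quick check on random data
n1, n2, n3 = 3, 4, 5
A, B, C = np.random.randn(n1, n2), np.random.randn(n2, n3), np.random.randn(n3, n1)
T = pair(A, B, C)
assert np.isclose(T[1, 2, 3], A[1, 2] + B[2, 3] + C[3, 1])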
Recovery problem. Suppose we have partial observations of a pairwise interaction tensor T =
Pair(A, B, C). We write Ω ⊆ [n1] × [n2] × [n3] for the set of indices of the m observed entries. In
this work, we shall assume Ω is sampled uniformly from the collection of all sets of size m. Our goal
is to recover matrices A, B, C and therefore the entire tensor T from exact or noisy observations of
{T_ijk}_{(i,j,k)∈Ω}.
Before we proceed to the recovery algorithm, we first discuss when the recovery is possible.
¹For simplicity, we only consider three-way tensors in this paper.
Recoverability: uniqueness. The original recovery problem for pairwise interaction tensors is ill-posed due to a uniqueness issue. In fact, for any pairwise interaction tensor T = Pair(A, B, C),
we can construct infinitely many different sets of matrices A′, B′, C′ such that Pair(A, B, C) =
Pair(A′, B′, C′). For example, we have T_ijk = A_ij + B_jk + C_ki = (A_ij + αa_i) + B_jk + (C_ki +
(1 − α)a_i), where α ≠ 0 can be any non-zero constant and a is an arbitrary non-zero vector of
size n1. Now, we can construct A′, B′ and C′ by setting A′_ij = A_ij + αa_i, B′_jk = B_jk and
C′_ki = C_ki + (1 − α)a_i. It is clear that T = Pair(A′, B′, C′).
This ambiguity prevents us from recovering A, B, C even if T is fully observed, since it is entirely
possible to recover A′, B′, C′ instead of A, B, C based on the observations. In order to avoid
this obstacle, we construct a set of constraints such that, given any pairwise interaction tensor Pair(A, B, C), there exist unique matrices A′, B′, C′ satisfying the constraints and obeying
Pair(A, B, C) = Pair(A′, B′, C′). Formally, we prove the following proposition.
Proposition 1. For any pairwise interaction tensor T = Pair(A, B, C), there exist unique A′ ∈
S_A, B′ ∈ S_B, C′ ∈ S_C such that Pair(A, B, C) = Pair(A′, B′, C′), where we define S_B = {M ∈
R^{n2×n3} : 1ᵀM = 0ᵀ}, S_C = {M ∈ R^{n3×n1} : 1ᵀM = 0ᵀ} and S_A = {M ∈ R^{n1×n2} :
1ᵀM = (1/n2)(1ᵀM1)1ᵀ}.
We point out that there is a natural connection between the uniqueness issue and the "bias" components, which is a quantity of much attention in the field of recommender systems [13]. Due to lack
of space, we defer the detailed discussion on this connection and the proof of Proposition 1 to the
supplementary material.
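The subspaces S_A, S_B, S_C admit simple closed-form orthogonal projections, which are also used when sampling ground truth in the experiments of Section 4; below is a numpy sketch (our own derivation from the definitions in Proposition 1):

import numpy as np

def proj_SB(M):
    # Project onto S_B (or, for an n3 x n1 matrix, onto S_C): zero out every column sum.
    return M - M.mean(axis=0, keepdims=True)

def proj_SA(M):
    # Project onto S_A: make all column sums equal (to their average).
    s = M.sum(axis=0)
    return M - (s - s.mean())[None, :] / M.shape[0]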
Recoverability: incoherence. It is easy to see that recovering a pairwise tensor T = Pair(A, 0, 0)
is equivalent to recovering the matrix A from a subset of its entries. Therefore, the recovery problem of
pairwise interaction tensors subsumes the matrix completion problem as a special case. Previous studies
have confirmed that the incoherence condition is an essential requirement on the matrix in order to
guarantee a successful recovery of matrices. This condition can be stated as follows.
Let M = UΣVᵀ be the singular value decomposition of a rank r matrix M. We call matrix M
(μ0, μ1)-incoherent if M satisfies:
A0. For all i ∈ [n1] and j ∈ [n2], we have (n1/r) Σ_{k∈[r]} U²_ik ≤ μ0 and (n2/r) Σ_{k∈[r]} V²_jk ≤ μ0.
A1. The maximum entry of UVᵀ is bounded by μ1 √(r/(n1n2)) in absolute value.
It is well known that the recovery is possible only if the matrix is (μ0, μ1)-incoherent for bounded μ0, μ1
(i.e., μ0, μ1 are poly-logarithmic with respect to n). Since the matrix completion problem is reducible
to the recovery problem for pairwise interaction tensors, our theoretical result will inherit the incoherence assumptions on matrices A, B, C.
Exact recovery in the absence of noise. We first consider the scenario where the observations are
exact. Specifically, suppose we are given m observations {T_ijk}_{(i,j,k)∈Ω}, where Ω is sampled
uniformly at random from [n1] × [n2] × [n3]. We propose to recover matrices A, B, C and therefore
tensor T = Pair(A, B, C) using the following convex program,

    minimize_{X∈S_A, Y∈S_B, Z∈S_C}  √n3 ‖X‖_* + √n1 ‖Y‖_* + √n2 ‖Z‖_*        (3)
    subject to  X_ij + Y_jk + Z_ki = T_ijk,  (i, j, k) ∈ Ω,

where ‖M‖_* denotes the nuclear norm of matrix M, which is the sum of the singular values of M, and
S_A, S_B, S_C are defined in Proposition 1.
We show that, under the incoherence conditions, the above nuclear norm minimization method successfully recovers a pairwise interaction tensor T when the number of observations m is O(nr log² n),
with high probability.
Theorem 1. Let T ∈ R^{n1×n2×n3} be a pairwise interaction tensor T = Pair(A, B, C) with A ∈
S_A, B ∈ S_B, C ∈ S_C as defined in Proposition 1. Without loss of generality assume that 9 ≤ n1 ≤
n2 ≤ n3. Suppose we observed m entries of T with the locations sampled uniformly at random
from [n1] × [n2] × [n3] and also suppose that each of A, B, C is (μ0, μ1)-incoherent. Then, there
exists a universal constant C such that if

    m > C max{μ1², μ0} n3 r β log²(6n3),

where r = max{rank(A), rank(B), rank(C)} and β > 2 is a parameter, the minimizing solution
X, Y, Z of program Eq. (3) is unique and satisfies X = A, Y = B, Z = C with probability at
least 1 − log(6n3) · 6n3^{2−β} − 3n3^{2−β}.
Stable recovery in the presence of noise. Now we move to the case where the observations are
perturbed by noise with bounded energy. In particular, our noisy model assumes that we observe

    T̂_ijk = T_ijk + ξ_ijk,  for all (i, j, k) ∈ Ω,        (4)

where ξ_ijk is a noise term, which may be deterministic or stochastic. We assume ξ has bounded
energy on Ω, and specifically that ‖P_Ω(ξ)‖_F ≤ ε1 for some ε1 > 0, where P_Ω(·) denotes the
restriction on Ω. Under this assumption on the observations, we derive the error bound of the
following quadratically constrained convex program, which recovers T from the noisy observations.

    minimize_{X∈S_A, Y∈S_B, Z∈S_C}  √n3 ‖X‖_* + √n1 ‖Y‖_* + √n2 ‖Z‖_*        (5)
    subject to  ‖P_Ω(Pair(X, Y, Z)) − P_Ω(T̂)‖_F ≤ ε2.
Theorem 2. Let T = Pair(A, B, C) with A ∈ S_A, B ∈ S_B, C ∈ S_C. Let Ω be the set of
observations as described in Theorem 1. Suppose we observe T̂_ijk for (i, j, k) ∈ Ω as defined in
Eq. (4) and also assume that ‖P_Ω(ξ)‖_F ≤ ε1 holds. Denote the reconstruction error of the optimal
solution X, Y, Z of convex program Eq. (5) as E = Pair(X, Y, Z) − T. Also assume that ε1 ≤ ε2.
Then, we have

    ‖E‖_F ≤ 5 √( 2r n1 n2² / (8β log(n1)) ) · (ε1 + ε2),

with probability at least 1 − log(6n3) · 6n3^{2−β} − 3n3^{2−β}.
Related work. Rendle et al. [19] proposed pairwise interaction tensors as a model used for tag recommendation. In a subsequent work, Rendle et al. [18] applied pairwise interaction tensors in the
sequential analysis of purchase data. In both applications, their methods using pairwise interaction
tensor demonstrated excellent performance. However, their algorithms are prone to local optimal
issues and the recovered tensor might be very different from its true value. In contrast, our main results, Theorem 1 and Theorem 2, guarantee that a convex program can exactly or accurately recover
the pairwise interaction tensors from O(nr log2 (n)) observations. In this sense, our work can be
considered as a more effective way to recover pairwise interaction tensors from partial observations.
In practice, various tensor factorization methods are used for estimating missing entries of tensors
[12, 20, 1, 26, 16]. In addition, inspired by the success of nuclear norm minimization heuristics in
matrix completion, several work used a generalized nuclear norm for tensor recovery [23, 24, 15].
However, these work do not guarantee exact recovery of tensors from partial observations.
3
Scalable optimization algorithm
There are several possible methods to solving the optimization problems Eq. (3) and Eq. (5). For
small problem sizes, one may reformulate the optimization problems as semi-definite programs and
solve them using interior point method. The state-of-the-art interior point solvers offer excellent
accuracy for finding the optimal solution. However, these solvers become prohibitively slow for
pairwise interaction tensors larger than 100 ? 100 ? 100. In order to apply the recover algorithms
on large scale pairwise interaction tensors, we use singular value thresholding (SVT) algorithm
proposed recently by Cai et al. [3], which is a first-order method with promising performance for
solving nuclear norm minimization problems.
We first discuss the SVT algorithm for solving the exact completion problem Eq. (3). For convenience, we reformulate the original optimization objective Eq. (3) as follows,

    minimize_{X∈S_A, Y∈S_B, Z∈S_C}  ‖X‖_* + ‖Y‖_* + ‖Z‖_*        (6)
    subject to  X_ij/√n3 + Y_jk/√n1 + Z_ki/√n2 = T_ijk,  (i, j, k) ∈ Ω,

where we have incorporated the coefficients on the nuclear norm terms into the constraints. It is easy
to see that the recovered tensor is given by Pair(n3^{−1/2} X, n1^{−1/2} Y, n2^{−1/2} Z), where X, Y, Z is the
optimal solution of Eq. (6). Our algorithm solves a slightly relaxed version of the reformulated
objective Eq. (6),

    minimize_{X∈S_A, Y∈S_B, Z∈S_C}  τ(‖X‖_* + ‖Y‖_* + ‖Z‖_*) + (1/2)(‖X‖_F² + ‖Y‖_F² + ‖Z‖_F²)        (7)
    subject to  X_ij/√n3 + Y_jk/√n1 + Z_ki/√n2 = T_ijk,  (i, j, k) ∈ Ω.

It is easy to see that Eq. (7) is closely related to Eq. (6) and the original problem Eq. (3), as the
relaxed problem converges to the original one as τ → ∞. Therefore, by selecting a large value of the
parameter τ, a minimizing solution to Eq. (7) nearly minimizes Eq. (3).
converging to the optimal solution (X, Y, Z) that minimizes Eq. (7). We begin with several definitions. For observations ? = {ai , bi , ci |i ? [m]}, let operators P?A : Rn1 ?n2 ? Rm ,
P?B : Rn2 ?n3 ? Rm and P?C : Rn3 ?n1 ? Rm represents the influence of X, Y, Z on the
m observations. In particular,
m
1 X
P?A (X) = ?
Xai bi ?i ,
n3 i=1
m
m
1 X
1 X
P?B (Y) = ?
Ybi ci ?i , and P?C (Z) = ?
Zc a ?i .
n1 i=1
n2 i=1 i i
?1/2
?1/2
?1/2
It is easy to verify that P?A (X) + P?B (Y) + P?C (Z) = P? (Pair(n3 X, n1 Y, n2 Z)).
We also denote P?? A be the adjoint operator of P?A and similarly define P?? B and P?? C . Finally, for
a matrix X for size n1 ? n2 , we define center(X) = X ? n11 11T X as the column centering operator
that removes the mean of each n2 columns, i.e., 1T center(X) = 0T .
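A direct implementation of the observation operator P_Ω^A and its adjoint is straightforward; a sketch assuming the m observed index triples are stored 0-based in an (m, 3) integer array (the B and C versions only permute the indices and change the scaling):

import numpy as np

def P_A(X, obs, n3):
    # P_Omega^A(X)_i = X[a_i, b_i] / sqrt(n3)
    a, b, _ = obs.T
    return X[a, b] / np.sqrt(n3)

def P_A_adjoint(y, obs, n1, n2, n3):
    # Adjoint: scatter y back onto the observed (a_i, b_i) positions of an n1 x n2 matrix.
    a, b, _ = obs.T
    M = np.zeros((n1, n2))
    np.add.at(M, (a, b), y / np.sqrt(n3))   # accumulates duplicate indices correctly
    return M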
Starting with y^0 = 0 and k = 1, our algorithm iteratively computes

    Step (1).   X^k = shrink_A(P_Ω^{A*}(y^{k−1}), τ),
                Y^k = shrink_B(P_Ω^{B*}(y^{k−1}), τ),
                Z^k = shrink_C(P_Ω^{C*}(y^{k−1}), τ),
    Step (2e).  e^k = P_Ω(T) − P_Ω(Pair(n3^{−1/2} X^k, n1^{−1/2} Y^k, n2^{−1/2} Z^k)),
                y^k = y^{k−1} + δ e^k.
Here shrink_A is a shrinkage operator defined as follows,

    shrink_A(M, τ) ≜ argmin_{M̂∈S_A}  τ‖M̂‖_* + (1/2)‖M̂ − M‖_F².        (8)

Shrinkage operators shrink_B and shrink_C are defined similarly except they require M̂ to belong to S_B
and S_C, respectively. We note that our definition of the shrinkage operators shrink_A, shrink_B and
shrink_C is slightly different from that of the original SVT [3] algorithm, where M̂ is unconstrained.
We can show that our constrained version of shrinkage operators can also be calculated using singular value decompositions of column centered matrices.
Let the SVD of the column centered matrix center(M) be center(M) = UΣVᵀ, Σ = diag({σ_i}).
We can prove that the shrinkage operator shrink_B is given by

    shrink_B(M, τ) = U diag({σ_i − τ}_+) Vᵀ,        (9)

where s_+ is the positive part of s, that is, s_+ = max{0, s}. Since the subspace S_C is structurally
identical to S_B, it is easy to see that the calculation of shrink_C is identical to that of shrink_B. The
computation of shrink_A is a little more complicated. We have

    shrink_A(M, τ) = U diag({σ_i − τ}_+) Vᵀ + (1/√(n1n2)) ({ν − τ}_+ + {ν + τ}_−) 11ᵀ,        (10)

where UΣVᵀ is still the SVD of center(M), ν = (1/√(n1n2)) 1ᵀM1 is a constant and s_− = min{0, s}
is the negative part of s. The algorithm iterates between Step (1) and Step (2e) and produces a series
of (X^k, Y^k, Z^k) converging to the optimal solution of Eq. (7). The iterative procedure terminates
when the training error is small enough, namely, ‖e^k‖_F ≤ ϵ. We refer interested readers to [3] for
a convergence proof of the SVT algorithm.
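Both shrinkage operators translate into a few lines of numpy; a dense-SVD sketch of Eqs. (9) and (10) (a scalable implementation would substitute a partial Lanczos SVD, as discussed below):

import numpy as np

def shrink_B(M, tau):
    # Eq. (9): soft-threshold the singular values of center(M); the result lies in S_B.
    # shrink_C is identical since S_C has the same structure.
    Mc = M - M.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(Mc, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink_A(M, tau):
    # Eq. (10): additionally soft-threshold the mean component nu of M.
    n1, n2 = M.shape
    nu = M.sum() / np.sqrt(n1 * n2)
    nu_shrunk = max(nu - tau, 0.0) + min(nu + tau, 0.0)
    return shrink_B(M, tau) + nu_shrunk * np.ones((n1, n2)) / np.sqrt(n1 * n2)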
The optimization problem for noisy completion Eq. (5) can be solved in a similar manner. We only
need to modify Step (2e) to incorporate the quadratic constraint of Eq. (5) as follows,

    Step (2n).  e^k = P_Ω(T̂) − P_Ω(Pair(n3^{−1/2} X^k, n1^{−1/2} Y^k, n2^{−1/2} Z^k)),
                (y^k, s^k) = P_K((y^{k−1}, s^{k−1}) + δ(e^k, ϵ)),

where P_Ω(T̂) denotes the noisy observations and the cone projection operator P_K can be explicitly computed by
    P_K : (x, t) ↦ (x, t)                            if ‖x‖ ≤ t,
                   ((‖x‖ + t)/(2‖x‖)) · (x, ‖x‖)     if −‖x‖ ≤ t ≤ ‖x‖,
                   (0, 0)                            if t ≤ −‖x‖.
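This is the usual closed-form projection onto a second-order cone; a direct sketch:

import numpy as np

def proj_K(x, t):
    # Project (x, t) onto the cone {(x, t) : ||x|| <= t}, as used in Step (2n).
    nx = np.linalg.norm(x)
    if nx <= t:
        return x, t
    if t <= -nx:
        return np.zeros_like(x), 0.0
    scale = (nx + t) / (2.0 * nx)
    return scale * x, scale * nx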
By iterating between Step (1) and Step (2n) and selecting a sufficiently large τ, the algorithm generates a sequence of {X^k, Y^k, Z^k} that converges to a nearly optimal solution of the noisy completion
program Eq. (5) [3]. We have also included a detailed description of both algorithms in the supplementary material.
At each iteration, we need to compute one singular value decomposition and perform a few elementary matrix additions. We can see that for each iteration k, X^k vanishes outside of Ω_A = {(a_i, b_i)} and
is sparse. Similarly, Y^k, Z^k are also sparse matrices. Previously, we showed that the computation of
the shrinkage operators requires an SVD of a column centered matrix center(M) = M − (1/n1)11ᵀM, which is
the sum of a sparse matrix M and a rank-one matrix. Clearly the matrix-vector multiplication of the
form center(M)v can be computed in time O(n + m). This enables the use of Lanczos-method-based
SVD implementations, for example PROPACK [14] and SVDPACKC [2], which only need a
subroutine for calculating matrix-vector products. In our implementation, we developed a customized
version of SVDPACKC for computing the shrinkage operators. Further, for an appropriate choice
of τ, {X^k, Y^k, Z^k} turn out to be low rank matrices, which matches the observations in the original SVT algorithm [3]. Hence, the storage cost of X^k, Y^k, Z^k can be kept low and we only need to
perform a partial SVD to get the first r singular vectors. The estimated rank r is gradually increased
during the iterations using a method similar to that suggested in [3, Section 5.1.1]. We can see that, in sum,
the overall complexity per iteration of the recovery algorithm is O(r(n + m)).
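Putting the pieces together, the exact-recovery iteration (Steps (1) and (2e)) fits in a short loop. The following small-scale sketch reuses the dense operators and shrinkage functions sketched above; the dense SVD and the stopping rule are for illustration only:

import numpy as np

def svt_pairwise(obs, vals, shape, tau, delta, iters=500, tol=1e-4):
    # Recover (A, B, C) with T = Pair(A, B, C) from the observed values vals at
    # the (m, 3) index triples obs, following Steps (1) and (2e).
    n1, n2, n3 = shape
    a, b, c = obs.T
    y = np.zeros(len(vals))
    for _ in range(iters):
        # Step (1): adjoint observation operators followed by shrinkage.
        X = np.zeros((n1, n2)); np.add.at(X, (a, b), y / np.sqrt(n3))
        Y = np.zeros((n2, n3)); np.add.at(Y, (b, c), y / np.sqrt(n1))
        Z = np.zeros((n3, n1)); np.add.at(Z, (c, a), y / np.sqrt(n2))
        X, Y, Z = shrink_A(X, tau), shrink_B(Y, tau), shrink_B(Z, tau)
        # Step (2e): residual on the observed entries, then dual ascent on y.
        e = vals - (X[a, b] / np.sqrt(n3) + Y[b, c] / np.sqrt(n1) + Z[c, a] / np.sqrt(n2))
        y += delta * e
        if np.linalg.norm(e) <= tol * np.linalg.norm(vals):
            break
    return X / np.sqrt(n3), Y / np.sqrt(n1), Z / np.sqrt(n2)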
4 Experiments
Phase transition in exact recovery. We investigate how the number of measurements affects the
success of exact recovery. In this simulation, we fixed n1 = 100, n2 = 150, n3 = 200 and r1 =
r2 = r3 = r. We tested a variety of choices of (r, m) and for each choice of (r, m), we repeat the
procedure for 10 times. At each time, we randomly generated A ? SA , B ? SB , C ? SC of rank
r. We generated A ∈ S_A by sampling two factor matrices U_A ∈ R^{n1×r}, V_A ∈ R^{n2×r} with i.i.d.
standard Gaussian entries and setting A = P_{S_A}(U_A V_A^⊤), where P_{S_A} is the orthogonal projection
onto the subspace S_A. Matrices B ∈ S_B and C ∈ S_C are sampled in a similar way. We uniformly
sampled a subset Ω of m entries and revealed them to the recovery algorithm. We deemed A, B, C
successfully recovered if (‖A‖_F + ‖B‖_F + ‖C‖_F)^{−1} (‖X − A‖_F + ‖Y − B‖_F + ‖Z − C‖_F) ≤
10^{−3}, where X, Y and Z are the recovered matrices. Finally, we set the parameters τ, δ of the exact
recovery algorithm by τ = 10√(n1 n2 n3) and δ = 0.9 m (n1 n2 n3)^{−1}.
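A compact Python sketch of this simulation protocol (our own illustration; the orthogonal projections onto the structural subspaces S_A, S_B, S_C are omitted for brevity):

import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3, r = 100, 150, 200, 10

def low_rank(n, p):
    # i.i.d. standard Gaussian factors give a rank-r ground-truth matrix.
    return rng.standard_normal((n, r)) @ rng.standard_normal((r, p))

A, B, C = low_rank(n1, n2), low_rank(n2, n3), low_rank(n3, n1)

d = r*(n1 + n2 - r) + r*(n2 + n3 - r) + r*(n3 + n1 - r)   # degrees of freedom
m = int(2.5 * d)                                          # revealed entries
omega = rng.choice(n1 * n2 * n3, size=m, replace=False)   # uniform sample

def success(X, Y, Z, tol=1e-3):
    num = sum(np.linalg.norm(E) for E in (X - A, Y - B, Z - C))
    den = sum(np.linalg.norm(E) for E in (A, B, C))
    return num / den <= tol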
Figure 1 shows the results of these experiments. The x-axis is the ratio between the number of
measurements m and the degree of freedom d = r(n1 + n2 − r) + r(n2 + n3 − r) + r(n3 + n1 − r).
Note that a value of the x-axis smaller than one corresponds to a case where there are infinitely many
solutions satisfying the given entries. The y-axis is the rank r of the synthetic matrices. The color of
each grid indicates the empirical success rate. White denotes exact recovery in all 10 experiments,
and black denotes failure for all experiments. From Figure 1 (Left), we can see that the algorithm
succeeded almost certainly when the number of measurements is 2.5 times or larger than the degree
of freedom for most parameter settings. We also observe that, near the boundary of m/d ≈ 2.5,
there is a relatively sharp phase transition.
Figure 1: Phase transition with respect to rank and degree of freedom. Left: m/d ∈ [1, 5]. Right: m/d ∈ [1.5, 3.0].
To verify this phenomenon, we repeated the experiments, but only varied m/d between 1.5 and 3.0 with finer steps. The results in Figure 1 (Right) show that
the phase transition continued to be sharp at a higher resolution.
Stability of recovering from noisy data. In this simulation, we show the recovery performance
with respect to noisy data. Again, we fixed n1 = 100, n2 = 150, n3 = 200 and r1 = r2 = r3 = r
and tested against different choices of (r, m). For each choice of (r, m), we sampled the ground
truth A, B, C using the same method as in the previous simulation. We generated Ω uniformly at
random. For each entry (i, j, k) ∈ Ω, we simulated the noisy observation T̂_ijk = T_ijk + ξ_ijk, where
ξ_ijk is a zero-mean Gaussian random variable with variance σ_n². Then, we revealed {T̂_ijk}_{(i,j,k)∈Ω} to
the noisy recovery algorithm and collected the recovered matrices X, Y, Z. The error of the recovery result
is measured by (‖X − A‖_F + ‖Y − B‖_F + ‖Z − C‖_F)/(‖A‖_F + ‖B‖_F + ‖C‖_F). We tested the
algorithm with a range of noise levels, and for each configuration of (r, m, σ_n²) we repeated
the experiments 10 times and recorded the mean and standard deviation of the relative error.
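The noise model is straightforward to reproduce; a minimal sketch (our own illustration) of how the revealed entries are perturbed and how a run is summarized:

import numpy as np

rng = np.random.default_rng(0)

def noisy_observations(T_entries, sigma_n):
    # T_hat_ijk = T_ijk + xi_ijk on Omega, with xi_ijk ~ N(0, sigma_n^2);
    # T_entries holds the true tensor values on the revealed index set.
    return T_entries + sigma_n * rng.standard_normal(T_entries.shape)

def summarize(rel_errors):
    # Mean and standard deviation over the repetitions, as in Table 1.
    e = np.asarray(rel_errors)
    return e.mean(), e.std(ddof=1)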
Table 1: Simulation results of noisy data.

(a) Fix r = 20, m = 5d; the noise level varies.
    noise level    relative error
    0.1            0.1020 ± 0.0005
    0.2            0.1972 ± 0.0007
    0.3            0.2877 ± 0.0011
    0.4            0.3720 ± 0.0015
    0.5            0.4524 ± 0.0015

(b) Fix r = 20, noise level 0.1; m varies.
    observations m    relative error
    m = 3d            0.1445 ± 0.0008
    m = 4d            0.1153 ± 0.0006
    m = 5d            0.1015 ± 0.0004
    m = 6d            0.0940 ± 0.0007
    m = 7d            0.0920 ± 0.0011

(c) Fix m = 5d, noise level 0.1; r varies.
    rank r    relative error
    10        0.1134 ± 0.0006
    20        0.1018 ± 0.0007
    30        0.0973 ± 0.0037
    40        0.1032 ± 0.0212
    50        0.1520 ± 0.0344
We present the result of the experiments in Table 1. From the results in Table 1(a), we can see that
the error in the solution is proportional to the noise level. Table 1(b) indicates that the recovery is not
reliable when we have too few observations, while the performance of the algorithm is much more
stable for a sufficient number of observations, around four times the degrees of freedom. Table 1(c)
shows that the recovery error is not affected much by the rank, as the number of observations scales
with the degree of freedom in our setting.
Temporal collaborative filtering. In order to demonstrate the performance of pairwise interaction
tensors on real-world applications, we conducted experiments on the MovieLens dataset, which contains 1,000,209 ratings from 6,040 users on 3,706 movies, collected between April 2000 and
February 2003. Each rating in the MovieLens dataset is accompanied by time information provided
in seconds. We transformed each timestamp into its corresponding calendar month. We randomly
selected 10% of the ratings as the test set and used the rest of the ratings as the training set. In the end, we obtained
a tensor T of size 6040 × 3706 × 36, in which the axes correspond to user, movie and timestamp respectively, with 0.104% observed entries as the training set. We applied the noisy recovery
algorithm to the training set. Following previous studies which apply the SVT algorithm to movie
recommendation datasets [11], we used a pre-specified truncation level r for computing the SVD in
each iteration, i.e., we only kept the top r singular vectors. Therefore, the ranks of the recovered matrices
are at most r.
We evaluated the prediction performance in terms of root mean squared error (RMSE). We compared our algorithm with the noisy matrix completion method using the standard SVT optimization algorithm [3, 4] on the same dataset while ignoring the time information. Here we can regard the noisy
matrix completion algorithm as a special case of recovering a pairwise interaction tensor of size
6040 × 3706 × 1, i.e., the time information is ignored. We also note that the training tensor had
more than one million observed entries and 80 million total entries. This scale made a number of
tensor recovery algorithms, for example Tucker decomposition and PARAFAC [12], impractical to
apply on the dataset. In contrast, our recovery algorithm took 2430 seconds to finish on a standard
workstation for truncation level r = 100.
The experimental result is shown in Figure 2. The empirical result of Figure 2(a) suggests that, by
incorporating the temporal information, pairwise interaction tensor recovery algorithm consistently
outperformed the matrix completion method. Interestingly, we can see that, for most parameter
settings in Figure 2(b), our algorithm recovered a rank 2 matrix Y representing the change of movie
popularity over time and a rank 15 matrix Z that encodes the change of user interests over time. The
reason of the improvement on the prediction performance may be that the recovered matrix Y and
Z provided meaningful signal. Finally, we note that our algorithm achieves an RMSE of 0.858 when
the truncation level is set to 50, which slightly outperforms the RMSE = 0.861 (quoted from Figure 7
of that paper) result of 30-dimensional Bayesian Probabilistic Tensor Factorization (BPTF) on the
same dataset, where the authors predict the ratings by factorizing a 6040 × 3706 × 36 tensor using
BPTF method [26]. We may attribute the performance gain to the modeling flexibility of pairwise
interaction tensor and the learning guarantees of our algorithm.
[Figure 2 here: panel (a) plots RMSE against the SVD truncation level for MC and RPIT; panel (b) plots the recovered ranks r1, r2, r3 against the SVD truncation level.]
Figure 2: Empirical results on the Movielens dataset. (a) Comparison of RMSE with different truncation levels. MC: Matrix completion algorithm. RPIT: Recovery algorithm for pairwise interaction
tensor. (b) Ranks of the recovered matrices: r1 = rank(X), r2 = rank(Y), r3 = rank(Z).
5 Conclusion
In this paper, we proved rigorous guarantees for convex programs for recovery of pairwise interaction tensors with missing entries, both in the absence and in the presence of noise. We designed a
scalable optimization algorithm for solving the convex programs. We supplemented our theoretical
results with simulation experiments and a real-world application to movie recommendation. In the
noiseless case, simulations showed that the exact recovery almost always succeeded if the number of
observations is a constant multiple of the degrees of freedom, which agrees asymptotically with the theoretical result. In the noisy case, the simulation results confirmed that the stable recovery algorithm
is able to reliably recover pairwise interaction tensor from noisy observations. Our results on the
temporal movie recommendation application demonstrated that, by incorporating the temporal information, our algorithm outperforms conventional matrix completion and achieves state-of-the-art
results.
Acknowledgments
This work was fully supported by the Basic Research Program of Shenzhen (Project No.
JCYJ20120619152419087 and JC201104220300A), and the Research Grants Council of the Hong
Kong Special Administrative Region, China (Project No. CUHK 413212 and CUHK 415212).
References
[1] Evrim Acar, Daniel M. Dunlavy, Tamara G. Kolda, and Morten Mørup. Scalable tensor factorizations for incomplete data. Chemometrics and Intelligent Laboratory Systems, 106(1):41–56, 2011.
[2] M. Berry et al. SVDPACKC (version 1.0) user's guide. University of Tennessee Tech. Report CS-93-194, 1993 (revised October 1996).
[3] Jian-Feng Cai, Emmanuel J. Candès, and Zuowei Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
[4] Emmanuel J. Candès and Yaniv Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
[5] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[6] A. Evgeniou and Massimiliano Pontil. Multi-task feature learning. In NIPS, 2007.
[7] Maryam Fazel, Haitham Hindi, and Stephen P. Boyd. A rank minimization heuristic with application to minimum order system approximation. In American Control Conference, 2001.
[8] David Gross, Yi-Kai Liu, Steven T. Flammia, Stephen Becker, and Jens Eisert. Quantum state tomography via compressed sensing. Physical Review Letters, 105(15):150401, 2010.
[9] Johan Håstad. Tensor rank is NP-complete. Journal of Algorithms, 11(4):644–654, 1990.
[10] Christopher Hillar and Lek-Heng Lim. Most tensor problems are NP-hard. JACM, 2013.
[11] Prateek Jain, Raghu Meka, and Inderjit Dhillon. Guaranteed rank minimization via singular value projection. In NIPS, 2010.
[12] Tamara G. Kolda and Brett W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[13] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
[14] Rasmus Munk Larsen. PROPACK: software for large and sparse SVD calculations. Available online, 2004.
[15] Ji Liu, Przemyslaw Musialski, Peter Wonka, and Jieping Ye. Tensor completion for estimating missing values in visual data. In ICCV, 2009.
[16] Ian Porteous, Evgeniy Bart, and Max Welling. Multi-HDP: A non-parametric Bayesian model for tensor factorization. In AAAI, 2008.
[17] Steffen Rendle, Leandro Balby Marinho, Alexandros Nanopoulos, and Lars Schmidt-Thieme. Learning optimal ranking with tensor factorization for tag recommendation. In SIGKDD, 2009.
[18] Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. Factorizing personalized Markov chains for next-basket recommendation. In WWW, 2010.
[19] Steffen Rendle and Lars Schmidt-Thieme. Pairwise interaction tensor factorization for personalized tag recommendation. In ICDM, 2010.
[20] Amnon Shashua and Tamir Hazan. Non-negative tensor factorization with applications to statistics and computer vision. In ICML, 2005.
[21] Yue Shi, Alexandros Karatzoglou, Linas Baltrunas, Martha Larson, Alan Hanjalic, and Nuria Oliver. TFMAP: Optimizing MAP for top-n context-aware recommendation. In SIGIR, 2012.
[22] Nathan Srebro, Jason D. M. Rennie, and Tommi Jaakkola. Maximum-margin matrix factorization. In NIPS, 2005.
[23] Ryota Tomioka, Kohei Hayashi, and Hisashi Kashima. Estimation of low-rank tensors via convex optimization. arXiv preprint arXiv:1010.0789, 2010.
[24] Ryota Tomioka, Taiji Suzuki, Kohei Hayashi, and Hisashi Kashima. Statistical performance of convex tensor decomposition. In NIPS, 2011.
[25] Jason Weston, Chong Wang, Ron Weiss, and Adam Berenzweig. Latent collaborative retrieval. In ICML, 2012.
[26] Liang Xiong, Xi Chen, Tzu-Kuo Huang, Jeff Schneider, and Jaime G. Carbonell. Temporal collaborative filtering with Bayesian probabilistic tensor factorization. In SDM, 2010.
4,264 | 486 | A Computational Mechanism To Account For
Averaged Modified Hand Trajectories
Ealan A. Henis* and Tamar Flash
(*Current address: IRCS/GRASP, University of Pennsylvania.)
Department of Applied Mathematics and Computer Science
The Weizmann Institute of Science
Rehovot 76100, Israel
Abstract
Using the double-step target displacement paradigm the mechanisms underlying arm trajectory modification were investigated. Using short (10–110 msec) inter-stimulus intervals the resulting hand motions were initially
directed in between the first and second target locations. The kinematic
features of the modified motions were accounted for by the superposition
scheme, which involves the vectorial addition of two independent point-to-point motion units: one for moving the hand toward an internally specified
location and a second one for moving between that location and the final
target location. The similarity between the inferred internally specified locations and previously reported measured end-points of the first saccades
in double-step eye-movement studies may suggest similarities between perceived target locations in eye and hand motor control.
1 INTRODUCTION
The generation of reaching movements toward unexpectedly displaced targets involves more complicated planning and control problems than in reaching toward
stationary ones, since the planning of the trajectory modification must be performed before the initial plan is entirely completed. One possible scheme to modify
a trajectory plan is to abort the rest of the original motion plan, and replace it with
a new one for moving toward the new target location. Another possible modifica-
tion scheme is to superimpose a second plan with the initial one, without aborting
it. Both schemes are discussed below.
Earlier studies of reaching movements toward static targets have shown that point-to-point reaching hand motions follow a roughly straight path, having a typical bell-shaped velocity profile. The kinematic features of these movements were successfully
accounted for (Figure 1, left) by the minimum-jerk model (Flash & Hogan, 1985). In
that model the X -components of hand motions (and analogously the Y -components)
were represented by:
X(t) = X_A + (X_B − X_A)(10τ^3 − 15τ^4 + 6τ^5),   where   τ = t/t_f,   (1)
Figure 1: The Minimum-jerk Model and The Non-averaged Superposition Scheme
Figure 2: The Experimental Setup and The Initial Movement Direction Vs. D
where t_f is the movement duration, and X_B − X_A is the X-component of movement
amplitude. In a previous study (Henis & Flash, 1989; Flash & Henis, 1991) we have
used the double-step target displacement paradigm (see below) with inter-stimulus
intervals (ISIs) of 50-400 msec. Many of the resulting motions were found to be
initially directed toward the first target location (non-averaged) (for larger ISIs a
larger percentage of the motions were non-averaged). The kinematic features of
these modified motions were successfully accounted for (Figure 1 right) by a superposition modification scheme that involves the vectorial addition of two time-shifted
independent point-to-point motion units (Equation (1)) that have amplitudes that
correspond to the two target displacements.
In the present study shorter ISIs of 10-110 msec were used, hence, all target displacements occurred before movement initiation. Most of the resulting hand motions
were found to be initially directed in between the first and second target locations
(averaged motions). For increasing values of D, where D = RT1 − ISI (RT1 is the
reaction time), the initial motion direction gradually shifted from the direction of
the first toward the direction of the second stimulus (Figure 2 right). The averaging
phenomenon has been previously reported for hand (Van Sonderen et al., 1988) and
eye (Aslin & Shea, 1987; Van Gisbergen et al., 1987) motions. In this work we
wished to account for the kinematic features of averaged trajectories as well as for
the dependence of their initial direction on D.
It was observed (Van Sonderen et al., 1988) that when the first target displacement
was toward the left and the second one was obliquely downwards and to the right
most of the resulting motions were averaged. Averaged motions were also induced
when the first target displacement was downwards and the second one was obliquely
upwards and to the left. In this study we have used similar target displacements.
Four naive subjects participated in the experiments. The motions were performed
in the absence of visual feedback from the moving limb. In a typical trial, initially
the hand was at rest at a starting position A (Figure 2, left). At t = 0 a visual target
was presented at one of two equally probable positions B. It either remained lit
(control condition, probability 0.4) or was shifted again, following an ISI, to one of
two equally probable positions C (double-step condition, probability 0.3 each). In a
block of trials one target configuration was used. Each block consisted of five groups
of 56 trials, and within each group one ISI pair was used. The five ISI pairs were:
10 and 80, 20 and 110, 30 and 150, 40 and 200, and 50 and 300 msec. The target
presentation sequence was randomized, and included appropriate control trials.
2 MODELING RATIONALE AND ANALYSIS
2.1 THE SUPERPOSITION SCHEME
The superposition scheme for averaged modified motions is based on the vectorial
addition of two time-shifted independent elemental point-to-point hand motions
that obey Equation (1). The first elemental trajectory plan is for moving between
the initial hand location and an intermediate location B_i, internally specified. This
plan continues unmodified until its intended completion. The second elemental
trajectory plan is for moving between Bi and the final location of the target. The
durations of the elemental motions may vary among trials, and are therefore a
priori unknown. With short ISIs the elemental motion plans may be added (to give
the modified plan) preceding movement initiation. Several possibilities for Bi were
examined: a) the first location of the stimulus, b) an a priori unknown position, c)
same as (b) with Bi constrained to lie on the line connecting the first and second
locations of the stimulus, and d) same as (b) with Bi constrained to lie on the
line of initial movement direction. Version (a) is equivalent to the superposition
scheme that successfully accounted for non-averaged modified trajectories (Flash &
Henis, 1991). In versions (b), (c) and (d) it was assumed that due to the quick
displacement of the target, the specification of the end-point for the first motion
plan may differ from the actual first location of the target. The first motion unit
was represented by:
X_1(t) = X_A + (X_{B_i} − X_A)(10τ^3 − 15τ^4 + 6τ^5),   where   τ = t/T_1.   (2)
In (2), (X_{B_i} − X_A) is the X-component of the first unit amplitude. The duration of
this unit is denoted by T_1, a priori unknown. The expression for Y_1(t) was analogous
to Equation (2). The X-component of the second motion unit was taken to be:
X_2(t) = (X_C − X_{B_i})(10τ^3 − 15τ^4 + 6τ^5),   where   τ = (t − t_s)/(t_f − t_s) = (t − t_s)/T_2.   (3)
In (3), (X_C − X_{B_i}) is the X-component of the amplitude of the second trajectory
unit. The start and end times of the second unit are denoted by t_s and t_f, respectively. The duration of the second motion unit, T_2 = t_f − t_s, is a priori unknown. The
X-component of the entire modified motion (and similarly for the Y-component)
was represented by:
X(t) = X_1(t) + X_2(t).   (4)
The unknown parameters T_1, T_2, B_iX and B_iY that can best describe the entire
measured trajectory were determined by using least-squares best-fit methods (Marquardt, 1963). This procedure minimized the sum of the position errors between
the simulated and measured data points, taking into account (in versions (a), (c)
and (d)) the assumed constraints on the location B_i.
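The fitting procedure can be sketched compactly in Python. The code below is our own illustration, not the authors' original program: scipy's least_squares stands in for the Marquardt routine, and the onset t_s of the second unit is passed in rather than estimated.

import numpy as np
from scipy.optimize import least_squares

def mj_unit(t, t0, T, amp):
    # One minimum-jerk unit (Eqs. (2)-(3)): zero before its onset t0 and
    # clamped to its full amplitude once the unit is completed.
    tau = np.clip((t - t0) / T, 0.0, 1.0)
    return amp * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def residuals(params, t, xy, A, C, ts):
    # params = (T1, T2, Bx, By): the unit durations and the internally
    # specified end-point B_i of the first unit (superposition, Eq. (4)).
    T1, T2, Bx, By = params
    B = np.array([Bx, By])
    model = (A + mj_unit(t[:, None], 0.0, T1, B - A)
               + mj_unit(t[:, None], ts, T2, C - B))
    return (model - xy).ravel()

# Hypothetical usage: t is (N,) sample times, xy is (N, 2) measured positions,
# A and C are the start and final target positions.
# fit = least_squares(residuals, x0=[0.35, 0.55, 0.1, 0.1], args=(t, xy, A, C, ts))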
2.2 THE ABORT-REPLAN SCHEME
In the abort-replan scheme it was assumed that initially a point-to-point trajectory
plan is generated for moving toward an initial target (Equation (2)). The same four
different possibilities for the end-point of the initial motion plan were examined. It
was assumed that at some time-instant t_s the initial plan is aborted and replaced
by a new plan for moving between the expected hand position at t = t_s and the
final target location. The new motion plan was assumed to be represented by:
X_NEW(t) = Σ_{i=0}^{5} a_i t^i.   (5)
The coefficients a_3, a_4 and a_5 were derived by using the measured values of
position, velocity and acceleration at t = t_f. For versions (b), (c) and (d) the
analysis was performed simultaneously for the X and Y components of the trajectory. Choosing a trial B_i and T_1, the initial motion plan (Equation (2)) was
calculated. Choosing a trial t_s, the remaining three unknown coefficients a_0, a_1 and
a_2 of Equation (5) were calculated using the continuity conditions of the initial and
new position, velocity and acceleration at t = t_s (method I). Alternatively, these
three coefficients were calculated using the corresponding measured values at t = t_s
(method II). To determine the best choice of the unknown parameters B_iX, B_iY, T_1
and t_s, the same least-squares methods (Marquardt, 1963) were used as described
above. For version (a), for each cartesian component, a point-to-point minimum-jerk trajectory AB was speed-scaled to coincide with the initial part of the measured
velocity profile. The time t_s of its deviation from the measured speed profile was
extracted. From t_s on, the trajectory was represented by Equation (5). The values
of a_0, a_1 and a_2 were derived by using the same least-squares methods (method I).
Alternatively, these values were determined by using the measured position, velocity
and acceleration at t = t_s (method II).
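For method I, solving for the three remaining coefficients is a small linear system. A minimal sketch (our own illustration), assuming a trial choice of a_3, a_4, a_5 and the desired position x, velocity v and acceleration acc at t = t_s:

import numpy as np

def low_coeffs(a345, ts, x, v, acc):
    # Match position, velocity and acceleration of Eq. (5) at t = t_s.
    a3, a4, a5 = a345
    px = a3*ts**3 + a4*ts**4 + a5*ts**5          # high-order position part
    pv = 3*a3*ts**2 + 4*a4*ts**3 + 5*a5*ts**4    # ... velocity part
    pa = 6*a3*ts + 12*a4*ts**2 + 20*a5*ts**3     # ... acceleration part
    M = np.array([[1.0, ts, ts**2],
                  [0.0, 1.0, 2*ts],
                  [0.0, 0.0, 2.0]])
    return np.linalg.solve(M, np.array([x - px, v - pv, acc - pa]))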
3 RESULTS
The motions recorded in the control trials were roughly straight with bell-shaped
speed profiles. The mean reaction time in these control trials was 367.1 ± 94.6
msec (N = 120). The mean movement time was 574.1 ± 127.0 msec (N = 120).
The change in target location elicited a graded movement toward an intermediate
direction in between the two target locations, followed by a subsequent motion
toward the final target (Figure 3, middle). Occasionally the hand went directly
toward the final target location (Figure 3, right). For values of D less than 100 ms
the movements were found to be initially directed toward the first target (Figure 3,
left). As D increased, the initial direction of the motions gradually shifted (Figure
2, right) from the direction of the first (non-averaged) toward the direction of the
second (direct) target locations (The initial direction depended on D rather than
on ISI). The mean reaction time to the first stimulus (RT1) was 350.4 ± 93.5 msec
(N = 192). The mean reaction time to the second stimulus (RT2) (inferred from the
superposition version (b)) was 382.8 ± 119.9 msec (N = 192). This value is much
smaller than that predicted by successive processing of information: RT2 = 2RT1 − ISI (Poulton, 1981), and might be indicative of the presence of parallel planning.
The mean durations T_1 and T_2 of the two trajectory units (of superposition version
(b)) were 373.0 ± 112.2 and 592.1 ± 98.1 msec (N = 192), respectively.
3.1 MODIFICATION SCHEMES
The most statistically successful model (Table 1) in accounting for the measured
motions was the superposition version (b), which involves an internally specified
location (a priori unknown) for the end-point of the first motion unit. In particular, the averaged initial direction of the measured motions was accounted for.
Superposition version (d) was equivalent to version (b). The velocities simulated on
the basis of the other tested schemes substantially deviated from the measured ones
(Table 1 and Figure 4). It should be noted that in both the superposition and abortreplan versions (b), (c) and (d) there were 4, 3 and 3 unknown parameters. In the
abort-replan versions (aI) there were 3 unknown parameters, compared to 2 in the
superposition version (a). Hence the relative success of the superposition version
(b) in accounting for the data was not due to a larger number of free parameters.
Table 1: Normalized Velocity Deviations and The t-score With SP(b)

    Scheme     Deviation (mean ± s.d.)    t-score vs. SP(b)
    SP(a)      18.60 ± 50.16              (4.711)
    SP(b)      0.035 ± 0.036              (0.000)
    SP(c)      0.126 ± 0.132              (8.465)
    SP(d)      0.042 ± 0.045              (1.546)
    AB(aI)     0.083 ± 0.093              (6.126)
    AB(aII)    0.084 ± 0.088              (6.559)
    AB(bI)     0.081 ± 0.101              (5.460)
    AB(bII)    0.078 ± 0.102              (5.050)
    AB(cI)     0.084 ± 0.108              (5.478)
    AB(cII)    0.083 ± 0.096              (5.959)
    AB(dI)     0.082 ± 0.097              (5.782)
    AB(dII)    0.085 ± 0.101              (5.935)
[Figure 3 here: representative non-averaged, averaged and direct modified trajectories between positions A, B and C for various ISI and D values.]
Figure 3: Types of Modified Trajectories
[Figure 4 here: measured hand paths and velocity profiles overlaid with trajectories simulated by SP(b) and AB(bII).]
Figure 4: Representative Simulated Vs. Measured Trajectories
3.2 THE END-POINTS INFERRED FROM SUPERPOSITION (b)
The mean locations Bi resulting from different trials performed by the same subject
were computed by pooling together B_i of movements with the same D ± 15 msec
(Figure 5 left). For D < 100 msec, the measured motions were non-averaged and
the inferred Bi were in the vicinity of the first target. For increasing values of D,
Bi gradually shifted from the first toward the second target location, following a
typical path that curved toward the initial hand position. For D ≥ 400 msec, B_i
were in the vicinity of the second target location. Since initially the motions are
directed toward B i , this gradual shift of Bi as a function of D may account for
the observed dependence of the initial direction of motion on D . The locations Bi
obtained on the basis of the other tested schemes did not show any regular behavior
as functions of D.
4 DISCUSSION
This paper presents explicit possible mechanisms to account for the kinematic features of averaged modified trajectories. The most statistically successful scheme in
accounting for the measured movements involves the vectorial addition of two independent point-to-point motion units: one for moving between the initial hand
position and an internally specified location, and a second one for moving between
that location and the final target location. Taken together with previous results for
non-averaged modified trajectories (Flash & Henis, 1991), it was shown that the
same superposition principle may account for both modified trajectory types. The
differences between the observed types stem from differences in the time available
to modify the end-point of the first unit. Our simulations have enabled us to infer
the locations of the intermediate target locations, which were found to be similar
to previously reported (Aslin & Shea, 1987) experimentally measured end-points of
the first saccades, obtained by using the double-step paradigm (Figure 5, right)^1.
This result suggests underlying similarities between internally perceived target locations in eye and hand motor control and may suggest a common "where"
command (Gielen et al., 1984; 1990) for both systems.
Figure 5: Inferred First Unit End-points and Measured Eye Positions
^1 Reprinted with permission from Vision Res., Vol. 27, No. 11, 1925-1942, Aslin, R.N. and Shea, S.L.: The Amplitude And Angle of Saccades to Double-Step Target Displacements, Copyright [1987], Pergamon Press plc.
Why is the internally specified location dependent on D, which is a parameter
associated with both sensory information and motor execution? One possible explanation is that following the target displacement the effect of the first stimulus on
the motion planning decays, and that of the second stimulus becomes larger. These
changes may occur in the transformations from the visual to the motor system. A
purely sensory change in the perceived target location was also proposed (Van Sonderen et aI., 1988; Becker & Jurgens 1979). Another possibility is that the direction
of hand motion is internally coded in the motor system (Georgopoulos et al., 1986),
and it gradually rotates (within the motor system) from the direction of the first
to the direction of the second target. It is not clear which of these possibilities
provides a better explanation for the observations.
In the superposition scheme there is no need to keep track of the actual or planned
kinematic state of the hand. Hence, in contrast to the abort-replan scheme, an
efference copy of the planned motion is not required. The successful use of motion
plans expressed in extrapersonal coordinates provides support to the idea that arm
movements are internally represented in terms of hand motion through external
space. The construction of complex movements from simpler elementary building
blocks is consistent with a hierarchical organization of the motor system. The
independence of the elemental trajectories allows them to be planned in parallel.
Acknowledgements
This research was supported by grant no. 8800141 from the United States–Israel
Binational Science Foundation (BSF), Jerusalem, Israel. Tamar Flash is the incumbent
of the Corinne S. Koshland career development chair.
References
Aslin, R.N. and Shea, S.L. (1987). The Amplitude And Angle of Saccades to Double-Step Target Displacements. Vision Res., Vol. 27, No. 11, 1925-1942.
Becker, W. and Jurgens, R. (1979). An Analysis of The Saccadic System By Means of Double-Step Stimuli. Vision Res., 19, 967-983.
Flash, T. and Henis, E. (1991). Arm Trajectory Modification During Reaching Towards Visual Targets. Journal of Cognitive Neuroscience, Vol. 3, No. 3, 220-230.
Flash, T. & Hogan, N. (1985). The coordination of arm movements: an experimentally confirmed mathematical model. J. Neurosci., 7, 1688-1703.
Georgopoulos, A.P., Schwartz, A.B. & Kettner, R.E. (1986). Neuronal population coding of movement direction. Science, 233, 1416-1419.
Gielen, C.C.A.M., Van den Heuvel, P.J.M. & Denier Van der Gon, J.J. (1984). Modification of muscle activation patterns during fast goal-directed arm movements. J. Motor Behavior, 16, 2-19.
Gielen, C.C.A.M. & Van Gisbergen, J.A.M. (1990). The visual guidance of saccades and fast aiming movements. News in Physiological Sciences, Vol. 5, 58-63.
Henis, E. and Flash, T. (1989). Mechanisms Subserving Arm Trajectory Modification. Perception, 18(4):495.
Marquardt, D.W. (1963). An algorithm for least-squares estimation of non-linear parameters. J. SIAM, 11, 431-441.
Van Gisbergen, J.A.M., Van Opstal, A.J. & Roebroek, J.G.H. (1987). Stimulus-induced midflight modification of saccade trajectories. In J.K. O'Regan & A. Levy-Schoen (Eds.), Eye Movements: From Physiology to Cognition, Amsterdam: Elsevier, 27-36.
Van Sonderen, J.F., Denier Van Der Gon, J.J. & Gielen, C.C.A.M. (1988). Conditions determining early modification of motor programmes in response to change in target location. Exp. Brain Res., 71, 320-328.
4,265 | 4,860 | Matrix factorization with Binary Components
Martin Slawski, Matthias Hein and Pavlo Lutsik
Saarland University
{ms,hein}@cs.uni-saarland.de, [email protected]
Abstract
Motivated by an application in computational biology, we consider low-rank matrix factorization with {0, 1}-constraints on one of the factors and optionally convex constraints on the second one. In addition to the non-convexity shared with
other matrix factorization schemes, our problem is further complicated by a combinatorial constraint set of size 2^{m·r}, where m is the dimension of the data points
and r the rank of the factorization. Despite apparent intractability, we provide,
in the line of recent work on non-negative matrix factorization by Arora et
al. (2012), an algorithm that provably recovers the underlying factorization in the
exact case with O(mr2^r + mnr + r^2 n) operations for n datapoints. To obtain this
result, we use theory around the Littlewood-Offord lemma from combinatorics.
1 Introduction
Low-rank matrix factorization techniques like the singular value decomposition (SVD) constitute
an important tool in data analysis yielding a compact representation of data points as linear combinations of a comparatively small number of "basis elements" commonly referred to as factors,
components or latent variables. Depending on the specific application, the basis elements may be
required to fulfill additional properties, e.g. non-negativity [1, 2], smoothness [3] or sparsity [4, 5].
In the present paper, we consider the case in which the basis elements are constrained to be binary,
i.e. we aim at factorizing a real-valued data matrix D into a product T A with T ∈ {0, 1}^{m×r} and
A ∈ R^{r×n}, r ≤ min{m, n}. Such a decomposition arises e.g. in blind source separation in wireless communication with binary source signals [6]; in network inference from gene expression data
[7, 8], where T encodes connectivity of transcription factors and genes; in unmixing of cell mixtures
from DNA methylation signatures [9] in which case T represents presence/absence of methylation;
or in clustering with overlapping clusters with T as a matrix of cluster assignments [10, 11].
Several other matrix factorizations involving binary matrices have been proposed in the literature. In
[12] and [13] matrix factorization for binary input data, but non-binary factors T and A is discussed,
whereas a factorization T W A with both T and A binary and real-valued W is proposed in [14],
which is more restrictive than the model of the present paper. The model in [14] in turn encompasses binary matrix factorization as proposed in [15], where all of D, T and A are constrained to
be binary. It is important to note that this line of research is fundamentally different from Boolean
matrix factorization [16], which is sometimes also referred to as binary matrix factorization.
A major drawback of matrix factorization schemes is non-convexity. As a result, there is in general no algorithm that is guaranteed to compute the desired factorization. Algorithms such as block
coordinate descent, EM, MCMC, etc. commonly employed in practice lack theoretical guarantees
beyond convergence to a local minimum. Substantial progress in this regard has been achieved
recently for non-negative matrix factorization (NMF) by Arora et al. [17] and follow-up work in
[18], where it is shown that under certain additional conditions, the NMF problem can be solved
globally optimal by means of linear programming. Apart from being a non-convex problem, the
matrix factorization studied in the present paper is further complicated by the {0, 1}-constraints imposed on the left factor T , which yields a combinatorial optimization problem that appears to be
computationally intractable except for tiny dimensions m and r even in case the right factor A were
1
already known. Despite the obvious hardness of the problem, we present as our main contribution
an algorithm that provably provides an exact factorization D = T A whenever such factorization
exists. Our algorithm has exponential complexity only in the rank r of the factorization, but scales
linearly in m and n. In particular, the problem remains tractable even for large values of m as long
as r remains small. We extend the algorithm to the approximate case D ? T A and empirically
show superior performance relative to heuristic approaches to the problem. Moreover, we establish uniqueness of the exact factorization under the separability condition from the NMF literature
[17, 19], or alternatively with high probability for T drawn uniformly at random. As a corollary, we
obtain that at least for these two models, the suggested algorithm continues to be fully applicable
if additional constraints, e.g. non-negativity, are imposed on the right factor A. We demonstrate the
practical usefulness of our approach in unmixing DNA methylation signatures of blood samples [9].
Notation. For a matrix M and index sets I, J, MI,J denotes the submatrix corresponding to I and
J; MI,: and M:,J denote the submatrices formed by the rows in I respectively columns in J. We
write [M ; M′] and [M, M′] for the row- respectively column-wise concatenation of M and M′. The
affine hull generated by the columns of M is denoted by aff(M ). The symbols 1/0 denote vectors
or matrices of ones/zeroes and I denotes the identity matrix. We use | ? | for the cardinality of a set.
Supplement. The supplement contains all proofs, additional comments and experimental results.
2 Exact case
We start by considering the exact case, i.e. we suppose that a factorization having the desired
properties exists. We first discuss the geometric ideas underlying our basic approach for recovering
such factorization from the data matrix before presenting conditions under which the factorization
is unique. It is shown that the question of uniqueness as well as the computational performance of
our approach is intimately connected to the Littlewood-Offord problem in combinatorics [20].
2.1 Problem formulation. Given D ∈ R^{m×n}, we consider the following problem.
find T ∈ {0, 1}^{m×r} and A ∈ R^{r×n}, A^⊤ 1_r = 1_n, such that D = T A.   (1)
The columns {T:,k}_{k=1}^r of T, which are vertices of the hypercube [0, 1]^m, are referred to as components. The requirement A^⊤ 1_r = 1_n entails that the columns of D are affine instead of linear combinations of the columns of T. This additional constraint is not essential to our approach; it is imposed
for reasons of presentation, in order to avoid that the origin is treated differently from the other vertices of [0, 1]m , because otherwise the zero vector could be dropped from T , leaving the factorization
unchanged. We further assume w.l.o.g. that r is minimal, i.e. there is no factorization of the form (1)
with r′ < r, and in turn that the columns of T are affinely independent, i.e. for all λ ∈ R^r with λ^⊤ 1_r = 0,
T λ = 0 implies that λ = 0. Moreover, it is assumed that rank(A) = r. This ensures the existence
of a submatrix A:,C of r linearly independent columns and of a corresponding submatrix of D:,C of
affinely independent columns, when combined with the affine independence of the columns of T :
∀λ ∈ R^r, λ^⊤ 1_r = 0 :  D:,C λ = 0 ⟺ T (A:,C λ) = 0 ⟹ A:,C λ = 0 ⟹ λ = 0,   (2)
using at the second step that 1_r^⊤ A:,C λ = 1_r^⊤ λ = 0 and the affine independence of the {T:,k}_{k=1}^r.
Note that the assumption rank(A) = r is natural; otherwise, the data would reside in an affine
subspace of lower dimension so that D would not contain enough information to reconstruct T .
2.2 Approach. Property (2) already provides the entry point of our approach. From D = T A, it is
obvious that aff(T) ⊇ aff(D). Since D contains the same number of affinely independent columns
as T, it must also hold that aff(D) ⊇ aff(T), in particular aff(D) ⊇ {T:,k}_{k=1}^r. Consequently, (1)
can in principle be solved by enumerating all vertices of [0, 1]^m contained in aff(D) and selecting a
maximal affinely independent subset thereof (see Figure 1). This procedure, however, is exponential
in the dimension m, with 2^m vertices to be checked for containment in aff(D) by solving a linear
system. Remarkably, the following observation along with its proof, which prompts Algorithm 1
below, shows that the number of elements to be checked can be reduced to 2^{r−1} irrespective of m.
Proposition 1. The affine subspace aff(D) contains no more than 2^{r−1} vertices of [0, 1]^m. Moreover, Algorithm 1 provides all vertices contained in aff(D).
Algorithm 1 FINDVERTICES-EXACT
1. Fix p ∈ aff(D) and compute P = [D:,1 − p, . . . , D:,n − p].
2. Determine r − 1 linearly independent columns C of P, obtaining P:,C, and subsequently
   r − 1 linearly independent rows R, obtaining P_{R,C} ∈ R^{(r−1)×(r−1)}.
3. Form Z = P:,C (P_{R,C})^{−1} ∈ R^{m×(r−1)} and T̂ = Z(B^{(r−1)} − p_R 1_{2^{r−1}}^⊤) + p 1_{2^{r−1}}^⊤ ∈
   R^{m×2^{r−1}}, where the columns of B^{(r−1)} correspond to the elements of {0, 1}^{r−1}.
4. Set T = ∅. For u = 1, . . . , 2^{r−1}, if T̂:,u ∈ {0, 1}^m set T = T ∪ {T̂:,u}.
5. Return T = {0, 1}^m ∩ aff(D).
Algorithm 2 BINARYFACTORIZATION-EXACT
1. Obtain T as output from FINDVERTICES-EXACT(D).
2. Select r affinely independent elements of T to be used as columns of T.
3. Obtain A as the solution of the linear system [1_r^⊤; T] A = [1_n^⊤; D].
4. Return (T, A) solving problem (1).
Figure 1: Illustration of the geometry underlying our approach in dimension m = 3. Dots represent data points
and the shaded areas their affine hulls aff(D) ∩ [0, 1]^m.
Left: aff(D) intersects with r + 1 vertices of [0, 1]^m.
Right: aff(D) intersects with precisely r vertices.
Comments. In step 2 of Algorithm 1, determining the rank of P and an associated set of linearly
independent columns/rows can be done by means of a rank-revealing QR factorization [21, 22].
The crucial step is the third one, which is a compact description of first solving the linear systems
P_{R,C} λ = b − p_R for all b ∈ {0, 1}^{r−1} and back-substituting the result to compute candidate vertices
P:,C λ + p stacked into the columns of T̂; the addition/subtraction of p is merely because we have to
deal with an affine instead of a linear subspace, in which p serves as origin. In step 4, the pool of
2^{r−1} "candidates" is filtered, yielding T = aff(D) ∩ {0, 1}^m.
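The two algorithms are short enough to prototype directly. The following Python sketch is our own illustration, not the authors' implementation: scipy's pivoted QR stands in for a rank-revealing QR, and a simple tolerance test decides membership in {0, 1}^m.

import numpy as np
from itertools import product
from scipy.linalg import qr

def find_vertices_exact(D, r, tol=1e-9):
    # Sketch of Algorithm 1 (FINDVERTICES-EXACT).
    m, n = D.shape
    p = D[:, 0]                                   # fix p in aff(D)
    P = D - p[:, None]
    _, _, cols = qr(P, pivoting=True)             # r-1 independent columns
    C = cols[:r - 1]
    _, _, rows = qr(P[:, C].T, pivoting=True)     # r-1 independent rows
    R = rows[:r - 1]
    Z = P[:, C] @ np.linalg.inv(P[np.ix_(R, C)])  # step 3
    vertices = []
    for b in product([0.0, 1.0], repeat=r - 1):   # 2^(r-1) candidates
        t = Z @ (np.array(b) - p[R]) + p
        tr = np.round(t)
        if np.all(np.abs(t - tr) < tol) and np.all((tr == 0) | (tr == 1)):
            vertices.append(tr)
    return vertices

# For Algorithm 2, given r affinely independent vertices stacked into T (m x r),
# A solves the linear system [1_r^T; T] A = [1_n^T; D], e.g. via np.linalg.lstsq.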
Determining T is the hardest part in solving the matrix factorization problem (1). Given T , the
solution can be obtained after few inexpensive standard operations. Note that step 2 in Algorithm
2 is not necessary if one does not aim at finding a minimal factorization, i.e. if it suffices to have
D = T A with T ∈ {0, 1}^{m×r̃} but r̃ possibly being larger than r.
As detailed in the supplement, the case without sum-to-one constraints on A can be handled similarly, as can be the model in [14] with binary left and right factor and real-valued middle factor.
Computational complexity. The dominating cost in Algorithm 1 is computation of the candidate
matrix T̂ and checking whether its columns are vertices of [0, 1]^m. Note that
T̂_{R,:} = Z_{R,:}(B^{(r−1)} − p_R 1_{2^{r−1}}^⊤) + p_R 1_{2^{r−1}}^⊤ = I_{r−1}(B^{(r−1)} − p_R 1_{2^{r−1}}^⊤) + p_R 1_{2^{r−1}}^⊤ = B^{(r−1)},   (3)
i.e. the r − 1 rows of T̂ corresponding to R do not need to be taken into account. Forming the
matrix T̂ would hence require O((m − r + 1)(r − 1)2^{r−1}) and the subsequent check for vertices in
the fourth step O((m − r + 1)2^{r−1}) operations. All other operations are of lower order provided
e.g. (m − r + 1)2^{r−1} > n. The second most expensive operation is forming the matrix P_{R,C} in
step 2 with the help of a QR decomposition requiring O(mn(r − 1)) operations in typical cases
[21]. Computing the matrix factorization (1) after the vertices have been identified (steps 2 to 4 in
Algorithm 2) has complexity O(mnr + r^3 + r^2 n). Here, the dominating part is the solution of a
linear system in r variables and n right hand sides. Altogether, our approach for solving (1) has
exponential complexity in r, but only linear complexity in m and n. Later on, we will argue that
under additional assumptions on T, the O((m − r + 1)2^{r−1}) terms can be reduced to O((r − 1)2^{r−1}).
2.3 Uniqueness. In this section, we study uniqueness of the matrix factorization problem (1)
(modulo permutation of columns/rows). First note that in view of the affine independence of the
columns of T , the factorization is unique iff T is, which holds iff
aff(D) ∩ {0, 1}^m = aff(T) ∩ {0, 1}^m = {T:,1, . . . , T:,r},   (4)
i.e. if the affine subspace generated by {T:,1, . . . , T:,r} contains no other vertices of [0, 1]^m than the
r given ones (cf. Figure 1). Uniqueness is of great importance in applications, where one aims at
3
an interpretation in which the columns of T play the role of underlying data-generating elements.
Such an interpretation is not valid if (4) fails to hold, since it is then possible to replace one of the
columns of a specific choice of T by another vertex contained in the same affine subspace.
Solution of a non-negative variant of our factorization. In the sequel, we argue that property (4)
plays an important role from a computational point of view when solving extensions of problem (1)
in which further constraints are imposed on A. One particularly important extension is the following.

$$\text{find } T \in \{0,1\}^{m \times r} \text{ and } A \in \mathbb{R}^{r \times n}_{+},\; A^{\top}\mathbf{1}_r = \mathbf{1}_n \text{ such that } D = TA. \qquad (5)$$

Problem (5) is a special instance of non-negative matrix factorization. Problem (5) is of particular
interest in the present paper, leading to a novel real-world application of matrix factorization techniques as presented in Section 4.2 below. It is natural to ask whether Algorithm 2 can be adapted
to solve problem (5). A change is obviously required for the second step when selecting r vertices
from 𝒯, since in (5) the columns of D now have to be expressed as convex instead of only affine combinations of columns of T: picking an affinely independent collection from 𝒯 does not take into
account the non-negativity constraint imposed on A. If, however, (4) holds, we have |𝒯| = r and
Algorithm 2 must return a solution of (5) provided that there exists one.

Corollary 1. If problem (1) has a unique solution, i.e. if condition (4) holds, and if there exists a
solution of (5), then it is returned by Algorithm 2.
To appreciate that result, consider the converse case |𝒯| > r. Since the aim is a minimal factorization, one has to find a subset of 𝒯 of cardinality r such that (5) can be solved. In principle, this can
be achieved by solving a linear program for each of the (|𝒯| choose r) subsets of 𝒯, but this is in general not computationally feasible: the upper bound of Proposition 1 indicates that |𝒯| = 2^{r−1} in the worst case. For
the example below, 𝒯 consists of all 2^{r−1} vertices contained in an (r − 1)-dimensional face of [0,1]^m:

$$T = \begin{bmatrix} 0_{(m-r)\times r} \\ I_{r-1}\;\; 0_{r-1} \\ 0_r^{\top} \end{bmatrix} \quad \text{with} \quad \mathcal{T} = \left\{ T\zeta \,:\, \zeta_1 \in \{0,1\}, \ldots, \zeta_{r-1} \in \{0,1\},\; \zeta_r = 1 - \sum_{k=1}^{r-1} \zeta_k \right\}. \qquad (6)$$
Uniqueness under separability. In view of the negative example (6), one might ask whether
uniqueness according to (4) can be achieved under additional conditions on T. We prove uniqueness
under separability, a condition introduced in [19] and imposed recently in [17] to show solvability
of the NMF problem by linear programming. We say that T is separable if there exists a permutation
Π such that ΠT = [M; I_r], where M ∈ {0,1}^{(m−r)×r}.

Proposition 2. If T is separable, condition (4) holds and thus problem (1) has a unique solution.
Uniqueness under generic random sampling. Both the negative example (6) as well as the positive result of Proposition 2 are associated with special matrices T. This raises the question whether
uniqueness holds respectively fails for broader classes of binary matrices. In order to gain insight
into this question, we consider random T with i.i.d. entries from a Bernoulli distribution with parameter 1/2 and study the probability of the event {aff(T) ∩ {0,1}^m = {T_{:,1}, . . . , T_{:,r}}}. This question
has essentially been studied in combinatorics [23], with further improvements in [24]. The results
therein rely crucially on Littlewood-Offord theory (see Section 2.4 below).
Theorem 1. Let T be a random m × r matrix whose entries are drawn i.i.d. from {0,1} with
probability 1/2. Then there is a constant C so that if r ≤ m − C,

$$P\big(\mathrm{aff}(T) \cap \{0,1\}^m = \{T_{:,1}, \ldots, T_{:,r}\}\big) \;\geq\; 1 - (1+o(1))\, 4 \binom{r}{3} \left(\tfrac{3}{4}\right)^{m} + o\!\left(\left(\tfrac{3}{4}\right)^{m}\right) \quad \text{as } m \to \infty.$$
Theorem 1 suggests a positive answer to the question of uniqueness posed above. For m large
enough and r small compared to m (in fact, following [24] one may conjecture that Theorem 1
holds with C = 1), the probability that the affine hull of r vertices of [0,1]^m selected uniformly at
random contains some other vertex is exponentially small in the dimension m. We have empirical
evidence that the result of Theorem 1 continues to hold if the entries of T are drawn from a Bernoulli
distribution with parameter in (0,1) sufficiently far away from the boundary points (cf. supplement).
As a byproduct, these results imply that the NMF variant of our matrix factorization problem
(5) can also in most cases be reduced to identifying a set of r vertices of [0,1]^m (cf. Corollary 1).
2.4 Speeding up Algorithm 1. In Algorithm 1, an m × 2^{r−1} matrix T̂ of potential vertices is
formed (Step 3). We have discussed the case (6) where all candidates must indeed be vertices,
in which case it seems to be impossible to reduce the computational cost of O((m − r) r 2^{r−1}),
which becomes significant once m is in the thousands and r ≥ 25. On the positive side, Theorem
1 indicates that for many instances of T, only r out of the 2^{r−1} candidates are in fact vertices. In
that case, noting that a column of T̂ cannot be a vertex if a single coordinate is not in {0,1} (and
that the vast majority of columns of T̂ must have one such coordinate), it is computationally more
favourable to incrementally compute subsets of rows of T̂ and then to discard already those columns
with coordinates not in {0,1}. We have observed empirically that this scheme rapidly reduces the
candidate set: already checking a single row of T̂ eliminates a substantial portion (see Figure 2).
Littlewood-Offord theory. Theoretical underpinning for the last observation can be obtained from
a result in combinatorics, the Littlewood-Offord (L-O) lemma. Various extensions of that result have
been developed until recently; see the survey [25]. We here cite the L-O lemma in its basic form.
Theorem 2. [20] Let a_1, . . . , a_ℓ ∈ ℝ \ {0} and y ∈ ℝ.

(i) $\big|\{b \in \{0,1\}^{\ell} : \sum_{i=1}^{\ell} a_i b_i = y\}\big| \leq \binom{\ell}{\lfloor \ell/2 \rfloor}$.

(ii) If $|a_i| \geq 1$, $i = 1, \ldots, \ell$, then $\big|\{b \in \{0,1\}^{\ell} : \sum_{i=1}^{\ell} a_i b_i \in (y, y+1)\}\big| \leq \binom{\ell}{\lfloor \ell/2 \rfloor}$.
The two parts of Theorem 2 are referred to as the discrete respectively continuous L-O lemma. The
discrete L-O lemma provides an upper bound on the number of {0,1}-vectors whose weighted
sum with given weights {a_i}_{i=1}^ℓ is equal to some given number y, whereas the stronger continuous
version, under a more stringent condition on the weights, upper bounds the number of {0,1}-vectors
whose weighted sum is contained in some interval (y, y + 1). In order to see the relation of Theorem
2 to Algorithm 1, let us re-inspect the third step of that algorithm. To obtain a reduction of candidates
by checking a single row of T̂ = Z(B^{(r−1)} − p_R 1ᵀ_{2^{r−1}}) + p 1ᵀ_{2^{r−1}}, pick i ∉ R (recall that coordinates
in R do not need to be checked, cf. (3)) and u ∈ {1, . . . , 2^{r−1}} arbitrary. The u-th candidate can be
a vertex only if T̂_{i,u} ∈ {0,1}. The condition T̂_{i,u} = 0 can be written as

$$\underbrace{Z_{i,:}}_{\{a_k\}_{k=1}^{r-1}}\;\underbrace{B^{(r-1)}_{:,u}}_{=\,b} \;=\; \underbrace{Z_{i,:}\,p_R - p_i}_{=\,y}. \qquad (7)$$
A similar reasoning applies when setting T̂_{i,u} = 1. Provided none of the entries of Z_{i,:} is zero, the
discrete L-O lemma implies that there are at most 2\binom{r−1}{⌊(r−1)/2⌋} out of the 2^{r−1} candidates for which the
i-th coordinate is in {0,1}. This yields a reduction of the candidate set by 2\binom{r−1}{⌊(r−1)/2⌋}/2^{r−1} =
O(1/√(r−1)). Admittedly, this reduction may appear insignificant given the total number of candidates to be checked. The reduction achieved empirically (cf. Figure 2) is typically larger. Stronger
reductions have been proven under additional assumptions on the weights {a_i}_{i=1}^ℓ: e.g. for distinct
weights, one obtains a reduction of O((r − 1)^{−3/2}) [25]. Furthermore, when picking successively d
rows of T̂ and if one assumes that each row yields a reduction according to the discrete L-O lemma,
one would obtain the reduction (r − 1)^{−d/2}, so that d = r − 1 would suffice to identify all vertices
provided r ≥ 4. Evidence for the rate (r − 1)^{−d/2} can be found in [26]. This indicates a reduction
in complexity of Algorithm 1 from O((m − r) r 2^{r−1}) to O(r² 2^{r−1}).
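A sketch of this incremental filtering, with our own function and variable names (Z, B, p and R as in step 3): rows of T̂ are generated lazily, one at a time, and candidates leaving {0,1} are dropped immediately, so later rows touch ever fewer columns.

```python
import numpy as np

def prune_candidates_incrementally(Z, B, p, R, tol=1e-9):
    # Compute rows of T_hat one at a time; after each row, discard candidate
    # columns with a coordinate outside {0,1}.
    m = Z.shape[0]
    in_R = set(int(i) for i in R)
    keep = np.arange(B.shape[1])          # indices of surviving candidates
    p_R = p[R]
    for i in range(m):
        if i in in_R:                     # coordinates in R are exact, cf. (3)
            continue
        row = Z[i, :] @ (B[:, keep] - p_R[:, None]) + p[i]
        ok = (np.abs(row) < tol) | (np.abs(row - 1.0) < tol)
        keep = keep[ok]
        if keep.size <= 1:                # cannot shrink any further
            break
    return keep
```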
Achieving further speed-up with integer linear programming. The continuous L-O lemma (part
(ii) of Theorem 2), combined with the derivation leading to (7), allows us to tackle even the case
r = 80 (2^{80} ≈ 10^{24}). In view of the continuous L-O lemma, a reduction in the number of candidates
can still be achieved if the requirement is weakened to T̂_{i,u} ∈ [0,1]. According to (7), the candidates
satisfying the relaxed constraint for the i-th coordinate can be obtained from the feasibility problem

$$\text{find } b \in \{0,1\}^{r-1} \text{ subject to } 0 \leq Z_{i,:}(b - p_R) + p_i \leq 1, \qquad (8)$$

which is an integer linear program that can be solved e.g. by CPLEX. The L-O theory suggests that
the branch-and-bound strategy employed therein is likely to be successful. With the help of CPLEX, it
is affordable to solve problem (8) with all m − r + 1 constraints (one for each of the rows of T̂ to
be checked) imposed simultaneously. We always recovered directly the underlying vertices in our
experiments, and only these, without the need to prune the solution pool (which could be achieved
by Algorithm 1, replacing the 2^{r−1} candidates by a potentially much smaller solution pool).
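To illustrate, problem (8) for a single row i can be handed to any MILP solver; the sketch below uses SciPy's milp interface (available from SciPy 1.9) in place of CPLEX, and it only searches for one feasible b rather than enumerating the full solution pool, which CPLEX can do.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def feasible_pattern(Z_i, p_R, p_i):
    # Problem (8): find b in {0,1}^{r-1} with 0 <= Z_i (b - p_R) + p_i <= 1.
    # Rearranged: Z_i p_R - p_i <= Z_i b <= Z_i p_R - p_i + 1.
    r1 = Z_i.size
    shift = float(Z_i @ p_R - p_i)
    cons = LinearConstraint(Z_i.reshape(1, -1), lb=shift, ub=shift + 1.0)
    res = milp(c=np.zeros(r1),            # pure feasibility: zero objective
               integrality=np.ones(r1),   # all variables integral
               bounds=Bounds(0, 1),
               constraints=cons)
    return np.round(res.x).astype(int) if res.success else None
```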
[Figure 2 panels: left, "Maximum number of remaining vertices (out of 2^{r−1}) over 100 trials", number of vertices (log2) against the number of coordinates checked (1 to 500), with curves for r ∈ {8, 16, 24} and p ∈ {0.02, 0.1, 0.5}; right, "Speed-up achieved by CPLEX (m = 1000)", log10(CPU time) in seconds against r (10 to 80), for Algorithm 1 and CPLEX with p ∈ {0.1, 0.5, 0.9}.]

Figure 2: Left: Speeding up the algorithm by checking single coordinates; remaining number of vertices vs. # coordinates checked (m = 1000). Right: Speed-up by CPLEX compared to Algorithm 1. For both plots, T is drawn entry-wise from a Bernoulli distribution with parameter p.
3 Approximate case

In the sequel, we discuss an extension of our approach to handle the approximate case D ≈ T A
with T and A as in (1). In particular, we have in mind the case of additive noise, i.e. D = T A + E
with ‖E‖_F small. While the basic concept of Algorithm 1 can be adopted, changes are necessary
because, first, D may have full rank min{m, n} and, second, aff(D) ∩ {0,1}^m = ∅, i.e. the distances of
aff(D) and the {T_{:,k}}_{k=1}^r may be strictly positive (but are at least assumed to be small).

Algorithm 3 FINDVERTICES APPROXIMATE
1. Let p = D 1_n / n and compute P = [D_{:,1} − p, . . . , D_{:,n} − p].
2. Compute U^{(r−1)} ∈ ℝ^{m×(r−1)}, the left singular vectors corresponding to the r − 1 largest
   singular values of P. Select r − 1 linearly independent rows R of U^{(r−1)}, obtaining
   U^{(r−1)}_{R,:} ∈ ℝ^{(r−1)×(r−1)}.
3. Form Z = U^{(r−1)} (U^{(r−1)}_{R,:})^{−1} and T̂ = Z(B^{(r−1)} − p_R 1ᵀ_{2^{r−1}}) + p 1ᵀ_{2^{r−1}}.
4. Compute T̂^{01} ∈ ℝ^{m×2^{r−1}}: for u = 1, . . . , 2^{r−1}, i = 1, . . . , m, set T̂^{01}_{i,u} = I(T̂_{i,u} > 1/2).
5. For u = 1, . . . , 2^{r−1}, set δ_u = ‖T̂_{:,u} − T̂^{01}_{:,u}‖_2. Order increasingly s.t. δ_{u_1} ≤ · · · ≤ δ_{u_{2^{r−1}}}.
6. Return T = [T̂^{01}_{:,u_1} · · · T̂^{01}_{:,u_r}].

As distinguished from the exact case, Algorithm 3 requires the number of components r to be specified
in advance, as is typically the case in noisy matrix factorization problems. Moreover, the vector
p subtracted from all columns of D in step 1 is chosen as the mean of the data points, which is in
particular a reasonable choice if D is contaminated with additive noise distributed symmetrically
around zero. The truncated SVD of step 2 achieves the desired dimension reduction and potentially
reduces noise corresponding to small singular values that are discarded. The last change arises in
step 5. While in the exact case one identifies all columns of T̂ that are in {0,1}^m, one instead only
identifies columns close to {0,1}^m. Given the output of Algorithm 3, we solve the approximate
matrix factorization problem via least squares, obtaining the right factor from min_A ‖D − T A‖²_F.
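The following NumPy/SciPy sketch mirrors Algorithm 3 step by step. It is our own minimal rendering, only sensible for moderate r since all 2^{r−1} candidates are materialized; the row-selection heuristic is an assumption.

```python
import numpy as np
from itertools import product
from scipy.linalg import qr

def find_vertices_approximate(D, r):
    m, n = D.shape
    p = D.mean(axis=1)                                 # step 1: mean as origin
    P = D - p[:, None]
    U = np.linalg.svd(P, full_matrices=False)[0][:, :r - 1]  # step 2: truncated SVD
    _, _, rowperm = qr(U.T, pivoting=True)             # r-1 independent rows R
    R = rowperm[:r - 1]
    Z = U @ np.linalg.inv(U[R, :])                     # step 3
    B = np.array(list(product([0.0, 1.0], repeat=r - 1))).T
    T_hat = Z @ (B - p[R][:, None]) + p[:, None]
    T01 = (T_hat > 0.5).astype(float)                  # step 4: round to {0,1}
    delta = np.linalg.norm(T_hat - T01, axis=0)        # step 5: distances
    order = np.argsort(delta)                          # step 6: r closest columns
    return T01[:, order[:r]].astype(int)
```

The right factor can then be recovered with an ordinary least-squares solve such as np.linalg.lstsq(T, D).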
Refinements. Improved performance for higher noise levels can be achieved by running Algorithm
3 multiple times with different sets of rows selected in step 2, which yields candidate matrices
{T^{(l)}}_{l=1}^s, and subsequently using T = argmin_{{T^{(l)}}} min_A ‖D − T^{(l)} A‖²_F, i.e. one picks the candidate yielding the best fit. Alternatively, we may form a candidate pool by merging the {T^{(l)}}_{l=1}^s and
then use a backward elimination scheme, in which successively candidates are dropped that yield the
smallest improvement in fitting D, until r candidates are left. Apart from that, the T returned by Algorithm 3 can be used for initializing the block optimization scheme of Algorithm 4 below. Algorithm
4 is akin to standard block coordinate descent schemes proposed in the matrix factorization literature, e.g. [27]. An important observation (step 3) is that the optimization of T is separable along the
rows of T, so that for small r, it is feasible to perform exhaustive search over all 2^r possibilities
(or to use CPLEX). However, Algorithm 4 is impractical as a stand-alone scheme, because without proper initialization, it may take many iterations to converge, with each single iteration being
more expensive than Algorithm 3. When initialized with the output of the latter, however, we have
observed convergence of the block scheme after only a few steps.
Algorithm 4 Block optimization scheme for solving min_{T ∈ {0,1}^{m×r}, A} ‖D − T A‖²_F
1. Set k = 0 and set T^{(k)} equal to a starting value.
2. A^{(k)} ← argmin_A ‖D − T^{(k)} A‖²_F and set k = k + 1.
3. T^{(k)} ← argmin_{T ∈ {0,1}^{m×r}} ‖D − T A^{(k)}‖²_F = argmin_{{T_{i,:} ∈ {0,1}^r}_{i=1}^m} Σ_{i=1}^m ‖D_{i,:} − T_{i,:} A^{(k)}‖²_2   (9)
4. Alternate between steps 2 and 3.
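A compact sketch of this block scheme, under our own simplifications: step 2 is a plain least-squares solve, step 3 exploits the row-separability of (9) by enumerating all 2^r binary rows, and a fixed iteration count stands in for a proper convergence test.

```python
import numpy as np
from itertools import product

def block_scheme(D, T_init, iters=10):
    r = T_init.shape[1]
    T = T_init.astype(float)
    patterns = np.array(list(product([0.0, 1.0], repeat=r)))   # all 2^r rows
    A = None
    for _ in range(iters):
        A = np.linalg.lstsq(T, D, rcond=None)[0]               # step 2
        # step 3: per (9), each row of T is optimized independently;
        # cost[u, i] = || D_{i,:} - patterns[u] A ||_2^2
        fits = patterns @ A                                    # (2^r, n)
        cost = ((fits[:, None, :] - D[None, :, :]) ** 2).sum(axis=2)
        T = patterns[cost.argmin(axis=0)]                      # best row pattern
    return T.astype(int), A
```

Note the memory cost of the broadcasted tensor grows as 2^r · m · n, so this is only a small-r sketch.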
4 Experiments
In Section 4.1 we demonstrate with the help of synthetic data that the approach of Section 3
performs well on noisy datasets. In the second part, we present an application to a real dataset.
4.1 Synthetic data.
Setup. We generate D = T* A* + αE, where the entries of T* are drawn i.i.d. from {0,1} with
probability 0.5, the columns of A* are drawn i.i.d. uniformly from the probability simplex and the
entries of E are i.i.d. standard Gaussian. We let m = 1000, r = 10 and n = 2r, and let the noise
level α vary along a grid starting from 0. Small sample sizes n as considered here yield more
challenging problems and are motivated by the real-world application of the next subsection.
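This setup is easy to reproduce; drawing the columns of A* from a symmetric Dirichlet(1, . . . , 1) gives the uniform distribution on the probability simplex. A sketch with our own defaults:

```python
import numpy as np

def synthetic_instance(m=1000, r=10, alpha=0.05, seed=None):
    rng = np.random.default_rng(seed)
    n = 2 * r
    T_star = rng.integers(0, 2, size=(m, r)).astype(float)  # binary components
    A_star = rng.dirichlet(np.ones(r), size=n).T             # columns on the simplex
    E = rng.standard_normal((m, n))                          # additive noise
    return T_star @ A_star + alpha * E, T_star, A_star
```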
Evaluation. Each setup is run 20 times and we report averages over the following performance measures: the normalized Hamming distance ‖T* − T‖²_F/(m r) and the two RMSEs
‖T* A* − T A‖_F/(m n)^{1/2} and ‖T A − D‖_F/(m n)^{1/2}, where (T, A) denotes the output of one of
the following approaches that are compared. FindVertices: our approach in Section 3. oracle: we
solve problem (9) with A^{(k)} = A*. box: we run the block scheme of Algorithm 4, relaxing the
integer constraint into a box constraint. Five random initializations are used and we take the result
yielding the best fit, subsequently rounding the entries of T to fulfill the {0,1}-constraints and
refitting A. quad pen: as box, but a (concave) quadratic penalty λ Σ_{i,k} T_{i,k}(1 − T_{i,k}) is added to
push the entries of T towards {0,1}. D.C. programming [28] is used for the block updates of T.
[Figure 3: six plots of error against the noise level α. Top row compares box, quad pen, oracle and FindVertices; bottom row compares HOTTOPIXX and FindVertices. Panels show Hamming(T, T*), RMSE(TA, T*A*) and RMSE(TA, D).]

Figure 3: Top: comparison against block schemes. Bottom: comparison against HOTTOPIXX.
Left/Middle/Right: ‖T* − T‖²_F/(m r), ‖T* A* − T A‖_F/(m n)^{1/2} and ‖T A − D‖_F/(m n)^{1/2}.
Comparison to HOTTOPIXX [18]. HOTTOPIXX (HT) is a linear programming approach to
NMF equipped with guarantees such as correctness in the exact case and robustness in the non-exact
case as long as T is (nearly) separable (cf. Section 2.3). HT does not require T to be binary, but
applies to the generic NMF problem D ≈ T A, T ∈ ℝ^{m×r}_+ and A ∈ ℝ^{r×n}_+. Since separability is
crucial to the performance of HT, we restrict our comparison to separable T = [M; I_r], generating
the entries of M i.i.d. from a Bernoulli distribution with parameter 0.5. For runtime reasons,
we lower the dimension to m = 100. Apart from that, the experimental setup is as above. We
use an implementation of HT from [29]. We first pre-normalize D to have unit row sums as
required by HT, and obtain A as first output. Given A, the non-negative least squares problem
min_{T ∈ ℝ^{m×r}_+} ‖D − T A‖²_F is solved. The entries of T are then re-scaled to match the original scale
of D, and thresholding at 0.5 is applied to obtain a binary matrix. Finally, A is re-optimized by
solving the above fitting problem with respect to A in place of T. In the noisy case, HT needs a
tuning parameter to be specified that depends on the noise level, and we consider a grid of 12 values
for that parameter. The range of the grid is chosen based on knowledge of the noise matrix E. For
each run, we pick the parameter that yields the best performance in favour of HT.
Results. From Figure 3, we find that unlike the other approaches, box does not always recover
T* even if the noise level α = 0. FindVertices outperforms box and quad pen throughout. For
α ≤ 0.06, its performance closely matches that of the oracle. In the separable case, our approach
performs favourably as compared to HT, a natural benchmark in this setting.
4.2 Analysis of DNA methylation data.
Background. Unmixing of DNA methylation profiles is a problem of high interest in cancer
research. DNA methylation is a chemical modification of the DNA occurring at specific sites,
so-called CpGs. DNA methylation affects gene expression and in turn various processes such as
cellular differentiation. A site is either unmethylated ("0") or methylated ("1"). DNA methylation
microarrays allow one to measure the methylation level for thousands of sites. In the dataset
considered here, the measurements D (the rows corresponding to sites, the columns to samples)
result from a mixture of cell types. The methylation profiles of the latter are in {0,1}^m, whereas,
depending on the mixture proportions associated with each sample, the entries of D take values in
[0,1]. In other words, we have the model D ≈ T A, with T representing the methylation of the
cell types and the columns of A being elements of the probability simplex.
to recover the mixture proportions of the samples, because e.g. specific diseases, in particular
cancer, can be associated with shifts in these proportions. The matrix T is frequently unknown, and
determining it experimentally is costly. Without T , however, recovering the mixing matrix A is
challenging, in particular since the number of samples in typical studies is small.
Dataset. We consider the dataset studied in [9], with m = 500 CpG sites and n = 12 samples of
blood cells composed of four major types (B-/T-cells, granulocytes, monocytes), i.e. r = 4. Ground
truth is partially available: the proportions of the samples, denoted by A*, are known.
[Figure 4: three panels. Left and middle: heat maps (components 1-4 against samples 1-12) of the ground-truth proportions A* and the estimated A. Right: "number of components vs. error", plotting ‖D − T A‖_F/√(m n) for FindVertices and the ground truth against the number of components used (2 to 6).]

Figure 4: Left: Mixture proportions of the ground truth. Middle: mixture proportions as estimated
by our method. Right: RMSEs ‖D − T A‖_F/(m n)^{1/2} in dependency of r.
Analysis. We apply our approach to obtain an approximate factorization D ≈ T A, T ∈ {0,1}^{m×r},
A ∈ ℝ^{r×n}_+ and A^⊤ 1_r = 1_n. We first obtained T as outlined in Section 3, replacing {0,1} by
{0.1, 0.9} in order to account for measurement noise in D that slightly pushes values towards 0.5.
This can be accommodated by re-scaling T̂^{01} in step 4 of Algorithm 3 by 0.8 and then adding 0.1. Given
T, we solve the quadratic program A = argmin_{A ∈ ℝ^{r×n}_+, A^⊤ 1_r = 1_n} ‖D − T A‖²_F and compare A to
the ground truth A*. In order to judge the fit as well as the matrix T returned by our method, we
compute T* = argmin_{T ∈ {0,1}^{m×r}} ‖D − T A*‖²_F as in (9). We obtain 0.025 as the average mean squared
difference of T and T*, which corresponds to an agreement of 96 percent. Figure 4 indicates at
least a qualitative agreement of A* and A. In the rightmost plot, we compare the RMSEs of our
approach for different choices of r relative to the RMSE of (T*, A*). The error curve flattens after
r = 4, which suggests that with our approach, we can recover the correct number of cell types.
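The quadratic program decomposes over the columns of A. As a rough sketch of how it can be solved, the code below runs a per-column SLSQP minimization with a simplex constraint; this is our own simple stand-in, and a dedicated QP solver would be preferable at scale.

```python
import numpy as np
from scipy.optimize import minimize

def mixing_matrix(T, D):
    # Solve min_A ||D - T A||_F^2  s.t.  A >= 0 and columns of A sum to one,
    # one column at a time.
    r = T.shape[1]
    cons = ({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * r
    A = np.empty((r, D.shape[1]))
    for j in range(D.shape[1]):
        res = minimize(lambda a, d=D[:, j]: np.sum((d - T @ a) ** 2),
                       np.full(r, 1.0 / r), method='SLSQP',
                       bounds=bounds, constraints=cons)
        A[:, j] = res.x
    return A
```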
References
[1] P. Paatero and U. Tapper. Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values. Environmetrics, 5:111–126, 1994.
[2] D. Lee and H. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788–791, 1999.
[3] J. Ramsay and B. Silverman. Functional Data Analysis. Springer, New York, 2006.
[4] F. Bach, J. Mairal, and J. Ponce. Convex Sparse Matrix Factorization. Technical report, ENS, Paris, 2008.
[5] D. Witten, R. Tibshirani, and T. Hastie. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics, 10:515–534, 2009.
[6] A.-J. van der Veen. Analytical Method for Blind Binary Signal Separation. IEEE Signal Processing, 45:1078–1082, 1997.
[7] J. Liao, R. Boscolo, Y. Yang, L. Tran, C. Sabatti, and V. Roychowdhury. Network component analysis: reconstruction of regulatory signals in biological systems. PNAS, 100(26):15522–15527, 2003.
[8] S. Tu, R. Chen, and L. Xu. Transcription Network Analysis by a Sparse Binary Factor Analysis Algorithm. Journal of Integrative Bioinformatics, 9:198, 2012.
[9] E. Houseman et al. DNA methylation arrays as surrogate measures of cell mixture distribution. BMC Bioinformatics, 13:86, 2012.
[10] A. Banerjee, C. Krumpelman, J. Ghosh, S. Basu, and R. Mooney. Model-based overlapping clustering. In KDD, 2005.
[11] E. Segal, A. Battle, and D. Koller. Decomposing gene expression into cellular processes. In Proceedings of the 8th Pacific Symposium on Biocomputing, 2003.
[12] A. Schein, L. Saul, and L. Ungar. A generalized linear model for principal component analysis of binary data. In AISTATS, 2003.
[13] A. Kaban and E. Bingham. Factorisation and denoising of 0-1 data: a variational approach. Neurocomputing, 71:2291–2308, 2008.
[14] E. Meeds, Z. Ghahramani, R. Neal, and S. Roweis. Modeling dyadic data with binary latent factors. In NIPS, 2007.
[15] Z. Zhang, C. Ding, T. Li, and X. Zhang. Binary matrix factorization with applications. In IEEE ICDM, 2007.
[16] P. Miettinen, T. Mielikäinen, A. Gionis, G. Das, and H. Mannila. The discrete basis problem. In PKDD, 2006.
[17] S. Arora, R. Ge, R. Kannan, and A. Moitra. Computing a nonnegative matrix factorization – provably. In STOC, 2012.
[18] V. Bittorf, B. Recht, C. Re, and J. Tropp. Factoring nonnegative matrices with linear programs. In NIPS, 2012.
[19] D. Donoho and V. Stodden. When does non-negative matrix factorization give a correct decomposition into parts? In NIPS, 2003.
[20] P. Erdős. On a lemma of Littlewood and Offord. Bull. Amer. Math. Soc., 51:898–902, 1945.
[21] M. Gu and S. Eisenstat. Efficient algorithms for computing a strong rank-revealing QR factorization. SIAM Journal on Scientific Computing, 17:848–869, 1996.
[22] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins University Press, 1996.
[23] A. Odlyzko. On Subspaces Spanned by Random Selections of ±1 vectors. Journal of Combinatorial Theory A, 47:124–133, 1988.
[24] J. Kahn, J. Komlós, and E. Szemerédi. On the Probability that a ±1 matrix is singular. Journal of the American Mathematical Society, 8:223–240, 1995.
[25] H. Nguyen and V. Vu. Small ball probability, Inverse theorems, and applications. arXiv:1301.0019.
[26] T. Tao and V. Vu. The Littlewood–Offord problem in high dimensions and a conjecture of Frankl and Füredi. Combinatorica, 32:363–372, 2012.
[27] C.-J. Lin. Projected gradient methods for non-negative matrix factorization. Neural Computation, 19:2756–2779, 2007.
[28] P. Tao and L. An. Convex analysis approach to D.C. programming: theory, algorithms and applications. Acta Mathematica Vietnamica, pages 289–355, 1997.
[29] https://sites.google.com/site/nicolasgillis/publications.
Binary Evidence in Lifted Inference
Guy Van den Broeck and Adnan Darwiche
Computer Science Department
University of California, Los Angeles
{guyvdb,darwiche}@cs.ucla.edu
Abstract
Lifted inference algorithms exploit symmetries in probabilistic models to speed
up inference. They show impressive performance when calculating unconditional
probabilities in relational models, but often resort to non-lifted inference when
computing conditional probabilities. The reason is that conditioning on evidence
breaks many of the model?s symmetries, which can preempt standard lifting techniques. Recent theoretical results show, for example, that conditioning on evidence which corresponds to binary relations is #P-hard, suggesting that no lifting
is to be expected in the worst case. In this paper, we balance this negative result
by identifying the Boolean rank of the evidence as a key parameter for characterizing the complexity of conditioning in lifted inference. In particular, we show
that conditioning on binary evidence with bounded Boolean rank is efficient. This
opens up the possibility of approximating evidence by a low-rank Boolean matrix
factorization, which we investigate both theoretically and empirically.
1 Introduction
Statistical relational models are capable of representing both probabilistic dependencies and relational structure [1, 2]. Due to their first-order expressivity, they concisely represent probability distributions over a large number of propositional random variables, causing inference in these models
to quickly become intractable. Lifted inference algorithms [3] attempt to overcome this problem by
exploiting symmetries found in the relational structure of the model.
In the absence of evidence, exact lifted inference algorithms can work well. For large classes of
statistical relational models [4], they perform inference that is polynomial in the number of objects
in the model [5], and are therein exponentially faster than classical inference algorithms. When
conditioning a query on a set of evidence literals, however, these lifted algorithms lose their advantage over classical ones. The intuitive reason is that evidence breaks the symmetries in the model.
The technical reason is that these algorithms perform an operation called shattering, which ends
up reducing the first-order model to a propositional one. This issue is implicitly reflected in the
experiment sections of exact lifted inference papers. Most report on experiments without evidence.
Examples include publications on FOVE [3, 6, 7] and WFOMC [8, 5]. Others found ways to efficiently deal with evidence on only unary predicates. They perform experiments without evidence
on binary or higher-arity relations. There are examples for FOVE [9, 10], WFOMC [11], PTP [12]
and CP [13].
This evidence problem has largely been ignored in the exact lifted inference literature, until recently,
when Bui et al. [10] and Van den Broeck and Davis [11] showed that conditioning on unary evidence
is tractable. More precisely, conditioning on unary evidence is polynomial in the size of evidence.
This type of evidence expresses attributes of objects in the world, but not relations between them.
Unfortunately, Van den Broeck and Davis [11] also showed that this tractability does not extend to
evidence on binary relations, for which conditioning on evidence is #P-hard. Even if conditioning is
hard in general, its complexity should depend on properties of the specific relation that is conditioned
on. It is clear that some binary evidence is easy to condition on, even if it talks about a large number
of objects, for example when all atoms are true (∀X, Y p(X, Y)) or false (∀X, Y ¬p(X, Y)). As
our first main contribution, we formalize this intuition and characterize the complexity of conditioning more precisely in terms of the Boolean rank of the evidence. We show that it is a measure of
how much lifting is possible, and that one can efficiently condition on large amounts of evidence,
provided that its Boolean rank is bounded.
Despite the limitations, useful applications of exact lifted inference were found by sidestepping the
evidence problem. For example, in lifted generative learning [14], the most challenging task is to
compute partition functions without evidence. Regardless, the lack of symmetries in real applications is often cited as a reason for rejecting the idea of lifted inference entirely (informally called
the "death sentence for lifted inference"). This problem has been avoided for too long, and as
lifted inference gains maturity, solving it becomes paramount. As our second main contribution,
we present a first general solution to the evidence problem. We propose to approximate evidence
by an over-symmetric matrix, and will show that this can be achieved by minimizing Boolean rank.
The need for approximating evidence is new and specific to lifted inference: in (undirected) probabilistic graphical models, more evidence typically makes inference easier. Practically, we will show
that existing tools from the data mining community can be used for this low-rank Boolean matrix
factorization task.
The evidence problem is less pronounced in the approximate lifted inference literature. These algorithms often introduce approximations that lead to symmetries in their computation, even when
there are no symmetries in the model. Also for approximate methods, however, the benefits of lifting will decrease with the amount of symmetry-breaking evidence (e.g., Kersting et al. [15]). We
will show experimentally that over-symmetric evidence approximation is also a viable technique for
approximate lifted inference.
2 Encoding Binary Relations in Unary
Our analysis of conditioning is based on a reduction, turning evidence on a binary relation into
evidence on several unary predicates. We first introduce some necessary background.
2.1 Background
An atom p(t_1, . . . , t_n) consists of a predicate p/n of arity n followed by n arguments, which are either (lowercase) constants or (uppercase) logical variables. A literal is an atom a or its negation ¬a.
A formula combines atoms with logical connectives (e.g., ∧, ∨, ⇒). A formula is ground if it does
not contain any logical variables. A possible world assigns a truth value to each ground atom. Statistical relational languages define a probability distribution over possible worlds, where ground atoms
are individual random variables. Numerous languages have been proposed in recent years, and our
analysis will apply to many, including MLNs [16], parfactors [3] and WFOMC problems [8].

Example 1. The following MLNs model the dependencies between web pages. A first, peer-to-peer
model says that student web pages are more likely to link to other student pages.

  w   studentpage(X) ∧ linkto(X, Y) ⇒ studentpage(Y)

It increases the probability of a world by a factor e^w with every pair of pages X, Y that satisfies the
formula. A second, hierarchical model says that professors are more likely to link to course pages.

  w   profpage(X) ∧ linkto(X, Y) ⇒ coursepage(Y)
In this context, evidence e is a truth-value assignment to a set of ground atoms, and is often
represented as a conjunction of literals. In unary evidence, atoms have one argument (e.g.,
studentpage(a)) while in binary evidence, they have two (e.g., linkto(a, b)). Without loss of generality, we assume full evidence on certain predicates (i.e., all their ground atoms are in e).¹ We will
sometimes represent unary evidence as a Boolean vector and binary evidence as a Boolean matrix.

¹ Partial evidence on the relation p can be encoded as full evidence on predicates p0 and p1 by adding
formulas ∀X, Y  p(X, Y) ⇐ p1(X, Y) and ∀X, Y  ¬p(X, Y) ⇐ p0(X, Y) to the model.
Example 2. Evidence e = p(a, a) ∧ p(a, b) ∧ ¬p(a, c) ∧ · · · ∧ ¬p(d, c) ∧ p(d, d) is represented by the matrix P of the relation p(X, Y), with rows X = a, . . . , d and columns Y = a, . . . , d:

        Y=a  Y=b  Y=c  Y=d
  X=a    1    1    0    0
  X=b    1    1    0    1
  X=c    0    0    1    0
  X=d    1    0    0    1
We will look at computing conditional probabilities Pr(q | e) for single ground atoms q. Finally, we
assume a representation language that can express universally quantified logical constraints.
2.2 Vector-Product Binary Evidence
Certain binary relations can be represented by a pair of unary predicates. By adding the formula

  ∀X, ∀Y,  p(X, Y) ⇔ q(X) ∧ r(Y)   (1)

to our statistical relational model and conditioning on the q and r relations, we can condition on
certain types of binary p relations. Assuming that we condition on the q and r predicates, adding
this formula (as hard clauses) to the model does not change the probability distribution over the
atoms in the original model. It is merely an indirect way of conditioning on the p relation.

If we now represent these unary relations by vectors q and r, and the binary relation by the binary
matrix P, the above technique allows us to condition on any relation P that can be factorized as the
outer vector product P = q rᵀ.
Example 3. Consider the following outer vector factorization of the Boolean matrix P.

      [ 0 0 0 0 ]   [ 0 ]
  P = [ 1 0 0 1 ] = [ 1 ] [ 1 0 0 1 ]
      [ 0 0 0 0 ]   [ 0 ]
      [ 1 0 0 1 ]   [ 1 ]

In a model containing Formula 1, this factorization indicates that we can condition on the 16 binary
evidence literals ¬p(a, a) ∧ ¬p(a, b) ∧ · · · ∧ ¬p(d, c) ∧ p(d, d) of P by conditioning on the 8
unary literals ¬q(a) ∧ q(b) ∧ ¬q(c) ∧ q(d) ∧ r(a) ∧ ¬r(b) ∧ ¬r(c) ∧ r(d) represented by q and r.
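To make the encoding concrete, a few lines of NumPy reproduce Example 3: for 0/1 vectors, the outer product is exactly the conjunction q(X) ∧ r(Y) of Formula 1. The literal-printing is our own illustration.

```python
import numpy as np

q = np.array([0, 1, 0, 1])            # q(a), q(b), q(c), q(d)
r = np.array([1, 0, 0, 1])            # r(a), r(b), r(c), r(d)
P = np.outer(q, r)                    # P[i, j] = q[i] AND r[j]

consts = ['a', 'b', 'c', 'd']
unary = [('' if q[i] else '¬') + f'q({c})' for i, c in enumerate(consts)] + \
        [('' if r[i] else '¬') + f'r({c})' for i, c in enumerate(consts)]
print(P)                              # the 4x4 evidence matrix of Example 3
print(' ∧ '.join(unary))              # the 8 unary evidence literals
```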
2.3 Matrix-Product Binary Evidence
This idea of encoding a binary relation in unary relations can be generalized to n pairs of unary
relations, by adding the following formula to our model.

  ∀X, ∀Y,  p(X, Y) ⇔ (q1(X) ∧ r1(Y)) ∨ (q2(X) ∧ r2(Y)) ∨ · · · ∨ (qn(X) ∧ rn(Y))   (2)

By conditioning on the qi and ri relations, we can now condition on a much richer set of binary p
relations. The relations that can be expressed this way are all the matrices that can be represented
by a sum of outer products (in Boolean algebra, where + is ∨ and 1 + 1 = 1):

  P = q1 r1ᵀ ∨ q2 r2ᵀ ∨ · · · ∨ qn rnᵀ = Q Rᵀ   (3)

where the columns of Q and R are the qi and ri vectors respectively, and the matrix multiplication
is performed in Boolean algebra, that is,

  (Q Rᵀ)_{i,j} = ⋁_r Q_{i,r} ∧ R_{j,r}
P=
?
1
?1
?0
1
1
1
0
0
0
0
1
0
?
0
1?
0?
1
=
? ? ? ?| ? ? ? ?| ? ? ? ?|
0
1
1 1
0 0
?1? ?0? ?1? ?1? ?0? ?0?
?0? ?0? ? ?0? ?0? ? ?1? ?1?
1
1
0 0
0 0
=
?
0
?1
?0
1
1
1
0
0
??
0
1
0? ?0
1? ?0
0
1
1
1
0
0
?|
0
0?
1?
0
This factorization shows that we can condition on the binary evidence literals of P (see Example 2)
by conditioning on the unary literals
e = [? q1 (a) ? q1 (b) ? ? q1 (c) ? q1 (d)] ? [r1 (a) ? ? r1 (b) ? ? r1 (c) ? r1 (d)]
? [q2 (a) ? q2 (b) ? ? q2 (c) ? ? q2 (d)] ? [r2 (a) ? r2 (b) ? ? r2 (c) ? ? r2 (d)]
? [? q3 (a) ? ? q3 (b) ? q3 (c) ? ? q3 (d)] ? [? r3 (a) ? ? r3 (b) ? r3 (c) ? ? r3 (d)] .
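The Boolean product in (3) differs from the ordinary one only in that 1 + 1 saturates to 1; a short check verifies the rank-3 factorization of Example 4 (function name is ours).

```python
import numpy as np

def boolean_matmul(Q, R):
    # (Q R^T)_{ij} = OR_k (Q_{ik} AND R_{jk}): integer product, then threshold.
    return (Q.astype(int) @ R.astype(int).T > 0).astype(int)

Q = np.array([[0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 0]])   # columns q1, q2, q3
R = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]])   # columns r1, r2, r3
P = np.array([[1, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0], [1, 0, 0, 1]])
assert (boolean_matmul(Q, R) == P).all()
```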
3 Boolean Matrix Factorization
Matrix factorization (or decomposition) is a popular linear algebra tool. Some well-known instances
are singular value decomposition and non-negative matrix factorization (NMF) [17, 18]. NMF
factorizes into a product of non-negative matrices, which are more easily interpretable, and therefore
attracted much attention for unsupervised learning and feature extraction. These factorizations all
work with real-valued matrices. We instead consider Boolean-valued matrices, with only 0/1 entries.
3.1 Boolean Rank
Factorizing a matrix P as Q Rᵀ in Boolean algebra is a known problem called Boolean Matrix
Factorization (BMF) [19, 20]. BMF factorizes a (k × l) matrix P into a (k × n) matrix Q and an
(l × n) matrix R, where potentially n ≪ k and n ≪ l, and we always have that n ≤ min(k, l).
Any Boolean matrix can be factorized this way, and the smallest number n for which it is possible is
called the Boolean rank of the matrix. Unlike (textbook) real-valued rank, computing the Boolean
rank is NP-hard and cannot be approximated unless P=NP [19]. The Boolean and real-valued rank
are incomparable, and the Boolean rank can be exponentially smaller than the real-valued rank.
Example 5. The factorization in Example 4 is a BMF with Boolean rank 3. It is only a decomposition in Boolean algebra and not over the real numbers. Indeed, the matrix product over the reals
contains an incorrect value of 2:

  [ 0 1 0 ]         [ 1 0 0 1 ]   [ 1 1 0 0 ]
  [ 1 1 0 ] ×_real  [ 1 1 0 0 ] = [ 2 1 0 1 ] ≠ P.
  [ 0 0 1 ]         [ 0 0 1 0 ]   [ 0 0 1 0 ]
  [ 1 0 0 ]                       [ 1 0 0 1 ]
Note that P is of full real-valued rank (having four non-zero singular values) and that its Boolean
rank is lower than its real-valued rank.
3.2 Approximate Boolean Factorization
Computing Boolean ranks is a theoretical problem. Because most real-world matrices will have
nearly full rank (i.e., almost min(k, l)), applications of BMF look at approximate factorizations. The
goal is to find a pair of (small) Boolean matrices Q_{k×n} and R_{l×n} such that P_{k×l} ≈ Q_{k×n} Rᵀ_{l×n},
or more specifically, to find matrices that optimize some objective that trades off approximation
error and Boolean rank n. When n ≪ k and n ≪ l, this approximation extracts interesting structure
and removes noise from the matrix. This has caused BMF to receive considerable attention in the
data mining community recently, as a tool for analyzing high-dimensional data. It is used to find
important and interpretable (i.e., Boolean) concepts in a data matrix.

Unfortunately, the approximate BMF optimization problem is NP-hard as well, and inapproximable [20]. However, several algorithms have been proposed that work well in practice. Algorithms exist that find good approximations for fixed values of n [20], or when P is sparse [21].
BMF is related to other data mining tasks, such as biclustering [22] and tiling databases [23], whose
algorithms could also be used for approximate BMF. In the context of social network analysis, BMF
is related to stochastic block models [24] and their extensions, such as infinite relational models.
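As a rough illustration of what such algorithms do, the greedy heuristic below repeatedly adds the rank-1 pattern that covers the most still-uncovered ones while penalizing newly introduced errors. It is a naive stand-in of our own design, not the ASSO algorithm of [20].

```python
import numpy as np

def greedy_bmf(P, n):
    k, l = P.shape
    covered = np.zeros((k, l), dtype=bool)
    Q = np.zeros((k, n), dtype=int)
    R = np.zeros((l, n), dtype=int)
    for t in range(n):
        best_score, best_u, best_v = 0, None, None
        for u_tuple in {tuple(col) for col in P.T}:   # candidate basis patterns
            u = np.array(u_tuple)
            # per-column gain: newly covered 1s minus newly introduced 0-errors
            gain = ((P == 1) & ~covered & (u[:, None] == 1)).sum(axis=0) \
                 - ((P == 0) & (u[:, None] == 1)).sum(axis=0)
            v = (gain > 0).astype(int)
            score = int(gain[gain > 0].sum())
            if score > best_score:
                best_score, best_u, best_v = score, u, v
        if best_u is None:                            # no pattern helps anymore
            break
        Q[:, t], R[:, t] = best_u, best_v
        covered |= np.outer(best_u, best_v).astype(bool)
    return Q, R
```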
4 Complexity of Binary Evidence
Our goal in this section is to provide a new complexity result for reasoning with binary evidence
in the context of lifted inference. Our result can be thought of as a parametrized complexity result, similar to ones based on treewidth in the case of propositional inference. To state the new
result, however, we must first define formally the computational task. We will also review the key
complexity result that is known about this computation (i.e., the one we will be improving on).

Consider an MLN Δ and let Ψ_m contain a set of ground literals representing binary evidence. That is,
for some binary predicate p(X, Y), evidence Ψ_m contains precisely one literal (positive or negative)
for each grounding of predicate p(X, Y). Here, m represents the number of objects that parameters
X and Y may take.² Therefore, evidence Ψ_m must contain precisely m² literals.

² We assume without loss of generality that all logical variables range over the same set of objects.
Suppose now that Pr_m is the distribution induced by MLN Δ over m objects, and q is a ground
literal. Our analysis will apply to classes of models Δ that are domain-liftable [4], which means that
the complexity of computing Pr_m(q) without evidence is polynomial in m. One such class is the
set of MLNs with two logical variables per formula [5].

Our task is then to compute the posterior probability Pr_m(q | e_m), where e_m is a conjunction of the
ground literals in binary evidence Ψ_m. Moreover, our goal here is to characterize the complexity of
this computation as a function of evidence size m.

The following recent result provides a lower bound on the complexity of this computation [11].

Theorem 1. Suppose that evidence Ψ_m is binary. Then there exists a domain-liftable MLN Δ with
a corresponding distribution Pr_m, and a posterior marginal Pr_m(q | e_m) that cannot be computed
by any algorithm whose complexity grows polynomially in evidence size m, unless P = NP.
This is an analogue to results according to which, for example, the complexity of computing posterior probabilities in propositional graphical models is exponential in the worst case. Yet, for these
models, the complexity of inference can be parametrized, allowing one to bound the complexity of
inference on some models. Perhaps the best example of such a parametrized complexity is the one
based on treewidth, which can be thought of as a measure of the model's sparsity (or tree-likeness).
In this case, inference can be shown to be linear in the size of the model and exponential only in its
treewidth. Hence, this parametrized complexity result allows us to state that inference can be done
efficiently on models with bounded treewidth.
We now provide a similar parameterized complexity result, but for evidence in lifted inference. In
this case, the parameter we use to characterize complexity is that of Boolean rank.
Theorem 2. Suppose that evidence Ψ_m is binary and has a bounded Boolean rank. Then for every
domain-liftable MLN Δ and corresponding distribution Pr_m, the complexity of computing the posterior
marginal Pr_m(q | e_m) grows polynomially in evidence size m.
The proof of this theorem is based on the reduction from binary to unary evidence, which is described
in Section 2. In particular, our reduction first extends the MLN Δ with Formula 2, leading to the new
MLN Δ′ and new pairs of unary predicates qi and ri. This does not change the domain-liftability
of Δ′, as Formula 2 is itself liftable. We then replace binary evidence Ψ_m by unary evidence Ψ′. That
is, the ground literals of the binary predicate p are replaced by ground literals of the unary predicates
qi and ri (see Example 4). This unary evidence is obtained by Boolean matrix factorization. As the
matrix size in our reduction is m², the following lemma implies that the first step of our reduction
is polynomial in m for bounded-rank evidence.

Lemma 3 (Miettinen [25]). The complexity of Boolean matrix factorization for matrices with
bounded Boolean rank is polynomial in their size.

The main observation in our reduction is that Formula 2 has size n, which is the Boolean rank of the
given binary evidence. Hence, when the Boolean rank n is bounded by a constant, the size of the
extended MLN Δ′ is independent of the evidence size and is proportional to the size of the original
MLN Δ.
We have now reduced inference on MLN Δ and binary evidence Ψ_m into inference on an extended
MLN Δ′ and unary evidence Ψ′. The second observation behind the proof is the following.

Lemma 4 (Van den Broeck and Davis [11], Van den Broeck [26]). Suppose that evidence Ψ_m is
unary. Then for every domain-liftable MLN Δ and corresponding distribution Pr_m, the complexity
of computing the posterior marginal Pr_m(q | e_m) grows polynomially in evidence size m.

Hence, computing posterior probabilities can be done in time which is polynomial in the size of
unary evidence m, which completes our proof.
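Putting the pieces together, the reduction itself is mechanical: given a Boolean factorization P = Q Rᵀ of the evidence matrix, one emits the unary literals for the fresh predicates qi and ri of Formula 2. A sketch, with our own literal formatting:

```python
import numpy as np

def binary_to_unary_evidence(P, Q, R, consts):
    # Sanity check: the factorization must reproduce the binary evidence.
    assert ((Q.astype(int) @ R.astype(int).T > 0).astype(int) == P).all()
    literals = []
    for i in range(Q.shape[1]):                 # one pair (q_i, r_i) per rank
        for val, c in zip(Q[:, i], consts):
            literals.append(('' if val else '¬') + f'q{i + 1}({c})')
        for val, c in zip(R[:, i], consts):
            literals.append(('' if val else '¬') + f'r{i + 1}({c})')
    return literals

# With Q, R and P from Example 4, this returns the 24 unary literals listed
# there; the extended MLN then conditions on those instead of on P directly.
```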
We can now identify additional similarities between treewidth and Boolean rank. Exact inference algorithms for probabilistic graphical models typically perform two steps, namely to (a) compute a tree
decomposition of the graphical model (or a corresponding variable order), and (b) perform inference
that is polynomial in the size of the decomposition, but potentially exponential in its (tree)width. The
analogous steps for conditioning are to (a) perform a BMF, and (b) perform inference that is polynomial in the size of the BMF, but potentially exponential in its rank. The (a) steps are both NP-hard,
yet are efficient assuming bounded treewidth [27] or bounded Boolean rank (Lemma 3). Whereas
treewidth is a measure of tree-likeness and sparsity of the graphical model, Boolean rank seems to
be a fundamentally different property, more related to the presence of symmetries in evidence.
5 Over-Symmetric Evidence Approximation
Theorem 2 opens up many new possibilities. Even for evidence with high Boolean rank, it is possible
to find a low-rank approximate BMF of the evidence, as is commonly done for other data mining
and machine learning problems. Algorithms already exist for solving this task (cf. Section 3).
Example 6. The evidence matrix from Example 4 has Boolean rank three. Dropping the third pair
of vectors reduces the Boolean rank to two.

  [ 1 1 0 0 ]   [ 0 ]               [ 1 ]               [ 1 1 0 0 ]
  [ 1 1 0 1 ] ≈ [ 1 ] [ 1 0 0 1 ] ∨ [ 1 ] [ 1 1 0 0 ] = [ 1 1 0 1 ]
  [ 0 0 1 0 ]   [ 0 ]               [ 0 ]               [ 0 0 0 0 ]
  [ 1 0 0 1 ]   [ 1 ]               [ 0 ]               [ 1 0 0 1 ]

This factorization is approximate, as it flips the evidence for atom p(c, c) from true to false (the 0
at position (c, c) of the right-hand matrix). By paying this price, the evidence has more symmetries, and we can condition
on the binary relation by introducing only two instead of three new pairs (qi, ri) of unary predicates.
Low-rank approximate BMF is an instance of a more general idea: that of over-symmetric evidence
approximation. This means that when we want to compute Pr(q | e), we approximate it by computing Pr(q | e′) instead, with evidence e′ that permits more efficient inference. In this case, it is more
efficient because it maintains more symmetries of the model and permits more lifting. Because all
lifted inference algorithms, exact or approximate, exploit symmetries, we expect this general idea,
and low-rank approximate BMF in particular, to improve the performance of any lifted inference
algorithm.
Having a small amount of incorrect evidence in the approximation need not be a problem. As these
literals are not covered by the first most important vector pairs, they can be considered as noise in
the original matrix. Hence, a low-rank approximation may actually improve the performance of, for
example, a lifted collective classification algorithm. On the other hand, the approximation made in
Example 6 may not be desirable if we are querying attributes of the constant c, and we may prefer
to approximate other areas of the evidence matrix instead. There are many challenges in finding
appropriate evidence approximations, which makes the task all the more interesting.
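In its simplest form, over-symmetric approximation just truncates a Boolean factorization, trading flipped evidence literals for symmetry. A sketch (assuming the vector pairs are already ordered by importance, as BMF algorithms typically produce them):

```python
import numpy as np

def truncate_evidence(P, Q, R, n):
    # Keep only the first n vector pairs of the factorization; report how
    # many evidence literals flip as a result.
    P_approx = (Q[:, :n].astype(int) @ R[:, :n].astype(int).T > 0).astype(int)
    return P_approx, int((P_approx != P).sum())

# With Example 6 (n = 2), exactly one literal flips: p(c, c) becomes false.
```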
6 Empirical Evaluation
To complement the theoretical analysis from the previous sections, we will now report on experiments that investigate the following practical questions.
Q1 How well can we approximate a real-world relational data set by a low-rank Boolean matrix?
Q2 Is Boolean rank a good indicator of the complexity of inference, as suggested by Theorem 2?
Q3 Is over-symmetric evidence approximation a viable technique for approximate lifted inference?
To answer Q1, we compute approximations of the linkto binary relation in the WebKB data set
using the ASSO algorithm for approximate BMF [20]. The WebKB data set consists of web pages
from the computer science departments of four universities [28]. The data has information about
words that appear on pages, labels of pages and links between web pages (linkto relation). There
are four folds, one for each university. The exact evidence matrix for the linkto relation ranges in
size from 861 by 861 to 1240 by 1240. Its real-valued rank ranges from 384 to 503. Performing a
BMF approximation in this domain adds or removes hyperlinks between web pages, so that more
web pages can be grouped together that behave similarly.
Figure 1 plots the approximation error for increasing Boolean ranks, measured as the number of
incorrect evidence literals. The error goes down quickly for low rank, and is reduced by half after
Boolean rank 70 to 80, even though the matrix dimensions and real-valued rank are much higher.
Note that these evidence matrices contain around a million entries, and are sparse. Hence, these
approximations correctly label 99.7% to 99.95% of the atoms.
Figure 1: Approximation BMF error in terms of the number of incorrect literals for the WebKB linkto relation. [Plot: error (0 to roughly 3000 incorrect literals) against Boolean rank (0 to 120), one curve per fold: cornell, texas, washington, wisconsin.]

Figure 2: First-order NNF circuit size (number of nodes) for increasing Boolean rank n, for (a) the peer-to-peer and (b) the hierarchical model.

    Rank n   Circuit Size (a)   Circuit Size (b)
    0        18                 24
    1        58                 50
    2        160                129
    3        1873               371
    4        > 2129             1098
    5        ?                  3191
    6        ?                  9571

Figure 3: KLD of LMCMC on different BMF approximations, relative to the KLD of vanilla MCMC on the same approximation, for (a) the Texas and (b) the Wisconsin data sets. From top to bottom, the lines represent exact evidence (blue), and approximations (red) of rank 150, 100, 75, 50, 20, 10, 5, 2, and 1. [Plots: relative KLD (log scale, 0.1 to 1) against iteration (0 to 600000).]
To answer Q2, we perform two sets of experiments. Firstly, we look at exact lifted inference and
investigate the influence of adding Formula 2 to the "peer-to-peer" and "hierarchical" MLNs from
Example 1. The goal is to condition on linkto relations with increasing rank n. These models
are compiled using the WFOMC [8] algorithm into first-order NNF circuits, which allow for exact
domain-lifted inference (cf. Lemma 4). Figure 2 shows the sizes of these circuits. As expected,
circuit sizes grow exponentially with n. Evidence breaks more symmetries in the peer-to-peer model
than in the hierarchical model, causing the circuit size to increase more quickly with Boolean rank.
Since the connection between rank and exact inference is obvious from Theorem 2, the more
interesting question in Q2 is whether Boolean rank is indicative of the complexity of approximate lifted inference as well. Therefore, we investigate its influence on the Lifted MCMC algorithm (LMCMC) [29] with Rao-Blackwellized probability estimation [30]. LMCMC interleaves
standard MCMC steps (here Gibbs sampling) with jumps to states that are symmetric in the graphical model, in order to speed up mixing of the chain. We run LMCMC on the WebKB MLN of Davis
and Domingos [31], which has 333 first-order formulas and over 1 million random variables. It
classifies web pages into 6 categories, based on their link structure and the 50 most predictive words
they contain. We learn its parameters with the Alchemy package and obtain evidence sets of varying
Boolean rank from the factorizations of Figure 1.³ For these, we run both vanilla and lifted MCMC,
and measure the KL divergence (KLD) between the marginal distribution at each iteration⁴, and a
ground truth obtained from 3 million iterations on the corresponding evidence set. Figure 3 plots the
KLD of LMCMC divided by the KLD of MCMC. It shows that the improvement of LMCMC over
MCMC goes down with Boolean rank, answering Q2 positively.
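The relative-KLD measurements behind Figure 3 reduce to a short computation once per-iteration marginal estimates are stored. A minimal sketch, assuming Bernoulli marginals held as numpy arrays (the variable names are illustrative, not from the experimental code):

```python
import numpy as np

def bernoulli_kld(p, q, eps=1e-12):
    # Sum of per-atom KL divergences between Bernoulli marginals p and q.
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return float(np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))))

# Relative KLD as plotted in Figure 3, at a given iteration:
# rel = bernoulli_kld(truth, lifted_est) / bernoulli_kld(truth, vanilla_est)
```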
To answer Q3, we look at the KLD between different evidence approximations Pr(· | e′_n) of rank
n, and the true marginals Pr(· | e) conditioned on exact evidence. As this requires a good estimate
of Pr(· | e), we make our learned WebKB model more tractable by removing formulas about word
content. For two approximations e′_a and e′_b such that rank a < b, we expect LMCMC to converge
faster to Pr(· | e′_a) than to Pr(· | e′_b), as suggested by Figure 3. However, because Pr(· | e′_a) is a more
crude approximation of Pr(· | e) than Pr(· | e′_b) is, the KLD at convergence should be worse for a
³ When synthetically generating evidence of these ranks, results are comparable.
⁴ Runtime per iteration is comparable for both algorithms. BMF runtime is negligible.
Figure 4: Error for different low-rank approximations of WebKB, in KLD from true marginals. [Four panels of KL divergence (log scale) against iteration, each showing ground MCMC, lifted MCMC on exact evidence, and lifted MCMC on two approximations: (a) Cornell, ranks 2 and 10; (b) Cornell, ranks 75 and 150; (c) Washington, ranks 75 and 150; (d) Wisconsin, ranks 75 and 150.]
than for b. Hence, we expect to see a trade-off, where the lowest ranks are optimal in the beginning,
higher ranks become optimal later on, and the exact model is optimal at convergence.
Figure 4 shows exactly that, for a representative sample of ranks and data sets. In Figure 4(a), rank
2 and 10 outperform LMCMC with the exact evidence at first. Exact evidence overtakes rank 2
after 40k iterations, and rank 10 after 50k. After 80k iterations, even non-lifted MCMC outperforms
these crude approximations. Figure 4(b) shows the other side of the spectrum, where a rank 75
and 150 approximation are overtaken at iterations 90k and 125k. Figure 4(c) is representative of
other datasets. Note here that at around iteration 50k, rank 75 in turn outperforms the rank 150
approximation, which has fewer symmetries and does not permit as much lifting. Finally, Figure 4(d)
shows the ideal case for low-rank approximation. This is the largest dataset, and therefore the most
challenging inference task. Here, LMCMC on e converges slowly compared to its approximations e0 ,
and e0 results in almost perfect marginals. The crossover point where exact inference outperforms
the approximation is never reached in practice. This answers Q3 positively.
7 Conclusions
We presented two main results. The first is a more precise complexity characterization of conditioning on binary evidence, in terms of its Boolean rank. The second is a technique to approximate
binary evidence by a low-rank Boolean matrix factorization. This is a first type of over-symmetric
evidence approximation that can speed up lifted inference. We showed empirically that low-rank
BMF speeds up approximate inference, leading to improved approximations.
For future work, we want to evaluate the practical implications of the theory developed for other
lifted inference algorithms, such as lifted BP, and look at the performance of over-symmetric evidence approximation on machine learning tasks such as collective classification. There are many
remaining challenges in finding good evidence-approximation schemes, including ones that are
query-specific (cf. de Salvo Braz et al. [32]) or that incrementally run inference to find better approximations (cf. Kersting et al. [33]). Furthermore, we want to investigate other subsets of binary
relations for which conditioning could be efficient, in particular functional relations p(X, Y ), where
each X has at most a limited number of associated Y values.
Acknowledgments
We thank Pauli Miettinen, Mathias Niepert, and Jilles Vreeken for helpful suggestions. This work
was supported by ONR grant #N00014-12-1-0423, NSF grant #IIS-1118122, NSF grant #IIS-0916161, and the Research Foundation-Flanders (FWO-Vlaanderen).
References
[1] L. Getoor and B. Taskar, editors. An Introduction to Statistical Relational Learning. MIT Press, 2007.
[2] Luc De Raedt, Paolo Frasconi, Kristian Kersting, and Stephen Muggleton, editors. Probabilistic inductive logic programming: theory and applications. Springer-Verlag, 2008.
[3] David Poole. First-order probabilistic inference. In Proceedings of IJCAI, pages 985–991, 2003.
[4] Manfred Jaeger and Guy Van den Broeck. Liftability of probabilistic inference: Upper and lower bounds. In Proceedings of the 2nd International Workshop on Statistical Relational AI, 2012.
[5] Guy Van den Broeck. On the completeness of first-order knowledge compilation for lifted probabilistic inference. In Advances in Neural Information Processing Systems 24 (NIPS), pages 1386–1394, 2011.
[6] Rodrigo de Salvo Braz, Eyal Amir, and Dan Roth. Lifted first-order probabilistic inference. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 1319–1325, 2005.
[7] B. Milch, L.S. Zettlemoyer, K. Kersting, M. Haimes, and L.P. Kaelbling. Lifted probabilistic inference with counting formulas. Proceedings of the 23rd AAAI Conference on Artificial Intelligence, 2008.
[8] Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, and Luc De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In Proceedings of IJCAI, pages 2178–2185, 2011.
[9] N. Taghipour, D. Fierens, J. Davis, and H. Blockeel. Lifted variable elimination with arbitrary constraints. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics, 2012.
[10] H.H. Bui, T.N. Huynh, and R. de Salvo Braz. Exact lifted inference with distinct soft evidence on every object. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, 2012.
[11] Guy Van den Broeck and Jesse Davis. Conditioning in first-order knowledge compilation and lifted probabilistic inference. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, 2012.
[12] Vibhav Gogate and Pedro Domingos. Probabilistic theorem proving. In Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence (UAI), pages 256–265, 2011.
[13] A. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted inference seen from the other side: The tractable features. In Proceedings of the 24th Conference on Neural Information Processing Systems (NIPS), 2010.
[14] Guy Van den Broeck, Wannes Meert, and Jesse Davis. Lifted generative parameter learning. In Statistical Relational AI (StaRAI) workshop, July 2013.
[15] K. Kersting, B. Ahmadi, and S. Natarajan. Counting belief propagation. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence (UAI), pages 277–284, 2009.
[16] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62(1):107–136, 2006.
[17] D. Seung and L. Lee. Algorithms for non-negative matrix factorization. Advances in Neural Information Processing Systems, 13:556–562, 2001.
[18] M. Berry, M. Browne, A. Langville, V. Pauca, and R. Plemmons. Algorithms and applications for approximate nonnegative matrix factorization. In Computational Statistics and Data Analysis, 2006.
[19] Pauli Miettinen, Taneli Mielikäinen, Aristides Gionis, Gautam Das, and Heikki Mannila. The discrete basis problem. In Knowledge Discovery in Databases, pages 335–346. Springer, 2006.
[20] Pauli Miettinen, Taneli Mielikäinen, Aristides Gionis, Gautam Das, and Heikki Mannila. The discrete basis problem. IEEE Transactions on Knowledge and Data Engineering, 20(10):1348–1362, 2008.
[21] Pauli Miettinen. Sparse Boolean matrix factorizations. In IEEE 10th International Conference on Data Mining (ICDM), pages 935–940. IEEE, 2010.
[22] Boris Mirkin. Mathematical classification and clustering, volume 11. Kluwer Academic Pub, 1996.
[23] Floris Geerts, Bart Goethals, and Taneli Mielikäinen. Tiling databases. In Discovery Science, 2004.
[24] Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[25] Pauli Miettinen. Matrix decomposition methods for data mining: Computational complexity and algorithms. PhD thesis, 2009.
[26] Guy Van den Broeck. Lifted Inference and Learning in Statistical Relational Models. PhD thesis, KU Leuven, January 2013.
[27] Hans L Bodlaender. Treewidth: Algorithmic techniques and results. Springer, 1997.
[28] M. Craven and S. Slattery. Relational learning with statistical predicate invention: Better models for hypertext. Machine Learning Journal, 43(1/2):97–119, 2001.
[29] Mathias Niepert. Markov chains on orbits of permutation groups. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence (UAI), 2012.
[30] Mathias Niepert. Symmetry-aware marginal density estimation. In Proceedings of the 27th Conference on Artificial Intelligence (AAAI), 2013.
[31] Jesse Davis and Pedro Domingos. Deep transfer via second-order markov logic. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 217–224, 2009.
[32] R. de Salvo Braz, S. Natarajan, H. Bui, J. Shavlik, and S. Russell. Anytime lifted belief propagation. Proceedings of the 6th International Workshop on Statistical Relational Learning, 2009.
[33] K. Kersting, Y. El Massaoudi, B. Ahmadi, and F. Hadiji. Informed lifting for message-passing. In Proceedings of the 24th AAAI Conference on Artificial Intelligence, 2010.
4,267 | 4,862 | Unsupervised Spectral Learning of FSTs
Raphaël Bailly
Xavier Carreras
Ariadna Quattoni
Universitat Politècnica de Catalunya
Barcelona, 08034
rbailly,carreras,[email protected]
Abstract
Finite-State Transducers (FST) are a standard tool for modeling paired inputoutput sequences and are used in numerous applications, ranging from computational biology to natural language processing. Recently Balle et al. [4] presented
a spectral algorithm for learning FST from samples of aligned input-output sequences. In this paper we address the more realistic, yet challenging setting where
the alignments are unknown to the learning algorithm. We frame FST learning as
finding a low rank Hankel matrix satisfying constraints derived from observable
statistics. Under this formulation, we provide identifiability results for FST distributions. Then, following previous work on rank minimization, we propose a
regularized convex relaxation of this objective which is based on minimizing a
nuclear norm penalty subject to linear constraints and can be solved efficiently.
1 Introduction
This paper addresses the problem of learning probability distributions over pairs of input-output
sequences, also known as the transduction problem. A pair of sequences is made of an input sequence,
built from an input alphabet, and an output sequence, built from an output alphabet. Finite State
Transducers (FST) are one of the main probabilistic tools used to model such distributions and
have been used in numerous applications ranging from computational biology to natural language
processing. A variety of algorithms for learning FST have been proposed in the literature, most of
them are based on EM optimizations [9, 11] or grammatical inference techniques [8, 6].
In essence, an FST can be regarded as an HMM that generates bi-symbols of combined input-output
symbols. The input and output symbols may be generated jointly or independently conditioned on
the previous observations. A particular generation pattern constitutes what we call an alignment.
    GAATTCAG-    GAATTCAG-    GAATTC-AG    GAATTC-AG
    | | || |     | || | |     | | || |     | || | |
    GGA-TC-GA    GGAT-C-GA    GGA-TCGA-    GGAT-CGA-
To be able to handle different alignments, a special empty symbol ε is added to the input and output
alphabets. With this enlarged set of bi-symbols, the model is able to generate an input symbol (resp.
an output symbol) without an output symbol (resp. input symbol). These special bi-symbols will
be represented by the pairs (x:ε) (resp. (ε:y)). As an example, the first alignment above will correspond
to the two possible representations (G:G)(ε:G)(A:ε)(A:A)(T:ε)(T:T)(C:C)(A:ε)(G:G)(ε:A) and
(G:G)(A:ε)(ε:G)(A:A)(T:ε)(T:T)(C:C)(A:ε)(G:G)(ε:A). Under this model the
probability of observing a pair of un-aligned input-output sequences is obtained by integrating over
all possible alignments.
Following a recent trend of work on spectral learning algorithms for finite state machines
[14, 2, 17, 18, 7, 16, 10, 5], Balle et al. [4] presented an algorithm for learning FST where the input
to the algorithm are samples of aligned input-output sequences. As with most spectral methods the
core idea of this algorithm is to exploit low-rank decompositions of some Hankel matrix representing
the distribution of aligned sequences. To estimate this Hankel matrix it is assumed that the algorithm
can sample aligned sequences, i.e. it can directly observe sequences of enlarged bi-symbols.
While the problem of learning FST from fully aligned sequences (what we sometimes refer to as
supervised learning) has been solved, the problem of deriving an unsupervised spectral method that
can be trained from samples of input-output sequences alone (i.e. where the alignment is hidden)
remains open. This setting is significantly more difficult due to the fact that we must deal with two
sets of hidden variables: the states and the alignments. In this paper we address this unsupervised
setting and present a spectral algorithm that can approximate the distribution of paired sequences
generated by an FST without having access to aligned sequences. To the best of our knowledge this
is the first spectral algorithm for this problem.
The main challenge in the unsupervised setting is that since the alignment information is not available, the Hankel matrices (as in [4]) can no longer be directly estimated from observable statistics.
However, a key observation is that we can nevertheless compute observable statistics that constrain the coefficients of the Hankel matrix. This is because the probability of observing a pair of
un-aligned input-output sequences (i.e. an observable statistic) is computed by summing over all
possible alignments; i.e. by summing entries of the hidden Hankel matrix. The main idea of our algorithm is to exploit these constraints and find a Hankel matrix (from which we can directly recover
the model) which both agrees on the observed statistics and has a low-rank matrix factorization.
In brief, our main contribution is to show that an FST can be approximated by solving an optimization which is based on finding a low-rank matrix satisfying a set of constraints derived from observable statistics and Hankel structure. We provide sample complexity bounds and some identifiability
results for this optimization that show that, theoretically, the rank and the parameters of an FST
distribution can be identified. Following previous work on rank minimization, we propose a regularized convex relaxation of the proposed objective which is based on minimizing a nuclear norm
penalty subject to linear constraints. The proposed relaxation balances a trade-off between model
complexity (measured by the nuclear norm penalty) and fitting the observed statistics. Synthetic
experiments show that the performance of our unsupervised algorithm efficiently approximates that
of a supervised method trained from fully aligned sequences.
The paper is organized as follows. Section 2 gives preliminaries on FST and spectral learning
methods, and establishes that an FST can be induced from a Hankel matrix of observable aligned
statistics. Section 3 presents a generalized form of Hankel matrices for FST that allows observation constraints to be expressed efficiently. One cannot observe generalized Hankel matrices without assuming access to aligned samples. To solve this problem, Section 4 formulates finding the Hankel
matrix of an FST from unaligned samples as a rank minimization problem. This section also presents
the main theoretical results of the method, as well as a convex relaxation of the rank minimization
problem. Section 5 presents results on synthetic data and Section 6 concludes.
2 Preliminaries

2.1 Finite-State Transducers
Definition 1. A Finite-State Transducer (FST) of rank d is given by:
- alphabets Σ+ = {x1, . . . , xp} (input), Σ− = {y1, . . . , yq} (output)
- vectors α1 ∈ R^d, α∞ ∈ R^d
- for all x ∈ Σ+ ∪ {ε}, y ∈ Σ− ∪ {ε}, a matrix M_{x:y} ∈ R^{d×d}, with M_{ε:ε} = 0

Definition 2. Let s be an input sequence, and let t be an output sequence. An alignment of (s, t) is given by a sequence of pairs (x1:y1) . . . (xn:yn) such that the sequence obtained from x1 . . . xn (resp. y1 . . . yn) by removing the empty symbols ε equals s (resp. t).

Definition 3. The set of alignments for a pair of sequences (s, t) is denoted [s, t].

Definition 4. Let Ω = (Σ+ ∪ {ε}) × (Σ− ∪ {ε}). The set of aligned sequences is Ω*. The empty string is denoted λ.
Definition 5. Let T be an FST, and let w = (x1:y1) . . . (xn:yn) be an aligned sequence. Then the value of w for the model T is given by:

    r_T(w) = α1ᵀ M_{x1:y1} · · · M_{xn:yn} α∞

Definition 6. Let (s, t) be an i/o (input/output) sequence. Then the value for (s, t) computed by an FST T is given by the sum of the values for all alignments:

    r_T((s, t)) = Σ_{(x1:y1)...(xn:yn) ∈ [s,t]} r_T((x1:y1) . . . (xn:yn))
A more complete description of FST can be found in [15].
2.2 Computing with an FST
In order to compute the value of a pair of sequences (s, t), one needs to sum over all possible
alignments, which is generally exponential in the length of s and t. Standard techniques (e.g. the
edit distance algorithm) can be applied in order to compute such a value in polynomial time.
Proposition 1. Let T be an FST, s_{1:n} ∈ Σ+*, t_{1:m} ∈ Σ−*. The forward vectors are defined by

    F_{0,0} = α1ᵀ,   F_{0,j} = α1ᵀ M_{ε:t1} · · · M_{ε:tj},   F_{i,0} = α1ᵀ M_{s1:ε} · · · M_{si:ε},
    F_{i,j} = F_{i−1,j} M_{si:ε} + F_{i,j−1} M_{ε:tj} + F_{i−1,j−1} M_{si:tj}.

It is then possible to compute r_T((s, t)) = F_{n,m} α∞ in O(d²|s||t|). The sum of r_T over all possible values, r_T(Ω*) = Σ_{s∈Σ+*, t∈Σ−*} r_T((s, t)), can be computed with the formula

    r_T(Ω*) = α1ᵀ [Id − M]⁻¹ α∞

where M = Σ_{x∈Σ+∪{ε}, y∈Σ−∪{ε}} M_{x:y}.
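A minimal sketch of this forward recursion, assuming the FST parameters are stored as numpy arrays and bi-symbols involving ε are keyed by None (a representation chosen here for illustration):

```python
import numpy as np

def fst_value(s, t, alpha1, alpha_inf, M):
    """r_T((s, t)) via the forward recursion of Proposition 1.
    alpha1, alpha_inf: (d,) arrays; M: dict (x, y) -> (d, d) array,
    with None standing in for the empty symbol epsilon."""
    n, m = len(s), len(t)
    F = np.empty((n + 1, m + 1, alpha1.size))
    F[0, 0] = alpha1
    for i in range(1, n + 1):                    # input-only prefix steps
        F[i, 0] = F[i - 1, 0] @ M[(s[i - 1], None)]
    for j in range(1, m + 1):                    # output-only prefix steps
        F[0, j] = F[0, j - 1] @ M[(None, t[j - 1])]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i, j] = (F[i - 1, j] @ M[(s[i - 1], None)]
                       + F[i, j - 1] @ M[(None, t[j - 1])]
                       + F[i - 1, j - 1] @ M[(s[i - 1], t[j - 1])])
    return F[n, m] @ alpha_inf                   # O(d^2 |s||t|) overall
```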
Example 1. Let us consider a particular subclass of FST: Σ− = Σ+ = {0, 1}, where M_{0:1} = M_{1:0} = M_{0:ε} = M_{ε:1} = 0.

[The display gives a concrete rank-2 model A in this subclass: the vectors α1, α∞ ∈ R² and the four remaining matrices M_{0:0}, M_{1:1}, M_{1:ε}, M_{ε:0} ∈ R^{2×2}, with entries among 0, 1, 1/2, 1/3, 1/4 and 1/6.]

The FST A satisfies the constraints.

Let us draw all the paths for the i/o sequence (01, 001). [The path lattice is omitted here.] The dashed red edges are discarded because of the constraints. Thus, there are only two different non-zero paths, corresponding to (0:0)(ε:0)(1:1) (in green) and (ε:0)(0:0)(1:1) (in blue).

Let us consider the model A which satisfies the constraints above. One has r_A((0:0)(ε:0)(1:1)) = 1/96, r_A((ε:0)(0:0)(1:1)) = 1/192 and, as those two aligned sequences are the only possible alignments for (01, 001), one has r_A((01, 001)) = 1/64. It is possible to check that r_A(Ω*) = 1, thus the model computes a probability distribution.
2.3 Hankel Matrices
Let us recall some basic definitions and properties.
Let Σ be an alphabet, U ⊂ Σ* a set of prefixes, V ⊂ Σ* a set of suffixes. U is said to be prefix-closed if uv ∈ U ⇒ u ∈ U. V is said to be suffix-closed if uv ∈ V ⇒ v ∈ V.

Let us denote U◦ the set U ∪ {uσ | u ∈ U, σ ∈ Σ}. Let us denote ◦V the set V ∪ {σv | v ∈ V, σ ∈ Σ}.

A Hankel matrix on U and V is a matrix with rows corresponding to elements u ∈ U and columns corresponding to v ∈ V, which satisfies uv = u′v′ ⇒ H(u, v) = H(u′, v′).

Definition 7. Let H be a Hankel matrix for U◦ and ◦V. One supposes that λ ∈ U and λ ∈ V. One then defines the partial Hankel matrices, for u ∈ U and v ∈ V:

    H_λ(u, v) = H(u, v),   H_σ(u, v) = H(u, σv),   H_1(v) = H(λ, v),   H_∞(u) = H(u, λ).
The main result that we will be using is the following:

Proposition 2. Let H be a Hankel matrix for U◦ and ◦V. One supposes that U is prefix-closed, V is suffix-closed, and that rank(H_λ) = rank(H). Then the WA defined by

    α1ᵀ = H_1ᵀ H_λ⁺,   α∞ = H_∞,   M_σ = H_σ H_λ⁺

computes a mapping f such that ∀u ∈ U, ∀v ∈ V, f(uv) = H_λ(u, v).

We will not give a proof of this result, as a more general result is given further on. The rank equality comes from the fact that the WA defined above has the same rank as H_λ, and that the rank of a mapping f which satisfies f(uv) = H(u, v) is at least the rank of H. The following example shows that prefix- and suffix-closedness are necessary conditions.
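The reconstruction in Proposition 2 is a few lines of linear algebra. A minimal sketch, assuming the partial Hankel blocks are given as numpy arrays (the shapes and names below are our assumptions):

```python
import numpy as np

def reconstruct_wa(H_lam, H_blocks, H_1, H_inf):
    """H_lam: |U| x |V| block; H_blocks: dict sigma -> |U| x |V| block;
    H_1: length-|V| vector; H_inf: length-|U| vector."""
    H_pinv = np.linalg.pinv(H_lam)        # |V| x |U| pseudo-inverse of H_lambda
    alpha1 = H_1 @ H_pinv                 # row vector alpha_1^T = H_1^T H_lam^+
    M = {s: Hs @ H_pinv for s, Hs in H_blocks.items()}  # M_sigma, |U| x |U|
    return alpha1, M, H_inf               # alpha_inf = H_inf
```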
Example 2. Let us consider the following Hankel matrix over the set of prefixes U◦ and the set of suffixes ◦V, with U = {λ, a³b³} and V = {λ, a³b³}:

    H_λ = [ 0    0
            0   1/4 ]   (rows and columns indexed by λ and a³b³),

[the completed blocks of H over U◦ and ◦V, whose rows and columns extend those of H_λ by one symbol a or b, contain only the entries 0 and 1/4].

One has λ ∈ U and λ ∈ V, and also rank(H_λ) = rank(H) = 1, thus the computed WA is rank 1. Such a WA cannot compute a mapping such that r_T(λ) = 0 and r_T(a⁶b⁶) = 1/4. The complete Hankel matrix has at least rank 7.
3 Inducing FST from Generalized Hankel Matrices
Proposition 2 tells us that if we had access to certain sub-blocks of the Hankel matrix for aligned
sequences we could recover the FST model. However, we do not have access to the hidden
alignment information: we only have access to the statistics p(s, t), which we will call observations. One natural idea would be to search for a Hankel matrix that agrees with the observations. To do so, we introduce observable constraints, which are linear constraints of the form

    p(s, t) = Σ_{(x1:y1)...(xn:yn) ∈ [s,t]} r_T((x1:y1) . . . (xn:yn)),

where r_T((x1:y1) . . . (xn:yn)) is computed from the Hankel matrix.
From a matrix satisfying the hypothesis of Proposition 2 and the observation constraints, one can
derive an FST computing a mapping which agrees on the observations.
Given an i/o sequence (s, t), the size of [s, t] (hence the size of the Hankel matrix) is in general exponential in the size of s and t. In order to overcome that problem when considering the observation
constraints, one will consider aggregations of rows and columns, corresponding to sets of prefixes
and suffixes. Obviously, the definition of a Hankel matrix must be extended to this case.
One will denote by P(A) the set of subsets of A.
Definition 8. Let u, u′ ∈ P(Ω*). The set uu′ is defined by uu′ = {ww′ | w ∈ u, w′ ∈ u′}.

We will denote sets of alignments as follows: (x1:y1) . . . (xn:yn) will denote the singleton set {(x1:y1) . . . (xn:yn)}, which contains a single aligned sequence; then (x1:y1) . . . (xn:yn)[s, t] will denote the set {(x1:y1) . . . (xn:yn) w | w ∈ [s, t]}, which extends {(x1:y1) . . . (xn:yn)} with all ways of aligning (s, t). We will also use [s, t](x1:y1) . . . (xn:yn) analogously.

Definition 9. A generalized prefix (resp. generalized suffix) is the empty set or a set of the form [s, t](x1:y1) . . . (xn:yn) (resp. (x1:y1) . . . (xn:yn)[s, t]), where the aligned part is possibly empty.
3.1 Generalized Hankel
One needs to extend the definition of a Hankel matrix to the generalized prefixes and suffixes.
Definition 10. Let U be a set of generalized prefixes, V be a set of generalized suffixes. Let H be a matrix indexed by U (in rows) and V (in columns). H is a generalized Hankel matrix if it satisfies: for all ū, ū′ ⊆ U and all v̄, v̄′ ⊆ V,

    ⊎_{u∈ū, v∈v̄} uv = ⊎_{u′∈ū′, v′∈v̄′} u′v′   ⟹   Σ_{u∈ū, v∈v̄} H(u, v) = Σ_{u′∈ū′, v′∈v̄′} H(u′, v′),

where ⊎ is the disjoint union.
In particular, if U and V are sets of regular prefixes and suffixes, this definition encompasses the
regular definition for a Hankel matrix.
Definition 11. Let U be a set of generalized prefixes. U is prefix-closed if

    [s, t](x1:y1) . . . (xn:yn) ∈ U ⇒ [s, t](x1:y1) . . . (x_{n−1}:y_{n−1}) ∈ U
    [s_{1:n}, t_{1:k}] ∈ U ⇒ [s_{1:n−1}, t_{1:k}](s_n:ε), [s_{1:n}, t_{1:k−1}](ε:t_k), [s_{1:n−1}, t_{1:k−1}](s_n:t_k) ∈ U

Definition 12. Let V be a set of generalized suffixes. V is suffix-closed if

    (x1:y1) . . . (xn:yn)[s, t] ∈ V ⇒ (x2:y2) . . . (xn:yn)[s, t] ∈ V
    [s_{1:n}, t_{1:k}] ∈ V ⇒ (s_1:ε)[s_{2:n}, t_{1:k}], (ε:t_1)[s_{1:n}, t_{2:k}], (s_1:t_1)[s_{2:n}, t_{2:k}] ∈ V

Definition 13. Let U be a set of generalized prefixes, V be a set of generalized suffixes. The right-operator completion of U is the set U◦ = U ∪ {u(x:y) | u ∈ U, (x:y) ∈ Ω}. The left-operator completion of V is the set ◦V = V ∪ {(x:y)v | v ∈ V, (x:y) ∈ Ω}.
A key result is the following, which is analogous to Proposition 2 for generalized Hankel matrices:

Proposition 3. Let U and V be two sets of generalized prefixes and generalized suffixes. Let H be a Hankel matrix built from U◦ and ◦V. One supposes that rank(H_λ) = rank(H), U is prefix-closed and V is suffix-closed. Then the model A defined by α1ᵀ = H_1ᵀ H_λ⁺, α∞ = H_∞, M_{x:y} = H_{x:y} H_λ⁺ computes a mapping r_A such that

    ∀u ∈ U, ∀v ∈ V,   r_A(uv) = H_λ(u, v)
Proof. The proof can be found in the Appendix.
Let S be a sample of unaligned sequences. Let pref_{S,in} (resp. pref_{S,out}, suff_{S,in}, suff_{S,out}) be the prefix (resp. suffix) closure of the input (resp. output) strings in S. Let U = {[s, t]}_{s∈pref_{S,in}, t∈pref_{S,out}} and V = {[s, t]}_{s∈suff_{S,in}, t∈suff_{S,out}}. The sets U and V contain all the observed pairs of unaligned sequences, and one can check that the sizes of U◦ and ◦V are polynomial in the size of S.
Example 3. Let us continue with the same model A as in Example 1. Let us now consider the prefixes λ, (0:0) and the suffixes λ, (1:1). The Hankel matrices will be:

[the display gives H_1, H_∞, H_λ and the completed blocks H_{1:ε}, H_{0:0}, H_{ε:0}, H_{1:1} over these prefixes and suffixes, with entries among 0, 1, 1/4, 1/12, 1/24, 1/32, 1/96 and 1/192]

and one finally gets the model A′ defined by:

[the display gives α1, α∞ and the matrices M_{1:ε}, M_{ε:0}, M_{0:0}, M_{1:1} of A′, with entries among 0, 1, 1/4, 1/3, 1/6 and 1/8]

One can easily check that A′ computes the same probability distribution as A.
4 FST Learning as Non-convex Optimization
Proposition 3 shows that FST models can be recovered from generalized Hankel matrices. In this
section we show how FST learning can be framed as an optimization problem where one searches
for a low-rank generalized Hankel matrix that agrees with observation constraints derived from a
sample. We assume here that p is a probability distribution over i/o sequences.
We will denote by z = (p([s, t]))_{s∈Σ+*, t∈Σ−*} the vector built from observable probabilities, and by z_S = (p_S([s, t]))_{s∈Σ+*, t∈Σ−*} the set of empirical observable probabilities, where p_S is the frequency deduced from an i.i.d. sample S.

Let H̃ be the vector describing the coefficients of H. Let K be the matrix such that K H̃ = 0 represents the Hankel constraints (cf. Definition 10). Let O be the matrix such that O H̃ = z represents the observation constraints (i.e. H([s, t], λ) = p([s, t])).

The optimization task is the following:

    minimize_H   rank(H)
    subject to   ‖O H̃ − z‖₂ ≤ ρ
                 K H̃ = 0                                        (1)
                 ‖H‖₂ ≤ 1.

The condition ‖H‖₂ ≤ 1 is necessary to bound the set of possible solutions. In particular, if H is the Hankel matrix of a probability distribution, the condition is satisfied: one has ‖H‖₁ ≤ 1 as each column is a probability distribution, as well as ‖H‖_F ≤ 1, and thus ‖H‖₂ ≤ √(‖H‖₁ ‖H‖_F) ≤ 1. Let us remark that the set of matrices satisfying the constraints is a compact set.

Let us denote H_ρ the class of Hankel matrices that are solutions of (1) for a given ρ, where z represents the probabilities p(u_i v_j). Let us denote H_ρ^S the class of Hankel matrices that are solutions of (1) for a given ρ, where z_S represents p_S(u_i v_j), the observed frequencies in a sample S i.i.d. with respect to p.

Proposition 4. Let p be a distribution over i/o sequences computed by an FST. There exists U and V such that any solution in H_0 leads to an FST

    α1ᵀ = H_1ᵀ H_λ⁺,   α∞ = H_∞,   M_{x:y} = H_{x:y} H_λ⁺

which computes p.

Proof. The proof can be found in the Appendix.
4.1 Theoretical Properties
We now present the main theoretical results related to the optimization problem (1). The first one
concerns the rank identification, while the second one concerns the consistency of the method.
Proposition 5. Let p be a rank d distribution computed by an FST. There exists ρ₂ such that for any δ > 0 and any i.i.d. sample S,

    |S| > (ρ₂ + √(8 log(1/δ)))² / ρ₂²

implies that any H ∈ H_ρ^S solution of (1) with ρ = (1 + √(2 log(1/δ))) / √|S| leads to a rank-d FST with probability at least 1 − δ.
Proof. The proof can be found in the Appendix.
Proposition 6. Let p be a rank d distribution computed by an FST. There exists ρ₁ such that for any δ > 0 and any i.i.d. sample S,

    |S| > (ρ₁ + √(2 log(1/δ)))² / ρ₁²

implies that for any H ∈ H_ρ^S solution of (1) with ρ = (1 + √(2 log(1/δ))) / √|S|, leading to a model A_S, there exists a model A computing p such that

    ‖A, A_S‖_∞ ≤ O(d² ρ / σ_p³),

where ‖A, A_S‖_∞ is the maximum distance between model parameters, and σ_p is a non-zero parameter depending on p.
Proof. The proof can be found in the Appendix.
Example 4. This continues Examples 1 and 3. Let us first remark that, among all the values used to build the Hankel matrices in Example 3, some of them correspond to observable statistics, as there is only one possible alignment for them. Conversely, the exact values of r_M((0:0)(ε:0)(1:1)) and r_M((0:0)(1:ε)(1:1)) are not observable. Then the rank minimization objective is not sufficient, as it allows any value for those variables.

Let us consider now larger sets of prefixes and suffixes: λ, (ε:0), (0:0) and λ, (1:ε), (1:1). One then has
[the display gives the three partial Hankel blocks H_λ, H_{1:ε} and H_{1:1}, each 3 × 3, with rows indexed by λ, (ε:0), (0:0) and columns by λ, (1:ε), (1:1); the known entries include 1/4, 1/12, 1/24, 1/32 and 1/36, the remaining ones being unknown]
We want to minimize the rank of H′ subject to the constraints O and K:

    H′ = [ H_λ ; H_{1:ε} ; H_{1:1} ] = (h_ij),

    O :  h11 = 1/4,  h12 = 1/12,  h13 = 0,  h21 = 1/24,  . . . ,  h63 + h92 = 1/64,
    K :  h12 = h41,  h13 = h71,  h22 = h51,  h23 = h81,  h32 = h61,  h33 = h91.

The relation h63 + h92 = 1/64 is due to the fact that r_A((0:0)(ε:0)(1:1)) + r_A((ε:0)(0:0)(1:1)) = r_A((01, 001)) = 1/64. One has h22 = h51 as they both represent p((ε:0)(1:ε)). It is clear that H′ has rank greater than or equal to 2.
The only way to reach rank 2 under the constraints is
h22 = h51 = 1/72, h52 = 1/216, h63 = 1/192, h92 = 1/96
Thus, the process of rank minimization under linear constraints leads to one single model, which is
identical to the original one. Of course, in the general case, the rank minimization objective may
lead to several models.
4.2 Convex Relaxation
The problem as it is stated in (1) is NP-hard to solve in the size of the Hankel matrix, hence impossible to deal with in practical cases. One can solve instead a convex relaxation of the problem (1), obtained by replacing the rank objective by the nuclear norm. The relaxed optimization statement is then the following:

    minimize_H   ‖H‖_*
    subject to   ‖O H̃ − z‖₂ ≤ ρ
                 K H̃ = 0                                        (2)
                 ‖H‖₂ ≤ 1.

This type of relaxation has been used extensively in multiple settings [13]. The nuclear norm ‖·‖_* plays the same role as the ℓ₁ norm in convex relaxations of the ℓ₀ norm, used to reach sparsity under linear constraints.
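Relaxation (2) is directly expressible in an off-the-shelf convex modeling tool. A minimal sketch with CVXPY, assuming O, K and z have already been assembled as dense arrays compatible with the column-major vectorization of H (an assumption of this sketch):

```python
import cvxpy as cp

def solve_relaxation(O, K, z, n, m, rho):
    H = cp.Variable((n, m))
    h = cp.vec(H)                        # column-major vectorization of H
    constraints = [cp.norm(O @ h - z, 2) <= rho,   # observation constraints
                   K @ h == 0,                     # Hankel constraints
                   cp.norm(H, 2) <= 1]             # spectral-norm bound
    cp.Problem(cp.Minimize(cp.normNuc(H)), constraints).solve()
    return H.value
```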
5 Experiments
We ran synthetic experiments using samples generated from random FST with input-output alphabets of size two. The main goal of our experiments was to compare our algorithm to a supervised
spectral algorithm for FST that has access to alignments. In both methods, the Hankel was defined
for prefixes and suffixes up to length 1. Each run consists of generating a target FST, and generating
N aligned samples according to the target distribution. These samples were directly used by the supervised algorithm. Then, we removed the alignment information from each sample (thus obtaining
a pair of unaligned strings), and we used them to train an FST with our general learning algorithm,
trying different values for a C parameter that trades-off the nuclear norm term and the observation
term. We ran this experiment for 8 target models of 5 states, for different sampling sizes. We measure the L1 error of the learned models with respect to the target distribution on all unaligned pairs
of strings whose sizes sum up to 6. We report results for geometric averages.
In addition, we ran two additional methods. First a factorized method that assumes that the two
sequences are generated independently, and learns two separate weighted automata using a spectral
method.

Figure 1: Learning curves for different methods: SUP, supervised; UNS, unsupervised with different regularizers (C = 0.01 and C = 0.1); EM, expectation maximization. The curves are averages of L1 error for random target models of 5 states. [Plot: L1 error (0.0001 to 0.003, log scale) against number of samples (10k to 1M, log scale).]

Its performance is very bad, with L1 error rates around 0.08, which confirms that our target
models generate highly dependent sequence pairs. This baseline result also implies that the rest
of the methods can learn the dependencies between paired strings. Second, we ran an Expectation
Maximization algorithm (EM).
Figure 1 shows the performance of the learned models with respect to the number of samples.
Clearly, our algorithm is able to estimate the target distribution and gets close to the performance of
the supervised method, while making use of much simpler statistics. EM performed slightly better
than the spectral methods, but nonetheless at the same levels of performance.
One can find other experimental results for the unsupervised spectral method in [1], where it is
shown that, under certain circumstances, an unsupervised spectral method can perform better than supervised EM. Though the framework (unsupervised learning of PCFGs) is not the same, the method
is similar and the optimization statement is identical.
6 Conclusion
In this paper we presented a spectral algorithm for learning FST from unaligned sequences. This is
the first paper to derive a spectral algorithm for the unsupervised FST learning setting. We prove that
there is theoretical identifiability of the rank and the parameters of an FST distribution, using a rank
minimization formulation. However, this problem is NP-hard, and it remains open whether there
exists a polynomial method with identifiability results. Classically, rank minimization problems
are solved with convex relaxations such as the nuclear norm minimization we have proposed. In
experiments, we show that our method is comparable to a fully supervised spectral method, and
close to the performance of EM.
Our approach follows a similar idea to that of [3] since both works combine classic ideas from
spectral learning with classic ideas from low rank matrix completion. The basic idea is to frame
learning of distributions over structured objects as a low-rank matrix factorization subject to linear
constraints derived from observable statistics. This method applies to other grammatical inference
domains, such as unsupervised spectral learning of PCFGs ([1]).
Acknowledgments
We are grateful to Borja Balle and the anonymous reviewers for providing us with helpful comments.
This work was supported by a Google Research Award, and by projects XLike (FP7-288342), ERA-Net CHIST-ERA VISEN, TACARDI (TIN2012-38523-C02-02), BASMATI (TIN2011-27479-C04-03), and SGR-LARCA (2009-SGR-1428). Xavier Carreras was supported by the Ramón y Cajal program of the Spanish Government (RYC-2008-02223).
References
[1] R. Bailly, X. Carreras, F. M. Luque, and A. Quattoni. Unsupervised spectral learning of WCFG as low-rank matrix completion. EMNLP, 2013.
[2] R. Bailly, F. Denis, and L. Ralaivola. Grammatical inference as a principal component analysis problem. In Proc. ICML, 2009.
[3] B. Balle and M. Mohri. Spectral learning of general weighted automata via constrained matrix completion. In Proc. of NIPS, 2012.
[4] B. Balle, A. Quattoni, and X. Carreras. A spectral learning algorithm for finite state transducers. ECML-PKDD, 2011.
[5] Borja Balle, Ariadna Quattoni, and Xavier Carreras. Local loss optimization in operator models: A new insight into spectral learning. In John Langford and Joelle Pineau, editors, Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1879–1886, New York, NY, USA, July 2012. Omnipress.
[6] M. Bernard, J-C. Janodet, and M. Sebban. A discriminative model of stochastic edit distance in the form of a conditional transducer. Grammatical Inference: Algorithms and Applications, 4201, 2006.
[7] B. Boots, S. Siddiqi, and G. Gordon. Closing the learning planning loop with predictive state representations. I. J. Robotic Research, 2011.
[8] F. Casacuberta. Inference of finite-state transducers by using regular grammars and morphisms. Grammatical Inference: Algorithms and Applications, 1891, 2000.
[9] A. Clark. Partially supervised learning of morphology with stochastic transducers. In Proc. of NLPRS, pages 341–348, 2001.
[10] Shay B. Cohen, Karl Stratos, Michael Collins, Dean P. Foster, and Lyle Ungar. Spectral learning of latent-variable PCFGs. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 223–231, Jeju Island, Korea, July 2012. Association for Computational Linguistics.
[11] J. Eisner. Parameter estimation for probabilistic finite-state transducers. In Proc. of ACL, pages 1–8, 2002.
[12] G. Stewart and J.-G. Sun. Matrix perturbation theory. Academic Press, 1990.
[13] Maryam Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, Electrical Engineering Dept., 2002.
[14] D. Hsu, S. M. Kakade, and T. Zhang. A spectral algorithm for learning hidden Markov models. In Proc. of COLT, 2009.
[15] Mehryar Mohri. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269–311, 1997.
[16] A.P. Parikh, L. Song, and E.P. Xing. A spectral algorithm for latent tree graphical models. ICML, 2011.
[17] S.M. Siddiqi, B. Boots, and G.J. Gordon. Reduced-rank hidden Markov models. AISTATS, 2010.
[18] L. Song, B. Boots, S. Siddiqi, G. Gordon, and A. Smola. Hilbert space embeddings of hidden Markov models. ICML, 2010.
[19] L. Zwald and G. Blanchard. On the convergence of eigenspaces in kernel principal component analysis. NIPS, 2005.
song:2 speech:1 york:1 remark:2 generally:1 cga:1 clear:1 extensively:1 siddiqi:3 reduced:1 generate:2 lsi:1 estimated:1 disjoint:1 blue:1 express:1 key:2 nevertheless:1 ram:1 relaxation:9 sum:4 run:1 hankel:41 extends:1 c02:1 p3:1 h12:1 draw:1 appendix:4 comparable:1 bound:2 annual:1 constraint:23 politecnica:1 generates:1 structured:1 according:1 slightly:1 em:7 island:1 kakade:1 making:1 s1:7 remains:2 describing:1 fp7:1 resents:1 available:1 observe:2 spectral:25 m01:1 original:1 assumes:1 cf:1 linguistics:3 a4:3 graphical:1 logscale:1 exploit:2 eisner:1 build:1 objective:5 added:1 rt:12 said:2 distance:3 separate:1 hmm:1 length:2 msi:3 providing:1 minimizing:2 balance:1 difficult:1 statement:2 hij:1 stated:1 unknown:1 perform:1 boot:3 observation:11 markov:3 discarded:1 finite:9 hxy:4 ecml:1 extended:1 frame:2 y1:6 ww:1 perturbation:1 pair:12 learned:2 yx1:3 barcelona:1 nip:2 address:3 able:3 pattern:1 sparsity:1 challenge:1 uxy:2 encompasses:1 program:1 built:4 green:1 natural:3 regularized:2 representing:1 brief:1 yq:1 numerous:2 concludes:1 literature:1 balle:6 geometric:1 fully:3 loss:1 generation:1 clark:1 shay:1 sufficient:1 xp:1 foster:1 editor:1 row:3 karl:1 course:1 mohri:2 casacuberta:1 supported:2 ariadna:2 grammatical:5 overcome:1 curve:2 xn:4 computes:5 forward:1 made:1 approximate:1 observable:12 compact:1 robotic:1 summing:2 assumed:1 discriminative:1 un:5 search:2 latent:2 learn:1 obtaining:1 mehryar:1 domain:1 vj:2 aistats:1 main:8 s2:2 upc:1 x1:6 enlarged:2 transduction:1 ny:1 sub:1 exponential:2 learns:1 removing:1 formula:1 h32:1 bad:1 symbol:12 closeness:1 a3:3 exists:5 concern:2 phd:1 conditioned:1 stratos:1 tc:1 bailly:3 luque:1 partially:1 applies:1 satisfies:5 h13:2 conditional:1 goal:1 hard:2 principal:2 bernard:1 experimental:1 h21:1 collins:1 dept:1 |
4,268 | 4,863 | On Decomposing the Proximal Map
Yaoliang Yu
Department of Computing Science, University of Alberta, Edmonton AB T6G 2E8, Canada
[email protected]
Abstract
The proximal map is the key step in gradient-type algorithms, which have become prevalent in large-scale high-dimensional problems. For simple functions
this proximal map is available in closed-form while for more complicated functions it can become highly nontrivial. Motivated by the need of combining regularizers to simultaneously induce different types of structures, this paper initiates
a systematic investigation of when the proximal map of a sum of functions decomposes into the composition of the proximal maps of the individual summands.
We not only unify a few known results scattered in the literature but also discover
several new decompositions obtained almost effortlessly from our theory.
1 Introduction
Regularization has become an indispensable part of modern machine learning algorithms. For example, the ℓ₂-regularizer for kernel methods [1] and the ℓ₁-regularizer for sparse methods [2] have
led to immense successes in various fields. As real data become more and more complex, different
types of regularizers, usually nonsmooth functions, have been designed. In many applications, it
is thus desirable to combine regularizers, usually taking their sum, to promote different structures
simultaneously.
Since many interesting regularizers are nonsmooth, they are harder to optimize numerically, especially in large-scale high-dimensional settings. Thanks to recent advances [3?5], gradient-type
algorithms have been generalized to take nonsmooth regularizers explicitly into account. And due
to their cheap per-iteration cost (usually linear-time), these algorithms have become prevalent in
many fields recently. The key step of such gradient-type algorithms is to compute the proximal map
(of the nonsmooth regularizer), which is available in closed-form for some specific regularizers.
However, the proximal map becomes highly nontrivial when we start to combine regularizers.
The main goal of this paper is to systematically investigate when the proximal map of a sum of
functions decomposes into the composition of the proximal maps of the individual functions, which
we simply term prox-decomposition. Our motivation comes from a few known decomposition
results scattered in the literature [6?8], all in the form of our interest. The study of such proxdecompositions is not only of mathematical interest, but also the backbone of popular gradient-type
algorithms [3?5]. More importantly, a precise understanding of this decomposition will shed light
on how we should combine regularizers, taking computational efforts explicitly into account.
After setting the context in Section 2, we motivate the decomposition rule with some justifications, as well as some cautionary results. Based on a sufficient condition presented in Section 3.1,
we study how "invariance" of the subdifferential of one function would lead to nontrivial prox-decompositions. Specifically, we prove in Section 3.3 that when the subdifferential of one function
is scaling invariant, then the prox-decomposition always holds if and only if another function is
radial, which is, quite unexpectedly, exactly the same condition proven recently for the validity of
the representer theorem in the context of kernel methods [9, 10]. The generalization to cone invariance is considered in Section 3.4, and enables us to recover most known prox-decompositions, as
well as some new ones falling out quite naturally.
Our notations are mostly standard. We use ι_C(x) for the indicator function that takes 0 if x ∈ C
and ∞ otherwise, and 1_C(x) for the indicator that takes 1 if x ∈ C and 0 otherwise. The symbol
Id stands for the identity map, and the extended real line R ∪ {∞} is denoted as R̄. Throughout the
paper we denote ∂f(x) as the subdifferential of the function f at point x.
2 Preliminary
Let our domain be some (real) Hilbert space (H, ⟨·, ·⟩), with the induced Hilbertian norm ‖·‖. If
needed, we will assume some fixed orthonormal basis {e_i}_{i∈I} is chosen for H, so that for x ∈ H
we are able to refer to its "coordinates" x_i = ⟨x, e_i⟩.
For any closed convex proper function f : H → R̄, we define its Moreau envelope as [11]

    ∀y ∈ H,  M_f(y) = min_{x∈H} ½‖x − y‖² + f(x),        (1)

and the related proximal map

    P_f(y) = argmin_{x∈H} ½‖x − y‖² + f(x).        (2)
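As a concrete illustration (not part of the original text), the following minimal numpy sketch evaluates (2) for two standard cases that admit closed forms: f = t‖·‖₁, whose proximal map is soft-thresholding, and f = ι_C for a box constraint C, whose proximal map is the Euclidean projection. The function names are our own.

```python
import numpy as np

def prox_l1(y, t=1.0):
    # Proximal map of f = t * ||.||_1: componentwise soft-thresholding.
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def prox_box(y, lo=-1.0, hi=1.0):
    # Proximal map of the indicator of the box [lo, hi]^n: plain projection.
    return np.clip(y, lo, hi)

y = np.array([2.5, -0.3, 0.8, -4.0])
print(prox_l1(y, 1.0))   # [ 1.5 -0.   0.  -3. ]
print(prox_box(y))       # [ 1.  -0.3  0.8 -1. ]
```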
Due to the strong convexity of ½‖·‖² and the closedness and convexity of f, P_f(y) always exists
and is unique. Note that M_f : H → R while P_f : H → H. When f = ι_C is the indicator of some
closed convex set C, the proximal map reduces to the usual projection. Perhaps the most interesting
property of M_f, known as Moreau's identity, is the following decomposition [11]

    M_f(y) + M_{f*}(y) = ½‖y‖²,        (3)

where f*(z) = sup_x ⟨x, z⟩ − f(x) is the Fenchel conjugate of f. It can be shown that M_f is Fréchet
differentiable, hence taking derivatives w.r.t. y on both sides of (3) yields

    P_f(y) + P_{f*}(y) = y.        (4)
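A quick numerical sanity check of (4), assuming the standard conjugate pair f = ‖·‖₁ and f* = indicator of the ℓ∞ unit ball (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=6)

prox_f = np.sign(y) * np.maximum(np.abs(y) - 1.0, 0.0)  # P_f for f = ||.||_1
prox_fstar = np.clip(y, -1.0, 1.0)                      # P_{f*}: projection onto the l_inf ball

assert np.allclose(prox_f + prox_fstar, y)              # Moreau's identity (4)
```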
3 Main Results
Our main goal is to investigate and understand the equality (we always assume f + g ≢ ∞)

    P_{f+g} ≟ P_f ∘ P_g ≟ P_g ∘ P_f,        (5)

where f, g ∈ Γ₀, the set of all closed convex proper functions on H, and f ∘ g denotes the mapping
composition. We present first some cautionary results.
Note that P_f = (Id + ∂f)⁻¹, hence under minor technical assumptions P_{f+g} = (P_{2f}⁻¹ + P_{2g}⁻¹)⁻¹ ∘ 2Id.
However, computationally this formula is of little use. On the other hand, it is possible to develop
forward-backward splitting procedures¹ to numerically compute P_{f+g}, using only P_f and P_g as
subroutines [12]. Our focus is on the exact closed-form formula (5). Interestingly, under some
"shrinkage" assumption, the prox-decomposition (5), even if it does not necessarily hold, can still
be used in subgradient algorithms [13].
Our first result is encouraging:
Proposition 1. If H = R, then for any f, g ∈ Γ₀, there exists h ∈ Γ₀ such that P_h = P_f ∘ P_g.
Proof: In fact, Moreau [11, Corollary 10.c] proved that P : H → H is a proximal map iff it
is nonexpansive and it is the subdifferential of some convex function in Γ₀. Although the latter
condition in general is not easy to verify, it reduces to monotone increasing when H = R (note that
P must be continuous). Since both P_f and P_g are increasing and nonexpansive, it follows easily that
so is P_f ∘ P_g, hence the existence of h ∈ Γ₀ so that P_h = P_f ∘ P_g.
In a general Hilbert space H, we again easily conclude that the composition P_f ∘ P_g is always a
nonexpansion, which means that it is "close" to being a proximal map. This justifies the composition
P_f ∘ P_g as a candidate for the decomposition of P_{f+g}. However, we note that Proposition 1 indeed
can fail already in R²:
¹ In some sense, this procedure is to compute P_{f+g} ≈ lim_{t→∞} (P_f ∘ P_g)^t, modulo some intermediate steps.
Essentially, our goal is to establish the one-step convergence of that iterative procedure.
Example 1. Let H = R². Let f = ι_{x₁=x₂} and g = ι_{x₂=0}. Clearly both f and g are in Γ₀. The
proximal maps in this case are simply projections: P_f(x) = ((x₁+x₂)/2, (x₁+x₂)/2) and P_g(x) = (x₁, 0).
Therefore P_f(P_g(x)) = (x₁/2, x₁/2). We easily verify that the inequality

    ‖P_f(P_g(x)) − P_f(P_g(y))‖² ≤ ⟨P_f(P_g(x)) − P_f(P_g(y)), x − y⟩

is not always true, a contradiction if P_f ∘ P_g were a proximal map [11, Eq. (5.3)].
Even worse, when Proposition 1 does hold, in general we cannot expect the decomposition (5) to
be true without additional assumptions.
Example 2. Let H = R and q(x) = ½x². It is easily seen that P_{λq}(x) = x/(1+λ). Therefore
P_q ∘ P_q = ¼ Id ≠ ⅓ Id = P_{q+q}. We will give an explanation for this failure of composition shortly.
Nevertheless, as we will see, the equality in (5) does hold in many scenarios, and an interesting
theory can be suitably developed.
3.1 A Sufficient Condition
We start with a sufficient condition that yields (5). This result, although easy to obtain, will play a
key role in our subsequent development.
Using the first order optimality condition and the definition of the proximal map (2), we have

    P_{f+g}(y) − y + ∂(f + g)(P_{f+g}(y)) ∋ 0        (6)
    P_g(y) − y + ∂g(P_g(y)) ∋ 0        (7)
    P_f(P_g(y)) − P_g(y) + ∂f(P_f(P_g(y))) ∋ 0.        (8)

Adding the last two equations we obtain

    P_f(P_g(y)) − y + ∂g(P_g(y)) + ∂f(P_f(P_g(y))) ∋ 0.        (9)

Comparing (6) and (9) gives us
Theorem 1. A sufficient condition for P_{f+g} = P_f ∘ P_g is

    ∀ x ∈ H,  ∂g(P_f(x)) ⊆ ∂g(x).        (10)

Proof: Let x = P_g(y). Then by (9) and the subdifferential rule ∂(f + g) ⊇ ∂f + ∂g we verify that
P_f(P_g(y)) satisfies (6), hence P_{f+g} = P_f ∘ P_g follows since the proximal map is single-valued.
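As a toy instance of (10), take f = g = |·| applied componentwise: soft-thresholding never flips the sign of a coordinate, so ∂g(P_f(x)) ⊆ ∂g(x) and the decomposition holds, i.e., thresholding by 2 equals thresholding by 1 twice. A short numerical check (our own illustration):

```python
import numpy as np

def soft(y, t):
    # Soft-thresholding: the proximal map of t * |.| applied componentwise.
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

x = np.linspace(-5.0, 5.0, 21)
assert np.allclose(soft(x, 2.0), soft(soft(x, 1.0), 1.0))  # P_{|.|+|.|} = P_{|.|} o P_{|.|}
```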
We note that a special form of our sufficient condition has appeared in the proof of [8, Theorem 1],
whose main result also follows immediately from our Theorem 4 below. Let us fix f, and define

    K_f = {g ∈ Γ₀ : f + g ≢ ∞, (f, g) satisfy (10)}.

Immediately we have
Proposition 2. For any f ∈ Γ₀, K_f is a cone. Moreover, if g₁ ∈ K_f, g₂ ∈ K_f, f + g₁ + g₂ ≢ ∞,
and ∂(g₁ + g₂) = ∂g₁ + ∂g₂, then g₁ + g₂ ∈ K_f too.
The condition ∂(g₁+g₂) = ∂g₁+∂g₂ in Proposition 2 is purely technical; it is satisfied when, say, g₁
is continuous at a single, arbitrary point in dom g₁ ∩ dom g₂. For comparison purposes, we note that
it is not clear how P_{f+g+h} = P_f ∘ P_{g+h} would follow from P_{f+g} = P_f ∘ P_g and P_{f+h} = P_f ∘ P_h.
This is the main motivation to consider the sufficient condition (10). In particular
Definition 1. We call f ∈ Γ₀ self-prox-decomposable (s.p.d.) if f ∈ K_{λf} for all λ > 0.
For any s.p.d. f, since K_f is a cone, αf ∈ K_{βf} for all α, β ≥ 0. Consequently, P_{(α+β)f} =
P_{αf} ∘ P_{βf} = P_{βf} ∘ P_{αf}.
Remark 1. A weaker definition for s.p.d. is to require f ∈ K_f, from which we conclude that
λf ∈ K_f for all λ ≥ 0, in particular P_{(m+n)f} = P_{nf} ∘ P_{mf} = P_{mf} ∘ P_{nf} for all natural numbers
m and n. The two definitions coincide for positively homogeneous functions. We have not been able
to construct a function that satisfies this weaker definition but not the stronger one in Definition 1.
Example 3. We easily verify that all affine functions ℓ = ⟨·, a⟩ + b are s.p.d.; in fact, they are the
only differentiable functions that are s.p.d., which explains why Example 2 must fail. Another trivial
class of s.p.d. functions are projectors to closed convex sets. Also, univariate gauges² are s.p.d., due
to Theorem 4 below. Some multivariate s.p.d. functions are given in Remark 5 below.
² A gauge is a positively homogeneous convex function that vanishes at the origin.
The next example shows that (10) is not necessary.
Example 4. Fix z ∈ H, f = ι_{{z}}, and g ∈ Γ₀ with full domain. Clearly for any x ∈ H, P_{f+g}(x) =
z = P_f[P_g(x)]. However, since x is arbitrary, ∂g(P_f(x)) = ∂g(z) ⊄ ∂g(x) if g is not linear.
On the other hand, if f, g are differentiable, then we actually have equality in (10), which is clearly
necessary in this case. Since convex functions are almost everywhere differentiable (in the interior
of their domain), we expect the sufficient condition (10) to be necessary "almost everywhere" too.
Thus we see that the key for the decomposition (5) to hold is to let the proximal map of f and the
subdifferential of g "interact well" in the sense of (10). Interestingly, both are fully equivalent to the
function itself.
Proposition 3 ([11, §8]). Let f, g ∈ Γ₀. Then f = g + c for some c ∈ R ⟺ ∂f ≡ ∂g ⟺ P_f ≡ P_g.
Proof: The first implication is clear. The second follows from the optimality condition P_f =
(Id + ∂f)⁻¹. Lastly, P_f = P_g implies that M_{f*} = M_{g*} − c for some c ∈ R (by integration).
Conjugating we get f = g + c for some c ∈ R.
Therefore some properties of the proximal map will transfer to some properties of the function f
itself, and vice versa. The next result is easy to obtain, and appeared essentially in [14].
Proposition 4. Let f ∈ Γ₀ and x ∈ H be arbitrary; then
i). P_f is odd iff f is even;
ii). P_f(Ux) = U P_f(x) for all unitary U iff f(Ux) = f(x) for all unitary U;
iii). P_f(Qx) = Q P_f(x) for all permutations Q (under some fixed basis) iff f is permutation invariant, that is, f(Qx) = f(x) for all permutations Q.
In the following, we will put some invariance assumptions on the subdifferential of g and accordingly
find the right family of f whose proximal map "respects" that invariance. This way we meet
(10) by construction and therefore effortlessly obtain the decomposition (5).
3.2 No Invariance
To begin with, consider first the trivial case where no invariance on the subdifferential of g is assumed. This is equivalent to requiring (10) to hold for all g ∈ Γ₀. Not surprisingly, we end up with
a trivial choice of f.
Theorem 2. Fix f ∈ Γ₀. Then P_{f+g} = P_f ∘ P_g for all g ∈ Γ₀ if and only if
• dim(H) ≥ 2 and f ≡ c or f = ι_{{w}} + c for some c ∈ R and w ∈ H; or
• dim(H) = 1 and f = ι_C + c for some closed and convex set C and c ∈ R.
Proof: ⇐: Straightforward calculations, see [15] for details.
⇒: We first prove that f is constant on its domain even when g is restricted to indicators. Indeed,
let x ∈ dom f and take g = ι_{{x}}. Then x = P_{f+g}(x) = P_f[P_g(x)] = P_f(x), meaning that
x ∈ argmin f. Since x ∈ dom f is arbitrary, f is constant on its domain. The case dim(H) = 1 is
complete. We consider the other case where dim(H) ≥ 2 and dom f contains at least two points.
If dom f ≠ H, there exists z ∉ dom f such that P_f(z) = y for some y ∈ dom f, and a closed
convex set C with C ∩ dom f ≠ ∅ and y ∉ C ∋ z. Letting g = ι_C we obtain P_{f+g}(z) ∈ C ∩ dom f while
P_f(P_g(z)) = P_f(z) = y ∉ C, a contradiction.
Observe that the decomposition (5) is not symmetric in f and g, which is also reflected in the next result:
Theorem 3. Fix g ∈ Γ₀. Then P_{f+g} = P_f ∘ P_g for all f ∈ Γ₀ iff g is a continuous affine function.
Proof: ⇐: If g = ⟨·, a⟩ + c, then P_g(x) = x − a. An easy calculation reveals that P_{f+g}(x) =
P_f(x − a) = P_f[P_g(x)].
⇒: The converse is true even when f is restricted to continuous linear functions. Indeed, let a ∈ H
be arbitrary and consider f = ⟨·, a⟩. Then P_{f+g}(x) = P_g(x − a) = P_f(P_g(x)) = P_g(x) − a.
Letting a = x yields P_g(x) = x + P_g(0) = P_{⟨·, −P_g(0)⟩}(x). Therefore by Proposition 3 we know g
is equal to a continuous affine function.
Naturally, the next step is to put invariance assumptions on the subdifferential of g, effectively
restricting the function class of g. As a trade-off, the function class of f that satisfies (10) becomes
larger, so that nontrivial results will arise.
3.3 Scaling Invariance
The first invariance property we consider is scaling-invariance. What kind of convex functions have
their subdifferential invariant to (positive) scaling? Assuming 0 ∈ dom g, by simple integration

    g(tx) − g(0) = ∫₀ᵗ g′(sx) ds = ∫₀ᵗ ⟨∂g(sx), x⟩ ds = t · [g(x) − g(0)],

where the last equality follows from the scaling invariance of the subdifferential of g. Therefore, up
to some additive constant, g is positively homogeneous (p.h.). On the other hand, if g ∈ Γ₀ is p.h.
(automatically 0 ∈ dom g), then from the definition we verify that ∂g is scaling-invariant. Therefore,
under the scaling-invariance assumption, the right function class for g is the set of all p.h. functions
in Γ₀, up to some additive constant. Consequently, the right function class for f is to have the
proximal map P_f(x) = γ · x for some γ ∈ [0, 1] that may depend on x as well³. The next theorem
completely characterizes such functions.
Theorem 4. Let f ∈ Γ₀. Consider the statements
i). f = h(‖·‖) for some increasing function h : R₊ → R̄;
ii). x ⊥ y ⟹ f(x + y) ≥ f(y);
iii). P_f(u) = γ · u for some γ ∈ [0, 1] (that may itself depend on u);
iv). 0 ∈ dom f and P_{f+φ} = P_f ∘ P_φ for all p.h. (up to some additive constant) functions φ ∈ Γ₀.
Then we have i) ⟹ ii) ⟺ iii) ⟺ iv). Moreover, when dim(H) ≥ 2, ii) ⟹ i) as well, in
which case P_f(u) = P_h(‖u‖)/‖u‖ · u (where we interpret 0/0 = 0).
Remark 2. When dim(H) = 1, ii) is equivalent to requiring f to attain its minimum at 0, in which
case the implication ii) ⟹ iv), under the redundant condition that f is differentiable, was proved
by Combettes and Pesquet [14, Proposition 3.6]. The implication ii) ⟹ iii) also generalizes [14,
Corollary 2.5], where only the case dim(H) = 1 and f differentiable is considered. Note that there
exist non-even f that satisfy Theorem 4 when dim(H) = 1. Such is impossible for dim(H) ≥ 2,
in which case any f that satisfies Theorem 4 must also enjoy all properties listed in Proposition 4.
Proof: i) ⟹ ii): x ⊥ y ⟹ ‖x + y‖ ≥ ‖y‖.
ii) ⟹ iii): Indeed, by definition

    M_f(u) = min_x ½‖x − u‖² + f(x) = min_{u⊥, γ} ½‖u⊥ + γu − u‖² + f(u⊥ + γu)
           = min_γ ½‖γu − u‖² + f(γu) = min_{γ∈[0,1]} ½(γ − 1)²‖u‖² + f(γu),

where the third equality is due to ii), and the constraint γ ∈ [0, 1] in the last equality can be seen
as follows: for any γ < 0, by increasing it to 0 we can only decrease both terms; a similar argument
applies for γ > 1. Therefore there exists γ ∈ [0, 1] such that γu minimizes the Moreau envelope M_f, hence
we have P_f(u) = γu due to uniqueness.
iii) ⟹ iv): Note first that iii) implies 0 ∈ ∂f(0), therefore 0 ∈ dom f. Since the subdifferential
of φ is scaling-invariant, iii) implies the sufficient condition (10), hence iv).
iv) ⟹ iii): Fix y and construct the gauge function

    φ(z) = 0 if z = γ · y for some γ ≥ 0;  φ(z) = ∞ otherwise.

Then P_φ(y) = y, hence P_f(P_φ(y)) = P_f(y) = P_{f+φ}(y) by iv). On the other hand,

    M_{f+φ}(y) = min_x ½‖x − y‖₂² + f(x) + φ(x) = min_{γ≥0} ½‖γy − y‖₂² + f(γy).        (11)

Take y = 0; we obtain P_{f+φ}(0) = 0. Thus P_f(0) = 0, i.e. 0 ∈ ∂f(0), from which we deduce that
P_f(y) = P_{f+φ}(y) = γy for some γ ∈ [0, 1], since f(γy) in (11) is increasing on [1, ∞[.
³ Note that γ ≤ 1 is necessary since any proximal map is nonexpansive.
iii) ⟹ ii): First note that iii) implies that P_f(0) = 0, hence 0 ∈ ∂f(0); in particular, 0 ∈ dom f.
If dim(H) = 1 we are done, so we assume dim(H) ≥ 2 in the rest of the proof. In this case it is
known, cf. [9, Theorem 1] or [10, Theorem 3], that ii) ⟺ i) (even without assuming f convex).
All that is left is to prove iii) ⟹ ii), or equivalently i), for the case dim(H) ≥ 2.
We first prove the case when dom f = H. By iii), P_f(x) = γx for some γ ∈ [0, 1] (which may
depend on x as well). Using the first order optimality condition for the proximal map we have
0 ∈ γx − x + ∂f(γx), that is, (1/γ − 1)y ∈ ∂f(y) for each y ∈ ran(P_f) = H due to our assumption
dom f = H. Now for any x ⊥ y, by the definition of the subdifferential,

    f(x + y) ≥ f(y) + ⟨x, ∂f(y)⟩ = f(y) + ⟨x, (1/γ − 1)y⟩ = f(y).
For the case when dom f ⊊ H, we consider the proximal average [16]

    g = A(f, q) = (½(f + q)* + ¼q)* − q,        (12)

where q = ½‖·‖². Importantly, since q is defined on the whole space, the proximal average g has
full domain too [16, Corollary 4.7]. Moreover, P_g(x) = ½P_f(x) + ¼x = (½γ + ¼)x. Therefore,
by our previous argument, g satisfies ii) hence also i). It is easy to check that i) is preserved under
taking the Fenchel conjugation (note that the convexity of f implies that of h). Since we have shown
that g satisfies i), it follows from (12) that f satisfies i) hence also ii).
As mentioned, when dim(H) ≥ 2, the implication ii) ⟹ i) was shown in [9, Theorem 1]. The
formula P_f(u) = P_h(‖u‖)/‖u‖ · u for f = h(‖·‖) follows from straightforward calculation.
We now discuss some applications of Theorem 4. When dim(H) ≥ 2, iii) in Theorem 4 automatically implies that the scalar constant γ depends on x only through its norm. This fact, although not
entirely obvious, does have a clear geometric picture:
Corollary 1. Let dim(H) ≥ 2 and let C ⊆ H be a closed convex set that contains the origin. Then the
projection onto C is simply a shrinkage towards the origin iff C is a ball (of the norm ‖·‖).
Proof: Let f = ι_C and apply Theorem 4.
Example 5. As usual, denote q = ½‖·‖². In many applications, in addition to the regularizer κ
(usually a gauge), one adds the ℓ₂² regularizer λq, either for stability or a grouping effect or strong
convexity. This incurs no computational cost in the sense of computing the proximal map: we easily
compute that P_{λq} = 1/(λ+1) · Id. By Theorem 4, for any gauge κ, P_{κ+λq} = 1/(λ+1) · P_κ, whence it is also
clear that adding an extra ℓ₂ regularizer tends to doubly "shrink" the solution. In particular, letting
H = R^d and taking κ = ‖·‖₁ (the sum of absolute values), we recover the proximal map for the
elastic-net regularizer [17].
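In code, the resulting elastic-net proximal map is just soft-thresholding followed by a uniform shrinkage; the brute-force check below (a sketch using scipy, with parameter names of our choosing) confirms the composition against direct minimization:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def prox_elastic_net(y, alpha=1.0, lam=0.5):
    # P_{alpha*||.||_1 + lam*q} with q = 0.5*||.||^2: soft-threshold, then shrink.
    soft = np.sign(y) * np.maximum(np.abs(y) - alpha, 0.0)
    return soft / (lam + 1.0)

y0 = 2.3
direct = minimize_scalar(lambda z: 0.5 * (z - y0) ** 2 + abs(z) + 0.25 * z ** 2).x
assert np.isclose(direct, prox_elastic_net(np.array([y0]), 1.0, 0.5)[0], atol=1e-5)
```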
Example 6. The Berhu regularizer

    h(x) = |x|·1_{|x|<δ} + ((x² + δ²)/(2δ))·1_{|x|≥δ} = |x| + ((|x| − δ)²/(2δ))·1_{|x|≥δ},

being the reverse of Huber's function, is proposed in [18] as a bridge between the lasso (ℓ₁ regularization) and ridge regression (ℓ₂² regularization). Let f(x) = h(x) − |x|. Clearly, f satisfies ii) of
Theorem 4 (but is not differentiable), hence

    P_h = P_f ∘ P_{|·|},

whereas a simple calculation verifies that

    P_f(x) = sign(x) · min{|x|, (δ/(1+δ))(|x| + 1)},

and of course P_{|·|}(x) = sign(x) · max{|x| − 1, 0}. Note that this regularizer is not s.p.d.
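The decomposition translates directly into code: soft-threshold first, then apply P_f. The sketch below (our own, with δ as the Berhu knot) checks the composed map against brute-force minimization:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def berhu(z, delta=1.0):
    return abs(z) if abs(z) < delta else (z ** 2 + delta ** 2) / (2.0 * delta)

def prox_berhu(y, delta=1.0):
    u = np.sign(y) * max(abs(y) - 1.0, 0.0)                                   # P_{|.|}
    return np.sign(u) * min(abs(u), delta * (abs(u) + 1.0) / (1.0 + delta))   # P_f

for y0 in [0.4, 1.7, 3.0, -2.5]:
    direct = minimize_scalar(lambda z: 0.5 * (z - y0) ** 2 + berhu(z)).x
    assert np.isclose(direct, prox_berhu(y0), atol=1e-5)
```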
Corollary 2. Let dim(H) ≥ 2; then a p.h. function f ∈ Γ₀ satisfies any item of Theorem 4 iff it is
a positive multiple of the norm ‖·‖.
Proof: [10, Theorem 4] showed that under positive homogeneity, i) implies that f is a positive
multiple of the norm.
Therefore (positive multiples of) the Hilbertian norm is the only p.h. convex function f that satisfies
P_{f+φ} = P_f ∘ P_φ for all gauges φ. In particular, this means that the norm ‖·‖ is s.p.d. Moreover, we
easily recover the following result that is perhaps not so obvious at first glance:
Corollary 3 (Jenatton et al. [7]). Fix the orthonormal basis {e_i}_{i∈I} of H. Let G ⊆ 2^I be a collection
of tree-structured groups, that is, either g ⊆ g′ or g′ ⊆ g or g ∩ g′ = ∅ for all g, g′ ∈ G. Then

    P_{∑_{i=1}^m ‖·‖_{g_i}} = P_{‖·‖_{g_1}} ∘ ⋯ ∘ P_{‖·‖_{g_m}},

where we arrange the groups so that g_i ⊆ g_j ⟹ i > j, and the notation ‖·‖_{g_i} denotes the
Hilbertian norm restricted to the coordinates indexed by the group g_i.
Proof: Let f = ‖·‖_{g_1} and φ = ∑_{i=2}^m ‖·‖_{g_i}. Clearly they are both p.h. (and convex). By the
tree-structured assumption we can partition φ = φ₁ + φ₂, where g_i ⊆ g₁ for all g_i appearing in φ₁
while g_j ∩ g₁ = ∅ for all g_j appearing in φ₂. Restricting to the subspace spanned by the variables in
g₁, we can treat f as the Hilbertian norm. Applying Theorem 4 we obtain P_{f+φ₁} = P_f ∘ P_{φ₁}. On the
other hand, due to the non-overlapping property, nothing is affected by adding φ₂, thus

    P_{∑_{i=1}^m ‖·‖_{g_i}} = P_{‖·‖_{g_1}} ∘ P_{∑_{i=2}^m ‖·‖_{g_i}}.

We can clearly iterate the argument to unravel the proximal map as claimed.
For notational clarity, we have chosen not to incorporate weights in the sum of group seminorms:
Such can be absorbed into the seminorm and the corollary clearly remains intact. Our proof also
reveals the fundamental reason why Corollary 3 is true: the ℓ₂ norm admits the decomposition (5)
for any gauge g! This fact, to the best of our knowledge, has not been recognized previously.
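For reference, here is a small sketch of Corollary 3 in code (our own illustration): each factor is a group-wise block soft-thresholding, applied from the innermost groups outward given the stated ordering.

```python
import numpy as np

def group_soft(x, idx, t=1.0):
    # Block soft-thresholding: proximal map of t * ||x_idx||_2 on coordinates idx.
    out = x.copy()
    nrm = np.linalg.norm(out[idx])
    out[idx] = 0.0 if nrm <= t else (1.0 - t / nrm) * out[idx]
    return out

def prox_tree_groups(x, groups, t=1.0):
    # groups are ordered root-first, so a contained group comes later in the list
    # (g_i in g_j => i > j); the composition then applies innermost groups first.
    out = x.copy()
    for g in reversed(groups):
        out = group_soft(out, g, t)
    return out

groups = [np.array([0, 1, 2, 3]), np.array([2, 3]), np.array([3])]  # nested chain
print(prox_tree_groups(np.array([3.0, -1.0, 2.0, 4.0]), groups))
```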
3.4 Cone Invariance
In the previous subsection, we restricted the subdifferential of g to be constant along each ray. We
now generalize this to cones. Specifically, consider the gauge function

    κ(x) = max_{j∈J} ⟨a_j, x⟩,        (13)

where J is a finite index set and each a_j ∈ H. Such polyhedral gauge functions have become
extremely important due to the work of Chandrasekaran et al. [19]. Define the polyhedral cones

    K_j = {x ∈ H : ⟨a_j, x⟩ = κ(x)}.        (14)

Assume K_j ≠ ∅ for each j (otherwise delete j from J). Since ∂κ(x) = {a_j | j ∈ J, x ∈ K_j}, the
sufficient condition (10) becomes

    ∀j ∈ J,  P_f(K_j) ⊆ K_j.        (15)
In other words, each cone K_j is "fixed" under the proximal map of f. Although it would be very
interesting to completely characterize f under (15), we show that in its current form, (15) already
implies many known results, with some new generalizations falling out naturally.
Corollary 4. Denote by E a collection of pairs (m, n), and define the total variation norm ‖x‖_tv =
∑_{{m,n}∈E} w_{m,n}|x_m − x_n|, where w_{m,n} ≥ 0. Then for any permutation invariant function f,⁴

    P_{f+‖·‖_tv} = P_f ∘ P_{‖·‖_tv}.
Proof: Pick an arbitrary pair (m, n) ∈ E and let κ(x) = |x_m − x_n|. Clearly
J = {1, 2}, K₁ = {x : x_m ≥ x_n} and K₂ = {x : x_m ≤ x_n}. Since f is permutation invariant,
its proximal map P_f(x) maintains the order of x, hence we establish (15). Finally apply Proposition 2 and Theorem 1.
Remark 3. The special case where E = {(1, 2), (2, 3), . . .} is a chain, w_{m,n} ≡ 1, and f is the ℓ₁
norm appeared first in [6] and is generally known as the fused lasso. The case where f is the ℓ_p
norm appeared in [20].
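Corollary 4 can be checked numerically on a tiny example. The sketch below (ours; the generic prox is brute-forced with scipy's Nelder–Mead, adequate only for sanity checks) verifies the fused-lasso case f = ‖·‖₁ on a length-3 chain:

```python
import numpy as np
from scipy.optimize import minimize

def prox_numeric(y, fun):
    # Brute-force proximal map; fine for tiny nonsmooth sanity checks only.
    obj = lambda z: 0.5 * np.sum((z - y) ** 2) + fun(z)
    return minimize(obj, y, method="Nelder-Mead",
                    options={"xatol": 1e-9, "fatol": 1e-12, "maxiter": 50000}).x

tv = lambda z: np.sum(np.abs(np.diff(z)))  # chain total-variation seminorm
l1 = lambda z: np.sum(np.abs(z))           # a permutation invariant f

y = np.array([3.0, 2.2, -1.0])
lhs = prox_numeric(y, lambda z: l1(z) + tv(z))   # P_{f + ||.||_tv}
rhs = prox_numeric(prox_numeric(y, tv), l1)      # P_f o P_{||.||_tv}
print(np.round(lhs, 3), np.round(rhs, 3))        # the two should agree (Corollary 4)
```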
We call the permutation invariant function f symmetric if ∀x, f(|x|) = f(x), where |·| denotes
the componentwise absolute value. The proof of the next corollary is almost the same as that of
Corollary 4, except that we also use the fact sign([P_f(x)]_m) = sign(x_m) for symmetric functions.
Corollary 5. As in Corollary 4, define the norm ‖x‖_oct = ∑_{{m,n}∈E} w_{m,n} max{|x_m|, |x_n|}. Then
for any symmetric function f, P_{f+‖·‖_oct} = P_f ∘ P_{‖·‖_oct}.
⁴ All we need is the weaker condition: for all {m, n} ∈ E, x_m ≥ x_n ⟹ [P_f(x)]_m ≥ [P_f(x)]_n.
Remark 4. The norm ‖·‖_oct is proposed in [21] for feature grouping. Surprisingly, Corollary 5
appears to be new. The proximal map P_{‖·‖_oct} is derived in [22], which turns out to be another
decomposition result. Indeed, for i ≥ 2, define κ_i(x) = ∑_{j≤i−1} max{|x_i|, |x_j|}. Thus

    ‖·‖_oct = ∑_{i≥2} κ_i.

Importantly, we observe that κ_i is symmetric in the first i − 1 coordinates. We claim that

    P_{‖·‖_oct} = P_{κ_{|I|}} ∘ . . . ∘ P_{κ₂}.

The proof is by recursion: write ‖·‖_oct = f + g, where f = κ_{|I|}. Note that the subdifferential of
g depends only on the ordering and sign of the first |I| − 1 coordinates, while the proximal map of
f preserves the ordering and sign of the first |I| − 1 coordinates (due to symmetry). If we pre-sort
x, the individual proximal maps P_{κ_i}(x) become easy to compute sequentially, and we recover the
algorithm in [22] with some bookkeeping.
Corollary 6. As in Corollary 3, let G ⊆ 2^I be a collection of tree-structured groups; then

    P_{∑_{i=1}^m ‖·‖_{g_i,k}} = P_{‖·‖_{g_1,k}} ∘ ⋯ ∘ P_{‖·‖_{g_m,k}},

where we arrange the groups so that g_i ⊆ g_j ⟹ i > j, and ‖x‖_{g_i,k} = ∑_{j=1}^k |x_{g_i}|_{[j]} is the sum
of the k (absolute-value) largest elements in the group g_i, i.e., Ky Fan's k-norm.
Proof: Similarly to the proof of Corollary 3, we need only prove that

    P_{‖·‖_{g_1,k} + ‖·‖_{g_2,k}} = P_{‖·‖_{g_1,k}} ∘ P_{‖·‖_{g_2,k}},

where w.l.o.g. we assume g₁ contains all variables while g₂ ⊆ g₁. Therefore ‖·‖_{g_1,k} can be treated
as symmetric, and the rest follows the proof of Corollary 5.
Note that the case k ∈ {1, |I|} was proved in [7], and Corollary 6 can be seen as an interpolation.
Interestingly, there is another interpolated result whose proof should be apparent now.
Corollary 7. Corollary 6 remains true if we replace Ky Fan's k-norm with

    ‖x‖_{oct,k} = ∑_{1≤i₁<i₂<...<i_k≤|I|} max{|x_{i₁}|, . . . , |x_{i_k}|}.        (16)
Therefore we can employ the norm ‖x‖_{oct,2} for feature grouping in a hierarchical manner. Clearly
we can also combine Corollary 6 and Corollary 7.
Corollary 8. For any symmetric f, P_{f+‖·‖_{oct,k}} = P_f ∘ P_{‖·‖_{oct,k}}. Similarly for Ky Fan's k-norm.
Remark 5. The above corollary implies that Ky Fan's k-norm and the norm ‖·‖_{oct,k} defined in
(16) are both s.p.d. (see Definition 1). The special case of the ℓ_p norm where p ∈ {1, 2, ∞} was
proved in [23, Proposition 11], with a substantially more complicated argument. As pointed out in
[23], s.p.d. regularizers allow us to perform lazy updates in gradient-type algorithms.
We remark that we have not exhausted the possibility to have the decomposition (5). It is our hope
to stimulate further work in understanding the prox-decomposition (5).
Added after acceptance: We have managed to extend the results in this subsection to the Lovász
extension of submodular set functions. Details will be given elsewhere.
4 Conclusion
The main goal of this paper is to understand when the proximal map of the sum of functions decomposes into the composition of the proximal maps of the individual functions. Using a simple
sufficient condition we are able to completely characterize the decomposition when certain scaling
invariance is exhibited. The generalization to cone invariance is also considered and we recover
many known decomposition results, with some new ones obtained almost effortlessly. In the future
we plan to generalize some of the results here to nonconvex functions.
Acknowledgement
The author thanks Bob Williamson and Xinhua Zhang from NICTA (Canberra) for their hospitality
during the author's visit when part of this work was performed; Warren Hare, Yves Lucet, and
Heinz Bauschke from UBC (Okanagan) for some discussions around Theorem 4; and the reviewers
for their valuable comments.
References
[1] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[2] Peter Bühlmann and Sara van de Geer. Statistics for High-Dimensional Data. Springer, 2011.
[3] Patrick L. Combettes and Valérie R. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling and Simulation, 4(4):1168–1200, 2005.
[4] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[5] Yurii Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, Series B, 140:125–161, 2013.
[6] Jerome Friedman, Trevor Hastie, Holger Höfling, and Robert Tibshirani. Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2):302–332, 2007.
[7] Rodolphe Jenatton, Julien Mairal, Guillaume Obozinski, and Francis Bach. Proximal methods for hierarchical sparse coding. Journal of Machine Learning Research, 12:2297–2334, 2011.
[8] Jiayu Zhou, Jun Liu, Vaibhav A. Narayan, and Jieping Ye. Modeling disease progression via fused sparse group lasso. In Conference on Knowledge Discovery and Data Mining, 2012.
[9] Francesco Dinuzzo and Bernhard Schölkopf. The representer theorem for Hilbert spaces: a necessary and sufficient condition. In NIPS, 2012.
[10] Yao-Liang Yu, Hao Cheng, Dale Schuurmans, and Csaba Szepesvári. Characterizing the representer theorem. In ICML, 2013.
[11] Jean J. Moreau. Proximité et dualité dans un espace hilbertien. Bulletin de la Société Mathématique de France, 93:273–299, 1965.
[12] Patrick L. Combettes, Đinh Dũng, and Bằng Công Vũ. Proximity for sums of composite functions. Journal of Mathematical Analysis and Applications, 380(2):680–688, 2011.
[13] André F. T. Martins, Noah A. Smith, Eric P. Xing, Pedro M. Q. Aguiar, and Mário A. T. Figueiredo. Online learning of structured predictors with multiple kernels. In Conference on Artificial Intelligence and Statistics, 2011.
[14] Patrick L. Combettes and Jean-Christophe Pesquet. Proximal thresholding algorithm for minimization over orthonormal bases. SIAM Journal on Optimization, 18(4):1351–1376, 2007.
[15] Yaoliang Yu. Fast Gradient Algorithms for Structured Sparsity. PhD thesis, University of Alberta, 2013.
[16] Heinz H. Bauschke, Rafal Goebel, Yves Lucet, and Xianfu Wang. The proximal average: Basic theory. SIAM Journal on Optimization, 19(2):766–785, 2008.
[17] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society B, 67:301–320, 2005.
[18] Art B. Owen. A robust hybrid of lasso and ridge regression. In Prediction and Discovery, pages 59–72. AMS, 2007.
[19] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
[20] Xinhua Zhang, Yaoliang Yu, and Dale Schuurmans. Polar operators for structured sparse estimation. In NIPS, 2013.
[21] Howard Bondell and Brian Reich. Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR. Biometrics, 64(1):115–123, 2008.
[22] Leon Wenliang Zhong and James T. Kwok. Efficient sparse modeling with automatic feature grouping. In ICML, 2011.
[23] John Duchi and Yoram Singer. Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research, 10:2899–2934, 2009.
4,269 | 4,864 | Non-Uniform Camera Shake Removal Using a
Spatially-Adaptive Sparse Penalty
Haichao Zhang and David Wipf
School of Computer Science, Northwestern Polytechnical University, Xi'an, China
Department of Electrical and Computer Engineering, Duke University, USA
Visual Computing Group, Microsoft Research Asia, Beijing, China
[email protected]    [email protected]
Abstract
Typical blur from camera shake often deviates from the standard uniform convolutional assumption, in part because of problematic rotations which create greater
blurring away from some unknown center point. Consequently, successful blind
deconvolution for removing shake artifacts requires the estimation of a spatiallyvarying or non-uniform blur operator. Using ideas from Bayesian inference and
convex analysis, this paper derives a simple non-uniform blind deblurring algorithm with a spatially-adaptive image penalty. Through an implicit normalization
process, this penalty automatically adjust its shape based on the estimated degree
of local blur and image structure such that regions with large blur or few prominent edges are discounted. Remaining regions with modest blur and revealing
edges therefore dominate on average without explicitly incorporating structureselection heuristics. The algorithm can be implemented using an optimization
strategy that is virtually tuning-parameter free and simpler than existing methods,
and likely can be applied in other settings such as dictionary learning. Detailed
theoretical analysis and empirical comparisons on real images serve as validation.
1 Introduction
Image blur is an undesirable degradation that often accompanies the image formation process and
may arise, for example, because of camera shake during acquisition. Blind image deblurring strategies aim to recover a sharp image from only a blurry, compromised observation. Extensive efforts
have been devoted to the uniform blur (shift-invariant) case, which can be described with the convolutional model y = k ? x + n, where x is the unknown sharp image, y is the observed blurry
image, k is the unknown blur kernel (or point spread function), and n is a zero-mean Gaussian noise
term [6, 21, 17, 5, 28, 14, 1, 27, 29]. Unfortunately, many real-world photographs contain blur effects that vary across the image plane, such as when unknown rotations are introduced by camera
shake [17].
More recently, algorithms have been generalized to explicitly handle some degree of non-uniform
blur using the more general observation model y = Hx+ n, where each column of the blur operator
H contains the spatially-varying effective blur kernel at the corresponding pixel site [25, 7, 8, 9,
11, 4, 22, 12]. Note that the original uniform blur model can be achieved equivalently when H is
forced to adopt certain structure (e.g., block-toeplitz structure with toeplitz-blocks). In general, nonuniform blur may arise under several different contexts. This paper will focus on the blind removal
of non-uniform blur caused by general camera shake (as opposed to blur from object motion) using
only a single image, with no additional hardware assistance.
While existing algorithms for addressing non-uniform camera shake have displayed a measure of
success, several important limitations remain. First, some methods require either additional specialized hardware such as high-speed video capture [23] or inertial measurement sensors [13] for
estimating motion, or else multiple images of the same scene [4]. Secondly, even the algorithms that
operate given only data from a single image typically rely on carefully engineered initializations,
heuristics, and trade-off parameters for selecting salient image structure or edges, in part to avoid
undesirable degenerate, no-blur solutions [7, 8, 9, 11]. Consequently, enhancements and rigorous
analysis may be problematic. To address these shortcomings, we present an alternative blind deblurring algorithm built upon a simple, closed-form cost function that automatically discounts regions of
the image that contain little information about the blur operator without introducing any additional
salient structure selection steps. This transparency leads to a nearly tuning-parameter free algorithm
based upon a sparsity penalty whose shape adapts to the estimated degree of local blur, and provides
theoretical arguments regarding how to robustly handle non-uniform degradations.
The rest of the paper is structured as follows. Section 2 briefly describes relevant existing work on
non-uniform blind deblurring operators and implementation techniques. Section 3 then introduces
the proposed non-uniform blind deblurring model, while further theoretical justification and analyses
are provided in Section 4. Experimental comparisons with state-of-the-art methods are carried out
in Section 5 followed by conclusions in Section 6.
2 Non-Uniform Deblurring Operators
Perhaps the most direct way of handling non-uniform blur is to simply partition the image into different regions and then learn a separate, uniform blur kernel for each region, possibly with an additional
weighting function for smoothing the boundaries between two adjacent kernels. The resulting algorithm has been adopted extensively [18, 8, 22, 12] and admits an efficient implementation called
efficient filter flow (EFF) [10]. The downside with this type of model is that geometric relationships
between the blur kernels of different regions derived from the the physical motion path of the camera
are ignored.
In contrast, to explicitly account for camera motion, the projective motion path (PMP) model [23]
treats a blurry image as the weighted summation of projectively transformed sharp images, leading
to the revised observation model
    y = ∑_j w_j P_j x + n,        (1)
where Pj is the j-th projection or homography operator (a combination of rotations and translations)
and wj is the corresponding combination weight representing the proportion of time spent at that particular camera pose during exposure. The uniform convolutional model can be obtained by restricting the general projection operators {P j } to be translations. In this regard, (1) represents a more
general model that has been used in many recent non-uniform deblurring efforts [23, 25, 7, 11, 4].
PMP also retains the bilinear property of uniform convolution, meaning that
    y = Hx + n = Dw + n,        (2)

where H = ∑_j w_j P_j and D = [P₁x, P₂x, ⋯, P_jx, ⋯] is a matrix of transformed sharp images.
The disadvantage of PMP is that it typically leads to inefficient algorithms because the evaluation
of the matrix-vector product Hx = Dw requires generating many expensive intermediate transformed images. However, EFF can be combined with the PMP model by introducing a set of basis
images efficiently generated by transforming a grid of delta peak images [9]. The computational
cost can be further reduced by using an active set for pruning out the projection operators with small
responses [11].
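To make the PMP model concrete, the following toy sketch synthesizes a non-uniformly blurred image as in (1), using planar rotations and translations from scipy.ndimage as stand-ins for the general homographies P_j (the pose parameterization here is our simplification; real implementations warp with full 3D camera rotations):

```python
import numpy as np
from scipy.ndimage import rotate, shift

def pmp_blur(x, poses, weights):
    # y = sum_j w_j * P_j x : weighted sum of transformed copies of the sharp image.
    y = np.zeros_like(x, dtype=float)
    for (angle_deg, dy, dx), w in zip(poses, weights):
        t = rotate(x, angle_deg, reshape=False, order=1)  # stand-in for a homography
        t = shift(t, (dy, dx), order=1)
        y += w * t
    return y

x = np.zeros((64, 64)); x[28:36, 28:36] = 1.0                # toy sharp image
poses = [(0.0, 0.0, 0.0), (1.5, 0.5, -0.5), (3.0, 1.0, -1.0)]
weights = np.array([0.5, 0.3, 0.2])                           # time spent at each pose
y = pmp_blur(x, poses, weights)
```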
3 A New Non-Uniform Blind Deblurring Model
Following previous work [6, 16], we will work in the derivative domain of images for ease of modeling and better performance, meaning that x ∈ R^m and y ∈ R^n will denote the lexicographically
ordered sharp and blurry image derivatives respectively.¹
¹ The derivative filters used in this work are {[−1, 1], [−1, 1]ᵀ}. Other choices are also possible.
The observation model (1) is equivalent to the likelihood function

    p(y|x, w) ∝ exp( −(1/2λ) ‖y − Hx‖₂² ),        (3)
where λ denotes the noise variance. Maximum likelihood estimation of x and w using (3) is clearly
ill-posed, and so further regularization is required to constrain the solution space. For this purpose
we adopt the Gaussian prior p(x) = N(x; 0, Γ), where Γ ≜ diag[γ] with γ = [γ₁, . . . , γ_m]ᵀ a
vector of m hyperparameter variances, one for each element of x = [x₁, . . . , x_m]ᵀ. While presently
γ is unknown, if we first marginalize over the unknown x, we can estimate it jointly along with the
blur parameters w and the unknown noise variance λ. This type-II maximum likelihood procedure
has been advocated in the context of sparse estimation, where the goal is to learn vectors with mostly
zero-valued coefficients [24, 26]. The final sharp image can then be recovered using the estimated
kernel and noise level along with standard non-blind deblurring algorithms (e.g., [15]).
Mathematically, the proposed estimation scheme requires that we solve

    max_{γ,w,λ≥0} ∫ p(y|x, w) p(x) dx  ≡  min_{γ,w,λ≥0} yᵀ(HΓHᵀ + λI)⁻¹y + log|HΓHᵀ + λI|,        (4)

where a −log transformation has been included for convenience. Clearly (4) does not resemble the
traditional blind non-uniform deblurring script, where estimation proceeds using the more transparent penalized regression model [4, 7, 9]

    min_{x; w≥0} ‖y − Hx‖₂² + α ∑_i g(x_i) + β ∑_j h(w_j),        (5)

where α and β are user-defined trade-off parameters, g is an image penalty which typically favors
sparsity, and h is usually assumed to be quadratic. Despite the differing appearances however,
(4) has some advantageous properties with respect to deconvolution problems. In particular, it is
devoid of tuning parameters and it possesses more favorable minimization conditions. For example,
consider the simplified non-uniform deblurring situation where the true x has a single non-zero
element and H is defined such that each column indexed by i is independently parameterized with
finite support symmetric around pixel i. Moreover, assume this support matches the true support of
the unknown blur operator. Then we have the following:
Lemma 1. Given the idealized non-uniform deblurring problem described above, the cost function
(4) will be characterized by a unique minimizing solution that correctly locates the nonzero element
in x and the corresponding true blur kernel at this location. No possible problem in the form of
(5), with g(x) = |x|^p, h(w) = w^q, and {p, q} arbitrary non-negative scalars, can achieve a similar
result (there will always exist either multiple different minimizing solutions or a global minimum that
does not produce the correct solution).
This result, which can be generalized with additional effort, can be shown by expanding on some
of the derivations in [26]. Although obviously the conditions upon which Lemma 1 is based are
extremely idealized, it is nonetheless emblematic of the potential of the underlying cost function to
avoid local minima, etc., and [26] contains complementary results in the case where H is fixed.
While optimizing (4) is possible using various general techniques such as the EM algorithm, it
is computationally expensive in part because of the high-dimensional determinants involved with
realistic-sized images. Consequently we are presently considering various specially-tailored optimization schemes for future work. But for the present purposes, we instead minimize a convenient
upper bound allowing us to circumvent such computational issues. Specifically, using Hadamard's
inequality we have

    log|HΓHᵀ + λI| = n log λ + log|Γ| + log|λ⁻¹HᵀH + Γ⁻¹|
                   ≤ n log λ + log|Γ| + log|diag[λ⁻¹HᵀH + Γ⁻¹]|
                   = ∑_i log(λ + γ_i ‖w̄_i‖₂²) + (n − m) log λ,        (6)

where w̄_i denotes the i-th column of H. Note that Hadamard's inequality is applied by writing
λ⁻¹HᵀH + Γ⁻¹ = VᵀV for some matrix V = [v₁, . . . , v_m]. We then have log|λ⁻¹HᵀH + Γ⁻¹| =
2 log|V| ≤ 2 log(∏_i ‖v_i‖₂) = log|diag[λ⁻¹HᵀH + Γ⁻¹]|, leading to the stated result.
Also, the quantity ‖w̄_i‖₂ which appears in (6) can be viewed as a measure of the degree of local
blur at location i. Given the feasible region w ≥ 0 and, without loss of generality, the constraint
∑_i w_i = 1 for normalization purposes, it can easily be shown that 1/L ≤ ‖w̄_i‖₂² ≤ 1, where
L is the maximum number of elements in any local blur kernel w̄_i or column of H. The upper
bound is achieved when the local kernel is a delta solution, meaning only one nonzero element
and therefore minimal blur. In contrast, the lower bound on ‖w̄_i‖₂² occurs when every element of
w̄_i has an equal value, constituting the maximal possible blur. This metric, which will influence
our analysis in the next section, can be computed using ‖w̄_i‖₂² = wᵀ(B_iᵀB_i)w, where B_i ≜
[P₁e_i, P₂e_i, ⋯, P_je_i, ⋯] and e_i denotes an all-zero image with a one at site i. In the uniform
deblurring case, B_iᵀB_i = I ignoring edge effects, and therefore ‖w̄_i‖₂ = ‖w‖₂ for all i.
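A small sketch (ours) of the local blur metric on a 1D toy with three shift operators; the delta kernel attains the upper bound 1 while the flat kernel attains the lower bound 1/L:

```python
import numpy as np

def local_blur_metric(Bi, w):
    # ||w_bar_i||_2^2 = w^T (B_i^T B_i) w, with columns of B_i the images P_j e_i.
    wbar = Bi @ w
    return float(wbar @ wbar)

n, shifts, i = 5, [-1, 0, 1], 2
Bi = np.zeros((n, len(shifts)))
for j, s in enumerate(shifts):
    Bi[i + s, j] = 1.0                       # P_j e_i: impulse moved to site i+s

print(local_blur_metric(Bi, np.array([0.0, 1.0, 0.0])))  # delta kernel -> 1.0
print(local_blur_metric(Bi, np.ones(3) / 3.0))           # flat kernel  -> 1/3
```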
While optimizing (4) using the upper bound from (6) can be justified in part using Bayesian-inspired
arguments and the lack of trade-off parameters, the augmented cost function unfortunately no longer
satisfies Lemma 1. However, it is still well-equipped for estimating sparse image gradients and
avoiding degenerate no-blur solutions. For example, consider the case of an asymptotically large
image with iid distributed sparse image gradients, with some constant fraction exactly equal to zero
and the remaining nonzero elements drawn from any continuous distribution. Now suppose that
this image is corrupted with a non-uniform blur operator of the form H = ∑_j w_j P_j, where the
cardinality of the summation is finite and H satisfies minimal regularity conditions. Then it can be
shown that any global minimum of (4), with or without the bound from (6), will produce the true
blur operator. Related intuition applies when noise is present or when the image gradients are not
exactly sparse (we will defer more detailed analysis to a future publication).
Regardless, the simplified γ-dependent cost function is still far less intuitive than the penalized
regression models dependent on x, such as (5), that are typically employed for non-uniform blind
deblurring. However, using the framework from [26], it can be shown that the kernel estimate
obtained by this process is formally equivalent to the one obtained via

    min_{x; w≥0, λ≥0} (1/λ)‖y − Hx‖₂² + ∑_i ψ(|x_i|·‖w̄_i‖₂, λ) + (n − m) log λ,        (7)

with

    ψ(u, λ) ≜ 2u / (u + √(4λ + u²)) + log(2λ + u² + u√(4λ + u²)),    u ≥ 0.
The optimization from (7) closely resembles a standard penalized regression (or equivalently MAP)
problem used for blind deblurring. The primary distinction is the penalty term ψ, which jointly regularizes x, w, and λ as discussed in Section 4. The supplementary file derives a simple majorization-minimization algorithm for solving (7) along with additional implementation details. The underlying procedure is related to variational Bayesian (VB) models from [1, 16, 20]; however, these
models are based on a completely different mean-field approximation and a uniform blur assumption, and they do not learn the noise parameter. Additionally, the analysis provided with these VB
models is limited by relatively less transparent underlying cost functions.
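The shape adaptation of ψ is easy to inspect numerically. The sketch below (ours) evaluates the reconstructed formula; as λ shrinks, the normalized profile bends from a nearly linear, ℓ₁-like curve toward a highly concave, log/ℓ₀-like one:

```python
import numpy as np

def psi(u, lam):
    # psi(u, lam) = 2u / (u + sqrt(4*lam + u^2)) + log(2*lam + u^2 + u*sqrt(4*lam + u^2))
    r = np.sqrt(4.0 * lam + u ** 2)
    return 2.0 * u / (u + r) + np.log(2.0 * lam + u ** 2 + u * r)

u = np.linspace(0.0, 5.0, 6)
for lam in [10.0, 1.0, 0.01]:
    # Subtract the value at u = 0 so the curves share a common origin.
    print(lam, np.round(psi(u, lam) - psi(0.0, lam), 3))
```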
4 Model Properties
The proposed blind deblurring strategy involves simply minimizing (7); no additional steps for tradeoff parameter selection or structure/salient-edge detection are required unlike other state-of-the-art
approaches. This section will examine theoretical properties of (7) that ultimately allow such a simple algorithm to succeed. First, we will demonstrate a form of intrinsic column normalization that
facilitates the balanced sparse estimation of the unknown latent image and implicitly de-emphasizes
regions with large blur and few dominate edges. Later we describe an appealing form of noisedependent shape adaptation that helps in avoiding local minima. While there are multiple, complementary perspectives for interpreting the behavior of this algorithm, more detailed analyses, as well
as extensions to other types of underdetermined inverse problems such as dictionary learning, will
be deferred to a later publication.
4.1 Column-Normalized Sparse Estimation
Using the simple reparameterization z_i ≜ x_i‖w̄_i‖₂, it follows that (7) is exactly equivalent to solving

    min_{z; w≥0, λ≥0} (1/λ)‖y − H̃z‖₂² + ∑_i ψ(|z_i|, λ) + (n − m) log λ,        (8)
where z = [z₁, . . . , z_m]ᵀ and H̃ is simply the ℓ₂-column-normalized version of H. Moreover,
it can be shown that this ψ is a concave, non-decreasing function of |z|, and hence represents a
canonical sparsity-promoting penalty function with respect to z [26]. Consequently, noise and kernel dependencies notwithstanding, this reparameterization places the proposed cost function in a
form exactly consistent with nearly all prototypical sparse regression problems, where ℓ₂ column
normalization is ubiquitous, at least in part, to avoid favoring one column over another during the
estimation process (which can potentially bias the solution). To understand the latter point, note
that ‖y − H̃z‖₂² ∝ zᵀH̃ᵀH̃z − 2yᵀH̃z. Among other things, because of the normalization, the
quadratic factor H̃ᵀH̃ now has a unit diagonal, and likewise the inner products yᵀH̃ are scaled
by the consistent induced ℓ₂ norms, which collectively avoids the premature favoring of any one
element of z over another. Moreover, no additional heuristic kernel penalty terms such as in (5)
are required since H̃ is in some sense self-regularized by the normalization. Additional ancillary
benefits of (8) will be described in Section 4.2.
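In code, the normalization and the unit-diagonal property are immediate (a minimal sketch, names ours):

```python
import numpy as np

rng = np.random.default_rng(1)
H = np.abs(rng.normal(size=(8, 4)))          # toy blur operator with nonnegative entries

norms = np.linalg.norm(H, axis=0)            # ||w_bar_i||_2 for each column
H_tilde = H / norms                          # l2-column-normalized operator

print(np.round(np.diag(H_tilde.T @ H_tilde), 6))  # all ones: unit diagonal
```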
Of course we can always apply the same reparameterization to existing algorithms in the form of
(5). While this will indeed result in normalized columns and a properly balanced data-fit term, the
raw norms will now appear in the penalty function g, giving the equivalent objective

    min_{z; w≥0} ‖y − H̃z‖₂² + α ∑_i g(z_i ‖w̄_i‖₂⁻¹) + β ∑_j h(w_j).        (9)
However, the presence of these norms now embedded in g may have undesirable consequences.
Simply put, the problem (9) will favor solutions where the ratio z_i/‖w̄_i‖₂ is sparse or nearly so,
which can be achieved by either making many z_i zero or many ‖w̄_i‖₂ big. If some z_i is estimated
to be zero (and many z_i will provably be exactly zero at any local minimum if g(x) is a concave,
non-decreasing function of |x|), then the corresponding ‖w̄_i‖₂ will be unconstrained. In contrast,
if a given z_i is non-zero, there will be a stronger push for the associated ‖w̄_i‖₂ to be large, i.e.,
more like the delta kernel which maximizes the ℓ₂ norm. Thus, the relative penalization of the
kernel norms will depend on the estimated local image gradients, and no-blur delta solutions may
be arbitrarily favored in parts of the image plane dominated by edges, the very place where blur
estimation information is paramount.
In reality, the local kernel norms ‖w̄_i‖₂, which quantify the degree of local blur as mentioned previously, should be completely independent of the sparsity of the image gradients in the same location.
This is of course because the different blurring effects from camera shake are independent of the
locations of strong edges in a given scene, since the blur operator is only a function of camera motion (at least to a first order approximation). One way to compensate for this independence would be
to simply optimize (9) with ‖w̄_i‖₂ removed from g. While this is possible in principle, enforcing
the non-convex and coupled constraints required to maintain normalized columns is extremely difficult. Another option would be to carefully choose α and h to somehow compensate. In contrast,
our algorithm handles these complications seamlessly without any additional penalty terms.
4.2 Noise-Dependent, Parameter-Free Homotopy Continuation
Column normalization can be viewed as a principled first step towards solving challenging sparse
estimation problems. However, when non-convex sparse regularizers are used for the image penalty,
e.g., ℓ_p norms with p < 1, then local minima can be a significant problem. The rationalization for
using such potentially problematic non-convexity is as follows; more details can be found in [17, 27].
When applied to a sharp image, any blur operator will necessarily contribute two opposing effects:
(i) it reduces a measure of the image sparsity, which normally increases the penalty ∑_i |y_i|^p, and
(ii) it broadly reduces the overall image variance, which actually reduces ∑_i |y_i|^p. Additionally,
the greater the degree of blur, the more effect (ii) will begin to overshadow (i). Note that we can
always apply greater and greater blur to any sharp image x such that the variance of the resulting
blurry y is arbitrarily small. This then produces an arbitrarily small ℓ_p norm, which implies that
∑_i |y_i|^p < ∑_i |x_i|^p, meaning that the penalty actually favors the blurry image over the sharp one.
In a practical sense though, the amount of blur that can be tolerated before this undesirable preference for y over x occurs is much larger as p approaches zero. This is because the more concave the image penalty becomes (as a function of coefficient magnitudes), the less sensitive it is to image variance and the more sensitive it is to image sparsity. In fact the scale-invariant special case where p → 0 depends only on sparsity, or the number of elements that are exactly equal to zero.² We may therefore expect such a highly concave, sparsity-promoting penalty to favor the sharp image over the blurry one in a broader range of blur conditions. Even with other families of penalty functions the same basic notion holds: greater concavity means greater sparsity preference and less sensitivity to variance changes that favor no-blur degenerate solutions.
From an implementational standpoint, homotopy continuation methods provide one attractive means of dealing with difficult non-convex penalty functions and the associated constellation of local minima [3]. The basic idea is to use a parameterized family of sparsity-promoting functions g(x; θ), where different values of θ determine the relative degree of concavity, allowing a transition from something convex such as the ℓ1 norm (with θ large) to something concave such as the ℓ0 norm (with θ small). Moreover, to ensure cost function descent (see below), we also require that g(x; θ₂) ≤ g(x; θ₁) whenever θ₂ ≤ θ₁, noting that this rules out simply setting θ = p and using the family of ℓp norms. We then begin optimization with a large θ value; later, as the estimation progresses and hopefully we are near a reasonably good basin of attraction, θ is reduced, introducing greater concavity, a process which is repeated until convergence, all the while guaranteeing cost function descent. While potentially effective in practice, homotopy continuation methods require both a trade-off parameter for g(x; θ) and a pre-defined schedule or heuristic for adjusting θ, both of which could potentially be image dependent.
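For illustration, a generic continuation loop of this kind might look as follows in Python. The schedule, step size, penalty weight, and smoothing constant are all assumptions of this sketch (a reweighted-ℓ1 scheme in the spirit of [3]); it is not the proposed method, which instead ties the continuation to the estimated noise level, as described next.

```python
import numpy as np

def soft_threshold(v, tau):
    # Elementwise proximal operator of tau*||.||_1 (tau may be an array).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def continuation_lp(loss_grad, z0, p_schedule, lam=0.1, step=0.1,
                    inner_iters=50, eps=1e-3):
    """Illustrative homotopy continuation: p decreases from 1 toward 0,
    increasing the concavity of a smoothed l_p penalty via reweighting.
    `loss_grad` is any user-supplied gradient of the data-fit term.
    """
    z = z0.copy()
    for p in p_schedule:                         # e.g., [1.0, 0.7, 0.4, 0.1]
        for _ in range(inner_iters):
            w = (np.abs(z) + eps) ** (p - 1.0)   # reweighting from current iterate
            z = soft_threshold(z - step * loss_grad(z), step * lam * w)
    return z
```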
The proposed deblurring algorithm automatically implements a form of noise-dependent, parameter-free homotopy continuation with several attractive auxiliary properties [26]. To make this claim precise and facilitate subsequent analysis, we first introduce the definition of relative concavity [19]:
Definition 1. Let u be a strictly increasing function on [a, b]. The function ν is concave relative to u on the interval [a, b] if and only if ν(y) ≤ ν(x) + (ν′(x)/u′(x)) [u(y) − u(x)] holds ∀x, y ∈ [a, b].
We will use ν ≺ u to denote that ν is concave relative to u on [0, ∞). This can be understood as a natural generalization of the traditional notion of concavity, in that a concave function is equivalently concave relative to a linear function per Definition 1. In general, if ν ≺ u, then when ν and u are set to have the same functional value and the same slope at any given point (i.e., by an affine transformation of u), ν lies completely under u. In the context of homotopy continuation, an ideal candidate penalty would be one for which g(x; θ₁) ≺ g(x; θ₂) whenever θ₁ ≤ θ₂. This would ensure that greater sparsity-inducing concavity is introduced as θ is reduced. We now demonstrate that ρ(|z|, λ) is such a function, with λ occupying the role of θ. This dependency on the noise parameter is unlike other continuation methods and ultimately leads to several attractive attributes.
Theorem 1. If λ₁ < λ₂, then ρ(u, λ₁) ≺ ρ(u, λ₂) for u ≥ 0. Additionally, in the limit as λ → 0, Σᵢ ρ(|zᵢ|, λ) converges to the ℓ0 norm (up to an inconsequential scaling and translation). Conversely, as λ becomes large, Σᵢ ρ(|zᵢ|, λ) converges to 2‖z‖₁/√λ.
The proof has been deferred to the supplementary file. The relevance of this result can be understood as follows. First, at the beginning of the optimization process λ will be large both because of initialization and because we have not yet found a relatively sparse z and associated w such that y can be well-approximated; hence the estimated λ should not be small. Based on Theorem 1, in this regime (8) approaches

    min_z ‖y − H̄z‖²₂ + 2√λ ‖z‖₁   (10)

assuming w and λ are fixed. Note incidentally that this square-root dependency on λ, which arises naturally from our model, is frequently advocated when performing regular ℓ1-norm penalized sparse regression given that the true noise variance is λ [2]. Additionally, because λ must be relatively large to arrive at this ℓ1 approximation, the estimation need only focus on reproducing the largest elements in z since the sparse penalty will dominate the data fit term. Furthermore, these larger elements are on average more likely to be in regions of relatively lower blurring, or high ‖w̄ᵢ‖₂ value, by virtue of the reparameterization zᵢ = xᵢ‖w̄ᵢ‖₂. Consequently, the less concave initial estimation can proceed successfully by de-emphasizing regions with high blur or low ‖w̄ᵢ‖₂, and focusing on coarsely approximating regions with relatively less blur.
² Note that even if the true sharp image is not exactly sparse, as long as it can be reasonably well-approximated by some exactly sparse image in an ℓ2 norm sense, then the analysis here still holds [27].
Figure 1: Effectiveness of spatially-adaptive sparsity on the Elephant image. From left to right: the blurry image, the deblurred image and estimated local kernels without spatially-adaptive column normalization, the analogous results with this normalization and its spatially-varying impact on image estimation, and the associated map of ‖w̄ᵢ‖₂⁻¹, which reflects the degree of estimated local blurring. (Panel labels: Blurry, Spatially Non-Adaptive, Spatially Adaptive, Blur-map.)
Later, as the estimation proceeds and w and z are refined, λ will be reduced, which in turn necessarily increases the relative concavity of the penalty ρ per Theorem 1. However, the added concavity will now be welcome for resolving increasingly fine details uncovered by a lower noise variance and the concomitant boosted importance of the data fidelity term, especially since many of these uncovered details may reside near increasingly blurry regions of the image and we need to avoid unwanted no-blur solutions. Eventually the penalty can even approach the ℓ0 norm (although images are generally not exactly sparse, and other noise factors and unmodeled artifacts are usually present such that λ will never go all the way to zero). Importantly, all of this implicit, spatially-adaptive penalization occurs without the need for trade-off parameters or additional structure selection measures, meaning carefully engineered heuristics designed to locate prominent edges such that good global solutions can be found without strongly concave image penalties [21, 5, 28, 8, 9]. Figure 1 displays results of this procedure both with and without the spatially-varying column normalizations and the implicit adaptive penalization that helps compensate for locally varying image blur.
5 Experimental Results
This section compares the proposed method with several state-of-the-art algorithms for non-uniform
blind deblurring using real-world images from previously published papers (note that source code
is not available for conducting more widespread evaluations with most algorithms). The supplementary file contains a number of additional comparisons, including assessments with a benchmark
uniform blind deblurring dataset where ground truth is available. Overall, our algorithm consistently performs comparably or better on all of these respective images. Experimental specifics of
our implementation (e.g., regarding the non-blind deblurring step, projection operators, etc.) are
also contained in the supplementary file for space considerations.
Comparison with Harmeling et al. [8] and Hirsch et al. [9]: Results are based on three test
images provided in [8]. Figure 2 displays deblurring comparisons based on the Butchershop and
Vintage-car images. In both cases, the proposed algorithm reveals more fine details than the
other methods, despite its simplicity and lack of salient structure selection heuristics or trade-off
parameters. Note that with these images, ground truth blur kernels were independently estimated
using a special capturing process [8]. As shown in the supplementary file, the estimated blur kernel
patterns obtained from our algorithm better resemble the ground truth relative to the other methods,
a performance result that compensates for any differences in the non-blind step.
Comparison with Whyte et al. [25]: Results on the Pantheon test image from [25] are shown in
Figure 3 (top row), where we observe that the deblurred image from Whyte et al. has noticeable
ringing artifacts. In contrast, our result is considerably cleaner.
Comparison with Gupta et al. [7]: We next experiment using the test image Building from [7],
which contains large rotational blurring that can be challenging for blind deblurring algorithms.
Figure 3 (middle row) reveals that our algorithm contains less ringing and more fine details relative
to Gupta et al.
Comparison with Joshi et al. [13]: Joshi et al. present a deblurring algorithm that relies upon additional hardware for estimating camera motion [13]. However, even without this additional information,
Figure 2: Non-uniform deblurring results. Comparison with Harmeling [8] and Hirsch [9] on real-world images. Rows: Butchershop and Vintage-car; panels: Blurry, Harmeling, Hirsch, Ours. (Better viewed electronically with zooming.)

Figure 3: Non-uniform deblurring results. Comparison with Whyte [25], Gupta [7], and Joshi [13] on real-world images. Rows: Pantheon, Building, Sculpture; panels: Blurry, competing method, Ours. (Better viewed electronically with zooming.)
our algorithm produces a better sharp estimate of the Sculpture image from [13], with
fewer ringing artifacts and higher resolution details. See Figure 3 (bottom row).
6 Conclusion
This paper presents a strikingly simple yet effective method for non-uniform camera shake removal based upon a principled, transparent cost function that is open to analysis and further extensions/refinements. For example, it can be combined with the model from [29] to perform joint multi-image alignment, denoising, and deblurring. Both theoretical and empirical evidence are provided demonstrating the efficacy of the blur-dependent, spatially-adaptive sparse regularization which emerges from our model. The framework also suggests exploring other related cost functions that, while deviating from the original probabilistic script, nonetheless share similar properties. One such simple example is a penalty of the form Σᵢ log(√λ + |xᵢ|‖w̄ᵢ‖₂); many others are possible.
Acknowledgements
This work was supported in part by National Natural Science Foundation of China (61231016).
References
[1] S. D. Babacan, R. Molina, M. N. Do, and A. K. Katsaggelos. Bayesian blind deconvolution with general sparse image priors. In ECCV, 2012.
[2] E. Candès and Y. Plan. Near-ideal model selection by ℓ1 minimization. The Annals of Statistics, 37(5A):2145–2177, 2009.
[3] R. Chartrand and W. Yin. Iteratively reweighted algorithms for compressive sensing. In ICASSP, 2008.
[4] S. Cho, H. Cho, Y.-W. Tai, and S. Lee. Registration based non-uniform motion deblurring. Comput. Graph. Forum, 31(7-2):2183–2192, 2012.
[5] S. Cho and S. Lee. Fast motion deblurring. In SIGGRAPH Asia, 2009.
[6] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman. Removing camera shake from a single photograph. In SIGGRAPH, 2006.
[7] A. Gupta, N. Joshi, C. L. Zitnick, M. Cohen, and B. Curless. Single image deblurring using motion density functions. In ECCV, 2010.
[8] S. Harmeling, M. Hirsch, and B. Schölkopf. Space-variant single-image blind deconvolution for removing camera shake. In NIPS, 2010.
[9] M. Hirsch, C. J. Schuler, S. Harmeling, and B. Schölkopf. Fast removal of non-uniform camera shake. In ICCV, 2011.
[10] M. Hirsch, S. Sra, B. Schölkopf, and S. Harmeling. Efficient filter flow for space-variant multiframe blind deconvolution. In CVPR, 2010.
[11] Z. Hu and M.-H. Yang. Fast non-uniform deblurring using constrained camera pose subspace. In BMVC, 2012.
[12] H. Ji and K. Wang. A two-stage approach to blind spatially-varying motion deblurring. In CVPR, 2012.
[13] N. Joshi, S. B. Kang, C. L. Zitnick, and R. Szeliski. Image deblurring using inertial measurement sensors. In ACM SIGGRAPH, 2010.
[14] D. Krishnan, T. Tay, and R. Fergus. Blind deconvolution using a normalized sparsity measure. In CVPR, 2011.
[15] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. Deconvolution using natural image priors. Technical report, MIT, 2007.
[16] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman. Efficient marginal likelihood optimization in blind deconvolution. In CVPR, 2011.
[17] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman. Understanding blind deconvolution algorithms. IEEE Trans. Pattern Anal. Mach. Intell., 33(12):2354–2367, 2011.
[18] J. G. Nagy and D. P. O'Leary. Restoring images degraded by spatially variant blur. SIAM J. Sci. Comput., 19(4):1063–1082, 1998.
[19] J. A. Palmer. Relative convexity. Technical report, UCSD, 2003.
[20] J. A. Palmer, D. P. Wipf, K. Kreutz-Delgado, and B. D. Rao. Variational EM algorithms for non-Gaussian latent variable models. In NIPS, 2006.
[21] Q. Shan, J. Jia, and A. Agarwala. High-quality motion deblurring from a single image. In SIGGRAPH, 2008.
[22] M. Sorel and F. Sroubek. Image Restoration: Fundamentals and Advances. CRC Press, 2012.
[23] Y.-W. Tai, P. Tan, and M. S. Brown. Richardson-Lucy deblurring for scenes under a projective motion path. IEEE Trans. Pattern Anal. Mach. Intell., 33(8):1603–1618, 2011.
[24] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211–244, 2001.
[25] O. Whyte, J. Sivic, A. Zisserman, and J. Ponce. Non-uniform deblurring for shaken images. In CVPR, 2010.
[26] D. P. Wipf, B. D. Rao, and S. S. Nagarajan. Latent variable Bayesian models for promoting sparsity. IEEE Trans. Information Theory, 57(9):6236–6255, 2011.
[27] D. P. Wipf and H. Zhang. Revisiting Bayesian blind deconvolution. Submitted to Journal of Machine Learning Research, 2013.
[28] L. Xu and J. Jia. Two-phase kernel estimation for robust motion deblurring. In ECCV, 2010.
[29] H. Zhang, D. P. Wipf, and Y. Zhang. Multi-image blind deblurring using a coupled adaptive sparse prior. In CVPR, 2013.
4,270 | 4,865 | Provable Subspace Clustering:
When LRR meets SSC
Yu-Xiang Wang
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213 USA
[email protected]
Huan Xu
Dept. of Mech. Engineering
National Univ. of Singapore
Singapore, 117576
[email protected]
Chenlei Leng
Department of Statistics
University of Warwick
Coventry, CV4 7AL, UK
[email protected]
Abstract
Sparse Subspace Clustering (SSC) and Low-Rank Representation (LRR) are both
considered state-of-the-art methods for subspace clustering. The two methods are fundamentally similar in that both are convex optimizations exploiting the intuition of "Self-Expressiveness". The main difference is that SSC minimizes the vector ℓ1 norm of the representation matrix to induce sparsity while LRR minimizes the nuclear norm (aka trace norm) to promote a low-rank structure. Because the representation matrix is often simultaneously sparse and low-rank, we propose a new algorithm, termed Low-Rank Sparse Subspace Clustering (LRSSC), by combining SSC and LRR, and develop theoretical guarantees of when the algorithm succeeds. The results reveal interesting insights into the strengths and weaknesses of SSC and LRR and demonstrate how LRSSC can take the advantages of both methods in preserving the "Self-Expressiveness Property" and "Graph Connectivity" at
the same time.
1 Introduction
We live in the big data era, a world where an overwhelming amount of data is generated and collected every day, such that it is becoming increasingly impossible to process data in its raw form, even though computers are getting exponentially faster over time. Hence, compact representations of data such as low-rank approximation (e.g., PCA [13], Matrix Completion [4]) and sparse representation [6] become crucial in understanding the data with minimal storage. The underlying assumption is that high-dimensional data often lie in a low-dimensional subspace [4]. Yet, when such data points are generated from different sources, they form a union of subspaces. Subspace Clustering deals with exactly this structure by clustering data points according to their underlying subspaces. Applications include motion segmentation and face clustering in computer vision [16, 8], hybrid system identification in control [26, 2], and community clustering in social networks [12], to name a few.
Numerous algorithms have been proposed to tackle the problem. Recent examples include GPCA [25], Spectral Curvature Clustering [5], Sparse Subspace Clustering (SSC) [7, 8], Low Rank
Representation (LRR) [17, 16] and its noisy variant LRSC [9] (for a more exhaustive survey of subspace clustering algorithms, we refer readers to the excellent survey paper [24] and the references
therein). Among these algorithms, LRR and SSC, based on minimizing the nuclear norm and the ℓ1 norm of the representation matrix respectively, remain the top performers on the Hopkins155 motion segmentation benchmark dataset [23]. Moreover, they are among the few subspace clustering algorithms supported with theoretic guarantees: both algorithms are known to succeed when the subspaces are independent [27, 16]. Later, [8] showed that subspaces being disjoint is sufficient for SSC to succeed¹, and [22] further relaxed this condition to include some cases of overlapping
¹ Disjoint subspaces intersect only at the origin. This is a less restrictive assumption compared to independent subspaces; e.g., 3 coplanar lines passing through the origin are not independent, but they are disjoint.
subspaces. Robustness of the two algorithms has been studied too. Liu et al. [18] showed that a variant of LRR works even in the presence of some arbitrarily large outliers, while Wang and Xu [29]
provided both deterministic and randomized guarantees for SSC when data are noisy or corrupted.
Despite LRR and SSC's success, some questions remain unanswered. LRR has never been shown to succeed other than under the very restrictive "independent subspace" assumption. SSC's solution is sometimes so sparse that the affinity graph of data from a single subspace may not form a connected body [19]. Moreover, as our experiment with Hopkins155 data shows, the instances where SSC fails are often different from those where LRR fails. Hence, a natural question is whether combining the two algorithms leads to a better method, in particular since the underlying representation matrix we
want to recover is both low-rank and sparse simultaneously.
In this paper, we propose Low-Rank Sparse Subspace Clustering (LRSSC), which minimizes a weighted sum of the nuclear norm and the vector ℓ1-norm of the representation matrix. We show theoretical guarantees for LRSSC that strengthen the results in [22]. The statement and proof also shed insight on why LRR requires the independence assumption. Furthermore, the results imply that there is a fundamental trade-off between the inter-class separation and the intra-class connectivity. Indeed, our
experiment shows that LRSSC works well in cases where data distribution is skewed (graph connectivity becomes an issue for SSC) and subspaces are not independent (LRR gives poor separation).
These insights would be useful when developing subspace clustering algorithms and applications.
We remark that in the general regression setup, simultaneous nuclear norm and ℓ1-norm regularization has been studied before [21]. However, our focus is on the subspace clustering problem, and hence the results and analysis are completely different.
2 Problem Setup
Notations: We denote the data matrix by X ∈ ℝ^{n×N}, where each column of X (normalized to a unit vector) belongs to a union of L subspaces

    S₁ ∪ S₂ ∪ ... ∪ S_L.

Each subspace ℓ contains Nℓ data samples with N₁ + N₂ + ... + N_L = N. We observe the noisy data matrix X. Let X^(ℓ) ∈ ℝ^{n×Nℓ} denote the selection (as a set and a matrix) of columns in X that belong to Sℓ ⊂ ℝⁿ, which is a dℓ-dimensional subspace. Without loss of generality, let X = [X^(1), X^(2), ..., X^(L)] be ordered. In addition, we use ‖·‖ to represent the Euclidean norm (for vectors) or the spectral norm (for matrices) throughout the paper.
Method: We solve the following convex optimization problem

    LRSSC:   min_C ‖C‖* + λ‖C‖₁   s.t.   X = XC,  diag(C) = 0.   (1)
Spectral clustering techniques (e.g., [20]) are then applied on the affinity matrix W = |C| + |C|T
where C is the solution to (1) to obtain the final clustering and | ? | is the elementwise absolute value.
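For concreteness, a small illustrative sketch of this pipeline follows; it solves (1) with a generic convex solver rather than the scalable ADMM of Section 5.2, and it assumes CVXPY and scikit-learn are available. The exact equality constraint can be numerically brittle, so in practice the noisy formulation (6) is usually preferable; this sketch is for small problems only.

```python
import cvxpy as cp
import numpy as np
from sklearn.cluster import SpectralClustering

def lrssc(X, lam, n_clusters):
    """Solve min ||C||_* + lam*||C||_1 s.t. X = XC, diag(C) = 0, then cluster."""
    N = X.shape[1]
    C = cp.Variable((N, N))
    objective = cp.Minimize(cp.normNuc(C) + lam * cp.sum(cp.abs(C)))
    constraints = [X @ C == X, cp.diag(C) == 0]
    cp.Problem(objective, constraints).solve()
    A = np.abs(C.value)
    W = A + A.T                                   # affinity matrix |C| + |C|^T
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="precomputed").fit_predict(W)
    return labels, C.value
```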
Criterion of success: In the subspace clustering task, as opposed to compressive sensing or matrix completion, there is no "ground-truth" C to compare the solution against. Instead, the algorithm succeeds if each sample is expressed as a linear combination of samples belonging to the same subspace, i.e., the output matrix C is block diagonal (up to appropriate permutation) with each subspace cluster represented by a disjoint block. Formally, we have the following definition.
Definition 1 (Self-Expressiveness Property (SEP)). Given subspaces {Sℓ}_{ℓ=1}^L and data points X from these subspaces, we say a matrix C obeys the Self-Expressiveness Property if the nonzero entries of each cᵢ (the ith column of C) correspond to only those columns of X sampled from the same subspace as xᵢ.
Note that a solution obeying SEP alone does not imply that the clustering is correct, since each block may not be fully connected. This is the so-called "graph connectivity" problem studied in [19]. On the other hand, failure to achieve SEP does not necessarily imply clustering error either, as the spectral clustering step may give a (sometimes perfect) solution even when there are connections between blocks. Nevertheless, SEP is the condition that verifies the design intuition of SSC and LRR. Notice that if C obeys SEP and each block is connected, we immediately get the correct clustering.
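Since ground-truth labels are available in synthetic experiments, SEP can be verified directly; a minimal NumPy check might look as follows (the function name and tolerance are our own choices).

```python
import numpy as np

def obeys_sep(C, labels, tol=1e-8):
    """True if every entry of C linking two different subspaces is numerically zero."""
    labels = np.asarray(labels)
    off_block = labels[:, None] != labels[None, :]   # pairs from different subspaces
    return np.abs(C[off_block]).max() <= tol
```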
3 Theoretic Guarantees
3.1 The Deterministic Setup
Before we state our theoretical results for the deterministic setup, we need to define a few quantities.

Definition 2 (Normalized dual matrix set). Let {Λ₁(X)} be the set of optimal solutions to

    max_{Λ₁,Λ₂,Λ₃} ⟨X, Λ₁⟩   s.t.   ‖Λ₂‖_∞ ≤ λ,  ‖XᵀΛ₁ − Λ₂ − Λ₃‖ ≤ 1,  diag⊥(Λ₃) = 0,

where ‖·‖_∞ is the vector ℓ∞ norm and diag⊥ selects all the off-diagonal entries. Let Λ* = [ν₁*, ..., ν_N*] ∈ {Λ₁(X)} obey νᵢ* ∈ span(X) for every i = 1, ..., N.² For every Λ = [ν₁, ..., ν_N] ∈ {Λ₁(X)}, we define the normalized dual matrix V for X as

    V(X) ≜ [ν₁/‖ν₁*‖, ..., ν_N/‖ν_N*‖],

and the normalized dual matrix set {V(X)} as the collection of V(X) for all Λ ∈ {Λ₁(X)}.
Definition 3 (Minimax subspace incoherence property). Compactly denote V^(ℓ) = V(X^(ℓ)). We say the vector set X^(ℓ) is μ-incoherent to other points if

    μ ≥ μ(X^(ℓ)) := min_{V^(ℓ) ∈ {V^(ℓ)}}  max_{x ∈ X∖X^(ℓ)} ‖(V^(ℓ))ᵀ x‖_∞.
The incoherence μ in the above definition measures how separable the sample points in Sℓ are against sample points in other subspaces (small μ represents more separable data). Our definition differs from Soltanolkotabi and Candès's definition of subspace incoherence [22] in that it is defined as a minimax over all possible dual directions. It is easy to see that μ-incoherence in [22, Definition 2.4] implies μ-minimax-incoherence, as their dual directions are contained in {V(X)}. In fact, in several interesting cases, μ can be significantly smaller under the new definition. We illustrate the point with the two examples below and leave detailed discussions to the supplementary materials.
Example 1 (Independent Subspace). Suppose the subspaces are independent, i.e., dim(S₁ ⊕ ... ⊕ S_L) = Σ_{ℓ=1,...,L} dim(Sℓ); then all X^(ℓ) are 0-incoherent under our Definition 3. This is because for each X^(ℓ) one can always find a dual matrix V^(ℓ) ∈ {V^(ℓ)} whose column space is orthogonal to the span of all other subspaces. To contrast, the incoherence parameter according to Definition 2.4 in [22] will be a positive value, potentially large if the angles between subspaces are small.
Example 2 (Random except 1 subspace). Suppose we have L disjoint 1-dimensional subspaces in ℝⁿ (L > n). Subspaces S₁, ..., S_{L−1} are randomly drawn. S_L is chosen such that its angle to one of the L − 1 subspaces, say S₁, is π/6. Then the incoherence parameter μ(X^(L)) defined in [22] is at least cos(π/6). However, under our new definition, it is not difficult to show that μ(X^(L)) ≤ 2√(6 log(L)/n) with high probability³.
The result also depends on the smallest singular value of a rank-d matrix (denoted by σ_d) and the inradius of a convex body, as defined below.

Definition 4 (inradius). The inradius of a convex body P, denoted by r(P), is defined as the radius of the largest Euclidean ball inscribed in P.

The smallest singular value and the inradius measure how well-represented each subspace is by its data samples. A small inradius/singular value implies either insufficient data or a skewed data distribution; in other words, it means that the subspace is "poorly represented". Now we may state our main result.
Theorem 1 (LRSSC). Self-expressiveness property holds for the solution of (1) on the data X if there exists a weighting parameter λ such that for all ℓ = 1, ..., L, one of the following two conditions holds:

    μ(X^(ℓ)) (1 + λ√Nℓ) < λ min_k σ_{dℓ}(X^(ℓ)_{−k}),   (2)

or

    μ(X^(ℓ)) (1 + λ) < λ min_k r(conv(±X^(ℓ)_{−k})),   (3)

where X^(ℓ)_{−k} denotes X^(ℓ) with its kth column removed and σ_{dℓ}(X^(ℓ)_{−k}) represents the dℓth (smallest non-zero) singular value of the matrix X^(ℓ)_{−k}.

² If this is not unique, pick the one with the least Frobenius norm.
³ The full proof is given in the supplementary. Also, it is easy to generalize this example to d-dimensional subspaces and to "random except K subspaces".
We briefly explain the intuition of the proof. The theorem is proven by duality. First we write out the dual problem of (1),

    Dual LRSSC:   max_{Λ₁,Λ₂,Λ₃} ⟨X, Λ₁⟩   s.t.   ‖Λ₂‖_∞ ≤ λ,  ‖XᵀΛ₁ − Λ₂ − Λ₃‖ ≤ 1,  diag⊥(Λ₃) = 0.

This leads to a set of optimality conditions, and leaves us to show the existence of a dual certificate satisfying these conditions. We then construct two levels of fictitious optimizations (which is the main novelty of the proof) and construct a dual certificate from the dual solutions of the fictitious optimization problems. Under conditions (2) and (3), we establish that this dual certificate meets all optimality conditions, hence certifying that SEP holds. Due to space constraints, we defer the detailed proof to the supplementary materials and focus on discussions of the results in the main text.
Remark 1 (SSC). Theorem 1 can be considered a generalization of Theorem 2.5 of [22]. Indeed, when λ → ∞, (3) reduces to the following:

    μ(X^(ℓ)) < min_k r(conv(±X^(ℓ)_{−k})).

The readers may observe that this is exactly the same as Theorem 2.5 of [22], with the only difference being the definition of μ. Since our definition of μ(X^(ℓ)) is tighter (i.e., smaller) than that in [22], our guarantee for SSC is indeed stronger. Theorem 1 also implies that the good properties of SSC (such as handling overlapping subspaces and large dimension) shown in [22] are also valid for LRSSC for a range of λ greater than a threshold.
To further illustrate the key difference from [22], we describe the following scenario.
Example 3 (Correlated/Poorly Represented Subspaces). Suppose the subspaces are poorly represented, i.e., the inradius r is small. If, furthermore, the subspaces are highly correlated, i.e., the canonical angles between subspaces are small, then the subspace incoherence μ₀ defined in [22] can be quite large (close to 1). Thus, the success condition μ₀ < r presented in [22] is violated. This is an important scenario because real data such as those in Hopkins155 and Extended YaleB often suffer from both problems, as illustrated in [8, Figures 9 & 10]. Using our new definition of incoherence μ, as long as the subspaces are "sufficiently independent"⁴ (regardless of their correlation), μ will assume very small values (e.g., Example 2), making SEP possible even if r is small, namely when subspaces are poorly represented.
Remark 2 (LRR). The guarantee is strongest when λ → ∞ and becomes superficial when λ → 0 unless the subspaces are independent (see Example 1). This seems to imply that the "independent subspace" assumption used in [16, 18] to establish sufficient conditions for LRR (and variants) to work is unavoidable.⁵ On the other hand, for each problem instance, there is a λ* such that whenever λ > λ*, the result satisfies SEP, so we should expect a phase transition phenomenon when tuning λ.

Remark 3 (A tractable condition). Condition (2) is based on singular values, hence is computationally tractable. In contrast, the verification of (3) or of the deterministic condition in [22] is NP-complete, as it involves computing the inradii of V-polytopes [10]. When λ → ∞, Theorem 1 reduces to the first computationally tractable guarantee for SSC that works for disjoint and potentially overlapping subspaces.
3.2 Randomized Results
We now present results for the random design case, i.e., data are generated under some random
models.
Definition 5 (Random data). "Random sampling" assumes that for each ℓ, data points in X^(ℓ) are iid uniformly distributed on the unit sphere of Sℓ. "Random subspace" assumes each Sℓ is generated independently by spanning dℓ iid uniformly distributed vectors on the unit sphere of ℝⁿ.

⁴ Due to space constraints, the concept is formalized in the supplementary materials.
⁵ Our simulation in Section 6 also supports this conjecture.
Lemma 1 (Singular value bound). Assume random sampling. If dℓ < Nℓ < n, then there exists an absolute constant C₁ such that with probability at least 1 − Nℓ^{−10},

    σ_{dℓ}(X) ≥ (1/2)√(Nℓ/dℓ) − 3 − C₁√(log Nℓ / dℓ),   or simply   σ_{dℓ}(X) ≥ (1/4)√(Nℓ/dℓ),

if we assume Nℓ ≥ C₂ dℓ, for some constant C₂.
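Lemma 1 is easy to sanity-check numerically; the sketch below samples points uniformly on the unit sphere of a random dℓ-dimensional subspace and compares the dℓ-th singular value against the simplified bound (all dimensions are arbitrary choices satisfying dℓ < Nℓ < n).

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_l, N_l = 500, 10, 200
basis, _ = np.linalg.qr(rng.standard_normal((n, d_l)))    # random d_l-dim subspace
coeffs = rng.standard_normal((d_l, N_l))
X = basis @ (coeffs / np.linalg.norm(coeffs, axis=0))     # unit-norm columns in the subspace
sigma_d = np.linalg.svd(X, compute_uv=False)[d_l - 1]     # d_l-th (smallest nonzero) singular value
print(sigma_d, ">=", 0.25 * np.sqrt(N_l / d_l))           # bound should hold w.h.p.
```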
Lemma 2 (Inradius bound [1, 22]). Assume random sampling of Nℓ = κℓ dℓ data points in each Sℓ. Then with probability larger than 1 − Σ_{ℓ=1}^L Nℓ e^{−√(dℓ Nℓ)},

    r(conv(±X^(ℓ)_{−k})) ≥ c(κℓ) √(log(κℓ)/(2dℓ))   for all pairs (ℓ, k).

Here, c(κℓ) is a constant depending on κℓ. When κℓ is sufficiently large, we can take c(κℓ) = 1/√8.
Combining Lemma 1 and Lemma 2, we get the following remark showing that conditions (2) and (3) are complementary.

Remark 4. Under the random sampling assumption, when λ is smaller than a threshold, the singular value condition (2) is better than the inradius condition (3). Specifically, σ_{dℓ}(X) > (1/4)√(Nℓ/dℓ) with high probability, so for some constant C > 1, the singular value condition is strictly better if

    λ < C / (√Nℓ − √(log(Nℓ/dℓ))),   or when Nℓ is large,   λ < C / (√Nℓ (1 + √(log(Nℓ/dℓ)))).
By further assuming a random subspace, we provide an upper bound on the incoherence μ.

Lemma 3 (Subspace incoherence bound). Assume random subspace and random sampling. It holds with probability greater than 1 − 2/N that for all ℓ,

    μ(X^(ℓ)) ≤ √(6 log N / n).
Combining Lemma 1 and Lemma 3, we have the following theorem.

Theorem 2 (LRSSC for random data). Suppose L rank-d subspaces are uniformly and independently generated from ℝⁿ, and N/L data points are uniformly and independently sampled from the unit sphere embedded in each subspace; furthermore, N > CdL for some absolute constant C. Then SEP holds with probability larger than 1 − 2/N − 1/(Cd)^{10}, if

    d < n/(96 log N),   for all   λ > 1 / ( √(N/L) (√(n/(96 d log N)) − 1) ).   (4)

The above condition is obtained from the singular value condition. Using the inradius guarantee, combined with Lemmas 2 and 3, we have a different success condition requiring d < n log(κ)/(96 log N) for all λ > 1/(√(n log κ/(96 d log N)) − 1). Ignoring constant terms, the condition on d is slightly better than (4) by a log factor, but the range of valid λ is significantly reduced.
4 Graph Connectivity Problem
The graph connectivity problem concerns whether, when SEP is satisfied, each disjoint block of the solution C to LRSSC represents a connected graph. This is equivalent to the connectivity of the solution of the following fictitious optimization problem, where each sample is constrained to be represented by the samples of the same subspace:

    min_{C^(ℓ)} ‖C^(ℓ)‖* + λ‖C^(ℓ)‖₁   s.t.   X^(ℓ) = X^(ℓ) C^(ℓ),  diag(C^(ℓ)) = 0.   (5)
The graph connectivity of SSC is studied in [19] under deterministic conditions (to make the problem well-posed). They show by a negative example that even if the well-posed condition is satisfied, the solution of SSC may not satisfy graph connectivity if the dimension of the subspace is greater than 3. On the other hand, the graph connectivity problem is not an issue for LRR: as the following proposition suggests, the intra-class connections of LRR's solution are inherently dense (fully connected).

Proposition 1. When the subspaces are independent, X is not full-rank, and the data points are randomly sampled from a unit sphere in each subspace, then the solution to LRR, i.e.,

    min_C ‖C‖*   s.t.   X = XC,

is class-wise dense, namely each diagonal block of the matrix C is all non-zero.
The proof makes use of the following lemma, which states the closed-form solution of LRR.

Lemma 4 ([16]). Take the skinny SVD of the data matrix X = UΣVᵀ. The closed-form solution to LRR is the shape interaction matrix C = VVᵀ.

Proposition 1 then follows from the fact that each entry of VVᵀ has a continuous distribution, hence the probability that any entry is exactly zero is negligible (a complete argument is given in the supplementary).
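Lemma 4 translates directly into a few lines of NumPy; the rank tolerance below is our own choice.

```python
import numpy as np

def lrr_closed_form(X, tol=1e-10):
    """Closed-form LRR solution per Lemma 4: C = V V^T from the skinny SVD X = U S V^T."""
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    V = Vt[s > tol].T            # right singular vectors for the nonzero singular values
    return V @ V.T               # shape interaction matrix; generically dense blockwise
```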
Readers may notice that when λ → 0, (5) is not exactly LRR, but LRR with an additional constraint that the diagonal entries are zero. We suspect this constrained version also has a dense solution. This is demonstrated numerically in Section 6.
5 Practical Issues
5.1 Data noise/sparse corruptions/outliers
The natural extension of LRSSC to handle noise is

    min_C (1/2)‖X − XC‖²_F + λ₁‖C‖* + λ₂‖C‖₁   s.t.   diag(C) = 0.   (6)

We believe it is possible (but perhaps tedious) to extend our guarantee to this noisy version following the strategy of [29], which analyzed the noisy version of SSC. This is left for future research.
According to the noisy analysis of SSC, a rule of thumb for choosing the scale of λ₁ and λ₂ is

    λ₁ = σ (1/(1+λ)) / √(2 log N),   λ₂ = σ (λ/(1+λ)) / √(2 log N),

where λ is the tradeoff parameter used in the noiseless case (1), σ is the estimated noise level, and N is the total number of entries.
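A small helper implementing this rule of thumb might look as follows (the function name is ours).

```python
import numpy as np

def noisy_lrssc_weights(lam, sigma, N):
    """Rule-of-thumb scales for (6): lam is the noiseless tradeoff from (1),
    sigma the estimated noise level, and N the total number of matrix entries."""
    scale = sigma / np.sqrt(2.0 * np.log(N))
    return scale / (1.0 + lam), scale * lam / (1.0 + lam)   # (lambda_1, lambda_2)
```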
In the case of sparse corruption, one may use an ℓ1 norm penalty instead of the Frobenius norm. For outliers, SSC is proven to be robust to them under mild assumptions [22], and we suspect a similar argument should hold for LRSSC too.
5.2 Fast Numerical Algorithm
As the subspace clustering problem is usually large-scale, off-the-shelf SDP solvers are often too slow to use. Instead, we derive an alternating direction method of multipliers (ADMM) [3], known to be scalable, to solve the problem numerically. The algorithm separates the two objectives and the diagonal constraint with dummy variables C₂ and J:

    min_{C₁,C₂,J} ‖C₁‖* + λ‖C₂‖₁   s.t.   X = XJ,  J = C₂ − diag(C₂),  J = C₁,   (7)

and updates J, C₁, C₂ and the three dual variables alternately. Thanks to the change of variables, all updates can be done in closed form. To further speed up convergence, we adopt the adaptive penalty mechanism of Lin et al. [15], which in some ways ameliorates the problem of tuning numerical parameters in ADMM. Detailed derivations, update rules, convergence guarantees, and the corresponding ADMM algorithm for the noisy version of LRSSC are made available in the supplementary materials.
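The exact closed-form updates are in the supplementary material; as an illustration only, one plausible fixed-penalty ADMM consistent with the splitting in (7) is sketched below (penalty ρ held fixed rather than adapted, stopping criteria omitted).

```python
import numpy as np

def soft_threshold(A, tau):
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def svt(A, tau):
    """Singular value thresholding: proximal operator of tau*||.||_*."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def lrssc_admm(X, lam, rho=1.0, n_iter=200):
    """Sketch of a fixed-penalty ADMM for (7); not the authors' exact implementation."""
    n, N = X.shape
    J = np.zeros((N, N)); C1 = np.zeros((N, N)); C2 = np.zeros((N, N))
    L1 = np.zeros((n, N)); L2 = np.zeros((N, N)); L3 = np.zeros((N, N))
    XtX = X.T @ X
    lhs = np.linalg.inv(XtX + 2.0 * np.eye(N))        # cached factor for the J-update
    for _ in range(n_iter):
        # J-update: quadratic in J given all other variables.
        rhs = XtX + X.T @ L1 / rho + (C2 - L2 / rho) + (C1 - L3 / rho)
        J = lhs @ rhs
        C1 = svt(J + L3 / rho, 1.0 / rho)             # nuclear-norm proximal step
        C2 = soft_threshold(J + L2 / rho, lam / rho)  # elementwise l1 proximal step
        np.fill_diagonal(C2, 0.0)                     # enforce diag(C) = 0
        L1 += rho * (X - X @ J)                       # dual ascent on the constraints
        L2 += rho * (J - C2)
        L3 += rho * (J - C1)
    return C2
```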
6 Numerical Experiments
To verify our theoretical results and illustrate the advantages of LRSSC, we design several numerical experiments. Due to space constraints, we discuss only two of them in the paper and leave the rest to the supplementary materials. In all our numerical experiments, we use the ADMM implementation of LRSSC with a fixed set of numerical parameters. The results are given against an exponential grid of λ values, so comparisons to the ℓ1-norm-only (SSC) and nuclear-norm-only (LRR) extremes are clear from the two
ends of the plots.
6.1 Separation-Sparsity Tradeoff
We first illustrate the tradeoff of the solution between obeying SEP and being connected (the latter measured via the intra-class sparsity of the solution). We randomly generate L subspaces of dimension 10 from ℝ⁵⁰. Then, 50 unit-length random samples are drawn from each subspace and concatenated into a 50 × 50L data matrix. We use the Relative Violation [29] to measure the violation of SEP and the Gini Index [11] to measure the intra-class sparsity⁶. These quantities are defined below:

    RelViolation(C, M) = ( Σ_{(i,j)∉M} |C|_{i,j} ) / ( Σ_{(i,j)∈M} |C|_{i,j} ),

where M is the index set that contains all (i, j) such that xᵢ, xⱼ ∈ Sℓ for some ℓ.
GiniIndex(C, M) is obtained by first sorting the absolute values of C_{i,j}, (i, j) ∈ M, into a non-decreasing sequence c⃗ = [c₁, ..., c_{|M|}], and then evaluating

    GiniIndex(vec(C_M)) = 1 − 2 Σ_{k=1}^{|M|} (c_k / ‖c⃗‖₁) · ( (|M| − k + 1/2) / |M| ).
Note that RelViolation takes values in [0, ∞] and SEP is attained when RelViolation is zero. Similarly, the Gini index takes values in [0, 1] and is larger when intra-class connections are sparser.
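Both metrics are straightforward to compute; a NumPy sketch (with ground-truth labels defining M) follows.

```python
import numpy as np

def rel_violation(C, labels):
    """RelViolation(C, M): mass of |C| outside the ground-truth blocks M."""
    labels = np.asarray(labels)
    M = labels[:, None] == labels[None, :]
    A = np.abs(C)
    return A[~M].sum() / A[M].sum()

def gini_index(C, labels):
    """Gini index of |C| restricted to M; larger means sparser intra-class connections."""
    labels = np.asarray(labels)
    M = labels[:, None] == labels[None, :]
    c = np.sort(np.abs(C[M]))                       # non-decreasing sequence
    m = c.size
    k = np.arange(1, m + 1)
    return 1.0 - 2.0 * np.sum((c / c.sum()) * (m - k + 0.5) / m)
```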
The results for L = 6 and L = 11 are shown in Figure 1. We observe phase transitions for both metrics. When λ = 0 (corresponding to LRR), the solution does not obey SEP even when the independence assumption is only slightly violated (L = 6). When λ is greater than a threshold, RelViolation goes to zero. These observations match Theorems 1 and 2. On the other hand, when λ is large, intra-class sparsity is high, indicating possible disconnection within each class. Moreover, we observe that there exists a range of λ where RelViolation reaches zero yet the sparsity level does not reach its maximum. This justifies our claim that the solution of LRSSC, with λ taken within this range, can achieve SEP and at the same time keep the intra-class connections relatively dense. Indeed, for the subspace clustering task, a good tradeoff between separation and intra-class connection is important.
6.2 Skewed data distribution and model selection
In this experiment, we use the data for L = 6, combine the first two subspaces into one 20-dimensional subspace, and randomly sample 10 more points from the new subspace to "connect" the 100 points from the original two subspaces together. This simulates the situation where the data distribution is skewed, i.e., the data samples within one subspace have two dominating directions. The skewed distribution creates trouble for model selection (judging the number of subspaces), and, intuitively, the graph connectivity problem might occur.
We find that model selection heuristics such as the spectral gap [28] and spectral gap ratio [14] of
the normalized Laplacian are good metrics to evaluate the quality of the solution of LRSSC. Here
the correct number of subspaces is 5, so the spectral gap is the difference between the 6th and 5th
smallest singular value and the spectral gap ratio is the ratio of adjacent spectral gaps. The larger
these quantities, the better the affinity matrix reveals that the data contains 5 subspaces.
⁶ We choose the Gini Index over the typical ℓ0 to measure sparsity, as the latter is vulnerable to numerical inaccuracy.
Figure 1: Illustration of the separation-sparsity trade-off. Left: 6 subspaces. Right: 11 subspaces.
Figure 2 demonstrates how the singular values change as λ increases. When λ = 0 (corresponding to LRR), there is no significant drop from the 6th to the 5th singular value, hence it is impossible for either heuristic to identify the correct model. As λ increases, the last 5 singular values get smaller and become almost zero when λ is large. Then the 5-subspace model can be correctly identified using the spectral gap ratio. On the other hand, we note that the 6th singular value also shrinks as λ increases, which makes the spectral gap very small on the SSC side and leaves little robust margin for correct model selection against some violation of SEP. As is shown in Figure 3, the largest spectral gap and spectral gap ratio appear at around λ = 0.1, where the solution is able to benefit from both the better separation induced by the ℓ1-norm factor and the relatively denser connections promoted by the nuclear norm factor.
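For reference, one reasonable implementation of these two heuristics from an affinity matrix W is sketched below; the eigenvalues of the symmetric normalized Laplacian stand in for its singular values (they coincide since the Laplacian is positive semidefinite), and K ≥ 2 is assumed.

```python
import numpy as np

def spectral_gap_stats(W, K):
    """Spectral gap and gap ratio of the normalized Laplacian at model order K."""
    d = W.sum(axis=1)
    d_isqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(W.shape[0]) - (d_isqrt[:, None] * W) * d_isqrt[None, :]
    vals = np.sort(np.linalg.eigvalsh(L))            # ascending eigenvalues
    gap = vals[K] - vals[K - 1]                      # (K+1)-th minus K-th smallest
    gap_ratio = gap / max(vals[K - 1] - vals[K - 2], 1e-12)
    return gap, gap_ratio
```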
Figure 2: Last 20 singular values of the normalized Laplacian in the skewed data experiment.
Figure 3: Spectral gap and spectral gap ratio in the skewed data experiment.
7 Conclusion and future works
In this paper, we proposed LRSSC for the subspace clustering problem and provided a theoretical analysis of the method. We demonstrated that LRSSC is able to achieve perfect SEP for a wider range of problems than previously known for SSC, and meanwhile maintains denser intra-class connections than SSC (hence it is less likely to encounter the "graph connectivity" issue). Furthermore, the results offer new understanding of SSC and LRR themselves, as well as of problems such as skewed data distribution and model selection. An important future research question is to mathematically define the concept of graph connectivity, and to establish conditions under which perfect SEP and connectivity indeed occur together for some non-empty range of λ for LRSSC.
Acknowledgments
H. Xu is partially supported by the Ministry of Education of Singapore through AcRF Tier Two
grant R-265-000-443-112 and NUS startup grant R-265-000-384-133.
References
[1] D. Alonso-Gutiérrez. On the isotropy constant of random convex sets. Proceedings of the American Mathematical Society, 136(9):3293–3300, 2008.
[2] L. Bako. Identification of switched linear systems via sparse optimization. Automatica, 47(4):668–677, 2011.
[3] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[4] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[5] G. Chen and G. Lerman. Spectral curvature clustering (SCC). International Journal of Computer Vision, 81(3):317–330, 2009.
[6] M. Elad. Sparse and Redundant Representations. Springer, 2010.
[7] E. Elhamifar and R. Vidal. Sparse subspace clustering. In Computer Vision and Pattern Recognition (CVPR'09), pages 2790–2797. IEEE, 2009.
[8] E. Elhamifar and R. Vidal. Sparse subspace clustering: Algorithm, theory, and applications. To appear in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2013.
[9] P. Favaro, R. Vidal, and A. Ravichandran. A closed form solution to robust subspace estimation and clustering. In Computer Vision and Pattern Recognition (CVPR'11), pages 1801–1807. IEEE, 2011.
[10] P. Gritzmann and V. Klee. Computational complexity of inner and outer j-radii of polytopes in finite-dimensional normed spaces. Mathematical Programming, 59(1):163–213, 1993.
[11] N. Hurley and S. Rickard. Comparing measures of sparsity. Information Theory, IEEE Transactions on, 55(10):4723–4741, 2009.
[12] A. Jalali, Y. Chen, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex optimization. In International Conference on Machine Learning (ICML'11), pages 1001–1008, 2011.
[13] I. T. Jolliffe. Principal Component Analysis, volume 487. Springer-Verlag New York, 1986.
[14] F. Lauer and C. Schnörr. Spectral clustering of linear subspaces for motion segmentation. In International Conference on Computer Vision (ICCV'09), pages 678–685. IEEE, 2009.
[15] Z. Lin, R. Liu, and Z. Su. Linearized alternating direction method with adaptive penalty for low-rank representation. In Advances in Neural Information Processing Systems 24 (NIPS'11), pages 612–620, 2011.
[16] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Yu, and Y. Ma. Robust recovery of subspace structures by low-rank representation. IEEE Trans. on Pattern Analysis and Machine Intelligence (TPAMI), 2012.
[17] G. Liu, Z. Lin, and Y. Yu. Robust subspace segmentation by low-rank representation. In International Conference on Machine Learning (ICML'10), pages 663–670, 2010.
[18] G. Liu, H. Xu, and S. Yan. Exact subspace segmentation and outlier detection by low-rank representation. In International Conference on Artificial Intelligence and Statistics (AISTATS'12), 2012.
[19] B. Nasihatkon and R. Hartley. Graph connectivity in sparse subspace clustering. In Computer Vision and Pattern Recognition (CVPR'11), pages 2137–2144. IEEE, 2011.
[20] A. Y. Ng, M. I. Jordan, Y. Weiss, et al. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems 15 (NIPS'02), volume 2, pages 849–856, 2002.
[21] E. Richard, P. Savalle, and N. Vayatis. Estimation of simultaneously sparse and low rank matrices. In International Conference on Machine Learning (ICML'12), 2012.
[22] M. Soltanolkotabi and E. J. Candès. A geometric analysis of subspace clustering with outliers. The Annals of Statistics, 40(4):2195–2238, 2012.
[23] R. Tron and R. Vidal. A benchmark for the comparison of 3-D motion segmentation algorithms. In Computer Vision and Pattern Recognition (CVPR'07), pages 1–8. IEEE, 2007.
[24] R. Vidal. Subspace clustering. Signal Processing Magazine, IEEE, 28(2):52–68, 2011.
[25] R. Vidal, Y. Ma, and S. Sastry. Generalized principal component analysis (GPCA). IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(12):1945–1959, 2005.
[26] R. Vidal, S. Soatto, Y. Ma, and S. Sastry. An algebraic geometric approach to the identification of a class of linear hybrid systems. In Decision and Control, 2003. Proceedings. 42nd IEEE Conference on, volume 1, pages 167–172. IEEE, 2003.
[27] R. Vidal, R. Tron, and R. Hartley. Multiframe motion segmentation with missing data using PowerFactorization and GPCA. International Journal of Computer Vision, 79(1):85–105, 2008.
[28] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[29] Y.-X. Wang and H. Xu. Noisy sparse subspace clustering. In International Conference on Machine Learning (ICML'13), volume 28, pages 100–108, 2013.
Matrix Completion From any Given Set of Observations
Troy Lee
Nanyang Technological University and
Centre for Quantum Technologies
[email protected]
Adi Shraibman
Department of Computer Science
Tel Aviv-Yaffo Academic College
[email protected]
Abstract
In the matrix completion problem the aim is to recover an unknown real matrix
from a subset of its entries. This problem comes up in many application areas,
and has received a great deal of attention in the context of the Netflix Prize.
A central approach to this problem is to output a matrix of lowest possible
complexity (e.g. rank or trace norm) that agrees with the partially specified
matrix. The performance of this approach under the assumption that the revealed entries are sampled randomly has received considerable attention (e.g.
[1, 2, 3, 4, 5, 6, 7, 8]). In practice, often the set of revealed entries is not chosen
at random and these results do not apply. We are therefore left with no guarantees
on the performance of the algorithm we are using.
We present a means to obtain performance guarantees with respect to any set of
initial observations. The first step remains the same: find a matrix of lowest possible complexity that agrees with the partially specified matrix. We give a new way
to interpret the output of this algorithm by next finding a probability distribution
over the non-revealed entries with respect to which a bound on the generalization
error can be proven. The more complex the set of revealed entries according to a
certain measure, the better the bound on the generalization error.
1
Introduction
In the matrix completion problem we observe a subset of the entries of a target matrix Y , and our aim
is to retrieve the rest of the matrix. Obviously some restriction on the target matrix Y is unavoidable
as otherwise it is impossible to retrieve even one missing entry; usually, it is assumed that Y is
generated in a way so as to have low complexity according to a measure such as matrix rank.
A common scheme for the matrix completion problem is to select a matrix X that minimizes some
combination of the complexity of X and the distance between X and Y on the observed part. In
particular, one can demand that X agrees with Y on the observed initial sample (i.e. the distance
between X and Y on the observed part is zero). This general algorithm is described in Figure 1, and
we refer to it as Alg1. It outputs a matrix with minimal complexity that agrees with Y on the initial
sample S. The complexity measure can be rank, or a norm to serve as an efficiently computable
proxy for the rank such as the trace norm or γ₂ norm. When we wish to mention which complexity
measure is used we write it explicitly, e.g. Alg1(γ₂). Our framework is suitable for use with any norm
satisfying a few simple conditions described in the sequel.
The performance of Alg1 under the assumption that the initial subset is picked at random is well
understood [1, 2, 3, 4, 5, 6, 7, 8]. This line of research can be divided into two parts. One line
of research [5, 6, 4] studies conditions under which Alg1(Tr) retrieves the matrix exactly
(there are other papers studying exact matrix completion, e.g. [7]). They
define what they call an incoherence property, which quantifies how spread out the singular vectors of Y
are. The exact definition of the incoherence property varies in different results. It is then proved that
if there are enough samples relative to the rank of Y and its incoherence property, then Alg1(Tr)
retrieves the matrix Y exactly with high probability, assuming the samples are chosen uniformly at
random. Note that in this line of research the trace norm is used as the complexity measure in the
algorithm. It is not clear how to prove similar results with the γ₂ norm.
Candes and Recht [5] observed that it is impossible to reconstruct a matrix that has only one entry
equal to 1 and zeros everywhere else, unless most of its entries are observed. Thus, exact matrix
completion must assume some special property of the target matrix Y . In a second line of research,
general results are proved regarding the performance of Alg1. These results are weaker in that they
do not prove exact recovery, but rather bounds on the distance between the output matrix X and
Y . But these results apply for every matrix Y , they can be generalized for non-uniform probability
distributions, and also apply when the complexity measure is the γ₂ norm. These results take the
following form:
Theorem 1 ([2]) Let Y be an n × n real matrix, and P a probability distribution on pairs (i, j) ∈ [n]². Choose a sample S of |S| > n log n entries according to P. Then, with probability at least 1 − 2^{−n/2} over the sample selection, the following holds:

$$\sum_{i,j} P_{ij}\,|X_{ij} - Y_{ij}| \;\le\; c\,\gamma_2(X)\sqrt{\frac{n}{|S|}},$$

where X is the output of the algorithm with sample S, and c is a universal constant.
In practice, the assumption that the sample is random is not always valid. Sometimes the subset we
see reflects our partial knowledge which is not random at all. What can we say about the output
of the algorithm in this case? The analysis of random samples does not help us here, because these
proofs do not reveal the structure that makes generalization possible. In order to answer this question
we need to understand what properties of a sample enable generalization.
A first step in this direction was taken in [9] where the initial subset was chosen deterministically
as the set of edges of a good expander (more generally, a good sparsifier). Deterministic guarantees
were proved for the algorithm in this case, that resemble the guarantees proved for random sampling.
For example:
Theorem 2 [9] Let S be the set of edges of a d-regular graph with second eigenvalue bound λ². For every n × n real matrix Y, if X is the output of Alg1 with initial subset S, then

$$\frac{1}{n^2}\sum_{i,j}(X_{ij} - Y_{ij})^2 \;\le\; c\,\gamma_2(Y)^2\,\frac{\lambda}{d},$$

where c is a small universal constant.

Recall that d-regular graphs with λ = O(√d) can be constructed in linear time using e.g. the well-known LPS Ramanujan graphs [10].
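As a quick illustration of this remark (our sketch, not part of the paper): random d-regular graphs also have second eigenvalue close to 2√(d−1) with high probability, so they can stand in for the explicit LPS construction. Assuming networkx and numpy are available:

```python
import networkx as nx
import numpy as np

n, d = 500, 10
G = nx.random_regular_graph(d, n, seed=0)
A = nx.to_numpy_array(G)                 # adjacency matrix of the d-regular graph
eigs = np.sort(np.linalg.eigvalsh(A))    # eigenvalues in ascending order
lam = max(abs(eigs[0]), eigs[-2])        # second eigenvalue bound (largest below d)
print(f"lambda = {lam:.2f}, 2*sqrt(d-1) = {2*np.sqrt(d - 1):.2f}")
# Using the edges of G as the observed set S, Theorem 2 bounds the mean squared
# error of Alg1 by c * gamma_2(Y)^2 * lambda / d.
```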
This theorem was also generalized to bound the error with respect to any probability distribution.
Instead of expanders, sparsifiers were used to select the entries to observe for this result.
Theorem 3 [9] Let P be a probability distribution on pairs (i, j) ∈ [n]², and d > 1. There is an efficiently constructible set S ⊆ [n]² of size at most dn, such that for every n × n real target matrix Y, if X is the output of our algorithm with initial subset S, then

$$\sum_{i,j} P_{ij}(X_{ij} - Y_{ij})^2 \;\le\; c\,\gamma_2(Y)^2 \cdot \frac{1}{d}.$$
The results in [9] still do not answer the practical question of how to reconstruct a matrix from an
arbitrary sample. In this paper we continue the work started in [9], and give a simple and general
answer to this second question.
We extend the results of [9] in several ways:
² The eigenvalues are eigenvalues of the adjacency matrix of the graph.
1. We upper bound the generalization error of Alg1 given any set of initial observations. This
bound depends on properties of the set of observed entries.
2. We show there is a probability distribution outside of the observed entries such that the
generalization error under this distribution is bounded in terms of the complexity of the
observed entries, under a certain complexity measure.
3. The results hold not only for γ₂ but also for the trace norm, and in fact any norm satisfying
a few basic properties.
2 Preliminaries
Here we introduce some of the matrix notation and norms that we will be using. For matrices A, B of the same size, let A ∘ B denote the Hadamard or entrywise product of A and B. For an m-by-n matrix A with m ≥ n, let σ₁(A) ≥ ··· ≥ σₙ(A) denote the singular values of A. The trace norm, denoted ‖A‖_tr, is the ℓ₁ norm of the vector of singular values, and the Frobenius norm, denoted ‖A‖_F, is the ℓ₂ norm of the vector of singular values.

As the rank of a matrix is equal to the number of non-zero singular values, it follows from the Cauchy-Schwarz inequality that

$$\frac{\|A\|_{tr}^2}{\|A\|_F^2} \;\le\; \mathrm{rk}(A). \tag{1}$$
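A small numerical sanity check of (1) (our illustration, assuming numpy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 80))  # a rank-5 matrix
s = np.linalg.svd(A, compute_uv=False)     # singular values of A
trace_norm = s.sum()                        # ell_1 norm of the singular values
frob_norm = np.sqrt((s ** 2).sum())         # ell_2 norm of the singular values
print(trace_norm ** 2 / frob_norm ** 2, "<=", np.linalg.matrix_rank(A))
```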
This inequality motivates the use of the trace norm as a proxy for rank in rank minimization problems. A problem with the bound of (1) as a complexity measure is that it is not monotone: the bound can be larger on a submatrix of A than on A itself. As taking the Hadamard product of a matrix with a rank one matrix does not increase its rank, a way to fix this problem is to consider instead:

$$\max_{\substack{u,v\\ \|u\|=\|v\|=1}} \frac{\|A \circ vu^T\|_{tr}^2}{\|A \circ vu^T\|_F^2} \;\le\; \mathrm{rk}(A).$$

When A is a sign matrix, this bound simplifies nicely, for then ‖A ∘ vuᵀ‖_F = ‖u‖‖v‖ = 1, and we are left with

$$\max_{\substack{u,v\\ \|u\|=\|v\|=1}} \|A \circ vu^T\|_{tr}^2 \;\le\; \mathrm{rk}(A).$$
This motivates the definition of the γ₂ norm.

Definition 4 Let A be an n-by-n matrix. Then

$$\gamma_2(A) = \max_{\substack{u,v\\ \|u\|=\|v\|=1}} \|A \circ vu^T\|_{tr}.$$
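The γ₂ norm also has a standard semidefinite-programming characterization (well known in the factorization-norm literature, though not stated in this paper): γ₂(A) is the smallest t for which some positive semidefinite matrix with diagonal at most t contains A as its off-diagonal block. A cvxpy sketch under that assumption:

```python
import cvxpy as cp
import numpy as np

def gamma2(A):
    """gamma_2(A) via its standard SDP characterization (assumed, see text)."""
    m, n = A.shape
    Z = cp.Variable((m + n, m + n), PSD=True)   # plays the role of [[P, A], [A', Q]]
    t = cp.Variable()
    prob = cp.Problem(cp.Minimize(t),
                      [Z[:m, m:] == A, cp.diag(Z) <= t])
    prob.solve()
    return t.value

A = np.sign(np.random.randn(8, 8))
print(gamma2(A))
```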
We will also make use of the dual norms of the trace and γ₂ norms. Recall that in general for a norm Φ(A) the dual norm Φ* is defined as

$$\Phi^*(A) = \max_B \frac{\langle A, B\rangle}{\Phi(B)}.$$

Notice that this means that

$$\langle A, B\rangle \;\le\; \Phi^*(A)\,\Phi(B). \tag{2}$$

The dual of the trace norm is ‖·‖, the operator norm from ℓ₂ to ℓ₂, also known as the spectral norm. The dual of the γ₂ norm looks as follows.
Definition 5

$$\gamma_2^*(A) = \min_{X,Y:\, X^TY=A} \tfrac{1}{2}\big(\|X\|_F^2 + \|Y\|_F^2\big) = \min_{X,Y:\, X^TY=A} \|X\|_F\,\|Y\|_F,$$

where the min is taken over X, Y with orthogonal columns.
Finally, we will make use of the approximate γ₂ norm. This is the minimum of the γ₂ norm over all matrices which approximate the target matrix in some sense. The particular version we will need is denoted γ₂^{0,∞} and is defined as follows.

Definition 6 Let S ∈ {0, 1}^{m×n} be a boolean matrix. Let S̄ denote the complement of S, that is S̄ = J − S where J is the all ones matrix. Then

$$\gamma_2^{0,\infty}(S) = \min_T\,\{\gamma_2(T) : T \circ S \ge S,\; T \circ \bar S = 0\}.$$

In words, γ₂^{0,∞}(S) is the minimum γ₂ norm of a matrix T which is 0 whenever S is zero, and at least 1 whenever S is 1. This can be thought of as a "one-sided error" version of the more familiar γ₂^∞ norm of a sign matrix, which is the minimum γ₂ norm of a matrix which agrees in sign with the target matrix and has all entries of magnitude at least 1. The γ₂^∞ bound is also known to be equal to the margin complexity [11].
3 The algorithm
Let S ⊆ [m] × [n] be a subset of entries, representing our partial knowledge. We can always run Alg1 and get an output matrix X. What we need in order to make intelligent use of X is a way to measure the distance between X and Y. Our first observation is that although Y is not known, it is possible to bound the distance between X and Y. This result is stated in the following theorem, which generalizes Theorems (2) and (4) of [9]³:
Theorem 7 Fix a set of entries S ⊆ [m] × [n]. Let P be a probability distribution on pairs (i, j) ∈ [m] × [n], such that there exists a real matrix Q satisfying

1. Q_ij = 0 when (i, j) ∉ S.
2. γ₂*(P − Q) ≤ α.

Then for every m × n real target matrix Y, if X is the output of our algorithm with initial subset S, it holds that

$$\sum_{i,j} P_{ij}(X_{ij} - Y_{ij})^2 \;\le\; 4\alpha\,\gamma_2(Y)^2.$$

Theorem 7 says that γ₂*(P − Q) determines, at least to some extent, the expected distance between X and Y with respect to P.
This gives us a way to measure the quality of the output of Alg1 for any set S of initial observations. Namely, we can do the following:

1. Choose a probability distribution P on the entries of the matrix.
2. Find a real matrix Q such that Q_ij = 0 when (i, j) ∉ S, and γ₂*(P − Q) is minimal.
3. Output the minimal value α.

We then know, using Theorem 7, that the expected square distance between X and Y can be bounded in terms of α and the complexity of Y.

Obviously, the choice of P makes a big difference. For example, if the set of initial observations is contained in a submatrix, we cannot expect X to be close to Y outside this submatrix. In such cases it makes sense to restrict P to the submatrix containing S.

One approach to find a distribution for which we can expect to be close on the unseen entries is to optimize over probability distributions P such that Theorem 7 gives the best bound. Since γ₂* can be expressed as the optimum of a semidefinite program, we can find in polynomial time a probability distribution P and a weight function Q on S such that γ₂*(P − Q) is minimized. Thus, instead of trying different parameters, we can find a probability distribution for which we can prove optimal guarantees using Theorem 7. The second algorithm we suggest does exactly that. We refer to this algorithm as Alg2, or Alg2(CC) if we wish to state the complexity measure that is used.

³ Here we state the result for γ₂. See Section 4 for the corresponding result for the trace norm as well.

1. Input: a subset S ⊆ [n]² and the value of Y on S.
2. Output: a matrix X of smallest possible CC(X) under the condition that X_ij = Y_ij for all (i, j) ∈ S.

Figure 1: Algorithm Alg1(CC)
For Alg2(γ₂), we do the following: Minimize γ₂*(P − Q) over all m × n matrices Q and P such that:

1. Q_ij = 0 for (i, j) ∉ S.
2. P_ij = 0 for (i, j) ∈ S.
3. Σ_{i,j} P_ij = 1.
Globally, our algorithm for matrix completion therefore works in two phases. We first use Alg1 to
get an output matrix X, and then use Alg2 in order to find optimal guarantees regarding the distance
between X and Y . The generalization error bounds for this algorithm are proved in Section 4.
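For concreteness, here is a minimal cvxpy sketch of the first phase, Alg1 instantiated with the trace norm (the paper does not fix an implementation, so this is illustrative only):

```python
import cvxpy as cp
import numpy as np

def alg1_trace(Y, S):
    """Alg1(tr): minimize ||X||_tr subject to X agreeing with Y on S.

    Y : (m, n) array holding the observed values (entries off S are ignored).
    S : (m, n) boolean mask of revealed entries.
    """
    M = S.astype(float)
    X = cp.Variable(Y.shape)
    prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                      [cp.multiply(M, X) == cp.multiply(M, Y)])
    prob.solve()
    return X.value
```

The second phase then applies Alg2 to the same sample S in order to certify the error of the recovered X via Theorem 7.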
3.1 Using a general norm
In our description of Alg2 above we have used the norm γ₂. The same idea works for any norm Φ satisfying the property Φ(A ∘ A) ≤ Φ(A)². Moreover, if the dual norm can be computed efficiently via a linear or semidefinite program, then the optimal distribution P for the bound can be found efficiently as well.

For example, for the trace norm the algorithm becomes: Given the sample S, run Alg1(‖·‖_tr) and get an output matrix X. The second part of the algorithm is: Minimize ‖P − Q‖ over all m × n matrices Q and P such that:

1. Q_ij = 0 for (i, j) ∉ S.
2. P_ij = 0 for (i, j) ∈ S.
3. Σ_{i,j} P_ij = 1.

Denote by α the optimal value of the above program, and by P the optimal probability distribution. Then analogously to Theorem 7, we have

$$\sum_{i,j} P_{ij}(X_{ij} - Y_{ij})^2 \;\le\; 4\alpha\,\|Y\|_{tr}^2.$$
Both of these results will follow from a more general theorem which we show in the next section.
4 Generalization bounds
Here we show a more general theorem which will imply Theorem 7.

Theorem 8 Let Φ be a norm and Φ* its dual norm. Suppose that Φ(A ∘ A) ≤ Φ(A)² for any matrix A. Fix a set of indices S ⊆ [m] × [n]. Let P be a probability distribution on pairs (i, j) ∈ [m] × [n], such that there exists a real matrix Q satisfying

1. Q_ij = 0 when (i, j) ∉ S.
2. Φ*(P − Q) ≤ α.

Then for every m × n real target matrix Y, if X is the output of algorithm Alg1(Φ) with initial subset S, it holds that

$$\sum_{i,j} P_{ij}(X_{ij} - Y_{ij})^2 \;\le\; 4\alpha\,\Phi(Y)^2.$$
Proof Let R be the matrix where R_ij = (X_ij − Y_ij)². By assumption Φ*(P − Q) ≤ α, thus by (2)

$$\langle P - Q, R\rangle \;\le\; \alpha\,\Phi(R).$$

Now let us focus on Φ(R). As R = (X − Y) ∘ (X − Y), by the assumption on Φ we have

$$\Phi(R) \;\le\; \Phi(X - Y)^2 \;\le\; (\Phi(X) + \Phi(Y))^2.$$

Now by definition of Alg1(Φ) we have Φ(X) ≤ Φ(Y), thus Φ(R) ≤ 4Φ(Y)². Also, by definition of the algorithm R_ij = 0 for (i, j) ∈ S, and Q_ij equals zero outside of S, which implies that Σ_{i,j} Q_ij R_ij = 0. We conclude that

$$\sum_{i,j} P_{ij}(X_{ij} - Y_{ij})^2 \;\le\; 4\alpha\,\Phi(Y)^2.$$

Both the trace norm and γ₂ norm satisfy the condition of the theorem as they are multiplicative under tensor product.
5 Analyzing the error bound
We now look more closely at the minimal value of the parameter α from Theorem 7. The optimal value of α depends only on the set of observed indices S. For a set of indices S ⊆ [m] × [n] let S̄ be its complement.

Given samples S we want to find P, Q so as to minimize γ₂*(P − Q) such that P is a probability distribution over S̄ and Q has support in S. We can express this as a semidefinite program

$$\alpha = \min_{\Gamma, P, Q}\ \tfrac{1}{2}\,\mathrm{Tr}(\Gamma) \quad\text{subject to}\quad \Gamma - (\tilde P - \tilde Q) \succeq 0,\;\; P \ge 0,\;\; \langle P, \bar S\rangle = 1,\;\; Q \circ S = Q.$$

Here

$$\tilde P = \begin{pmatrix} 0 & P \\ P^T & 0 \end{pmatrix}$$

is the "bipartite" version of P, and similarly for Q.
Taking the dual of this program we find

$$1/\alpha = \min_A\ \gamma_2(A) \quad\text{subject to}\quad A \ge \bar S,\;\; A \circ \bar S = A.$$

In words, this says that 1/α is equal to the minimum γ₂ norm of a matrix that is zero on all entries in S and at least 1 on all entries in S̄. Thus α = 1/γ₂^{0,∞}(S̄) (recall Definition 6). This says that the more complex the set of unobserved entries S̄ according to the measure γ₂^{0,∞}, the smaller the value of α. Note that in particular, if we consider the sign matrix S̄ − S, then γ₂^{0,∞}(S̄) ≥ (γ₂^∞(S̄ − S) − 1)/2, so γ₂^{0,∞}(S̄) is lower bounded in terms of the margin complexity of S̄ − S.
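Combining this dual characterization with the γ₂ SDP sketched in Section 2, the value α for a given sample can be computed directly (again an illustration, assuming cvxpy is available and the SDP characterization above):

```python
import cvxpy as cp
import numpy as np

def alpha_from_sample(S):
    """alpha = 1 / gamma_2^{0,inf}(S_bar): minimize gamma_2(A) over matrices A
    that are zero on the observed entries S and at least 1 on the rest."""
    m, n = S.shape
    Ms, Mb = S.astype(float), (~S).astype(float)
    A = cp.Variable((m, n))
    Z = cp.Variable((m + n, m + n), PSD=True)   # SDP certificate for gamma_2
    t = cp.Variable()
    cons = [Z[:m, m:] == A, cp.diag(Z) <= t,
            cp.multiply(Ms, A) == 0,    # zero on observed entries
            cp.multiply(Mb, A) >= Mb]   # at least 1 on unobserved entries
    prob = cp.Problem(cp.Minimize(t), cons)
    prob.solve()
    return 1.0 / prob.value
```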
References

[1] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Neural Information Processing Systems, 2005.
[2] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In 18th Annual Conference on Computational Learning Theory (COLT), pages 545-560, 2005.
[3] R. Foygel and N. Srebro. Concentration-based guarantees for low-rank matrix reconstruction. Technical report, arXiv, 2011.
[4] E. J. Candes and T. Tao. The power of convex relaxation: near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053-2080, 2010.
[5] E. J. Candes and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[6] B. Recht. A simpler approach to matrix completion. Technical report, arXiv, 2009.
[7] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 11:2057-2078, 2010.
[8] V. Koltchinskii, A. B. Tsybakov, and K. Lounici. Nuclear norm penalization and optimal rates for noisy low rank matrix completion. Technical report, arXiv, 2010.
[9] E. Heiman, G. Schechtman, and A. Shraibman. Deterministic algorithms for matrix completion. Random Structures and Algorithms, 2013.
[10] A. Lubotzky, R. Phillips, and P. Sarnak. Ramanujan graphs. Combinatorica, 8:261-277, 1988.
[11] N. Linial, S. Mendelson, G. Schechtman, and A. Shraibman. Complexity measures of sign matrices. Combinatorica, 27(4):439-463, 2007.
Convex Two-Layer Modeling
Özlem Aslan
Hao Cheng
Dale Schuurmans
Department of Computing Science, University of Alberta
Edmonton, AB T6G 2E8, Canada
{ozlem,hcheng2,dale}@cs.ualberta.ca
Xinhua Zhang
Machine Learning Research Group
National ICT Australia and ANU
[email protected]
Abstract
Latent variable prediction models, such as multi-layer networks, impose auxiliary latent variables between inputs and outputs to allow automatic inference of
implicit features useful for prediction. Unfortunately, such models are difficult
to train because inference over latent variables must be performed concurrently
with parameter optimization, creating a highly non-convex problem. Instead
of proposing another local training method, we develop a convex relaxation of
hidden-layer conditional models that admits global training. Our approach extends current convex modeling approaches to handle two nested nonlinearities
separated by a non-trivial adaptive latent layer. The resulting methods are able
to acquire two-layer models that cannot be represented by any single-layer model
over the same features, while improving training quality over local heuristics.
1 Introduction
Deep learning has recently been enjoying a resurgence [1, 2] due to the discovery that stage-wise
pre-training can significantly improve the results of classical training methods [3-5]. The advantage of latent variable models is that they allow abstract "semantic" features of observed data to be represented, which can enhance the ability to capture predictive relationships between observed variables. In this way, latent variable models can greatly simplify the description of otherwise complex relationships between observed variates. For example, in unsupervised (i.e., "generative") settings,
latent variable models have been used to express feature discovery problems such as dimensionality
reduction [6], clustering [7], sparse coding [8], and independent components analysis [9]. More
recently, such latent variable models have been used to discover abstract features of visual data
invariant to low level transformations [1, 2, 4]. These learned representations not only facilitate
understanding, they can enhance subsequent learning.
Our primary focus in this paper, however, is on conditional modeling. In a supervised (i.e. "conditional") setting, latent variable models are used to discover intervening feature representations that
allow more accurate reconstruction of outputs from inputs. One advantage in the supervised case
is that output information can be used to better identify relevant features to be inferred. However,
latent variables also cause difficulty in this case because they impose nested nonlinearities between
the input and output variables. Some important examples of conditional latent learning approaches
include those that seek an intervening lower dimensional representation [10], latent clustering [11],
sparse feature representation [8] or invariant latent representation [1, 3, 4, 12] between inputs and
outputs. Despite their growing success, the difficulty of training a latent variable model remains
clear: since the model parameters have to be trained concurrently with inference over latent variables, the convexity of the training problem is usually destroyed. Only highly restricted models can
be trained to optimality, and current deep learning strategies provide no guarantees about solution
quality. This remains true even when restricting attention to a single stage of stage-wise pre-training:
simple models such as the two-layer auto-encoder or restricted Boltzmann machine (RBM) still pose
intractable training problems, even within a single stage (in fact, simply computing the gradient of
the RBM objective is currently believed to be intractable [13]).
Meanwhile, a growing body of research has investigated reformulations of latent variable learning that are able to yield tractable global training methods in special cases. Even though global
training formulations are not a universally accepted goal of deep learning research [14], there are
several useful methodologies that have been applied successfully to other latent variable models: boosting strategies [15-17], semidefinite relaxations [18-20], matrix factorization [21-23], and moment based estimators (i.e. "spectral methods") [24, 25]. Unfortunately, none of these approaches has yet been able to accommodate a non-trivial hidden layer between an input and output layer while retaining the representational capacity of an auto-encoder or RBM (e.g. boosting strategies embed an intractable subproblem in these cases [15-17]). Some recent work has been able to capture restricted forms of latent structure in a conditional model, namely, a single latent cluster variable [18-20], but this remains a rather limited approach.
In this paper we demonstrate that more general latent variable structures can be accommodated
within a tractable convex framework. In particular, we show how two-layer latent conditional models
with a single latent layer can be expressed equivalently in terms of a latent feature kernel. This
reformulation allows a rich set of latent feature representations to be captured, while allowing useful
convex relaxations in terms of a semidefinite optimization. Unlike [26], the latent kernel in this
model is explicitly learned (nonparametrically). To cope with scaling issues we further develop
an efficient algorithmic approach for the proposed relaxation. Importantly, the resulting method
preserves sufficient problem structure to recover prediction models that cannot be represented by any
one-layer architecture over the same input features, while improving the quality of local training.
2 Two-Layer Conditional Modeling
We address the problem of training a two-layer latent conditional model in the form of Figure 1; i.e., where there is a single layer of h latent variables, φ, between a layer of n input variables, x, and m output variables, y. The goal is to predict an output vector y given an input vector x. Here, a prediction model consists of the composition of two nonlinear conditional models, f₁(Wx) ⇝ φ and f₂(Vφ) ⇝ ŷ, parameterized by the matrices W ∈ ℝ^{h×n} and V ∈ ℝ^{m×h}. Once the parameters W and V have been specified, this architecture defines a point predictor that can determine ŷ from x by first computing an intermediate representation φ. To learn the model parameters, we assume we are given t training pairs {(x_j, y_j)}_{j=1}^t, stacked in two matrices X = (x₁, ..., x_t) ∈ ℝ^{n×t} and Y = (y₁, ..., y_t) ∈ ℝ^{m×t}, but the corresponding set of latent variable values Φ = (φ₁, ..., φ_t) ∈ ℝ^{h×t} remains unobserved.

[Figure 1 diagram omitted.] Figure 1: Latent conditional model f₁(Wx) ⇝ φ, f₂(Vφ) ⇝ ŷ, where φ_j is a latent variable, x_j is an observed input vector, y_j is an observed output vector, W are first layer parameters, and V are second layer parameters.
To formulate the training problem, we will consider two losses, L₁ and L₂, that relate the input to the latent layer, and the latent to the output layer respectively. For example, one can think of losses as negative log-likelihoods in a conditional model that generates each successive layer given its predecessor; i.e., L₁(Wx, φ) = −log p_W(φ|x) and L₂(Vφ, y) = −log p_V(y|φ). (However, a loss based formulation is more flexible, since every negative log-likelihood is a loss but not vice versa.) Similarly to RBMs and probabilistic networks (PFNs) [27] (but unlike auto-encoders and classical feed-forward networks), we will not assume φ is a deterministic output of the first layer; instead we will consider φ to be a variable whose value is the subject of inference during training.
Given such a set-up many training principles become possible. For simplicity, we consider a Viterbi based training principle where the parameters W and V are optimized with respect to an optimal imputation of the latent values Φ. To do so, define the first and second layer training objectives as

$$F_1(W, \Phi) = L_1(WX, \Phi) + \tfrac{\alpha}{2}\|W\|_F^2, \quad\text{and}\quad F_2(\Phi, V) = L_2(V\Phi, Y) + \tfrac{\beta}{2}\|V\|_F^2, \tag{1}$$

where we assume the losses are convex in their first arguments. Here it is typical to assume that the losses decompose columnwise; that is, L₁(Θ̂, Φ) = Σ_{j=1}^t L₁(θ̂_j, φ_j) and L₂(Ẑ, Y) = Σ_{j=1}^t L₂(ẑ_j, y_j), where φ_j is the jth column of Φ and ẑ_j is the jth column of Ẑ respectively. This
follows for example if the training pairs (x_j, y_j) are assumed I.I.D., but such a restriction is not necessary. Note that we have also introduced Euclidean regularization over the parameters (i.e. negative log-priors under a Gaussian), which will provide a useful representer theorem [28] we exploit later.

These two objectives can be combined to obtain the following joint training problem:

$$\min_{W,V}\,\min_\Phi\; F_1(W, \Phi) + \gamma\,F_2(\Phi, V), \tag{2}$$

where γ > 0 is a trade off parameter that balances the first versus second layer discrepancy. Unfortunately (2) is not jointly convex in the unknowns W, V and Φ.
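To fix ideas, here is a small numpy sketch (ours, not from the paper) of evaluating the joint objective (2) for candidate W, V, Φ; squared losses are illustrative placeholders for the large-margin losses (3) and (4) defined below:

```python
import numpy as np

def joint_objective(W, V, Phi, X, Y, alpha, beta, gamma,
                    L1=lambda Z, T: np.sum((Z - T) ** 2),
                    L2=lambda Z, T: np.sum((Z - T) ** 2)):
    """Objective (2): F1(W, Phi) + gamma * F2(Phi, V)."""
    F1 = L1(W @ X, Phi) + 0.5 * alpha * np.sum(W ** 2)
    F2 = L2(V @ Phi, Y) + 0.5 * beta * np.sum(V ** 2)
    return F1 + gamma * F2
```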
A key modeling question concerns the structure of the latent representation Φ. As noted, the extensive literature on latent variable modeling has proposed a variety of forms for latent structure. Here, we follow work on deep learning and sparse coding and assume that the latent variables are boolean, φ ∈ {0, 1}^{h×1}; an assumption that is also often made in auto-encoders [13], PFNs [27], and RBMs [5]. A boolean representation can capture structures that range from a single latent clustering [11, 19, 20], by imposing the assumption that φ′1 = 1, to a general sparse code, by imposing the assumption that φ′1 = k for some small k [1, 4, 13].¹ Observe that, in the latter case, one
2.1 Multi-Layer Perceptrons and Large-Margin Losses
To complete a specification of the two-layer model in Figure 1 and the associated training problem
(2), we need to commit to specific forms for the transfer functions f1 and f2 and the losses in (1). For
simplicity, we will adopt a large-margin approach over two-layer perceptrons. Although it has been
traditional in deep learning research to focus on exponential family conditional models (e.g. as in
auto-encoders, PFNs and RBMs), these are not the only possibility; a large-margin approach offers
additional sparsity and algorithmic simplifications that will clarify the development below. Despite
its simplicity, such an approach will still be sufficient to prove our main point.
First, consider the second layer model. We will conduct our primary evaluations on multiclass classification problems, where output vectors y encode target classes by indicator vectors y ∈ {0, 1}^{m×1} such that y′1 = 1. Although it is common to adopt a softmax transfer for f₂ in such a case, it is also useful to consider a perceptron model defined by f₂(ẑ) = indmax(ẑ) such that indmax(ẑ) = 1_i (vector of all 0s except a 1 in the ith position) where ẑ_i ≥ ẑ_l for all l. Therefore, for multi-class classification, we will simply adopt the standard large-margin multi-class loss [29]:

$$L_2(\hat z, y) = \max\big(1 - y + \hat z - 1\,y'\hat z\big). \tag{3}$$

Intuitively, if y_c = 1 is the correct label, this loss encourages the response ẑ_c = y′ẑ on the correct label to be a margin greater than the response ẑ_i on any other label i ≠ c.
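Per training example, the loss (3) is straightforward to evaluate; a numpy sketch (ours):

```python
import numpy as np

def multiclass_margin_loss(z_hat, y):
    """Loss (3): zero only when the response on the correct label exceeds
    every other response by a margin of 1 (y is a 0/1 indicator vector)."""
    correct = float(y @ z_hat)                 # y' z_hat, response on true label
    return float(np.max(1.0 - y + z_hat - correct))
```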
Second, consider the first layer model. Although the loss (3) has proved to be highly successful for multi-class classification problems, it is not suitable for the first layer because it assumes there is only a single target component active in any latent vector φ; i.e. φ′1 = 1. Although some work has considered learning a latent clustering in a two-layer architecture [11, 18-20], such an approach is not able to capture the latent sparse code of a classical PFN or RBM in a reasonable way: using clustering to simulate a multi-dimensional sparse code causes exponential blow-up in the number of latent classes required. Therefore, we instead adopt a multi-label perceptron model for the first layer, defined by the transfer function f₁(ν̂) = step(ν̂) applied componentwise to the response vector ν̂; i.e. step(ν̂_i) = 1 if ν̂_i > 0, 0 otherwise. Here again, instead of using a traditional negative log-likelihood loss, we will adopt a simple large-margin loss for multi-label classification that naturally accommodates multiple binary latent classifications in parallel. Although several loss formulations exist for multi-label classification [30, 31], we adopt the following:

$$L_1(\hat\nu, \phi) = \max\big(1 - \phi + \hat\nu\,\phi'1 - 1\,\phi'\hat\nu\big) = \phi'1 \cdot \max\big((1 - \phi)/(\phi'1) + \hat\nu - 1\,\phi'\hat\nu/(\phi'1)\big). \tag{4}$$

Intuitively, this loss encourages the average response on the active labels, φ′ν̂/(φ′1), to exceed the response ν̂_i on any inactive label i, φ_i = 0, by some margin, while also encouraging the response on any active label to match the average of the active responses. Despite their simplicity, large-margin multi-label losses have proved to be highly successful in practice [30, 31]. Therefore, the overall architecture we investigate embeds two nonlinear conditionals around a non-trivial latent layer.
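The loss (4) is equally simple to evaluate; a numpy sketch (ours), using the unnormalized form in (4):

```python
import numpy as np

def multilabel_margin_loss(nu_hat, phi):
    """Loss (4): phi is a 0/1 vector with k = phi'1 active labels; inactive
    responses are pushed below the active average by a margin, and active
    responses are pulled toward that average."""
    k = float(phi.sum())                       # phi' 1
    active = float(phi @ nu_hat)               # phi' nu_hat
    return float(np.max(1.0 - phi + nu_hat * k - active))
```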
¹ Throughout this paper we let 1 denote the vector of all 1s with length determined by context.
3 Equivalent Reformulation
The main contribution of this paper is to show that the training problem (2) has a convex relaxation
that preserves sufficient structure to transcend one-layer models. To demonstrate this relaxation, we
first need to establish the key observation that problem (2) can be re-expressed in terms of a kernel
matrix between latent representation vectors. Importantly, this reformulation allows the problem to
be re-expressed in terms of an optimization objective that is jointly convex in all participating variables. We establish this key intermediate result in this section in three steps: first, by re-expressing
the latent representation in terms of a latent kernel; second, by reformulating the second layer objective; and third, by reformulating the first layer objective by exploiting the large-margin formulation outlined in Section 2.1. Below let K = X′X denote the kernel matrix over the input data, let Im(N) denote the row space of N, and let † denote the Moore-Penrose pseudo-inverse.
First, simply define N = Φ′Φ. Next, re-express the second layer objective F₂ in (1) by the following.

Lemma 1. For any fixed Φ, letting N = Φ′Φ, it follows that

$$\min_V F_2(\Phi, V) = \min_{B \in \mathrm{Im}(N)} L_2(B, Y) + \tfrac{\beta}{2}\,\mathrm{tr}(B N^\dagger B'). \tag{5}$$
Proof. The result follows from the following sequence of equivalence preserving transformations:

$$\min_V L_2(V\Phi, Y) + \tfrac{\beta}{2}\|V\|_F^2 = \min_A L_2(AN, Y) + \tfrac{\beta}{2}\,\mathrm{tr}(ANA') \tag{6}$$
$$= \min_{B \in \mathrm{Im}(N)} L_2(B, Y) + \tfrac{\beta}{2}\,\mathrm{tr}(BN^\dagger B'), \tag{7}$$

where, starting with the definition of F₂ in (1), the first equality in (6) follows from the representer theorem applied to ‖V‖²_F, which implies that the optimal V must be in the form of V = AΦ′ for some A ∈ ℝ^{m×t} [28]; and finally, (7) follows by the change of variable B = AN.
Note that Lemma 1 holds for any loss L₂. In fact, the result follows solely from the structure of the regularizer. However, we require L₂ to be convex in its first argument to ensure a convex problem below. Convexity is indeed satisfied by the choice (3). Moreover, the term tr(BN†B′) is jointly convex in N and B since it is a perspective function [32], hence the objective in (5) is jointly convex.
Next, we reformulate the first layer objective F₁ in (1). Since this transformation exploits specific structure in the first layer loss, we present the result in two parts: first, by showing how the desired outcome follows from a general assumption on L₁, then demonstrating that this assumption is satisfied by the specific large-margin multi-label loss defined in (4). To establish this result we will exploit the following augmented forms for the data and variables: let Φ̃ = [Φ, kI], Ñ = Φ̃′Φ̃, Θ̃ = [Θ̂, 0], X̃ = [X, 0], K̃ = X̃′X̃, and t̃ = t + h.
Lemma 2. For any L₁, if there exists a function L̃₁ such that L₁(Θ̂, Φ) = L̃₁(Φ̃′Θ̃, Φ̃′Φ̃) for all Θ̂ ∈ ℝ^{h×t} and Φ ∈ {0, 1}^{h×t} such that Φ′1 = 1k, it then follows that

$$\min_W F_1(W, \Phi) = \min_{D \in \mathrm{Im}(\tilde N)} \tilde L_1(D\tilde K, \tilde N) + \tfrac{\alpha}{2}\,\mathrm{tr}(D'\tilde N^\dagger D\tilde K). \tag{8}$$
Proof. Similar to above, consider the sequence of equivalence preserving transformations:

$$\min_W L_1(WX, \Phi) + \tfrac{\alpha}{2}\|W\|_F^2 = \min_W \tilde L_1(\tilde\Phi' W\tilde X, \tilde\Phi'\tilde\Phi) + \tfrac{\alpha}{2}\|W\|_F^2 \tag{9}$$
$$= \min_C \tilde L_1(\tilde\Phi'\tilde\Phi\, C\, \tilde X'\tilde X, \tilde\Phi'\tilde\Phi) + \tfrac{\alpha}{2}\,\mathrm{tr}(\tilde X C'\tilde\Phi'\tilde\Phi C\tilde X') \tag{10}$$
$$= \min_{D \in \mathrm{Im}(\tilde N)} \tilde L_1(D\tilde K, \tilde N) + \tfrac{\alpha}{2}\,\mathrm{tr}(D'\tilde N^\dagger D\tilde K), \tag{11}$$

where, starting with the definition of F₁ in (1), the first equality (9) simply follows from the assumption. The second equality (10) follows from the representer theorem applied to ‖W‖²_F, which implies that the optimal W must be in the form of W = Φ̃CX̃′ for some C ∈ ℝ^{t̃×t̃} (using the fact that Φ̃ has full rank h) [28]. Finally, (11) follows by the change of variable D = ÑC.
Observe that the term tr(D′Ñ†DK̃) is again jointly convex in Ñ and D (also a perspective function), while it is easy to verify that L̃₁(DK̃, Ñ) as defined in Lemma 3 below is also jointly convex in Ñ and D [32]; therefore the objective in (8) is jointly convex.
Next, we show that the assumption of Lemma 2 is satisfied by the specific large-margin multi-label formulation in Section 2.1; that is, assume L₁ is given by the large-margin multi-label loss (4):

$$L_1(\hat\Theta, \Phi) = \sum_j \max\big(1 - \phi_j + \hat\theta_j\,\phi_j'1 - 1\,\phi_j'\hat\theta_j\big) = \Psi\big(11' - \Phi + \hat\Theta\,\mathrm{diag}(\Phi'1) - 1\,\mathrm{diag}(\Phi'\hat\Theta)'\big), \;\text{where } \Psi(\Lambda) := \sum_j \max(\lambda_j), \tag{12}$$

and where we use θ̂_j, φ_j and λ_j to denote the jth columns of Θ̂, Φ and Λ respectively.
Lemma 3. For the multi-label loss L₁ defined in (4), and for any fixed Φ ∈ {0, 1}^{h×t} where Φ′1 = 1k, the definition L̃₁(Φ̃′Θ̃, Φ̃′Φ̃) := Ψ(Φ̃′Θ̃ − Φ̃′Φ̃/k) + t − tr(Φ̃′Θ̃) using the augmentation above satisfies the property that L₁(Θ̂, Φ) = L̃₁(Φ̃′Θ̃, Φ̃′Φ̃) for any Θ̂ ∈ ℝ^{h×t}.
Proof. Since Φ′1 = 1k we obtain a simplification of L₁:

$$L_1(\hat\Theta, \Phi) = \Psi\big(k\hat\Theta - \Phi\big) + t - \mathrm{tr}(\Phi'\hat\Theta). \tag{13}$$

It only remains to establish that Ψ(kΘ̂ − Φ) = Ψ(Φ̃′Θ̃ − Φ̃′Φ̃/k). To do so, consider the sequence of equivalence preserving transformations:

$$\Psi(k\hat\Theta - \Phi) = \max_{\eta \in \mathbb{R}_+^{h\times \tilde t}:\, \eta'1 = 1} \mathrm{tr}\big(\eta'(k\tilde\Theta - \tilde\Phi)\big) \tag{14}$$
$$= \max_{\tilde\eta \in \mathbb{R}_+^{\tilde t\times \tilde t}:\, \tilde\eta'1 = 1} \tfrac{1}{k}\,\mathrm{tr}\big(\tilde\eta'\tilde\Phi'(k\tilde\Theta - \tilde\Phi)\big) = \Psi\big(\tilde\Phi'\tilde\Theta - \tilde\Phi'\tilde\Phi/k\big), \tag{15}$$

where the equalities in (14) and (15) follow from the definition of Ψ and the fact that linear maximizations over the simplex obtain their solutions at the vertices. To establish the equality between (14) and (15), since Φ̃ embeds the submatrix kI, for any η ∈ ℝ₊^{h×t̃} there must exist an η̃ ∈ ℝ₊^{t̃×t̃} satisfying η = Φ̃η̃/k. Furthermore, these matrices satisfy η′1 = 1 iff η̃′Φ̃′1/k = 1 iff η̃′1 = 1.
Therefore, the result (8) holds for the first layer loss (4), using L̃₁ defined in Lemma 3. (The same result can be established for other loss functions, such as the multi-class large-margin loss.) Combining these lemmas yields the desired result of this section.
Theorem 1. For any second layer loss and any first layer loss that satisfies the assumption of Lemma 2 (for example the large-margin multi-label loss (4)), the following equivalence holds:

$$(2) = \min_{\{\tilde N :\, \exists \Phi \in \{0,1\}^{h\times t}\ \mathrm{s.t.}\ \Phi'1 = 1k,\ \tilde N = \tilde\Phi'\tilde\Phi\}}\ \min_{B \in \mathrm{Im}(\tilde N)}\ \min_{D \in \mathrm{Im}(\tilde N)}\ \tilde L_1(D\tilde K, \tilde N) + \tfrac{\alpha}{2}\,\mathrm{tr}(D'\tilde N^\dagger D\tilde K) + \gamma\Big(L_2(B, Y) + \tfrac{\beta}{2}\,\mathrm{tr}(B\tilde N^\dagger B')\Big). \tag{16}$$

(Theorem 1 follows immediately from Lemmas 1 and 2.) Note that no relaxation has occurred thus far: the objective value of (16) matches that of (2). Not only has this reformulation resulted in (2) being entirely expressed in terms of the latent kernel matrix Ñ, the objective in (16) is jointly convex in all participating unknowns, Ñ, B and D. Unfortunately, the constraints in (16) are not convex.
4 Convex Relaxation
We first relax the problem by dropping the augmentation Φ ↦ Φ̃ and working with the t × t variable N = Φ′Φ. Without the augmentation, Lemma 3 becomes a lower bound (i.e. (14) ≥ (15)), hence a relaxation. To then achieve a convex form we further relax the constraints in (16). To do so, consider

$$\mathcal{N}_0 = \{N : \exists \Phi \in \{0,1\}^{h\times t}\ \text{such that}\ \Phi'1 = 1k\ \text{and}\ N = \Phi'\Phi\}, \tag{17}$$
$$\mathcal{N}_1 = \{N : N \in \{0, ..., k\}^{t\times t},\ N \succeq 0,\ \mathrm{diag}(N) = 1k,\ \mathrm{rank}(N) \le h\}, \tag{18}$$
$$\mathcal{N}_2 = \{N : N \ge 0,\ N \succeq 0,\ \mathrm{diag}(N) = 1k\}, \tag{19}$$

where it is clear from the definitions that N₀ ⊆ N₁ ⊆ N₂. (Here we use N ⪰ 0 to also encode N′ = N.) Note that the set N₀ corresponds to the original set of constraints from (16). The set
Algorithm 1: ADMM to optimize F(N) for N ∈ N₂.
1: Initialize: M₀ = I, Λ₀ = 0.
2: while T = 1, 2, ... do
3:   N_T ← argmin_{N⪰0} L(N, M_{T−1}, Λ_{T−1}), by using the boosting Algorithm 2.
4:   M_T ← argmin_{M≥0, M_ii=k} L(N_T, M, Λ_{T−1}), which has an efficient closed form solution.
5:   Λ_T ← Λ_{T−1} + (1/μ)(M_T − N_T); i.e. update the multipliers.
6: return N_T.
Algorithm 2: Boosting algorithm to optimize G(N) for N ⪰ 0.
1: Initialize: N₀ ← 0, H₀ ← [ ] (empty set).
2: while T = 1, 2, ... do
3:   Find the smallest arithmetic eigenvalue of ∇G(N_{T−1}), and its eigenvector h_T.
4:   Conic search by LBFGS: (a_T, b_T) ← argmin_{a≥0, b≥0} G(aN_{T−1} + b h_T h_T′).
5:   Local search by LBFGS: H_T ← local min_H G(HH′), initialized by H = (√a_T H_{T−1}, √b_T h_T).
6:   Set N_T ← H_T H_T′; break if stopping criterion met.
7: return N_T.
N₁ simplifies the characterization of this constraint set on the resulting kernel matrices N = Φ′Φ. However, neither N₀ nor N₁ is convex. Therefore, we need to adopt the further relaxed set N₂, which is convex. (Note that N_ij ≤ k is already implied by N ⪰ 0 and N_ii = k in N₂.) Since dropping the rank constraint eliminates the constraints B ∈ Im(N) and D ∈ Im(N) in (16) when N ≻ 0 [32], we obtain the following relaxed problem, which is jointly convex in N, B and D:

$$\min_{N \in \mathcal{N}_2}\ \min_{B \in \mathbb{R}^{t\times t}}\ \min_{D \in \mathbb{R}^{t\times t}}\ \tilde L_1(DK, N) + \tfrac{\alpha}{2}\,\mathrm{tr}(D'N^\dagger DK) + \gamma\Big(L_2(B, Y) + \tfrac{\beta}{2}\,\mathrm{tr}(BN^\dagger B')\Big). \tag{20}$$
5 Efficient Training Approach
Unfortunately, nonlinear semidefinite optimization problems in the form (20) are generally thought to be too expensive in practice despite their polynomial theoretical complexity [33, 34]. Therefore, we develop an effective training algorithm that exploits problem structure to bypass the main computational bottlenecks. The key challenge is that N₂ contains both semidefinite and affine constraints, and the pseudo-inverse N† makes optimization over N difficult even for fixed B and D.

To mitigate these difficulties we first treat (20) as the reduced problem, min_{N∈N₂} F(N), where F is an implicit objective achieved by minimizing out B and D. Note that F is still convex in N by the joint convexity of (20). To cope with the constraints on N we adopt the alternating direction method of multipliers (ADMM) [35] as the main outer optimization procedure; see Algorithm 1. This approach allows one to divide N₂ into two groups, N ⪰ 0 and {N_ij ≥ 0, N_ii = k}, yielding the augmented Lagrangian

$$\mathcal{L}(N, M, \Lambda) = F(N) + \delta(N \succeq 0) + \delta(M_{ij} \ge 0, M_{ii} = k) - \langle \Lambda, N - M\rangle + \tfrac{1}{2\mu}\|N - M\|_F^2, \tag{21}$$

where μ > 0 is a small constant, and δ denotes an indicator such that δ(·) = 0 if · is true, and ∞ otherwise. In this procedure, Steps 4 and 5 cost O(t²) time; whereas the main bottleneck is Step 3, which involves minimizing G_T(N) := L(N, M_{T−1}, Λ_{T−1}) over N ⪰ 0 for fixed M_{T−1} and Λ_{T−1}.
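The closed form in Step 4 is a separable projection, and Step 5 is the standard multiplier update; a numpy sketch under our reconstructed sign conventions for (21):

```python
import numpy as np

def admm_m_and_dual_step(N, Lam, k, mu):
    """Steps 4-5 of Algorithm 1 (illustrative; signs follow our reading of (21))."""
    M = np.maximum(N - mu * Lam, 0.0)   # elementwise projection onto M_ij >= 0
    np.fill_diagonal(M, k)              # enforce M_ii = k exactly
    Lam = Lam + (M - N) / mu            # dual (multiplier) update
    return M, Lam
```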
Boosting for Optimizing over the Positive Semidefinite Cone. To solve the problem in Step 3 we develop an efficient boosting procedure based on [36] that retains low rank iterates N_T while avoiding the need to determine N† when computing G(N) and ∇G(N); see Algorithm 2. The key idea is to use a simple change of variable. For example, consider the first layer objective and let G₁(N) = min_D L̃₁(DK, N) + (α/2) tr(D′N†DK). By defining D = NC, we obtain G₁(N) = min_C L̃₁(NCK, N) + (α/2) tr(C′NCK), which no longer involves N† but remains convex in C; this problem can be solved efficiently after a slight smoothing of the objective [37] (e.g. by LBFGS). Moreover, the gradient ∇G₁(N) can be readily computed given C*. Applying the same technique
[Figure 2 scatter plots omitted: (a) "Xor" (2 × 400), (b) "Boxes" (2 × 320), (c) "Interval" (2 × 200).]

(d) Synthetic results (% error):

        XOR         BOXES       INTER
TJB2    49.8 ±0.7   45.7 ±0.6   49.3 ±1.3
TSS1    50.2 ±1.2   35.7 ±1.3   42.6 ±3.9
SVM1    50.3 ±1.1   31.4 ±0.5   50.0 ±0.0
LOC2     4.2 ±0.9   11.4 ±0.6   50.0 ±0.0
CVX2     0.2 ±0.1   10.1 ±0.4   20.0 ±2.4

Figure 2: Synthetic experiments: three artificial data sets that cannot be meaningfully classified by a one-layer model that does not use a nonlinear kernel. Table shows percentage test set error.
to the second layer yields an efficient procedure for evaluating G(N) and ∇G(N). Finally note that many of the matrix-vector multiplications in this procedure can be further accelerated by exploiting the low rank factorization of N maintained by the boosting algorithm; see the Appendix for details.

Additional Relaxation. One can further reduce computation cost by adopting additional relaxations to (20). For example, by dropping N ≥ 0 and relaxing diag(N) = 1k to diag(N) ≤ 1k, the objective can be written as min_{N⪰0, maxᵢ Nᵢᵢ≤k} F(N). Since maxᵢ Nᵢᵢ is convex in N, it is well known that there must exist a constant c₁ > 0 such that the optimal N is also an optimal solution to min_{N⪰0} F(N) + c₁(maxᵢ Nᵢᵢ)². While maxᵢ Nᵢᵢ is not smooth, one can further smooth it with a softmax, to instead solve min_{N⪰0} F(N) + c₁(log Σᵢ exp(c₂Nᵢᵢ))² for some large c₂. This formulation avoids the need for ADMM entirely and can be directly solved by Algorithm 2.
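One boosting iteration of Algorithm 2 then only needs an extreme eigenpair and a two-variable search; a sketch assuming scipy is available (the rank-preserving local search over H from Step 5 is omitted here):

```python
import numpy as np
from scipy.sparse.linalg import eigsh
from scipy.optimize import minimize

def boosting_step(G, grad_G, N, H):
    """One iteration of Algorithm 2 (sketch): add the eigenvector of grad_G(N)
    with smallest algebraic eigenvalue as a rank-one direction, then rescale."""
    w, V = eigsh(grad_G(N), k=1, which='SA')       # smallest eigenpair
    h = V[:, 0]
    obj = lambda ab: G(abs(ab[0]) * N + abs(ab[1]) * np.outer(h, h))
    a, b = np.abs(minimize(obj, x0=np.ones(2)).x)  # conic search with a, b >= 0
    cols = [np.sqrt(a) * H] if H.size else []
    H = np.hstack(cols + [np.sqrt(b) * h[:, None]])
    return H @ H.T, H                              # low rank iterate N = H H'
```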
6 Experimental Evaluation
To investigate the effectiveness of the proposed relaxation scheme for training a two-layer conditional model, we conducted a number of experiments to compare learning quality against baseline
methods. Note that, given an optimal solution N, B and D to (20), an approximate solution to the original problem (2) can be recovered heuristically by first rounding N to obtain Φ, then recovering W and V, as shown in Lemmas 1 and 2. However, since our primary objective is to determine whether any convex relaxation of a two-layer model can even compete with one-layer or locally trained two-layer models (rather than evaluate heuristic rounding schemes), we consider a transductive evaluation that does not require any further modification of N, B and D. In such a set-up, training data is divided into a labeled and unlabeled portion, where the method receives X = [X_ℓ, X_u] and Y_ℓ, and at test time the resulting predictions Ŷ_u are evaluated against the held-out labels Y_u.
Methods. We compared the proposed convex relaxation scheme (CVX2) against the following
methods: simple alternating minimization of the same two-layer model (2) (LOC2), a one-layer
linear SVM trained on the labeled data (SVM1), the transductive one-layer SVM methods of [38]
(TSJ1) and [39] (TSS1), and the transductive latent clustering method of [18, 19] (TJB2), which
is also a two-layer model. Linear input kernels were used for all methods (standard in most deep
learning models) to control the comparison between one and two-layer models. Our experiments
were conducted with the following common protocol: First, the data was split into a separate training
and test set. Then the parameters of each procedure were optimized by a three-fold cross validation
on the training set. Once the optimal parameters were selected, they were fixed and used on the test
set. For transductive procedures, the same three training sets from the first phase were used, but then
combined with ten new test sets drawn from the disjoint test data (hence 30 overall) for the final
evaluation. At no point were test examples used to select any parameters for any of the methods.
We considered different proportions between labeled/unlabeled data; namely, 100/100 and 200/200.
Synthetic Experiments. We initially ran a proof of concept experiment on three binary labeled
artificial data sets depicted in Figure 2 (showing data set sizes n × t) with 100/100 labeled/unlabeled
training points. Here the goal was simply to determine whether the relaxed two-layer training
method could preserve sufficient structure to overcome the limits of a one-layer architecture. Clearly,
none of the data sets in Figure 2 are adequately modeled by a one-layer architecture (that does not
cheat and use a nonlinear kernel). The results are shown in the Figure 2(d) table.
7
        MNIST       USPS        Letter      COIL        CIFAR       G241N
TJB2    19.3 ±1.2   53.2 ±2.9   20.4 ±2.1   30.6 ±0.8   29.2 ±2.1   26.3 ±0.8
LOC2    19.3 ±1.0   13.9 ±1.1   10.4 ±0.6   18.0 ±0.5   31.8 ±0.9   41.6 ±0.9
SVM1    16.2 ±0.7   11.6 ±0.5    6.2 ±0.4   16.9 ±0.6   27.6 ±0.9   27.1 ±0.9
TSS1    13.7 ±0.8   11.1 ±0.5    5.9 ±0.5   17.5 ±0.6   26.7 ±0.7   25.1 ±0.8
TSJ1    14.6 ±0.7   12.1 ±0.4    5.6 ±0.5   17.2 ±0.6   26.6 ±0.8   24.4 ±0.7
CVX2     9.2 ±0.6    9.2 ±0.5    5.1 ±0.5   13.8 ±0.6   26.5 ±0.8   25.2 ±1.0

Table 1: Mean test misclassification error % (± stdev) for 100/100 labeled/unlabeled.
        MNIST       USPS        Letter      COIL        CIFAR       G241N
TJB2    13.7 ±0.6   46.6 ±1.0   14.0 ±2.6   45.0 ±0.8   30.4 ±1.9   22.4 ±0.5
LOC2    16.3 ±0.6    9.7 ±0.5    8.5 ±0.6   12.8 ±0.6   28.2 ±0.9   40.4 ±0.7
SVM1    11.2 ±0.4   10.7 ±0.4    5.0 ±0.3   15.6 ±0.5   25.5 ±0.6   22.9 ±0.5
TSS1    11.4 ±0.5   11.3 ±0.4    4.4 ±0.3   14.9 ±0.4   24.0 ±0.6   23.7 ±0.5
TSJ1    12.3 ±0.5   11.8 ±0.4    4.8 ±0.3   13.5 ±0.4   23.9 ±0.5   22.2 ±0.6
CVX2     8.8 ±0.4    6.6 ±0.4    3.8 ±0.3    8.2 ±0.4   22.8 ±0.6   20.3 ±0.5

Table 2: Mean test misclassification error % (± stdev) for 200/200 labeled/unlabeled.
As expected, the one-layer models SVM1 and TSS1 were unable to capture any useful classification
structure in these problems. (TSJ1 behaves similarly to TSS1.) The results obtained by CVX2, on
the other hand, are encouraging. In these data sets, CVX2 is easily able to capture latent nonlinearities while outperforming the locally trained LOC2. Although LOC2 is effective in the first two
cases, it exhibits weaker test accuracy while failing on the third data set. The two-layer method
TJB2 exhibited convergence difficulties on these problems that prevented reasonable results.
Experiments on "Real" Data Sets. Next, we conducted experiments on real data sets to determine whether the advantages in controlled synthetic settings could translate into useful results in
a more realistic scenario. For these experiments we used a collection of binary labeled data sets:
USPS, COIL and G241N from [40], Letter from [41], MNIST, and CIFAR-100 from [42]. (See
Appendix B in the supplement for further details.) The results are shown in Tables 1 and 2 for the
labeled/unlabeled proportions 100/100 and 200/200 respectively.
The relaxed two-layer method CVX2 again demonstrates effective results, although some data sets
caused difficulty for all methods. The data sets can be divided into two groups, (MNIST, USPS,
COIL) versus (Letter, CIFAR, G241N). In the first group, two-layer modeling demonstrates a clear
advantage: CVX2 outperforms SVM1 by a significant margin. Note that this advantage must be
due to two-layer versus one-layer modeling, since the transductive SVM methods TSS1 and TSJ1
demonstrate no advantage over SVM1. For the second group, the effectiveness of SVM1 demonstrates that only minor gains can be possible via transductive or two-layer extensions, although some
gains are realized. The locally trained two-layer model LOC2 performed quite poorly in all cases.
Unfortunately, the convex latent clustering method TJB2 was also not competitive on any of these
data sets. Overall, CVX2 appears to demonstrate useful promise as a two-layer modeling approach.
7 Conclusion
We have introduced a new convex approach to two-layer conditional modeling by reformulating the
problem in terms of a latent kernel over intermediate feature representations. The proposed model
can accommodate latent feature representations that go well beyond a latent clustering, extending current convex approaches. A semidefinite relaxation of the latent kernel allows a reasonable
implementation that is able to demonstrate advantages over single-layer models and local training
methods. From a deep learning perspective, this work demonstrates that trainable latent layers can
be expressed in terms of reproducing kernel Hilbert spaces, while large margin methods can be usefully applied to multi-layer prediction architectures. Important directions for future work include
replacing the step and indmax transfers with more traditional sigmoid and softmax transfers, while
also replacing the margin losses with more traditional Bregman divergences; refining the relaxation
to allow more control over the structure of the latent representations; and investigating the utility of
convex methods for stage-wise training within multi-layer architectures.
References
[1] Q. Le, M. Ranzato, R. Monga, M. Devin, G. Corrado, K. Chen, J. Dean, and A. Ng. Building high-level features using large scale unsupervised learning. In Proceedings ICML, 2012.
[2] N. Srivastava and R. Salakhutdinov. Multimodal learning with deep Boltzmann machines. In NIPS, 2012.
[3] Y. Bengio. Learning deep architectures for AI. Foundat. and Trends in Machine Learning, 2:1-127, 2009.
[4] G. Hinton. Learning multiple layers of representations. Trends in Cognitive Sciences, 11:428-434, 2007.
[5] G. Hinton, S. Osindero, and Y. Teh. A fast algorithm for deep belief nets. Neur. Comp., 18(7), 2006.
[6] N. Lawrence. Probabilistic non-linear principal component analysis. JMLR, 6:1783-1816, 2005.
[7] A. Banerjee, S. Merugu, I. Dhillon, and J. Ghosh. Clustering with Bregman divergences. J. Mach. Learn. Res., 6:1705-1749, 2005.
[8] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. on Image Processing, 15:3736-3745, 2006.
[9] P. Comon. Independent component analysis, a new concept? Signal Processing, 36(3):287-314, 1994.
[10] M. Carreira-Perpiñán and Z. Lu. Dimensionality reduction by unsupervised regression. In CVPR, 2010.
[11] N. Tishby, F. Pereira, and W. Bialek. The information bottleneck method. In Allerton Conf., 1999.
[12] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In ICML, 2011.
[13] K. Swersky, M. Ranzato, D. Buchman, B. Marlin, and N. de Freitas. On autoencoders and score matching for energy based models. In Proceedings ICML, 2011.
[14] Y. LeCun. Who is afraid of non-convex loss functions? http://videolectures.net/eml07_lecun_wia, 2007.
[15] Y. Bengio, N. Le Roux, P. Vincent, and O. Delalleau. Convex neural networks. In NIPS, 2005.
[16] S. Nowozin and G. Bakir. A decoupled approach to exemplar-based unsupervised learning. In Proceedings of the International Conference on Machine Learning, 2008.
[17] D. Bradley and J. Bagnell. Convex coding. In UAI, 2009.
[18] A. Joulin and F. Bach. A convex relaxation for weakly supervised classifiers. In Proc. ICML, 2012.
[19] A. Joulin, F. Bach, and J. Ponce. Efficient optimization for discrimin. latent class models. In NIPS, 2010.
[20] Y. Guo and D. Schuurmans. Convex relaxations of latent variable training. In Proc. NIPS 20, 2007.
[21] A. Goldberg, X. Zhu, B. Recht, J. Xu, and R. Nowak. Transduction with matrix completion: Three birds with one stone. In NIPS 23, 2010.
[22] E. Candes, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? arXiv:0912.3599, 2009.
[23] X. Zhang, Y. Yu, and D. Schuurmans. Accelerated training for matrix-norm regularization: A boosting approach. In Advances in Neural Information Processing Systems 25, 2012.
[24] A. Anandkumar, D. Hsu, and S. Kakade. A method of moments for mixture models and hidden Markov models. In Proc. Conference on Learning Theory, 2012.
[25] D. Hsu and S. Kakade. Learning mixtures of spherical Gaussians: Moment methods and spectral decompositions. In Innovations in Theoretical Computer Science (ITCS), 2013.
[26] Y. Cho and L. Saul. Large margin classification in infinite neural networks. Neural Comput., 22, 2010.
[27] R. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56(1):71-113, 1992.
[28] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. JMAA, 33:82-95, 1971.
[29] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. JMLR, pages 265-292, 2001.
[30] J. Fuernkranz, E. Huellermeier, E. Mencia, and K. Brinker. Multilabel classification via calibrated label ranking. Machine Learning, 73(2):133-153, 2008.
[31] Y. Guo and D. Schuurmans. Adaptive large margin training for multilabel classification. In AAAI, 2011.
[32] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Mach. Learn., 73(3), 2008.
[33] Y. Nesterov and A. Nimirovskii. Interior-Point Polynomial Algorithms in Convex Programming. 1994.
[34] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge U. Press, 2004.
[35] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundat. Trends in Mach. Learn., 3(1):1-123, 2010.
[36] S. Laue. A hybrid algorithm for convex semidefinite optimization. In Proc. ICML, 2012.
[37] O. Chapelle. Training a support vector machine in the primal. Neural Comput., 19(5):1155-1178, 2007.
[38] T. Joachims. Transductive inference for text classification using support vector machines. In ICML, 1999.
[39] V. Sindhwani and S. Keerthi. Large scale semi-supervised linear SVMs. In SIGIR, 2006.
[40] http://olivier.chapelle.cc/ssl-book/benchmarks.html.
[41] http://archive.ics.uci.edu/ml/datasets.
[42] http://www.cs.toronto.edu/~kriz/cifar.html.
4,273 | 4,869 | Reconciling "priors" & "priors" without prejudice?
Rémi Gribonval *
Inria
Centre Inria Rennes - Bretagne Atlantique
[email protected]
Pierre Machart
Inria
Centre Inria Rennes - Bretagne Atlantique
[email protected]
Abstract
There are two major routes to address linear inverse problems. Whereas
regularization-based approaches build estimators as solutions of penalized regression optimization problems, Bayesian estimators rely on the posterior distribution
of the unknown, given some assumed family of priors. While these may seem
radically different approaches, recent results have shown that, in the context of
additive white Gaussian denoising, the Bayesian conditional mean estimator is
always the solution of a penalized regression problem. The contribution of this
paper is twofold. First, we extend the additive white Gaussian denoising results
to general linear inverse problems with colored Gaussian noise. Second, we characterize conditions under which the penalty function associated to the conditional
mean estimator can satisfy certain popular properties such as convexity, separability, and smoothness. This sheds light on some tradeoff between computational
efficiency and estimation accuracy in sparse regularization, and draws some connections between Bayesian estimation and proximal optimization.
1 Introduction
Let us consider a fairly general linear inverse problem, where one wants to estimate a parameter vector z ∈ R^D, from a noisy observation y ∈ R^n, such that y = Az + b, where A ∈ R^{n×D} is sometimes referred to as the observation or design matrix, and b ∈ R^n represents an additive Gaussian noise with a distribution P_B = N(0, Σ). When n < D, it turns out to be an ill-posed
problem. However, leveraging some prior knowledge or information, a profusion of schemes have
been developed in order to provide an appropriate estimation of z. In this abundance, we will focus
on two seemingly very different approaches.
1.1 Two families of approaches for linear inverse problems
On the one hand, Bayesian approaches are based on the assumption that z and b are drawn from
probability distributions P_Z and P_B respectively. From that point, a straightforward way to estimate
z is to build, for instance, the Minimum Mean Squared Estimator (MMSE), sometimes referred to
as Bayesian Least Squares, conditional expectation or conditional mean estimator, and defined as:
      ψ_MMSE(y) := E(Z|Y = y).                                   (1)
This estimator has the nice property of being optimal (in a least squares sense) but suffers from
its explicit reliance on the prior distribution, which is usually unknown in practice. Moreover, its
computation involves a tedious integral computation that generally cannot be done explicitly.
On the other hand, regularization-based approaches have been at the centre of a tremendous amount
of work from a wide community of researchers in machine learning, signal processing, and more
* The authors are with the PANAMA project-team at IRISA, Rennes, France.
generally in applied mathematics. These approaches focus on building estimators (also called decoders) with no explicit reference to the prior distribution. Instead, these estimators are built as an
optimal trade-off between a data fidelity term and a term promoting some regularity on the solution.
Among these, we will focus on a widely studied family of estimators ψ that write in this form:
      ψ(y) := argmin_{z∈R^D} (1/2)‖y − Az‖² + φ(z).              (2)
For instance, the specific choice φ(z) = λ‖z‖₂² gives rise to a method often referred to as the ridge regression [1] while φ(z) = λ‖z‖₁ gives rise to the famous Lasso [2].
The ℓ₁ decoder associated to φ(z) = λ‖z‖₁ has attracted particular attention, for the associated optimization problem is convex, and generalizations to other "mixed" norms are being intensively studied [3, 4]. Several facts explain the popularity of such approaches: a) these penalties have well-understood geometric interpretations; b) they are known to be sparsity promoting (the minimizer
has many zeroes); c) this can be exploited in active set methods for computational efficiency [5]; d)
convexity offers a comfortable framework to ensure both a unique minimum and a rich toolbox of
efficient and provably convergent optimization algorithms [6].
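To make the estimator family (2) concrete, here is a minimal numerical sketch (ours, not from the paper): ridge admits a closed form, while the Lasso can be solved by ISTA, one of the provably convergent first-order schemes alluded to in point d). Function names and parameters are illustrative choices.

```python
import numpy as np

def ridge(y, A, lam):
    # argmin_z (1/2)||y - Az||^2 + lam ||z||_2^2  ->  (A^T A + 2 lam I)^{-1} A^T y
    return np.linalg.solve(A.T @ A + 2 * lam * np.eye(A.shape[1]), A.T @ y)

def lasso_ista(y, A, lam, n_iter=500):
    # argmin_z (1/2)||y - Az||^2 + lam ||z||_1, solved by iterative soft-thresholding
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = z - A.T @ (A @ z - y) / L           # gradient step on the data term
        z = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # prox of lam*||.||_1
    return z
```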
1.2 Do they really provide different estimators?
Regularization and Bayesian estimation seemingly yield radically different viewpoints on inverse
problems. In fact, they are underpinned by distinct ways of defining signal models or "priors". The "regularization prior" is embodied by the penalty function φ(z) which promotes certain solutions, somehow carving an implicit signal model. In the Bayesian framework, the "Bayesian prior" is embodied by where the mass of the signal distribution P_Z lies.
The MAP quid pro quo. A quid pro quo between these distinct notions of priors has crystallized around the notion of maximum a posteriori (MAP) estimation, leading to a long lasting incomprehension between two worlds. In fact, a simple application of Bayes rule shows that under a Gaussian noise model b ∼ N(0, I) and Bayesian prior P_Z(z ∈ E) = ∫_E p_Z(z) dz, E ⊆ R^N, MAP estimation¹ yields the optimization problem (2) with regularization prior φ_Z(z) := −log p_Z(z). By a trivial identification, the optimization problem (2) with regularization prior φ(z) is now routinely called "MAP with prior exp(−φ(z))". With the ℓ₁ penalty, it is often called "MAP with a Laplacian prior". As an unfortunate consequence of an erroneous "reverse reading" of this fact, this identification has given rise to the erroneous but common myth that the optimization approach is particularly well adapted when the unknown is distributed as exp(−φ(z)). As a striking counter-example to this myth, it has recently been proved [7] that when z is drawn i.i.d. Laplacian and A ∈ R^{n×D} is drawn from the Gaussian ensemble, the ℓ₁ decoder (and indeed any sparse decoder) will be outperformed by the least squares decoder ψ_LS(y) := pinv(A)y, unless n ≳ 0.15D.
In fact, [8] warns us that the MAP estimate is only one of the plural possible Bayesian interpretations of (2), even though it is the most straightforward one. Furthermore, to point out that erroneous conception, a deeper connection is dug, showing that in the more restricted context of (white) Gaussian denoising, for any prior, there exists a regularizer φ such that the MMSE estimator can be expressed as the solution of problem (2). This result essentially exhibits a regularization-oriented formulation for which two radically different interpretations can be made. It highlights the important following fact: the specific choice of a regularizer φ does not, alone, induce an implicit prior on the supposed distribution of the unknown; besides a prior P_Z, a Bayesian estimator also involves the choice of a loss function. For certain regularizers φ, there can in fact exist (at least two) different priors P_Z for which the optimization problem (2) yields the optimal Bayesian estimator, associated to (at least) two different losses (e.g., the 0/1 loss for the MAP, and the quadratic loss for the MMSE).
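The distinction is easy to check numerically. The sketch below (our illustration; the grid and scales are arbitrary) compares, for a 1D Laplacian prior and unit-variance Gaussian noise, the MAP estimate (the familiar soft-thresholding rule) with the conditional mean E[Z|Y = y] computed by quadrature: two Bayesian estimators attached to the same prior, yet visibly different.

```python
import numpy as np

z = np.linspace(-30, 30, 6001)                  # integration grid for Z
p_z = 0.5 * np.exp(-np.abs(z))                  # Laplacian prior, scale 1

def mmse(y, sigma=1.0):
    post = np.exp(-0.5 * (y - z) ** 2 / sigma ** 2) * p_z   # unnormalized posterior
    return np.trapz(z * post, z) / np.trapz(post, z)

def map_est(y, sigma=1.0):
    return np.sign(y) * max(abs(y) - sigma ** 2, 0.0)       # soft thresholding

for y in [0.5, 1.5, 3.0]:
    print(y, map_est(y), mmse(y))               # the two estimators disagree
```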
1.3 Main contributions
A first major contribution of this paper is to extend the aforementioned result [8] to a more general
linear inverse problem setting. Our first main results can be introduced as follows:
¹ which is the Bayesian optimal estimator in a 0/1 loss sense, for discrete signals.
Theorem (Flavour of the main result). For any non-degenerate² prior P_Z, any non-degenerate covariance matrix Σ and any design matrix A with full rank, there exists a regularizer φ_{A,Σ,P_Z} such that the MMSE estimator of z ∼ P_Z given the observation y = Az + b with b ∼ N(0, Σ),
      ψ_{A,Σ,P_Z}(y) := E(Z|Y = y),                              (3)
is a minimizer of z ↦ (1/2)‖y − Az‖²_Σ + φ_{A,Σ,P_Z}(z).
Roughly, it states that for the considered inverse problem, for any prior on z, the MMSE estimate
with Gaussian noise is also the solution of a regularization-based problem (the converse is not true).
In addition to this result we further characterize properties of the penalty function φ_{A,Σ,P_Z}(z) in the case where A is invertible, by showing that: a) it is convex if and only if the probability density function of the observation y, p_Y(y) (often called the evidence), is log-concave; b) when A = I, it is a separable sum φ(z) = Σ_{i=1}^n φ_i(z_i) where z = (z₁, . . . , z_n) if, and only if, the evidence is multiplicatively separable: p_Y(y) = Π_{i=1}^n p_{Y_i}(y_i).
1.4 Outline of the paper
In Section 2, we develop the main result of our paper, that we just introduced. To do so, we review an
existing result from the literature and explicit the different steps that make it possible to generalize
it to the linear inverse problem setting. In Section 3, we provide further results that shed some light
on the connections between MMSE and regularization-oriented estimators. Namely, we introduce
some commonly desired properties on the regularizing function such as separability and convexity
and show how they relate to the priors in the Bayesian framework. Finally, in Section 4, we conclude
and offer some perspectives of extension of the present work.
2 Main steps to the main result
We begin by highlighting some intermediate results that build, step by step, towards our main theorem.
2.1 An existing result for white Gaussian denoising
As a starting point, let us recall the existing results in [8] (Lemma II.1 and Theorem II.2) dealing
with the additive white Gaussian denoising problem, A = I, Σ = I.
Theorem 1 (Reformulation of the main results of [8]). For any non-degenerate prior P_Z, we have:
1. ψ_{I,I,P_Z} is one-to-one;
2. ψ_{I,I,P_Z} and its inverse are C^∞;
3. ∀y ∈ R^n, ψ_{I,I,P_Z}(y) is the unique global minimum and unique stationary point of
      z ↦ (1/2)‖y − Iz‖² + φ(z),                                 (4)
   with:
      φ(z) = φ_{I,I,P_Z}(z) :=
         −(1/2)‖ψ_{I,I,P_Z}^{-1}(z) − z‖₂² − log p_Y[ψ_{I,I,P_Z}^{-1}(z)],   for z ∈ Im ψ_{I,I,P_Z};
         +∞,                                                                  for z ∉ Im ψ_{I,I,P_Z};
4. The penalty function φ_{I,I,P_Z} is C^∞;
5. Any penalty function φ(z) such that ψ_{I,I,P_Z}(y) is a stationary point of (4) satisfies φ(z) = φ_{I,I,P_Z}(z) + C for some constant C and all z.
² We only need to assume that Z does not intrinsically live almost surely in a lower dimensional hyperplane. The results easily generalize to this degenerate situation by considering appropriate projections of y and z. Similar remarks are in order for the non-degeneracy assumptions on Σ and A.
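In one dimension, the penalty of Theorem 1 can be tabulated directly, since the MMSE map is increasing and hence invertible on a grid. The sketch below (ours; the Laplacian prior and grid bounds are arbitrary choices) computes p_Y, the map ψ, and the penalty φ(z) = −(1/2)(ψ^{-1}(z) − z)² − log p_Y(ψ^{-1}(z)) by quadrature:

```python
import numpy as np

t = np.linspace(-15, 15, 1501)                  # shared grid for z and y
p_z = 0.5 * np.exp(-np.abs(t))                  # example prior: Laplacian
gauss = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

like = gauss(t[:, None] - t[None, :])           # like[i, j] = p_B(y_i - z_j)
p_y = np.trapz(like * p_z[None, :], t, axis=1)  # evidence p_Y(y_i)
psi = np.trapz(like * (t * p_z)[None, :], t, axis=1) / p_y   # MMSE map psi(y_i)

# psi is one-to-one, so the pairs (psi(y_i), y_i) tabulate psi^{-1}; plugging
# them into the formula of Theorem 1 tabulates the penalty at z = psi(y_i):
phi = -0.5 * (t - psi) ** 2 - np.log(p_y)
# plotting phi against psi displays a smooth, non-quadratic penalty
```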
2.2 Non-white noise
Suppose now that B ∈ R^n is a centred non-degenerate normal Gaussian variable with a (positive definite) covariance matrix Σ. Using a standard noise whitening technique, Σ^{-1/2}B ∼ N(0, I). This makes our denoising problem equivalent to ỹ = z̃ + b̃, with ỹ := Σ^{-1/2}y, z̃ := Σ^{-1/2}z and b̃ := Σ^{-1/2}b, which is drawn from a Gaussian distribution with an identity covariance matrix. Finally, let ‖·‖_Σ be the norm induced by the scalar product ⟨x, y⟩_Σ := ⟨x, Σ^{-1}y⟩.
Corollary 1 (non-white Gaussian noise). For any non-degenerate prior P_Z, any non-degenerate Σ, Y = Z + B, we have:
1. ψ_{I,Σ,P_Z} is one-to-one.
2. ψ_{I,Σ,P_Z} and its inverse are C^∞.
3. ∀y ∈ R^n, ψ_{I,Σ,P_Z}(y) is the unique global minimum and stationary point of
      z ↦ (1/2)‖y − Iz‖²_Σ + φ_{I,Σ,P_Z}(z),
   with φ_{I,Σ,P_Z}(z) := φ_{I,I,P_{Σ^{-1/2}Z}}(Σ^{-1/2}z).
4. φ_{I,Σ,P_Z} is C^∞.
As with white noise, up to an additive constant, φ_{I,Σ,P_Z} is the only penalty with these properties.
Proof. First, we introduce a simple lemma that is pivotal throughout each step of this section.
Lemma 1. For any function f : R^n → R and any M ∈ R^{n×p}, we have:
      M argmin_{v∈R^p} f(Mv) = argmin_{u∈range(M)⊆R^n} f(u).
Now, the linearity of the (conditional) expectation makes it possible to write
      Σ^{-1/2} E(Z|Y = y) = E(Σ^{-1/2}Z | Σ^{-1/2}Y = Σ^{-1/2}y)
      ⇒ Σ^{-1/2} ψ_{I,Σ,P_Z}(y) = ψ_{I,I,P_{Σ^{-1/2}Z}}(Σ^{-1/2}y).
Using Theorem 1, it follows that:
      ψ_{I,Σ,P_Z}(y) = Σ^{1/2} ψ_{I,I,P_{Σ^{-1/2}Z}}(Σ^{-1/2}y).
From this property and Theorem 1, it is clear that ψ_{I,Σ,P_Z} is one-to-one and C^∞, as well as its inverse. Now, using Lemma 1 with M = Σ^{1/2}, we get:
      ψ_{I,Σ,P_Z}(y) = Σ^{1/2} argmin_{z'∈R^n} (1/2)‖Σ^{-1/2}y − z'‖² + φ_{I,I,P_{Σ^{-1/2}Z}}(z')
                     = argmin_{z∈R^n} (1/2)‖Σ^{-1/2}y − Σ^{-1/2}z‖² + φ_{I,I,P_{Σ^{-1/2}Z}}(Σ^{-1/2}z)
                     = argmin_{z∈R^n} (1/2)‖y − z‖²_Σ + φ_{I,Σ,P_Z}(z),
with φ_{I,Σ,P_Z}(z) := φ_{I,I,P_{Σ^{-1/2}Z}}(Σ^{-1/2}z). This definition also makes it clear that φ_{I,Σ,P_Z} is C^∞, and that this minimizer is unique (and is the only stationary point).
2.3 A simple under-determined problem
As a step towards handling the more generic linear inverse problem y = Az + b, we will investigate
the particular case where A = [I 0]. For the sake of readability, for any two (column) vectors
u, v, let us denote [u; v] the concatenated (column) vector. First and foremost let us decompose the
MMSE estimator into two parts, composed of the first n and last (D − n) components:
      ψ_{[I 0],Σ,P_Z}(y) := [ψ₁(y); ψ₂(y)].
Corollary 2 (simple under-determined problem). For any non-degenerate prior P_Z, any non-degenerate Σ, we have:
1. ψ₁(y) = ψ_{I,Σ,P_{Z₁}}(y) is one-to-one and C^∞. Its inverse and φ_{I,Σ,P_{Z₁}} are also C^∞;
2. ψ₂(y) = (p_B ∗ g)(y)/(p_B ∗ p_{Z₁})(y) (with g(z₁) := E(Z₂|Z₁ = z₁) p(z₁)) is C^∞;
3. ψ_{[I 0],Σ,P_Z} is injective.
Moreover, let h : R^{D−n} × R^{D−n} → R₊ be any function such that h(x₁, x₂) = 0 ⇔ x₁ = x₂. Then:
4. ∀y ∈ R^n, ψ_{[I 0],Σ,P_Z}(y) is the unique global minimum and stationary point of
      z ↦ (1/2)‖y − [I 0]z‖²_Σ + φ^h_{[I 0],Σ,P_Z}(z)
   with φ^h_{[I 0],Σ,P_Z}(z) := φ_{I,Σ,P_{Z₁}}(z₁) + h(z₂, ψ₂ ∘ ψ₁^{-1}(z₁)) and z = [z₁; z₂].
5. ψ_{[I 0],Σ,P_Z} is C^∞ if and only if h is.
Proof. The expression of ψ₂(y) is obtained by Bayes rule in the integral defining the conditional expectation. The smoothing effect of convolution with the Gaussian p_B(b) implies the C^∞ nature of ψ₂. Let Z₁ = [I 0]Z. Using again the linearity of the expectation, we have:
      [I 0] ψ_{[I 0],Σ,P_Z}(y) = E([I 0]Z|Y = y) = E(Z₁|Y = y) = ψ_{I,Σ,P_{Z₁}}(y).
Hence, ψ₁(y) = ψ_{I,Σ,P_{Z₁}}(y). Given the properties of h, we have:
      ψ₂(y) = argmin_{z₂∈R^{D−n}} h(z₂, ψ₂ ∘ ψ₁^{-1}(ψ₁(y))).
It follows that:
      ψ_{[I 0],Σ,P_Z}(y) = argmin_{z=[z₁;z₂]∈R^D} (1/2)‖y − z₁‖²_Σ + φ_{I,Σ,P_{Z₁}}(z₁) + h(z₂, ψ₂ ∘ ψ₁^{-1}(z₁)).
From the definitions of ψ_{[I 0],Σ,P_Z} and h, it is clear, using Corollary 1, that ψ_{[I 0],Σ,P_Z} is injective, is the unique minimizer and stationary point, and that ψ_{[I 0],Σ,P_Z} is C^∞ if and only if h is.
2.4 Inverse Problem
We are now equipped to generalize our result to an arbitrary full rank matrix A. Using the Singular Value Decomposition, A can be factored as:
      A = U[Λ 0]V^T = Ũ[I 0]V^T,  with Ũ := UΛ.
Our problem is now equivalent to y' := Ũ^{-1}y = [I 0]V^T z + Ũ^{-1}b =: z' + b'.
Let Σ̃ := Ũ^{-1} Σ Ũ^{-T}. Note that B' ∼ N(0, Σ̃).
Theorem 2 (Main result). Let h : R^{D−n} × R^{D−n} → R₊ be any function such that h(x₁, x₂) = 0 ⇔ x₁ = x₂. For any non-degenerate prior P_Z, any non-degenerate Σ and A, we have:
1. ψ_{A,Σ,P_Z} is injective.
2. ∀y ∈ R^n, ψ_{A,Σ,P_Z}(y) is the unique global minimum and stationary point of
      z ↦ (1/2)‖y − Az‖²_Σ + φ^h_{A,Σ,P_Z}(z),  with  φ^h_{A,Σ,P_Z}(z) := φ^h_{[I 0],Σ̃,P_{V^T Z}}(V^T z).
3. ψ_{A,Σ,P_Z} is C^∞ if and only if h is.
Proof. First note that:
      V^T ψ_{A,Σ,P_Z}(y) = V^T E(Z|Y = y) = E(Z'|Y' = y') = ψ_{[I 0],Σ̃,P_{Z'}}(y')
                         = argmin_{z'} (1/2)‖Ũ^{-1}y − [I 0]z'‖²_Σ̃ + φ^h_{[I 0],Σ̃,P_{V^T Z}}(z'),
using Corollary 2. Now, using Lemma 1, we have:
      ψ_{A,Σ,P_Z}(y) = argmin_z (1/2)‖Ũ^{-1}y − [I 0]V^T z‖²_Σ̃ + φ^h_{[I 0],Σ̃,P_{V^T Z}}(V^T z)
                     = argmin_z (1/2)‖y − Az‖²_Σ + φ^h_{[I 0],Σ̃,P_{V^T Z}}(V^T z).
The other properties come naturally from those of Corollary 2.
Remark 1. If A is invertible (hence square), ψ_{A,Σ,P_Z} is one-to-one. It is also C^∞, as well as its inverse and φ_{A,Σ,P_Z}.
3 Connections between the MMSE and regularization-based estimators
Equipped with the results from the previous sections, we can now have a clearer idea about how MMSE estimators and those produced by a regularization-based approach relate to each other. This is the object of the present section.
3.1 Obvious connections
Some simple observations of the main theorem can already shed some light on those connections.
First, for any prior, and as long as A is invertible, we have shown that there exists a corresponding
regularizing term (which is unique up to an additive constant). This simply means that the set of
MMSE estimators in linear inverse problems with Gaussian noise is a subset of the set of estimators
that can be produced by a regularization approach with a quadratic data-fitting term.
Second, since the corresponding penalty is necessarily smooth, it is in fact only a strict subset of such
regularization estimators. In other words, for some regularizers, there cannot be any interpretation
in terms of an MMSE estimator. For instance, as pinpointed by [8], all the non-C^∞ regularizers belong to that category. Among them, all the sparsity-inducing regularizers (the ℓ₁ norm, among others) fall into this scope. This means that when it comes to solving a linear inverse problem (with an invertible A) under Gaussian noise, sparsity-inducing penalties are necessarily suboptimal (in a mean squared error sense).
3.2 Relating desired computational properties to the evidence
Let us now focus on the MMSE estimators (which can also be written as regularization-based estimators). As reported in the introduction, one of the reasons explaining the success of the optimization-based approaches is that one can have better control on the computational efficiency of the algorithms via some appealing properties of the functional to minimize. An interesting question then
is: can we relate these properties of the regularizer to the Bayesian priors, when interpreting the
solution as an MMSE estimate?
For instance, when the regularizer is separable, one may easily rely on coordinate descent algorithms [9]. Here is a more formal definition:
Definition 1 (Separability). A vector-valued function f : R^n → R^n is separable if there exists a set of functions f₁, . . . , f_n : R → R such that ∀x ∈ R^n, f(x) = (f_i(x_i))_{i=1}^n.
A scalar-valued function g : R^n → R is additively separable (resp. multiplicatively separable) if there exists a set of functions g₁, . . . , g_n : R → R such that ∀x ∈ R^n, g(x) = Σ_{i=1}^n g_i(x_i) (resp. g(x) = Π_{i=1}^n g_i(x_i)).
Especially when working with high-dimensional data, coordinate descent algorithms have proven to
be very efficient and have been extensively used for machine learning [10, 11].
Even more evidently, when solving optimization problems, dealing with convex functions ensures
that many algorithms will provably converge to the global minimizer [6]. As a consequence, it
would be interesting to be able to characterize the set of priors for which the MMSE estimate can be
expressed as a minimizer of a convex function.
The following lemma precisely addresses these issues. For the sake of simplicity and readability, we
focus on the specific case where A = I and Σ = I.
Lemma 2 (Convexity and Separability). For any non-degenerate prior P_Z, Theorem 1 says that ∀y ∈ R^n, ψ_{I,I,P_Z}(y) is the unique global minimum and stationary point of z ↦ (1/2)‖y − Iz‖² + φ_{I,I,P_Z}(z). Moreover, the following results hold:
1. φ_{I,I,P_Z} is convex if and only if p_Y(y) := (p_B ∗ P_Z)(y) is log-concave,
2. φ_{I,I,P_Z} is additively separable if and only if p_Y(y) is multiplicatively separable.
Proof of Lemma 2. From Lemma II.1 in [8], the Jacobian matrix J[ψ_{I,I,P_Z}](y) is positive definite, hence invertible. Differentiating φ_{I,I,P_Z}[ψ_{I,I,P_Z}(y)] from its definition in Theorem 1, we get:
      J[ψ_{I,I,P_Z}](y) ∇φ_{I,I,P_Z}[ψ_{I,I,P_Z}(y)]
         = ∇( −(1/2)‖y − ψ_{I,I,P_Z}(y)‖₂² − log p_Y(y) )
         = −(I_n − J[ψ_{I,I,P_Z}](y))(y − ψ_{I,I,P_Z}(y)) − ∇ log p_Y(y)
         = (I_n − J[ψ_{I,I,P_Z}](y)) ∇ log p_Y(y) − ∇ log p_Y(y)
         = −J[ψ_{I,I,P_Z}](y) ∇ log p_Y(y).
Then:
      ∇φ_{I,I,P_Z}[ψ_{I,I,P_Z}(y)] = −∇ log p_Y(y).
Differentiating this expression once more, we get:
      J[ψ_{I,I,P_Z}](y) ∇²φ_{I,I,P_Z}[ψ_{I,I,P_Z}(y)] = −∇² log p_Y(y).
Hence:
      ∇²φ_{I,I,P_Z}[ψ_{I,I,P_Z}(y)] = −J^{-1}[ψ_{I,I,P_Z}](y) ∇² log p_Y(y).
As ψ_{I,I,P_Z} is one-to-one, φ_{I,I,P_Z} is convex if and only if φ_{I,I,P_Z}[ψ_{I,I,P_Z}] is. It also follows that:
      φ_{I,I,P_Z} convex ⇔ ∇²φ_{I,I,P_Z}[ψ_{I,I,P_Z}(y)] ⪰ 0
                         ⇔ −J^{-1}[ψ_{I,I,P_Z}](y) ∇² log p_Y(y) ⪰ 0.
As J[ψ_{I,I,P_Z}](y) = I_n + ∇² log p_Y(y), the matrices ∇² log p_Y(y), J[ψ_{I,I,P_Z}](y), and J^{-1}[ψ_{I,I,P_Z}](y) are simultaneously diagonalisable. It follows that the matrices J^{-1}[ψ_{I,I,P_Z}](y) and ∇² log p_Y(y) commute. Now, as J[ψ_{I,I,P_Z}](y) is positive definite, we have:
      −J^{-1}[ψ_{I,I,P_Z}](y) ∇² log p_Y(y) ⪰ 0 ⇔ ∇² log p_Y(y) ⪯ 0.
It follows that φ_{I,I,P_Z} is convex if and only if p_Y(y) := (p_B ∗ P_X)(y) is log-concave.
By its definition (II.3) in [8], it is clear that:
      φ_{I,I,P_Z} is additively separable ⇔ ψ_{I,I,P_Z} is separable.
Using now equation (II.2) in [8], we have:
      ψ_{I,I,P_Z} is separable ⇔ ∇ log p_Y is separable
                               ⇔ log p_Y is additively separable
                               ⇔ p_Y is multiplicatively separable.
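Item 1 can be illustrated numerically. In the sketch below (ours; the mixture parameters are arbitrary), the evidence p_Y = p_B ∗ p_Z for a symmetric two-component Gaussian mixture prior is tested for log-concavity through the sign of the second difference of log p_Y: with nearby components the evidence is log-concave (a convex penalty exists), with well-separated components it is not, so no convex penalty can yield the MMSE.

```python
import numpy as np

y = np.linspace(-12, 12, 4001)

def evidence(mu, sp=0.5, sn=1.0):
    # p_Y for the prior 0.5 N(-mu, sp^2) + 0.5 N(mu, sp^2) and N(0, sn^2) noise:
    # the convolution is again a Gaussian mixture, with variance sp^2 + sn^2.
    s2 = sp ** 2 + sn ** 2
    g = lambda m: np.exp(-0.5 * (y - m) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
    return 0.5 * (g(-mu) + g(mu))

for mu in [0.5, 4.0]:
    d2 = np.diff(np.log(evidence(mu)), 2)       # discrete second derivative
    print(mu, "log-concave evidence:", bool((d2 <= 1e-10).all()))
# mu = 0.5 -> True (a convex penalty exists); mu = 4.0 -> False (none can)
```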
Remark 2. This lemma focuses on the specific case where A = I and the noise is white Gaussian. However, considering the transformations induced by a shift to an arbitrary invertible matrix A and to an arbitrary non-degenerate covariance matrix Σ, which are depicted throughout Section 2, it is easy to see that the result on convexity carries over. An analogous (but more complicated) result could also be derived regarding separability. We leave that part to the interested reader.
These results provide a precise characterization of conditions on the Bayesian priors so that the
MMSE estimator can be expressed as a minimizer of a convex or separable function. Interestingly,
those conditions are expressed in terms of the probability distribution function (pdf in short) of the
observations pY , which is sometimes referred to as the evidence. The fact that the evidence plays a
key role in Bayesian estimation has been observed in many contexts, see for example [12]. Given
that we assume that the noise is Gaussian, its pdf p_B is always log-concave. Thanks to a simple property of the convolution of log-concave functions, it is sufficient that the prior on the signal p_Z is log-concave to ensure that p_Y also is. However, it is not a necessary condition. This means that there are some priors p_X that are not log-concave such that the associated MMSE estimator can still
be expressed as the minimizer of a functional with a convex regularizer. For a more detailed analysis
of this last point, in the specific context of Bernoulli-Gaussian priors (which are not log-concave),
please refer to the technical report [13].
From this result, one may also draw an interesting negative result. If the distribution of the observation y is not log-concave, then the MMSE estimate cannot be expressed as the solution of a convex
regularization-oriented formulation. This means that, with a quadratic data-fitting term, a convex
approach to signal estimation cannot be optimal (in a mean squared error sense).
4 Prospects
In this paper we have extended a result, stating that in the context of linear inverse problems with
Gaussian noise, for any Bayesian prior, there exists a regularizer ? such that the MMSE estimator
can be expressed as the solution of regularized regression problem (2). This result is a generalization
of a result in [8]. However, we think it could be extended with regards to many aspects. For instance,
our proof of the result naturally builds on elementary bricks that combine in a way that is imposed
by the definition of the linear inverse problem. However, by developing more bricks and combining
them in different ways, it may be possible to derive analogous results for a variety of other problems.
Moreover, in the situation where A is not invertible (i.e. the problem is under-determined), there is
a large set of admissible regularizers (i.e. up to the choice of a function h in Corollary 2). This additional degree of freedom might be leveraged in order to provide some additional desirable properties,
from an optimization perspective, for instance.
Also, our result relies heavily on the choice of a quadratic loss for the data-fitting term and on a
Gaussian model for the noise. Naturally, investigating other choices (e.g. logistic or hinge loss,
Poisson noise, to name a few) is a question of interest. But a careful study of the proofs in [8]
suggests that there is a peculiar connection between the Gaussian noise model on the one hand and
the quadratic loss on the other hand. However, further investigations should be conducted to get a
deeper understanding on how these really interplay on a higher level.
Finally, we have stated a number of negative results regarding the non-optimality of sparse decoders or of convex formulations for handling observations drawn from a distribution that is not
log-concave. It would be interesting to develop a metric in the estimators space in order to quantify,
for instance, how ?far? one arbitrary estimator is from an optimal one, or, in other words, what is
the intrinsic cost of convex relaxations.
Acknowledgements
This work was supported in part by the European Research Council, PLEASE project (ERC-StG2011-277906).
References
[1] Arthur E. Hoerl and Robert W. Kennard. Ridge regression: applications to nonorthogonal problems. Technometrics, 12(1):69-82, 1970.
[2] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, 58(1):267-288, 1996.
[3] Matthieu Kowalski. Sparse regression using mixed norms. Applied and Computational Harmonic Analysis, 27(3):303-324, 2009.
[4] Francis Bach, Rodolphe Jenatton, Julien Mairal, and Guillaume Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1-106, 2012.
[5] Rodolphe Jenatton, Guillaume Obozinski, and Francis Bach. Active set algorithm for structured sparsity-inducing norms. In OPT 2009: 2nd NIPS Workshop on Optimization for Machine Learning, 2009.
[6] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[7] Rémi Gribonval, Volkan Cevher, and Mike E. Davies. Compressible distributions for high-dimensional statistics. IEEE Transactions on Information Theory, 2012.
[8] Rémi Gribonval. Should penalized least squares regression be interpreted as maximum a posteriori estimation? IEEE Transactions on Signal Processing, 59(5):2405-2410, 2011.
[9] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. Core discussion papers, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain, 2010.
[10] C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S. Sathiya Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In Proceedings of the 25th International Conference on Machine Learning, pages 408-415, 2008.
[11] Pierre Machart, Thomas Peel, Liva Ralaivola, Sandrine Anthoine, and Hervé Glotin. Stochastic low-rank kernel learning for regression. In 28th International Conference on Machine Learning, 2011.
[12] Martin Raphan and Eero P. Simoncelli. Learning to be Bayesian without supervision. In Advances in Neural Information Processing Systems (NIPS*06). MIT Press, 2007.
[13] Rémi Gribonval and Pierre Machart. Reconciling "priors" & "priors" without prejudice? Research report RR-8366, INRIA, September 2013.
4,274 | 4,869 | Robust Sparse Principal Component Regression
under the High Dimensional Elliptical Model
Han Liu
Department of Operations Research
and Financial Engineering
Princeton University
Princeton, NJ 08544
[email protected]
Fang Han
Department of Biostatistics
Johns Hopkins University
Baltimore, MD 21210
[email protected]
Abstract
In this paper we focus on the principal component regression and its application to
high dimensional non-Gaussian data. The major contributions are twofold. First,
in low dimensions and under the Gaussian model, by borrowing the strength from
recent development in minimax optimal principal component estimation, we first
time sharply characterize the potential advantage of classical principal component
regression over least square estimation. Secondly, we propose and analyze a new
robust sparse principal component regression on high dimensional elliptically distributed data. The elliptical distribution is a semiparametric generalization of the
Gaussian, including many well known distributions such as multivariate Gaussian, rank-deficient Gaussian, t, Cauchy, and logistic. It allows the random vector
to be heavy tailed and have tail dependence. These extra flexibilities make it very
suitable for modeling finance and biomedical imaging data. Under the elliptical
model, we prove that our method can estimate the regression coefficients in the
optimal parametric rate and therefore is a good alternative to the Gaussian based
methods. Experiments on synthetic and real world data are conducted to illustrate
the empirical usefulness of the proposed method.
1 Introduction
Principal component regression (PCR) has been widely used in statistics for years (Kendall, 1968). Take the classical linear regression with random design for example. Let x₁, . . . , x_n ∈ R^d be n independent realizations of a random vector X ∈ R^d with mean 0 and covariance matrix Σ. The classical linear regression model and simple principal component regression model can be elaborated as follows:
      (Classical linear regression model)          Y = Xβ + ε;
      (Principal component regression model)       Y = αXu₁ + ε,               (1.1)
where X = (x₁, . . . , x_n)^T ∈ R^{n×d}, Y ∈ R^n, u_i is the i-th leading eigenvector of Σ, and ε ∼ N_n(0, σ²I_n) is independent of X, β ∈ R^d and α ∈ R. Here I_m ∈ R^{m×m} denotes the identity matrix. The principal component regression can then be conducted in two steps: first we obtain an estimator û₁ of u₁; secondly we project the data in the direction of û₁ and solve a simple linear regression in estimating α.
By checking Equation (1.1), it is easy to observe that the principal component regression model is a subset of the general linear regression (LR) model with the constraint that the regression coefficient β is proportional to u₁. There has been a lot of discussion on the advantage of principal component regression over classical linear regression. In low dimensional settings, Massy (1965) pointed out that principal component regression can be much more efficient in handling collinearity among predictors compared to the linear regression. More recently, Cook (2007) and Artemiou and Li (2009) argued that principal component regression has the potential to play a more important role. In particular, letting û_j be the j-th leading eigenvector of the sample covariance matrix Σ̂ of x₁, . . . , x_n, Artemiou and Li (2009) show that under mild conditions, with high probability, the correlation between the response Y and Xû_i is higher than or equal to the correlation between Y and Xû_j when i < j. This indicates, although not rigorously, that there is a possibility that principal component regression can borrow strength from the low rank structure of Σ, which motivates our work.
Even though the statistical performance of principal component regression in low dimensions is not
fully understood, there is even less analysis on principal component regression in high dimensions
where the dimension d can be even exponentially larger than the sample size n. This is partially
due to the fact that estimating the leading eigenvectors of Σ itself has been difficult enough. For example, Johnstone and Lu (2009) show that, even under the Gaussian model, when d/n → γ for some γ > 0, there exist multiple settings under which û₁ can be an inconsistent estimator of u₁. To attack this "curse of dimensionality", one solution is adding a sparsity assumption on u₁, leading to various versions of the sparse PCA. See Zou et al. (2006); d'Aspremont et al. (2007);
Moghaddam et al. (2006), among others. Under the (sub)Gaussian settings, minimax optimal rates
are being established in estimating u1 , . . . , um (Vu and Lei, 2012; Ma, 2013; Cai et al., 2013).
Very recently, Han and Liu (2013b) relax the Gaussian assumption in conducting a scale invariant
version of the sparse PCA (i.e., estimating the leading eigenvector of the correlation instead of the
covariance matrix). However, it can not be easily applied to estimate u1 and the rate of convergence
they proved is not the parametric rate.
This paper improves upon the aforementioned results in two directions. First, with regard to the
classical principal component regression, under a double asymptotic framework in which d is allowed to increase with n, by borrowing the very recent development in principal component analysis (Vershynin, 2010; Lounici, 2012; Bunea and Xiao, 2012), we for the first time explicitly show
the advantage of principal component regression over the classical linear regression. We explicitly
confirm the following two advantages of principal component regression: (i) principal component regression is insensitive to collinearity, while linear regression is very sensitive to it; (ii) principal component regression can utilize the low rank structure of the covariance matrix Σ, while linear regression cannot.
Secondly, in high dimensions where d can increase much faster, even exponentially faster, than n, we propose a robust method for conducting (sparse) principal component regression under a non-Gaussian elliptical model. The elliptical distribution is a semiparametric generalization of the Gaussian, relaxing the light tail and zero tail dependence constraints, but preserving the symmetry property. We refer to Klüppelberg et al. (2007) for more details. This distribution family includes many well known distributions such as multivariate Gaussian, rank deficient Gaussian, t, logistic, and many others. Under the elliptical model, we exploit the result in Han and Liu (2013a), who showed that by utilizing a robust covariance matrix estimator, the multivariate Kendall's tau, we can obtain an estimator ũ₁ which can recover u₁ at the optimal parametric rate as shown in Vu and Lei (2012). We then exploit ũ₁ in conducting principal component regression and show that the obtained estimator can estimate β at the optimal √(s log d / n) rate. The optimal rate in estimating u₁ and β, combined with the discussion in the classical principal component regression, indicates that the proposed method has the potential to handle high dimensional complex data and has its advantage over high dimensional linear regression methods, such as ridge regression and lasso. These theoretical results are also backed up by numerical experiments on both synthetic and real world equity data.
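For concreteness, the following sketch implements the multivariate Kendall's tau matrix in its usual form, the average outer product of normalized pairwise differences. This is our illustrative implementation, not code from the paper, and the O(n²) double loop is kept for clarity rather than speed.

```python
import numpy as np

def multivariate_kendall_tau(X):
    # X is an (n, d) data matrix; returns the (d, d) Kendall's tau matrix
    n, d = X.shape
    K = np.zeros((d, d))
    for i in range(n):
        for j in range(i + 1, n):
            diff = X[i] - X[j]
            nrm2 = diff @ diff
            if nrm2 > 0:
                K += np.outer(diff, diff) / nrm2
    return 2.0 * K / (n * (n - 1))

def robust_leading_eigvec(X):
    # under an elliptical model the eigenvectors of K match those of Sigma,
    # so this is a robust plug-in for the leading eigenvector u_1
    w, V = np.linalg.eigh(multivariate_kendall_tau(X))
    return V[:, -1]
```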
2 Classical Principal Component Regression
This section is devoted to the discussion on the advantage of classical principal component regression over the classical linear regression. We start with a brief introduction of notation. Let M = [M_{ij}] ∈ R^{d×d} and v = (v₁, . . . , v_d)^T ∈ R^d. We denote by v_I the subvector of v whose entries are indexed by a set I. We also denote by M_{I,J} the submatrix of M whose rows are indexed by I and columns are indexed by J. Let M_{I·} and M_{·J} be the submatrix of M with rows indexed by I, and the submatrix of M with columns indexed by J. Let supp(v) := {j : v_j ≠ 0}. For 0 < q < ∞, we define the ℓ₀, ℓ_q and ℓ_∞ vector norms as
      ‖v‖₀ := card(supp(v)),  ‖v‖_q := (Σ_{i=1}^d |v_i|^q)^{1/q}  and  ‖v‖_∞ := max_{1≤i≤d} |v_i|.
Let Tr(M) be the trace of M. Let λ_j(M) be the j-th largest eigenvalue of M and u_j(M) be the corresponding leading eigenvector. In particular, we let λ_max(M) := λ₁(M) and λ_min(M) := λ_d(M). We define S^{d−1} := {v ∈ R^d : ‖v‖₂ = 1} to be the d-dimensional unit sphere. We define the matrix ℓ_max norm and ℓ₂ norm as ‖M‖_max := max{|M_{ij}|} and ‖M‖₂ := sup_{v∈S^{d−1}} ‖Mv‖₂. We define diag(M) to be a diagonal matrix with [diag(M)]_{jj} = M_{jj} for j = 1, . . . , d. We denote vec(M) := (M_{·1}^T, . . . , M_{·d}^T)^T. For any two sequences {a_n} and {b_n}, we denote a_n ≍_{c,C} b_n if there exist two fixed constants c, C such that c ≤ a_n/b_n ≤ C.
Let x1, ..., xn ∈ R^d be n independent observations of a d-dimensional random vector X ∼ N_d(0, Σ), let u1 := u1(Σ), and let ε1, ..., εn ∼ N1(0, σ²) be independent of each other and of {x_i}_{i=1}^n. We suppose that the following principal component regression model holds:
Y = αXu1 + ε,    (2.1)
where Y = (Y1, ..., Yn)ᵀ ∈ R^n, X = [x1, ..., xn]ᵀ ∈ R^{n×d} and ε = (ε1, ..., εn)ᵀ ∈ R^n. We are interested in estimating the regression coefficient β := αu1.
Let β̂ represent the solution of the classical least squares estimator that does not use the information that β is proportional to u1. β̂ can be expressed as follows:
β̂ := (XᵀX)⁻¹XᵀY.    (2.2)
We then have the following proposition, which shows that the mean square error of β̂ − β is closely related to the scale of λ_min(Σ).
Proposition 2.1. Under the principal component regression model shown in (2.1), we have
E‖β̂ − β‖2² = σ²/(n − d − 1) · ( 1/λ1(Σ) + ··· + 1/λ_d(Σ) ).
Proposition 2.1 reflects the vulnerability of the least squares estimator to collinearity. More specifically, when λ_d(Σ) is extremely small, going to zero at the scale of O(1/n), β̂ can be an inconsistent estimator even when d is fixed. On the other hand, using the Markov inequality, when λ_d(Σ) is lower bounded by a fixed constant and d = o(n), the rate of convergence of β̂ is well known to be O_P(√(d/n)).
Motivated by Equation (2.1), the classical principal component regression estimator can be elaborated as follows.
(1) We first estimate u1 using the leading eigenvector û1 of the sample covariance Σ̂ := (1/n) Σ_i x_i x_iᵀ.
(2) We then estimate α ∈ R in Equation (2.1) by standard least squares estimation on the projected data Ẑ := Xû1 ∈ R^n:
α̃ := (ẐᵀẐ)⁻¹ẐᵀY.
The final principal component regression estimator β̃ is then obtained as β̃ = α̃û1.
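For concreteness, the two steps above can be written as a short numerical sketch; the function name pcr_estimate and the use of numpy are our additions, not the paper's, and we assume mean-zero rows of X.

import numpy as np

def pcr_estimate(X, Y):
    # Step (1): leading eigenvector of the sample covariance (mean-zero X assumed).
    n = X.shape[0]
    Sigma_hat = X.T @ X / n
    eigvals, eigvecs = np.linalg.eigh(Sigma_hat)
    u1_hat = eigvecs[:, -1]              # eigenvector of the largest eigenvalue
    # Step (2): one-dimensional least squares on the projected data Z = X u1_hat.
    Z = X @ u1_hat
    alpha_tilde = (Z @ Y) / (Z @ Z)
    return alpha_tilde * u1_hat          # beta_tilde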
We then have the following important theorem, which provides a rate of convergence for β̃ to approximate β.
Theorem 2.2. Let r*(Σ) := Tr(Σ)/λ_max(Σ) represent the effective rank of Σ (Vershynin, 2010). Suppose that
‖β‖2 · √( r*(Σ) log d / n ) = o(1).
Under the model (2.1), when λ_max(Σ) > c1 and λ2(Σ)/λ1(Σ) < C1 < 1 for some fixed constants C1 and c1, we have
‖β̃ − β‖2 = O_P( √( r*(Σ) log d / n ) · ( 1/√n + α + 1/√λ_max(Σ) ) ).    (2.3)
Theorem 2.2, compared to Proposition 2.1, provides several important messages on the performance of principal component regression. First, compared to the least squares estimator β̂, β̃ is insensitive to collinearity in the sense that λ_min(Σ) plays no role in the rate of convergence of β̃. Secondly, when λ_min(Σ) is lower bounded by a fixed constant and α is upper bounded by a fixed constant, the rate of convergence for β̂ is O_P(√(d/n)) and for β̃ is O_P(√(r*(Σ) log d / n)), where r*(Σ) := Tr(Σ)/λ_max(Σ) ≤ d and is of order o(d) when there exists a low rank structure for Σ. These two observations, combined together, illustrate the advantages of the classical principal component regression over least squares estimation, and justify the use of principal component regression. There is one more thing to be noted: the performance of β̃, unlike β̂, depends on α. When α is small, β̃ can predict β more accurately.
These three observations are verified in Figure 1. Here the data are generated according to Equation (2.1) and we set n = 100, d = 10, Σ to be a diagonal matrix with descending diagonal values Σ_ii = λ_i, and σ² = 1. In Figure 1(A), we set α = 1, λ1 = 10, λ_j = 1 for j = 2, ..., d−1, and change λ_d from 1 to 1/100; in Figure 1(B), we set α = 1, λ_j = 1 for j = 2, ..., d, and change λ1 from 1 to 100; in Figure 1(C), we set λ1 = 10, λ_j = 1 for j = 2, ..., d, and change α from 0.1 to 10. In the three figures, the empirical mean square error is plotted against 1/λ_d, λ1, and α. It can be observed that the results, one by one, match the theory.
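A minimal sketch of the Figure 1(A) experiment under our reading of the setup (diagonal Σ with a shrinking λ_d); pcr_estimate refers to the earlier sketch and the plotting code is omitted.

import numpy as np

def empirical_mse(n=100, d=10, alpha=1.0, lam_d=0.01, reps=200):
    lams = np.ones(d); lams[0] = 10.0; lams[-1] = lam_d
    u1 = np.eye(d)[:, 0]                 # leading eigenvector of diag(lams)
    beta = alpha * u1
    err_lr = err_pcr = 0.0
    for _ in range(reps):
        X = np.random.randn(n, d) * np.sqrt(lams)   # rows ~ N(0, diag(lams))
        Y = alpha * (X @ u1) + np.random.randn(n)
        b_lr = np.linalg.lstsq(X, Y, rcond=None)[0] # classical linear regression
        b_pcr = pcr_estimate(X, Y)
        err_lr += np.sum((b_lr - beta) ** 2)
        err_pcr += np.sum((b_pcr - beta) ** 2)
    return err_lr / reps, err_pcr / reps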
Figure 1: Justification of Proposition 2.1 and Theorem 2.2. The empirical mean square errors are plotted against 1/λ_d, λ1, and α in (A), (B), and (C), respectively. The results of classical linear regression (LR) and principal component regression (PCR) are marked by the black solid line and the red dotted line.
3 Robust Sparse Principal Component Regression under Elliptical Model
In this section, we propose a new principal component regression method. We generalize the settings of classical principal component regression discussed in the last section in two directions: (i) we consider the high dimensional setting where the dimension d can be much larger than the sample size n; (ii) in modeling the predictors x1, ..., xn, we consider the more general elliptical, instead of the Gaussian, distribution family. The elliptical family can capture characteristics such as heavy tails and tail dependence, making it more suitable for analyzing complex datasets in finance, genomics, and biomedical imaging.
3.1 Elliptical Distribution
In this section we define the elliptical distribution and introduce its basic properties. We write X =d Y if the random vectors X and Y have the same distribution. Here we only consider continuous random vectors with a density. To our knowledge, there are essentially four ways to define the continuous elliptical distribution with density. The most intuitive is as follows: a random vector X ∈ R^d is said to follow an elliptical distribution EC_d(μ, Σ, ξ) if and only if there exist a random variable ξ > 0 (a.s.) and a Gaussian vector Z ∼ N_d(0, Σ) such that
X =d μ + ξZ.    (3.1)
Note that here ξ is not necessarily independent of Z. Accordingly, the elliptical distribution can be regarded as a semiparametric generalization of the Gaussian distribution, with the nonparametric part ξ. Because ξ can be very heavy tailed, X can also be very heavy tailed. Moreover, when Eξ² exists, we have
Cov(X) = Eξ²Σ and λ_j(Cov(X)) = λ_j(Σ) for j = 1, ..., d.
This implies that, when Eξ² exists, to recover u1 := u1(Cov(X)), we only need to recover u1(Σ). Here Σ is conventionally called the scatter matrix.
We would like to point out that the elliptical family is significantly larger than the Gaussian family. In fact, the Gaussian is fully parameterized by finite dimensional parameters (mean and covariance). In contrast, the elliptical is a semiparametric family (the elliptical density can be represented as g((x−μ)ᵀΣ⁻¹(x−μ)), where the function g(·) is completely unspecified). If we consider the "volumes" of the elliptical family and the Gaussian family with respect to the Lebesgue reference measure, the volume of the Gaussian family is zero (like a line in a 3-dimensional space), while the volume of the elliptical family is positive (like a ball in a 3-dimensional space).
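A minimal sketch of sampling from EC_d(μ, Σ, ξ) directly from the representation (3.1); passing the generating variable as a callable draw_xi is our own convention, not the paper's.

import numpy as np

def sample_elliptical(n, mu, Sigma, draw_xi):
    d = len(mu)
    L = np.linalg.cholesky(Sigma)                  # Sigma = L L^T (positive definite assumed)
    Z = np.random.randn(n, d) @ L.T                # rows ~ N_d(0, Sigma)
    xi = np.array([draw_xi() for _ in range(n)])   # generating variable, xi > 0
    return mu + xi[:, None] * Z

# Example: a multivariate-t-like draw with xi =d sqrt(kappa / chi^2_kappa), kappa = 3:
# X = sample_elliptical(100, np.zeros(5), np.eye(5),
#                       lambda: np.sqrt(3.0 / np.random.chisquare(3)))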
3.2 Multivariate Kendall's tau
As an important step in conducting principal component regression, we need to estimate u1 = u1(Cov(X)) = u1(Σ) as accurately as possible. Since the random variable ξ in Equation (3.1) can be very heavy tailed, the corresponding elliptically distributed random vector can be heavy tailed. Therefore, as has been pointed out by various authors (Tyler, 1987; Croux et al., 2002; Han and Liu, 2013b), the leading eigenvector of the sample covariance matrix Σ̂ can be a bad estimator of u1 = u1(Σ) under the elliptical distribution. This motivates developing a robust estimator.
In particular, in this paper we consider using the multivariate Kendall's tau (Choi and Marden, 1998), recently studied in depth by Han and Liu (2013a). In the following we give a brief description of this estimator. Let X ∼ EC_d(μ, Σ, ξ) and X̃ be an independent copy of X. The population multivariate Kendall's tau matrix, denoted by K ∈ R^{d×d}, is defined as:
K := E[ (X − X̃)(X − X̃)ᵀ / ‖X − X̃‖2² ].    (3.2)
Let x1, ..., xn be n independent observations of X. The sample version of the multivariate Kendall's tau is accordingly defined as
K̂ = (1 / (n(n−1))) Σ_{i≠j} (x_i − x_j)(x_i − x_j)ᵀ / ‖x_i − x_j‖2²,    (3.3)
and we have that E(K̂) = K. K̂ is a matrix-version U-statistic, and it is easy to see that max_{jk} |K_jk| ≤ 1 and max_{jk} |K̂_jk| ≤ 1. Therefore, K̂ is a bounded matrix and hence can be a nicer statistic than the sample covariance matrix.
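A direct O(n²d) implementation of (3.3) in numpy; it assumes no exactly tied observations x_i = x_j, which is our simplification.

import numpy as np

def kendall_tau_matrix(X):
    n, d = X.shape
    K = np.zeros((d, d))
    for i in range(n):
        D = X[i] - X[i + 1:]                       # differences x_i - x_j for j > i
        D = D / np.linalg.norm(D, axis=1, keepdims=True)
        K += D.T @ D                               # sum of outer products of unit differences
    return 2.0 * K / (n * (n - 1))                 # each unordered pair appears twice in (3.3)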
Moreover, we have the following important proposition, coming from Oja (2010), showing that K has the same eigenspace as Σ and Cov(X).
Proposition 3.1 (Oja (2010)). Let X ∼ EC_d(μ, Σ, ξ) be a continuous distribution and K be the population multivariate Kendall's tau statistic. Then if λ_j(Σ) ≠ λ_k(Σ) for any k ≠ j, we have
u_j(Σ) = u_j(K) and λ_j(K) = E( λ_j(Σ)U_j² / (λ1(Σ)U1² + ... + λ_d(Σ)U_d²) ),    (3.4)
where U := (U1, ..., U_d)ᵀ follows a uniform distribution on S^{d−1}. In particular, when Eξ² exists, u_j(Cov(X)) = u_j(K).
3.3 Model and Method
In this section we discuss the model we build and the correspondingly proposed method for conducting high dimensional (sparse) principal component regression on non-Gaussian data.
As in Section 2, we consider the classical simple principal component regression model:
Y = αXu1 + ε = α[x1, ..., xn]ᵀu1 + ε.
To relax the Gaussian assumption, we assume that both x1, ..., xn ∈ R^d and ε1, ..., εn ∈ R are elliptically distributed, with x_i ∼ EC_d(0, Σ, ξ). To allow the dimension d to increase much faster than n, we impose a sparsity structure on u1 = u1(Σ). Moreover, to make u1 identifiable, we assume that λ1(Σ) ≠ λ2(Σ). Thus, the formal model of the robust sparse principal component regression considered in this paper is as follows:
M_d(Y, ε; Σ, ξ, s):  Y = αXu1 + ε,  x1, ..., xn ∼ EC_d(0, Σ, ξ),  ‖u1(Σ)‖0 ≤ s,  λ1(Σ) ≠ λ2(Σ).    (3.5)
The robust sparse principal component regression can then be elaborated as a two step procedure:
(i) Inspired by the model M_d(Y, ε; Σ, ξ, s) and Proposition 3.1, we consider the following optimization problem to estimate u1 := u1(Σ):
ũ1 = argmax_{v∈R^d} vᵀK̂v,  subject to v ∈ S^{d−1} ∩ B0(s),    (3.6)
where B0(s) := {v ∈ R^d : ‖v‖0 ≤ s} and K̂ is the estimated multivariate Kendall's tau matrix. The corresponding global optimum is denoted by ũ1. Using Proposition 3.1, ũ1 is also an estimator of u1(Cov(X)), whenever the covariance matrix exists.
(ii) We then estimate α ∈ R in Equation (3.5) by standard least squares estimation on the projected data Z̃ := Xũ1 ∈ R^n:
α̌ := (Z̃ᵀZ̃)⁻¹Z̃ᵀY.
The final principal component regression estimator β̌ is then obtained as β̌ = α̌ũ1.
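A sketch of the full two step estimator: a basic truncated power iteration (in the spirit of Yuan and Zhang, 2013, but simplified, e.g. in its initialization) applied to the Kendall's tau matrix from the earlier sketch, followed by step (ii).

import numpy as np

def truncated_power(K, s, iters=200):
    v = np.zeros(K.shape[0])
    v[np.argmax(np.diag(K))] = 1.0                 # crude sparse initialization
    for _ in range(iters):
        w = K @ v
        keep = np.argsort(np.abs(w))[-s:]          # keep the s largest-magnitude entries
        w_trunc = np.zeros_like(w)
        w_trunc[keep] = w[keep]
        v = w_trunc / np.linalg.norm(w_trunc)
    return v

def rpcr_estimate(X, Y, s):
    u1_tilde = truncated_power(kendall_tau_matrix(X), s)
    Z = X @ u1_tilde                               # projected data, step (ii)
    alpha_check = (Z @ Y) / (Z @ Z)
    return alpha_check * u1_tilde                  # beta_check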
3.4 Theoretical Property
In Theorem 2.2, we showed that how accurately u1 is estimated plays an important role in conducting principal component regression. Following this discussion and the very recent results in Han and Liu (2013a), the following "easiest" and "hardest" conditions are considered. Here γ_L, γ_U are two constants larger than 1.
Condition 1 ("Easiest"): λ1(Σ) ≍^{1,γ_U} dλ_j(Σ) for any j ∈ {2, ..., d} and λ2(Σ) ≍^{1,γ_U} λ_j(Σ) for any j ∈ {3, ..., d};
Condition 2 ("Hardest"): λ1(Σ) ≍^{γ_L,γ_U} λ_j(Σ) for any j ∈ {2, ..., d}.
In the sequel, we say that the model M_d(Y, ε; Σ, ξ, s) holds if the data (Y, X) are generated from the model M_d(Y, ε; Σ, ξ, s).
Under Conditions 1 and 2, we then have the following theorem, which shows that under certain conditions ‖β̌ − β‖2 = O_P(√(s log d / n)), which is the optimal parametric rate in estimating the regression coefficient (Ravikumar et al., 2008).
Theorem 3.2. Let the model M_d(Y, ε; Σ, ξ, s) hold, let |α| in Equation (3.5) be upper bounded by a constant, and let ‖β‖2 be lower bounded by a constant. Then under Condition 1 or Condition 2, and for any random vector X such that
max_{v∈S^{d−1}, ‖v‖0≤2s} |vᵀ(Σ̂ − Σ)v| = o_P(1),
the robust principal component regression estimator β̌ satisfies
‖β̌ − β‖2 = O_P( √( s log d / n ) ).
Figure 2: Curves of averaged estimation errors between the estimates and true parameters for different distributions (normal, multivariate-t, EC1, and EC2, from left to right) using the truncated power method. Here n = 100, d = 200, and we are interested in estimating the regression coefficient β. The horizontal axis represents the cardinalities of the estimates' support sets and the vertical axis represents the empirical mean square error. From left to right, the minimum mean square errors for the lasso are 0.53, 0.55, 1, and 1.
4 Experiments
In this section we conduct studies on both synthetic and real-world data to investigate the empirical performance of the robust sparse principal component regression proposed in this paper. We use the truncated power algorithm proposed in Yuan and Zhang (2013) to approximate the global optimum ũ1 of (3.6). Here the cardinalities of the support sets of the leading eigenvectors are treated as tuning parameters. The following three methods are considered:
lasso: the classical L1 penalized regression;
PCR: the sparse principal component regression using the sample covariance matrix as the sufficient statistic and exploiting the truncated power algorithm to estimate u1;
RPCR: the robust sparse principal component regression proposed in this paper, using the multivariate Kendall's tau as the sufficient statistic and exploiting the truncated power algorithm to estimate u1.
4.1 Simulation Study
In this section, we conduct a simulation study to back up the theoretical results and further investigate the empirical performance of the proposed robust sparse principal component regression method.
To illustrate the empirical usefulness of the proposed method, we first consider generating the data matrix X. To generate X, we need to consider how to generate Σ and ξ. In detail, let λ1 > λ2 > λ3 = ... = λ_d be the eigenvalues and u1, ..., u_d be the eigenvectors of Σ with u_j := (u_{j1}, ..., u_{jd})ᵀ. The top 2 leading eigenvectors u1, u2 of Σ are specified to be sparse with s_j := ‖u_j‖0 and u_{jk} = 1/√s_j for k ∈ [1 + Σ_{i=1}^{j−1} s_i, Σ_{i=1}^{j} s_i] and zero otherwise. Σ is generated as Σ = Σ_{j=1}^{2} (λ_j − λ_d)u_j u_jᵀ + λ_d I_d. Across all settings, we let s1 = s2 = 10, λ1 = 5.5, λ2 = 2.5, and λ_j = 0.5 for all j = 3, ..., d (a code sketch of this covariance construction follows after the list below). With Σ, we then consider the following four different elliptical distributions:
(Normal) X ∼ EC_d(0, Σ, ξ1) with ξ1 =d χ_d. Here χ_d is the chi distribution with d degrees of freedom: for Y1, ..., Y_d i.i.d. N(0,1), √(Y1² + ... + Y_d²) =d χ_d. In this setting, X follows the Gaussian distribution (Fang et al., 1990).
(Multivariate-t) X ∼ EC_d(0, Σ, ξ2) with ξ2 =d √κ · ξ1*/ξ2*. Here ξ1* =d χ_d and ξ2* =d χ_κ with κ ∈ Z⁺. In this setting, X follows a multivariate-t distribution with κ degrees of freedom (Fang et al., 1990). Here we consider κ = 3.
(EC1) X ∼ EC_d(0, Σ, ξ3) with ξ3 ∼ F(d, 1), an F distribution.
(EC2) X ∼ EC_d(0, Σ, ξ4) with ξ4 ∼ Exp(1), an exponential distribution.
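A sketch of the covariance construction described above; the index bookkeeping follows our reading of the (partly garbled) source, and make_sigma is our own name.

import numpy as np

def make_sigma(d=200, s=(10, 10), lams=(5.5, 2.5), lam_rest=0.5):
    U = np.zeros((d, 2))
    start = 0
    for j, sj in enumerate(s):                     # u_jk = 1/sqrt(s_j) on disjoint index blocks
        U[start:start + sj, j] = 1.0 / np.sqrt(sj)
        start += sj
    Sigma = lam_rest * np.eye(d)
    for j in range(2):
        Sigma += (lams[j] - lam_rest) * np.outer(U[:, j], U[:, j])
    return Sigma, U[:, 0]                          # returns Sigma and the sparse u1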
We then simulate x1, ..., xn from X; this forms a data matrix X. Secondly, we let Y = Xu1 + ε, where ε ∼ N_n(0, I_n). This produces the data (Y, X). We repeatedly generate n data points according to each of the four distributions discussed above, 1,000 times each. To show the estimation accuracy, Figure 2 plots the empirical mean square error between the estimates and the true regression coefficient β against the number of estimated nonzero entries (defined as ‖ũ1‖0), for PCR and RPCR, under different schemes of (n, d) and different distributions. Here we considered n = 100 and d = 200.
Note that we do not plot the results of the lasso in Figure 2. As discussed in Section 2, and especially as shown in Figure 1, linear regression and principal component regression have their own advantages in different settings. More specifically, we do not plot the results of the lasso here simply because it performs poorly under our simulation settings. For example, under the Gaussian setting with n = 100 and d = 200, the lowest mean square error for the lasso is 0.53 and the errors are on average above 1.5, while for RPCR the lowest is 0.13 and the errors are on average below 1.
Figure 2 shows that when the data are non-Gaussian but elliptically distributed, RPCR outperforms PCR consistently in terms of estimation accuracy. Moreover, when the data are indeed normally distributed, there is no obvious difference between RPCR and PCR, indicating that RPCR is a safe alternative to the classical sparse principal component regression.
Figure 3: (A) Quantile vs. quantile plot of the log-return values for one stock, "Goldman Sachs".
(B) Prediction error against the number of features selected. The scale of the prediction errors is
enlarged by 100 times for better visualization.
4.2 Application to Equity Data
In this section we apply the proposed robust sparse principal component regression and the other two methods to stock price data from Yahoo! Finance (finance.yahoo.com). We collect the daily closing prices for 452 stocks that were consistently in the S&P 500 index between January 1, 2003 and January 1, 2008. This gives us altogether T = 1,257 data points, each corresponding to the vector of closing prices on a trading day. Let S = [S_{t,j}] denote the closing price of stock j on day t. We are interested in the log return data X = [X_{tj}] with X_{tj} = log(S_{t,j}/S_{t−1,j}).
We first show that this data set is non-Gaussian and heavy tailed. This is done first by conducting marginal normality tests (Kolmogorov-Smirnov, Shapiro-Wilk, and Lilliefors) on the data. We find that at most 24 out of the 452 stocks pass any of the three normality tests. With Bonferroni correction there are still over half of the stocks that fail to pass any normality test. Moreover, to illustrate the heavy tailed issue, we plot the quantile vs. quantile plot for one stock, "Goldman Sachs", in Figure 3(A). It can be observed that the log return values for this stock are heavy tailed compared to the Gaussian.
To illustrate the power of the proposed method, we first pick a subset of the data. The stocks can be summarized into 10 Global Industry Classification Standard (GICS) sectors, and we focus on the subcategory "Financial". This leaves us 74 stocks, and we denote the resulting data by F ∈ R^{1257×74}. We are interested in predicting the log return value on day t for each stock indexed by k (i.e., treating F_{t,k} as the response) using the log return values for all the stocks from day t − 7 to day t − 1 (i.e., treating vec(F_{t−7≤t′≤t−1,*}) as the predictor). The dimension of the regressor is accordingly 7 × 74 = 518. For each stock indexed by k, to learn the regression coefficient β_k, we use F_{t′∈{1,...,1256},*} as the training data and apply the three different methods to this dataset. For each method, after obtaining an estimator β̂_k, we use vec(F_{t′∈{1250,...,1256},*})β̂_k to estimate F_{1257,k}. This procedure is repeated for each k and the averaged prediction errors are plotted against the number of features selected (i.e., ‖β̂_k‖0) in Figure 3(B). To visualize the difference more clearly, in the figure we enlarge the scale of the prediction errors by 100 times. It can be observed that RPCR has the universally lowest prediction error regardless of the number of features.
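A schematic of the per-stock prediction loop above; F is the 1257 x 74 log-return matrix and fit() stands in for any of the three regression methods (both names are our own placeholders).

import numpy as np

def predict_stock(F, k, fit):
    T, p = F.shape                                 # T = 1257 trading days, p = 74 stocks
    X = np.stack([F[t - 7:t].ravel() for t in range(7, T - 1)])  # 7*74 = 518 dim regressors
    y = F[7:T - 1, k]                              # responses: log return of stock k on day t
    beta_hat = fit(X, y)                           # e.g. lasso / PCR / RPCR coefficient estimate
    x_new = F[T - 8:T - 1].ravel()                 # days t-7, ..., t-1 for the final day
    return x_new @ beta_hat                        # prediction of F[T-1, k]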
Acknowledgement
Han's research is supported by a Google fellowship. Liu is supported by NSF Grants III-1116730 and NSF III-1332109, an NIH sub-award and an FDA sub-award from Johns Hopkins University.
References
Artemiou, A. and Li, B. (2009). On principal components and regression: a statistical explanation of a natural phenomenon. Statistica Sinica, 19(4):1557.
Bunea, F. and Xiao, L. (2012). On the sample covariance matrix estimator of reduced effective rank population matrices, with applications to fPCA. arXiv preprint arXiv:1212.5321.
Cai, T. T., Ma, Z., and Wu, Y. (2013). Sparse PCA: Optimal rates and adaptive estimation. The Annals of Statistics (to appear).
Choi, K. and Marden, J. (1998). A multivariate version of Kendall's tau. Journal of Nonparametric Statistics, 9(3):261-293.
Cook, R. D. (2007). Fisher lecture: Dimension reduction in regression. Statistical Science, 22(1):1-26.
Croux, C., Ollila, E., and Oja, H. (2002). Sign and rank covariance matrices: statistical properties and application to principal components analysis. In Statistical Data Analysis Based on the L1-Norm and Related Methods, pages 257-269. Springer.
d'Aspremont, A., El Ghaoui, L., Jordan, M. I., and Lanckriet, G. R. (2007). A direct formulation for sparse PCA using semidefinite programming. SIAM Review, 49(3):434-448.
Fang, K., Kotz, S., and Ng, K. (1990). Symmetric Multivariate and Related Distributions. Chapman & Hall, London.
Han, F. and Liu, H. (2013a). Optimal sparse principal component analysis in high dimensional elliptical model. arXiv preprint arXiv:1310.3561.
Han, F. and Liu, H. (2013b). Scale-invariant sparse PCA on high dimensional meta-elliptical data. Journal of the American Statistical Association (in press).
Johnstone, I. M. and Lu, A. Y. (2009). On consistency and sparsity for principal components analysis in high dimensions. Journal of the American Statistical Association, 104(486).
Kendall, M. G. (1968). A Course in Multivariate Analysis.
Klüppelberg, C., Kuhn, G., and Peng, L. (2007). Estimating the tail dependence function of an elliptical distribution. Bernoulli, 13(1):229-251.
Lounici, K. (2012). Sparse principal component analysis with missing observations. arXiv preprint arXiv:1205.7060.
Ma, Z. (2013). Sparse principal component analysis and iterative thresholding. The Annals of Statistics (to appear).
Massy, W. F. (1965). Principal components regression in exploratory statistical research. Journal of the American Statistical Association, 60(309):234-256.
Moghaddam, B., Weiss, Y., and Avidan, S. (2006). Spectral bounds for sparse PCA: Exact and greedy algorithms. Advances in Neural Information Processing Systems, 18:915.
Oja, H. (2010). Multivariate Nonparametric Methods with R: An Approach Based on Spatial Signs and Ranks, volume 199. Springer.
Ravikumar, P., Raskutti, G., Wainwright, M., and Yu, B. (2008). Model selection in Gaussian graphical models: High-dimensional consistency of l1-regularized MLE. Advances in Neural Information Processing Systems (NIPS), 21.
Tyler, D. E. (1987). A distribution-free M-estimator of multivariate scatter. The Annals of Statistics, 15(1):234-251.
Vershynin, R. (2010). Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027.
Vu, V. Q. and Lei, J. (2012). Minimax rates of estimation for sparse PCA in high dimensions. Journal of Machine Learning Research (AISTATS Track).
Yuan, X. and Zhang, T. (2013). Truncated power method for sparse eigenvalue problems. Journal of Machine Learning Research, 14:899-925.
Zou, H., Hastie, T., and Tibshirani, R. (2006). Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):265-286.
4,275 | 487 | Simulation of Optimal Movements Using the
Minimum-Muscle-Tension-Change Model.
Menashe Dornay*
Yoji Uno"
Mitsuo Kawato*
Ryoji Suzuki**
?Cognitive Processes Department, ATR Auditory and Visual Perception Research
Laboratories, Sanpeidani, Inuidani, Seika-Cho, Soraku-Gun, Kyoto 619-02 Japan.
??Department of Mathematical Engineering and Information Physics, Faculty of
Engineering, University of Tokyo, Hongo, Bunkyo-ku, Tokyo, 113 Japan.
Abstract
This work discusses various optimization techniques which were
proposed in models for controlling arm movements. In particular, the
minimum-muscle-tension-change model is investigated. A dynamic
simulator of the monkey's arm, including seventeen single and double
joint muscles, is utilized to generate horizontal hand movements. The
hand trajectories produced by this algorithm are discussed.
1
INTRODUCTION
To perform a voluntary hand movement, the primate nervous system must solve the
following problems: (A) Which trajectory (hand path and velocity) should be used while
moving the hand from the initial to the desired position. (lB) What muscle forces should
be generated. Those two problems are termed "ill-posed" because they can be solved in
an infinite number of ways. The interesting question to us is: what strategy does the
nervous system use while choosing a specific solution for these problems? The chosen
solutions must comply with the known experimental data: Human and monkey's free
horizontal multi-joint hand movements have straight or gently curved paths. The hand
velocity profiles are always roughly bell shaped (Bizzi & Abend 1986).
627
628
Damay, Uno, Kawato, and Suzuki
1.1 THE MINIMUM-JERK MODEL
Flash and Hogan (1985) proposed that a global kinematic optimization approach, the
minimum-jerk model, defines a solution for the trajectory detennination problem (problem
A). Using this strategy, the nervous system is choosing the (unique) smoothest trajectory
of the hand for any horizontal movement, without having to deal with the structure or
dynamics of the ann. The minimum-jerk model produces reasonable approximations for
hand trajectories in unconstrained point to point movements in the horizontal plane in
front of the body (Flash & Hogan 1985; Morasso 1981; Uno et al. 1989a). It fails to
describe, however, some important experimental findings for human arm movements (Uno
et al. 1989a).
1.2 THE EQUILIBRIUM-TRAJECTORY HYPOTHESIS
According to the equilibrium-trajectory hypothesis (Feldman 1966), the nervous system
generates movements by a gradual change in the equilibrium posture of the hand: at all
times during the execution of a movement the muscle forces defines a stable posture
which acts as a point of attraction in the configurational space of the limb. The actual
hand movement is the realized trajectory. The realized hand trajectory is usually different
from the attracting pre-planned virtual trajectory (Hogan 1984). Simulations by Flash
(1987), have suggested that realistic multi-joint ann movements at moderate speed can be
generated by moving the hand eqUilibrium position along a pre-planned minimum-jerk
virtual trajectory. The interactions of the dynamic properties of the ann and the attracting
virtual trajectory create together the actual realized trajectory. Flash did not suggest a
solution to problem lB.
A static local optimization algorithm related to the equilibrium-trajectory hypothesis and
called backdriving was proposed by Mussa-Ivaldi et al. (1991). This algorithm can be used
to solve problem lB only after the virtual trajectory is known. The virtual trajectory is not
necessarily a minimum-jerk trajectory. Driving the arm from a current equilibrium position
to the next one on the virtual trajectory is perfonned by two steps: 1) simulate a passive
displacement of the arm to the new position and 2) update the muscle forces so as to
eliminate the induced hand force. A unique active change (step 2) is chosen by finding
these muscle forces which minimize the change in the potential energy stored in the
muscles. Using a static model of the monkey's arm, the first author has analyzed this
sequential computational approach, including a solution for both the trajectory
detennination (A) and the muscle forces (lB) problems (Domay 1990, 1991a, 1991b).
The equilibrium-trajectory hypothesis which is using the minimum-jerk model was
criticized by Katayama and Kawato (in preparation). According to their recent findings,
the values of the dynamic stiffness used by Flash (1987) are too high to be realistic. They
have found that a very complex virtual trajectory, completely different from the one
predicted by the minimum-jerk model, is needed for coding realistic hand movements.
Simulation of Optimal Movements Using the Minimum-Muscle-Tension-Change Model
2
GLOBAL DYNAMIC OPTIMIZATIONS
A set of global dynamic optimizations have been proposed by Uno et al. (1989a, 1989b).
Uno et al. suggested that the dynamic properties of the arm must be considered by any
algorithm for controlling hand movements. They also proposed that the hand trajectory
and the motor commands (joint torques, muscle tensions, etc.,) are computed in parallel.
2.1 THE MINIMUM-TORQUE-CHANGE MODEL
Uno et al. (1989a) have proposed the minimum-torque-change model. The model proposes
that the hand trajectory and the joint torques are determined simultaneously, while the
algorithm minimizes globally the rate of change of the joint torques. The minimum-torquechange model was criticized by Flash (1990), saying that the rotary inertia used was not
realistic. If Flash's inertia values are used then the hand path predicted by the minimumtorque-change model is curved (Flash 1990).
2.2 THE MINIMUM-MUSCLE-TENSION-CHANGE MODEL
The minimum-muscle-tension-change model (Uno et al. 1989b, Domay et al. 1991) is a
parallel dynamic optimization approach in which the trajectory determination problem (A)
and the muscle force generation problem (]B) are solved simultaneously. No explicit
trajectory is imposed on the hand, but that it must reach the final desired state (position,
velocity, etc.) in a pre-specified time. The numerical solution used is a "penalty" method,
in which the controller minimizes globally by iterations an energy function E :
(1)
E is the energy that must be minimized in iterations. ED is a collection of hard
constraints, like, for example that the hand must reach the desired position at the specified
time. Es is a smoothness constraint, like the minimum-muscle-tension-change model. "is a regularization function, that needs to become smaller and smaller as the number of
iterations increases. This is a key point because the hard constraints must be strictly
satisfied at the end of the iterative process. ? is a small rate term. The smoothness
constraint Es ' is the minimum-muscle-tension-change model, defined as:
(2)
!; is the tension of muscle i, n is the total number of muscles, to is the initial time and
trut is the final time of the movement.
Preliminary studies have shown (Uno et al. 1989b) that the minimum-muscle-tensionchange model can simulate reasonable hand movements.
629
630
Damay, Uno, Kawato, and Suzuki
3
THE MONKEY'S ARM MODEL
The model used was recently described (Domay 1991a; Domay et al. 1991). It is based
on anatomical study using the Rhesus monkey. Attachments of 17 shoulder. elbow and
double joint muscles were marked on the skeleton. The skeleton was cleaned and
reassembled to a natural configuration of a monkey during horizontal arm movements
(Fig. 1). X-ray analysis was used to create a simplified horizontal model of the arm (Fig.
1). Effective origins and insertions of the muscles were estimated by computer
simulations to ensure the postural stability ofthe hand at equilibrium (Domay 1991a). The
simplified dynamic model used in this study is described in Domay et al. (1991).
Figure 1: The Monkey's Arm Model. Top left is a ventral view of the skeleton. Middle
right is a dorsal view. The bottom shows a top-down X-ray projection of the skeleton.
with the axes marked on it. The photos were taken by Mr. H.S. Hall. MIT.
Simulation of Optimal Movements Using the Minimum-Muscle-Tension-Change Model
4
THE BEHAVIORAL TASK
We tried to simulate the horizontal arm movements reported by Uno et al. (1989a) for
human subjects, using the monkey's model. Fig. 2 (left) shows a top view of the hand
workspace of the monkey (light small dots). We used 7 hand positions defined by the
following shoulder and elbow relative angles (in degrees): Tl (14,122); T z {67,100}; T3
{75,64}; T4 {63,45}; Ts {35,54}; T6 {-5,lOl} and T7 {-25,45}. The joint angles used by
Uno et al. (1989a) for T4 and T7, {77.22} and {O,O}, are out of the workspace of the
monkey's hand (open circles in Fig 2. left). We approximated them by our T4 and T7
(filled circles). The behavioral task that we simulated using the minimum-muscle-tensionchange model consisted of the 4 trajectories shown in Fig. 2 (right).
5
SIMULATION RESULTS
Figure 2 (right) shows the paths (T z->T6 ), (T3->T6), (T4->T1), and (T7->Ts)' The paths T2>T6' T 3->T6 and T7->Ts are slightly convex. Slightly convex paths for T z->T6 were
reported in human movements by Flash (1987), Uno et al. (l989a) and Morasso (1981).
Human T3->T6 paths have a small tendency to be slightly convex (Uno et al. 1989a; Flash
(1987). In our simulations, T z->T6 and T3->T6 have slightly larger curvatures than those
reported in humans. Human large movements from the side of the body to the front of the
body similar to our T7->Ts were reported by Uno et al. (1989a). The path of these
movements is convex and similar to our simulation results. The simulated path of T4->T 1
is slightly curved to the left and then to the right, but roughly straight. The human's T4>TI paths look slightly straighter than in our simulations (Uno et at. 1989a; Flash 1987).
0.(
,.--.....
if)
~
0:3
(l)
~
(l)
0.2
E
'-'
T4
T3 -..:.......
___ ,
T5
...... "-....
~ .......... :::.:::..
....
T1
T6 \"
{.;#:!..
X
\\T7
+
0 .0
-01 !
-C.2
-O.2m
'.
o. i
>-'
-O.38m
?
-01
0.0
0.1
0.2
03
04
X (meters)
Figure 2. The Behavioral Task. The left side shows the hand workspace (small dots). The
shoulder position and origin of coordinates (0,0) is marked by +. The elbow location
when the hand is on position TI is marked by E. The right side shows 4 hand paths
simulated by the minimum-musc1e-tension-change model. Arrows indicate the directions
of the movements.
631
632
Domay, Uno, Kawato, and Suzuki
Fig. 3 shows the corresponding simulated hand velocities. The velocity profiles have a
single peak and are roughly bell shaped, like those reported for human subjects. The left
side of the velocity profile of T 4->T t looks slightly irregular.
The hand trajectories simulated here are in general closer to human data than those
reported by us in the past (Domay et al. 1991). In the current study we used a much
slower protocol for reducing A than in the previous study, and we think that we are closer
now to the optimal solution of the numerical calculation than in the previous study.
Indeed, the hand velocity profiles and muscle tension profiles look smoother here than in
the previous study. It is in general very difficult to guarantee that the optimal solution is
achieved, unless an unpractical large number of iterations is used. Fig. 4 (top,left) shows
the way ED and Es of equation 1 are changing as a function A for the trajectory T7 ->T5 ?
Ideally. both should reach a plato when the optimal solution is reached. The muscle
tensions simulated for T1->T5 are shown in Fig. 4. They look quite smooth .
f
..'/"".-.
..:
:;:-
?1::
j
~
:
? J
....
o ./'
??
02
"
.t
/"..
??
..
II
??
~
:/
./
e.
"
.
II
.. T
.
4
..
.
'
.,
.? T . .T
I.
11
7O+:1j5
.
??
t.
- ()
---'--'-'--,Tlme(.'
00
/~""
.....
..-,
.:.
.
...
G'
.,,/
\ ...
?? e.. ...
~
I
!
a. ?? .?
,
,
,
Figure 3. The Hand Tangential VelocIty.
6
DISCUSSION
Various control strategies have been proposed to explain the roughly straight hand
trajectory shown by primates in planar reaching movements. The minimum-jerk model
(Flash & Hogan 1985) takes into account only the desired hand movement. and
completely ignores the dynamic properties of the arm. This simplified approach is a good
approximation for many movements, but cannot explain some experimental evidence (Uno
et al. 1989a). A more demanding approach, the minimum-torque-change model (Uno et
al. 1989a), takes into account the dynamics of the arm, but emphasizes only the torques
at the joints, and completely ignores the properties of the muscles. This model was
criticized to produce unrealistic hand trajectories when proper inertia values are used
(Flash 1990). A third and more complicated model is the minimum-muscle-tension-change
model (Uno et a1. 1989b. Domay et al. 1991). The minimum-muscle-tension-change model
was shown here to produce gently curved hand movements, which although not identical,
are quite close to the primate behavior. In the current study the initial and final tensions
of the muscles were assumed to be zero. This is not a realistic assumption since even a
static hand at an equilibrium is expected to have some stiffness. Using the minimummuscle-tension-change model with non-zero initial and final muscle tensions is a logical
Simulation of Optimal Movements Using the Minimum-Muscle-Tension-Change Model
Muscle
Forces
N
/,I:~..
~?
OJ
0I
? '!
Se4
Se3
1 ' Se2
Sel
??
??
??
?
~
_-,-._ __
? II
? ,
??
??
eI
?t
SfG
Se5
SfB
. ~.
,
'-
'I
II
, 0
Ie
I.
II
II
I'
??
.1
??
EelO
II
I'
I'
I'
? It
.."
?
'1
'I
:
II..
,.
II
Eel!
J
.,
I.
II
II
I'
' .
lOOt
Ef14
Ef13
Df16
II
??
II
...
I'
II
II
'I
De15
??
.1
J'
II
IJ
,.
II
I'
II
Df17
,./"\
.r--..
'--
o
I.
II
..
Ef12
II
II
~\
:
01
0&
Sf9
'-:-:--~-:---:'7-~
??
II
II
\
,
II
I.
I'
./\\
\
~\..
I'
IJ
Sf7
....
II
~-
../ \
..:
.1
II
,.
1
Time(s)
Figure 4. Numerical Analysis and Muscle Tensions For T 7->Ts. S=shoulder, E=elbow,
D=double-joint muscle, e=extensor, f=flexor.
study which we intend to test in the near future. Still, the minimum-muscle-tension-change
model considers only the muscle moment-arms (Il) and momvels (olll ae) and ignores
the muscle length-tension curves. A more complicated model which we are studying now
is the minimum-motor-command-change model, which includes the length-tension curves.
633
634
Domay, Uno, Kawato. and Suzuki
Acknowledgements
M. Domay and M. Kawato would like to thank Drs. K. Nakane and E. Yodogawa, ATR,
for their valuable help and support. Preparation of the paper was supported by Human
Frontier Science Program grant to M. Kawato.
References
1 E Bizzi & WK Abend (1986) Control of multijoint movements. In MJ. Cohen and
F. Strumwasser (Eds.) Comparative Neurobiology: Modes of Communication in the
Nervous System, John Wiley & Sons, pp. 255-277
2
M Dornay (1990) Control of movement and the postural stability of the monkey's
arm. Proc. 3rd International Symposium on Bioelectronic and Molecular Electronic
Devices, Kobe, Japan, December 18-20, pp. 101-102
3
M Domay (1991 a) Static analysis of posture and movement, using a 17 -muscle model
of the monkey's arm. ATR Technical Report TR-A-0109
4
M Domay (1991b) Control of movement, postural stability, and muscle angular
stiffness. Proc. IEEE Systems, Man and Cybernetics, Virginia, USA, pp. 1373-1379
5
M Dornay, Y Uno, M Kawato & R Suzuki (1991) Simulation of optimal movements
using a 17-muscle model of the monkey's arm. Proc. SICE 30th Annual Conference,
ES-1-4, July 17-19, Yonezawam Japan, pp. 919-922
6
AG Feldman (1966) Functional tuning of the nervous system with control of
movement or maintenance of a steady posture. Biophysics, 11, pp. 766-775
7
T Flash & N Hogan (1985) The coordination of arm movements: an experimentally
confirmed mathematical model. J. Neurosci., 5, pp. 1688-1703
8
T Flash (1987) The control of hand equilibrium trajectories in multi-joint arm
movements. Biol. Cybern., 57, pp. 257-274
9
T Flash (1990) The organization of human arm trajectory control. In J. Winters and
S. Woo (Eds.) Multiple muscle systems: Biomechanics and movement organization,
Springer-Verlag, pp. 282-301
10 N Hogan (1984) An organizing principle for a class of voluntary movements. J. Neurosci., 4, pp. 2745-2754
11 P Morasso (1981) Spatial control of arm movements. Experimental Brain Research,
42, pp. 223-227
12 FA Mussa-Ivaldi, P Morasso, N Hogan & E Bizzi (1991) Network models of motor
systems with many degrees of freedom. In M.D. Fraser (Ed.) Advances in control
networks and large scale parallel distributed processing models, Albex Publ. Corp.
13 Y Uno, M Kawato & R Suzuki (1989a) Formation and control of optimal trajectory
in human multijoint arm movement - minimum-torque-change model. Biol. Cybern.,
61, pp. 89-101
14 Y Uno, R Suzuki & M Kawato (1989b) Minimum muscle-tension change model
which reproduces human arm movement. Proceedings of the 4th Symposium on
Biological and Physiological Engineering, pp. 299-302, (in Japanese)
4,276 | 4,870 | Structured Learning via Logistic Regression
Justin Domke
NICTA and The Australian National University
[email protected]
Abstract
A successful approach to structured learning is to write the learning objective as
a joint function of linear parameters and inference messages, and iterate between
updates to each. This paper observes that if the inference problem is ?smoothed?
through the addition of entropy terms, for fixed messages, the learning objective
reduces to a traditional (non-structured) logistic regression problem with respect
to parameters. In these logistic regression problems, each training example has a
bias term determined by the current set of messages. Based on this insight, the
structured energy function can be extended from linear factors to any function
class where an ?oracle? exists to minimize a logistic loss.
1 Introduction
The structured learning problem is to find a function F (x, y) to map from inputs x to outputs as
y ? = arg maxy F (x, y). F is chosen to optimize a loss function defined on these outputs. A
major challenge is that evaluating the loss for a given function F requires solving the inference
optimization to find the highest-scoring output y for each exemplar, which is NP-hard in general.
A standard solution to this is to write the loss function using an LP-relaxation of the inference
problem, meaning an upper-bound on the true loss. The learning problem can then be phrased as a
joint optimization of parameters and inference variables, which can be solved, e.g., by alternating
message-passing updates to inference variables with gradient descent updates to parameters [16, 9].
T
Previous work has mostly focused on linear energy functions
! F (x, y) = w ?(x, y), where a vector
of weights w is adjusted in learning, and ?(x, y) =
? ?(x, y? ) decomposes over subsets of
variables y? . While linear weights are often useful in practice [23, 16, 9, 3, 17, 12, 5], it is also
common to make use of non-linear classifiers. This is typically done by training a classifier (e.g.
ensembles of trees [20, 8, 25, 13, 24, 18, 19] or multi-layer perceptrons [10, 21]) to predict each
variable independently. Linear edge interaction weights are then learned, with unary classifiers
either held fixed [20, 8, 25, 13, 24, 10] or used essentially as ?features? with linear weights readjusted [18].
!
This paper allows the more general form F (x, y) = ? f? (x, y? ). The learning problem is to select
f? from some set of functions F? . Here, following previous work [15], we add entropy smoothing
to the LP-relaxation of the inference problem. Again, this leads to phrasing the learning problem as a
joint optimization of learning and inference variables, alternating between message-passing updates
to inference variables and optimization of the functions f? . The major result is that minimization of
the loss over f? ? F? can be re-formulated as a logistic regression problem, with a ?bias? vector
added to each example reflecting the current messages incoming to factor ?. No assumptions are
needed on the sets of functions F? , beyond assuming that an algorithm exists to optimize the logistic
loss on a given dataset over all f? ? F?
We experimentally test the results of varying F? to be the set of linear functions, multi-layer perceptrons, or boosted decision trees. Results verify the benefits of training flexible function classes
in terms of joint prediction accuracy.
1
2 Structured Prediction
The structured prediction problem can be written as seeking a function h that will predict an output
y from an input x. Most commonly, it can be written in the form
h(x; w) = arg max wT ?(x, y),
(1)
y
where ? is a fixed function of both x and y. The maximum takes place over all configurations of the
discrete vector y. It is further assumed that ? decomposes into a sum of functions evaluated over
subsets of variables y? as
!
?(x, y) =
?? (x, y? ).
?
The learning problem is to adjust set of linear weights w. This paper considers the structured learning
problem in a more general setting, directly handling nonlinear function classes. We generalize the
function h to
h(x; F ) = arg max F (x, y),
y
where the energy F again decomposes as
F (x, y) =
!
f? (x, y? ).
?
The learning problem now becomes to select {f? ? F? } for some set of functions F? . This reduces
to the previous case when f? (x, y? ) = wT ?? (x, y? ) is a linear function. Here, we do not make any
assumption on the class of functions F? other than assuming that there exists an algorithm to find
the best function f? ? F? in terms of the logistic regression loss (Section 6).
3 Loss Functions
Given a dataset (x1 , y 1 ), ..., (xN , y N ), we wish to select the energy F to minimize the empirical risk
!
R(F ) =
l(xk , y k ; F ),
(2)
k
for some loss function l. Absent computational concerns, a standard choice would be the slackrescaled loss [22]
l0 (xk , y k ; F ) = max F (xk , y) ? F (xk , y k ) + ?(y k , y),
(3)
y
where $\Delta(y^k, y)$ is some measure of discrepancy. We assume that $\Delta$ is a function that decomposes over $\alpha$ (i.e. that $\Delta(y^k, y) = \sum_\alpha \Delta_\alpha(y_\alpha^k, y_\alpha)$). Our experiments use the Hamming distance.
In Eq. 3, the maximum ranges over all possible discrete labelings $y$, which is NP-hard in general. If this inference problem must be solved approximately, there is strong motivation [6] for using relaxations of the maximization in Eq. 1, since this yields an upper-bound on the loss. A common solution [16, 14, 6] is to use a linear relaxation¹
$$l_1(x^k, y^k; F) = \max_{\mu\in\mathcal{M}}\ F(x^k, \mu) - F(x^k, y^k) + \Delta(y^k, \mu), \qquad (4)$$
where the local polytope $\mathcal{M}$ is defined as the set of local pseudomarginals that are normalized, and agree when marginalized over other neighboring regions,
$$\mathcal{M} = \Big\{\mu\ \Big|\ \mu_{\alpha\to\beta}(y_\beta) = \mu_\beta(y_\beta)\ \forall\beta\in\alpha,\ \ \sum_{y_\alpha}\mu_\alpha(y_\alpha) = 1\ \forall\alpha,\ \ \mu_\alpha(y_\alpha)\geq 0\ \forall\alpha, y_\alpha\Big\}.$$
Here, $\mu_{\alpha\to\beta}(y_\beta) = \sum_{y_{\alpha\setminus\beta}} \mu_\alpha(y_\alpha)$ is $\mu_\alpha$ marginalized out over some region $\beta$ contained in $\alpha$. It is easy to show that $l_1 \geq l_0$, since the two would be equivalent if $\mu$ were restricted to binary values, and hence the maximization in $l_1$ takes place over a larger set [6]. We also define
$$\theta_F^k(y_\alpha) = f_\alpha(x^k, y_\alpha) + \Delta_\alpha(y_\alpha^k, y_\alpha), \qquad (5)$$
¹Here, $F$ and $\Delta$ are slightly generalized to allow arguments of pseudomarginals, as $F(x^k, \mu) = \sum_\alpha\sum_{y_\alpha} f_\alpha(x^k, y_\alpha)\,\mu(y_\alpha)$ and $\Delta(y^k, \mu) = \sum_\alpha\sum_{y_\alpha} \Delta_\alpha(y_\alpha^k, y_\alpha)\,\mu(y_\alpha)$.
which gives the equivalent representation of $l_1$ as $l_1(x^k, y^k; F) = -F(x^k, y^k) + \max_{\mu\in\mathcal{M}}\theta_F^k\cdot\mu$. The maximization in $l_1$ is of a linear objective under linear constraints, and is thus a linear program (LP), solvable in polynomial time using a generic LP solver. In practice, however, it is preferable to use custom solvers based on message-passing that exploit the sparsity of the problem.
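As a concrete illustration (our construction, not the paper's), the following sketch enumerates the pseudomarginals of a two-variable model with a single pairwise factor and solves $\max_{\mu\in\mathcal{M}}\theta\cdot\mu$ with a generic solver:

```python
import numpy as np
from scipy.optimize import linprog

K = 2  # labels per variable
# Variables: mu_i (K), mu_j (K), mu_ij (K*K, row-major in (y_i, y_j)).
theta_i = np.array([0.5, -0.2])
theta_j = np.array([0.1, 0.3])
theta_ij = np.array([[1.0, -1.0], [-1.0, 1.0]])  # favours y_i == y_j

c = -np.concatenate([theta_i, theta_j, theta_ij.ravel()])  # linprog minimizes
n = 2 * K + K * K
A_eq, b_eq = [], []

# Normalization of each group of pseudomarginals.
for idx in [range(0, K), range(K, 2 * K), range(2 * K, n)]:
    r = np.zeros(n); r[list(idx)] = 1.0
    A_eq.append(r); b_eq.append(1.0)

# Marginalization consistency: sum_{y_j} mu_ij(y_i, y_j) = mu_i(y_i), etc.
for yi in range(K):
    r = np.zeros(n); r[2 * K + yi * K: 2 * K + yi * K + K] = 1.0; r[yi] = -1.0
    A_eq.append(r); b_eq.append(0.0)
for yj in range(K):
    r = np.zeros(n); r[2 * K + yj: n: K] = 1.0; r[K + yj] = -1.0
    A_eq.append(r); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * n)
print(res.x)     # optimal pseudomarginals
print(-res.fun)  # max_{mu in M} theta . mu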
Here, we make a further approximation to the loss, replacing the inference problem of $\max_{\mu\in\mathcal{M}}\theta\cdot\mu$ with the "smoothed" problem $\max_{\mu\in\mathcal{M}}\theta\cdot\mu + \epsilon\sum_\alpha H(\mu_\alpha)$, where $H(\mu_\alpha)$ is the entropy of the marginals $\mu_\alpha$. This approximation has been considered by Meshi et al. [15], who show that local message-passing can have a guaranteed convergence rate, and by Hazan and Urtasun [9], who use it for learning. The relaxed loss is
$$l(x^k, y^k; F) = -F(x^k, y^k) + \max_{\mu\in\mathcal{M}}\Big(\theta_F^k\cdot\mu + \epsilon\sum_\alpha H(\mu_\alpha)\Big). \qquad (6)$$
Since the entropy is positive, this is clearly a further upper-bound on the "unsmoothed" loss, i.e. $l_1 \leq l$. Moreover, we can bound the looseness of this approximation as in the following theorem, proved in the appendix. A similar result was previously given [15] bounding the difference of the objective obtained by inference with and without entropy smoothing.
Theorem 1. $l$ and $l_1$ are bounded by (where $|y_\alpha|$ is the number of configurations of $y_\alpha$)
$$l_1(x, y, F) \leq l(x, y, F) \leq l_1(x, y, F) + \epsilon H_{max}, \qquad H_{max} = \sum_\alpha \log|y_\alpha|.$$
4 Overview
Now, the learning problem is to select the functions $f_\alpha$ composing $F$ to minimize $R$ as defined in Eq. 2. The major challenge is that evaluating $R(F)$ requires performing inference. Specifically, if we define
$$A(\theta) = \max_{\mu\in\mathcal{M}}\ \theta\cdot\mu + \epsilon\sum_\alpha H(\mu_\alpha), \qquad (7)$$
then we have that
$$\min_F R(F) = \min_F \sum_k\Big(-F(x^k, y^k) + A(\theta_F^k)\Big).$$
Since $A(\theta)$ contains a maximization, this is a saddle-point problem. Inspired by previous work [16, 9], our solution (Section 5) is to introduce a vector of "messages" $\lambda$ to write $A$ in the dual form
$$A(\theta) = \min_\lambda A(\theta, \lambda),$$
which leads to phrasing learning as the joint minimization
$$\min_F \min_{\{\lambda^k\}} \sum_k\Big(-F(x^k, y^k) + A(\lambda^k, \theta_F^k)\Big).$$
We propose to solve this through an alternating optimization of $F$ and $\{\lambda^k\}$. For fixed $F$, message-passing can be used to perform coordinate descent updates to all the messages $\lambda^k$ (Section 5). These updates are trivially parallelized with respect to $k$. However, the problem remains, for fixed messages, how to optimize the functions $f_\alpha$ composing $F$. Section 7 observes that this problem can be re-formulated into a (non-structured) logistic regression problem, with "bias" terms added to each example that reflect the current messages into factor $\alpha$.
5 Inference
In order to evaluate the loss, it is necessary to solve the maximization in Eq. 6. For a given $\theta$, consider doing inference over $\mu$, that is, solving the maximization in Eq. 7. Standard Lagrangian duality theory gives the following dual representation for $A(\theta)$ in terms of "messages" $\lambda_{\alpha\beta}(y_\beta)$ from a region $\alpha$ to a subregion $\beta \in \alpha$, a variant of the representation of Heskes [11].
Algorithm 1 Reducing structured learning to logistic regression.
For all $k$, initialize all messages $\lambda^k \leftarrow 0$.
Repeat until convergence:
1. For all $k$, for all $\alpha$, set the bias term to
$$b_\alpha^k(y_\alpha) \leftarrow \frac{1}{\epsilon}\Big(\Delta_\alpha(y_\alpha^k, y_\alpha) + \sum_{\beta\in\alpha}\lambda^k_{\alpha\beta}(y_\beta) - \sum_{\gamma:\,\alpha\in\gamma}\lambda^k_{\gamma\alpha}(y_\alpha)\Big).$$
2. For all $\alpha$, solve the logistic regression problem
$$f_\alpha \leftarrow \arg\max_{f_\alpha\in\mathcal{F}_\alpha}\ \sum_{k=1}^K\Big(f_\alpha(x^k, y_\alpha^k) + b_\alpha^k(y_\alpha^k) - \log\sum_{y_\alpha}\exp\big(f_\alpha(x^k, y_\alpha) + b_\alpha^k(y_\alpha)\big)\Big).$$
3. For all $k$, for all $\alpha$, form updated parameters as
$$\theta^k(y_\alpha) \leftarrow \epsilon\, f_\alpha(x^k, y_\alpha) + \Delta_\alpha(y_\alpha^k, y_\alpha).$$
4. For all $k$, perform a fixed number of message-passing iterations to update $\lambda^k$ using $\theta^k$ (Eq. 10).
Theorem 2. $A(\theta)$ can be represented in the dual form $A(\theta) = \min_\lambda A(\theta, \lambda)$, where
$$A(\theta, \lambda) = \max_{\mu\in\mathcal{N}}\ \theta\cdot\mu + \epsilon\sum_\alpha H(\mu_\alpha) + \sum_\alpha\sum_{\beta\in\alpha}\sum_{y_\beta}\lambda_{\alpha\beta}(y_\beta)\big(\mu_{\alpha\to\beta}(y_\beta) - \mu_\beta(y_\beta)\big), \qquad (8)$$
and $\mathcal{N} = \{\mu\,|\,\sum_{y_\alpha}\mu_\alpha(y_\alpha) = 1,\ \mu_\alpha(y_\alpha)\geq 0\}$ is the set of locally normalized pseudomarginals. Moreover, for a fixed $\lambda$, the maximizing $\mu$ is given by
$$\mu_\alpha(y_\alpha) = \frac{1}{Z_\alpha}\exp\Big(\frac{1}{\epsilon}\Big(\theta(y_\alpha) + \sum_{\beta\in\alpha}\lambda_{\alpha\beta}(y_\beta) - \sum_{\gamma:\,\alpha\in\gamma}\lambda_{\gamma\alpha}(y_\alpha)\Big)\Big), \qquad (9)$$
where $Z_\alpha$ is a normalizing constant to ensure that $\sum_{y_\alpha}\mu_\alpha(y_\alpha) = 1$.
Thus, for any set of messages $\lambda$, there is an easily-evaluated upper-bound $A(\theta, \lambda) \geq A(\theta)$, and when $A(\theta, \lambda)$ is minimized with respect to $\lambda$, this bound is tight. The standard approach to performing the minimization over $\lambda$ is essentially block-coordinate descent. There are variants, depending on the size of the "block" that is updated. In our experiments, we use blocks consisting of the set of all messages $\lambda_{\alpha\beta}(y_\beta)$ for all regions $\alpha$ containing $\beta$. When the graph only contains regions for single variables and pairs, this is a "star update" of all the messages from pairs that contain a variable $i$. It can be shown [11, 15] that the update is
$$\lambda'_{\alpha\beta}(y_\beta) \leftarrow \lambda_{\alpha\beta}(y_\beta) + \frac{\epsilon}{1 + N_\beta}\Big(\log\mu_\beta(y_\beta) + \sum_{\alpha'\ni\beta}\log\mu_{\alpha'\to\beta}(y_\beta)\Big) - \epsilon\log\mu_{\alpha\to\beta}(y_\beta), \qquad (10)$$
for all $\alpha \ni \beta$, where $N_\beta = |\{\alpha\,|\,\beta\in\alpha\}|$. Meshi et al. [15] show that with greedy or randomized selection of blocks to update, $O(\frac{1}{\delta})$ iterations are sufficient to converge within error $\delta$.
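As a sanity check of Eqs. 9-10 as reconstructed here, the following sketch (ours; all names and parameter values are our own) runs randomized star updates on a three-node chain; at the fixed point the edge marginals and node pseudomarginals agree:

```python
import numpy as np

rng = np.random.default_rng(1)
K, eps = 3, 0.2
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2)]
theta_n = {i: rng.normal(size=K) for i in nodes}
theta_e = {e: rng.normal(size=(K, K)) for e in edges}
lam = {(e, i): np.zeros(K) for e in edges for i in e}   # messages edge -> node

def softmax(a):
    p = np.exp(a - a.max())
    return p / p.sum()

def beliefs():
    mu_e = {}
    for e in edges:   # Eq. 9 for an edge: plus its messages to subregions
        a = theta_e[e] + lam[(e, e[0])][:, None] + lam[(e, e[1])][None, :]
        mu_e[e] = softmax((a / eps).ravel()).reshape(K, K)
    mu_n = {}
    for i in nodes:   # Eq. 9 for a node: minus messages from parent edges
        a = theta_n[i] - sum(lam[(e, i)] for e in edges if i in e)
        mu_n[i] = softmax(a / eps)
    return mu_e, mu_n

def edge_marginal(mu, e, i):
    return mu.sum(axis=1) if e[0] == i else mu.sum(axis=0)

for _ in range(5000):           # randomized star updates (Eq. 10)
    i = nodes[rng.integers(len(nodes))]
    mu_e, mu_n = beliefs()
    parents = [e for e in edges if i in e]
    m = {e: edge_marginal(mu_e[e], e, i) for e in parents}
    avg = (np.log(mu_n[i]) + sum(np.log(m[e]) for e in parents)) / (1 + len(parents))
    for e in parents:
        lam[(e, i)] += eps * (avg - np.log(m[e]))

mu_e, mu_n = beliefs()
for e in edges:                 # pseudomarginals agree at convergence
    for i in e:
        assert np.allclose(edge_marginal(mu_e[e], e, i), mu_n[i], atol=1e-2)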
6 Logistic Regression
Logistic regression is traditionally understood as defining a conditional distribution $p(y|x; W) = \exp((Wx)_y)/Z(x)$, where $W$ is a matrix that maps the input features $x$ to a vector of margins $Wx$. It is easy to show that the maximum conditional likelihood training problem $\max_W \sum_k \log p(y^k|x^k; W)$ is equivalent to
$$\max_W \sum_k \Big((Wx^k)_{y^k} - \log\sum_y \exp(Wx^k)_y\Big).$$
Here, we generalize this in two ways. First, rather than taking the mapping from features $x$ to the margin for label $y$ as the $y$-th component of $Wx$, we take it as $f(x, y)$ for some function $f$ in a set of functions $\mathcal{F}$. (This reduces to the linear case when $f(x, y) = (Wx)_y$.) Secondly, we assume that there is a pre-determined "bias" vector $b^k$ associated with each training example. This yields the learning problem
$$\max_{f\in\mathcal{F}} \sum_k \Big(f(x^k, y^k) + b^k(y^k) - \log\sum_y \exp\big(f(x^k, y) + b^k(y)\big)\Big). \qquad (11)$$
Aside from linear logistic regression, one can see decision trees, multi-layer perceptrons, and boosted ensembles under an appropriate loss as solving Eq. 11 for different sets of functions $\mathcal{F}$ (albeit possibly to a local maximum).
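For the linear case, Eq. 11 and its gradient are straightforward to implement; the sketch below (ours, with plain gradient ascent standing in for the batch L-BFGS used later in the experiments) makes the role of the per-example biases explicit:

```python
import numpy as np

def biased_logreg_objective(W, X, y, b):
    """Eq. 11 with a linear f(x, y) = (W x)_y and per-example bias vectors;
    X is (N, d), y is (N,), b is (N, L)."""
    margins = X @ W.T + b                 # (N, L): f(x^k, y) + b^k(y)
    logZ = np.logaddexp.reduce(margins, axis=1)
    return np.sum(margins[np.arange(len(y)), y] - logZ)

def biased_logreg_grad(W, X, y, b):
    margins = X @ W.T + b
    p = np.exp(margins - np.logaddexp.reduce(margins, axis=1, keepdims=True))
    p[np.arange(len(y)), y] -= 1.0        # softmax minus one-hot
    return -p.T @ X                       # gradient of the objective w.r.t. W

rng = np.random.default_rng(0)
N, d, L = 200, 5, 3
X = rng.normal(size=(N, d)); y = rng.integers(L, size=N)
b = rng.normal(scale=0.1, size=(N, L))    # stands in for the message-derived biases
W = np.zeros((L, d))
for _ in range(500):
    W += 0.01 * biased_logreg_grad(W, X, y, b)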
7 Training
Recall that the learning problem is to select the functions $f_\alpha \in \mathcal{F}_\alpha$ so as to minimize the empirical risk $R(F) = \sum_k\big[-F(x^k, y^k) + A(\theta_F^k)\big]$. At first blush, this appears challenging, since evaluating $A(\theta)$ requires solving a message-passing optimization. However, we can use the dual representation of $A$ from Theorem 2 to represent $\min_F R(F)$ in the form
$$\min_F \min_{\{\lambda^k\}} \sum_k \Big(-F(x^k, y^k) + A(\lambda^k, \theta_F^k)\Big). \qquad (12)$$
To optimize Eq. 12, we alternate between optimization of messages $\{\lambda^k\}$ and energy functions $\{f_\alpha\}$. Optimization with respect to $\lambda^k$ for fixed $F$ decomposes into minimizing $A(\lambda^k, \theta_F^k)$ independently for each $k$, which can be done by running message-passing updates as in Section 5 using the parameter vector $\theta_F^k$. Thus, the rest of this section is concerned with how to optimize with respect to $F$ for fixed messages. Below, we will use a slight generalization of a standard result [1, p. 93].
Lemma 3. The conjugate of the entropy is the "log-sum-exp" function. Formally,
$$\max_{x:\,x^T 1 = 1,\,x\geq 0}\ \theta\cdot x - \epsilon\sum_i x_i\log x_i = \epsilon\log\sum_i\exp\frac{\theta_i}{\epsilon}.$$
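A quick numerical check of the lemma (ours): the maximizer is the softmax of $\theta/\epsilon$, and plugging it back in recovers the scaled log-sum-exp:

```python
import numpy as np

theta = np.array([0.3, -1.2, 2.0])
eps = 0.5

# Closed form: eps * log sum_i exp(theta_i / eps)
closed = eps * np.logaddexp.reduce(theta / eps)

# The maximizer is the softmax x* = exp(theta/eps) / Z; plug it back in.
x = np.exp(theta / eps - np.logaddexp.reduce(theta / eps))
value = theta @ x - eps * np.sum(x * np.log(x))

assert np.isclose(closed, value)  # the two agree, as Lemma 3 states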
Theorem 4. If $f_\alpha^*$ is the minimizer of Eq. 12 for fixed messages $\lambda$, then
$$f_\alpha^* = \epsilon\,\arg\max_{f_\alpha}\sum_k\Big(f_\alpha(x^k, y_\alpha^k) + b_\alpha^k(y_\alpha^k) - \log\sum_{y_\alpha}\exp\big(f_\alpha(x^k, y_\alpha) + b_\alpha^k(y_\alpha)\big)\Big), \qquad (13)$$
where the set of biases are defined as
$$b_\alpha^k(y_\alpha) = \frac{1}{\epsilon}\Big(\Delta_\alpha(y_\alpha^k, y_\alpha) + \sum_{\beta\in\alpha}\lambda^k_{\alpha\beta}(y_\beta) - \sum_{\gamma:\,\alpha\in\gamma}\lambda^k_{\gamma\alpha}(y_\alpha)\Big). \qquad (14)$$
Proof. Substituting $A(\theta, \lambda)$ from Eq. 8 and $\theta^k$ from Eq. 5 gives that
$$A(\lambda^k, \theta_F^k) = \max_{\mu\in\mathcal{N}}\ \sum_\alpha\sum_{y_\alpha}\big(f_\alpha(x^k, y_\alpha) + \Delta_\alpha(y_\alpha^k, y_\alpha)\big)\mu(y_\alpha) + \epsilon\sum_\alpha H(\mu_\alpha) + \sum_\alpha\sum_{\beta\in\alpha}\sum_{y_\beta}\lambda^k_{\alpha\beta}(y_\beta)\big(\mu_{\alpha\to\beta}(y_\beta) - \mu_\beta(y_\beta)\big).$$
Using the definition of $b^k$ from Eq. 14 above, this simplifies into
$$A(\lambda^k, \theta_F^k) = \sum_\alpha\max_{\mu_\alpha\in\mathcal{N}_\alpha}\Big(\sum_{y_\alpha}\big(f_\alpha(x, y_\alpha) + \epsilon\,b_\alpha^k(y_\alpha)\big)\mu_\alpha(y_\alpha) + \epsilon H(\mu_\alpha)\Big),$$
Denoising
F_i \ F_ij   Zero   Const.  Linear  Boost.  MLP
Zero         .502   .502    .502    .511    .502
Const.       .502   .502    .502    .510    .502
Linear       .444   .077    .059    .049    .034
Boost.       .444   .034    .015    .009    .007
MLP          .445   .032    .015    .009    .008

Horses
F_i \ F_ij   Zero   Const.  Linear  Boost.  MLP
Zero         .246   .246    .247    .244    .245
Const.       .246   .246    .247    .244    .245
Linear       .185   .185    .168    .154    .156
Boost.       .103   .098    .092    .084    .086
MLP          .096   .094    .087    .080    .081

Table 1: Univariate Test Error Rates (Train Errors in Appendix)
where $\mathcal{N}_\alpha = \{\mu_\alpha\,|\,\sum_{y_\alpha}\mu_\alpha(y_\alpha) = 1,\ \mu_\alpha(y_\alpha)\geq 0\}$ enforces that $\mu_\alpha$ is a locally normalized set of marginals. Applying Lemma 3 to the inner maximization gives the closed-form expression
$$A(\lambda^k, \theta_F^k) = \epsilon\sum_\alpha\log\sum_{y_\alpha}\exp\Big(\frac{1}{\epsilon}f_\alpha(x, y_\alpha) + b_\alpha^k(y_\alpha)\Big).$$
Thus, minimizing Eq. 12 with respect to $F$ is equivalent to finding (for all $\alpha$)
$$\arg\max_{f_\alpha}\sum_k\Big(f_\alpha(x^k, y_\alpha^k) - \epsilon\log\sum_{y_\alpha}\exp\Big(\frac{1}{\epsilon}f_\alpha(x^k, y_\alpha) + b_\alpha^k(y_\alpha)\Big)\Big)$$
$$= \arg\max_{f_\alpha}\sum_k\Big(\frac{1}{\epsilon}f_\alpha(x^k, y_\alpha^k) - \log\sum_{y_\alpha}\exp\Big(\frac{1}{\epsilon}f_\alpha(x^k, y_\alpha) + b_\alpha^k(y_\alpha)\Big)\Big).$$
Observing that adding a bias term doesn't change the maximizing $f_\alpha$, and using the fact that $\arg\max_f g(\frac{1}{\epsilon}f) = \epsilon\arg\max_f g(f)$ gives the result.
The final learning algorithm is summarized as Alg. 1. Sometimes, the local classifier $f_\alpha$ will depend on the input $x$ only through some "local features" $\phi_\alpha$. The above framework accommodates this situation if the set $\mathcal{F}_\alpha$ is considered to select these local features.
In practice, one will often wish to constrain that some of the functions $f_\alpha$ are the same. This is done by taking the sum in Eq. 13 not just over all data $k$, but also over all factors $\alpha$ that should be so constrained. For example, it is common to model image segmentation problems using a 4-connected grid with an energy like $F(x, y) = \sum_i u(\phi_i, y_i) + \sum_{ij} v(\phi_{ij}, y_i, y_j)$, where $\phi_i$/$\phi_{ij}$ are univariate/pairwise features determined by $x$, and $u$ and $v$ are functions mapping local features to local energies. In this case, $u$ would be selected to maximize $\sum_k\sum_i \big( u(\phi_i^k, y_i^k) + b_i^k(y_i^k) - \log\sum_{y_i}\exp(u(\phi_i^k, y_i) + b_i^k(y_i))\big)$, and an analogous expression exists for $v$. This is the framework used in the following experiments.
8 Experiments
These experiments consider three different function classes: linear, boosted decision trees, and multi-layer perceptrons. To maximize Eq. 11 under linear functions $f(x, y) = (Wx)_y$, we simply compute the gradient with respect to $W$ and use batch L-BFGS. For a multi-layer perceptron, we fit the function $f(x, y) = (W\sigma(Ux))_y$ using stochastic gradient descent with momentum² on mini-batches of size 1000, using a step size of .25 for univariate classifiers and .05 for pairwise. Boosted decision trees use stochastic gradient boosting [7]: the gradient of the logistic loss is computed for each exemplar, and a regression tree is induced to fit this (one tree for each class). To control overfitting, each leaf node must contain at least 5% of the data. Then, an optimization adjusts the values of leaf nodes to optimize the logistic loss. Finally, the tree values are multiplied by .25 and added to the ensemble.
²At each time, the new step is a combination of .1 times the new gradient plus .9 times the old step.
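One round of the boosted-tree update described above can be sketched as follows (a minimal version of ours, assuming scikit-learn's regression trees; we keep plain least-squares leaf values rather than the leaf re-optimization described in the text):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boosting_round(X, y, F, L, lr=0.25):
    """One stochastic-gradient-boosting round: fit a regression tree per
    class to the negative gradient of the logistic loss, then add it,
    scaled by lr, to the ensemble margins F (shape (N, L))."""
    P = np.exp(F - np.logaddexp.reduce(F, axis=1, keepdims=True))
    G = -(P - np.eye(L)[y])               # negative gradient of logistic loss
    trees = []
    for c in range(L):                    # one regression tree per class
        t = DecisionTreeRegressor(min_samples_leaf=0.05)  # leaves hold >= 5% of data
        t.fit(X, G[:, c])
        F[:, c] += lr * t.predict(X)
        trees.append(t)
    return F, trees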
[Figure 1 plots omitted: univariate energies f_i as a function of phi_i (top) and pairwise energies f_ij as a function of phi_ij (bottom), for the Linear, Boosting, and MLP function classes.]
Figure 1: The univariate (top) and pairwise (bottom) energy functions learned on denoising data.
Each column shows the result of training both univariate and pairwise terms with one function class.
[Figure 2 plots omitted: train/test error rate as a function of learning iteration on the Denoising and Horses datasets, for each combination of univariate and pairwise classifiers (Linear, Boosting, MLP).]
Figure 2: Dashed/Solid lines show univariate train/test error rates as a function of learning iterations
for varying univariate (rows) and pairwise (columns) classifiers.
[Figure 3 images omitted: input, ground truth, and predictions (Linear, Boosting, MLP) on Denoising and Horses test images.]
Figure 3: Example Predictions on Test Images (More in Appendix)
For reference, we also consider the "zero" classifier, and a "constant" classifier that ignores the input, equivalent to a linear classifier with a single constant feature.
All examples use $\epsilon = 0.1$. Each learning iteration consists of updating $f_i$, performing 25 iterations of message passing, updating $f_{ij}$, and then performing another 25 iterations of message passing.
The first dataset is a synthetic binary denoising dataset, intended for the purpose of visualization. To create an example, an image is generated with each pixel random in [0, 1]. To generate $y$, this image is convolved with a Gaussian with standard deviation 10 and rounded to {0, 1}. Next, if $y_i^k = 0$, $\phi_i^k$ is sampled uniformly from [0, .9], while if $y_i^k = 1$, $\phi_i^k$ is sampled from [.1, 1]. Finally, for a pair $(i, j)$, if $y_i^k = y_j^k$, then $\phi_{ij}^k$ is sampled from [0, .8], while if $y_i^k \neq y_j^k$, $\phi_{ij}^k$ is sampled from [.2, 1]. A constant feature is also added to both $\phi_i^k$ and $\phi_{ij}^k$.
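For concreteness, a sketch of this generator (ours; scipy's gaussian_filter stands in for the convolution, rounding is a 0.5 threshold, and the constant feature is omitted):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_example(size=100, seed=0):
    rng = np.random.default_rng(seed)
    img = rng.uniform(0, 1, size=(size, size))
    y = (gaussian_filter(img, sigma=10) > 0.5).astype(int)  # smooth, then round

    # Univariate features: informative but with overlapping ranges.
    phi = np.where(y == 0, rng.uniform(0.0, 0.9, y.shape),
                           rng.uniform(0.1, 1.0, y.shape))

    # Pairwise features for horizontal neighbours (vertical is analogous).
    same = (y[:, :-1] == y[:, 1:])
    phi_h = np.where(same, rng.uniform(0.0, 0.8, same.shape),
                           rng.uniform(0.2, 1.0, same.shape))
    return phi, phi_h, y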
There are 16 100x100 images each for training and testing. Test errors for each classifier combination
are in Table 1, learning curves are in Fig. 2, and example results in Fig. 3. The nonlinear classifiers
result in both lower asymptotic training and testing errors and faster convergence rates. Boosting
converges particularly quickly. Finally, because there is only a single input feature for univariate
and pairwise terms, the resulting functions are plotted in Fig. 1.
Second, as a more realistic example, we use the Weizmann horses dataset. We use 42 univariate features $\phi_i^k$ consisting of a constant (1), the RGB values of the pixel (3), the vertical and horizontal position (2), and a histogram of oriented gradients [2] (36). There are three edge features, consisting of a constant, the $l_2$ distance of the RGB vectors for the two pixels, and the output of a Sobel edge filter. Results are shown in Table 1 and Figures 2 and 3. Again, we see benefits in using nonlinear classifiers,
both in convergence rate and asymptotic error.
9 Discussion
This paper observes that in the structured learning setting, the optimization with respect to energy
can be formulated as a logistic regression problem for each factor, "biased" by the current messages. Thus, it is possible to use any function class where an "oracle" exists to optimize a logistic loss.
Besides the possibility of using more general classes of energies, another advantage of the proposed
method is the "software engineering" benefit of having the algorithm for fitting the energy modularized from the rest of the learning procedure. The ability to easily define new energy functions for
individual problems could have practical impact.
Future work could consider convergence rates of the overall learning optimization, systematically
investigate the choice of $\epsilon$, or consider more general entropy approximations, such as the Bethe
approximation used with loopy belief propagation.
In related work, Hazan and Urtasun [9] use a linear energy, and alternate between updating all inference variables and a gradient descent update to parameters, using an entropy-smoothed inference
objective. Meshi et al. [16] also use a linear energy, with a stochastic algorithm updating inference
variables and taking a stochastic gradient step on parameters for one exemplar at a time, with a pure
LP-relaxation of inference. The proposed method iterates between updating all inference variables
and performing a full optimization of the energy. This is a "batch" algorithm in the sense of making repeated passes over the data, and so is expected to be slower than an online method for large
datasets. In practice, however, inference is easily parallelized over the data, and the majority of
computational time is spent in the logistic regression subproblems. A stochastic solver can easily be
used for these, as was done for MLPs above, giving a partially stochastic learning method.
Another related work is Gradient Tree Boosting [4] in which to train a CRF, the functional gradient
of the conditional likelihood is computed, and a regression tree is induced. This is iterated to produce
an ensemble. The main limitation is the assumption that inference can be solved exactly. It appears
possible to extend this to inexact inference, where the tree is induced to improve a dual bound, but
this has not been done so far. Experimentally, however, simply inducing a tree on the loss gradient
leads to much slower learning if the leaf nodes are not modified to optimize the logistic loss. Thus,
it is likely that such a strategy would still benefit from using the logistic regression reformulation.
References
[1] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[2] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[3] Chaitanya Desai, Deva Ramanan, and Charless C. Fowlkes. Discriminative models for multi-class object
layout. International Journal of Computer Vision, 95(1):1-12, 2011.
[4] Thomas G. Dietterich, Adam Ashenfelter, and Yaroslav Bulatov. Training conditional random fields via
gradient tree boosting. In ICML, 2004.
[5] Justin Domke. Learning graphical model parameters with approximate marginal inference. PAMI,
35(10):2454-2467, 2013.
[6] Thomas Finley and Thorsten Joachims. Training structural svms when exact inference is intractable. In
ICML, 2008.
[7] Jerome H. Friedman. Stochastic gradient boosting. Computational Statistics and Data Analysis, 38:367-378, 1999.
[8] Stephen Gould, Jim Rodgers, David Cohen, Gal Elidan, and Daphne Koller. Multi-class segmentation
with relative location prior. IJCV, 80(3):300-316, 2008.
[9] Tamir Hazan and Raquel Urtasun. Efficient learning of structured predictors in general graphical models.
CoRR, abs/1210.2346, 2012.
[10] Xuming He, Richard S. Zemel, and Miguel A. Carreira-Perpinan. Multiscale conditional random fields
for image labeling. In CVPR, 2004.
[11] Tom Heskes. Convexity arguments for efficient minimization of the Bethe and Kikuchi free energies. J. Artif. Intell. Res. (JAIR), 26:153-190, 2006.
[12] Sanjiv Kumar and Martial Hebert. Discriminative fields for modeling spatial dependencies in natural
images. In NIPS, 2003.
[13] Lubor Ladicky, Christopher Russell, Pushmeet Kohli, and Philip H. S. Torr. Associative hierarchical
CRFs for object class image segmentation. In ICCV, 2009.
[14] Andre F. T. Martins, Noah A. Smith, and Eric P. Xing. Polyhedral outer approximations with application
to natural language parsing. In ICML, 2009.
[15] Ofer Meshi, Tommi Jaakkola, and Amir Globerson. Convergence rate analysis of MAP coordinate minimization algorithms. In NIPS, 2012.
[16] Ofer Meshi, David Sontag, Tommi Jaakkola, and Amir Globerson. Learning efficiently with approximate
inference via dual losses. In ICML, 2010.
[17] Sebastian Nowozin, Peter V. Gehler, and Christoph H. Lampert. On parameter learning in CRF-based
approaches to object class image segmentation. In ECCV, 2010.
[18] Sebastian Nowozin, Carsten Rother, Shai Bagon, Toby Sharp, Bangpeng Yao, and Pushmeet Kohli. Decision tree fields. In ICCV, 2011.
[19] Florian Schroff, Antonio Criminisi, and Andrew Zisserman. Object class segmentation using random
forests. In BMVC, 2008.
[20] Jamie Shotton, John M. Winn, Carsten Rother, and Antonio Criminisi. Textonboost for image understanding: Multi-class object recognition and segmentation by jointly modeling texture, layout, and context.
IJCV, 81(1):2-23, 2009.
[21] Nathan Silberman and Rob Fergus. Indoor scene segmentation using a structured light sensor. In ICCV
Workshops, 2011.
[22] Benjamin Taskar, Carlos Guestrin, and Daphne Koller. Max-margin markov networks. In NIPS, 2003.
[23] Jakob J. Verbeek and Bill Triggs. Scene segmentation with crfs learned from partially labeled images. In
NIPS, 2007.
[24] John M. Winn and Jamie Shotton. The layout consistent random field for recognizing and segmenting
partially occluded objects. In CVPR, 2006.
[25] Jianxiong Xiao and Long Quan. Multiple view semantic segmentation for street view images. In ICCV,
2009.
Correlations strike back (again): the case of
associative memory retrieval
Cristina Savin¹
[email protected]
Peter Dayan²
[email protected]
Máté Lengyel¹
[email protected]
¹Computational & Biological Learning Lab, Dept. Engineering, University of Cambridge, UK
²Gatsby Computational Neuroscience Unit, University College London, UK
Abstract
It has long been recognised that statistical dependencies in neuronal activity need
to be taken into account when decoding stimuli encoded in a neural population.
Less studied, though equally pernicious, is the need to take account of dependencies between synaptic weights when decoding patterns previously encoded in an
auto-associative memory. We show that activity-dependent learning generically
produces such correlations, and failing to take them into account in the dynamics
of memory retrieval leads to catastrophically poor recall. We derive optimal network dynamics for recall in the face of synaptic correlations caused by a range of
synaptic plasticity rules. These dynamics involve well-studied circuit motifs, such
as forms of feedback inhibition and experimentally observed dendritic nonlinearities. We therefore show how addressing the problem of synaptic correlations leads
to a novel functional account of key biophysical features of the neural substrate.
1 Introduction
Auto-associative memories have a venerable history in computational neuroscience. However, it is
only rather recently that the statistical revolution in the wider field has provided theoretical traction
for this problem [1]. The idea is to see memory storage as a form of lossy compression (information
on the item being stored is mapped into a set of synaptic changes ? with the neural dynamics during
retrieval representing a biological analog of a corresponding decompression algorithm. This implies
there should be a tight, and indeed testable, link between the learning rule used for encoding and the
neural dynamics used for retrieval [2].
One issue that has been either ignored or trivialized in these treatments of recall is correlations
among the synapses [1-4], beyond the perfect (anti-)correlations emerging between reciprocal
synapses with precisely (anti-)symmetric learning rules [5]. There is ample experimental data for
the existence of such correlations: for example, in rat visual cortex, synaptic connections tend to
cluster together in the form of overrepresented patterns, or motifs, with reciprocal connections being
much more common than expected by chance, and the strengths of the connections to and from
each neuron being correlated [6]. The study of neural coding has indicated that it is essential to
treat correlations in neural activity appropriately in order to extract stimulus information well [7-9]. Similarly, it becomes pressing to examine the nature of correlations among synaptic weights in
auto-associative memories, the consequences for retrieval of ignoring them, and methods by which
they might be accommodated.
1
Here, we consider several well-known learning rules, from simple additive ones to bounded synapses
with metaplasticity, and show that, with a few significant exceptions, they induce correlations between synapses that share a pre- or a post-synaptic partner. To assess the importance of these dependencies for recall, we adopt the strategy of comparing the performance of decoders which either
do or do not take them into account [10], showing that they do indeed have an important effect on
efficient retrieval. Finally, we show that approximately optimal retrieval involves particular forms
of nonlinear interactions between different neuronal inputs, as observed experimentally [11].
2
General problem formulation
We consider a network of N binary neurons that enjoy all-to-all connectivity.¹ As is conventional,
and indeed plausibly underpinned by neuromodulatory interactions [12], we assume that network
dynamics do not play a role during storage (with stimuli being imposed as patterns of activity on the
neurons), and that learning does not occur during retrieval.
To isolate the effects of different plasticity rules on synaptic correlations from other sources of
correlations, we assume that the patterns of activity inducing the synaptic changes have no particular
structure, i.e. their distribution factorizes. For further simplicity, we take these activity patterns to
be binary with pattern density $f$, i.e. a prior over patterns defined as:
$$P_{store}(\mathbf{x}) = \prod_i P_{store}(x_i), \qquad P_{store}(x_i) = f^{x_i}\cdot(1-f)^{1-x_i} \qquad (1)$$
During recall, the network is presented with a cue, $\tilde{\mathbf{x}}$, which is a noisy or partial version of one of the originally stored patterns. Network dynamics should complete this partial pattern, using the information in the weights $W$ (and the cue). We start by considering arbitrary dynamics; later we impose the critical constraint for biological realisability that they be strictly local, i.e. the activity of neuron $i$ should depend exclusively on inputs through incoming synapses $W_{i,\cdot}$.
Since information storage by synaptic plasticity is lossy, recall is inherently a probabilistic inference problem [1, 13] (Fig. 1a), requiring estimation of the posterior over patterns, given the information in the weights and the recall cue:
$$P(\mathbf{x}|W, \tilde{\mathbf{x}}) \propto P_{store}(\mathbf{x})\cdot P_{noise}(\tilde{\mathbf{x}}|\mathbf{x})\cdot P(W|\mathbf{x}) \qquad (2)$$
This formulation has formed the foundation of recent work on constructing efficient autoassociative recall dynamics for a range of different learning rules [2-4]. In this paper, we focus on the last term $P(W|\mathbf{x})$, which expresses the probability of obtaining $W$ as the synaptic weight matrix when $\mathbf{x}$ is stored along with $T-1$ random patterns (sampled from the prior, Eq. 1). Critically, this is where we diverge from previous analyses that assumed this distribution was factorised, or only trivially correlated due to reciprocal synapses being precisely (anti-)symmetric [1, 2, 4]. In contrast, we explicitly study the emergence and effects of non-trivial correlations in the synaptic weight matrix distribution, because almost all synaptic plasticity rules induce statistical dependencies between the synaptic weights of each neuron (Fig. 1a, d).
The inference problem expressed by Eq. 2 can be translated into neural dynamics in several ways: dynamics could be deterministic, attractor-like, converging to the most likely pattern (a MAP estimate) of the distribution of $\mathbf{x}$ [2], or to a mean-field approximate solution [3]; alternatively, the dynamics could be stochastic, with the activity over time representing samples from the posterior, and hence implicitly capturing the uncertainty associated with the answer [4]. We consider the latter.
Since we estimate performance by average errors, the optimal response is the mean of the posterior,
which can be estimated by integrating the activity of the network during retrieval.
We start by analysing the class of additive learning rules, to get a sense for the effect of correlations on retrieval. Later, we focus on multi-state synapses, for which learning rules are described
by transition probabilities between the states [14]. These have been used to capture a variety of
important biological constraints such as bounds on synaptic strengths and metaplasticity, i.e. the
fact that synaptic changes induced by a certain activity pattern depend on the history of activity at
the synapse [15]. The two classes of learning rule are radically different; so if synaptic correlations
matter during retrieval in both cases, then the conclusion likely applies in general.
¹Complete connectivity simplifies the computation of the parameters for the optimal dynamics for cascade-like learning rules considered in the following, but is not necessary for the theory.
[Figure 1 panels omitted: (a) storage/recall schematic; (b, d) synaptic correlation coefficients for the covariance and simple Hebb rules against cortical data (Song 2005); (c, e) recall error (%) as a function of network size N for exact (correlation-aware) dynamics, simple (correlation-blind) dynamics, and the control.]
Figure 1: Memory recall as inference and additive learning rules. a. Top: Synaptic weights, $W$, arise by storing the target pattern $\mathbf{x}$ together with $T-1$ other patterns, $\{\mathbf{x}^{(t)}\}_{t=1\ldots T-1}$. During recall, the cue, $\tilde{\mathbf{x}}$, is a noisy version of the target pattern. The task of recall is to infer $\mathbf{x}$ given $W$ and $\tilde{\mathbf{x}}$ (by marginalising out $\{\mathbf{x}^{(t)}\}$). Bottom: The activity of neuron $i$ across the stored patterns is a source of shared variability between synapses connecting it to neurons $j$ and $k$. b-c. Covariance rule: patterns of synaptic correlations and recall performance for retrieval dynamics ignoring or considering synaptic correlations; $T = 5$. d-e. Same for the simple Hebbian learning rule. The control is an optimal decoder that ignores $W$.
3 Additive learning rules
Local additive learning rules assume that synaptic changes induced by different activity patterns combine additively, such that storing a sequence of $T$ patterns from $P_{store}(\mathbf{x})$ results in weights $W_{ij} = \sum_t \Omega(x_i^{(t)}, x_j^{(t)})$, with the function $\Omega(x_i, x_j)$ describing the change in synaptic strength induced by presynaptic activity $x_j$ and postsynaptic activity $x_i$. We consider a generalized Hebbian form for this function, with $\Omega(x_i, x_j) = (x_i - \theta)(x_j - \phi)$. This class includes, for example, the covariance rule ($\theta = \phi = f$), classically used in Hopfield networks, or simple Hebbian learning ($\theta = \phi = 0$).
As synaptic changes are deterministic, the only source of uncertainty in the distribution $P(W|\mathbf{x})$ is the identity of the other stored patterns. To estimate this, let us first consider the distribution of the weights after storing one random pattern from $P_{store}(\mathbf{x})$. The mean $\mu$ and covariance $C$ of the weight change induced by this event can be computed as:²
$$\mu = \int P_{store}(\mathbf{x})\,\Omega_|(\mathbf{x})\,d\mathbf{x}, \qquad C = \int P_{store}(\mathbf{x})\,\Omega_|(\mathbf{x})\,\Omega_|(\mathbf{x})^T\,d\mathbf{x} - \mu\,\mu^T \qquad (3)$$
Since the rule is additive and the patterns are independent, the mean and covariance scale linearly with the number of intervening patterns. Hence, the distribution over possible weight values at recall, given that pattern $\mathbf{x}$ is stored along with $T-1$ other, random, patterns has mean $\mu_W = \Omega(\mathbf{x}) + (T-1)\cdot\mu$, and covariance $C_W = (T-1)\cdot C$. Most importantly, because the rule is additive, in the limit of many stored patterns (and in practice even for modest values of $T$), the distribution $P(W|\mathbf{x})$ approaches a multivariate Gaussian that is characterized completely by these two quantities; moreover, its covariance is independent of $\mathbf{x}$.
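The following sketch (our own) evaluates Eq. 3 by brute-force enumeration over all binary patterns of a small network, and confirms that the shared-neuron covariances vanish for the covariance rule but not for the simple Hebbian rule:

```python
import numpy as np
from itertools import product

def weight_change_moments(N, f, omega):
    """Mean and covariance (Eq. 3) of the weight change caused by storing
    a single pattern, enumerating all binary patterns of length N."""
    syn = [(i, j) for i in range(N) for j in range(N) if i != j]
    mu = np.zeros(len(syn))
    second = np.zeros((len(syn), len(syn)))
    for x in product([0, 1], repeat=N):
        p = np.prod([f if xi else 1 - f for xi in x])
        d = np.array([omega(x[i], x[j]) for (i, j) in syn])
        mu += p * d
        second += p * np.outer(d, d)
    return syn, mu, second - np.outer(mu, mu)

f = 0.3
syn, mu_c, C_cov = weight_change_moments(4, f, lambda a, b: (a - f) * (b - f))
syn, mu_h, C_heb = weight_change_moments(4, f, lambda a, b: a * b)
i, j = syn.index((0, 1)), syn.index((0, 2))  # two synapses sharing neuron 0
print(C_cov[i, j], C_heb[i, j])              # ~0 for covariance rule, > 0 for Hebb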
For retrieval dynamics based on Gibbs sampling, the key quantity is the log-odds ratio
$$I_i = \log\frac{P(x_i = 1|\mathbf{x}_{-i}, W, \tilde{\mathbf{x}})}{P(x_i = 0|\mathbf{x}_{-i}, W, \tilde{\mathbf{x}})} \qquad (4)$$
for neuron $i$, which could be represented by the total current entering the unit. This would translate into a probability of firing given by the sigmoid activation function $f(I_i) = 1/(1 + e^{-I_i})$.
The total current entering a neuron is a sum of two terms: one term from the external input of the form $c_1\tilde{x}_i + c_2$ (with constants $c_1$ and $c_2$ determined by parameters $f$ and $r$ [16]), and one term from the recurrent input, of the form:
$$I_i^{rec} = \frac{1}{2(T-1)}\left[\big(W_| - \mu_W^{(0)}\big)^T C^{-1}\big(W_| - \mu_W^{(0)}\big) - \big(W_| - \mu_W^{(1)}\big)^T C^{-1}\big(W_| - \mu_W^{(1)}\big)\right] \qquad (5)$$
where $\mu_W^{(0/1)} = \Omega_|(\mathbf{x}^{(0/1)}) + (T-1)\mu$ and $\mathbf{x}^{(0/1)}$ is the vector of activities obtained from $\mathbf{x}$ in which the activity of neuron $i$ is set to 0, or 1, respectively.
²For notational convenience, we use a column-vector form of the matrix of weight changes, $\Omega_|$, and of the weight matrix, $W_|$, marked by subscript $|$.
It is easy to see that for the covariance rule, $\Omega(x_i, x_j) = (x_i - f)(x_j - f)$, synapses sharing a single pre- or post-synaptic partner happen to be uncorrelated (Fig. 1b). Moreover, as for any (anti-)symmetric additive learning rule, reciprocal connections are perfectly correlated ($W_{ij} = W_{ji}$). The (non-degenerate part of the) covariance matrix in this case becomes diagonal, and the total current in optimal retrieval reduces to simple linear dynamics:
$$I_i = \underbrace{\frac{1}{(T-1)\,\sigma_W^2}\sum_j W_{ij}\,x_j}_{\text{recurrent input}} - \underbrace{\frac{(1-2f)^2}{2}\sum_j x_j}_{\text{feedback inhibition}} - \underbrace{\frac{1-2f}{2}\sum_j W_{ij}}_{\text{homeostatic term}} - \underbrace{c_0(f)}_{\text{constant}} \qquad (6)$$
where $\sigma_W^2$ is the variance of a synaptic weight resulting from storing a single pattern. This term
includes a contribution from recurrent excitatory input, dynamic feedback inhibition (proportional
to the total population activity) and a homeostatic term that reduces neuronal excitability as function
of the net strength of its synapses (a proxy for average current the neuron expects to receive) [17].
Reassuringly, the optimal decoder for the covariance rule recovers a form for the input current that is
closely related to classic Hopfield-like [5] dynamics (with external field [1, 18]): feedback inhibition
is needed only when the stored patterns are not balanced ($f \neq 0.5$); for the balanced case, the
homeostatic term can be integrated in the recurrent current, by rewriting neural activities as spins.
In sum, for the covariance rule, synapses are fortuitously uncorrelated (except for symmetric pairs
which are perfectly correlated), and thus simple, classical linear recall dynamics suffice (Fig. 1c).
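As an illustration, the following sketch (entirely our construction, with illustrative parameter values) performs Gibbs-sampling recall for the covariance rule, computing the recurrent current directly from the quadratic form of Eq. 5 with the diagonal covariance; for simplicity it counts both directions of each symmetric synapse, which only rescales the evidence by a constant factor:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, f, r = 100, 5, 0.5, 0.2             # network size, patterns, density, cue noise
X = (rng.uniform(size=(T, N)) < f).astype(float)
W = (X - f).T @ (X - f)                    # covariance rule
np.fill_diagonal(W, 0)
target = X[0]
cue = np.where(rng.uniform(size=N) < r, 1 - target, target)

sigma2 = (f * (1 - f)) ** 2                # per-synapse variance of one storage event
prior = np.log(f / (1 - f))
cue_term = np.where(cue == 1, np.log((1 - r) / r), np.log(r / (1 - r)))

def rec_current(x, i):
    # Eq. 5 with diagonal C: only row/column i differs between the hypotheses.
    w = np.delete(W[i], i)
    xj = np.delete(x, i) - f
    d0 = w + f * xj                        # W - Omega(x with x_i = 0)
    d1 = w - (1 - f) * xj                  # W - Omega(x with x_i = 1)
    return (d0 @ d0 - d1 @ d1) / ((T - 1) * sigma2)

x = cue.copy()
mean, n_keep = np.zeros(N), 0
for sweep in range(200):
    for i in rng.permutation(N):
        I = prior + cue_term[i] + rec_current(x, i)
        x[i] = float(rng.uniform() < 1.0 / (1.0 + np.exp(-I)))
    if sweep >= 100:                       # discard burn-in, average samples
        mean += x
        n_keep += 1
print(np.mean((mean / n_keep > 0.5) != target))   # recall error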
The covariance rule is, however, the exception rather than the rule. For example, for simple Hebbian
learning, $\Omega(x_i, x_j) = x_i\,x_j$, synapses sharing a pre- or post-synaptic partner are correlated (Fig. 1d)
and so the covariance matrix C is no longer diagonal. Interestingly, the final expression of the
recurrent current to a neuron remains strictly local (because of additivity and symmetry), and very
similar to Eq. 6, but feedback inhibition becomes a non-linear function of the total activity in the
network [16]. In this case, synaptic correlations have a dramatic effect: using the optimal non-linear
dynamics ensures high performance, but trying to retrieve information using a decoder that assumes
synaptic independence (and thus uses linear dynamics) yields extremely poor performance, which
is even worse than the obvious control of relying only on the information in the recall cue and the
prior over patterns (Fig. 1e).
For the generalized Hebbian case, $\Omega(x_i, x_j) = (x_i - \theta)(x_j - \phi)$ with $\theta \neq \phi$, the optimal decoder becomes even more complex, with the total current including additional terms accounting for pairwise correlations between any two synapses that have neuron $i$ as a pre- or post-synaptic partner [16]. Hence, retrieval is no longer strictly local³ and a biological implementation will require approximating the contribution of non-local terms as a function of locally available information, as we discuss in detail for palimpsest learning below.
4 Palimpsest learning rules
Though additive learning rules are attractive for their analytical tractability, they ignore several important aspects of synaptic plasticity, e.g. they assume that synapses can grow without bound. We
investigate the effects of bounded weights by considering another class of learning rules, which assumes synaptic efficacies can only take binary values, with stochastic transitions between the two
underpinned by paired cascades of latent internal states [14] (Fig. 2). These learning rules, though
very simple, capture an important aspect of memory: the fact that memory is leaky, and information
about the past is overwritten by newly stored items (usually referred to as the palimpsest property).
Additionally, such rules can account for experimentally observed synaptic metaplasticity [15].
³For additive learning rules, the current to neuron $i$ always depends only on synapses local to a neuron, but these can also include outgoing synapses, whose weights, $W_{\cdot i}$, should not influence its dynamics. We refer to such dynamics as "semi-local". For other learning rules, the optimal current to neuron $i$ may depend on all connections in the network, including $W_{jk}$ with $j, k \neq i$ ("non-local" dynamics).
[Figure 2 panels omitted: (a) cascade-model schematic; (b) rule variants R1-R3 mapping pre/post activity to potentiation (P) and depression (D); (c) correlation coefficients vs. cortex data (Song 2005); (d) retrieval error (%) under pseudostorage vs. correlated storage, for simple, exact, approximate, and correlation-dependent dynamics.]
Figure 2: Palimpsest learning. a. The cascade model. Colored circles are latent states ($V$) that belong to two different synaptic weights ($W$), arrows are state transitions (blue: depression, red: potentiation). b. Different variants of mapping pre- and post-synaptic activations to depression (D) and potentiation (P): R1, postsynaptically gated; R2, presynaptically gated; R3, XOR rule. c. Correlation structure induced by these learning rules. d. Retrieval performance for each rule.
Learning rule
Learning is stochastic and local, with changes in the state of a synapse Vij being determined only by
the activation of the pre- and post-synaptic neurons, xj and xi . In general, one could define separate
transition matrices for each activity pattern, M(xi , xj ), describing the probability of a synaptic state
transitioning between any two states Vij to Vij0 following an activity pattern, (xi , xj ). For simplicity,
we define only two such matrices, for potentiation, $M^+$, and depression, $M^-$, respectively, and then map different activity patterns to these events. In particular, we assume Fusi's cascade model [14]⁴ and three possible mappings (Fig. 2b [16]): 1) a postsynaptically gated learning rule, where changes occur only when the postsynaptic neuron is active, with co-activation of pre- and post-synaptic neurons leading to potentiation, and to depression otherwise⁵; 2) a presynaptically gated learning rule, typically assumed when analysing cascades [20, 21]; and 3) an XOR-like learning rule which assumes potentiation occurs whenever the pre- and post-synaptic activity levels are the same, with depression
otherwise. The last rule, proposed by Ref. 22, was specifically designed to eliminate correlations
between synapses, and can be viewed as a version of the classic covariance rule fashioned for binary
synapses.
Estimating the mean and covariance of synaptic weights
At the level of a single synapse, the presentation of a sequence of uncorrelated patterns from $P_{store}(\mathbf{x})$ corresponds to a Markov random walk, defined by a transition matrix $M$ which averages over possible neural activity patterns: $M = \sum_{x_i, x_j} P_{store}(x_i)\cdot P_{store}(x_j)\cdot M(x_i, x_j)$. The distribution over synaptic states $t$ steps after the initial encoding can be calculated by starting from the stationary distribution of the weights $\pi_V^0$ (assuming a large number of other patterns have previously been stored; formally, this is the eigenvector of $M$ corresponding to eigenvalue $\lambda = 1$), then storing the pattern $(x_i, x_j)$, and finally $t-1$ other patterns from the prior:
$$\pi_V(x_i, x_j, t) = M^{t-1}\cdot M(x_i, x_j)\cdot\pi_V^0, \qquad (7)$$
with the distribution over states given as a column vector, $\pi_V^l = P(V_{ij} = l\,|\,x_i, x_j)$, $l \in \{1 \ldots 2n\}$, where $n$ is the depth of the cascade. Lastly, the distribution over weights, $P(W_{ij}|x_i, x_j)$, can be derived as $\pi_W = M_{V\to W}\cdot\pi_V$, where $M_{V\to W}$ is a deterministic map from states to observed weights (Fig. 2a).
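A sketch of this computation (ours), for a toy two-state synapse, i.e. a cascade of depth $n = 1$, with a postsynaptically gated rule; the transition probability q is illustrative, not a value from the paper:

```python
import numpy as np

def stationary(M):
    """Eigenvector of the column-stochastic matrix M with eigenvalue 1."""
    w, v = np.linalg.eig(M)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def state_dist(M_event, M_avg, t):
    """Eq. 7: encode (x_i, x_j) into the stationary synapse, then store
    t-1 further random patterns."""
    return np.linalg.matrix_power(M_avg, t - 1) @ M_event @ stationary(M_avg)

q, f = 0.3, 0.2
M_plus  = np.array([[1 - q, 0], [q, 1]], float)   # depressed -> potentiated
M_minus = np.array([[1, q], [0, 1 - q]], float)   # potentiated -> depressed

def M_event(xi, xj):        # R1: changes only if the postsynaptic cell fires
    if xi == 0:
        return np.eye(2)
    return M_plus if xj == 1 else M_minus

M_avg = sum(Pi * Pj * M_event(xi, xj)
            for xi, Pi in [(0, 1 - f), (1, f)]
            for xj, Pj in [(0, 1 - f), (1, f)])
print(state_dist(M_event(1, 1), M_avg, t=10))  # P(V | potentiation 10 patterns ago)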
As in the additive case, the states of synapses sharing a pre- or post- synaptic partner will be correlated (Figs. 1a, 2c). The degree of correlations for different synaptic configurations can be estimated
by generalising the above procedure to computing the joint distribution of the states of pairs of
synapses, which we represent as a matrix $\Lambda$. E.g. for a pair of synapses sharing a postsynaptic partner (Figs. 1b, d, and 2c), element $(u, v)$ is $\Lambda_{uv} = P(V_{post,pre1} = u, V_{post,pre2} = v)$. Hence, the presentation of an activity pattern $(x_{pre1}, x_{pre2}, x_{post})$ induces changes in the corresponding pair of incoming synapses to neuron post as $\Lambda^{(1)} = M(x_{post}, x_{pre1})\cdot\Lambda^{(0)}\cdot M(x_{post}, x_{pre2})^T$, where $\Lambda^{(0)}$ is the stationary distribution corresponding to storing an infinite number of triplets from the pattern distribution [16].
⁴Other models, e.g. serial [19], could be used as well without qualitatively affecting the results.
⁵One could argue that this is the most biologically relevant, as plasticity is often NMDA-receptor dependent, and hence requires postsynaptic depolarisation for any effect to occur.
Replacing $\pi_V$ with $\Lambda$ (which is now a function of the triplet $(x_{pre1}, x_{pre2}, x_{post})$), and the multiplication by $M$ with the slightly more complicated operator above, we can estimate the evolution of the joint distribution over synaptic states in a manner very similar to Eq. 7:
$$\Lambda^{(t)} = \sum_{x_i} P_{store}(x_i)\cdot\bar{M}(x_i)\cdot\Lambda^{(t-1)}\cdot\bar{M}(x_i)^T, \qquad (8)$$
where $\bar{M}(x_i) = \sum_{x_j} P_{store}(x_j)\,M(x_i, x_j)$. Also as above, the final joint distribution over states can be mapped into a joint distribution over synaptic weights as $M_{V\to W}\cdot\Lambda^{(t)}\cdot M_{V\to W}^T$. This approach can be naturally extended to all other correlated pairs of synapses [16].
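Continuing the toy example above, Eq. 8 can be iterated to obtain the stationary joint distribution and the correlations it induces between two synapses that share their postsynaptic neuron (again, all parameter values are illustrative):

```python
import numpy as np

q, f = 0.3, 0.2
M_plus  = np.array([[1 - q, 0], [q, 1]], float)
M_minus = np.array([[1, q], [0, 1 - q]], float)

def M_event(x_post, x_pre):        # same toy R1 rule as above
    if x_post == 0:
        return np.eye(2)
    return M_plus if x_pre == 1 else M_minus

def M_bar(x_post):                 # average over the presynaptic activity
    return (1 - f) * M_event(x_post, 0) + f * M_event(x_post, 1)

def step(L):                       # Eq. 8: one more random pattern
    return sum(p * (M_bar(xp) @ L @ M_bar(xp).T)
               for xp, p in [(0, 1 - f), (1, f)])

L = np.full((2, 2), 0.25)
for _ in range(1000):              # run to the stationary joint Lambda^(0)
    L = step(L)
# Encode a triplet (x_pre1, x_pre2, x_post) = (1, 1, 1), then t-1 = 9 more.
L = M_event(1, 1) @ L @ M_event(1, 1).T
for _ in range(9):
    L = step(L)
# Nonzero covariance = the shared-neuron correlations of Fig. 2c.
print(L[1, 1] - L.sum(axis=1)[1] * L.sum(axis=0)[1])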
The structure of correlations for different synaptic pairs varies significantly as a function of the
learning rule (Fig. 2c), with the overall degree of correlations depending on a range of factors.
Correlations tend to decrease with cascade depth and pattern sparsity. The first two variants of the
learning rule considered are not symmetric, and so induce different patterns of correlations than the
additive rules above. The XOR rule is similar to the covariance rule, but the reciprocal connections
are no longer perfectly correlated (due to metaplasticity), which means that it is no longer possible
to factorize P(W|x). Hence, assuming independence at decoding seems bound to introduce errors.
Approximately optimal retrieval when synapses are independent
If we ignore synaptic correlations, the evidence from the weights factorizes, $P(W|\mathbf{x}) = \prod_{i,j} P(W_{ij}|x_i, x_j)$, and so the exact dynamics would be semi-local³. We can further approximate the contribution of the outgoing weights by its mean, which recovers the same simple dynamics derived for the additive case:
$$I_i = \log\frac{P(x_i = 1|\mathbf{x}_{-i}, W, \tilde{\mathbf{x}})}{P(x_i = 0|\mathbf{x}_{-i}, W, \tilde{\mathbf{x}})} = c_1\sum_j W_{ij}\,x_j + c_2\sum_j W_{ij} + c_3\sum_j x_j + c_4\,\tilde{x}_i + c_5 \qquad (9)$$
The parameters $c_{1\ldots 5}$ depend on the prior over $\mathbf{x}$, the noise model, the parameters of the learning rule and $t$. Again, the optimal decoder is similar to previously derived attractor dynamics; in particular, for stochastic binary synapses with presynaptically gated learning the optimal dynamics require dynamic inhibition only for sparse patterns, and no homeostatic term, as used in [21].
To validate these dynamics, we remove synaptic correlations by a pseudo-storage procedure in which
synapses are allowed to evolve independently according to transition matrix M, rather than changing
as actual intermediate patterns are stored. The dynamics work well in this case, as expected (Fig. 2d,
blue bars). However, when storing actual patterns drawn from the prior, performance becomes extremely poor, and often worse than the control (Fig. 2d, gray bars). Moreover, performance worsens
as the network size increases (not shown). Hence, ignoring correlations is highly detrimental for this
class of learning rules too.
Approximately optimal retrieval when synapses are correlated
To accommodate synaptic correlations, we approximate $P(W|\mathbf{x})$ with a maximum entropy distribution with the same marginals and covariance structure, ignoring the higher order moments.⁶ Specifically, we assume the evidence from the weights has the functional form:
$$P(W|\mathbf{x}, t) = \frac{1}{Z(\mathbf{x}, t)}\exp\Big(\sum_{ij} k_{ij}(\mathbf{x}, t)\,W_{ij} + \frac{1}{2}\sum_{ijkl} J_{(ij)(kl)}(\mathbf{x}, t)\,W_{ij}\,W_{kl}\Big) \qquad (10)$$
We use the TAP mean-field method [23] to find parameters k and J and the partition function, Z,
for each possible activity pattern x, given the mean and covariance for the synaptic weights matrix,
computed above7 [16].
⁶This is just a generalisation of the simple dynamics, which assume a first-order max entropy model; moreover, the resulting weight distribution is a binary analog of the multivariate normal used in the additive case, allowing the two to be directly compared.
⁷Here, we ask whether it is possible to accommodate correlations in appropriate neural dynamics at all, ignoring the issue of how the optimal values for the parameters of the network dynamics would come about.
[Figure 3 panels omitted: learned current parameters as functions of phi_i and phi_ij for R1 and R2, with and without correlations; postsynaptic current vs. number of coactive inputs; normalized EPSP vs. number of inputs at tip/middle/base dendritic locations.]
Figure 3: Implications for neural dynamics. a. R1: parameters for $I_i^{rec}$; linear modulation by network activity, $n_b$. b. R2: nonlinear modulation of pairwise term by network activity (cf. middle panel in a); other parameters have linear dependences on $n_b$. c. R1: Total current as function of number of coactivated inputs, $\sum_j W_{ij} x_j$; lines: different levels of neural excitability $\sum_j W_{ij}$, line widths scale with frequency of occurrence in a sample run. d. Same for R2. e. Nonlinear integration in dendrites, reproduced from [11], cf. curves in c.
Exact retrieval dynamics based on Eq. 10, but not respecting locality constraints, work substantially
better in the presence of synaptic correlations, for all rules (Fig. 2d, yellow bars). It is important to
note that for the XOR rule, which was supposed to be the closest analog to the covariance rule and
hence afford simple recall dynamics [22], error rates stay above control, suggesting that it is actually
a case in which even dependencies beyond 2nd-order correlation would need to be considered.
As in the additive case, exact recall dynamics are biologically implausible, as the total current to
the neuron depends on the full weight matrix. It is possible to approximate the dynamics using
strictly local information by replacing the nonlocal term by its mean, which, however, is no longer a constant, but rather a linear function of the total activity in the network, $n_b = \sum_{j\neq i} x_j$ [16]. Under this approximation, the current from recurrent connections corresponding to the evidence from the weights becomes:
$$I_i^{rec} = \log\frac{P(W|\mathbf{x}^{(1)})}{P(W|\mathbf{x}^{(0)})} = \sum_j k^{\Delta}_{ij}(\mathbf{x})\,W_{ij} + \frac{1}{2}\sum_{jk} J^{\Delta}_{(ij)(ik)}(\mathbf{x})\,W_{ij}\,W_{ik} - Z^{\Delta} \qquad (11)$$
where $i$ is the index of the neuron to be updated, and the $\mathbf{x}^{(0/1)}$ activity vectors have the to-be-updated neuron's activity set to 1 or 0, respectively, and all other components given by the current network state. The functions $k^{\Delta}_{ij}(\mathbf{x}) = k_{ij}(\mathbf{x}^{(1)}) - k_{ij}(\mathbf{x}^{(0)})$, $J^{\Delta}_{(ij)(kl)}(\mathbf{x}) = J_{(ij)(kl)}(\mathbf{x}^{(1)}) - J_{(ij)(kl)}(\mathbf{x}^{(0)})$, and $Z^{\Delta} = \log Z(\mathbf{x}^{(1)}) - \log Z(\mathbf{x}^{(0)})$ depend on the local activity at the indexed synapses, modulated by the number of active neurons in the network, $n_b$. This approximation is again consistent with our previous analysis, i.e. in the absence of synaptic correlations, the complex dynamics
recover the simple case presented before. Importantly, this approximation also does about as well as
exact dynamics (Fig. 2d, red bars).
For post-synaptically gated learning, comparing the parameters of the dynamics in the case of independent versus correlated synapses (Fig. 3a) reveals a modest modulation of the recurrent input by
the total activity. More importantly, the net current to the postsynaptic neuron depends non-linearly (formally, quadratically) on the number of co-active inputs, $n_{W1} = \sum_j x_j W_{ij}$ (Fig. 3c), which
is reminiscent of experimentally observed dendritic non-linearities [11] (Fig. 3e). Conversely, for
the presynaptically gated learning rule, approximately optimal dynamics predict a non-monotonic
modulation of activity by lateral inhibition (Fig. 3b), but linear neural integration (Fig. 3d).8 Lastly,
retrieval based on the XOR rule has the same form as the simple dynamics derived for the factorized
case [16]. However, the total current has to be rescaled to compensate for the correlations introduced
by reciprocal connections.
⁸The difference between the two rules emerges exclusively because of the constraint of strict locality of the approximation, since the exact form of the dynamics is essentially the same for the two.
RULE                           EXACT DYNAMICS             NEURAL IMPLEMENTATION
additive  covariance           strictly local, linear     linear feedback inh., homeostasis
additive  simple Hebbian       strictly local, nonlinear  nonlinear feedback inh.
additive  generalized Hebbian  semi-local, nonlinear      nonlinear feedback inh.
cascade   presyn. gated        nonlocal, nonlinear        nonlinear feedback inh., linear dendritic integr.
cascade   postsyn. gated       nonlocal, nonlinear        linear feedback inh., non-linear dendritic integr.
cascade   XOR                  beyond correlations        --

Table 1: Results summary: circuit adaptations against correlations for different learning rules.
5 Discussion
Statistical dependencies between synaptic efficacies are a natural consequence of activity dependent
synaptic plasticity, and yet their implications for network function have been unexplored. Here, in
the context of an auto-associative memory network, we investigated the patterns of synaptic correlations induced by several well-known learning rules and their consequent effects on retrieval. We
showed that most rules considered do indeed induce synaptic correlations and that failing to take
them into account greatly damages recall. One fortuitous exception is the covariance rule, for which
there are no synaptic correlations. This might explain why the bulk of classical treatments of autoassociative memories, using the covariance rule, could achieve satisfying capacity levels despite
overlooking the issue of synaptic correlations [5, 24, 25].
In general, taking correlations into account optimally during recall requires dynamics in which there
are non-local interactions between neurons. However, we derived approximations that perform well
and are biologically realisable without such non-locality (Table 1). Examples include the modulation of neural responses by the total activity of the population, which could be mediated by feedback
inhibition, and specific dendritic nonlinearities. In particular, for the post-synaptically gated learning rule, which may be viewed as an abstract model of hippocampal NMDA receptor-dependent
plasticity, our model predicts a form of non-linear mapping of recurrent inputs into postsynaptic
currents which is similar to experimentally observed dendritic integration in cortical pyramidal cells
[11]. In general, the tight coupling between the synaptic plasticity used for encoding (manifested
in patterns of synaptic correlations) and circuit dynamics offers an important route for experimental
validation [2].
None of the rules governing synaptic plasticity that we considered perfectly reproduced the pattern
of correlations in [6]; and indeed, exactly which rule applies in what region of the brain under which
neuromodulatory influences is unclear. Furthermore, results in [6] concern the neocortex rather
than the hippocampus, which is a more common target for models of auto-associative memory.
Nonetheless, our analysis has shown that synaptic correlations matter for a range of very different
learning rules that span the spectrum of empirical observations.
Another strategy to handle the negative effects of synaptic correlations is to weaken or eliminate
them. For instance, in the palimpsest synaptic model [14], the deeper the cascade, the weaker the
correlations, and so metaplasticity may have the beneficial effect of making recall easier. Another,
popular, idea is to use very sparse patterns [21], although this reduces the information content of
each one. More speculatively, one might imagine a process of off-line synaptic pruning or recoding,
in which strong correlations are removed or the weights adjusted so that simple recall methods will
work.
Here, we focused on second-order correlations. However, for plasticity rules such as XOR, we
showed that this does not suffice. Rather, higher-order correlations would need to be considered,
and thus, presumably higher-order interactions between neurons approximated. Finally, we know
from work on neural coding of sensory stimuli that there are regimes in which correlations either
help or hurt the informational quality of the code, assuming that decoding takes them into account.
Given our results, it becomes important to look at the relative quality of different plasticity rules,
assuming realizable decoding; it is not clear whether rules that strive to eliminate correlations will
be bested by ones that do not.
Acknowledgments This work was supported by the Wellcome Trust (CS, ML), the Gatsby Charitable Foundation (PD), and the European Union Seventh Framework Programme (FP7/2007-2013)
under grant agreement no. 269921 (BrainScaleS) (ML).
References
1. Sommer, F.T. & Dayan, P. Bayesian retrieval in associative memories with storage errors. IEEE Transactions on Neural Networks 9, 705-713 (1998).
2. Lengyel, M., Kwag, J., Paulsen, O. & Dayan, P. Matching storage and recall: hippocampal spike timing-dependent plasticity and phase response curves. Nature Neuroscience 8, 1677-1683 (2005).
3. Lengyel, M. & Dayan, P. Uncertainty, phase and oscillatory hippocampal recall. Advances in Neural Information Processing (2007).
4. Savin, C., Dayan, P. & Lengyel, M. Two is better than one: distinct roles for familiarity and recollection in retrieving palimpsest memories. In Advances in Neural Information Processing Systems 24 (MIT Press, Cambridge, MA, 2011).
5. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 79, 2554-2558 (1982).
6. Song, S., Sjöström, P.J., Reigl, M., Nelson, S. & Chklovskii, D.B. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biology 3, e68 (2005).
7. Dayan, P. & Abbott, L. Theoretical Neuroscience (MIT Press, 2001).
8. Averbeck, B.B., Latham, P.E. & Pouget, A. Neural correlations, population coding and computation. Nature Reviews Neuroscience 7, 358-366 (2006).
9. Pillow, J.W. et al. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature 454, 995-999 (2008).
10. Latham, P.E. & Nirenberg, S. Synergy, redundancy, and independence in population codes, revisited. Journal of Neuroscience 25, 5195-5206 (2005).
11. Branco, T. & Häusser, M. Synaptic integration gradients in single cortical pyramidal cell dendrites. Neuron 69, 885-892 (2011).
12. Hasselmo, M.E. & Bower, J.M. Acetylcholine and memory. Trends Neurosci. 16, 218-222 (1993).
13. MacKay, D.J.C. Maximum entropy connections: neural networks. In Maximum Entropy and Bayesian Methods, Laramie, 1990 (eds. Grandy, W.T., Jr & Schick, L.H.) 237-244 (Kluwer, Dordrecht, The Netherlands, 1991).
14. Fusi, S., Drew, P.J. & Abbott, L.F. Cascade models of synaptically stored memories. Neuron 45, 599-611 (2005).
15. Abraham, W.C. Metaplasticity: tuning synapses and networks for plasticity. Nature Reviews Neuroscience 9, 387 (2008).
16. For details, see Supplementary Information.
17. Zhang, W. & Linden, D. The other side of the engram: experience-driven changes in neuronal intrinsic excitability. Nature Reviews Neuroscience (2003).
18. Engel, A., Englisch, H. & Schütte, A. Improved retrieval in neural networks with external fields. Europhysics Letters (EPL) 8, 393-397 (1989).
19. Leibold, C. & Kempter, R. Sparseness constrains the prolongation of memory lifetime via synaptic metaplasticity. Cerebral Cortex 18, 67-77 (2008).
20. Amit, Y. & Huang, Y. Precise capacity analysis in binary networks with multiple coding level inputs. Neural Computation 22, 660-688 (2010).
21. Huang, Y. & Amit, Y. Capacity analysis in multi-state synaptic models: a retrieval probability perspective. Journal of Computational Neuroscience (2011).
22. Dayan Rubin, B. & Fusi, S. Long memory lifetimes require complex synapses and limited sparseness. Frontiers in Computational Neuroscience (2007).
23. Thouless, D.J., Anderson, P.W. & Palmer, R.G. Solution of 'Solvable model of a spin glass'. Philosophical Magazine 35, 593-601 (1977).
24. Amit, D., Gutfreund, H. & Sompolinsky, H. Storing infinite numbers of patterns in a spin-glass model of neural networks. Phys. Rev. Lett. 55, 1530-1533 (1985).
25. Treves, A. & Rolls, E.T. What determines the capacity of autoassociative memories in the brain? Network 2, 371-397 (1991).
Subhaneil Lahiri and Surya Ganguli
Department of Applied Physics, Stanford University, Stanford CA
[email protected], [email protected]
Abstract
An incredible gulf separates theoretical models of synapses, often described solely
by a single scalar value denoting the size of a postsynaptic potential, from the
immense complexity of molecular signaling pathways underlying real synapses.
To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse
from a single scalar to an entire dynamical system with many internal molecular
functional states. Moreover, theoretical considerations alone demand such an expansion; network models with scalar synapses assuming finite numbers of distinguishable synaptic strengths have strikingly limited memory capacity. This raises
the fundamental question, how does synaptic complexity give rise to memory? To
address this, we develop new mathematical theorems elucidating the relationship
between the structural organization and memory properties of complex synapses
that are themselves molecular networks. Moreover, in proving such theorems, we
uncover a framework, based on first passage time theory, to impose an order on
the internal states of complex synaptic models, thereby simplifying the relationship between synaptic structure and function.
1
Introduction
It is widely thought that our very ability to remember the past over long time scales depends crucially
on our ability to modify synapses in our brain in an experience dependent manner. Classical models
of synaptic plasticity model synaptic efficacy as an analog scalar value, denoting the size of a postsynaptic potential injected into one neuron from another. Theoretical work has shown that such
models have a reasonable, extensive memory capacity, in which the number of long term associations
that can be stored by a neuron is proportional its number of afferent synapses [1?3]. However,
recent experimental work has shown that many synapses are more digital than analog; they cannot
robustly assume an infinite continuum of analog values, but rather can only take on a finite number
of distinguishable strengths, a number than can be as small as two [4?6] (though see [7]). This
one simple modification leads to a catastrophe in memory capacity: classical models with digital
synapses, when operating in a palimpset mode in which the ongoing storage of new memories can
overwrite previous memories, have a memory capacity proportional to the logarithm of the number
of synapses [8, 9]. Intuitively, when synapses are digital, the storage of a new memory can flip
a population of synaptic switches, thereby rapidly erasing previous memories stored in the same
synaptic population. This result indicates that the dominant theoretical basis for the storage of long
term memories in modifiable synaptic switches is flawed.
Recent work [10?12] has suggested that a way out of this logarithmic catastrophe is to expand our
theoretical conception of a synapse from a single scalar value to an entire stochastic dynamical system in its own right. This conceptual expansion is further necessitated by the experimental reality
that synapses contain within them immensely complex molecular signaling pathways, with many internal molecular functional states (e.g. see [4, 13, 14]). While externally, synaptic efficacy could be
digital, candidate patterns of electrical activity leading to potentiation or depression could yield transitions between these internal molecular states without necessarily inducing an associated change in
1
synaptic efficacy. This form of synaptic change, known as metaplasticity [15, 16], can allow the
probability of synaptic potentiation or depression to acquire a rich dependence on the history of
prior changes in efficacy, thereby potentially improving memory capacity.
Theoretical studies of complex, metaplastic synapses have focused on analyzing the memory performance of a limited number of very specific molecular dynamical systems, characterized by a
number of internal states in which potentiation and depression each induce a specific set of allowable transitions between states (e.g. see Figure 1 below). While these models can vastly outperform
simple binary synaptic switches, these analyses leave open several deep and important questions.
For example, how does the structure of a synaptic dynamical system determine its memory performance? What are the fundamental limits of memory performance over the space of all possible
synaptic dynamical systems? What is the structural organization of synaptic dynamical systems that
achieve these limits? Moreover, from an experimental perspective, it is unlikely that all synapses
can be described by a single canonical synaptic model; just like the case of neurons, there is an
incredible diversity of molecular networks underlying synapses both across species and across brain
regions within a single organism [17]. In order to elucidate the functional contribution of this diverse molecular complexity to learning and memory, it is essential to move beyond the analysis of
specific models and instead develop a general theory of learning and memory for complex synapses.
Moreover, such a general theory of complex synapses could aid in development of novel artificial
memory storage devices.
Here we initiate such a general theory by proving upper bounds on the memory curve associated with
any synaptic dynamical system, within the well established ideal observer framework of [10, 11, 18].
Along the way we develop principles based on first passage time theory to order the structure of
synaptic dynamical systems and relate this structure to memory performance. We summarize our
main results in the discussion section.
2
Overall framework: synaptic models and their memory curves
In this section, we describe the class of models of synaptic plasticity that we are studying and how
we quantify their memory performance. In the subsequent sections, we will find upper bounds on
this performance.
We use a well established formalism for the study of learning and memory with complex synapses
(see [10, 11, 18]). In this approach, electrical patterns of activity corresponding to candidate potentiating and depressing plasticity events occur randomly and independently at all synapses at a
Poisson rate r. These events reflect possible synaptic changes due to either spontaneous network
activity, or the storage of new memories. We let f pot and f dep denote the fraction of these events that
are candidate potentiating or depressing events respectively. Furthermore, we assume our synaptic
model has M internal molecular functional states, and that a candidate potentiating (depotentiating) event induces a stochastic transition in the internal state described by an M ? M discrete time
Markov transition matrix Mpot (Mdep ). In this framework, the states of different synapses will be
independent, and the entire synaptic population can be fully described by the probability distribution
across these states, which we will indicate with the row-vector p(t). Thus the i-th component of
p(t) denotes the fraction of the synaptic population in state i. Furthermore, each state i has its own
synaptic weight, w_i, which we take, in the worst case scenario, to be restricted to two values. After shifting and scaling these two values, we can assume they are ±1, without loss of generality.
We also employ an "ideal observer" approach to the memory readout, where the synaptic weights
are read directly. This provides an upper bound on the quality of any readout using neural activity.
For any single memory, stored at time t = 0, we assume there will be an ideal pattern of synaptic
weights across a population of N synapses, the N-element vector w_ideal, that is +1 at all synapses that experience a candidate potentiation event, and −1 at all synapses that experience a candidate depression event at the time of memory storage. We assume that any pattern of synaptic weights close to w_ideal is sufficient to recall the memory. However, the actual pattern of synaptic weights at some later time, t, will change to w(t) due to further modifications from the storage of subsequent memories. We can use the overlap between these, w_ideal · w(t), as a measure of the quality of the memory. As t → ∞, the system will return to its steady state distribution which will be uncorrelated
Figure 1: Models of complex synapses. (a) The cascade model of [10], showing transitions between
states of high/low synaptic weight (red/blue circles) due to potentiation/depression (solid red/dashed
blue arrows). (b) The serial model of [12]. (c) The memory curves of these two models, showing
the decay of the signal-to-noise ratio (to be defined in Section 2) as subsequent memories are stored.
with the memory stored at t = 0. The probability distribution of the quantity w_ideal · w(∞) can be used as a "null model" for comparison.
The extent to which the memory has been stored is described by a signal-to-noise ratio (SNR) [10, 11]:

    SNR(t) = [ ⟨w_ideal · w(t)⟩ − ⟨w_ideal · w(∞)⟩ ] / √Var(w_ideal · w(∞))    (1)

The noise in the denominator is essentially √N. There is a correction when potentiation and depression are imbalanced, but this will not affect the upper bounds that we will discuss below and will be ignored in the subsequent formulae.
A simple average memory curve can be derived as follows. All of the preceding plasticity events, prior to t = 0, will put the population of synapses in its steady-state distribution, p^∞. The memory we are tracking at t = 0 will change the internal state distribution to p^∞ M^pot (or p^∞ M^dep) in those synapses that experience a candidate potentiation (or depression) event. As the potentiating/depressing nature of the subsequent memories is independent of w_ideal, we can average over all sequences, resulting in the evolution of the probability distribution:

    dp(t)/dt = r p(t) W^F,    where W^F = f^pot M^pot + f^dep M^dep − I.    (2)

Here W^F is a continuous time transition matrix that models the process of forgetting the memory stored at time t = 0 due to random candidate potentiation/depression events occurring at each synapse due to the storage of subsequent memories. Its stationary distribution is p^∞.
This results in the following SNR:

    SNR(t) = √N 2f^pot f^dep p^∞ (M^pot − M^dep) e^{rtW^F} w.    (3)
A detailed derivation of this formula can be found in the supplementary material. We will frequently
refer to this function as the memory curve. It can be thought of as the excess fraction of synapses
(relative to equilibrium) that maintain their ideal synaptic strength at time t, as dictated by the stored
memory at time t = 0.
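As an illustration, here is a minimal sketch of Eqs. (2)-(3) in code. The serial model of Fig. 1b is used as the example, and the state count M = 4, r = 1, balanced f^pot = f^dep = 1/2, and N = 1000 are our own illustrative choices, not values taken from the paper.

    import numpy as np
    from scipy.linalg import expm

    M, r, fpot, fdep = 4, 1.0, 0.5, 0.5
    w = np.array([-1.0, -1.0, 1.0, 1.0])      # two weak, two strong states

    def serial(p):
        """Chain in which potentiation hops one state to the right with prob. p."""
        T = np.eye(M)
        for i in range(M - 1):
            T[i, i], T[i, i + 1] = 1 - p, p
        return T

    Mpot = serial(1.0)
    Mdep = serial(1.0)[::-1, ::-1]            # depression is the mirror image
    WF = fpot * Mpot + fdep * Mdep - np.eye(M)  # forgetting generator, Eq. (2)

    # stationary distribution: left null vector of WF, normalised
    vals, vecs = np.linalg.eig(WF.T)
    p_inf = np.real(vecs[:, np.argmin(np.abs(vals))])
    p_inf /= p_inf.sum()

    N = 1000
    def snr(t):                               # Eq. (3)
        return np.sqrt(N) * 2 * fpot * fdep * (
            p_inf @ (Mpot - Mdep) @ expm(r * t * WF) @ w)

    for t in [0.1, 1.0, 10.0]:
        print(t, snr(t))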
Much of the previous work on these types of complex synaptic models has focused on understanding
the memory curves of specific models, or choices of Mpot/dep . Two examples of these models are
shown in Figure 1. We see that they have different memory properties. The serial model performs
relatively well at one particular timescale, but it performs poorly at other times. The cascade model
does not perform quite as well at that time, but it maintains its performance over a wider range of
timescales.
In this work, rather than analyzing specific models, we take a different approach, in order to obtain
a more general theory. We consider the entire space of these models and find upper bounds on the
memory capacity of any of them. The space of models with a fixed number of internal states M is
parameterized by the pair of M × M discrete time stochastic transition matrices M^pot and M^dep, in addition to f^{pot/dep}. The parameters must satisfy the following constraints:
    M^{pot/dep}_{ij} ∈ [0, 1],    Σ_j M^{pot/dep}_{ij} = 1,    f^{pot/dep} ∈ [0, 1],    f^pot + f^dep = 1,
    p^∞ W^F = 0,    Σ_i p^∞_i = 1,    w_i = ±1.    (4)
The upper bounds on M^{pot/dep}_{ij} and f^{pot/dep} follow automatically from the other constraints.
The critical question is: what do these constraints imply about the space of achievable memory
curves in (3)? To answer this question, especially for limits on achievable memory at finite times, it
will be useful to employ the eigenmode decomposition:

    W^F = Σ_a (−q_a) u_a v_a,    v_a u_b = δ_ab,    W^F u_a = −q_a u_a,    v_a W^F = −q_a v_a.    (5)
Here q_a are the negative of the eigenvalues of the forgetting process W^F, u_a are the right (column) eigenvectors and v_a are the left (row) eigenvectors. This decomposition allows us to write the memory curve as a sum of exponentials,
    SNR(t) = √N Σ_a I_a e^{−rt/τ_a},    (6)
where I_a = (2f^pot f^dep) p^∞ (M^pot − M^dep) u_a v_a w and τ_a = 1/q_a. We can then ask the question: what are the constraints on these quantities, namely the eigenmode initial SNRs, I_a, and time constants, τ_a, implied by the constraints in (4)? We will derive some of these constraints in the next section.
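A short sketch of how the eigenmode parameters {I_a, τ_a} of Eq. (6) could be extracted numerically for a given model (the function and its defaults are our own; Mpot, Mdep, p_inf, and w are assumed to be built as in the earlier sketch, with r = 1):

    import numpy as np

    def eigenmodes(Mpot, Mdep, p_inf, w, fpot=0.5, fdep=0.5):
        WF = fpot * Mpot + fdep * Mdep - np.eye(len(w))
        lam, U = np.linalg.eig(WF)        # eigenvalues are -q_a
        V = np.linalg.inv(U)              # rows are the left eigenvectors v_a
        pref = 2 * fpot * fdep * p_inf @ (Mpot - Mdep)
        I = np.array([(pref @ U[:, a]) * (V[a] @ w) for a in range(len(w))])
        with np.errstate(divide="ignore"):
            tau = -1.0 / np.real(lam)     # stationary mode has q = 0, tau = inf
        return np.real(I), tau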
3 Upper bounds on achievable memory capacity
In the previous section, in (3) we have described an analytic expression for a memory curve as a
function of the structure of a synaptic dynamical system, described by the pair of stochastic transition
matrices Mpot/dep . Since the performance measure for memory is an entire memory curve, and not
just a single number, there is no universal scalar notion of optimal memory in the space of synaptic
dynamical systems. Instead there are tradeoffs between storing proximal and distal memories; often
attempts to increase memory at late (early) times by changing M^{pot/dep} incur a performance loss in memory at early (late) times in specific models considered so far [10-12]. Thus our end goal, achieved in Section 4, is to derive an envelope memory curve in the SNR-time plane, or a curve that forms
an upper-bound on the entire memory curve of any model. In order to achieve this goal, in this
section, we must first derive upper bounds, over the space of all possible synaptic models, on two
different scalar functions of the memory curve: its initial SNR, and the area under the memory curve.
In the process of upper-bounding the area, we will develop an essential framework to organize the
structure of synaptic dynamical systems based on first passage time theory.
3.1 Bounding initial SNR
We now give an upper bound on the initial SNR,

    SNR(0) = √N 2f^pot f^dep p^∞ (M^pot − M^dep) w,    (7)
over all possible models and also find the class of models that saturate this bound. A useful quantity
is the equilibrium probability flux between two disjoint sets of states, A and B:

    Φ_AB = Σ_{i∈A} Σ_{j∈B} r p^∞_i W^F_{ij}.    (8)
The initial SNR is closely related to the flux from the states with w_i = −1 to those with w_j = +1 (see supplementary material):

    SNR(0) ≤ 4√N Φ_{−+} / r.    (9)
This inequality becomes an equality if potentiation never decreases the synaptic weight and depression never increases it, which should be a property of any sensible model.
To maximize this flux, potentiation from a weak state must be guaranteed to end in a strong state,
and depression must do the reverse. An example of such a model is shown in Figure 2(a,b). These
models have a property known as "lumpability" (see [19, §6.3] for the discrete time version and
[20, 21] for continuous time). They are completely equivalent (i.e. have the same memory curve) as
a two state model with transition probabilities equal to 1, as shown in Figure 2(c).
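A small sketch of this two-state model, checking that the flux expression of Eq. (8) saturates the bound of Eq. (9) and that SNR(0) peaks at √N for balanced plasticity; N = 1000 and the sampled f^pot values are our own illustrative choices:

    import numpy as np

    N = 1000
    for fpot in [0.2, 0.5, 0.8]:
        fdep = 1 - fpot
        p_inf = np.array([fdep, fpot])            # equilibrium over (weak, strong)
        Mpot = np.array([[0.0, 1.0], [0.0, 1.0]]) # potentiation -> strong, surely
        Mdep = np.array([[1.0, 0.0], [1.0, 0.0]]) # depression  -> weak,  surely
        w = np.array([-1.0, 1.0])
        snr0 = np.sqrt(N) * 2 * fpot * fdep * p_inf @ (Mpot - Mdep) @ w
        flux = p_inf[0] * fpot                    # weak -> strong flux (r = 1), Eq. (8)
        print(fpot, snr0, 4 * np.sqrt(N) * flux)  # equality holds for this model
    print(np.sqrt(N))                             # the f = 1/2 maximum of Eq. (10)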
Figure 2: Synaptic models that maximize initial SNR. (a) For potentiation, all transitions starting
from a weak state lead to a strong state, and the probabilities for all transitions leaving a given weak
state sum to 1. (b) Depression is similar to potentiation, but with strong and weak interchanged.
(c) The equivalent two state model, with transition probabilities under potentiation and depression
equal to one.
This two state model has the equilibrium distribution p^∞ = (f^dep, f^pot) and its flux is given by Φ_{−+} = r f^pot f^dep. This is maximized when f^pot = f^dep = 1/2, leading to the upper bound:

    SNR(0) ≤ √N.    (10)

We note that while this model has high initial SNR, it also has very fast memory decay, with a timescale τ ≈ 1/r. As the synapse is very plastic, the initial memory is encoded very easily, but
the subsequent memories also overwrite it rapidly. This is one example of the tradeoff between
optimizing memory at early versus late times.
3.2 Imposing order on internal states through first passage times
Our goal of understanding the relationship between structure and function in the space of all possible
synaptic models is complicated by the fact that this space contains many different possible network
topologies, encoded in the nonzero matrix elements of Mpot/dep . To systematically analyze this
entire space, we develop an important organizing principle using the theory of first passage times
in the stochastic process of forgetting, described by WF . The mean first passage time matrix, Tij ,
is defined as the average time it takes to reach state j for the first time, starting from state i. The
diagonal elements are defined to be zero.
A remarkable theorem we will exploit is that the quantity

    τ̄ ≡ Σ_j T_ij p^∞_j,    (11)

known as Kemeny's constant (see [19, §4.4]), is independent of the starting state i. Intuitively, (11)
states that the average time it takes to reach any state, weighted by its equilibrium probability, is
independent of the starting state, implying a hidden constancy inherent in any stochastic process.
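Kemeny's constancy is easy to verify numerically. The sketch below uses the discrete-time version of the result for simplicity (the text's forgetting process W^F is continuous-time), computing mean first passage times via the fundamental matrix of Kemeny & Snell [19]; the random chain is an assumption for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    M = 5
    P = rng.random((M, M)); P /= P.sum(1, keepdims=True)  # row-stochastic chain

    vals, vecs = np.linalg.eig(P.T)                       # stationary distribution
    p = np.real(vecs[:, np.argmax(np.real(vals))]); p /= p.sum()

    Z = np.linalg.inv(np.eye(M) - P + np.outer(np.ones(M), p))  # fundamental matrix
    T = (np.diag(Z)[None, :] - Z) / p[None, :]            # T[i, j], zero diagonal

    kemeny = T @ p                                        # Eq. (11) for each start i
    print(np.round(kemeny, 10))                           # identical entries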
In the context of complex synapses, we can define the partial sums

    η_{i+} = Σ_{j∈+} T_ij p^∞_j,    η_{i−} = Σ_{j∈−} T_ij p^∞_j.    (12)
These can be thought of as the average time it takes to reach the strong/weak states respectively.
Using these definitions, we can then impose an order on the states by arranging them in order of
decreasing η_{i+} or increasing η_{i−}. Because η_{i+} + η_{i−} = τ̄ is independent of i, the two orderings are
the same. In this order, which depends sensitively on the structure of Mpot/dep , states later (to the
right in figures below) can be considered to be more potentiated than states earlier (to the left in
figures below), despite the fact that they have the same synaptic efficacy. In essence, in this order, a
state is considered to be more potentiated if the average time it takes to reach all the strong efficacy
states is shorter. We will see that synaptic models that optimize various measures of memory have
an exceedingly simple structure when, and only when, their states are arranged in this order.1
1. Note that we do not need to worry about the order of the η_{i±} changing during the optimization: necessary
conditions for a maximum only require that there is no infinitesimal perturbation that increases the area. Therefore we need only consider an infinitesimal neighborhood of the model, in which the order will not change.
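A sketch of this ordering, as a small helper built on the passage times T and stationary distribution p from the previous sketch (the function name and weight pattern are our own):

    import numpy as np

    def order_states(T, p_inf, w):
        """Order states from most depressed to most potentiated via Eq. (12)."""
        strong = w > 0
        eta_plus = T[:, strong] @ p_inf[strong]     # mean time to reach strong states
        eta_minus = T[:, ~strong] @ p_inf[~strong]  # mean time to reach weak states
        # their sum is Kemeny's constant, hence the same for every start state
        assert np.allclose(eta_plus + eta_minus, (eta_plus + eta_minus)[0])
        return np.argsort(-eta_plus)

    # usage, with T and p from the previous sketch and an arbitrary weight pattern:
    # idx = order_states(T, p, np.array([-1, -1, 1, 1, -1]))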
Figure 3: Perturbations that increase the area. (a) Perturbations that increase elements of M^pot above the diagonal and decrease the corresponding elements of M^dep. It can no longer be used when M^dep is lower triangular, i.e. depression must move synapses to "more depressed" states. (b) Perturbations that decrease elements of M^pot below the diagonal and increase the corresponding elements of M^dep. It can no longer be used when M^pot is upper triangular, i.e. potentiation must move synapses to "more potentiated" states. (c) Perturbation that decreases "shortcut" transitions and increases the bypassed "direct" transitions. It can no longer be used when there are only nearest-neighbor "direct" transitions.
3.3 Bounding area
Now consider the area under the memory curve:

    A = ∫_0^∞ dt SNR(t).    (13)
We will find an upper bound on this quantity as well as the model that saturates this bound.
First passage time theory introduced in the previous section becomes useful because the area has a simple expression in terms of quantities introduced in (12) (see supplementary material):

    A = √N (4f^pot f^dep) Σ_ij p^∞_i (M^pot_ij − M^dep_ij)(η_{i+} − η_{j+})
      = √N (4f^pot f^dep) Σ_ij p^∞_i (M^pot_ij − M^dep_ij)(η_{j−} − η_{i−}).    (14)
With the states in the order described above, we can find perturbations of Mpot/dep that will always
increase the area, whilst leaving the equilibrium distribution, p^∞, unchanged. Some of these perturbations are shown in Figure 3; see supplementary material for details. For example, in Figure 3(a), for two states i on the left and j on the right, with j being more "potentiated" than i (i.e. η_{i+} > η_{j+}), we have proven that increasing M^pot_ij and decreasing M^dep_ij leads to an increase in area. The only
thing that can prevent these perturbations from increasing the area is when they require the decrease
of a matrix element that has already been set to 0. This determines the topology (non-zero transition
probabilities) of the model with maximal area. It is of the form shown in Figure 4(c), with potentiation moving one step to the right and depression moving one step to the left. Any other topology
would allow some class of perturbations (e.g. in Figure 3) to further increase the area.
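The first-passage expression of Eq. (14) is straightforward to evaluate numerically. A sketch (the function and defaults are our own; Mpot, Mdep, p_inf, T, and w are assumed to be built as in the earlier sketches, with r = 1):

    import numpy as np

    def area_first_passage(Mpot, Mdep, p_inf, T, w, fpot=0.5, fdep=0.5, N=1000):
        """Area under the memory curve via Eq. (14)."""
        strong = w > 0
        eta_plus = T[:, strong] @ p_inf[strong]
        diff = eta_plus[:, None] - eta_plus[None, :]   # eta_i+ - eta_j+
        return np.sqrt(N) * 4 * fpot * fdep * np.sum(
            p_inf[:, None] * (Mpot - Mdep) * diff)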
As these perturbations do not change the equilibrium distribution, this means that the area of any
model is bounded by that of a linear chain with the same equilibrium distribution. The area of
a linear chain model can be expressed directly in terms of its equilibrium state distribution, p^∞, yielding the following upper bound on the area of any model with the same p^∞ (see supplementary material):

    A ≤ (2√N / r) Σ_k (k − Σ_j j p^∞_j) p^∞_k w_k = (2√N / r) Σ_k |k − Σ_j j p^∞_j| p^∞_k,    (15)

where we chose w_k = sgn[k − Σ_j j p^∞_j]. We can then maximize this by pushing all of the equilibrium distribution symmetrically to the two end states. This can be done by reducing the transition probabilities out of these states, as in Figure 4(c). This makes it very difficult to exit these states
?
N (M ? 1)
.
(16)
A?
r
This analytical result is similar to a numerical result found in [18] under a slightly different information theoretic measure of memory performance.
6
The ?sticky? end states result in very slow decay of memory, but they also make it difficult to encode
the memory in the first place, since a small fraction of synapses are able to change synaptic efficacy
during the storage of a new memory. Thus models that maximize area optimize memory at late
times, at the expense of early times.
4
Memory curve envelope
Now we will look at the implications of the upper bounds found in the previous section for the SNR
at finite times. As argued in (6), the memory curve can be written in the form
? X
SNR(t) = N
Ia e?rt/?a .
(17)
a
The upper bounds on the initial SNR, (10), and the area, (16), imply the following constraints on the
parameters {Ia , ?a }:
X
X
Ia ? 1,
Ia ?a ? M ? 1.
(18)
a
a
We are not claiming that these are a complete set of constraints: not every set {Ia , ?a } that satisfies
these inequalities will actually be achievable by a synaptic model. However, any set that violates
either inequality will definitely not be achievable.
Now we can pick some fixed time, t0 , and maximize the SNR at that time wrt. the parameters
{Ia , ?a }, subject to the constraints above. This always results in a single nonzero Ia ; in essence,
optimizing memory at a single time requires a single exponential. The resulting optimal memory
curve, along with the achieved memory at the chosen time, depends on t0 as follows:
?
?
M ?1
t0 ?
=? SNR(t) = N e?rt/(M ?1)
=? SNR(t0 ) = N e?rt0 /(M ?1) ,
r
?
?
(19)
M ?1
N (M ? 1)e?t/t0
N (M ? 1)
t0 ?
=? SNR(t) =
=? SNR(t0 ) =
.
r
rt0
ert0
Both the initial SNR bound and the area bound are saturated at early times. At late times, only
the area bound is saturated. The function SNR(t0 ), the green curve in Figure 4(a), above forms a
memory curve envelope with late-time power-law decay ? t?1
0 . No synaptic model can have an
SNR that is greater than this at any time. We can use this to find an upper bound on the memory
lifetime, ? (), by finding the point at which the envelope crosses :
?
N (M ? 1)
,
(20)
? () ?
er
where we assume N > (e)2 . Intriguingly, both the lifetime and memory envelope expand linearly
with the number of internal states M , and increase as the square root of the number of synapses N .
This leaves the question of whether this bound is achievable. At any time, can we find a model
whose memory curve touches the envelope? The red curves in Figure 4(a) show the closest we
have come to the envelope with actual models, by repeated numerical optimization of SNR(t0 ) over
Mpot/dep with random initialization and by hand designed models.
We see that at early, but not late times, there is a gap between the upper bound that we can prove
and what we can achieve with actual models. There may be other models we haven?t found that
could beat the ones we have, and come closer to our proven envelope. However, we suspect that the
area constraint is not the bottleneck for optimizing memory at times less than O( M
r ). We believe
there is some other constraint that prevents models from approaching the envelope, and currently are
exploring several mathematical conjectures for the precise form of this constraint in order to obtain
a potentially tighter envelope. Nevertheless, we have proven rigorously that no model?s memory
curve can ever exceed this envelope, and that it is at least tight for late times, longer than O( M
r ),
where models of the form in Figure 4(c)can come close to the envelope.
5
Discussion
We have initiated the development of a general theory of learning and memory with complex
synapses, allowing for an exploration of the entire space of complex synaptic models, rather than
7
(a)
(b)
1
10
0
10
SNR
envelope
numerical search
hand designed
?1
10
(c)
?
Area bound active
Initial SNR bound active
?
?2
10 ?1
10
0
10
1
10
Time
2
10
3
10
Figure 4: The memory curve envelope for N = 100, M = 12. (a) An upper bound on the SNR
at any time is shown in green. The red dashed curve shows the result of numerical optimization of
synaptic models with random initialization. The solid red curve shows the highest SNR we have
found with hand designed models. At early times these models are of the form shown in (b) with
different numbers of states, and all transition probabilities equal to 1. At late times they are of the
form shown in (c) with different values of ε. The model shown in (c) also saturates the area bound (16) in the limit ε → 0.
analyzing individual models one at a time. In doing so, we have obtained several new mathematical results delineating the functional limits of memory achievable by synaptic complexity, and the
structural characterization of synaptic dynamical systems that achieve these limits. In particular,
operating within the ideal observer framework of [10, 11, 18], we have shown that for a population of N synapses with M internal states, (a) the initial SNR of any synaptic model cannot exceed √N, and any model that achieves this bound is equivalent to a binary synapse, (b) the area under the memory curve of any model cannot exceed that of a linear chain model with the same equilibrium distribution, (c) both the area and memory lifetime of any model cannot exceed O(√N M), and the model that achieves this limit has a linear chain topology with only nearest neighbor transitions, (d) we have derived an envelope memory curve in the SNR-time plane that cannot be exceeded by the memory curve of any model, and models that approach this envelope for times greater than O(M/r) are linear chain models, and (e) this late-time envelope is a power-law proportional to O(√N M / rt),
indicating that synaptic complexity can strongly enhance the limits of achievable memory.
This theoretical study opens up several avenues for further inquiry. In particular, the tightness of our
envelope for early times, less than O(M/r), remains an open question, and we are currently pursuing
several conjectures. We have also derived memory constrained envelopes, by asking in the space
of models that achieve a given SNR at a given time, what is the maximal SNR achievable at other
times. If these two times are beyond a threshold separation, optimal constrained models require
two exponentials. It would be interesting to systematically analyze the space of models that achieve
good memory at multiple times, and understand their structural organization, and how they give rise
to multiple exponentials, leading to power law memory decays.
Finally, it would be interesting to design physiological experiments in order to perform optimal
systems identification of potential Markovian dynamical systems hiding within biological synapses,
given measurements of pre and post-synaptic spike trains along with changes in post-synaptic potentials. Then given our theory, we could match this measured synaptic model to optimal models to
understand for which timescales of memory, if any, biological synaptic dynamics may be tuned.
In summary, we hope that a deeper theoretical understanding of the functional role of synaptic
complexity, initiated here, will help advance our understanding of the neurobiology of learning and
memory, aid in the design of engineered memory circuits, and lead to new mathematical theorems
about stochastic processes.
Acknowledgements
We thank Sloan, Genentech, Burroughs-Wellcome, and Swartz foundations for support. We thank
Larry Abbott, Marcus Benna, Stefano Fusi, Jascha Sohl-Dickstein and David Sussillo for useful
discussions.
References
[1] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci. U.S.A. 79 (1982) no. 8, 2554-2558.
[2] D. J. Amit, H. Gutfreund, and H. Sompolinsky, "Spin-glass models of neural networks," Phys. Rev. A 32 (Aug, 1985) 1007-1018.
[3] E. Gardner, "The space of interactions in neural network models," Journal of Physics A: Mathematical and General 21 (1988) no. 1, 257.
[4] T. V. P. Bliss and G. L. Collingridge, "A synaptic model of memory: long-term potentiation in the hippocampus," Nature 361 (Jan, 1993) 31-39.
[5] C. C. H. Petersen, R. C. Malenka, R. A. Nicoll, and J. J. Hopfield, "All-or-none potentiation at CA3-CA1 synapses," Proc. Natl. Acad. Sci. U.S.A. 95 (1998) no. 8, 4732-4737.
[6] D. H. O'Connor, G. M. Wittenberg, and S. S.-H. Wang, "Graded bidirectional synaptic plasticity is composed of switch-like unitary events," Proc. Natl. Acad. Sci. U.S.A. 102 (2005) no. 27, 9679-9684.
[7] R. Enoki, Y. ling Hu, D. Hamilton, and A. Fine, "Expression of long-term plasticity at individual synapses in hippocampus is graded, bidirectional, and mainly presynaptic: optical quantal analysis," Neuron 62 (2009) no. 2, 242-253.
[8] D. J. Amit and S. Fusi, "Constraints on learning in dynamic synapses," Network: Computation in Neural Systems 3 (1992) no. 4, 443-464.
[9] D. J. Amit and S. Fusi, "Learning in neural networks with material synapses," Neural Computation 6 (1994) no. 5, 957-982.
[10] S. Fusi, P. J. Drew, and L. F. Abbott, "Cascade models of synaptically stored memories," Neuron 45 (Feb, 2005) 599-611.
[11] S. Fusi and L. F. Abbott, "Limits on the memory storage capacity of bounded synapses," Nat. Neurosci. 10 (Apr, 2007) 485-493.
[12] C. Leibold and R. Kempter, "Sparseness constrains the prolongation of memory lifetime via synaptic metaplasticity," Cerebral Cortex 18 (2008) no. 1, 67-77.
[13] D. S. Bredt and R. A. Nicoll, "AMPA receptor trafficking at excitatory synapses," Neuron 40 (2003) no. 2, 361-379.
[14] M. P. Coba, A. J. Pocklington, M. O. Collins, M. V. Kopanitsa, R. T. Uren, S. Swamy, M. D. Croning, J. S. Choudhary, and S. G. Grant, "Neurotransmitters drive combinatorial multistate postsynaptic density networks," Sci. Signal. 2 (2009) no. 68, ra19.
[15] W. C. Abraham and M. F. Bear, "Metaplasticity: the plasticity of synaptic plasticity," Trends in Neurosciences 19 (1996) no. 4, 126-130.
[16] J. M. Montgomery and D. V. Madison, "State-dependent heterogeneity in synaptic depression between pyramidal cell pairs," Neuron 33 (2002) no. 5, 765-777.
[17] R. D. Emes and S. G. Grant, "Evolution of synapse complexity and diversity," Annual Review of Neuroscience 35 (2012) no. 1, 111-131.
[18] A. B. Barrett and M. C. van Rossum, "Optimal learning rules for discrete synapses," PLoS Comput. Biol. 4 (Nov, 2008) e1000230.
[19] J. Kemeny and J. Snell, Finite Markov Chains. Springer, 1960.
[20] C. Burke and M. Rosenblatt, "A Markovian function of a Markov chain," The Annals of Mathematical Statistics 29 (1958) no. 4, 1112-1122.
[21] F. Ball and G. F. Yeo, "Lumpability and marginalisability for continuous-time Markov chains," Journal of Applied Probability 30 (1993) no. 3, 518-528.
data using parametric prior knowledge
Evan Archer13 , Il Memming Park123 , Jonathan W. Pillow123
1. Center for Perceptual Systems, 2. Dept. of Psychology,
3. Division of Statistics & Scientific Computation
The University of Texas at Austin
{memming@austin., earcher@, pillow@mail.} utexas.edu
Abstract
Shannon's entropy is a basic quantity in information theory, and a fundamental
building block for the analysis of neural codes. Estimating the entropy of a discrete distribution from samples is an important and difficult problem that has received considerable attention in statistics and theoretical neuroscience. However,
neural responses have characteristic statistical structure that generic entropy estimators fail to exploit. For example, existing Bayesian entropy estimators make
the naive assumption that all spike words are equally likely a priori, which makes
for an inefficient allocation of prior probability mass in cases where spikes are
sparse. Here we develop Bayesian estimators for the entropy of binary spike trains
using priors designed to flexibly exploit the statistical structure of simultaneouslyrecorded spike responses. We define two prior distributions over spike words using mixtures of Dirichlet distributions centered on simple parametric models. The
parametric model captures high-level statistical features of the data, such as the
average spike count in a spike word, which allows the posterior over entropy to
concentrate more rapidly than with standard estimators (e.g., in cases where the
probability of spiking differs strongly from 0.5). Conversely, the Dirichlet distributions assign prior mass to distributions far from the parametric model, ensuring
consistent estimates for arbitrary distributions. We devise a compact representation of the data and prior that allow for computationally efficient implementations
of Bayesian least squares and empirical Bayes entropy estimators with large numbers of neurons. We apply these estimators to simulated and real neural data and
show that they substantially outperform traditional methods.
Introduction
Information theoretic quantities are popular tools in neuroscience, where they are used to study
neural codes whose representation or function is unknown. Neuronal signals take the form of fast
(∼1 ms) spikes which are frequently modeled as discrete, binary events. While the spiking response
of even a single neuron has been the focus of intense research, modern experimental techniques make
it possible to study the simultaneous activity of hundreds of neurons. At a given time, the response
of a population of n neurons may be represented by a binary vector of length n, where each entry
represents the presence (1) or absence (0) of a spike. We refer to such a vector as a spike word.
For n much greater than 30, the space of 2^n spike words becomes so large that effective modeling
and analysis of neural data, with their high dimensionality and relatively low sample size, presents
a significant computational and theoretical challenge.
We study the problem of estimating the discrete entropy of spike word distributions. This is a difficult problem when the sample size is much less than 2^n, the number of spike words. Entropy estimation in general is a well-studied problem with a literature spanning statistics, physics,
[Figure 1 here. Panel A: binarized spike trains of 3 neurons (axes: neurons vs. time) with example binary words; panel B: the resulting word distribution (words vs. frequency).]
Figure 1: Illustrated example of binarized spike responses for n = 3 neurons and corresponding word distribution. (A) The spike responses of n = 3 simultaneously-recorded neurons (green, orange, and purple). Time is discretized into bins of size Δt. A single spike word is a 3 × 1 binary vector whose entries are 1 or 0 corresponding to whether the neuron spiked or not within the time bin. (B) We model spike words as drawn iid from the word distribution π, a probability distribution supported on the A = 2^n unique binary words. Here we show a schematic π for the data of panel (A). The spike words (x-axis) occur with varying probability (blue).
neuroscience, ecology, and engineering, among others [1–7]. We introduce a novel Bayesian estimator which, by incorporating simple a priori information about spike trains via a carefully-chosen prior, can estimate entropy with remarkable accuracy from few samples. Moreover, we exploit the structure of spike trains to compute efficiently on the full space of 2^n spike words.
We begin by briefly reviewing entropy estimation in general. In Section 2 we discuss the statistics
of spike trains and emphasize a statistic, called the synchrony distribution, which we employ in
our model. In Section 3 we introduce two novel estimators, the Dirichlet–Bernoulli (DBer) and Dirichlet–Synchrony (DSyn) entropy estimators, and discuss their properties and computation. We compare Ĥ_DBer and Ĥ_DSyn to other entropy estimation techniques in simulation and on neural data, and show that Ĥ_DBer drastically outperforms other popular techniques when applied to real neural data. Finally, we apply our estimators to study synergy across time of a single neuron.
1 Entropy Estimation
Let x := {x_k}_{k=1}^{N} be spike words drawn iid from an unknown word distribution π := {π_i}_{i=1}^{A}. There are A = 2^n unique words for a population of n neurons, which we index by {1, 2, . . . , A}. Each sampled word x_k is a binary vector of length n, where x_{ki} records the presence or absence of a spike from the ith neuron. We wish to estimate the entropy of π,

H(π) = −Σ_{k=1}^{A} π_k log π_k,    (1)

where π_k > 0 denotes the probability of observing the kth word.
A naive method for estimating H is to first estimate π using the count of observed words n_k = Σ_{i=1}^{N} 1{x_i = k} for each word k. This yields the empirical distribution π̂, where π̂_k = n_k/N. Evaluating eq. 1 on this estimate yields the "plugin" estimator,

Ĥ_plugin = −Σ_{i=1}^{A} π̂_i log π̂_i,    (2)
which is also the maximum-likelihood estimator under the multinomial likelihood. Although consistent and straightforward to compute, Ĥ_plugin is in general severely biased when N ≪ A.
Indeed, all entropy estimators are biased when N ≪ A [8]. As a result, many techniques for bias-correction have been proposed in the literature [6, 9–18]. Here, we extend the Bayesian approach
of [19], focusing in particular on the problem of entropy estimation for simultaneously-recorded
neurons.
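As a concrete illustration of this bias, here is a minimal Python sketch (our own illustration, not from the paper; the helper name plugin_entropy is hypothetical) that computes Ĥ_plugin of eq. 2 for samples from a known sparse word distribution, making the downward bias visible when N ≪ A:

import numpy as np

def plugin_entropy(words, A):
    # Plugin (maximum-likelihood) entropy estimate of eq. 2, in nats.
    counts = np.bincount(words, minlength=A)
    p_hat = counts / counts.sum()
    p_hat = p_hat[p_hat > 0]                 # convention: 0 log 0 = 0
    return -np.sum(p_hat * np.log(p_hat))

rng = np.random.default_rng(0)
A = 2 ** 10                                  # word space for n = 10 neurons
pi = rng.dirichlet(np.full(A, 0.05))         # a sparse "true" word distribution
pi_pos = pi[pi > 0]
H_true = -np.sum(pi_pos * np.log(pi_pos))
for N in (100, 1000, 10000):                 # N << A: estimate falls below H_true
    x = rng.choice(A, size=N, p=pi)
    print(N, H_true, plugin_entropy(x, A))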
In a Bayesian paradigm, rather than attempting to directly compute and remove the bias for a given
estimator, we instead choose a prior distribution over the space of discrete distributions.
[Figure 2 here. Panel A: "RGC Empirical Synchrony Distribution, 2 ms bins" (RGC data vs. NSB prior); panel B: "Empirical Synchrony Distribution of Simulated Ising Model and ML Binomial Fit" (Ising vs. binomial fit). Axes: number of spikes in a word vs. proportion of words (out of 600,000 words).]
Figure 2: Sparsity structure of spike word distribution illustrated using the synchrony distribution. (A) The empirical synchrony distribution of 8 simultaneously-recorded retinal ganglion cells (blue). The cells were recorded for 20 minutes and binned with Δt = 2 ms bins. Spike words are overwhelmingly sparse, with w_0 by far the most common word. In contrast, we compare the prior empirical synchrony distribution sampled using 10^6 samples from the NSB prior (π ∼ Dir(α, . . . , α), with p(α) ∝ Aψ₁(Aα + 1) − ψ₁(α + 1), and ψ₁ the trigamma function) (red). The empirical synchrony distribution shown is averaged across samples. (B) The synchrony distribution of an Ising model (blue) compared to its best binomial fit (red). The Ising model parameters were set randomly by drawing the entries of the matrix J and vector h iid from N(0, 1). A binomial distribution cannot accurately capture the observed synchrony distribution.
Nemenman et al. showed Dirichlet priors to be highly informative about the entropy, and thus a poor prior for Bayesian entropy estimation [19]. To rectify this problem, they introduced the Nemenman–Shafee–Bialek (NSB) estimator, which uses a mixture of Dirichlet distributions to obtain an approximately flat prior over H. As a prior on π, however, the NSB prior is agnostic about application: all symbols
have the same marginal probability under the prior, an assumption that may not hold when the
symbols correspond to binary spike words.
2 Spike Statistics and the Synchrony Distribution
We discretize neural signals by binning multi-neuron spike trains in time, as illustrated in Fig. 1. At a time t, then, the spike response of a population of n neurons is a binary vector w⃗ = (b_1, b_2, . . . , b_n), where b_i ∈ {0, 1} corresponds to the event that the ith neuron spikes within the time window (t, t + Δt). We let w⃗_k be that word such that k = Σ_{i=0}^{n−1} b_i 2^i. There are a total of A = 2^n possible words.
For a sufficiently small bin size Δt, spike words are likely to be sparse, and so our strategy will be
to choose priors that place high prior probability on sparse words. To quantify sparsity we use the
synchrony distribution: the distribution of population spike counts across all words. In Fig. 2 we
compare the empirical synchrony distribution for a population of 8 simultaneously-recorded retinal
ganglion cells (RGCs) with the prior synchrony distribution under the NSB model. For real data, the
synchrony distribution is asymmetric and sparse, concentrating around words with few simultaneous
spikes. No more than 4 synchronous spikes are observed in the data. In contrast, under the NSB
model all words are equally likely, and the prior synchrony distribution is symmetric and centered
around 4.
These deviations in the synchrony distribution are noteworthy: beyond quantifying sparseness, the
synchrony distribution provides a surprisingly rich characterization of a neural population. Despite
its simplicity, the synchrony distribution carries information about the higher-order correlation structure of a population [20, 21]. It uniquely specifies distributions π for which the probability of a word w_k depends only on its spike count [k] = [w⃗_k] := Σ_i b_i. Equivalently: all words with spike count k, E_k = {w : [w] = k}, have identical probability π̃_k of occurring. For such a π, the synchrony distribution μ is given by,

μ_k = Σ_{w_i ∈ E_k} π_i = C(n, k) π̃_k.    (3)
Different neural models correspond to different synchrony distributions. Consider an independently-Bernoulli spiking model. Under this model, the number of spikes in a word w⃗ is distributed binomially, [w⃗] ∼ Bin(p, n), where p is a uniform spike probability across neurons. The probability of a word w_k is given by,

P(w⃗_k | p) = π̃_[k] = p^[k] (1 − p)^(n−[k]),    (4)

while the probability of observing a word with i spikes is,

P(E_i | p) = C(n, i) π̃_i.    (5)
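To make eqs. 3–5 concrete, here is a small Python sketch (our own illustration, not the authors' code) that computes the empirical synchrony distribution of a binary word matrix and the synchrony distribution implied by the independent-Bernoulli model:

import numpy as np
from scipy.special import comb

def synchrony_distribution(words):
    # words: (N, n) binary array; returns mu with mu[k] = fraction of words
    # with spike count k (the empirical version of eq. 3).
    n = words.shape[1]
    spike_counts = words.sum(axis=1).astype(int)
    return np.bincount(spike_counts, minlength=n + 1) / len(words)

def bernoulli_synchrony(p, n):
    # Eq. 5: P(E_i | p) = C(n, i) p^i (1 - p)^(n - i).
    i = np.arange(n + 1)
    return comb(n, i) * p ** i * (1 - p) ** (n - i)

rng = np.random.default_rng(1)
words = (rng.random((5000, 8)) < 0.08).astype(int)   # sparse spiking, n = 8
print(synchrony_distribution(words))
print(bernoulli_synchrony(0.08, 8))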
3 Entropy Estimation with Parametric Prior Knowledge
Although a synchrony distribution may capture our prior knowledge about the structure of spike
patterns, our goal is not to estimate the synchrony distribution itself. Rather, we use it to inform a
prior on the space of discrete distributions, the (2^n − 1)-dimensional simplex. Our strategy is to use a synchrony distribution G as the base measure of a Dirichlet distribution. We construct a hierarchical model where π is a mixture of Dir(αG), and counts n of spike train observations are multinomial given π (see Fig. 3A). Exploiting the conjugacy of Dirichlet and multinomial, and the convenient symmetries of both the Dirichlet distribution and G, we obtain a computationally efficient Bayes least squares estimator for entropy. Finally, we discuss using empirical estimates of the synchrony distribution μ as a base measure.
3.1 Dirichlet–Bernoulli entropy estimator
We model spike word counts n as drawn iid multinomial given the spike word distribution π. We place a mixture-of-Dirichlets prior on π, which in general takes the form,

n ∼ Mult(π),    (6)
π ∼ Dir(α_1, α_2, . . . , α_A),    (7)
α⃗ := (α_1, α_2, . . . , α_A) ∼ P(α⃗),    (8)

where α_i > 0 are concentration parameters (with A = 2^n), and P(α⃗) is a prior distribution of our choosing. Due
to the conjugacy of Dirichlet and multinomial, the posterior distribution given observations and α⃗ is π | n, α⃗ ∼ Dir(α_1 + n_1, . . . , α_A + n_A), where n_i is the number of observations for the i-th spiking pattern. The posterior expected entropy given α⃗ is given by [22],

E[H(π) | α⃗] = ψ₀(𝒜 + 1) − Σ_{i=1}^{A} (α_i / 𝒜) ψ₀(α_i + 1),    (9)

where ψ₀ is the digamma function, and 𝒜 = Σ_{i=1}^{A} α_i.
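For reference, eq. 9 is straightforward to evaluate with the digamma function; a minimal Python sketch (our own illustration; kappa below is the quantity written 𝒜 in the text):

import numpy as np
from scipy.special import digamma

def dirichlet_mean_entropy(alpha):
    # Mean of H(pi) under pi ~ Dir(alpha), eq. 9, in nats.
    kappa = alpha.sum()
    return digamma(kappa + 1) - np.sum((alpha / kappa) * digamma(alpha + 1))

alpha = np.array([2.0, 1.0, 0.5, 0.5])   # small example with A = 4 words
print(dirichlet_mean_entropy(alpha))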
For large A, α⃗ is too large to select arbitrarily, and so in practice we center the Dirichlet around a simple, parametric base measure G [23]. We rewrite the vector of concentration parameters as α⃗ ≡ αG, where G = Bernoulli(p) is a Bernoulli distribution with spike rate p and α > 0 is a scalar. The general prior of eq. 7 then takes the form,

π ∼ Dir(αG) ≡ Dir(αg_1, αg_2, . . . , αg_A),    (10)

where each g_k is the probability of the kth word under the base measure, satisfying Σ_k g_k = 1.
We illustrate the dependency structure of this model schematically in Fig. 3. Intuitively, the base
measure incorporates the structure of G into the prior by shifting the Dirichlet's mean. With a base measure G the prior mean satisfies E[π | p] = G | p. Under the NSB model, G is the uniform distribution; thus, when p = 0.5 the Binomial G corresponds exactly to the NSB model. Since
in practice choosing a base measure is equivalent to selecting distinct values of the concentration parameter α_i, the posterior mean of entropy under this model has the same form as eq. 9, simply replacing α_k = αg_k. Given hyper-prior distributions P(α) and P(p), we obtain the Bayes least squares estimate, the posterior mean of entropy under our model,

Ĥ_DBer = E[H | x] = ∫∫ E[H | α, p] P(α, p | x) dα dp.    (11)

We refer to eq. 11 as the Dirichlet–Bernoulli (DBer) entropy estimator, Ĥ_DBer. Thanks to the closed-form expression for the conditional mean eq. 9 and the convenient symmetries of the Bernoulli distribution, the estimator is fast to compute using a 2D numerical integral over the hyperparameters α and p.
3.1.1 Hyper-priors on α and p
Previous work on Bayesian entropy estimation has focused on Dirichlet priors with scalar, constant concentration parameters α_i = α. Nemenman et al. [19] noted that these fixed-α priors yield poor estimators for entropy, because p(H|α) is highly concentrated around its mean. To address this problem, [19] proposed a Dirichlet mixture prior on π,

P(π) = ∫ P_Dir(π | α) P(α) dα,    (12)

where the hyper-prior, P(α) ∝ (d/dα) E[H(π) | α], assures an approximately flat prior distribution over H. We adopt the same strategy here, choosing the prior,

P(α) ∝ (d/dα) E[H(π) | α, p] = ψ₁(α + 1) − Σ_{i=0}^{n} C(n, i) π̃_i² ψ₁(απ̃_i + 1).    (13)
Entropy estimates are less sensitive to the choice of prior on p. Although we experimented with several priors P(p) on p, in all examples we found that the evidence for p was highly concentrated around p̂ = (1/(Nn)) Σ_{ij} x_{ij}, the maximum (Bernoulli) likelihood estimate for p. In practice, we found that an empirical Bayes procedure, fitting p̂ from data first and then using the fixed p̂ to perform the integral eq. 11, performed indistinguishably from a P(p) uniform on [0, 1].
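Numerically, the hyper-prior of eq. 13 only requires the trigamma function ψ₁ = polygamma(1, ·); a sketch for the Bernoulli(p) base measure (our own illustration; the density is left unnormalized, since only relative weights matter in the integral of eq. 11):

import numpy as np
from scipy.special import comb, polygamma

def dber_alpha_prior(alpha, p, n):
    # Unnormalized p(alpha) of eq. 13 under the Bernoulli(p) base measure.
    i = np.arange(n + 1)
    pi_tilde = p ** i * (1 - p) ** (n - i)       # per-word base prob., eq. 4
    return (polygamma(1, alpha + 1)
            - np.sum(comb(n, i) * pi_tilde ** 2
                     * polygamma(1, alpha * pi_tilde + 1)))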
3.1.2 Computation
For large n, the 2^n distinct values of α_i render the sum of eq. 9 potentially intractable to compute. We sidestep this exponential scaling of terms by exploiting the redundancy of Bernoulli and binomial distributions. Doing so, we are able to compute eq. 9 without explicitly representing the 2^n values of α_i.

Under the Bernoulli model, each element g_k of the base measure takes the value π̃_[k] (eq. 4). Further, there are C(n, i) words for which the value of π̃_i is identical, so that 𝒜 = Σ_{i=0}^{n} C(n, i) απ̃_i = α. Applied to eq. 9, we have,

E[H(π) | α, p] = ψ₀(α + 1) − Σ_{i=0}^{n} C(n, i) π̃_i ψ₀(απ̃_i + 1).
For the posterior, the sum takes the same form, except that 𝒜 = N + α, and the mean is given by,

E[H(π) | α, p, x] = ψ₀(N + α + 1) − Σ_{i=1}^{A} [(n_i + απ̃_[i]) / (N + α)] ψ₀(n_i + απ̃_[i] + 1)
                  = ψ₀(N + α + 1) − Σ_{i∈I} [(n_i + απ̃_[i]) / (N + α)] ψ₀(n_i + απ̃_[i] + 1)
                                  − Σ_{i=0}^{n} [α (C(n, i) − n̄_i) π̃_i / (N + α)] ψ₀(απ̃_i + 1),    (14)
[Figure 3 here: panels A (graphical model: p and α, base measure G, prior over the word distribution, data, entropy), B (the 2^4 spike words for n = 4 neurons, grouped by spike count from "most likely" to "least likely"), and C (a Dirichlet centered on a synchrony-distribution model); see caption below.]
Figure 3: Model schematic and intuition for Dirichlet–Bernoulli entropy estimation. (A) Graphical model for Dirichlet–Bernoulli entropy estimation. The Bernoulli base measure G depends on the spike rate parameter p. In turn, G acts as the mean of a Dirichlet prior over π. The scalar Dirichlet concentration parameter α determines the variability of the prior around the base measure. (B) The
set of possible spike words for n = 4 neurons. Although easy to enumerate for this small special
case, the number of words increases exponentially with n. In order to compute with this large set,
we assume a prior distribution with a simple equivalence class structure: a priori, all words with the
same number of synchronous spikes (outlined in blue) occur with equal probability. We then need
only n parameters, the synchrony distribution of eq. 3, to specify the distribution. (C) We center a
Dirichlet distribution on a model of the synchrony distribution. The symmetries of the count and
Dirichlet distributions allow us to compute without explicitly representing all A words.
where I = {k : n_k > 0}, the set of observed characters, and n̄_i is the count of observed words with i spikes (i.e., observations of the equivalence class E_i). Note that eq. 14 is much more computationally tractable than the mathematically equivalent form given immediately above it. Thus, careful bookkeeping allows us to efficiently evaluate eq. 9 and, in turn, eq. 11.¹
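To make this bookkeeping concrete, here is a minimal Python sketch of eq. 14 (our own illustration, not the authors' MATLAB implementation; the function name and binary-array input format are our assumptions). It evaluates the posterior mean entropy for fixed α and p without ever enumerating the 2^n words:

import numpy as np
from scipy.special import comb, digamma

def dber_posterior_mean_entropy(words, alpha, p):
    # E[H(pi) | alpha, p, x] of eq. 14 for binary words of shape (N, n).
    N, n = words.shape
    i = np.arange(n + 1)
    pi_tilde = p ** i * (1 - p) ** (n - i)          # eq. 4, per-word base prob.
    # Counts n_k of each observed word and its spike count [k].
    keys, counts = np.unique(words, axis=0, return_counts=True)
    k_spikes = keys.sum(axis=1)
    # n_bar[i]: observed words with i spikes (members of equivalence class E_i).
    n_bar = np.bincount(k_spikes, minlength=n + 1)
    kappa = N + alpha
    out = digamma(kappa + 1)
    w_obs = (counts + alpha * pi_tilde[k_spikes]) / kappa
    out -= np.sum(w_obs * digamma(counts + alpha * pi_tilde[k_spikes] + 1))
    w_unobs = alpha * (comb(n, i) - n_bar) * pi_tilde / kappa
    out -= np.sum(w_unobs * digamma(alpha * pi_tilde + 1))
    return out

Integrating this quantity over the hyper-priors on α and p (eq. 11) then yields Ĥ_DBer.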
3.2 Empirical Synchrony Distribution as a Base Measure
While the Bernoulli base measure captures the sparsity structure of multi-neuron recordings, it also imposes unrealistic independence assumptions. In general, the synchrony distribution can capture correlation structure that cannot be represented by a Bernoulli model. For example, in Fig. 2B, a maximum likelihood Bernoulli fit fails to capture the sparsity structure of a simulated Ising model. We might address this by choosing a more flexible parametric base measure. However, since the dimensionality of μ scales only linearly with the number of neurons, the empirical synchrony distribution (ESD),

μ̂_i = (1/N) Σ_{j=1}^{N} 1{[x_j] = i},    (15)

converges quickly even when the sample size is inadequate for estimating the full π.

This suggests an empirical Bayes procedure where we use the ESD as a base measure (take G = μ̂) for entropy estimation. Computation proceeds exactly as in Section 3.1.2 with the Bernoulli base measure replaced by the ESD, setting g_k = π̃_[k] with π̃_i = μ̂_i / C(n, i). The resulting Dirichlet–Synchrony (DSyn) estimator may incorporate more varied sparsity and correlation structures into its prior than Ĥ_DBer (see Fig. 4), although it depends on an estimate of the synchrony distribution.
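A corresponding sketch for the DSyn base measure (our own illustration; the placement of the 1/K pseudo-count, described in Section 4, is our reading of the regularization):

import numpy as np
from scipy.special import comb

def dsyn_base_measure(words, pseudo=None):
    # ESD of eq. 15 with pseudo-count regularization; returns per-word base
    # probabilities pi_tilde[i] = mu_hat[i] / C(n, i) for spike count i.
    N, n = words.shape
    counts = words.sum(axis=1).astype(int)
    if pseudo is None:
        K = np.unique(words, axis=0).shape[0]   # number of distinct observed words
        pseudo = 1.0 / K
    mu_hat = np.bincount(counts, minlength=n + 1).astype(float) + pseudo
    mu_hat /= mu_hat.sum()
    return mu_hat / comb(n, np.arange(n + 1))

These per-count probabilities π̃_i can then be plugged into the same efficient posterior-mean computation sketched above for Ĥ_DBer.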
4 Simulations and Comparisons

We compared Ĥ_DBer and Ĥ_DSyn to the Nemenman–Shafee–Bialek (NSB) [19] and Best Upper Bound (BUB) entropy estimators [8] for several simulated and real neural datasets. For Ĥ_DSyn, we regularized the estimated ESD by adding a pseudo-count of 1/K, where K is the number of unique words observed in the sample.

¹For large n, the binomial coefficient of eq. 14 may be difficult to compute. By writing it in terms of the Bernoulli probability eq. 5, it may be computed using the Normal approximation to the Binomial.
[Figure 4 here. Panel A: "Bimodal Synchrony Distribution (30 Neurons)"; panel B: "Power Law Synchrony Distribution (30 Neurons)". Each shows entropy (nats) vs. sample size N for plugin, DBer, DSyn, BUB, and NSB; insets show the synchrony distributions (frequency vs. number of spikes per word).]
Figure 4: Convergence of Ĥ_DBer, Ĥ_DSyn, Ĥ_NSB, Ĥ_BUB, and Ĥ_plugin as a function of sample size for two simulated examples of 30 neurons. Binary word data are drawn from two specified synchrony distributions (insets). Error bars indicate variability of the estimator over independent samples (±1 standard deviation). (A) Data drawn from a bimodal synchrony distribution with peaks at 0 spikes and 10 spikes (μ_i = e^{−2i} + (1/10) e^{−4(i−2n/3)²}). (B) Data generated from a power-law synchrony distribution (μ_i ∝ i^{−3}).
[Figure 5 here. Panel A: "RGC Spike Data (27 Neurons), 1 ms bins"; panel B: "RGC Spike Data (27 neurons), 10 ms bins". Each shows entropy (nats) vs. sample size N for plugin, DBer, DSyn, BUB, and NSB; insets show frequency vs. number of spikes per word.]
Figure 5: Convergence of Ĥ_DBer, Ĥ_DSyn, Ĥ_NSB, Ĥ_BUB, and Ĥ_plugin as a function of sample size for 27 simultaneously-recorded retinal ganglion cells (RGC). The two figures show the same RGC data binned and binarized at Δt = 1 ms (A) and 10 ms (B). The error bars, axes, and color scheme are as in Fig. 4. While all estimators improve upon the performance of Ĥ_plugin, Ĥ_DSyn and Ĥ_DBer both show excellent performance for very low sample sizes (10's of samples). (inset) The empirical synchrony distribution estimated from 120 minutes of data.
In Fig. 4 we simulated data from two distinct synchrony distribution models. As is expected, among all estimators, Ĥ_DSyn converges the fastest with increasing sample size N. The Ĥ_DBer estimator converges more slowly, as the Bernoulli base measure is not capable of capturing the correlation structure of the simulated synchrony distributions. In Fig. 5, we show convergence performance on increasing subsamples of 27 simultaneously-recorded retinal ganglion cells. Again, Ĥ_DBer and Ĥ_DSyn show excellent performance. Although the true word distribution is not described by a synchrony distribution, the ESD proves an excellent regularizer for the space of distributions, even for very small sample sizes.
5 Application: Quantification of Temporal Dependence
We can gain insight into the coding of a single neural time-series by quantifying the amount of information a single time bin contains about another. The correlation function (Fig. 6A) is the statistic most widely used for this purpose. However, correlation cannot capture higher-order dependencies. In neuroscience, mutual information is used to quantify higher-order temporal structure [24].
[Figure 6 here. Panel A: auto-correlation function of a single neuron (spike rate, 20 spk/s; 10 ms scale bar); panels B, C: delayed and growing-block mutual information (bits); panel D: information gain (bits) vs. lags (ms), comparing dMI and Δ(s).]
Figure 6: Quantifying temporal dependence of RGC coding using Ĥ_DBer. (A) The auto-correlation function of a single retinal ganglion neuron. Correlation does not capture the full temporal dependence. We bin with Δt = 1 ms bins. (B) Schematic definition of time delayed mutual information (dMI), and block mutual information. The information gain of the sth bin is Δ(s) = I(X_t; X_{t+1:t+s}) − I(X_t; X_{t+1:t+s−1}). (C) Block mutual information estimate as a function of growing block size. Note that the estimate is monotonically increasing, as expected, since adding new bins can only increase the mutual information. (D) Information gain per bin assuming temporal independence (dMI), and with difference between block mutual informations (Δ(s)). We observe synergy for the time bins in the 5 to 10 ms range.
A related quantity, the delayed mutual information (dMI) provides an indication of instantaneous dependence: dMI(s) = I(X_t; X_{t+s}), where X_t is a binned spike train, and I(X; Y) = H(X) − H(X|Y) denotes the mutual information. However, this quantity ignores any temporal dependences in the intervening times: X_{t+1}, . . . , X_{t+s−1}. An alternative approach allows us to consider such dependences: the "block mutual information" Δ(s) = I(X_t; X_{t+1:t+s}) − I(X_t; X_{t+1:t+s−1}) (Fig. 6B,C,D).

The relationship between Δ(s) and dMI(s) provides insight about the information contained in the recent history of the signal. If each time bin is conditionally independent given X_t, then Δ(s) = dMI(s). In contrast, if Δ(s) < dMI(s), instantaneous dependence is partially explained by history. Finally, Δ(s) > dMI(s) implies that the joint distribution of X_t, X_{t+1}, . . . , X_{t+s} contains more information about X_t than the joint distribution of X_t and X_{t+s} alone. We use the Ĥ_DBer entropy estimator to compute mutual information (by computing H(X) and H(X|Y)) accurately for ∼15 bins of history. Surprisingly, individual retinal ganglion cells code synergistically in time (Fig. 6D).
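As a sketch of these quantities (our own illustration; for clarity it uses simple plugin entropy estimates, whereas the paper plugs in Ĥ_DBer):

import numpy as np

def plugin_mi(x, y):
    # Plugin mutual information between two integer-coded sequences, in nats.
    def H(z):
        _, c = np.unique(z, axis=0, return_counts=True)
        p = c / c.sum()
        return -np.sum(p * np.log(p))
    return H(x) + H(y) - H(np.column_stack([x, y]))

def block_mi_gain(spikes, s):
    # Delta(s) = I(X_t; X_{t+1:t+s}) - I(X_t; X_{t+1:t+s-1}), binary 1-D train.
    T = len(spikes)
    x = spikes[: T - s]
    blk = np.stack([spikes[j : T - s + j] for j in range(1, s + 1)], axis=1)
    if s == 1:
        return plugin_mi(x, blk)             # Delta(1) = dMI(1)
    return plugin_mi(x, blk) - plugin_mi(x, blk[:, :-1])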
6 Conclusions

We introduced two novel Bayesian entropy estimators, Ĥ_DBer and Ĥ_DSyn. These estimators use a hierarchical mixture-of-Dirichlets prior with a base measure designed to integrate a priori knowledge about spike trains into the model. By choosing base measures with convenient symmetries, we simultaneously sidestepped potentially intractable computations in the high-dimensional space of spike words. It remains to be seen whether these symmetries, as exemplified in the structure of the synchrony distribution, are applicable across a wide range of neural data. Finally, however, we showed several examples in which these estimators, especially Ĥ_DSyn, perform exceptionally well in application to neural data. A MATLAB implementation of the estimators will be made available at https://github.com/pillowlab/CDMentropy.
Acknowledgments

We thank E. J. Chichilnisky, A. M. Litke, A. Sher and J. Shlens for retinal data. This work was supported by a Sloan Research Fellowship, McKnight Scholar's Award, and NSF CAREER Award IIS-1150186 (JP).
References
[1] K. H. Schindler, M. Palus, M. Vejmelka, and J. Bhattacharya. Causality detection based on information-theoretic approaches in time series analysis. Physics Reports, 441:1–46, 2007.
[2] A. Rényi. On measures of dependence. Acta Mathematica Hungarica, 10(3-4):441–451, 1959.
[3] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. Information Theory, IEEE Transactions on, 14(3):462–467, 1968.
[4] A. Chao and T. Shen. Nonparametric estimation of Shannon's index of diversity when there are unseen species in sample. Environmental and Ecological Statistics, 10(4):429–443, 2003.
[5] P. Grassberger. Estimating the information content of symbol sequences and efficient codes. Information Theory, IEEE Transactions on, 35(3):669–675, 1989.
[6] S. Ma. Calculation of entropy from data of motion. Journal of Statistical Physics, 26(2):221–240, 1981.
[7] S. Panzeri, R. Senatore, M. A. Montemurro, and R. S. Petersen. Correcting for the sampling bias problem in spike train information measures. J Neurophysiol, 98(3):1064–1072, Sep 2007.
[8] L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15:1191–1253, 2003.
[9] W. Bialek, F. Rieke, R. R. de Ruyter van Steveninck, and D. Warland. Reading a neural code. Science, 252:1854–1857, 1991.
[10] R. Strong, S. Koberle, R. de Ruyter van Steveninck, and W. Bialek. Entropy and information in neural spike trains. Physical Review Letters, 80:197–202, 1998.
[11] L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15:1191–1253, 2003.
[12] R. Barbieri, L. Frank, D. Nguyen, M. Quirk, V. Solo, M. Wilson, and E. Brown. Dynamic analyses of information encoding in neural ensembles. Neural Computation, 16:277–307, 2004.
[13] M. Kennel, J. Shlens, H. Abarbanel, and E. Chichilnisky. Estimating entropy rates with Bayesian confidence intervals. Neural Computation, 17:1531–1576, 2005.
[14] J. Victor. Approaches to information-theoretic analysis of neural activity. Biological Theory, 1(3):302–316, 2006.
[15] J. Shlens, M. B. Kennel, H. D. I. Abarbanel, and E. J. Chichilnisky. Estimating information rates with confidence intervals in neural spike trains. Neural Computation, 19(7):1683–1719, Jul 2007.
[16] V. Q. Vu, B. Yu, and R. E. Kass. Coverage-adjusted entropy estimation. Statistics in Medicine, 26(21):4039–4060, 2007.
[17] V. Q. Vu, B. Yu, and R. E. Kass. Information in the nonstationary case. Neural Computation, 21(3):688–703, 2009, http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2008.01-08-700. PMID: 18928371.
[18] E. Archer, I. M. Park, and J. Pillow. Bayesian estimation of discrete entropy with mixtures of stick-breaking priors. In P. Bartlett, F. Pereira, C. Burges, L. Bottou, and K. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 2024–2032. MIT Press, Cambridge, MA, 2012.
[19] I. Nemenman, F. Shafee, and W. Bialek. Entropy and inference, revisited. In Advances in Neural Information Processing Systems 14, pages 471–478. MIT Press, Cambridge, MA, 2002.
[20] M. Okun, P. Yger, S. L. Marguet, F. Gerard-Mercier, A. Benucci, S. Katzner, L. Busse, M. Carandini, and K. D. Harris. Population rate dynamics and multineuron firing patterns in sensory cortex. The Journal of Neuroscience, 32(48):17108–17119, 2012, http://www.jneurosci.org/content/32/48/17108.full.pdf+html.
[21] G. Tkačik, O. Marre, T. Mora, D. Amodei, M. J. Berry II, and W. Bialek. The simplest maximum entropy model for collective behavior in a neural network. Journal of Statistical Mechanics: Theory and Experiment, 2013(03):P03011, 2013.
[22] D. Wolpert and D. Wolf. Estimating functions of probability distributions from a finite set of samples. Physical Review E, 52(6):6841–6854, 1995.
[23] I. M. Park, E. Archer, K. Latimer, and J. W. Pillow. Universal models for binary spike patterns using centered Dirichlet processes. In Advances in Neural Information Processing Systems (NIPS), 2013.
[24] A. Panzeri, S. Treves, S. Schultz, and E. Rolls. On decoding the responses of a population of neurons from short time windows. Neural Computation, 11:1553–1577, 1999.
4,280 | 4,874 | Inferring neural population dynamics from multiple
partial recordings of the same neural circuit
Srinivas C. Turaga*1,2, Lars Buesing1, Adam M. Packer2, Henry Dalgleish2, Noah Pettit2, Michael Häusser2 and Jakob H. Macke3,4
1 Gatsby Computational Neuroscience Unit, University College London
2 Wolfson Institute for Biomedical Research, University College London
3 Max-Planck Institute for Biological Cybernetics, Tübingen
4 Bernstein Center for Computational Neuroscience, Tübingen
Abstract
Simultaneous recordings of the activity of large neural populations are extremely
valuable as they can be used to infer the dynamics and interactions of neurons in
a local circuit, shedding light on the computations performed. It is now possible
to measure the activity of hundreds of neurons using 2-photon calcium imaging.
However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we
contribute a statistical method for ?stitching? together sequentially imaged sets of
neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the
population-sizes for which population dynamics can be characterized?beyond
the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible
to predict noise correlations between non-simultaneously recorded neuron pairs.
1 Introduction
The computation performed by a neural circuit is a product of the properties of single neurons in the
circuit and their connectivity. Simultaneous measurements of the collective dynamics of all neurons
in a neural circuit will help us understand their function and test theories of neural computation.
However, experimental limitations make it difficult to measure the joint activity of large populations
of neurons. Recent progress in 2-photon calcium imaging now allows for recording of the activity of hundreds of neurons nearly simultaneously [1, 2]. However, in neocortex where circuits or
subnetworks can span thousands of neurons, current imaging techniques are still inadequate.
We present a computational method to more effectively leverage currently available experimental
technology. To illustrate our method consider the following example: A whisker barrel in the mouse
somatosensory cortex consists of a few thousand neurons responding to stimuli from one whisker.
Modern microscopes can only image a small fraction (a few hundred neurons) of this circuit. But
since nearby neurons couple strongly to one another [3], by moving the microscope to nearby locations, one can expect to image neurons which are directly coupled to the first population of neurons.
In this paper we address the following question: Could we characterize the joint dynamics of the
first and second populations of neurons, even though they were not imaged simultaneously? Can we
estimate correlations in variability across the two populations? Surprisingly, the answer is yes.
We propose a statistical tool for "stitching" together measurements from multiple partial observations of the same neural circuit. We show that we can predict the correlated dynamics of large
*[email protected]
[Figure 1 here. Panel a: imaging session 1 and imaging session 2; panel b: couplings (A) with simultaneously measured pairs and non-simultaneously measured pairs.]
Figure 1: Inferring neuronal interactions from non-simultaneous measurements. a) If two
subsets of a neural population can only be recorded from in two separate imaging sessions, can
we infer the connectivity across the sub-populations (red connections)? b) We want to infer the
functional connectivity matrix, and in particular those entries which correspond to pairs of neurons
that were not simultaneously measured (red off-diagonal block). While the two sets of neurons
are pictured as non-overlapping here, we will also be interested in the case of partially overlapping
measurements.
populations of neurons even if many of the neurons have not been imaged simultaneously. In sensory cortical neurons, where large variability in the evoked response is observed [4, 5], our model
can successfully predict the magnitude of (so-called) noise correlations between non-simultaneously
recorded neurons. Our method can help us build data-driven models of large cortical circuits and
help test theories of circuit function.
Related recent research. Numerous studies have addressed the question of inferring functional
connectivity from 2-photon imaging data [6, 7] or electrophysiological measurements [8, 9, 10, 11].
These approaches include detailed models of the relationship between fluorescence measurements, calcium transients and spiking activity [6] as well as model-free information-theoretic approaches [7]. However, these studies do not attempt to infer functional connections between
non-simultaneously observed neurons. On the other hand, a few studies have presented statistical methods for dealing with sub-sampled observations of neural activity or connectivity, but these
approaches are not applicable to our problem: A recent study [12] presented a method for predicting noise correlations between non-simultaneously recorded neurons, but this method requires the
strong assumption that noise correlations are monotonically related to stimulus correlations. [13]
presented an algorithm for latent GLMs, but this algorithm does not scale to the population sizes
of interest here. [14] presented a method for inferring synaptic connections on dendritic trees from
sub-sampled voltage observations. In this setting, one typically obtains a measurement from each
location every few imaging frames, and it is therefore possible to interpolate these observations.
In contrast, in our application, imaging sessions are of much longer duration than the time-scale
of neural dynamics. Finally, [15] presented a statistical framework for reconstructing anatomical
connectivity by superimposing partial connectivity matrices derived from fluorescent markers.
2 Methods
Our goal is to estimate a joint model of the activity of a neural population which captures the correlation structure and stimulus selectivity of the population from partial observations of the population
activity. We model the problem as fitting a latent dynamical system with missing observations. In
principle, any latent dynamical system model [13] can be used; here we demonstrate our main point
using the simple linear Gaussian dynamical system for its computational tractability.
2.1 A latent dynamical system model for combining multiple measurements of population activity
Linear dynamics. We denote by x^k the activity of N neurons in the population on recording session k, and model its dynamics as linear with Gaussian innovations in discrete time,

x^k_t = A x^k_{t−1} + B u^k_t + ε_t,  where ε_t ∼ N(0, Q).    (1)
Here, the N × N coupling matrix A models correlations across neurons and time. An entry A_ij being non-zero implies that activity of neuron j at time t has a statistical influence on the activity of neuron i on the next time-step t + 1, but does not necessarily imply a direct synaptic connection. For this reason, entries of A are usually referred to as the "functional" (rather than anatomical) couplings or connectivity of the population. The entries of A also shape trial-to-trial variability which is correlated across neurons, i.e. noise-correlations. Further, we include an external, observed stimulus u^k_t (of dimension N_u) as well as receptive fields B (of size N × N_u) which model the stimulus dependence of the population activity. We model neural noise (which could include the effect of other influences not modeled explicitly) using zero-mean innovations ε_t, which are Gaussian i.i.d. with covariance matrix Q, assuming the latter to be diagonal (see below for how our framework also can allow for correlated noise). The mean x_0 and covariance Q_0 of the initial state x^k_0 were chosen such that the system is stationary (apart from the stimulus contribution Bu^k_t), i.e. x_0 = 0 and Q_0 satisfies the Lyapunov equation Q_0 = A Q_0 Aᵀ + Q.
For the sake of simplicity, we work directly in the space of continuous valued imaging measurements
(rather than on the underlying spiking activity), i.e. x^k_t models the relative calcium fluorescence signal. While this model does not capture the nonlinear and non-Gaussian cascade of neural couplings,
calcium dynamics, fluorescence measurements and imaging noise [16, 6], we will show that this
model nevertheless is able to predict correlations across non-simultaneously observed pairs of neurons.
Incomplete observations. In each imaging session k we measure the activity of N_k neurons simultaneously, where N_k is smaller than the total number of neurons N. Since these measurements are noisy and incomplete observations of the full state vector, the true underlying activity of all neurons x^k_t is treated as a latent variable. The vector of the N_k measurements at time t in session k is denoted as y^k_t and is related to the underlying population activity by

y^k_t = C^k (x^k_t + d + η_t),  η_t ∼ N(0, R),    (2)

where the "measurement matrix" C^k is of size N_k × N. Further assuming that the recording sites correspond to identified cells (which typically is the case for 2-photon calcium imaging), we can assume C^k to be known and of the following form: The element C^k_ij is 1 if neuron j of the population is being recorded from on session k (as the i-th recording site); the remaining elements of C^k are 0. The measurement noise is modeled as a Gaussian random variable η_t with covariance R, and the parameter d captures a constant offset. One can also envisage using our model with dimensions of x^k_t which are never observed; such latent dimensions would then model correlated noise or the
input from unobserved neurons into the population [17, 18].
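A minimal simulation of eqs. 1–2 (our own illustration; all matrix values and sizes are arbitrary placeholders) in which two sessions observe different halves of the population through session-specific C^k:

import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)
N, Nu, T = 20, 2, 500
A = 0.85 * np.linalg.qr(rng.standard_normal((N, N)))[0]  # spectral radius < 1
B = 0.1 * rng.standard_normal((N, Nu))
Q = np.diag(0.05 + 0.05 * rng.random(N))                 # diagonal innovations
R = 0.01 * np.eye(N)
Q0 = solve_discrete_lyapunov(A, Q)           # stationarity: Q0 = A Q0 A^T + Q

def simulate_session(C, T):
    u = rng.standard_normal((T, Nu))                     # external stimulus
    x = np.zeros((T, N))
    x[0] = rng.multivariate_normal(np.zeros(N), Q0)
    for t in range(1, T):
        x[t] = A @ x[t-1] + B @ u[t] + rng.multivariate_normal(np.zeros(N), Q)
    eta = rng.multivariate_normal(np.zeros(N), R, size=T)
    return u, (x + eta) @ C.T                            # eq. 2 with offset d = 0

C1 = np.eye(N)[: N // 2]                     # session 1: first half of neurons
C2 = np.eye(N)[N // 2 :]                     # session 2: second half
u1, y1 = simulate_session(C1, T)
u2, y2 = simulate_session(C2, T)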
Fitting the model. Our goal is to estimate the parameters (A, B, Q, R) of the latent linear dynamical
system (LDS) model described by equations (1) and (2) from experimental data. One can learn these
parameters using the standard expectation maximization (EM) algorithm that finds a local maximum
of the log-likelihood of the observed data [19]. The E-step can be performed via Kalman Smoothing
(with a different C^k for each session). In the M-step, the updates for A, B and Q are as in standard linear dynamical systems, and the updates for R and d are element-wise given by

d_j = (1 / (T n_j)) Σ_{k,t} δ^k_j ⟨ y^k_{t,ρ^k_j} − x^k_{t,j} ⟩
R_jj = (1 / (T n_j)) Σ_{k,t} δ^k_j ⟨ (y^k_{t,ρ^k_j} − x^k_{t,j} − d_j)² ⟩,

where ⟨·⟩ denotes the expectation over the posterior distribution calculated in the E-step, and T is the number of time steps in each recording session (assumed to be the same for each session for the sake of simplicity). Furthermore, δ^k_j := Σ_i C^k_ij is 1 if neuron j was imaged in session k and 0 otherwise, n_j = Σ_k δ^k_j is the total number of sessions in which neuron j was imaged and ρ^k_j is the index of the recording site of neuron j during session k. To improve the computational efficiency of the fitting procedure as well as to avoid shallow local maxima, we used a variant of online-EM with randomly selected mini-batches [20] followed by full batch EM for fine-tuning.
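As a sketch of these two closed-form updates (our own illustration; variable names are assumptions, and the quadratic term uses the posterior variances produced by the E-step):

import numpy as np

def update_d_R(ys, xs, vs, Cs):
    # ys[k]: (T, N_k) data for session k
    # xs[k], vs[k]: (T, N) posterior means and variances of x from the E-step
    # Cs[k]: (N_k, N) binary measurement matrices
    N = xs[0].shape[1]
    d = np.zeros(N); Rjj = np.zeros(N); n_j = np.zeros(N)
    for y, x, C in zip(ys, xs, Cs):
        T = y.shape[0]
        for site, j in zip(*np.nonzero(C)):      # site = rho_j^k for neuron j
            d[j] += np.sum(y[:, site] - x[:, j]) / T
            n_j[j] += 1
    d /= np.maximum(n_j, 1)
    for y, x, v, C in zip(ys, xs, vs, Cs):
        T = y.shape[0]
        for site, j in zip(*np.nonzero(C)):
            # <(y - x - d)^2> = (y - <x> - d)^2 + Var(x)
            Rjj[j] += (np.sum((y[:, site] - x[:, j] - d[j]) ** 2)
                       + np.sum(v[:, j])) / T
    Rjj /= np.maximum(n_j, 1)
    return d, np.diag(Rjj)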
2.2 Details of simulated and experimental data
Simulated data. We simulated a population of 60 neurons which were split into 3 pools ("cell types") of 20 neurons each, with both connection probability and strength being cell-type specific. Within each pool, pairs were coupled with probability 50% and random weights, cell-types one and two had excitatory connections onto the other cells, and type three had weak but dense inhibitory couplings (see Figure 2a, top left). Coupling weights were truncated at ±0.2. The 4-dimensional external stimulus was delivered into the first pool. On average, 24% of the variance of each neuron was noise, 2% driven by the stimulus, 25% by self-couplings and a further 49% by network-interactions. After shuffling the ordering of neurons (resulting in the connectivity matrix displayed in Fig. 2a, top middle), we simulated K = 10 trials of length T = 1000 samples from the population. We then pretended that the population was imaged in two sessions with non-overlapping subsets of 30 neurons each (Figure 2a, green outlined blocks) of K = 5 trials each, and that observation noise was uncorrelated and very small, std(η_i) = 0.006.
Experimental data. We also applied the stitching method to two calcium imaging datasets recorded
in the somatosensory cortex of awake or anesthetized mice. We imaged calcium signals in the
superficial layers of mouse barrel cortex (S1) in-vivo using 2-photon laser scanning microscopy [1].
A genetically encoded calcium indicator (GCaMP6s) was virally expressed, driven pan-neuronally
by the human-synapsin promoter, in the C2 whisker barrel and the activity of about 100-200 neurons
was imaged simultaneously in-vivo at about 3Hz, compatible with the slow timescales of the calcium
dynamics revealed by GCaMP6s. The anesthetized dataset was collected during an experiment in
which the C2 whisker of an anesthetized mouse was repeatedly flicked randomly in one of three
different directions (rostrally, caudally or ventrally). About 200 neurons were imaged for about
27 min at a depth of 240 µm in the C2 whisker barrel. The awake dataset was collected while an awake animal was performing a whisker flick detection task. In this session, about 80 neurons were imaged for about 55 min at a depth of 190 µm, also in the C2 whisker barrel. Regions of interest
(ROI) corresponding to putative GCaMP expressing soma (and in some instances isolated neuropil)
were manually defined and the time-series corresponding to the calcium signal for each such ROI
was extracted. The calcium time-series were high-pass filtered with a time-constant of 1s.
2.3 Quantifying and comparing model performance
Fictional imaging scenario in experimental data. To evaluate how well stitching works on real
data, we created a fictional imaging scenario. We pretended that the neurons, which were in reality
simultaneously imaged, were not imaged in one session but instead were ?imaged? in two subsets in
two different sessions. The subsets corresponding to different ?sessions? c = 60% of the neurons,
meaning that the subsets overlapped and a few neurons in common. We also experimented with
c = 50% as in our simulation above, but failed to get good performance without any overlapping
neurons. We imagined that we spent the first 40% of the time ?imaging? subset 1 and the second 40%
of the time ?imaging? subset 2. The final 20% of the data was withheld for use as the test set. We
then used our stitching method to predict pairwise correlations from the fictional imaging session.
Upper and lower bounds on performance. We wanted to benchmark how well our method is doing
both compared to the theoretical optimum and to a conventional approach. On synthetic data, we
can use the ground-truth parameters as the optimal model. In lieu of ground-truth on the real data,
we fit a "fully observed" model to the simultaneous imaging data of all neurons (which would be impossible of course in practice, but is possible in our fictional imaging scenario). We also analyzed the data using a conventional, "naive" approach in which we separately fit dynamical system models to each of the two imaging sessions and then combined their parameters. We set coefficients of non-simultaneously recorded pairs to 0 and averaged coefficients for neurons which were part of both imaging sessions (in the c = 60% scenario). The "fully observed" and the "naive" models constitute
an upper and lower bound respectively on our performance. Certainly we cannot expect to do better at predicting correlations than if we had observed all neurons simultaneously.
3 Results
We tested our ability to stitch multiple observations into one coherent model which is capable of
predicting statistics of the joint dynamics, such as correlations across non-simultaneously imaged
[Figure 2 here. Panel a: coupling matrices (true, naive, and stitched couplings; shuffle/unshuffle of blocks, off-diagonal coupling blocks highlighted); panel b: scatter of noise correlations, stitched and naive estimates vs. true (range ±0.5); panel c: scatter of stitched vs. true couplings (range ±0.2).]
Figure 2: Noise correlations and coupling parameters can be well recovered in a simulated
dataset. a) A coupling matrix for 60 neurons arranged in 3 blocks was generated (true coupling
matrix) and shuffled. We simulated the imaging of non-overlapping subsets of 30 neurons each in
two sessions. Couplings were recovered using a "naive" strategy and using our proposed "stitching"
method. b) Noise correlations estimated by our stitching method match true noise correlations
well. c) Couplings between non-simultaneously imaged neuron pairs (red off-diagonal block) are
estimated well by our method.
neuron pairs. We first apply our method to a synthetic dataset to explain its properties, and then
demonstrate that it works for real calcium imaging measurements from the mouse somatosensory
cortex.
3.1 Inferring correlations and model parameters in a simulated population
It might seem counterintuitive that one can infer the cross-couplings, and hence noise-correlations,
between neurons observed in separate sessions. An intuition for why this might work nevertheless
can be gained by considering the artificial scenario of a network of linearly interacting neurons driven
by Gaussian noise: Suppose that during the first recording session we image half of these neurons.
We can fit a linear state-space model to the data in which the other, unobserved half of the population
constitutes the latent space. Given enough data, the maximum likelihood estimate of the model
parameters (which is consistent) lets us identify the true joint dynamics of the whole population up
to an invertible linear transformation of the unobserved dimensions [21]. After the second imaging
session, where we image the second (and previously unobserved) half of the population, we can
identify this linear transformation, and thus identify all model parameters uniquely, in particular the
cross-couplings. To demonstrate this intuition, we simulated such an artificial dataset (described in
2.2) and describe here the results of the stitching procedure.
Recovering the coupling matrix. Our stitching method was able to recover the true coupling
matrix, including the off-diagonal blocks which correspond to pairs of neurons that were not imaged
simultaneously (see red-outlined blocks in 2a, bottom middle). As expected, recovery was better for
couplings across observed pairs (correlation between true and estimated parameters 0.95, excluding
self-couplings) than for non-simultaneously recorded pairs (Figure 2c; correlation 0.73). With the
?naive? approach couplings between non-simultaneously observed pairs cannot be recovered, and
even for simultaneously observed pairs, the estimate of couplings is biased (correlation 0.75).
Recovering noise correlations. We also quantified the degree to which we are able to predict
statistics of the joint dynamics of the whole network, in particular noise correlations across pairs
of neurons that were never observed simultaneously. We calculated noise correlations by computing correlations in variability of neural activity after subtracting contributions due to the stimulus.
We found that the stitching method was able to accurately recover the noise-correlations of nonsimultaneously recorded pairs (correlation between predicted and true correlations was 0.92; Figure
2b). In fact, we generally found the prediction of correlations to be more accurate than prediction
[Figure 3 here. Panel a: coupling matrices (fully observed, stitched, naive; partially observed couplings); panel b: scatter of stitched vs. fully observed couplings (range ±0.5); panel c: correlations; panel d: scatter of correlations for naive and stitched estimates (range 0 to 0.3).]
Figure 3: Examples of correlation and coupling recovery in the anesthetized calcium imaging
experiments. a) Coupling matrices fit to calcium signal using all neurons (fully observed) or fit after
"imaging" two overlapping subsets of 60% neurons each (stitched and naive). The naive approach is unable to estimate coupling terms for "non-simultaneously imaged" neurons, so these are set to zero. b) Scatter plot of coupling terms for "non-simultaneously imaged" neuron pairs estimated using the stitching method vs the fully observed estimates. c) Correlations predicted using the coupling matrices. d) Scatter plot of correlations in c for "non-simultaneously imaged" neuron pairs
estimated using the stitching and the naive approaches.
of the underlying coupling parameters. In contrast, a naive approach would not be able to estimate
noise correlations between non-simultaneously observed pairs. (We note that, as the stimulus drive
in this simulation was very weak, inferring noise correlations from stimulus correlations [12] would
be impossible).
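The subtraction-based computation of noise correlations can be sketched as follows (a hedged illustration with assumed array shapes and names, not the authors' analysis code):

```python
import numpy as np

def noise_correlations(activity):
    """activity: array of shape (trials, time, neurons) with a repeated stimulus."""
    psth = activity.mean(axis=0, keepdims=True)       # stimulus-locked component
    residuals = activity - psth                       # trial-to-trial variability
    flat = residuals.reshape(-1, activity.shape[-1])  # (trials*time, neurons)
    return np.corrcoef(flat, rowvar=False)            # neurons x neurons

rng = np.random.default_rng(1)
demo = rng.standard_normal((20, 100, 8))              # 20 trials, 100 bins, 8 neurons
C = noise_correlations(demo)
print(C.shape)                                        # (8, 8)
```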
Predicting unobserved neural activity. Given activity measurements from a subset of neurons,
our method can predict the activity of neurons in the unobserved subset. This prediction can be calculated by doing inference in the resulting LDS, i.e. by calculating the posterior mean μ^k_{1:T} = E(x^k_{1:T} | y^k_{1:T}, h^k_{1:T}) and looking at those entries of μ^k_{1:T} which correspond to unobserved neurons.
On our simulated data, we found that this prediction was strongly correlated with the underlying
ground-truth activity (average correlation 0.70 ± 0.01 s.e.m. across neurons, using a separate test set which was not used for parameter fitting). The upper bound for this prediction metric can be obtained by using the ground-truth parameters to calculate the posterior mean. Use of this ground-truth model resulted in a performance of 0.82 ± 0.01. In contrast, the "naive" approach can only utilize the stimulus, but not the activity of the observed population, for prediction and therefore only achieved a correlation of 0.23 ± 0.01.
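A minimal sketch of such cross-prediction is given below (our illustration: it uses the static conditional-Gaussian formula rather than the full Kalman smoother an LDS would employ, and all names and parameters are assumptions):

```python
import numpy as np

# For a linear-Gaussian system, the activity of unobserved neurons is
# predicted by the conditional mean E[x_u | x_o] = Cov_uo Cov_oo^{-1} x_o.
rng = np.random.default_rng(2)
n, T = 10, 2000
A = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)   # stable linear dynamics
x = np.zeros((n, T))
for t in range(1, T):
    x[:, t] = A @ x[:, t - 1] + rng.standard_normal(n)

obs, unobs = np.arange(5), np.arange(5, 10)
Cov = np.cov(x)
W = Cov[np.ix_(unobs, obs)] @ np.linalg.inv(Cov[np.ix_(obs, obs)])
x_u_hat = W @ x[obs]                                 # predicted unobserved activity
r = [np.corrcoef(x_u_hat[i], x[unobs[i]])[0, 1] for i in range(5)]
print(np.round(r, 2))                                # positive correlations
```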
3.2 Inferring correlations in mouse somatosensory cortex
Next, we applied our stitching method to two real datasets: anesthetized and awake (described in
Section 2.2). We demonstrate that it can predict correlations between non-simultaneously accessed
neuron pairs with accuracy approaching that of the upper bound ("fully observed" model trained on all neurons), and substantially better than the lower bound "naive" model.
Example results. Figure 3a displays coupling matrices of a population consisting of the 50 most
correlated neurons in the anesthetized dataset (see Section 2.2 for details) estimated using all three
methods. Our stitching method yielded a coupling matrix with structure similar to the fully observed model (Figure 3a, central panel), even in the off-diagonal blocks which correspond to nonsimultaneously recorded pairs. In contrast, the naive method, by definition, is unable to infer couplings for non-simultaneously recorded pairs, and therefore over-estimates the magnitude of observed couplings (Figure 3a, right panel). Even for non-simultaneously recorded pairs, the stitched
model predicted couplings which were correlated with the fully observed predictions (Figure 3b,
correlation 0.38).
[Figure 4 plots omitted: panels a–c for the anesthetized (top) and awake (bottom) datasets; legend: full obs (UB), stitched, naive (LB); x-axis: population size, y-axis: correlation.]
Figure 4: Recovering correlations and coupling parameters in real calcium imaging experiments. 100 neurons were simultaneously imaged in an anesthetized mouse (top row) and an awake mouse (bottom row). Random populations of these neurons, ranging in size from 10 to 100, were chosen and split into two slightly overlapping sub-sets each containing 60% of the neurons. The activity of these sub-sets was imagined to be "imaged" in two separate "imaging" sessions (see Section 2.2). a) Pairwise correlations for "non-simultaneously imaged" neuron pairs estimated by the "naive" and our "stitched" strategies, compared to correlations predicted by a model fit to all neurons ("full obs"). b) Accuracy of predicting the activity of one sub-set of neurons, given the activities of the other sub-set of neurons. c) Comparison of estimated couplings for "non-simultaneously imaged" neuron pairs to those estimated using the "fully observed" model. Note that true coupling terms are unavailable here.
However, of greater interest is how well our model can recover pairwise correlations between non-simultaneously measured neuron pairs. We found that our stitching method, but not the naive method, was able to accurately reconstruct these correlations (Figure 3c). As expected, the naive method strongly under-estimated correlations in the non-simultaneously recorded blocks, as it can only model stimulus-correlations but not noise-correlations across neurons (see Footnote 1). In contrast, our stitching method predicted correlations well, matching those of the fully observed model (correlation 0.84 for stitchLDS, 0.15 for naiveLDS; Figure 3d).
Summary results across multiple populations. Here, we investigate the robustness of our findings. We drew random neuronal populations of sizes ranging from 10 to 80 (for awake) or 100
(for anesthetized) from the full datasets. For each population, we fit three models (fully observed,
stitch, naive) and compared their correlations, parameters and activity cross-prediction accuracy.
We repeated this process 20 times for each population size and dataset (anesthetized/awake) to characterize the variability. We found that for both datasets, the correlations predicted by the stitching
method for non-simultaneously recorded pairs were similar to the fully observed ones, and that this
similarity is almost independent of population size (Figure 4a). In fact, for the awake data (in which
the overall level of correlation was higher), the correlation matrices were extremely similar (lower
panel). The stitching method also substantially outperformed the naive approach, for which the
similarity was lower by a factor of about 2.
We compared the accuracy of the models at predicting the neural activity of one subset of neurons given the stimulus and the activity of the other subset (Figure 4b). We find that our model
makes significantly better predictions than the lower bound naive model, whose performance comes
from modeling the stimulus and neurons in the overlap between both subsets. Indeed for the more
active and correlated awake dataset, predictions are nearly as good as those of the fully observed
Footnote 1: The naive approach also over-estimated correlations within each view. This is a consequence of biases resulting from averaging couplings across views for neurons in the overlap between the two fictional sessions.
model. We also found that prediction accuracy increased slightly with population size, perhaps
since a larger population provides more neurons from which the activity of the other subset can be
predicted. Apparently, this gain in accuracy from additional neurons outweighed any potential drop
in performance resulting from increased potential for over-fitting on larger populations.
While we have no access to the true cross-couplings for the real data, we can nonetheless compare
the couplings from our stitched model to those estimated by the fully observed model. We find
that the stitching model is indeed able to estimate couplings that correlate positively with the fully
observed couplings, even for non-simultaneously imaged neuron pairs. Interestingly, this correlation
drops with increasing population size, perhaps due to possible near degeneracy of parameters for
large systems.
4 Discussion
It has long been appreciated that a dynamical system can be reconstructed from observations of only
a subset of its variables [22, 23, 21]. These theoretical results suggest that while only measuring
the activity of one population of neurons, we can infer the activity of a second neural population
that strongly interacts with the first, up to re-parametrization. Here, we go one step further. By later
measuring the activity of the second population, we recover the true parametrization allowing us to
predict aspects of the joint dynamics of the two populations, such as noise correlations.
Our essential finding is that we can put these theoretical insights to work using a simple linear
dynamical system model that "stitches" together data from non-simultaneously recorded but strongly
interacting populations of neurons. We applied our method to analyze 2-photon population calcium
imaging measurements from the superficial layers of the somatosensory cortex of both anesthetized
and awake mice, and found that our method was able to successfully combine data not accessed
simultaneously. In particular, this approach allowed us to accurately predict correlations even for
pairs of non-simultaneously recorded neurons.
In this paper, we focused our demonstration to stitching together two populations of neurons. Our
framework can be generalized to more than two populations, however it remains to be empirically
seen how well larger numbers of populations can be combined. An experimental variable of interest
is the degree of overlap (shared neurons) between different populations of neurons. We found that
some overlap was critical for stitching to work, and increasing overlap improves stitching performance. Given a fixed imaging time budget, determining a good trade-off between overlap and total
coverage is an intriguing open problem in experimental design.
We emphasise that our linear Gaussian dynamical system provides only a statistical description of the observed data. However, even this simple model makes accurate predictions of correlations between
observed data. However, even this simple model makes accurate predictions of correlations between
non-simultaneously observed neurons. Nevertheless, more realistic models [16, 6] can help improve
the accuracy of these predictions and disentangle the contributions of spiking activity, calcium dynamics, fluorescence measurements and imaging noise to the observed statistics. Similarly, better
priors on neural connectivity [24] might improve reconstruction performance. Indeed, we found
in unreported simulations that using a sparsifying penalty on the connectivity matrix [6] improves
parameter estimates slightly. We note that our model can easily be extended to model potential
common input from neurons which are never observed [13] as a low dimensional LDS [17, 18].
The simultaneous measurement of the activity of all neurons in a neural circuit will shed much light
on the nature of neural computation. While there is much progress in developing faster imaging
modalities, there are fundamental physical limits to the number of neurons which can be simultaneously imaged. Our paper suggests a means for expanding our limited capabilities. With more
powerful algorithmic tools, we can imagine mapping population dynamics of all the neurons in an
entire neural circuit such as the zebrafish larval olfactory bulb, or layers 2 & 3 of a whisker barrel – an ambitious goal which has until now been out of reach.
Acknowledgements
We thank Peter Dayan for valuable comments on our manuscript and members of the Gatsby Unit for discussions. We are grateful for support from the Gatsby Charitable Trust, Wellcome Trust, ERC, EMBO, People
Programme (Marie Curie Actions) and German Federal Ministry of Education and Research (BMBF; FKZ:
01GQ1002, Bernstein Center Tübingen).
References
[1] J. N. D. Kerr and W. Denk, "Imaging in vivo: watching the brain in action," Nat Rev Neurosci, vol. 9, no. 3, pp. 195–205, 2008.
[2] C. Grienberger and A. Konnerth, "Imaging calcium in neurons," Neuron, vol. 73, no. 5, pp. 862–885, 2012.
[3] S. Lefort, C. Tomm, J.-C. Floyd Sarria, and C. C. H. Petersen, "The excitatory neuronal network of the C2 barrel column in mouse primary somatosensory cortex," Neuron, vol. 61, no. 2, pp. 301–316, 2009.
[4] D. J. Tolhurst, J. A. Movshon, and A. F. Dean, "The statistical reliability of signals in single neurons in cat and monkey visual cortex," Vision Research, vol. 23, no. 8, pp. 775–785, 1983.
[5] W. R. Softky and C. Koch, "The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs," The Journal of Neuroscience, vol. 13, no. 1, pp. 334–350, 1993.
[6] Y. Mishchenko, J. T. Vogelstein, and L. Paninski, "A Bayesian approach for inferring neuronal connectivity from calcium fluorescent imaging data," The Annals of Applied Statistics, vol. 5, no. 2B, pp. 1229–1261, 2011.
[7] O. Stetter, D. Battaglia, J. Soriano, and T. Geisel, "Model-free reconstruction of excitatory neuronal connectivity from calcium imaging signals," PLoS Comp Bio, vol. 8, no. 8, p. e1002653, 2012.
[8] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli, "Spatio-temporal correlations and visual signalling in a complete neuronal population," Nature, vol. 454, no. 7207, pp. 995–999, 2008.
[9] I. H. Stevenson, J. M. Rebesco, L. E. Miller, and K. P. Körding, "Inferring functional connections between neurons," Current Opinion in Neurobiology, vol. 18, no. 6, pp. 582–588, 2008.
[10] A. Singh and N. A. Lesica, "Incremental mutual information: A new method for characterizing the strength and dynamics of connections in neuronal circuits," PLoS Comp Bio, vol. 6, no. 12, p. e1001035, 2010.
[11] D. Song, H. Wang, C. Y. Tu, V. Z. Marmarelis, R. E. Hampson, S. A. Deadwyler, and T. W. Berger, "Identification of sparse neural functional connectivity using penalized likelihood estimation and basis functions," J Comp Neurosci, pp. 1–23, 2013.
[12] A. Wohrer, R. Romo, and C. Machens, "Linear readout from a neural population with partial correlation data," in Advances in Neural Information Processing Systems, vol. 22, Curran Associates, Inc., 2010.
[13] J. W. Pillow and P. Latham, "Neural characterization in partially observed populations of spiking neurons," Adv Neural Information Processing Systems, vol. 20, no. 3.5, 2008.
[14] A. Pakman, J. H. Huggins, and L. Paninski, "Fast penalized state-space methods for inferring dendritic synaptic connectivity," Journal of Computational Neuroscience, 2013.
[15] Y. Mishchenko and L. Paninski, "A Bayesian compressed-sensing approach for reconstructing neural connectivity from subsampled anatomical data," J Comp Neurosci, vol. 33, no. 2, pp. 371–388, 2012.
[16] J. T. Vogelstein, B. O. Watson, A. M. Packer, R. Yuste, B. Jedynak, and L. Paninski, "Spike inference from calcium imaging using sequential Monte Carlo methods," Biophysical Journal, vol. 97, no. 2, pp. 636–655, 2009.
[17] M. Vidne, Y. Ahmadian, J. Shlens, J. Pillow, J. Kulkarni, A. Litke, E. Chichilnisky, E. Simoncelli, and L. Paninski, "Modeling the impact of common noise inputs on the network activity of retinal ganglion cells," J Comput Neurosci, 2011.
[18] J. H. Macke, L. Büsing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani, "Empirical models of spiking in neural populations," in Advances in Neural Information Processing Systems, vol. 24, Curran Associates, Inc., 2012.
[19] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J R Stat Soc Ser B, vol. 39, no. 1, pp. 1–38, 1977.
[20] P. Liang and D. Klein, "Online EM for unsupervised models," in NAACL '09: Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Association for Computational Linguistics, 2009.
[21] T. Katayama, Subspace Methods for System Identification. Springer Verlag, 2005.
[22] L. E. Baum and T. Petrie, "Statistical inference for probabilistic functions of finite state Markov chains," The Annals of Mathematical Statistics, vol. 37, no. 6, pp. 1554–1563, 1966.
[23] F. Takens, "Detecting strange attractors in turbulence," in Dynamical Systems and Turbulence (D. A. Rand and L. S. Young, eds.), vol. 898 of Lecture Notes in Mathematics, (Warwick), pp. 366–381, Springer-Verlag, Berlin, 1981.
[24] S. W. Linderman and R. P. Adams, "Inferring functional connectivity with priors on network topology," in Cosyne Abstracts, 2013.
Noise-Enhanced Associative Memories
Amin Karbasi
Swiss Federal Institute of Technology Zurich
[email protected]
Amir Hesam Salavati
Ecole Polytechnique Federale de Lausanne
[email protected]
Amin Shokrollahi
Ecole Polytechnique Federale de Lausanne
[email protected]
Lav R. Varshney
IBM Thomas J. Watson Research Center
[email protected]
Abstract
Recent advances in associative memory design through structured pattern sets and
graph-based inference algorithms allow reliable learning and recall of exponential
numbers of patterns. Though these designs correct external errors in recall, they
assume neurons compute noiselessly, in contrast to highly variable neurons in
hippocampus and olfactory cortex. Here we consider associative memories with
noisy internal computations and analytically characterize performance. As long
as internal noise is less than a specified threshold, error probability in the recall
phase can be made exceedingly small. More surprisingly, we show internal noise
actually improves performance of the recall phase. Computational experiments
lend additional support to our theoretical analysis. This work suggests a functional
benefit to noisy neurons in biological neuronal networks.
1 Introduction
Hippocampus, olfactory cortex, and other brain regions are thought to operate as associative memories [1,2], having the ability to learn patterns from presented inputs, store a large number of patterns,
and retrieve them reliably in the face of noisy or corrupted queries [3–5]. Associative memory models are designed to have these properties.
Although such information storage and recall seemingly falls into the information-theoretic framework, where an exponential number of messages can be communicated reliably with a linear number
of symbols, classical associative memory models could only store a linear number of patterns [4]. A
primary reason is that classical models require memorizing a randomly chosen set of patterns. By enforcing structure and redundancy in the possible set of memorizable patterns (like natural stimuli [6], internal neural representations [7], and error-control codewords), advances in associative memory design allow storage of an exponential number of patterns [8, 9], just like in communication systems.
Information-theoretic and associative memory models of storage have been used to predict experimentally measurable properties of synapses in the mammalian brain [10,11]. But contrary to the fact
that noise is present in computational operations of the brain [12, 13], associative memory models
with exponential capacities have assumed no internal noise in the computational nodes. The purpose
here is to model internal noise and study whether such associative memories still operate reliably.
Surprisingly, we find internal noise actually enhances recall performance, suggesting a functional
role for variability in the brain.
In particular we consider a multi-level, graph code-based, associative memory model [9] and find
that even if all components are noisy, the final error probability in recall can be made exceedingly
small. We characterize a threshold phenomenon and show how to optimize algorithm parameters
when knowing statistical properties of internal noise. Rather counterintuitively the performance
of the memory model improves in the presence of internal neural noise, as observed previously as
stochastic resonance [13, 14]. There are mathematical connections to perturbed simplex algorithms
for linear programming [15], where internal noise pushes the algorithm out of local minima.
The benefit of internal noise has been noted previously in associative memory models with stochastic
update rules, cf. [16]. However, our framework differs from previous approaches in three key aspects. First, our memory model is different, which makes extension of previous analysis nontrivial.
Second, and perhaps most importantly, pattern retrieval capacity in previous approaches decreases
with internal noise, cf. [16, Fig. 6.1], in that increasing internal noise helps correct more external
errors, but also reduces the number of memorizable patterns. In our framework, internal noise does
not affect pattern retrieval capacity (up to a threshold) but improves recall performance. Finally, our
noise model has bounded rather than Gaussian noise, and so a suitable network may achieve perfect
recall despite internal noise.
Reliably storing information in memory systems constructed completely from unreliable components is a classical problem in fault-tolerant computing [17?19], where models have used random
access architectures with sequential correcting networks. Although direct comparison is difficult
since notions of circuit complexity are different, our work also demonstrates that associative memory architectures constructed from unreliable components can store information reliably.
Building on the idea of structured pattern sets [20], our associative memory model [9] relies on the
fact that all patterns to be learned lie in a low-dimensional subspace. Learning features of a lowdimensional space is very similar to autoencoders [21] and has structural similarities to Deep Belief
Networks (DBNs), particularly Convolutional Neural Networks [22].
2 Associative Memory Model
Notation and basic structure: In our model, a neuron can assume an integer-valued state from the set S = {0, . . . , S − 1}, interpreted as the short-term firing rate of neurons. A neuron updates its state based on the states of its neighbors {s_i}_{i=1}^n as follows. It first computes the weighted sum h = Σ_{i=1}^n w_i s_i + ζ, where w_i is the weight of the link from s_i and ζ is the internal noise, and then applies a nonlinear function f : R → S to h.
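A minimal sketch of this update is given below (our own illustration; the paper leaves f abstract at this point, so the rounding-and-clipping nonlinearity is an assumed choice):

```python
import numpy as np

def neuron_update(w, s, noise_bound, S, rng):
    zeta = rng.uniform(-noise_bound, noise_bound)   # bounded internal noise
    h = w @ s + zeta                                # weighted input sum
    return int(np.clip(np.round(h), 0, S - 1))      # one choice of f : R -> S

rng = np.random.default_rng(0)
w = np.array([0.5, -1.0, 0.25])
s = np.array([2, 1, 3])
print(neuron_update(w, s, noise_bound=0.1, S=4))    # prints 1 here
```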
An associative memory is represented by a weighted bipartite graph, G, with pattern neurons and constraint neurons. Each pattern x = (x_1, . . . , x_n) is a vector of length n, where x_i ∈ S, i = 1, . . . , n. Following [9], the focus is on recalling patterns with strong local correlation among entries. Hence, we divide the entries of each pattern x into L overlapping sub-patterns of lengths n_1, . . . , n_L. Due to overlaps, a pattern neuron can be a member of multiple subpatterns, as in Fig. 1a. The i-th subpattern is denoted x^(i) = (x_1^(i), . . . , x_{n_i}^(i)), and local correlations are assumed to be in the form of subspaces, i.e. the subpatterns x^(i) form a subspace of dimension k_i < n_i.
We capture the local correlations by learning a set of linear constraints over each subspace, corresponding to the dual vectors orthogonal to that subspace. More specifically, let {w_1^(i), . . . , w_{m_i}^(i)} be a set of dual vectors orthogonal to all subpatterns x^(i) of cluster i. Then:

y_j^(i) = (w_j^(i))^T · x^(i) = 0,   for all j ∈ {1, . . . , m_i} and for all i ∈ {1, . . . , L}.   (1)

Eq. (1) can be rewritten as W^(i) · x^(i) = 0, where W^(i) = [w_1^(i) | w_2^(i) | . . . | w_{m_i}^(i)]^T is the matrix of dual vectors. Now we use a bipartite graph with connectivity matrix determined by W^(i) to represent the subspace constraints learned from subpattern x^(i); this graph is called cluster i. We developed an efficient way of learning W^(i) in [9], also used here. Briefly, in each iteration of learning:
1. Pick a pattern x at random from the dataset;
2. Adjust the weight vectors w_j^(i) for j ∈ {1, . . . , m_i} and i ∈ {1, . . . , L} such that the projection of x onto w_j^(i) is reduced. Apply a sparsity penalty to favor sparse solutions.
This process repeats until all weights are orthogonal to the patterns in the dataset or the maximum iteration limit is reached. In this paper, the learning rule allows us to assume the weight matrices W^(i) are known and satisfy W^(i) · x^(i) = 0 for all patterns x in the dataset X.
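The following hedged sketch illustrates this learning loop (step sizes, the shrinkage rule, and all names are our choices; the exact update of [9] differs in detail):

```python
import numpy as np

def learn_dual_vector(X, iters=5000, lr=0.05, sparsity=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    for _ in range(iters):
        x = X[rng.integers(len(X))]                           # random pattern
        w = w - lr * (w @ x) * x / (x @ x + 1e-12)            # reduce projection on x
        w = np.sign(w) * np.maximum(np.abs(w) - sparsity, 0)  # sparsity shrinkage
        nrm = np.linalg.norm(w)
        w = w / nrm if nrm > 1e-6 else rng.standard_normal(X.shape[1])
    return w

# Patterns lying in a 2-dimensional subspace of R^4:
B = np.array([[1., 0., 1., 0.], [0., 1., 0., 1.]])
X = np.random.default_rng(1).integers(0, 3, (200, 2)) @ B
w = learn_dual_vector(X)
print(np.max(np.abs(X @ w)))   # near 0: w is (approximately) orthogonal to all patterns
```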
(a) Bipartite graph G.  (b) Contracted graph G̃.
Figure 1: The proposed neural associative memory with overlapping clusters.
For the forthcoming asymptotic analysis, we need to define a contracted graph G̃, whose connectivity matrix is denoted W̃ and has size L × n. This is a bipartite graph in which the constraints in each cluster are represented by a single neuron. Thus, if pattern neuron x_j is connected to cluster i, then W̃_{ij} = 1; otherwise W̃_{ij} = 0. We also define the degree distributions from an edge perspective over G̃, using λ̃(z) = Σ_j λ̃_j z^j and ρ̃(z) = Σ_j ρ̃_j z^{j−1}, where λ̃_j (resp., ρ̃_j) equals the fraction of edges that connect to pattern (resp., cluster) nodes of degree j.
Noise model: There are two types of noise in our model: external errors and internal noise. As mentioned earlier, a neural network should be able to retrieve a memorized pattern x̂ from its corrupted version x due to external errors. We assume the external error is an additive vector of size n, denoted by z, satisfying x = x̂ + z, whose entries assume values independently from {−1, 0, +1} (see Footnote 1) with corresponding probabilities p_{−1} = p_{+1} = ε/2 and p_0 = 1 − ε. The realization of the external error on subpattern x^(i) is denoted z^(i). Note that the subspace assumption implies W · y = W · z and W^(i) · y^(i) = W^(i) · z^(i) for all i. Neurons also suffer from internal noise. We consider a bounded noise model, i.e. a random number uniformly distributed in the intervals [−υ, υ] and [−ν, ν] for the pattern and constraint neurons, respectively (υ, ν < 1).

The goal of recall is to filter the external error z to obtain the desired pattern x as the correct states of the pattern neurons. When neurons compute noiselessly, this task may be achieved by exploiting the fact that the set of patterns x ∈ X satisfies the set of constraints W^(i) · x^(i) = 0. However, it is not clear how to accomplish this objective when the neural computations are noisy. Rather surprisingly, we show that eliminating external errors is not only possible in the presence of internal noise, but that neural networks with moderate internal noise demonstrate better external noise resilience.
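A small sketch of the external-error model (variable names are ours):

```python
import numpy as np

# Each entry of z is -1 or +1 with probability eps/2 each, and 0 otherwise.
rng = np.random.default_rng(3)
n, eps = 12, 0.125
x_hat = rng.integers(0, 4, size=n)                      # a stored pattern over S
z = rng.choice([-1, 0, 1], size=n, p=[eps / 2, 1 - eps, eps / 2])
x = x_hat + z                                           # corrupted input x = x_hat + z
print(x_hat, z, x, sep="\n")
```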
Recall algorithms: To efficiently deal with external errors, we use a combination of Alg. 1 and Alg. 2. The role of Alg. 1 is to correct at least a single external error in each cluster. Without overlaps between clusters, the error resilience of the network is limited. Alg. 2 exploits the overlaps: it helps clusters with external errors recover their correct states by using the reliable information from clusters that do not have external errors. The error resilience of the resulting combination thereby drastically improves. Now we describe the details of Alg. 1 and Alg. 2 more precisely.

Alg. 1 performs a series of forward and backward iterations in each cluster G^(ℓ) to remove (at least) one external error from its input domain. At each iteration, the pattern neurons locally decide whether to update their current state: if the amount of feedback received by a pattern neuron exceeds a threshold, the neuron updates its state, and otherwise remains as is. With abuse of notation, let us denote the messages transmitted by pattern node i and constraint node j at round t by x_i(t) and y_j(t), respectively. In round 0, the pattern nodes are initialized by a pattern x̂, sampled from the dataset X, perturbed by external errors z, i.e., x(0) = x̂ + z. Thus, for cluster ℓ we have x^(ℓ)(0) = x̂^(ℓ) + z^(ℓ), where z^(ℓ) is the realization of errors on subpattern x^(ℓ).
In round t, the pattern and constraint neurons update their states using feedback from their neighbors. However, since neural computations are faulty, decisions made by neurons may not be reliable. To minimize the effects of internal noise, we use the following update rule for pattern node i in cluster ℓ:

x_i^(ℓ)(t+1) = x_i^(ℓ)(t) − sign(g_i^(ℓ)(t)),  if |g_i^(ℓ)(t)| ≥ φ;  and  x_i^(ℓ)(t+1) = x_i^(ℓ)(t),  otherwise.   (2)

Footnote 1: Note that the proposed algorithms also work with larger noise values, i.e. from a set {−a, . . . , a} for some a ∈ N, see [23]; the ±1 noise model is presented here for simplicity.
Algorithm 1 Intra-Module Error Correction
Input: Training set X, thresholds φ, ψ, iteration t_max
Output: x_1^(ℓ), x_2^(ℓ), . . . , x_{n_ℓ}^(ℓ)
1: for t = 1 → t_max do
2:   Forward iteration: Calculate the input h_i^(ℓ) = Σ_{j=1}^{n_ℓ} W_{ij}^(ℓ) x_j^(ℓ) + v_i for each neuron y_i^(ℓ) and set y_i^(ℓ) = f(h_i^(ℓ), ψ).
3:   Backward iteration: Each neuron x_j^(ℓ) computes g_j^(ℓ) = ( Σ_{i=1}^{m_ℓ} sign(W_{ij}^(ℓ)) y_i^(ℓ) ) / ( Σ_{i=1}^{m_ℓ} sign(|W_{ij}^(ℓ)|) ) + u_j.
4:   Update the state of each pattern neuron j according to x_j^(ℓ) = x_j^(ℓ) − sign(g_j^(ℓ)) only if |g_j^(ℓ)| > φ.
5: end for

Algorithm 2 Sequential Peeling Algorithm
Input: G̃, G^(1), G^(2), . . . , G^(L)
Output: x_1, x_2, . . . , x_n
1: while there is an unsatisfied v^(ℓ) do
2:   for ℓ = 1 → L do
3:     If v^(ℓ) is unsatisfied, apply Alg. 1 to cluster G^(ℓ).
4:     If v^(ℓ) remained unsatisfied, revert the state of the pattern neurons connected to v^(ℓ) to their initial state. Otherwise, keep their current states.
5:   end for
6: end while
7: Declare x_1, x_2, . . . , x_n if all v^(ℓ)'s are satisfied. Otherwise, declare failure.
where φ is the update threshold and g_i^(ℓ)(t) = ( sign(W^(ℓ))^T · y^(ℓ)(t) )_i / d_i + u_i. Here, d_i is the degree of pattern node i in cluster ℓ, y^(ℓ)(t) = [y_1^(ℓ)(t), . . . , y_{m_ℓ}^(ℓ)(t)] is the vector of messages transmitted by the constraint neurons in cluster ℓ, and u_i is the random noise affecting pattern node i. Basically, the term g_i^(ℓ)(t) reflects the (average) belief of the constraint nodes connected to pattern neuron i about its correct value. If g_i^(ℓ)(t) is larger than a specified threshold φ, it means most of the connected constraints suggest the current state x_i^(ℓ)(t) is not correct; hence, a change should be made. Note this average belief is diluted by the internal noise of neuron i. As mentioned earlier, u_i is uniformly distributed in the interval [−υ, υ], for some υ < 1. On the constraint side, the update rule is:

y_i^(ℓ)(t) = f(h_i^(ℓ)(t), ψ) = +1 if h_i^(ℓ)(t) ≥ ψ;  0 if −ψ ≤ h_i^(ℓ)(t) < ψ;  −1 otherwise,   (3)

where ψ is the update threshold and h_i^(ℓ)(t) = ( W^(ℓ) · x^(ℓ)(t) )_i + v_i. Here, x^(ℓ)(t) = [x_1^(ℓ)(t), . . . , x_{n_ℓ}^(ℓ)(t)] is the vector of messages transmitted by the pattern neurons and v_i is the random noise affecting node i. As before, we consider a bounded noise model for v_i, i.e., it is uniformly distributed in the interval [−ν, ν] for some ν < 1.
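Putting rules (2) and (3) together, a minimal sketch of one cluster's recall iterations might look as follows (our own illustration under the stated noise model; the thresholds and the toy constraint matrix are assumptions):

```python
import numpy as np

def recall_cluster(W, x, phi, psi, nu, upsilon, S, t_max, rng):
    """One cluster's noisy recall iterations (a sketch of Alg. 1)."""
    m, n = W.shape
    d = np.maximum(np.abs(np.sign(W)).sum(axis=0), 1)      # pattern-node degrees
    for _ in range(t_max):
        v = rng.uniform(-nu, nu, size=m)                   # constraint-side noise
        h = W @ x + v
        y = np.where(h >= psi, 1.0, np.where(h >= -psi, 0.0, -1.0))  # rule (3)
        u = rng.uniform(-upsilon, upsilon, size=n)         # pattern-side noise
        g = (np.sign(W).T @ y) / d + u                     # average constraint feedback
        x = np.where(np.abs(g) > phi, x - np.sign(g), x)   # rule (2)
        x = np.clip(x, 0, S - 1)                           # footnote 2 saturation
    return x

rng = np.random.default_rng(4)
W = np.array([[1., -1., 0., 0.],
              [0., 1., -1., 0.],
              [0., 0., 1., -1.]])
x_true = np.array([2., 2., 2., 2.])           # satisfies W @ x_true = 0
x_in = x_true + np.array([1., 0., 0., 0.])    # one external error
print(recall_cluster(W, x_in, phi=0.9, psi=0.5, nu=0.2, upsilon=0.3,
                     S=4, t_max=10, rng=rng))  # usually recovers [2. 2. 2. 2.]
```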
The error correction ability of Alg. 1 is fairly limited, as determined analytically and through simulations [23]. In essence, Alg. 1 can correct one external error with high probability, but degrades
terribly against two or more external errors. Working independently, clusters cannot correct more
than a few external errors, but their combined performance is much better. As clusters overlap, they
help each other in resolving external errors: a cluster whose pattern neurons are in their correct states
can always provide truthful information to neighboring clusters. This property is exploited in Alg. 2
by applying Alg. 1 in a round-robin fashion to each cluster. Clusters either eliminate their internal
noise in which case they keep their new states and can now help other clusters, or revert back to their
original states. Note that by such a scheduling scheme, neurons can only change their states towards
correct values. This scheduling technique is similar in spirit to the peeling algorithm [24].
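The round-robin schedule itself can be sketched as below (again a hedged illustration of ours, reusing recall_cluster from the previous sketch; the satisfaction test is one reasonable choice given ψ > ν):

```python
import numpy as np

def recall_network(clusters, x, psi, nu, **kw):
    """Round-robin peeling over clusters (a sketch of Alg. 2).

    clusters: list of (W, idx) pairs, idx indexing the cluster's pattern neurons.
    """
    progress = True
    while progress:                                        # stop when a pass changes nothing
        progress = False
        for W, idx in clusters:
            if np.all(np.abs(W @ x[idx]) < psi - nu):      # already satisfied
                continue
            x_new = recall_cluster(W, x[idx].copy(), psi=psi, nu=nu, **kw)
            if np.all(np.abs(W @ x_new) < psi - nu):       # constraints now satisfied
                x[idx] = x_new                             # keep the corrected states
                progress = True
            # otherwise revert: x[idx] is left unchanged
    return x                                               # failure if some cluster stays unsatisfied
```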
3 Recall Performance Analysis
Now let us analyze recall error performance. The following lemma shows that if φ and ψ are chosen properly, then in the absence of external errors the constraints remain satisfied and internal noise cannot result in violations. This is a crucial property for Alg. 2, as it allows one to determine whether a cluster has successfully eliminated external errors (Step 4 of the algorithm) by merely checking the satisfaction of all constraint nodes.

Footnote 2: Note that x_i^(ℓ)(t+1) is further mapped to the interval [0, S − 1] by saturating the values below 0 and above S − 1 to 0 and S − 1, respectively. The corresponding equations are omitted for brevity.

Footnote 3: Note that although the values of y_i^(ℓ)(t) can be shifted to 0, 1, 2 instead of −1, 0, 1 to match our assumption that neural states are non-negative, we leave them as such to simplify later analysis.
Lemma 1. In the absence of external errors, the probability that a constraint neuron (resp. pattern neuron) in cluster ℓ makes a wrong decision due to its internal noise is given by π_0^(ℓ) = max(0, (ν − ψ)/ν) (resp. P_0^(ℓ) = max(0, (υ − φ)/υ)).

Proof is given in [23]. In the sequel, we assume ψ > ν and φ > υ so that π_0^(ℓ) = 0 and P_0^(ℓ) = 0.
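A quick Monte Carlo check of Lemma 1's formula for the constraint side (our illustration):

```python
import numpy as np

# With zero external error, a constraint neuron sees only its internal noise
# v ~ Uniform[-nu, nu] and fires wrongly iff |v| >= psi, which happens with
# probability max(0, (nu - psi) / nu).
rng = np.random.default_rng(5)
nu, psi = 0.8, 0.5
v = rng.uniform(-nu, nu, size=1_000_000)
print(np.mean(np.abs(v) >= psi))        # empirical ~ 0.375
print(max(0.0, (nu - psi) / nu))        # Lemma 1 value: 0.375
```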
However, an external error combined with internal noise may still push neurons to an incorrect state.
Given the above lemma and our neural architecture, we can prove the following surprising result: in
the asymptotic regime of increasing number of iterations of Alg. 2, a neural network with internal
noise outperforms one without. Let us define the fraction of errors corrected by the noiseless and
noisy neural networks (parametrized by υ and ν) after T iterations of Alg. 2 by Λ(T) and Λ_{υ,ν}(T), respectively. Note that both Λ(T) ≤ 1 and Λ_{υ,ν}(T) ≤ 1 are non-decreasing sequences in T. Hence, their limiting values are well defined: lim_{T→∞} Λ(T) = Λ* and lim_{T→∞} Λ_{υ,ν}(T) = Λ*_{υ,ν}.

Theorem 2. Let us choose φ and ψ so that π_0^(ℓ) = 0 and P_0^(ℓ) = 0 for all ℓ ∈ {1, . . . , L}. For the same realization of external errors, we have Λ*_{υ,ν} ≥ Λ*.
Proof is given in [23]. The high-level idea of why a noisy network outperforms a noiseless one comes from understanding stopping sets. These are realizations of external errors where the iterative Alg. 2 cannot correct all of them. We show that the stopping set shrinks as we add internal noise. In other words, we show that in the limit of T → ∞ the noisy network can correct any error pattern that can
be corrected by the noiseless version and it can also get out of stopping sets that cause the noiseless
network to fail. Thus, the supposedly harmful internal noise will help Alg. 2 to avoid stopping sets.
Thm. 2 suggests the only possible downside with using a noisy network is its possible running time
in eliminating external errors: the noisy neural network may need more iterations to achieve the
same error correction performance. Interestingly, our empirical experiments show that in certain
scenarios, even the running time improves when using a noisy network.
Thm. 2 indicates that noisy neural networks (under our model) outperform noiseless ones, but does
not specify the level of errors that such networks can correct. Now we derive a theoretical upper
bound on error correction performance. To this end, let P_{c_i} be the average probability that a cluster can correct i external errors in its domain. The following theorem gives a simple condition under which Alg. 2 can correct a linear fraction of external errors (in terms of n) with high probability. The condition involves λ̃ and ρ̃, the degree distributions of the contracted graph G̃.

Theorem 3. Under the assumptions that graph G̃ grows large and it is chosen randomly with degree distributions given by λ̃ and ρ̃, Alg. 2 is successful if

λ̃( 1 − Σ_{i≥1} (ε^{i−1} z^{i−1} / i!) · (d^{i−1} ρ̃(1 − z) / dz^{i−1}) · P_{c_i} ) < z,   for z ∈ [0, ε].   (4)
Proof is given in [23] and is based on the density evolution technique [25]. Thm. 3 states that for any
fraction of errors Λ_{υ,ν} ≤ Λ*_{υ,ν} that satisfies the above recursive formula, Alg. 2 will be successful with probability close to one. Note that the first fixed point of the above recursive equation dictates the maximum fraction of errors Λ*_{υ,ν} that our model can correct. For the special case of P_{c_1} = 1 and P_{c_i} = 0, ∀i > 1, we obtain λ̃(1 − ρ̃(1 − z)) < z, the same condition given in [9]. Thm. 3 takes into account the contribution of all P_{c_i} terms and, as we will see, their values change as we incorporate the effect of internal noise υ and ν. Our results show that the maximum value of P_{c_i} does not occur when the internal noise is equal to zero, i.e. υ = ν = 0, but instead when the neurons are contaminated with internal noise! As an example, Fig. 2 illustrates how P_{c_i} behaves as a function of υ in the network considered (note that the maximum values are not at υ = 0). This finding suggests
that even individual clusters are able to correct more errors in the presence of internal noise.
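The fixed-point condition can be explored numerically, as in the hedged sketch below (toy degree polynomials and P_{c_i} values of our choosing, not those of the simulated network):

```python
import numpy as np
from math import factorial

def de_step(z, eps, lam_coeffs, rho_coeffs, Pc):
    """One iteration of the density-evolution map of Eq. (4)."""
    rho = np.polynomial.Polynomial(rho_coeffs)
    s = sum((eps * z) ** (i - 1) / factorial(i)
            * rho.deriv(i - 1)(1 - z) * Pc[i - 1]
            for i in range(1, len(Pc) + 1))
    return np.polynomial.Polynomial(lam_coeffs)(1 - s)

lam = [0, 0, 1]          # lambda~(x) = x^2  (toy choice)
rho = [0, 0, 1]          # rho~(x) = x^2     (toy choice)
Pc = [1.0, 0.3, 0.05]    # Pc_1, Pc_2, Pc_3  (toy values)
z, eps = 0.1, 0.1
for _ in range(200):
    z = de_step(z, eps, lam, rho, Pc)
print(f"z after 200 iterations: {z:.2e}")   # -> 0 means eps is (numerically) correctable
```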
Figure 2: The value of P_{c_i} as a function of the pattern neurons' noise υ for i = 1, . . . , 4. Noise at the constraint neurons is assumed to be zero (ν = 0).

Figure 3: The final SER for a network with n = 400, L = 50, cf. [9]. The blue curves correspond to the noiseless neural network.

3.1 Simulations
Now we consider simulation results for a finite system. To learn the subspace constraints (1) for each
cluster G(`) we use the learning algorithm in [9]. Henceforth, we assume that the weight matrix W
is known and given. In our setup, we consider a network of size n = 400 with L = 50 clusters. We
have 40 pattern nodes and 20 constraint nodes in each cluster, on average. External error is modeled
by randomly generated vectors z with entries ±1 with probability ε and 0 otherwise. Vector z is added to the correct patterns, which satisfy (1). For recall, Alg. 2 is used and results are reported in terms of the Symbol Error Rate (SER) as the level of external error (ε) or internal noise (υ, ν) is changed; this involves counting the positions where the output of Alg. 2 differs from the correct pattern.
3.1.1 Symbol Error Rate as a function of Internal Noise
Fig. 3 illustrates the final SER of our algorithm for different values of υ and ν. Recall that υ and ν quantify the level of noise in the pattern and constraint neurons, respectively. Dashed lines in Fig. 3 are simulation results whereas solid lines are the theoretical upper bounds provided in this paper. As evident, there is a threshold phenomenon such that the SER is negligible for ε ≤ ε* and grows beyond this threshold. As expected, simulation results are better than the theoretical bounds. In particular, the gap is relatively large as ε grows.

A more interesting trend in Fig. 3 is the fact that internal noise helps in achieving better performance, as predicted by the theoretical analysis (Thm. 2). Notice how ε* moves towards one as υ increases. This phenomenon is examined more closely in Figs. 4a and 4b, where ε is fixed to 0.125 while υ and ν vary. As we see, a moderate amount of internal noise at both pattern and constraint neurons improves performance. There is an optimum point (υ*, ν*) for which the SER reaches its minimum. Fig. 4b indicates for instance that ν* ≈ 0.25, beyond which the SER deteriorates.
3.2 Recall Time as a function of Internal Noise
Fig. 5 illustrates the number of iterations performed by Alg. 2 for correcting the external errors when ε is fixed to 0.075. We stop whenever the algorithm corrects all external errors or declare a recall
error if all errors were not corrected in 40 iterations. Thus, the corresponding areas in the figure
where the number of iterations reaches 40 indicates decoding failure. Figs. 6a and 6b are projected
versions of Fig. 5 and show the average number of iterations as a function of ? and ?, respectively.
The amount of internal noise drastically affects the speed of Alg. 2. First, from Fig. 5 and 6b observe
that running time is more sensitive to noise at constraint neurons than pattern neurons and that the
algorithms become slower as noise at constraint neurons is increased. In contrast, note that internal
noise at the pattern neurons may improve the running time, as seen in Fig. 6a.
Figure 4: The final SER vs. the internal noise parameters at the pattern and constraint neurons for ε = 0.125. (a) Final SER as a function of υ for ε = 0.125. (b) The effect of ν on the final SER for ε = 0.125.
Figure 5: The effect of internal noise on the number of iterations of Alg. 2 when ε = 0.075.
Note that the results presented here are for the case where the noiseless decoder succeeds as well and its average number of iterations is pretty close to the optimal value (see Fig. 5). In [23], we provide additional results corresponding to ε = 0.125, where the noiseless decoder encounters stopping sets while the noisy decoder is still capable of correcting external errors; there we see that the optimal running time occurs when the neurons have a fair amount of internal noise.

In [23] we also provide results of a study for a slightly modified scenario where there is only internal noise and no external errors. Furthermore, φ < υ. Thus, the internal noise can now cause neurons to make wrong decisions, even in the absence of external errors. There, we witness the more familiar phenomenon where increasing the amount of internal noise results in a worse performance. This finding emphasizes the importance of choosing the update thresholds φ and ψ according to Lem. 1.
4 Pattern Retrieval Capacity
For completeness, we review pattern retrieval capacity results from [9] to show that the proposed
model is capable of memorizing an exponentially large number of patterns. First, note that since the
patterns form a subspace, the number of patterns C does not have any effect on the learning or recall
algorithms (except for its obvious influence on the learning time). Thus, in order to show that the
pattern retrieval capacity is exponential in n, all we need to demonstrate is that there exists a training
set X with C patterns of length n for which C ≥ a^{rn}, for some a > 1 and 0 < r.

Theorem 4 ([9]). Let X be a C × n matrix, formed by C vectors of length n with entries from the set S. Furthermore, let k = rn for some 0 < r < 1. Then, there exists a set of vectors for which C = a^{rn}, with a > 1, and rank(X) = k < n.
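As a concrete numerical illustration (hypothetical values chosen only to match the theorem's form):

```python
# With n = 400 and subspace dimension k = rn at r = 0.5, a base of a = 2
# (assumed values consistent with Theorem 4) already gives
n, r, a = 400, 0.5, 2
C = a ** int(r * n)
print(C.bit_length())   # 201 bits: roughly 1.6e60 storable patterns
```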
Figure 6: The effect of internal noise on the number of iterations performed by Alg. 2, for different values of υ and ν with ε = 0.075. An average iteration number of 40 indicates the failure of Alg. 2. (a) Effect of internal noise at the pattern neurons' side. (b) Effect of internal noise at the constraint neurons' side.
The proof is constructive: we create a dataset X such that it can be memorized by the proposed
neural network and satisfies the required properties, i.e. the subpatterns form a subspace and pattern
entries are integer values from the set S = {0, . . . , S − 1}. The complete proof can be found in [9].
5 Discussion
We have demonstrated that associative memories with exponential capacity still work reliably even
when built from unreliable hardware, addressing a major problem in fault-tolerant computing and
further arguing for the viability of associative memory models for the (noisy) mammalian brain.
After all, brain regions modeled as associative memories, such as the hippocampus and the olfactory
cortex, certainly do display internal noise [12, 13, 26]. The linear-nonlinear computations of Alg. 1
are certainly biologically plausible, but implementing the state reversion computation of Alg. 2 in a
biologically plausible way remains an open question.
We found a threshold phenomenon for reliable operation, which manifests the tradeoff between
the amount of internal noise and the amount of external noise that the system can handle. In fact,
we showed that internal noise actually improves the performance of the network in dealing with
external errors, up to some optimal value. This is a manifestation of the stochastic facilitation [13] or
noise enhancement [14] phenomenon that has been observed in other neuronal and signal processing
systems, providing a functional benefit to variability in the operation of neural systems.
The associative memory design developed herein uses thresholding operations in the message-passing algorithm for recall; as part of our investigation, we optimized these neural firing thresholds based on the statistics of the internal noise. As noted by Sarpeshkar in describing the properties of analog and digital computing circuits, "In a cascade of analog stages, noise starts to accumulate. Thus, complex systems with many stages are difficult to build. [In digital systems] Round-off error does not accumulate significantly for many computations. Thus, complex systems with many stages are easy to build" [27]. One key to our result is capturing this benefit of digital processing (thresholding to prevent the build-up of errors due to internal noise) as well as a modular architecture which
allows us to correct a linear number of external errors (in terms of the pattern length).
This paper focused on recall, however learning is the other critical stage of associative memory operation. Indeed, information storage in nervous systems is said to be subject to storage (or learning)
noise, in situ noise, and retrieval (or recall) noise [11, Fig. 1]. It should be noted, however, there
is no essential loss by combining learning noise and in situ noise into what we have called external
error herein, cf. [19, Fn. 1 and Prop. 1]. Thus our basic qualitative result extends to the setting where
the learning and stored phases are also performed with noisy hardware.
Going forward, it is of interest to investigate other neural information processing models that explicitly incorporate internal noise and see whether they provide insight into observed empirical phenomena. As an example, we might be able to understand the threshold phenomenon observed in
the SER of human telegraph operators under heat stress [28, Fig. 2], by invoking a thermal internal
noise explanation.
References
[1] A. Treves and E. T. Rolls, "Computational analysis of the role of the hippocampus in memory," Hippocampus, vol. 4, pp. 374–391, Jun. 1994.
[2] D. A. Wilson and R. M. Sullivan, "Cortical processing of odor objects," Neuron, vol. 72, pp. 506–519, Nov. 2011.
[3] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci. U.S.A., vol. 79, pp. 2554–2558, Apr. 1982.
[4] R. J. McEliece, E. C. Posner, E. R. Rodemich, and S. S. Venkatesh, "The capacity of the Hopfield associative memory," IEEE Trans. Inf. Theory, vol. IT-33, pp. 461–482, 1987.
[5] D. J. Amit and S. Fusi, "Learning in neural networks with material synapses," Neural Comput., vol. 6, pp. 957–982, Sep. 1994.
[6] B. A. Olshausen and D. J. Field, "Sparse coding of sensory inputs," Curr. Opin. Neurobiol., vol. 14, pp. 481–487, Aug. 2004.
[7] A. A. Koulakov and D. Rinberg, "Sparse incomplete representations: A potential role of olfactory granule cells," Neuron, vol. 72, pp. 124–136, Oct. 2011.
[8] A. H. Salavati and A. Karbasi, "Multi-level error-resilient neural networks," in Proc. 2012 IEEE Int. Symp. Inf. Theory, Jul. 2012, pp. 1064–1068.
[9] A. Karbasi, A. H. Salavati, and A. Shokrollahi, "Iterative learning and denoising in convolutional neural associative memories," in Proc. 30th Int. Conf. Mach. Learn. (ICML 2013), Jun. 2013, pp. 445–453.
[10] N. Brunel, V. Hakim, P. Isope, J.-P. Nadal, and B. Barbour, "Optimal information storage and the distribution of synaptic weights: Perceptron versus Purkinje cell," Neuron, vol. 43, pp. 745–757, 2004.
[11] L. R. Varshney, P. J. Sjöström, and D. B. Chklovskii, "Optimal information storage in noisy synapses under resource constraints," Neuron, vol. 52, pp. 409–423, Nov. 2006.
[12] C. Koch, Biophysics of Computation. New York: Oxford University Press, 1999.
[13] M. D. McDonnell and L. M. Ward, "The benefits of noise in neural systems: bridging theory and experiment," Nat. Rev. Neurosci., vol. 12, pp. 415–426, Jul. 2011.
[14] H. Chen, P. K. Varshney, S. M. Kay, and J. H. Michels, "Theory of the stochastic resonance effect in signal detection: Part I: Fixed detectors," IEEE Trans. Signal Process., vol. 55, pp. 3172–3184, Jul. 2007.
[15] D. A. Spielman and S.-H. Teng, "Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time," J. ACM, vol. 51, pp. 385–463, May 2004.
[16] D. J. Amit, Modeling Brain Function. Cambridge: Cambridge University Press, 1992.
[17] M. G. Taylor, "Reliable information storage in memories designed from unreliable components," Bell Syst. Tech. J., vol. 47, pp. 2299–2337, Dec. 1968.
[18] A. V. Kuznetsov, "Information storage in a memory assembled from unreliable components," Probl. Inf. Transm., vol. 9, pp. 100–114, July–Sept. 1973.
[19] L. R. Varshney, "Performance of LDPC codes under faulty iterative decoding," IEEE Trans. Inf. Theory, vol. 57, pp. 4427–4444, Jul. 2011.
[20] V. Gripon and C. Berrou, "Sparse neural networks with large learning diversity," IEEE Trans. Neural Netw., vol. 22, pp. 1087–1096, Jul. 2011.
[21] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, "Extracting and composing robust features with denoising autoencoders," in Proc. 25th Int. Conf. Mach. Learn. (ICML 2008), Jul. 2008, pp. 1096–1103.
[22] Q. V. Le, J. Ngiam, Z. Chen, D. Chia, P. W. Koh, and A. Y. Ng, "Tiled convolutional neural networks," in Advances in Neural Information Processing Systems 23, J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, Eds. Cambridge, MA: MIT Press, 2010, pp. 1279–1287.
[23] A. Karbasi, A. H. Salavati, A. Shokrollahi, and L. R. Varshney, "Noise-enhanced associative memories," arXiv, 2013.
[24] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Efficient erasure correcting codes," IEEE Trans. Inf. Theory, vol. 47, pp. 569–584, Feb. 2001.
[25] T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge: Cambridge University Press, 2008.
[26] M. Yoshida, H. Hayashi, K. Tateno, and S. Ishizuka, "Stochastic resonance in the hippocampal CA3–CA1 model: a possible memory recall mechanism," Neural Netw., vol. 15, pp. 1171–1183, Dec. 2002.
[27] R. Sarpeshkar, "Analog versus digital: Extrapolating from electronics to neurobiology," Neural Comput., vol. 10, pp. 1601–1638, Oct. 1998.
[28] N. H. Mackworth, "Effects of heat on wireless telegraphy operators hearing and recording Morse messages," Br. J. Ind. Med., vol. 3, pp. 143–158, Jul. 1946.
?
Agnieszka Grabska-Barwinska
Gatsby Computational Neuroscience Unit
UCL
[email protected]
Jeff Beck
Duke University
[email protected]
Peter E. Latham
Gatsby Computational Neuroscience Unit
UCL
[email protected]
Alexandre Pouget
University of Geneva
[email protected]
Abstract
The olfactory system faces a difficult inference problem: it has to determine what
odors are present based on the distributed activation of its receptor neurons. Here
we derive neural implementations of two approximate inference algorithms that
could be used by the brain. One is a variational algorithm (which builds on the
work of Beck et al., 2012); the other is based on sampling. Importantly, we use a more realistic prior distribution over odors than has been used in the past: we use a 'spike and slab' prior, for which most odors have zero concentration. After mapping the two algorithms onto neural dynamics, we find that both can infer
correct odors in less than 100 ms. Thus, at the behavioral level, the two algorithms make very similar predictions. However, they make different assumptions
about connectivity and neural computations, and make different predictions about
neural activity. Thus, they should be distinguishable experimentally. If so, that
would provide insight into the mechanisms employed by the olfactory system,
and, because the two algorithms use very different coding strategies, that would
also provide insight into how networks represent probabilities.
1 Introduction
The problem faced by the sensory system is to infer the underlying causes of a set of input spike
trains. For the olfactory system, the input spikes come from a few hundred different types of olfactory receptor neurons, and the problem is to infer which odors caused them. As there are more than
10,000 possible odors, and more than one can be present at a time, the search space for mixtures of
odors is combinatorially large. Nevertheless, olfactory processing is fast: organisms can typically
determine what odors are present in a few hundred ms.
Here we ask how organisms could do this. Since our focus is on inference, not learning, we assume
that the olfactory system has learned both the statistics of odors in the world and the mapping
from those odors to olfactory receptor neuron activity. We then choose a particular model for both,
and compute, via Bayes rule, the full posterior distribution. This distribution is, however, highly
complex: it tells us, for example, the probability of coffee at a concentration of 14 parts per million
(ppm), and no bacon, and a rose at 27 ppm, and acetone at 3 ppm, and no apples and so on, where
the 'so on' is a list of thousands more odors. It is unlikely that such detailed information is useful
to an organism. It is far more likely that organisms are interested in marginal probabilities, such
as whether or not coffee is present independent of all the other odors. Unfortunately, even though
we can write down the full posterior, calculation of marginal probabilities is intractable due to the
sum over all possible combinations of odors: the number of terms in the sum is exponential in the
number of odors.
We must, therefore, consider approximate algorithms. Here we consider two: a variational approximation, which naturally generates approximate posterior marginals, and sampling from the posterior,
which directly gives us the marginals. Our main goal is to determine which, if either, is capable of
performing inference on ecologically relevant timescales using biologically plausible circuits. We
begin by introducing a generative model for spikes in a population of olfactory receptor neurons. We
then describe the variational and sampling inference schemes. Both descriptions lead very naturally
to network equations. We simulate those equations, and find that both the variational and sampling
approaches work well, and require less than 100 ms to converge to a reasonable solution. Therefore,
from the point of view of speed and accuracy (things that can be measured from behavioral experiments) it is not possible to rule out either of them. However, they do make different predictions
about activity, and so it should be possible to tell them apart from electrophysiological experiments.
They also make different predictions about the neural representation of probability distributions. If
one or the other could be corroborated experimentally, that would provide valuable insight into how
the brain (or at least one part of the brain) codes for probabilities [1].
2 The generative model for olfaction
The generative model consists of a probabilistic mapping from odors (which for us are a high level
percepts, such as coffee or bacon, each of which consists of a mixture of many different chemicals) to
odorant receptor neurons, and a prior over the presence or absence of odors and their concentrations.
It is known that each odor, by itself, activates a different subset of the olfactory receptor neurons;
typically on the order of 10%-30% [2]. Here we assume, for simplicity, that activation is linear, for
which the activity of odorant receptor neuron i, denoted ri is linearly related to the concentrations,
cj of the various odors which are present in a given olfactory scene, plus some background rate, r0 .
Assuming Poisson noise, the response distribution has the form
ri
P
P
Y r0 + j wij cj
P (r|c) =
(2.1)
e? r0 + j wij cj .
ri !
i
P
In a nutshell, ri is Poisson with mean r0 + j wij cj .
In contrast to previous work [3], which used a smooth prior on the concentrations, here we use
a spike and slab prior. With this prior, there is a finite probability that the concentration of any
particular odor is zero. This prior is much more realistic than a smooth one, as it allows only a
small number of odors (out of ?10,000) to be present in any given olfactory scene. It is modeled by
introducing a binary variable, sj , which is 1 if odor j is present and 0 otherwise. For simplicity we
assume that odors are independent and statistically homogeneous,
Y
P (c|s) =
(1 ? sj )?(cj ) + sj ?(cj |?1 , ?1 )
(2.2a)
j
P (s) =
Y
? sj (1 ? ?)1?sj
(2.2b)
j
where ?(c) is the Dirac delta function and ?(c|?, ?) is the Gamma
distribution: ?(c|?, ?) =
R?
? ? c??1 e??c /?(?) with ?(?) the ordinary Gamma function, ?(?) = 0 dx x??1 e?x .
3
3.1
Inference
Variational inference
Because of the delta-function in the prior, performing efficient variational inference in our model is
difficult. Therefore, we smooth the delta-function, and replace it with a Gamma distribution. With
this manipulation, the approximate (with respect to the true model, Eq. (2.2a)) prior on c is
Y
Pvar (c|s) =
(1 ? sj )?(cj |?0 , ?0 ) + sj ?(cj |?1 , ?1 ) .
(3.1)
j
2
The approximate prior allows absent odors to have nonzero concentration. We can partially compensate for that by setting the background firing rate, r_0, to zero, and choosing α_0 and β_0 such that the effective background firing rate (due to the small concentration when s_j = 0) is equal to r_0; see
Sec. 4.
As is typical in variational inference, we use a factorized approximate distribution. This distribution,
denoted Q(c, s|r), was set to Q(c|s, r) Q(s|r), where

Q(\mathbf{c}\,|\,\mathbf{s}, \mathbf{r}) = \prod_j \left[(1 - s_j)\,\Gamma(c_j|\alpha_{0j}, \beta_{0j}) + s_j\,\Gamma(c_j|\alpha_{1j}, \beta_{1j})\right]   (3.2a)

Q(\mathbf{s}\,|\,\mathbf{r}) = \prod_j \lambda_j^{s_j} (1 - \lambda_j)^{1 - s_j}.   (3.2b)
Introducing auxiliary variables, as described in Supplementary Material, and minimizing the
Kullback-Leibler distance between Q and the true posterior augmented by the auxiliary variables
leads to a set of nonlinear equations for the parameters of Q. To simplify those equations, we set α_1 to α_0 + 1, resulting in

\alpha_{0j} = \alpha_0 + \sum_i \frac{r_i w_{ij} F_j(\lambda_j, \alpha_{0j})}{\sum_k w_{ik} F_k(\lambda_k, \alpha_{0k})}   (3.3a)

L_j \equiv \log\frac{\lambda_j}{1 - \lambda_j} = L_{0j} + \log(\beta_{0j}/\beta_0) + \alpha_{0j}\log(\beta_{0j}/\beta_{1j})   (3.3b)

where

L_{0j} \equiv \log\frac{\pi}{1 - \pi} + \alpha_0 \log(\beta_0/\beta_1) + \log(\beta_1/\beta_{1j})   (3.3c)

F_j(\lambda, \alpha) \equiv \exp\left[(1 - \lambda)(\psi(\alpha) - \log\beta_{0j}) + \lambda(\psi(\alpha + 1) - \log\beta_{1j})\right]   (3.3d)

and \psi(\alpha) \equiv d\log\Gamma(\alpha)/d\alpha is the digamma function. The remaining two parameters, \beta_{0j} and \beta_{1j}, are fixed by our choice of weights and priors: \beta_{0j} = \beta_0 + \sum_i w_{ij} and \beta_{1j} = \beta_1 + \sum_i w_{ij}.
To solve Eqs. (3.3a-b) in a way that mimics the kinds of operations that could be performed by
neuronal circuitry, we write down a set of differential equations that have fixed points satisfying
Eq. (3.3),
\tau_\rho\,\frac{d\rho_i}{dt} = r_i - \rho_i \sum_j w_{ij} F_j(\lambda_j, \alpha_{0j})   (3.4a)

\tau_\alpha\,\frac{d\alpha_{0j}}{dt} = \alpha_0 + F_j(\lambda_j, \alpha_{0j}) \sum_i \rho_i w_{ij} - \alpha_{0j}   (3.4b)

\tau_L\,\frac{dL_j}{dt} = L_{0j} + \log(\beta_{0j}/\beta_0) + \alpha_{0j}\log(\beta_{0j}/\beta_{1j}) - L_j   (3.4c)

Note that we have introduced an additional variable, ρ_i. This variable is proportional to r_i, but modulated by divisive inhibition: the fixed point of Eq. (3.4a) is

\rho_i = \frac{r_i}{\sum_k w_{ik} F_k(\lambda_k, \alpha_{0k})}.   (3.5)
Close scrutiny of Eqs. (3.4) and (3.3d) might raise some concerns: (i) λ and α are reciprocally and symmetrically connected; (ii) there are multiplicative interactions between F_j(λ_j, α_{0j}) and ρ; and (iii) the neurons need to compute nontrivial nonlinearities, such as the logarithm, the exponential, and a mixture of digamma functions. However: (i) reciprocal and symmetric connectivity exists in the early olfactory processing system [4, 5, 6]; (ii) although multiplicative interactions are in general not easy for neurons, divisive normalization (Eq. (3.5)) has been observed in the olfactory bulb [7]; and (iii) the nonlinearities in our algorithms are not extreme (the logarithm is defined only on the positive range (α_{0j} > α_0, Eq. (3.3a)), and F_j(λ, α) is a soft-thresholded linear function;
see Fig. S1). Nevertheless, a realistic model would have to approximate Eqs. (3.4a-c), and thus
degrade slightly the quality of the inference.
3.2 Sampling
The second approximate algorithm we consider is sampling. To sample efficiently from our model,
we introduce a new set of variables, c̃_j,

c_j = \tilde{c}_j s_j.   (3.6)

When written in terms of c̃_j rather than c_j, the likelihood becomes

P(\mathbf{r}\,|\,\tilde{\mathbf{c}}, \mathbf{s}) = \prod_i \frac{\left(r_0 + \sum_j w_{ij} \tilde{c}_j s_j\right)^{r_i}}{r_i!}\, e^{-\left(r_0 + \sum_j w_{ij} \tilde{c}_j s_j\right)}.   (3.7)
Because the value of c̃_j is unconstrained when s_j = 0, we have complete freedom in choosing P(c̃_j|s_j = 0), the piece of the prior corresponding to the absence of odor j. It is convenient to set it to the same prior we use when s_j = 1, which is Γ(c̃_j|α_1, β_1). With this choice, c̃ is independent of s, and the prior over c̃ is simply

P(\tilde{\mathbf{c}}) = \prod_j \Gamma(\tilde{c}_j|\alpha_1, \beta_1).   (3.8)
The prior over s, Eq. (2.2b), remains the same. Note that this set of manipulations does not change
the model: the likelihood doesn't change, since by definition c̃_j s_j = c_j; when s_j = 1, c̃_j is drawn from the correct prior; and when s_j = 0, c̃_j does not appear in the likelihood.
To sample from this distribution we use Langevin sampling on c and Gibbs sampling on s. The
former is standard,
\tau_c\,\frac{d\tilde{c}_j}{dt} = \frac{\partial \log P(\tilde{\mathbf{c}}, \mathbf{s}|\mathbf{r})}{\partial \tilde{c}_j} + \xi(t) = \frac{\alpha_1 - 1}{\tilde{c}_j} - \beta_1 + s_j \sum_i w_{ij} \left[\frac{r_i}{r_0 + \sum_k w_{ik} \tilde{c}_k s_k} - 1\right] + \xi(t)   (3.9)

where ξ(t) is delta-correlated white noise with variance 2τ_c: ⟨ξ_j(t) ξ_{j'}(t')⟩ = 2τ_c δ(t − t') δ_{jj'}.
Because the ultimate goal is to implement this algorithm in networks of neurons, we need a Gibbs
sampler that runs asynchronously and in real time. This can be done by discretizing time into steps
of length dt, and computing the update probability for each odor on each time step. This is a valid
Gibbs sampler only in the limit dt → 0, where no more than one odor can be updated per time step; that's the limit of interest here. The update rule is

T(s'_j\,|\,\tilde{\mathbf{c}}, \mathbf{s}, \mathbf{r}) = \nu_0\,dt\,P(s'_j|\tilde{\mathbf{c}}, \mathbf{s}, \mathbf{r}) + (1 - \nu_0\,dt)\,\delta(s'_j - s_j)   (3.10)

where s'_j ≡ s_j(t + dt), s and c̃ should be evaluated at time t, and δ(s) is the Kronecker delta: δ(s) = 1 if s = 0 and 0 otherwise. As is straightforward to show, this update rule has the correct equilibrium distribution in the limit dt → 0 (see Supplementary Material).
Computing P(s'_j = 1|c̃, s, r) is straightforward, and we find that

P(s'_j = 1\,|\,\tilde{\mathbf{c}}, \mathbf{s}, \mathbf{r}) = \frac{1}{1 + \exp[-\phi_j]}   (3.11)

\phi_j = \log\frac{\pi}{1 - \pi} + \sum_i r_i \log\left[\frac{r_0 + \sum_{k \neq j} w_{ik} \tilde{c}_k s_k + w_{ij} \tilde{c}_j}{r_0 + \sum_{k \neq j} w_{ik} \tilde{c}_k s_k}\right] - \tilde{c}_j \sum_i w_{ij}.
Equations (3.9) and (3.11) raise almost exactly the same concerns that we saw for the variational approach: (i) c̃ and s are reciprocally and symmetrically connected; (ii) there are multiplicative interactions between c̃ and s; and (iii) the neurons need to compute nontrivial nonlinearities, such as the logarithm and divisive normalization. Additionally, the noise in the Langevin sampler (ξ in Eq. 3.9) has to be white and have exactly the right variance. Thus, as with the variational approach, we expect a biophysical model to introduce approximations and, therefore, as with the variational algorithm, to degrade slightly the quality of the inference.
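The sampling scheme admits an equally direct transcription. The sketch below, again with our own naming and assuming the reconstructed Eqs. (3.9)-(3.11), performs one Euler-Maruyama step of the Langevin dynamics followed by one asynchronous pass of the Gibbs updates; the small floor on c̃ stands in for a reflecting boundary at zero, which the equations themselves do not specify.

```python
import numpy as np

def sampler_step(c, s, r, W, pars, rng, dt=0.01, tau_c=10.0, nu0=0.1):
    """One joint update of the Langevin (Eq. 3.9) and Gibbs (Eqs. 3.10-3.11) samplers."""
    alpha1, beta1, r0, pi = pars["alpha1"], pars["beta1"], pars["r0"], pars["pi"]
    rate = r0 + W @ (c * s)                        # per-receptor Poisson mean
    drift = (alpha1 - 1) / c - beta1 + s * (W.T @ (r / rate - 1))
    # Noise variance 2*tau_c (Eq. 3.9) gives per-step std sqrt(2*dt/tau_c).
    c = c + dt / tau_c * drift + np.sqrt(2 * dt / tau_c) * rng.standard_normal(c.size)
    c = np.maximum(c, 1e-6)                        # keep concentrations positive
    for j in rng.permutation(c.size):              # asynchronous Gibbs sweep
        if rng.random() < nu0 * dt:                # Eq. (3.10): update odor j
            base = r0 + W @ (c * s) - W[:, j] * c[j] * s[j]
            phi = (np.log(pi / (1 - pi))
                   + np.sum(r * np.log((base + W[:, j] * c[j]) / base))
                   - c[j] * W[:, j].sum())         # Eq. (3.11)
            s[j] = rng.random() < 1 / (1 + np.exp(-phi))
    return c, s
```

Per the Figure 2 caption, a natural initialization is c̃(0) = (α_1 − 1)/β_1, the mode of the prior.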
[Figure 1: plots of the prior densities over concentration (axis 0 to 100), showing δ(c) and the two Gamma densities.]
Figure 1: Priors over concentration. The true priors (the ones used to generate the data) are shown in red and magenta; these correspond to δ(c) and Γ(c|α_1, β_1), respectively. The variational prior in the absence of an odor, Γ(c|α_0, β_0) with α_0 = 0.5 and β_0 = 20, is shown in blue.
4 Simulations
To determine how fast and accurate these two algorithms are, we performed a set of simulations
using either Eq. (3.4) (variational inference) or Eqs. (3.9 - 3.11) (sampling). For both algorithms,
the odors were generated from the true prior, Eq. (2.2). We modeled a small olfactory system, with
40 olfactory receptor types (compared to approximately 350 in humans and 1000 in mice [8]). To
keep the ratio of identifiable odors to receptor types similar to the one in humans [8], we assumed
400 possible odors, with 3 odors expected to be present in the scene (π = 3/400). If an odor was present, its concentration was drawn from a Gamma distribution with α_1 = 1.5 and β_1 = 1/40.
The background spike count, r0 , was set to 1. The connectivity matrix was binary and random,
with a connection probability, pc (the probability that any particular element is 1), set to 0.1 [2]. All
network time constants (τ_ρ, τ_α, τ_L, τ_c and 1/ν_0, from Eqs. (3.4), (3.9) and (3.10)) were set to 10 ms.
The differential equations were solved using the Euler method with a time step of 0.01 ms. Because
we used α_1 = α_0 + 1, the choice α_1 = 1.5 forced α_0 to be 0.5. Our remaining parameter, β_0, was set to ensure that, for the variational algorithm, the absent odors (those with s_j = 0) contributed a background firing rate of r_0 on average. This average background rate is given by Σ_j ⟨w_{ij}⟩⟨c_j⟩ = p_c N_odors α_0/β_0. Setting this to r_0 yields β_0 = p_c N_odors α_0/r_0 = 0.1 × 400 × 0.5/1 = 20. The true
(Eq. (2.2)) and approximate (Eq. (3.1)) prior distributions over concentration are shown in Fig. 1.
Figure 2 shows how the inference process evolves over time for a typical set of odors and concentrations. The top panel shows concentration, with variational inference on the left (where we plot
the mean of the posterior distribution over concentration, (1 − λ_j) α_{0j}(t)/β_{0j}(t) + λ_j α_{1j}(t)/β_{1j}(t); see Eq. (3.2)) and sampling on the right (where we plot c̃_j, the output of our Langevin sampler; see
Eq. (3.9)) for a case with three odors present. The three colored lines correspond to the odors that
[Figure 2: top panels, inferred concentrations c(t) over 1 second for the variational algorithm (left) and sampling (right); bottom panels, log-probabilities of all 400 odors (left) and samples of s (right); time axes in seconds.]
Figure 2: Example run for the variational algorithm (left) and sampling (right); see text for details. In the bottom left panel the green, blue and red lines go to a probability of 1 (log probability of 0) within about 50 ms. In sampling, the initial value of the concentrations is set to the most likely value under the prior (c̃(0) = (α_1 − 1)/β_1). The dashed lines are the true concentrations.
were presented, with solid lines for the inferred concentrations and dashed lines for the true ones.
Black lines are the odors that were not present. At least in this example, both algorithms converge
rapidly to the true concentration.
In the bottom left panel of Fig. 2 we plot the log-probability that each of the odors is present, ?j (t).
The present odors quickly approach probabilities of 1; the absent odors all have probabilities below
10?4 within about 200 ms. The bottom right panel shows samples from sj for all the odors, with
dots denoting present odors (sj (t) = 1) and blanks absent odors (sj (t) = 0). Beyond about 500 ms,
the true odors (the colored lines at the bottom) are on continuously, and for the odors that were not
present, sj is still occasionally 1, but relatively rarely.
In Fig. 3 we show the time course of the probability of odors when between 1 and 5 odors were
presented. We show only the first 100 ms, to emphasize the initial time course. Again, variational
inference is on the left and sampling is on the right. The black lines are the average values of the
probability of the correct odors; the gray regions mark 25%–75% percentiles. Ideally, we would like
to compare these numbers to those expected from a true posterior. However, due to its intractability,
we must seek different means of comparison. Therefore, we plot the probability of the most likely
non-presented odor (red); the average probability of the non-presented odors (green), and the probability of guessing the correct odors via simple template matching (dashed; see Fig. 3 legend for
details).
Although odors are inferred relatively rapidly (they exceed template matching within 20 ms), there
were almost always false positives. Even with just one odor present, both algorithms consistently
report the existence of another odor (red). This problem diminishes with time if fewer odors are
presented than the expected three, but it persists for more complex mixtures. The false positives
are in fact consistent with human behavior: humans have difficulty correctly identifying more than one
odor in a mixture, with the most common problem being false positives [9].
Finally, because the two algorithms encode probabilities differently (see Discussion below), we also
look into the time courses of the neural activity. In Fig. 4, we show the log-probability, L (left),
and probability, λ (right), averaged across 400 scenes containing 3 odors (see Supplementary Fig. 2
for the other odor mixtures). The probability of absent odors drops from 3/400 ≈ e^{−5} (the prior) to e^{−12} (the final inferred probability). For the variational approach, this represents a drop
in activity of 7 log units, comparable to the increase of about 5 log units for the present odors
(whose probability is inferred to be near 1). For the sampling based approach, on the other hand,
this represents a very small drop in activity. Thus, for the variational algorithm the average activity
associated with the absent odors exhibits a large drop, whereas for the sampling based approach the
average activity associated with the absent odors starts small and stays small.
5 Discussion
We introduced two algorithms for inferring odors from the activity of the odorant receptor neurons.
One was a variational method; the other sampling based. We mapped both algorithms onto dynamical systems, and, assuming time constants of 10 ms (plausible for biophysically realistic networks),
tested the time course of the inference.
The two algorithms performed with striking similarity: they both inferred odors within about 100 ms
and they both had about the same accuracy. However, since the two methods encode probabilities
differently (linear vs logarithmic encoding), they can be differentiated at the level of neural activity.
As can be seen by examining Eqs. (3.4a) and (3.4c), for variational inference the log probability of
concentration and presence/absence are related to the dynamical variables via
log Q(cj ) ? ?1j log cj ? ?1j cj
(5.1a)
log Q(sj ) ? Lj sj
(5.1b)
where ? indicates equality within a constant. If we interpret ?0j and Lj as firing rates, then these
equations correspond to a linear probabilistic population code [10]: the log probability inferred by
the approximate algorithm is linear in firing rate, with a parameter-dependent offset (the term ??1j cj
in Eq. (5.1a)). For the sampling-based algorithm, on the other hand, activity generates samples from
the posterior; an average of those samples codes for the probability of an odor being present. Thus,
if the olfactory system uses variational inference, activity should code for log probability, whereas
if it uses sampling, activity should code for probability.
[Figure 3: panels of ⟨p(s = 1)⟩ versus time for scenes with 1 to 5 odors; variational on the left (0–100 ms), sampling on the right (0–1000 ms).]
Figure 3: Inference by networks during the initial 100 ms. Black: average value of the probability of correct odors; red: probability of the most likely non-presented odor; green: average probability of the non-presented odors. Shaded areas represent the 25th–75th percentile of values across 400 olfactory scenes. In the variational approach, values are often either 0 or 1, which makes it possible for the mean to land outside of the chosen percentile range; this happens whenever the odors are guessed correctly more than 75% of the time, in which case the 25th–75th percentile collapses to 1, or less than 25% of the time, in which case the 25th–75th percentile collapses to 0. The left panel shows variational inference, where we plot λ_j(t); the right one shows sampling, where we plot s_k(t) averaged over 20 repetitions of the algorithm (to avoid arbitrariness in decoding/smoothing/averaging the samples). Both methods exceed template matching within 20 ms (dashed line). (Template matching finds odors (the j's) that maximize the normalized dot product between the activity, r_i, and the weights, w_{ij}, associated with odor j; that is, it chooses the j's that maximize \sum_i r_i w_{ij} / (\sum_i r_i^2 \sum_i w_{ij}^2)^{1/2}. The number of odors chosen by template matching was set to the number of odors presented.) For more complex mixtures, sampling is slightly more efficient at inferring the presented odors. See Supplementary Material for the time course out to 1 second and for mixtures of up to 10 odors.
[Figure 4: average time course over 0–100 ms of the log-probability variable (left, variational) and the probability variable (right, sampling) for scenes with 3 odors.]
Figure 4: Average time course of log(p(s)) (left) and p(s) (right, same as in Fig. 3). For the variational algorithm, the activity of the neurons codes for log probability (relative to some background, to keep firing rates non-negative). For this algorithm, the drop in probability of the non-presented odors from about e^{−5} to e^{−12} corresponds to a large drop in firing rate. For the sampling based algorithm, activity codes for probability, and there is almost no drop in activity.
There are two ways to determine which. One is to note that for the variational algorithm there is
a large drop in the average activity of the neurons coding for the non-present odors (Fig. 4 and
Supplementary Figure 2). This drop could be detected with electrophysiology. The other focuses on
the present odors, and requires a comparison between the posterior probability inferred by an animal
and neural activity. The inferred probability can be measured by so-called ?opt-out? experiments
[11]; the latter by sticking an electrode into an animal?s head, which is by now standard.
The two algorithms also make different predictions about the activity coding for concentration. For
the variational approach, activity, α_{0j}, codes for the parameters of a probability distribution. Importantly, in the variational scheme the mean and variance of the distribution are tied: both are
proportional to activity. Sampling, on the other hand, can represent arbitrary concentration distributions. These two schemes could, therefore, be distinguished by separately manipulating average
concentration and uncertainty, for example by showing either very similar or very different odors.
Unfortunately, it is not clear where exactly one needs to stick the electrode to record the trace of the
olfactory inference. A good place to start would be the olfactory bulb, where odor representations
have been studied extensively [12, 13, 14]. For example, the dendro-dendritic connections observed
in this structure [4] are particularly well suited to meet the symmetry requirements on wij . We note
in passing that these connections have been the subject of many theoretical studies. Most, however,
considered single odors [15, 6, 16], for which one does not need a complicated inference process
An early notable exception to the two-odor standard was Zhaoping [17], who proposed a model
for serial analysis of complex mixtures, whereby higher cortical structures would actively adapt the
already recognized components and send a feedback signal to the lower structures. Exactly how her
network relates to our inference algorithms remains unclear. We should also point out that although
the olfactory bulb is a likely location for at least part of our two inference algorithms, both are
sufficiently complicated that they may need to be performed by higher cortical structures, such as
the anterior piriform cortex, [18, 19].
Future directions. We have made several unrealistic assumptions in this analysis. For instance,
the generative model was very simple: we assumed that concentrations added linearly, that weights
were binary (so that each odor activated a subset of the olfactory receptor neurons at a finite value,
and did not activate the rest at all), and that noise was Poisson. None of these are likely to be exactly
true. And we considered priors such that all odors were independent. This too is unlikely to be true ?
for instance, the set of odors one expects in a restaurant are very different than the ones one expects
in a toxic waste dump, consistent with the fact that responses in the olfactory bulb are modulated
by task-relevant behavior [20]. Taking these effects into account will require a more complicated,
almost certainly hierarchical, model. We have also focused solely on inference: we assumed that
the network knew perfectly both the mapping from odors to odorant receptor neurons and the priors.
In fact, both have to be learned. Finally, the neurons in our network had to implement relatively
complicated nonlinearities: logs, exponents, and digamma and quadratic functions, and neurons had
to be reciprocally connected. Building a network that can both exhibit the proper nonlinearities
(at least approximately) and learn the reciprocal weights is challenging. While these issues are
nontrivial, they do not appear to be insurmountable. We expect, therefore, that a more realistic
model will retain many of the features of the simple model we presented here.
References
[1] J. Fiser, P. Berkes, G. Orban, and M. Lengyel. Statistically optimal perception and learning: from behavior to neural representations. Trends Cogn. Sci. (Regul. Ed.), 14(3):119–130, Mar 2010.
[2] R. Vincis, O. Gschwend, K. Bhaukaurally, J. Beroud, and A. Carleton. Dense representation of natural odorants in the mouse olfactory bulb. Nat. Neurosci., 15(4):537–539, Apr 2012.
[3] Jeff Beck, Katherine Heller, and Alexandre Pouget. Complex inference in neural circuits with probabilistic population codes and topic models. In NIPS, 2012.
[4] W. Rall and G. M. Shepherd. Theoretical reconstruction of field potentials and dendrodendritic synaptic interactions in olfactory bulb. J. Neurophysiol., 31(6):884–915, Nov 1968.
[5] G. M. Shepherd, W. R. Chen, and C. A. Greer. The synaptic organization of the brain, volume 4, chapter Olfactory bulb, pages 165–216. Oxford University Press, Oxford, 2004.
[6] A. A. Koulakov and D. Rinberg. Sparse incomplete representations: a potential role of olfactory granule cells. Neuron, 72(1):124–136, Oct 2011.
[7] Shawn Olsen, Vikas Bhandawat, and Rachel Wilson. Divisive normalization in olfactory population codes. Neuron, 66(2):287–299, 2010.
[8] P. Mombaerts. Genes and ligands for odorant, vomeronasal and taste receptors. Nat. Rev. Neurosci., 5(4):263–278, Apr 2004.
[9] D. G. Laing and G. W. Francis. The capacity of humans to identify odors in mixtures. Physiol. Behav., 46(5):809–814, Nov 1989.
[10] W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nat. Neurosci., 9(11):1432–1438, Nov 2006.
[11] R. Kiani and M. N. Shadlen. Representation of confidence associated with a decision by neurons in the parietal cortex. Science, 324(5928):759–764, May 2009.
[12] G. Laurent, M. Stopfer, R. W. Friedrich, M. I. Rabinovich, A. Volkovskii, and H. D. Abarbanel. Odor encoding as an active, dynamical process: experiments, computation, and theory. Annu. Rev. Neurosci., 24:263–297, 2001.
[13] H. Spors and A. Grinvald. Spatio-temporal dynamics of odor representations in the mammalian olfactory bulb. Neuron, 34(2):301–315, Apr 2002.
[14] Kevin Cury and Naoshige Uchida. Robust odor coding via inhalation-coupled transient activity in the mammalian olfactory bulb. Neuron, 68(3):570–585, 2010.
[15] Z. Li and J. J. Hopfield. Modeling the olfactory bulb and its neural oscillatory processings. Biol. Cybern., 61(5):379–392, 1989.
[16] Y. Yu, T. S. McTavish, M. L. Hines, G. M. Shepherd, C. Valenti, and M. Migliore. Sparse distributed representation of odors in a large-scale olfactory bulb circuit. PLoS Comput. Biol., 9(3):e1003014, 2013.
[17] Z. Li. A model of olfactory adaptation and sensitivity enhancement in the olfactory bulb. Biol. Cybern., 62(4):349–361, 1990.
[18] Julie Chapuis and Donald Wilson. Bidirectional plasticity of cortical pattern recognition and behavioral sensory acuity. Nature Neuroscience, 15(1):155–161, 2012.
[19] Keiji Miura, Zachary Mainen, and Naoshige Uchida. Odor representations in olfactory cortex: distributed rate coding and decorrelated population activity. Neuron, 74(6):1087–1098, 2012.
[20] R. A. Fuentes, M. I. Aguilar, M. L. Aylwin, and P. E. Maldonado. Neuronal activity of mitral-tufted cells in awake rats during passive and active odorant stimulation. J. Neurophysiol., 100(1):422–430, Jul 2008.
simultaneously-recorded neural populations
Marius Pachitariu, Biljana Petreska, Maneesh Sahani
Gatsby Computational Neuroscience Unit
University College London, UK
{marius,biljana,maneesh}@gatsby.ucl.ac.uk
Abstract
Population neural recordings with long-range temporal structure are often best understood in terms of a common underlying low-dimensional dynamical process.
Advances in recording technology provide access to an ever-larger fraction of the
population, but the standard computational approaches available to identify the
collective dynamics scale poorly with the size of the dataset. We describe a new,
scalable approach to discovering low-dimensional dynamics that underlie simultaneously recorded spike trains from a neural population. We formulate the Recurrent Linear Model (RLM) by generalising the Kalman-filter-based likelihood
calculation for latent linear dynamical systems to incorporate a generalised-linear
observation process. We show that RLMs describe motor-cortical population data
better than either directly-coupled generalised-linear models or latent linear dynamical system models with generalised-linear observations. We also introduce
the cascaded generalised-linear model (CGLM) to capture low-dimensional instantaneous correlations in neural populations. The CGLM describes the cortical
recordings better than either Ising or Gaussian models and, like the RLM, can be
fit exactly and quickly. The CGLM can also be seen as a generalisation of a low-rank Gaussian model, in this case factor analysis. The computational tractability
of the RLM and CGLM allow both to scale to very high-dimensional neural data.
1 Introduction
Many essential neural computations are implemented by large populations of neurons working in
concert, and recent studies have sought both to monitor increasingly large groups of neurons [1, 2]
and to characterise their collective behaviour [3, 4]. In this paper we introduce a new computational
tool to model coordinated behaviour in very large neural data sets. While we explicitly discuss only
multi-electrode extracellular recordings, the same model can be readily used to characterise 2-photon
calcium-marker image data, EEG, fMRI or even large-scale biologically-faithful simulations.
Population neural data may be represented at each time point by a vector yt with as many dimensions as neurons, and as many indices t as time points in the experiment. For spiking neurons, yt
will have positive integer elements corresponding to the number of spikes fired by each neuron in
the time interval corresponding to the t-th bin. As others have before [5, 6], we assume that the
coordinated activity reflected in the measurement yt arises from a low-dimensional set of processes,
collected into a vector xt , which is not directly observed. However, unlike the previous studies,
we construct a recurrent model in which the hidden processes xt are driven directly and explicitly
by the measured neural signals y1 . . . yt?1 . This assumption simplifies the estimation process. We
assume for simplicity that xt evolves with linear dynamics and affects the future state of the neural
signal yt in a generalised-linear manner, although both assumptions may be relaxed. As in the latent
dynamical system, the resulting model enforces a 'bottleneck', whereby predictions of y_t based on y_1 . . . y_{t−1} must be carried by the low-dimensional x_t.
State prediction in the RLM is related to the Kalman filter [7] and we show in the next section a
formal equivalence between the likelihoods of the RLM and the latent dynamical model when observation noise is Gaussian distributed. However, spiking data is not well modelled as Gaussian,
and the generalisation of our approach to Poisson noise leads to a departure from the latent dynamical approach. Unlike latent linear models with conditionally Poisson observations, the parameters
of our model can be estimated efficiently and without approximation. We show that, perhaps in
consequence, the RLM can provide superior descriptions of neural population data.
2 From the Kalman filter to the recurrent linear model (RLM)
Consider a latent linear dynamical system (LDS) model with linear-Gaussian observations. Its
graphical model is shown in Fig. 1A. The latent process is parametrised by a dynamics matrix
A and innovations covariance Q that describe the evolution of the latent state xt :
P(x_t|x_{t-1}) = \mathcal{N}(x_t|Ax_{t-1}, Q),
where \mathcal{N}(x|\mu, \Sigma) represents a normal distribution on x with mean \mu and (co)variance \Sigma. For brevity,
we omit here and below the special case of the first time-step, in which x1 is drawn from a multivariate Gaussian. The output distribution is determined by an observation loading matrix C and a noise
covariance R often taken to be diagonal so that all covariance is modelled by the latent process:
P(y_t|x_t) = \mathcal{N}(y_t|Cx_t, R).
In the LDS, the joint likelihood of the observations {yt } can be written as the product:
P(y_1 \ldots y_T) = P(y_1) \prod_{t=2}^{T} P(y_t|y_1 \ldots y_{t-1})
and in the Gaussian case can be computed using the usual Kalman filter approach to find the conditional distribution at time t iteratively:
P(y_{t+1}|y_1 \ldots y_t) = \int dx_{t+1}\, P(y_{t+1}|x_{t+1})\, P(x_{t+1}|y_1 \ldots y_t)
 = \int dx_{t+1}\, \mathcal{N}(y_{t+1}|Cx_{t+1}, R)\, \mathcal{N}(x_{t+1}|A\hat{x}_t, V_{t+1})
 = \mathcal{N}(y_{t+1}|CA\hat{x}_t, CV_{t+1}C^\top + R),

where we have introduced the (filtered) state estimate \hat{x}_t = E[x_t|y_1 \ldots y_t] and (predictive) uncertainty V_{t+1} = E[(x_{t+1} - A\hat{x}_t)^2|y_1 \ldots y_t]. Both quantities are computed recursively using the Kalman gain K_t = V_t C^\top (C V_t C^\top + R)^{-1}, giving the following recursive recipe to calculate the conditional likelihood of y_{t+1}:

\hat{x}_t = A\hat{x}_{t-1} + K_t (y_t - \hat{y}_t)
V_{t+1} = A (I - K_t C) V_t A^\top + Q
\hat{y}_{t+1} = CA\hat{x}_t
P(y_{t+1}|y_1 \ldots y_t) = \mathcal{N}(y_{t+1}|\hat{y}_{t+1}, CV_{t+1}C^\top + R)
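This recursion is easy to transcribe. The sketch below (our own naming) evaluates the same conditional log-likelihood in the standard predict/update form of the Kalman filter, which is algebraically equivalent to the recipe above.

```python
import numpy as np
from scipy.stats import multivariate_normal

def lds_log_likelihood(Y, A, C, Q, R, x0, V0):
    """Kalman-filter evaluation of log P(y_1 ... y_T) for a Gaussian LDS.
    Y is (T, n); x0, V0 give the predictive mean/covariance for x_1."""
    x_pred, V_pred, ll = x0, V0, 0.0
    for y in Y:
        S = C @ V_pred @ C.T + R                 # output covariance
        ll += multivariate_normal.logpdf(y, mean=C @ x_pred, cov=S)
        K = V_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
        x_filt = x_pred + K @ (y - C @ x_pred)   # filtered state
        V_filt = V_pred - K @ C @ V_pred
        x_pred = A @ x_filt                      # one-step-ahead prediction
        V_pred = A @ V_filt @ A.T + Q
    return ll
```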
For the Gaussian LDS, the Kalman gain K_t and state uncertainty V_{t+1} (and thus the output covariance CV_{t+1}C^\top + R) depend on the model parameters (A, C, R, Q) and on the time step, although
as time grows they both converge to stationary values. Neither depends on the observations.
Thus, we might consider a relaxation of the Gaussian LDS model in which these matrices are taken
to be stationary from the outset, and are parametrised independently so that they are no longer
constrained to take on the 'correct' values as computed for Kalman inference. Let us call this
parametric form of the Kalman gain W and the parametric form of the output covariance S. Then
the conditional likelihood iteration becomes
\hat{x}_t = A\hat{x}_{t-1} + W(y_t - \hat{y}_t)
\hat{y}_{t+1} = CA\hat{x}_t
P(y_{t+1}|y_1 \ldots y_t) = \mathcal{N}(y_{t+1}|\hat{y}_{t+1}, S).
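Because W and S are now free parameters rather than filter-derived quantities, the RLM's likelihood needs no covariance recursion at all; a minimal sketch (our naming, with a fixed initial state supplied by the caller):

```python
import numpy as np
from scipy.stats import multivariate_normal

def rlm_log_likelihood(Y, A, C, W, S, x0):
    """Log-likelihood of a Gaussian RLM: x_t = A x_{t-1} + W (y_t - yhat_t)."""
    x, ll = x0, 0.0
    for y in Y:
        y_hat = C @ A @ x                        # predicted output for this bin
        ll += multivariate_normal.logpdf(y, mean=y_hat, cov=S)
        x = A @ x + W @ (y - y_hat)              # deterministic state update
    return ll
```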
[Figure 1: graphical models. Panel A: the LDS with latent chain x_1, x_2, ..., x_T (transitions A) and emissions y_t (loadings C). Panel B: the LDS redrawn with explicit innovations feeding each state. Panel C: the RLM, in which the prediction CA x̂_t feeds y_{t+1} and the state is corrected through W.]
Figure 1: Graphical representations of the latent linear dynamical system (LDS: A, B) and recurrent linear model (RLM: C). Shaded variables are observed, unshaded circles are latent random variables, and squares are variables that depend deterministically on their parents. In B the LDS is redrawn in terms of the random innovations η_t = x_t − Ax_{t−1}, facilitating the transition towards the RLM. The RLM is then obtained by replacing η_t with a deterministically derived estimate W(y_t − ŷ_t).
The parameters of this new model are A, C, W and S. This is a relaxation of the Gaussian latent
LDS model because W has more degrees of freedom than Q, as does S than R (at least if R is
constrained to be diagonal). The new model has a recurrent linear structure in that the random
observation y_t is fed back linearly to perturb the otherwise deterministic evolution of the state x̂_t. A graphical representation of this model is shown in Fig. 1C, along with a redrawn graph of the LDS model. The RLM can be viewed as replacing the random innovation variables η_t = x_t − Ax_{t−1} with data-derived estimates W(y_t − ŷ_t); estimates which are made possible by the fact that η_t contributes to the variability of y_t around ŷ_t.
3 Recurrent linear models with Poisson observations
The discussion above has transformed a stochastic-latent LDS model with Gaussian output to an
RLM with deterministic latent, but still with Gaussian output. Our goal, however, is to fit a model
with an output distribution better suited to the binned point-processes that characterise neural spiking. Both linear Kalman-filtering steps above and the eventual stationarity of the inference parameters depend on the joint Gaussian structure of the assumed LDS model. They would not apply
if we were to begin a similar derivation from an LDS with Poisson output. However, a tractable
approach to modelling point-process data with low-dimensional temporal structure may be provided
by introducing a generalised-linear output stage directly to the RLM. This model is given by:
\hat{x}_t = A\hat{x}_{t-1} + W(y_t - \hat{y}_t)
g(\hat{y}_{t+1}) = CA\hat{x}_t
P(y_{t+1}|y_1 \ldots y_t) = \mathrm{ExpFam}(y_{t+1}|\hat{y}_{t+1})   (1)
where ExpFam is an exponential-family distribution such as Poisson, and the element-wise link
function $g$ allows for a nonlinear mapping from $x_t$ to the predicted mean $\hat{y}_{t+1}$. In the following, we will write $f$ for the inverse-link as is more common for neural models, so that $\hat{y}_{t+1} = f(CA\hat{x}_t)$.
The simplest Poisson-based generalised-linear RLM might take as its output distribution
$$P(y_t \mid \hat{y}_t) = \prod_i \mathrm{Poisson}(y_{ti} \mid \hat{y}_{ti}); \qquad \hat{y}_t = f(CA\hat{x}_{t-1})\,,$$
where $y_{ti}$ is the spike count of the $i$th cell in bin $t$ and the function $f$ is non-negative. However,
comparison with the output distribution derived for the Gaussian RLM suggests that this choice
would fail to capture the instantaneous covariance that the LDS formulation transfers to the output
distribution (and which appears in the low-rank structure of S above). We can address this concern
in two ways. One option is to bin the data more finely, thus diminishing the influence of the instantaneous covariance. The alternative is to replace the independent Poissons with a correlated output
distribution on spike counts. The cascaded generalised-linear model introduced below is a natural
choice, and we will show that it captures instantaneous correlations faithfully with very few hidden
dimensions.
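As a concrete sketch (ours, not the paper's; it assumes an exponential inverse-link and drops the constant log y! term), the log-likelihood of this independent-Poisson RLM is a single forward pass:

```python
import numpy as np

def poisson_rlm_loglik(y, A, C, W, f=np.exp):
    """sum_t log P(y_t | y_1..y_{t-1}) for the Poisson RLM, up to the
    data-dependent constant sum log(y_ti!).  y: (T, N) spike counts."""
    x = np.zeros(A.shape[0])                 # deterministic latent state
    ll = 0.0
    for y_t in y:
        rate = f(C @ A @ x)                  # predicted mean for this bin
        ll += np.sum(y_t * np.log(rate) - rate)
        x = A @ x + W @ (y_t - rate)         # prediction-error state update
    return ll
```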
In practice, we also sometimes add a fixed input $\mu_t$ to equation 1 that varies in time and determines the average behavior of the population or the peri-stimulus time histogram (PSTH):
$$\hat{y}_{t+1} = f(\mu_t + CAx_t) \qquad (2)$$
Note that the matrices A and C retain their interpretation from the LDS models. The matrix A
controls the evolution of the dynamical process xt . The phenomenology of its dynamics is determined by the complex eigenvalues of A. Eigenvalues with moduli close to 1 correspond to long
timescales of fluctuation around the PSTH. Eigenvalues with non-zero imaginary part correspond
to oscillatory components. Finally, the dynamics will be stable iff all the eigenvalues lie within the
unit disc. The matrix C describes the dependence of the high-dimensional neural signals on the lowdimensional latent processes xt . In particular, equation 2 determines the firing rate of the neurons.
This generalised-linear stage ensures that the firing rates are positive through the link function f, and
the observation process is Poisson. For other types of data, the generalised-linear stage might be
replaced by other appropriate link functions and output distributions.
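These diagnostics are mechanical to compute; a small sketch of ours (the bin width `dt` is an assumed parameter, used only to convert eigenvalue moduli into timescales):

```python
import numpy as np

def dynamics_summary(A, dt=0.01):
    """Read off stability, timescales and oscillations from eig(A)."""
    for lam in np.linalg.eigvals(A):
        r = abs(lam)
        tau = -dt / np.log(r) if 0.0 < r < 1.0 else float("inf")
        print(f"modulus {r:.3f}  stable={r < 1}  "
              f"timescale {tau:.3f}s  oscillatory={abs(lam.imag) > 1e-12}")
```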
3.1 Relationship to other models
RLMs are related to recurrent neural networks [8]. The differences lie in the state evolution, which in the neural network is nonlinear: $x_t = h(Ax_{t-1} + Wy_{t-1})$; and in the recurrent term, which depends on the observation rather than the prediction error. On the data considered here, we found that using sigmoidal or threshold-linear functions $h$ resulted in models comparable in likelihood to the RLM, and so we restricted our attention to simple linear dynamics. We also found that using the prediction error term $W(y_{t-1} - \hat{y}_t)$ resulted in better models than the simple neural-net formulation, and we attribute this difference to the link between the RLM and Kalman inference.
It is also possible to work within the stochastic latent LDS framework, replacing the Gaussian output distribution with a generalised-linear Poisson output (e.g. [6]). The main difficulty here is the
intractability of the estimation procedure. For an unobserved latent process xt , an inference procedure needs to be devised to estimate the posterior distribution on the entire sequence x1 . . . xt .
For linear-Gaussian observations, this inference is tractable and is provided by Kalman smoothing.
However, with generalised-linear observations, inference becomes intractable and the necessary approximations [6] are computationally intense and can jeopardize the quality of the fitted models. By
contrast, in the RLM xt is a deterministic function of data. In effect, the Kalman filter has been built
into the model as the accurate estimation procedure, and efficient fitting is possible by direct gradient
ascent on the log-likelihood. Empirically we did not encounter difficulties with local minima during
optimization, as has been reported for LDS models fit by approximate EM [9]. Multiple restarts
from different random values of the parameters always led to models with similar likelihoods.
Note that to estimate the matrices A and W the gradient must be backpropagated through successive iterations of equation 1. This technique, known as backpropagation-through-time, was first
described by [10] as a technique to fit recurrent neural network models. Recent implementations
have demonstrated state-of-the-art language models [11]. Backpropagation-through-time is thought
to be inherently unstable when propagated past many timesteps and often the gradient is truncated
prematurely [11]. We found that using large values of momentum in the gradient ascent alleviated
these instabilities and allowed us to use backpropagation without the truncation.
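Schematically, the fitting loop described here is plain gradient ascent with heavy momentum on the BPTT gradient. The sketch below is ours; `grad_loglik` stands for a hypothetical routine that backpropagates through every iteration of equation 1 (e.g. as supplied by an autograd library):

```python
def fit_with_momentum(params, grad_loglik, lr=1e-3, momentum=0.95, steps=10000):
    """Gradient ascent on the RLM log-likelihood. Large momentum is used in
    place of gradient truncation to stabilise backpropagation-through-time."""
    velocity = {k: v * 0.0 for k, v in params.items()}
    for _ in range(steps):
        grads = grad_loglik(params)          # full, un-truncated BPTT gradient
        for k in params:
            velocity[k] = momentum * velocity[k] + lr * grads[k]
            params[k] = params[k] + velocity[k]
    return params
```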
4 The cascaded generalised-linear model (CGLM)
The link between the RLM and the LDS raises the possibility that a model for simultaneously-recorded correlated spike counts might be derived in a similar way, starting from a non-dynamical, but low-dimensional, Gaussian model. Stationary models of population activity have attracted recent interest for their own sake (e.g. [1]), and would also provide a way to model correlations introduced by common innovations that were neglected by the simple Poisson form of the RLM. Thus, we
consider vectors y of spike counts from N neurons, without explicit reference to the time at which
they were collected. A Gaussian model for y can certainly describe correlations between the cells,
but is ill-matched to discrete count observations. Thus, as with the derivation of the RLM from the
Kalman filter, we derive here a new generalisation of a low-dimensional, structured Gaussian model
to spike count data.
The distribution of any multivariate variable $y$ can be factorized into a "cascaded" product of multiple one-dimensional distributions:
$$P(y) = \prod_{n=1}^{N} P(y_n \mid y_{<n})\,.$$
Here $n$ indexes the neurons up to the last neuron $N$, and $y_{<n}$ is the $(n-1)$-vector $[y_1 \ldots y_{n-1}]$. For
a Gaussian-distributed y, the conditionals P (yn |y<n ) would be linear-Gaussian. Thus, we propose
the "cascaded generalised linear model" (CGLM) in which each such one-dimensional conditional
distribution is a generalised-linear model:
$$\hat{y}_n = f\big(\mu_n + S_n^\top y_{<n}\big) \qquad (3)$$
$$P(y_n \mid y_{<n}) = \mathrm{ExpFam}(\hat{y}_n) \qquad (4)$$
and in which the linear weights $S_n$ take on a structured form developed below.
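Sampling from the cascade is sequential by construction. Here is a minimal sketch of ours with a Poisson ExpFam and exponential f (both assumptions), where S holds the columns $S_n$ in the strictly upper triangular form developed below:

```python
import numpy as np

def sample_cglm(mu, S, f=np.exp, rng=None):
    """Draw y ~ CGLM: y_n ~ Poisson(f(mu_n + S_n^T y_{<n})), in index order.
    S is N x N and strictly upper triangular, so S[:, n] only weights y_{<n}."""
    if rng is None:
        rng = np.random.default_rng()
    y = np.zeros(len(mu))
    for n in range(len(mu)):
        y[n] = rng.poisson(f(mu[n] + S[:, n] @ y))  # rows >= n of S[:, n] are 0
    return y
```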
The equations 3 and 4 subsume the Gaussian distribution with arbitrary covariance in the case that
f is linear, and the ExpFam conditionals are Gaussian. In this case, for a joint covariance of $\Sigma$, it is straightforward to derive the expression
$$S_n = -\frac{1}{\big((\Sigma_{\leq n, \leq n})^{-1}\big)_{n,n}}\,\big((\Sigma_{\leq n, \leq n})^{-1}\big)_{n,<n} \qquad (5)$$
where the subscripts $<n$ and $\leq n$ restrict the matrix to the first $(n-1)$ and $n$ rows and/or columns respectively. Thus, we might construct suitably structured linear weights for the CGLM by applying this result to the covariance matrix induced by the low-dimensional Gaussian model known as factor analysis [12]. Factor analysis assumes that data are generated from a $K$-dimensional latent process $x \sim \mathcal{N}(0, I)$, where $I$ is the $K \times K$ identity matrix, and $y$ has the conditional distribution $P(y \mid x) = \mathcal{N}(\Lambda x, \Psi)$ with $\Psi$ a diagonal matrix and $\Lambda$ an $N \times K$ loading matrix. This leads to a covariance of $y$ given by $\Sigma = \Psi + \Lambda\Lambda^\top$. If we repeat the derivation of equations 3, 4 and 5 for this covariance matrix, we obtain an expression for $S_n$ via the matrix inversion lemma:
$$S_n = -\frac{1}{\big((\Sigma_{\leq n, \leq n})^{-1}\big)_{n,n}}\,\Big(\big(\Psi_{\leq n, \leq n} + \Lambda_{\leq n, \cdot}\,\Lambda_{\leq n, \cdot}^\top\big)^{-1}\Big)_{n,<n}$$
$$\;\; = -\frac{1}{\big((\Sigma_{\leq n, \leq n})^{-1}\big)_{n,n}}\,\Big(\Psi^{-1} - \Psi^{-1}\Lambda_{\leq n, \cdot}\,(\star)\,\Lambda_{\leq n, \cdot}^\top\,\Psi^{-1}\Big)_{n,<n}$$
$$\;\; = \frac{1}{\big((\Sigma_{\leq n, \leq n})^{-1}\big)_{n,n}}\,\Big(\Psi^{-1}\Lambda_{\leq n, \cdot}\,(\star)\,\Lambda_{\leq n, \cdot}^\top\,\Psi^{-1}\Big)_{n,<n} \qquad (6)$$
where the omitted factor $(\star)$ is a $K \times K$ matrix. The first term in equation 6 vanishes because it involves only the off-diagonal entries of $\Psi$. The surviving factor shows that $S_n$ is formed by taking a linear combination of the columns of $\Psi^{-1}\Lambda$ and then truncating to the first $n - 1$ elements. Thus, if we arrange all $S_n$ as the upper columns of an $N \times N$ matrix $S$, we can write $S = \mathrm{upper}\big(zw^\top\big)$ for some low-dimensional matrices $z = \Psi^{-1}\Lambda$ and $w$, where the operation $\mathrm{upper}$ extracts the strictly upper triangular part of a matrix. This is the natural structure imposed on the cascaded conditionals by factor analysis. Thus, we adopt the same constraint on $S$ in the case of generalised-linear observations. The resulting CGLM is shown below to provide better fits to binarized neural data than standard Ising models (see the Results section), even with as few as three latent dimensions.
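A sketch of ours for building S from fitted factor-analysis parameters, applying equation 5 to $\Sigma = \Psi + \Lambda\Lambda^\top$ column by column (numerically equivalent to, though less efficient than, the closed form of equation 6):

```python
import numpy as np

def cglm_weights_from_fa(psi, Lam):
    """S[:, n] = S_n from equation 5 with Sigma = diag(psi) + Lam Lam^T.
    psi: (N,) diagonal of the FA noise; Lam: (N, K) loading matrix."""
    Sigma = np.diag(psi) + Lam @ Lam.T
    N = len(psi)
    S = np.zeros((N, N))
    for n in range(1, N):                        # column 0 has no predecessors
        P = np.linalg.inv(Sigma[: n + 1, : n + 1])  # precision of y_{<= n}
        S[:n, n] = -P[n, :n] / P[n, n]              # -(Prec_{n,<n}) / Prec_{n,n}
    return S                                     # strictly upper triangular
```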
Another useful property of the CGLM is that it allows stimulus-dependent inputs in equation 3. The
CGLM can also be used in combination with the generalised-linear RLM, with the CGLM replacing
the otherwise independent observation model. This approach can be useful when large bins are used
to discretize spike trains. In both cases the model can be estimated quickly with standard gradient
ascent techniques.
5 Alternative models

5.1 Alternative for temporal interactions: causally-coupled generalised linear model
One popular and simple model of simultaneously recorded neuronal populations [3] constructs temporal dependencies between units by directly coupling each neuron's probability of firing to the past spikes in the entire population:
$$y_t \sim \mathrm{Poisson}\Big(f\big(\mu_t + \sum_{i=1}^{N} B_i\,(h_i * y_t)\big)\Big)$$
Here, $h_i * y_t$ are convolutions of the spike trains with a set of basis functions $h_i$, and $B_i$ are pairwise interaction weights. Each matrix $B_i$ has $N^2$ parameters where $N$ is the number of neurons, so the number of parameters grows quadratically with the population size. This type of scaling makes the model prohibitive to use with very large-scale array recordings. Even with aggressive regularization techniques, the model's parameters are difficult to identify with limited amounts of data. Perhaps
more importantly, the model does not have a physical interpretation. Neurons recorded in cortex
are rarely directly-connected and retinal ganglion cells almost never directly connect to each other.
Instead, such directly-coupled GLMs are used to describe so-called "functional" interactions between
neurons [3]. We believe a much better interpretation for the correlations observed between pairs of
neurons is that they are caused by common inputs to these neurons which seem often to be confined
to a small number of dimensions. The models we propose here, the RLM and the CGLM, are aimed
at discovering such inputs.
5.2 Alternative for instantaneous interactions: the Ising model
Instantaneous interactions between binary data (as would be obtained by counting spikes in short
intervals) can be modelled in terms of their pairwise interactions [1] embodied in the Ising model:
$$P(y) = \frac{1}{Z}\, e^{\,y^\top J y}\,. \qquad (7)$$
where J is a pairwise interaction matrix and Z is the partition function, or the normalization constant
of the model. The model's attractiveness is that for a given covariance structure it makes the weakest
possible assumptions about the distribution of y, that is, like a Gaussian for continuous data, it
has the largest possible entropy under the covariance constraint. However, the Ising model and
the so-called functional interactions J have no physical interpretation when applied to neural data.
Furthermore, Ising models are difficult to fit as they require estimates of the gradients of the partition
function $Z$; they also suffer from the same quadratic scaling in the number of parameters as does the directly-coupled GLM. Ising models are even harder to estimate when stimulus-dependent inputs are added in equation 7, but for data collected in the retina or other sensory areas [1], much of the covariation in $y$ may be expected to arise from common stimulus input. Another shortcoming of the Ising model is that it can only model binarized data and cannot be normalized for integer $y$-s [6], so either the time bins need to be reduced to ensure no neuron fires more than one spike in a single bin or the spike counts must be capped at 1.
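For contrast, the Ising log-probability itself is a one-liner; the cost is the normaliser Z, which sums over all 2^N binary patterns. A brute-force sketch of ours, feasible only for small N:

```python
import numpy as np
from itertools import product
from scipy.special import logsumexp

def ising_logZ(J):
    """log Z for P(y) = exp(y^T J y) / Z over y in {0,1}^N (exponential in N)."""
    N = J.shape[0]
    return logsumexp([y @ J @ y
                      for y in (np.array(b, dtype=float)
                                for b in product((0, 1), repeat=N))])
```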
6 Results

6.1 Simulated data
We began by evaluating RLM models fit to simulated data where the true generative parameters were
known. Two aspects of the estimated models were of particular interest: the phenomenology of the
dynamics (captured by the eigenvalues of the dynamics matrix A) and the relationship between the
dynamical subspace and measured neural activity (captured by the output matrix C). We evaluated
the agreement between the estimated and generative output matrices by measuring the principal
angles between the corresponding subspaces. These report, in succession, the smallest angle achievable between a line in one subspace and a line in the second subspace, once all previous such vectors
of maximal agreement have been projected out. Exactly aligned $n$-dimensional subspaces have all $n$ principal angles equal to $0°$. Unrelated low-dimensional subspaces embedded in high dimensions are close to orthogonal and so have principal angles near $90°$.
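Principal angles are computed from the singular values of the product of orthonormal bases for the two subspaces; a sketch of ours:

```python
import numpy as np

def principal_angles_deg(U, V):
    """Principal angles (degrees) between the column spans of U and V."""
    Qu, _ = np.linalg.qr(U)
    Qv, _ = np.linalg.qr(V)
    s = np.linalg.svd(Qu.T @ Qv, compute_uv=False)
    return np.degrees(np.arccos(np.clip(s, -1.0, 1.0)))
```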
We first verified the robustness of maximisation of the generalised-linear RLM likelihood by fitting
models to simulated data generated by a known RLM. Fig. 2(a) shows eigenvalues from several simulated RLMs and the eigenvalues recovered by fitting parameters to simulated data. The agreement
is generally good. In particular, the qualitative aspects of the dynamics reflected in the absolute values and imaginary parts of the eigenvalues are well characterised. Fig. 2(d) shows that the RLM fits
[Figure 2 graphic: six panels. Top row, eigenvalue scatter plots (real vs. imaginary axes): (a) "RLM recovers eigenvalues of simulated dynamics" (ground truth vs. identified); (b) "RLM identifies the eigenvalues of diverse PLDS models" (generative PLDS vs. identified by PLDS and by RLM); (c) "RLM identifies the eigenvalues of a PLDS model fit to real data" (generative PLDS vs. identified by RLM). Bottom row, bar plots in degrees: (d) principal angles between ground truth (RLM) and subspaces identified by PCA, GLDS, RLM; (e) principal angles between true and identified subspaces for PCA, LDS, RLM, PLDS; (f) principal angles between PLDS fit to data and subspaces identified by PCA, GLDS, RLM.]

Figure 2: Experiments on 100-dimensional simulated data generated from a 5-dimensional latent process. Generating models were Poisson RLM (a, d), Poisson LDS with random parameters (b, e) and Poisson LDS model with parameters fit to neural data (c, f). The models fit were PCA, LDS with Gaussian (LDS/GLDS) or Poisson (PLDS) output, and RLM with Poisson output (RLM). In the upper plots, eigenvalues from different runs are shown in different colors.
also recover the subspace defined by the loading matrix C, and do so substantially more accurately
than either principal components analysis (PCA) or GLDS models. It is important to note that the
likelihoods of LDS models with Poisson observations are difficult to optimise, and so may yield
poor results even when fit to within-class data. In practice we did not observe local optima with the
RLM or CGLM.
We also asked whether the RLM could recover the dynamical properties and latent subspace of data
generated by a latent LDS model with Poisson observations. Fig. 2(b) shows that the dynamical
eigenvalues of the maximum-likelihood RLM are close to the eigenvalues of generative LDS dynamics, whilst Fig. 2(e) shows that the dynamical subspace is also correctly recovered. Parameters
for these simulations were chosen randomly. We then asked whether the quality of parameter identification extended to Poisson-output LDS models with realistic parameters, by generating data from
a Poisson-output LDS model that had been fit to a neural recording. As seen in figs. 2(c) and 2(f),
the RLM fits remain accurate in this regime, yielding better subspace estimates than either PCA or
a Gaussian LDS.
6.2 Array recorded data
We next compared the performance of the novel models on neural data. The RLM was compared
to the directed-coupled GLM (fit by gradient-based likelihood optimisation) as well as LDS models
with Gaussian or Poisson outputs (fit by EM, with a Laplace approximation E-step). The CGLM
was compared to the Ising model. We used a dataset of 92 neurons recorded with a Utah array
implanted in the premotor and motor cortices of a rhesus macaque monkey performing a delayed
center-out reach task. For all comparisons below we use datasets of 108 trials in which the monkey
made movements to the same target.
We discretized spike trains into time bins of 10ms. The directed-coupled GLM needed substantial
regularization in order to make good predictions on held-out test data. Figure 3(a) shows only
the best cross-validation result for the GLM, but results without regularization for models with
[Figure 3 graphic: (a) "Filtering prediction on test data": bar plot of MSE_baseline - MSE for GLM, S-CGLM, PLDS10, LDS10, LDS20, RLM10, RLM20 and RLM3+PSTH, with baseline = PSTH (low rank); (b) likelihood per spike minus baseline (bits) for the Ising model and CGLM models of rank 1 to 5.]
Figure 3: a. Predictive performance of various models on test data (higher is better). GLM-type
models are helped greatly by self-coupling filters (which the other models do not have). The best
model is an RLM with three latent dimensions and a low-rank model of the PSTH (see the supplementary material for more information about this model). Adding self-coupling filters to this model
further increases its predictive performance by 5 (not shown). b. The likelihood per spike of Ising
models as well as CGLM models with small numbers of hidden dimensions. The CGLM saturates
at three dimensions and performs better than Ising models.
low-dimensional parametrisation. Performance was measured by the causal mean-squared error in prediction subtracted from the error of a low-rank smoothed PSTH model (based on a singular-value decomposition of the matrix of all smoothed PSTHs). The number of dimensions (5) and the standard deviation of the Gaussian smoothing filter (20 ms) were cross-validated to find the best possible PSTH performance. Thus, our evaluation focuses on each model's ability to predict trial-to-trial co-variation in firing around the mean.
A second measure of performance for the RLM was obtained by studying probabilistic samples
obtained from the fitted model. Figure 4 in the supplemental material shows averaged noise crosscorrelograms obtained from a large set of samples. Note that the PSTHs have been subtracted from
each trial to reveal only the extra correlation structure that is not repeated amongst trials. Even with
few hidden dimensions, the model captures well the full temporal structure of the noise correlations.
In the case of the Ising model we binarized the data by replacing all spike counts larger than 1 with
1. The log-likelihood of the Ising model could only be estimated for small numbers of neurons, so
for comparison we took only the 30 most active neurons. The measure of performance reported in
figure 3(b) is the extra log-likelihood per spike obtained above that of a model that makes constant
predictions equal to the mean firing rate of each neuron. The CGLM model with only three hidden
dimensions achieves the best generalisation performance, surpassing the Ising model. Similar results
for the performance of the CGLM can be seen on the full dataset of 92 neurons with non-binarized
data, indicating that three latent dimensions suffice to describe the full space visited by the neuronal
population on a trial-by-trial basis.
7 Discussion
The generalised-linear RLM model, while sharing motivation with the latent LDS model, can be fit more efficiently and without approximation to non-Gaussian data. We have shown improved performance on both simulated data and on population recordings from the motor cortex of behaving monkeys. The model is easily extended to other output distributions (such as Bernoulli or negative binomial), to mixed continuous and discrete data, to nonlinear outputs, and to nonlinear dynamics. For the motor data considered here, the generalised-linear model performed as well as models with further non-linearities. However, preliminary results on data from sensory cortical areas suggest that nonlinear models may be of greater value in other settings.
8 Acknowledgments
We thank Krishna Shenoy and members of his lab for generously providing access to data. Funding
from the Gatsby Charitable Foundation and DARPA REPAIR N66001-10-C-2010.
References
[1] E. Schneidman, M. J. Berry, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440:1007-1012, 2005.
[2] G. Buzsaki. Large-scale recording of neuronal ensembles. Nature Neuroscience, 7(5):446-451, 2004.
[3] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995-999, 2008.
[4] M. M. Churchland, B. M. Yu, M. Sahani, and K. V. Shenoy. Techniques for extracting single-trial activity patterns from large-scale neural recordings. Current Opinion in Neurobiology, 17(5):609-618, 2007.
[5] B. M. Yu, A. Afshar, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Extracting dynamical structure embedded in neural activity. Advances in Neural Information Processing Systems, 18:1545-1552, 2006.
[6] J. H. Macke, L. Buesing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani. Empirical models of spiking in neural populations. Advances in Neural Information Processing Systems, 24:1350-1358, 2011.
[7] R. E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35-45, 1960.
[8] J. L. Elman. Finding structure in time. Cognitive Science, 14:179-211, 1990.
[9] L. Buesing, J. H. Macke, and M. Sahani. Spectral learning of linear dynamics from generalised-linear observations with application to neural population data. Advances in Neural Information Processing Systems, 25, 2012.
[10] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. MIT Press Computational Models of Cognition and Perception Series, pages 318-462, 1986.
[11] T. Mikolov, A. Deoras, S. Kombrink, L. Burget, and J. H. Cernocky. Empirical evaluation and combination of advanced language modeling techniques. Conference of the International Speech Communication Association, 2011.
[12] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
4,284 | 4,878 | Understanding Dropout
Peter Sadowski
Department of Computer Science
University of California, Irvine
Irvine, CA 92697
[email protected]
Pierre Baldi
Department of Computer Science
University of California, Irvine
Irvine, CA 92697
[email protected]
Abstract
Dropout is a relatively new algorithm for training neural networks which relies
on stochastically ?dropping out? neurons during training in order to avoid the
co-adaptation of feature detectors. We introduce a general formalism for studying dropout on either units or connections, with arbitrary probability values, and
use it to analyze the averaging and regularizing properties of dropout in both linear and non-linear networks. For deep neural networks, the averaging properties
of dropout are characterized by three recursive equations, including the approximation of expectations by normalized weighted geometric means. We provide
estimates and bounds for these approximations and corroborate the results with
simulations. Among other results, we also show how dropout performs stochastic
gradient descent on a regularized error function.
1 Introduction
Dropout is an algorithm for training neural networks that was described at NIPS 2012 [7]. In its
most simple form, during training, at each example presentation, feature detectors are deleted with
probability $q = 1 - p = 0.5$ and the remaining weights are trained by backpropagation. All weights
are shared across all example presentations. During prediction, the weights are divided by two.
The main motivation behind the algorithm is to prevent the co-adaptation of feature detectors, or
overfitting, by forcing neurons to be robust and rely on population behavior, rather than on the
activity of other specific units. In [7], dropout is reported to achieve state-of-the-art performance on
several benchmark datasets. It is also noted that for a single logistic unit dropout performs a kind of
"geometric averaging" over the ensemble of possible subnetworks, and conjectured that something
similar may occur also in multilayer networks leading to the view that dropout may be an economical
approximation to training and using a very large ensemble of networks.
In spite of the impressive results that have been reported, little is known about dropout from a
theoretical standpoint, in particular about its averaging, regularization, and convergence properties.
Likewise little is known about the importance of using q = 0.5, whether different values of q can
be used including different values for different layers or different units, and whether dropout can be
applied to the connections rather than the units. Here we address these questions.
2 Dropout in Linear Networks
It is instructive to first look at some of the properties of dropout in linear networks, since these can
be studied exactly in the most general setting of a multilayer feedforward network described by an
underlying acyclic graph. The activity in unit i of layer h can be expressed as:
$$S_i^h(I) = \sum_{l<h} \sum_j w_{ij}^{hl}\, S_j^l \qquad \text{with } S_j^0 = I_j \qquad (1)$$
where the variables w denote the weights and I the input vector. Dropout applied to the units can be
expressed in the form
$$S_i^h = \sum_{l<h} \sum_j w_{ij}^{hl}\,\delta_j^l S_j^l \qquad \text{with } S_j^0 = I_j \qquad (2)$$
where $\delta_j^l$ is a gating 0-1 Bernoulli variable, with $P(\delta_j^l = 1) = p_j^l$. Throughout this paper we assume that the variables $\delta_j^l$ are independent of each other, independent of the weights, and independent of the activity of the units. Similarly, dropout applied to the connections leads to the random variables
$$S_i^h = \sum_{l<h} \sum_j \delta_{ij}^{hl}\, w_{ij}^{hl} S_j^l \qquad \text{with } S_j^0 = I_j \qquad (3)$$
For brevity in the rest of this paper, we focus exclusively on dropout applied to the units, but all the
results remain true for the case of dropout applied to the connections with minor adjustments.
For a fixed input vector, the expectation of the activity of all the units, taken over all possible realizations of the gating variables hence all possible subnetworks, is given by:
$$E(S_i^h) = \sum_{l<h} \sum_j w_{ij}^{hl}\, p_j^l\, E(S_j^l) \qquad \text{for } h > 0 \qquad (4)$$
with $E(S_j^0) = I_j$ in the input layer. In short, the ensemble average can easily be computed by feedforward propagation in the original network, simply replacing the weights $w_{ij}^{hl}$ by $w_{ij}^{hl}\, p_j^l$.
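For the special case of a layered chain (no skip connections; an assumption of this sketch, since the text allows a general acyclic graph), this weight-rescaling forward pass is a few lines:

```python
import numpy as np

def ensemble_mean_linear(I, layers):
    """E(S^h) under unit dropout in a layered linear network.
    layers: list of (W, p) with W: (n_out, n_in), p: (n_in,) retain probs."""
    s = np.asarray(I, dtype=float)
    for W, p in layers:
        s = W @ (p * s)          # replaces w_ij by w_ij * p_j, as in Eq. 4
    return s
```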
3 Dropout in Neural Networks

3.1 Dropout in Shallow Neural Networks
Consider first a single logistic unit with $n$ inputs, $O = \sigma(S) = 1/(1 + ce^{-\lambda S})$ and $S = \sum_{j=1}^{n} w_j I_j$. To achieve the greatest level of generality, we assume that the unit produces different outputs $O_1, \ldots, O_m$, corresponding to different sums $S_1, \ldots, S_m$, with different probabilities $P_1, \ldots, P_m$ ($\sum_m P_m = 1$). In the most relevant case, these outputs and these sums are associated with the $m = 2^n$ possible subnetworks of the unit. The probabilities $P_1, \ldots, P_m$ could be generated, for instance, by using Bernoulli gating variables, although this is not necessary for this derivation. It is useful to define the following four quantities: the mean $E = \sum_i P_i O_i$; the mean of the complements $E' = \sum_i P_i (1 - O_i) = 1 - E$; the weighted geometric mean (WGM) $G = \prod_i O_i^{P_i}$; and the weighted geometric mean of the complements $G' = \prod_i (1 - O_i)^{P_i}$. We also define the normalized weighted geometric mean $NWGM = G/(G + G')$. We can now prove the key averaging theorem for logistic functions:
$$NWGM(O_1, \ldots, O_m) = \frac{1}{1 + ce^{-\lambda E(S)}} = \sigma(E(S)) \qquad (5)$$
To prove this result, we write
$$NWGM(O_1, \ldots, O_m) = \frac{1}{1 + \frac{\prod_i (1 - O_i)^{P_i}}{\prod_i O_i^{P_i}}} = \frac{1}{1 + \frac{\prod_i (1 - \sigma(S_i))^{P_i}}{\prod_i \sigma(S_i)^{P_i}}} \qquad (6)$$
The logistic function satisfies the identity $[1 - \sigma(x)]/\sigma(x) = ce^{-\lambda x}$, and thus
$$NWGM(O_1, \ldots, O_m) = \frac{1}{1 + \prod_i \big[ce^{-\lambda S_i}\big]^{P_i}} = \frac{1}{1 + ce^{-\lambda \sum_i P_i S_i}} = \sigma(E(S)) \qquad (7)$$
Thus in the case of Bernoulli gating variables, we can compute the $NWGM$ over all possible dropout configurations by simple forward propagation: $NWGM = \sigma(\sum_{j=1}^{n} w_j p_j I_j)$. A similar result is true also for normalized exponential transfer functions. Finally, one can also show that the only class of functions $f$ that satisfy $NWGM(f) = f(E)$ are the constant functions and the logistic functions [1].
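The theorem is easy to check numerically by enumerating all 2^n subnetworks; a sketch of ours for independent Bernoulli gates:

```python
import numpy as np
from itertools import product

def nwgm_check(w, I, p, lam=1.0, c=1.0):
    """Return (NWGM over all dropout masks, sigma(E(S))); the two agree."""
    sigma = lambda s: 1.0 / (1.0 + c * np.exp(-lam * s))
    logG = logGc = ES = 0.0
    for mask in product((0, 1), repeat=len(w)):
        m = np.array(mask)
        P = np.prod(np.where(m == 1, p, 1 - p))   # probability of this mask
        S = np.sum(w * m * I)
        logG += P * np.log(sigma(S))
        logGc += P * np.log(1.0 - sigma(S))
        ES += P * S
    G, Gc = np.exp(logG), np.exp(logGc)
    return G / (G + Gc), sigma(ES)
```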
3.2 Dropout in Deep Neural Networks
We can now deal with the most interesting case of deep feedforward networks of sigmoidal units¹, described by a set of equations of the form
$$O_i^h = \sigma(S_i^h) = \sigma\Big(\sum_{l<h} \sum_j w_{ij}^{hl}\, O_j^l\Big) \qquad \text{with } O_j^0 = I_j \qquad (8)$$
where $O_i^h$ is the output of unit $i$ in layer $h$. Dropout on the units can be described by
$$O_i^h = \sigma(S_i^h) = \sigma\Big(\sum_{l<h} \sum_j w_{ij}^{hl}\,\delta_j^l O_j^l\Big) \qquad \text{with } O_j^0 = I_j \qquad (9)$$
using the Bernoulli selector variables $\delta_j^l$. For each sigmoidal unit
$$NWGM(O_i^h) = \frac{\prod_{\mathcal{N}} (O_i^h)^{P(\mathcal{N})}}{\prod_{\mathcal{N}} (O_i^h)^{P(\mathcal{N})} + \prod_{\mathcal{N}} (1 - O_i^h)^{P(\mathcal{N})}} \qquad (10)$$
where $\mathcal{N}$ ranges over all possible subnetworks. Assume for now that the $NWGM$ provides a good approximation to the expectation (this point will be analyzed in the next section). Then the averaging properties of dropout are described by the following three recursive equations. First the approximation of means by NWGMs:
$$E(O_i^h) \approx NWGM(O_i^h) \qquad (11)$$
Second, using the result of the previous section, the propagation of expectation symbols:
$$NWGM(O_i^h) = \sigma\big(E(S_i^h)\big) \qquad (12)$$
And third, using the linearity of the expectation with respect to sums, and to products of independent random variables:
$$E(S_i^h) = \sum_{l<h} \sum_j w_{ij}^{hl}\, p_j^l\, E(O_j^l) \qquad (13)$$
Equations 11, 12, and 13 are the fundamental equations explaining the averaging properties of the
dropout procedure. The only approximation is of course Equation 11 which is analyzed in the next
section. If the network contains linear units, then Equation 11 is not necessary for those units and
their average can be computed exactly. In the case of regression with linear units in the top layers,
this allows one to shave off one layer of approximations. The same is true in binary classification
by requiring the output layer to compute directly the N W GM of the ensemble rather than the
expectation. It can be shown that for any error function that is convex up (?), the error of the mean,
weighted geometric mean, and normalized weighted geometric mean of an ensemble is always less
than the expected error of the models [1].
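Equations 11-13 are exactly the usual test-time recipe: one deterministic pass with the inputs to each layer scaled by their retain probabilities. A sketch of ours for a layered sigmoid network (the layered structure is an assumption; the equations hold for general acyclic graphs):

```python
import numpy as np

def nwgm_forward(I, weights, probs, lam=1.0, c=1.0):
    """Propagate E(O) ~ NWGM(O) through a layered sigmoid network.
    weights[h]: (n_h, n_{h-1}); probs[h]: retain probs of layer h's inputs."""
    O = np.asarray(I, dtype=float)
    for W, p in zip(weights, probs):
        S = W @ (p * O)                          # Eq. 13
        O = 1.0 / (1.0 + c * np.exp(-lam * S))   # Eqs. 11-12
    return O
```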
Equation 11 is exact if and only if the numbers $O_i^h$ are identical over all possible subnetworks $\mathcal{N}$. Thus it is useful to measure the consistency $C(O_i^h, I)$ of neuron $i$ in layer $h$ for input $I$ by using the variance $\mathrm{Var}\big(O_i^h(I)\big)$ taken over all subnetworks $\mathcal{N}$ and their distribution when the input $I$ is fixed. The larger the variance is, the less consistent the neuron is, and the worse we can expect the approximation in Equation 11 to be. Note that for a random variable $O$ in $[0,1]$ the variance cannot exceed $1/4$ anyway. This is because $\mathrm{Var}(O) = E(O^2) - (E(O))^2 \leq E(O) - (E(O))^2 = E(O)(1 - E(O)) \leq 1/4$. This measure can also be averaged over a training set or a test set.
¹Given the results of the previous sections, the network can also include linear units or normalized exponential units.
4 The Dropout Approximation
Given a set of numbers $O_1, \ldots, O_m$ between 0 and 1, with probabilities $P_1, \ldots, P_m$ (corresponding to the outputs of a sigmoidal neuron for a fixed input and different subnetworks), we are primarily interested in the approximation of $E$ by $NWGM$. The $NWGM$ provides a good approximation because we show below that to a first order of approximation: $E \approx NWGM$ and $E \approx G$. Furthermore, there are formulae in the literature for bounding the error $E - G$ in terms of the consistency (e.g. the Cartwright and Field inequality [6]). However, one can suspect that the $NWGM$ provides an even better approximation to $E$ than the geometric mean. For instance, if the numbers $O_i$ satisfy $0 < O_i \leq 0.5$ (consistently low), then
$$\frac{G}{G'} \leq \frac{E}{E'} \quad \text{and therefore} \quad G \leq \frac{G}{G + G'} \leq E \qquad (14)$$
This is proven by applying Jensen's inequality to the function $\ln x - \ln(1 - x)$ for $x \in (0, 0.5]$. It is also known as the Ky Fan inequality [2, 8, 9].
To get even better results, one must consider a second order approximation. For this, we write $O_i = 0.5 + \epsilon_i$ with $0 \leq |\epsilon_i| \leq 0.5$. Thus we have $E(O) = 0.5 + E(\epsilon)$ and $\mathrm{Var}(O) = \mathrm{Var}(\epsilon)$. Using a Taylor expansion:
$$G = \prod_i O_i^{P_i} = \frac{1}{2}\prod_i \sum_{n=0}^{\infty} \binom{P_i}{n}(2\epsilon_i)^n = \frac{1}{2}\Big[1 + \sum_i P_i\, 2\epsilon_i + \sum_i \frac{P_i(P_i - 1)}{2}(2\epsilon_i)^2 + \sum_{i<j} 4 P_i P_j \epsilon_i \epsilon_j + R_3(\epsilon_i)\Big] \qquad (15)$$
where $R_3(\epsilon_i)$ is the remainder and
$$R_3(\epsilon_i) = \binom{P_i}{3} \frac{(2\epsilon_i)^3}{(1 + u_i)^{3 - P_i}} \qquad (16)$$
where $|u_i| \leq 2|\epsilon_i|$. Expanding the product gives
$$G = \frac{1}{2} + \sum_i P_i \epsilon_i + \Big(\sum_i P_i \epsilon_i\Big)^2 - \sum_i P_i \epsilon_i^2 + R_3(\epsilon) = \frac{1}{2} + E(\epsilon) - \mathrm{Var}(\epsilon) + R_3(\epsilon) = E(O) - \mathrm{Var}(O) + R_3(\epsilon) \qquad (17)$$
By symmetry, we have
$$G' = \prod_i (1 - O_i)^{P_i} = 1 - E(O) - \mathrm{Var}(O) + R_3(\epsilon) \qquad (18)$$
where $R_3(\epsilon)$ is the higher order remainder. Neglecting the remainder and writing $E = E(O)$ and $V = \mathrm{Var}(O)$ we have
$$\frac{G}{G + G'} \approx \frac{E - V}{1 - 2V} \quad \text{and} \quad \frac{G'}{G + G'} \approx \frac{1 - E - V}{1 - 2V} \qquad (19)$$
Thus, to a second order, the differences between the mean and the geometric mean and the normalized geometric means satisfy
$$E - G \approx V \quad \text{and} \quad E - \frac{G}{G + G'} \approx \frac{V(1 - 2E)}{1 - 2V} \qquad (20)$$
and
$$(1 - E) - \frac{G'}{G + G'} \approx -\frac{V(1 - 2E)}{1 - 2V} \qquad (21)$$
Finally it is easy to check that the factor $(1 - 2E)/(1 - 2V)$ is always less or equal to 1. In addition we always have $V \leq E(1 - E)$, with equality achieved only for 0-1 Bernoulli variables. Thus $1 - E - G' \leq V$ and
$$\Big|E - \frac{G}{G + G'}\Big| = \Big|(1 - E) - \frac{G'}{G + G'}\Big| \leq \frac{V\,|1 - 2E|}{1 - 2V} \leq \frac{E(1 - E)\,|1 - 2E|}{1 - 2V} \leq 2E(1 - E)\,|1 - 2E| \qquad (22)$$
The first inequality is optimal in the sense that it is attained in the case of a Bernoulli variable
with expectation E and, intuitively, the second inequality shows that the approximation error is
always small, regardless of whether E is close to 0, 0.5, or 1. In short, the NWGM provides a
very good approximation to E, better than the geometric mean G. The property is always true to
a second order of approximation and it is exact when the activities are consistently low, or when
N W GM ? E, since the latter implies G ? N W GM ? E. Several additional properties of the
dropout approximation, including the extension to rectified linear units and other transfer functions,
are studied in [1].
5 Dropout Dynamics
Dropout performs gradient descent on-line with respect to both the training examples and the ensemble of all possible subnetworks. As such, and with the appropriately decreasing learning rates,
it is almost surely convergent like other forms of stochastic gradient descent [11, 4, 5]. To further
understand the properties of dropout, it is again instructive to look at the properties of the gradient
in the linear case.
5.1 Single Linear Unit
In the case of a single linear unit, consider the two error functions $E_{ENS}$ and $E_D$ associated with the ensemble of all possible subnetworks and the network with dropout. For a single input $I$, these are defined by:
$$E_{ENS} = \frac{1}{2}(t - O_{ENS})^2 = \frac{1}{2}\Big(t - \sum_{i=1}^{n} p_i w_i I_i\Big)^2 \qquad (23)$$
$$E_D = \frac{1}{2}(t - O_D)^2 = \frac{1}{2}\Big(t - \sum_{i=1}^{n} \delta_i w_i I_i\Big)^2 \qquad (24)$$
We use a single training input I for notational simplicity, otherwise the errors of each training
example can be combined additively. The learning gradient is given by
$$\frac{\partial E_{ENS}}{\partial w_i} = -(t - O_{ENS})\frac{\partial O_{ENS}}{\partial w_i} = -(t - O_{ENS})\, p_i I_i \qquad (25)$$
$$\frac{\partial E_D}{\partial w_i} = -(t - O_D)\frac{\partial O_D}{\partial w_i} = -(t - O_D)\,\delta_i I_i = -t\delta_i I_i + w_i \delta_i^2 I_i^2 + \sum_{j \neq i} w_j \delta_i \delta_j I_i I_j \qquad (26)$$
The dropout gradient is a random variable and we can take its expectation. A short calculation yields
$$E\Big(\frac{\partial E_D}{\partial w_i}\Big) = \frac{\partial E_{ENS}}{\partial w_i} + w_i\, p_i (1 - p_i)\, I_i^2 = \frac{\partial E_{ENS}}{\partial w_i} + w_i I_i^2\, \mathrm{Var}(\delta_i) \qquad (27)$$
Thus, remarkably, in this case the expectation of the gradient with dropout is the gradient of the
regularized ensemble error
$$E = E_{ENS} + \frac{1}{2} \sum_{i=1}^{n} w_i^2 I_i^2\, \mathrm{Var}(\delta_i) \qquad (28)$$
The regularization term is the usual weight decay or Gaussian prior term based on the square of the
weights to prevent overfitting. Dropout provides immediately the magnitude of the regularization
term which is adaptively scaled by the inputs and by the variance of the dropout variables. Note that
$p_i = 0.5$ is the value that provides the highest level of regularization.
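Equation 27 can be verified directly by simulation; a sketch of ours with made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 4, 1.3
w, I = rng.normal(size=n), rng.normal(size=n)
p = np.full(n, 0.5)

trials = 200_000
g = np.zeros(n)
for _ in range(trials):                       # Monte Carlo E[dE_D / dw]
    d = (rng.random(n) < p).astype(float)
    g += -(t - d @ (w * I)) * d * I
g /= trials

g_ens = -(t - p @ (w * I)) * p * I            # dE_ENS / dw
print(np.allclose(g, g_ens + w * I**2 * p * (1 - p), atol=1e-2))  # should print True
```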
5.2 Single Sigmoidal Unit
The previous result generalizes to a sigmoidal unit $O = \sigma(S) = 1/(1 + ce^{-\lambda S})$ trained to minimize the relative entropy error $E = -(t \log O + (1 - t) \log(1 - O))$. In this case,
$$\frac{\partial E_D}{\partial w_i} = -\lambda(t - O)\frac{\partial S}{\partial w_i} = -\lambda(t - O)\,\delta_i I_i \qquad (29)$$
The terms $O$ and $I_i$ are not independent, but using a Taylor expansion with the $NWGM$ approximation gives
$$E\Big(\frac{\partial E_D}{\partial w_i}\Big) \approx \frac{\partial E_{ENS}}{\partial w_i} + \lambda\,\sigma'(S_{ENS})\, w_i I_i^2\, \mathrm{Var}(\delta_i) \qquad (30)$$
with $S_{ENS} = \sum_j w_j p_j I_j$. Thus, as in the linear case, the expectation of the dropout gradient is approximately the gradient of the ensemble network regularized by weight decay terms with the proper adaptive coefficients. A similar analysis can be carried out also for a set of normalized exponential units and for deeper networks [1].
5.3 Learning Phases and Sparse Coding
During dropout learning, we can expect three learning phases: (1) At the beginning of learning, when
the weights are typically small and random, the total input to each unit is close to 0 for all the units
and the consistency is high: the output of the units remains roughly constant across subnetworks
(and equal to 0.5 with c = 1). (2) As learning progresses, activities tend to move towards 0 or 1
and the consistency decreases, i.e. for a given input the variance of the units across subnetworks
increases. (3) As the stochastic gradient learning procedure converges, the consistency of the units
converges to a stable value.
Finally, for simplicity, assume that dropout is applied only in layer $h$ where the units have an output of the form $O_i^h = \sigma(S_i^h)$ and $S_i^h = \sum_{l<h} \sum_j w_{ij}^{hl}\,\delta_j^l O_j^l$. For a fixed input, $O_j^l$ is a constant since dropout is not applied to layer $l$. Thus
$$\mathrm{Var}(S_i^h) = \sum_{l<h} \sum_j (w_{ij}^{hl})^2 (O_j^l)^2\, p_j^l (1 - p_j^l) \qquad (31)$$
under the usual assumption that the selector variables $\delta_j^l$ are independent of each other. Thus $\mathrm{Var}(S_i^h)$ depends on three factors. Everything else being equal, it is reduced by: (1) Small weights, which goes together with the regularizing effect of dropout; (2) Small activities, which shows that dropout is not symmetric with respect to small or large activities. Overall, dropout tends to favor small activities and thus sparse coding; and (3) Small (close to 0) or large (close to 1) values of the dropout probabilities $p_j^l$. Thus values $p_j^l = 0.5$ maximize the regularization effect but may also lead to slower convergence to the consistent state. Additional results and simulations are given in [1].
6 Simulation Results
We use Monte Carlo simulation to partially investigate the approximation framework embodied by
the three fundamental dropout equations 11, 12, and 13, the accuracy of the second-order approximation and bounds in Equations 20 and 22, and the dynamics of dropout learning. We experiment
with an MNIST classifier of four hidden layers (784-1200-1200-1200-1200-10) that replicates the
results in [7] using the Pylearn2 and Theano software libraries [12, 3]. The network is trained with
a dropout probability of 0.8 in the input, and 0.5 in the four hidden layers. For fixed weights and
a fixed input, 10,000 Monte Carlo simulations are used to estimate the distribution of activity $O$ in each neuron. Let $O^*$ be the activation under the deterministic setting with the weights scaled appropriately.
The left column of Figure 1 confirms empirically that the second-order approximation in Equation 20 and the bound in Equation 22 are accurate. The right column of Figure 1 shows the difference between the true ensemble average $E(O)$ and the prediction-time neuron activity $O^*$. This difference grows very slowly in the higher layers, and only for active neurons.
Figure 1: Left: The difference $E(O) - NWGM(O)$, its second-order approximation in Equation 20, and the bound from Equation 22, plotted for four hidden layers and a typical fixed input. Right: The difference between the true ensemble average $E(O)$ and the final neuron prediction $O^*$.
Next, we examine the neuron consistency during dropout training. Figure 2a shows the three phases
of learning for a typical neuron. In Figure 2b, we observe that the consistency does not decline in
higher layers of the network.
One clue into how this happens is the distribution of neuron activity. As noted in [10] and section 5
above, dropout training results in sparse activity in the hidden layers (Figure 3). This increases the
consistency of neurons in the next layer.
Figure 2: (a) The three phases of learning. For a particular input, a typical active neuron (red) starts out with low variance, experiences a large increase in variance during learning, and eventually settles to some steady constant value. In contrast, a typical inactive neuron (blue) quickly learns to stay silent. Shown are the mean with 5% and 95% percentiles. (b) Consistency does not noticeably decline in the upper layers. Shown here are the mean Std(O) for active neurons ($0.1 < O$ after training) in each layer, along with the 5% and 95% percentiles.
Figure 3: In every hidden layer of a dropout-trained network, the distribution of neuron activations $O^*$ is sparse and not symmetric. These histograms were totalled over a set of 100 random inputs.
References
[1] P. Baldi and P. Sadowski. The Dropout Learning Algorithm. Artificial Intelligence, 2014. In press.
[2] E. F. Beckenbach and R. Bellman. Inequalities. Springer-Verlag Berlin, 1965.
[3] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), Austin, TX, June 2010. Oral presentation.
[4] L. Bottou. Online algorithms and stochastic approximations. In D. Saad, editor, Online Learning and Neural Networks. Cambridge University Press, Cambridge, UK, 1998.
[5] L. Bottou. Stochastic learning. In O. Bousquet and U. von Luxburg, editors, Advanced Lectures on Machine Learning, Lecture Notes in Artificial Intelligence, LNAI 3176, pages 146-168. Springer Verlag, Berlin, 2004.
[6] D. Cartwright and M. Field. A refinement of the arithmetic mean-geometric mean inequality. Proceedings of the American Mathematical Society, pages 36-38, 1978.
[7] G. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. http://arxiv.org/abs/1207.0580, 2012.
[8] E. Neuman and J. Sándor. On the Ky Fan inequality and related inequalities I. Mathematical Inequalities and Applications, 5:49-56, 2002.
[9] E. Neuman and J. Sándor. On the Ky Fan inequality and related inequalities II. Bulletin of the Australian Mathematical Society, 72(1):87-108, 2005.
[10] S. Nitish. Improving Neural Networks with Dropout. PhD thesis, University of Toronto, Toronto, Canada, 2013.
[11] H. Robbins and D. Siegmund. A convergence theorem for non negative almost supermartingales and some applications. Optimizing Methods in Statistics, pages 233-257, 1971.
[12] D. Warde-Farley, I. Goodfellow, P. Lamblin, G. Desjardins, F. Bastien, and Y. Bengio. Pylearn2. 2011. http://deeplearning.net/software/pylearn2.
4,285 | 4,879 | Annealing Between Distributions by
Averaging Moments
Roger Grosse
Comp. Sci. & AI Lab
MIT
Cambridge, MA 02139
Chris J. Maddison
Dept. of Computer Science
University of Toronto
Toronto, ON M5S 3G4
Ruslan Salakhutdinov
Depts. of Statistics and Comp. Sci.,
University of Toronto
Toronto, ON M5S 3G4, Canada
Abstract
Many powerful Monte Carlo techniques for estimating partition functions, such
as annealed importance sampling (AIS), are based on sampling from a sequence
of intermediate distributions which interpolate between a tractable initial distribution and the intractable target distribution. The near-universal practice is to use
geometric averages of the initial and target distributions, but alternative paths can
perform substantially better. We present a novel sequence of intermediate distributions for exponential families defined by averaging the moments of the initial and
target distributions. We analyze the asymptotic performance of both the geometric and moment averages paths and derive an asymptotically optimal piecewise
linear schedule. AIS with moment averaging performs well empirically at estimating partition functions of restricted Boltzmann machines (RBMs), which form
the building blocks of many deep learning models.
1
Introduction
Many generative models are defined in terms of an unnormalized probability distribution, and computing the probability of a state requires computing the (usually intractable) partition function. This
is problematic for model selection, since one often wishes to compute the probability assigned to
held-out test data. While partition function estimation is intractable in general, there has been extensive research on variational [1, 2, 3] and sampling-based [4, 5, 6] approximations. In the context
of model comparison, annealed importance sampling (AIS) [4] is especially widely used because
given enough computing time, it can provide high-accuracy estimates. AIS has enabled precise
quantitative comparisons of powerful generative models in image statistics [7, 8] and deep learning
[9, 10, 11]. Unfortunately, applying AIS in practice can be computationally expensive and require
laborious hand-tuning of annealing schedules. Because of this, many generative models still have
not been quantitatively compared in terms of held-out likelihood.
AIS requires defining a sequence of intermediate distributions which interpolate between a tractable
initial distribution and the intractable target distribution. Typically, one uses geometric averages of
the initial and target distributions. Tantalizingly, [12] derived the optimal paths for some toy models in the context of path sampling and showed that they vastly outperformed geometric averages.
However, as choosing an optimal path is generally intractable, geometric averages still predominate.
In this paper, we present a theoretical framework for evaluating alternative paths. We propose a novel
path defined by averaging moments of the initial and target distributions. We show that geometric
averages and moment averages optimize different variational objectives, derive an asymptotically
optimal piecewise linear schedule, and analyze the asymptotic performance of both paths. Our
proposed path often outperforms geometric averages at estimating partition functions of restricted
Boltzmann machines (RBMs).
1
2
Estimating Partition Functions
Suppose we have a probability distribution p_b(x) = f_b(x)/Z_b defined on a space X, where f_b(x) can be computed efficiently for a given x ∈ X, and we are interested in estimating the partition function Z_b. Annealed importance sampling (AIS) is an algorithm which estimates Z_b by gradually changing, or 'annealing,' a distribution. In particular, one must specify a sequence of K + 1 intermediate distributions p_k(x) = f_k(x)/Z_k for k = 0, . . ., K, where p_a(x) = p_0(x) is a tractable initial distribution, and p_b(x) = p_K(x) is the intractable target distribution. For simplicity, assume all distributions are strictly positive on X. For each p_k, one must also specify an MCMC transition operator T_k (e.g. Gibbs sampling) which leaves p_k invariant. AIS alternates between MCMC transitions and importance sampling updates, as shown in Alg 1.
The output of AIS is an unbiased estimate Ẑ_b of Z_b. Remarkably, unbiasedness holds even in the context of non-equilibrium samples along the chain [4, 13]. However, unless the intermediate distributions and transition operators are carefully chosen, Ẑ_b may have high variance and be far from Z_b with high probability.

Algorithm 1 Annealed Importance Sampling
  for i = 1 to M do
    x_0 ← sample from p_0(x)
    w^(i) ← Z_a
    for k = 1 to K do
      w^(i) ← w^(i) · f_k(x_{k−1}) / f_{k−1}(x_{k−1})
      x_k ← sample from T_k(x | x_{k−1})
    end for
  end for
  return Ẑ_b = Σ_{i=1}^{M} w^(i) / M

The mathematical formulation of AIS leaves much flexibility for choosing intermediate distributions. However, one typically defines a path γ : [0, 1] → P through some family P of distributions. The intermediate distributions p_k are chosen to be points along this path corresponding to a schedule 0 = β_0 < β_1 < . . . < β_K = 1. One typically uses the geometric path γ_GA, defined in terms of geometric averages of p_a and p_b:
p_β(x) = f_β(x)/Z(β) = f_a(x)^{1−β} f_b(x)^{β} / Z(β).   (1)
Commonly, f_a is the uniform distribution, and (1) reduces to p_β(x) = f_b(x)^β / Z(β). This motivates the term 'annealing,' and β resembles an inverse temperature parameter. As in simulated annealing, the 'hotter' distributions often allow faster mixing between modes which are isolated in p_b.
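To make Algorithm 1 and the geometric path concrete, here is a minimal NumPy sketch (our illustration, not the authors' code; the one-dimensional Gaussian endpoints and the random-walk Metropolis operator standing in for T_k are assumptions chosen for demonstration):

import numpy as np

rng = np.random.default_rng(0)

# Initial distribution p_a = N(0, 1): f_a(x) = exp(-x^2/2), Z_a = sqrt(2*pi).
log_f_a = lambda x: -0.5 * x ** 2
log_Z_a = 0.5 * np.log(2 * np.pi)
# Unnormalized target f_b = exp(-0.5*((x - 3)/0.5)^2); true Z_b = sqrt(2*pi)*0.5.
log_f_b = lambda x: -0.5 * ((x - 3.0) / 0.5) ** 2

def log_f(x, beta):
    # Geometric path, Eq. (1): log f_beta = (1 - beta) log f_a + beta log f_b.
    return (1.0 - beta) * log_f_a(x) + beta * log_f_b(x)

M, K = 5000, 100
betas = np.linspace(0.0, 1.0, K + 1)
x = rng.normal(size=M)                    # exact samples from p_0
log_w = np.full(M, log_Z_a)               # w^(i) <- Z_a

for k in range(1, K + 1):
    log_w += log_f(x, betas[k]) - log_f(x, betas[k - 1])
    # One Metropolis sweep leaving p_{beta_k} invariant (plays the role of T_k).
    prop = x + 0.5 * rng.normal(size=M)
    accept = np.log(rng.uniform(size=M)) < log_f(prop, betas[k]) - log_f(x, betas[k])
    x = np.where(accept, prop, x)

log_Z_hat = np.logaddexp.reduce(log_w) - np.log(M)
print(log_Z_hat, np.log(np.sqrt(2 * np.pi) * 0.5))  # AIS estimate vs. truth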
AIS is closely related to a broader family of techniques for posterior inference and partition function
estimation, all based on the following identity from statistical physics:
log Z_b − log Z_a = ∫_0^1 E_{x∼p_β}[ (d/dβ) log f_β(x) ] dβ.   (2)
Thermodynamic integration [14] estimates (2) using numerical quadrature, and path sampling [12]
does so with Monte Carlo integration. The weight update in AIS can be seen as a finite difference
approximation. Tempered transitions [15] is a Metropolis-Hastings proposal operator which heats
up and cools down the distribution, and computes an acceptance ratio by approximating (2).
The choices of a path and a schedule are central to all of these methods. Most work on adapting paths
has focused on tuning schedules along a geometric path [15, 16, 17]. [15] showed that the geometric
schedule was optimal for annealing the scale parameter of a Gaussian, and [16] extended this result
more broadly. The aim of this paper is to propose, analyze, and evaluate a novel alternative to γ_GA
based on averaging moments of the initial and target distributions.
3
Analyzing AIS Paths
When analyzing AIS, it is common to assume perfect transitions, i.e. that each transition operator Tk returns an independent and exact sample from the distribution pk [4]. This corresponds to
the (somewhat idealized) situation where the Markov chains mix quickly. As Neal [4] pointed out,
assuming perfect transitions, the Central Limit Theorem shows that the samples w(i) are approximately log-normally distributed. In this case, the variances var(w(i) ) and var(log w(i) ) are both
monotonically related to E[log w(i) ]. Therefore, our analysis focuses on E[log w(i) ].
Assuming perfect transitions, the expected log weights are given by:
E[log w^(i)] = log Z_a + Σ_{k=0}^{K−1} E_{p_k}[log f_{k+1}(x) − log f_k(x)] = log Z_b − Σ_{k=0}^{K−1} D_KL(p_k ‖ p_{k+1}).   (3)
In other words, each log w^(i) can be seen as a biased estimator of log Z_b, where the bias δ = log Z_b − E[log w^(i)] is given by the sum of KL divergences Σ_{k=0}^{K−1} D_KL(p_k ‖ p_{k+1}).
Suppose P is a family of probability distributions parameterized by θ ∈ Θ, and the K + 1 distributions p_0, . . ., p_K are chosen to be linearly spaced along a path γ : [0, 1] → P. Let θ(β) represent the parameters of the distribution γ(β). As K is increased, the bias δ decays like 1/K, and the asymptotic behavior is determined by a functional F(γ).
Theorem 1. Suppose K + 1 distributions p_k are linearly spaced along a path γ. Assuming perfect transitions, if θ(β) and the Fisher information matrix G_θ(β) = cov_{x∼p_β}(∇_θ log p_β(x)) are continuous and piecewise smooth, then as K → ∞ the bias δ behaves as follows:
Kδ = K Σ_{k=0}^{K−1} D_KL(p_k ‖ p_{k+1}) → F(γ) ≜ (1/2) ∫_0^1 (dθ/dβ)^T G_θ(β) (dθ/dβ) dβ,   (4)
where dθ/dβ represents the derivative of θ with respect to β. [See supplementary material for proof.]
This result reveals a relationship with path sampling, as [12] showed that the variance of the path
sampling estimator is proportional to the same functional. One useful result from their analysis is
a derivation of the optimal schedule along a given path. In particular, the value of F(γ) using the optimal schedule is given by ℓ(γ)²/2, where ℓ is the Riemannian path length defined by
ℓ(γ) = ∫_0^1 sqrt( (dθ/dβ)^T G_θ(β) (dθ/dβ) ) dβ.   (5)
Intuitively, the optimal schedule allocates more distributions to regions where p_β changes quickly.
While [12] derived the optimal paths and schedules for some simple examples, they observed that
this is intractable in most cases and recommended using geometric paths in practice.
The above analysis assumes perfect transitions, which can be unrealistic in practice because many
distributions have separated modes between which mixing is difficult. As Neal [4] observed, in
such cases, AIS can be viewed as having two sources of variance: that caused by variability within
a mode, and that caused by misallocation of samples between modes. The former source of variance is well modeled by the perfect transitions analysis and can be made small by adding more
intermediate distributions. The latter, however, can persist even with large numbers of intermediate
distributions. While our theoretical analysis assumes perfect transitions, our proposed method often
gave substantial improvement empirically in situations with poor mixing.
4
Moment Averaging
As discussed in Section 2, the typical choice of intermediate distributions for AIS is the geometric
averages path γ_GA given by (1). In this section, we propose an alternative path for exponential
family models. An exponential family model is defined as
p(x) = (1/Z(θ)) h(x) exp(θ^T g(x)),   (6)
where θ are the natural parameters and g are the sufficient statistics. Exponential families include a
wide variety of statistical models, including Markov random fields.
In exponential families, geometric averages correspond to averaging the natural parameters:
θ(β) = (1 − β)θ(0) + βθ(1).   (7)
Exponential families can also be parameterized in terms of their moments s = E[g(x)]. For any
minimal exponential family (i.e. one whose sufficient statistics are linearly independent), there is a
one-to-one mapping between moments and natural parameters [18, p. 64]. We propose an alternative
to γ_GA called the moment averages path, denoted γ_MA, and defined by averaging the moments of
the initial and target distributions:
s(β) = (1 − β)s(0) + βs(1).   (8)
This path exists for any exponential family model, since the set of realizable moments is convex
[18]. It is unique, since g is unique up to affine transformation.
As an illustrative example, consider a multivariate Gaussian distribution parameterized by the mean
μ and covariance Σ. The moments are E[x] = μ and −(1/2)E[xx^T] = −(1/2)(Σ + μμ^T). By plugging
these into (8), we find that γ_MA is given by:
μ(β) = (1 − β)μ(0) + βμ(1)   (9)
Σ(β) = (1 − β)Σ(0) + βΣ(1) + β(1 − β)(μ(1) − μ(0))(μ(1) − μ(0))^T.   (10)
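A minimal sketch of the γ_MA intermediate parameters for Gaussians, transcribed directly from Eqs. (9)-(10) (our illustration; the example values are assumptions):

import numpy as np

def moment_avg_gaussian(mu0, Sigma0, mu1, Sigma1, beta):
    # Eqs. (9)-(10): interpolate means and covariances, with the extra
    # beta*(1-beta) term stretching the covariance along mu(1) - mu(0).
    mu = (1 - beta) * mu0 + beta * mu1
    d = (mu1 - mu0)[:, None]
    Sigma = (1 - beta) * Sigma0 + beta * Sigma1 + beta * (1 - beta) * (d @ d.T)
    return mu, Sigma

mu, Sigma = moment_avg_gaussian(np.array([-10.0, 0.0]), np.eye(2),
                                np.array([10.0, 0.0]), np.eye(2), beta=0.5)
print(Sigma)  # variance along the mean-to-mean axis grows to 1 + 0.25*400 = 101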
In other words, the means are linearly interpolated, and the covariances are linearly interpolated
and stretched in the direction connecting the two means. Intuitively, this stretching is a useful
property, because it increases the overlap between successive distributions with different means. A
comparison of the two paths is shown in Figure 1.
Next consider the example of a restricted Boltzmann machine (RBM),
a widely used model in deep learning. A binary RBM is a Markov
random field over binary vectors v (the visible units) and h (the hidden
units), and which has the distribution
p(v, h) ∝ exp(a^T v + b^T h + v^T W h).   (11)
The parameters of the model are the visible biases a, the hidden biases
b, and the weights W. Since these parameters are also the natural
parameters in the exponential family representation, γ_GA reduces to
linearly averaging the biases and the weights. The sufficient statistics
of the model are the visible activations v, the hidden activations h, and
the products vh^T. Therefore, γ_MA is defined by:
E[v]_β = (1 − β)E[v]_0 + βE[v]_1   (12)
E[h]_β = (1 − β)E[h]_0 + βE[h]_1   (13)
E[vh^T]_β = (1 − β)E[vh^T]_0 + βE[vh^T]_1   (14)
[Figure 1: Comparison of γ_GA and γ_MA for multivariate Gaussians: intermediate distribution for β = 0.5, and γ(β) for β evenly spaced from 0 to 1.]
For many models of interest, including RBMs, it is infeasible to determine γ_MA exactly, as it requires solving two often intractable problems: (1) computing the moments of p_b, and (2) solving for model parameters which match the averaged moments s(β). However, much work has been devoted to practical approximations [19, 20], some of which we use in our experiments with intractable models. Since it would be infeasible to moment match every β_k even approximately, we introduce the moment averages spline (MAS) path, denoted γ_MAS. We choose a set of R values β_1, . . ., β_R called knots, and solve for the natural parameters θ(β_j) to match the moments s(β_j) for each knot. We then interpolate between the knots using geometric averages. The analysis of Section 4.2 shows that, under the assumption of perfect transitions, using γ_MAS in place of γ_MA does not affect the cost functional F defined in Theorem 1.
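For intuition, the MAS construction can be written out for a toy family where moment matching is available in closed form (independent Bernoullis, s = sigmoid(θ); this family is our own illustration -- for RBMs the paper approximates the matching step with PCD):

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))     # moments: s = E[x] = sigmoid(theta)

def logit(s):
    return np.log(s) - np.log(1.0 - s)  # exact moment matching for this family

theta0, theta1 = np.array([-2.0, 0.5]), np.array([3.0, -1.0])
s0, s1 = sigmoid(theta0), sigmoid(theta1)

knots = np.arange(1, 10) / 10.0         # R = 9 knot locations, as in Section 5.2
knot_theta = [logit((1 - b) * s0 + b * s1) for b in knots]
# Between consecutive knots, gamma_MAS interpolates the natural parameters
# linearly (geometric averages), exactly as in Eq. (7).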
4.1
Variational Interpretation
By interpreting γ_GA and γ_MA as optimizing different variational objectives, we gain additional insight into their behavior. For geometric averages, the intermediate distribution γ_GA(β) minimizes
a weighted sum of KL divergences to the initial and target distributions:
p_β^(GA) = argmin_q (1 − β) D_KL(q ‖ p_a) + β D_KL(q ‖ p_b).   (15)
On the other hand, γ_MA minimizes the weighted sum of KL divergences in the reverse direction:
p_β^(MA) = argmin_q (1 − β) D_KL(p_a ‖ q) + β D_KL(p_b ‖ q).   (16)
See the supplementary material for the derivations. The objective function (15) is minimized by a
distribution which puts significant mass only in the 'intersection' of p_a and p_b, i.e. those regions which are likely under both distributions. By contrast, (16) encourages the distribution to be spread out in order to capture all high probability regions of both p_a and p_b. This interpretation helps explain why the intermediate distributions in the Gaussian example of Figure 1 take the shape that they do. In our experiments, we found that γ_MA often gave more accurate results than γ_GA because the intermediate distributions captured regions of the target distribution which were missed by γ_GA.
4.2
Asymptotics under Perfect Transitions
In general, we found that γ_GA and γ_MA can look very different. Intriguingly, both paths always result in the same value of the cost functional F(γ) of Theorem 1 for any exponential family model. Furthermore, nothing is lost by using the spline approximation γ_MAS in place of γ_MA:
Theorem 2. For any exponential family model with natural parameters θ and moments s, all three paths share the same value of the cost functional:
F(γ_GA) = F(γ_MA) = F(γ_MAS) = (1/2)(θ(1) − θ(0))^T (s(1) − s(0)).   (17)
Proof. The two parameterizations of exponential families satisfy the relationship G_θ (dθ/dβ) = ds/dβ [21, sec. 3.3]. Therefore, F(γ) can be rewritten as (1/2) ∫_0^1 (dθ/dβ)^T (ds/dβ) dβ. Because γ_GA and γ_MA linearly interpolate the natural parameters and moments respectively,
F(γ_GA) = (1/2)(θ(1) − θ(0))^T ∫_0^1 (ds/dβ) dβ = (1/2)(θ(1) − θ(0))^T (s(1) − s(0))   (18)
F(γ_MA) = (1/2)(s(1) − s(0))^T ∫_0^1 (dθ/dβ) dβ = (1/2)(s(1) − s(0))^T (θ(1) − θ(0)).   (19)
Finally, to show that F(γ_MAS) = F(γ_MA), observe that γ_MAS uses the geometric path between each pair of knots θ(β_j) and θ(β_{j+1}), while γ_MA uses the moments path. The above analysis shows the costs must be equal for each segment, and therefore equal for the entire path.
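As a quick numerical sanity check of Theorem 2 (our illustration; the scalar Bernoulli family, whose Fisher information is G(θ) = s(1 − s), and the endpoint values are assumptions):

import numpy as np

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
th0, th1 = -2.0, 3.0

betas = np.linspace(0.0, 1.0, 10001)
theta = (1 - betas) * th0 + betas * th1      # geometric path: linear in theta
G = sigmoid(theta) * (1 - sigmoid(theta))    # Fisher information along the path
F_numeric = 0.5 * np.trapz((th1 - th0) ** 2 * G, betas)
F_closed = 0.5 * (th1 - th0) * (sigmoid(th1) - sigmoid(th0))
print(F_numeric, F_closed)                   # agree to numerical precision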
This analysis shows that all three paths result in the same expected log weights asymptotically, assuming perfect transitions. There are several caveats, however. First, we have noticed experimentally that γ_MA often yields substantially more accurate estimates of Z_b than γ_GA even when the average log weights are comparable. Second, the two paths can have very different mixing properties, which can strongly affect the results. Third, Theorem 2 assumes linear schedules, and in principle there is room for improvement if one is allowed to tune the schedule.
For instance, consider annealing between two Gaussians p_a = N(μ_a, σ) and p_b = N(μ_b, σ). The optimal schedule for γ_GA is a linear schedule with cost F(γ_GA) = O(d²), where d = |μ_b − μ_a|/σ. Using a linear schedule, the moment path also has cost O(d²), consistent with Theorem 2. However, most of the cost of the path results from instability near the endpoints, where the variance changes suddenly. Using an optimal schedule, which allocates more distributions near the endpoints, the cost functional falls to O((log d)²), which is within a constant factor of the optimal path derived by [12]. (See the supplementary material for the derivations.) In other words, while F(γ_GA) = F(γ_MA), they achieve this value for different reasons: γ_GA follows an optimal schedule along a bad path, while γ_MA follows a bad schedule along a near-optimal path. We speculate that, combined with the procedure of Section 4.3 for choosing a schedule, moment averages may result in large reductions in the cost functional for some models.
4.3
Optimal Binned Schedules
In general, it is hard to choose a good schedule for a given path. However, consider the set of binned
schedules, where the path is divided into segments, some number Kj of intermediate distributions
are allocated to each segment, and the distributions are spaced linearly within each segment. Under
the assumption of perfect transitions, there is a simple formula for an asymptotically optimal binned
schedule which requires only the parameters obtained through moment averaging:
Theorem 3. Let γ be any path for an exponential family model defined by a set of knots β_j, each with natural parameters θ_j and moments s_j, connected by segments of either γ_GA or γ_MA paths. Then, under the assumption of perfect transitions, an asymptotically optimal allocation of intermediate distributions to segments is given by:
K_j ∝ sqrt( (θ_{j+1} − θ_j)^T (s_{j+1} − s_j) ).   (20)
Proof. By Theorem 2, the cost functional for segment j is F_j = (1/2)(θ_{j+1} − θ_j)^T (s_{j+1} − s_j). Hence, with K_j distributions allocated to it, it contributes F_j/K_j to the total cost. The values of K_j which minimize Σ_j F_j/K_j subject to Σ_j K_j = K and K_j ≥ 0 are given by K_j ∝ sqrt(F_j).
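Theorem 3 translates directly into a schedule-building routine (a sketch under stated assumptions: `thetas` and `moments` hold the natural parameters and moments at the knots, e.g. from the MAS moment-matching step):

import numpy as np

def binned_allocation(thetas, moments, K):
    # Per-segment costs proportional to (theta_{j+1}-theta_j)^T (s_{j+1}-s_j); Eq. (20).
    costs = [np.dot(thetas[j + 1] - thetas[j], moments[j + 1] - moments[j])
             for j in range(len(thetas) - 1)]
    weights = np.sqrt(np.maximum(costs, 0.0))
    K_j = K * weights / weights.sum()
    return np.maximum(np.round(K_j).astype(int), 1)

thetas = [np.array([t]) for t in (-2.0, 0.0, 3.0)]
moments = [1.0 / (1.0 + np.exp(-t)) for t in thetas]
print(binned_allocation(thetas, moments, K=100))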
Figure 2: Estimates of log Zb for a normalized Gaussian as K, the number of intermediate distributions, is
varied. True value: log Zb = 0. Error bars show bootstrap 95% confidence intervals. (Best viewed in color.)
5
Experimental Results
In order to compare our proposed path with geometric averages, we ran AIS using each path to estimate partition functions of several probability distributions. For all of our experiments, we report
two sets of results. First, we show the estimates of log Z as a function of K, the number of intermediate distributions, in order to visualize the amount of computation necessary to obtain reasonable
accuracy. Second, as recommended by [4], we report the effective sample size (ESS) of the weights
for a large K. This statistic roughly measures how many independent samples one obtains using
AIS.1 All results are based on 5,000 independent AIS runs, so the maximum possible ESS is 5,000.
5.1
Annealing Between Two Distant Gaussians
In our first experiment, the initial and target distributions were the two Gaussians shown in Fig. 1, whose parameters are N( [−10, 0]^T, [[1, 0.85], [0.85, 1]] ) and N( [10, 0]^T, [[1, −0.85], [−0.85, 1]] ). As both distributions are normalized, Z_a = Z_b = 1. We compared γ_GA and γ_MA both under perfect transitions, and using the Gibbs transition operator. We also compared linear schedules with the optimal binned schedules of Section 4.3, using 10 segments evenly spaced from 0 to 1.
Figure 2 shows the estimates of log Z_b for K ranging from 10 to 1,000. Observe that with 1,000 intermediate distributions, all paths yielded accurate estimates of log Z_b. However, γ_MA needed fewer intermediate distributions to achieve accurate estimates. For example, with K = 25, γ_MA resulted in an estimate within one nat of log Z_b, while the estimate based on γ_GA was off by 27 nats. This result may seem surprising in light of Theorem 2, which implies that F(γ_GA) = F(γ_MA) for linear schedules. In fact, the average log weights for γ_GA and γ_MA were similar for all values of K, as the theorem would suggest; e.g., with K = 25, the average was −27.15 for γ_MA and −28.04 for γ_GA. However, because the γ_MA intermediate distributions were broader, enough samples landed in high probability regions to yield reasonable estimates of log Z_b.
5.2
Partition Function Estimation for RBMs
Our next set of experiments focused on restricted Boltzmann machines (RBMs), a building block of
many deep learning models (see Section 4). We considered RBMs trained with three different methods: contrastive divergence (CD) [19] with one step (CD1), CD with 25 steps (CD25), and persistent
contrastive divergence (PCD) [20]. All of the RBMs were trained on the MNIST handwritten digits
dataset [22], which has long served as a benchmark for deep learning algorithms. We experimented
both with small, tractable RBMs and with full-size, intractable RBMs.
Since it is hard to compute γ_MA exactly for RBMs, we used the moments spline path γ_MAS of Section 4 with the 9 knot locations 0.1, 0.2, . . ., 0.9. We considered the two initial distributions
discussed by [9]: (1) the uniform distribution, equivalent to an RBM where all the weights and
biases are set to 0, and (2) the base rate RBM, where the weights and hidden biases are set to 0, and
the visible biases are set to match the average pixel values over the MNIST training set.
Small, Tractable RBMs: To better understand the behavior of γ_GA and γ_MAS, we first evaluated
the paths on RBMs with only 20 hidden units. In this setting, it is feasible to exactly compute the
¹ The ESS is defined as M/(1 + s²(w̃^(i))), where s²(w̃^(i)) is the sample variance of the normalized weights
[4]. In general, one should regard ESS estimates cautiously, as they can give misleading results in cases where
an algorithm completely misses an important mode of the distribution. However, as we report the ESS in cases
where the estimated partition functions are close to the true value (when known) or agree closely with each
other, we believe the statistic is meaningful in our comparisons.
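The footnoted statistic is a one-liner in practice (a sketch; `log_w` is the vector of M importance log-weights produced by Algorithm 1):

import numpy as np

def effective_sample_size(log_w):
    w = np.exp(log_w - log_w.max())   # exponentiate stably
    w = w / w.mean()                  # normalize so the weights have mean 1
    return len(w) / (1.0 + w.var())   # ESS = M / (1 + s^2(w~))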
Figure 3: Estimates of log Zb for the tractable PCD(20) RBM as K, the number of intermediate distributions,
is varied. Error bars indicate bootstrap 95% confidence intervals. (Best viewed in color.)
p_a(v)     path & schedule        CD1(20) log Ẑ_b / ESS    PCD(20) log Ẑ_b / ESS
uniform    GA linear              279.60 / 248              177.99 / 204
uniform    GA optimal binned      279.51 / 124              177.92 / 142
uniform    MAS linear             279.59 / 2686             178.09 / 289
uniform    MAS optimal binned     279.60 / 2619             178.08 / 934
(True values: log Z_b = 279.59 for CD1(20) and log Z_b = 178.06 for PCD(20).)
Table 1: Comparing estimates of log Z_b and effective sample size (ESS) for tractable RBMs. Results are shown for K = 100,000 intermediate distributions, with 5,000 chains and Gibbs transitions. Bolded values indicate ESS estimates that are not significantly different from the largest value (bootstrap hypothesis test with 1,000 samples at α = 0.05 significance level). The maximum possible ESS is 5,000.
Figure 4: Visible activations for samples from the PCD(500) RBM. (left) base rate RBM, β = 0 (top) geometric path (bottom) MAS path (right) target RBM, β = 1.
partition function and moments and to generate exact samples by exhaustively summing over all 2^20 hidden configurations. The moments of the target RBMs were computed exactly, and moment matching was performed with conjugate gradient using the exact gradients.
The results are shown in Figure 3 and Table 1. Under perfect transitions, γ_GA and γ_MAS were both able to accurately estimate log Z_b using as few as 100 intermediate distributions. However, using the Gibbs transition operator, γ_MAS gave accurate estimates using fewer intermediate distributions and achieved a higher ESS at K = 100,000. To check that the improved performance didn't rely on accurate moments of p_b, we repeated the experiment with highly biased moments.² The differences in log Ẑ_b and ESS compared to the exact moments condition were not statistically significant.
Full-size, Intractable RBMs: For intractable RBMs, moment averaging required approximately
solving two intractable problems: moment estimation for the target RBM, and moment matching.
We estimated the moments from 1,000 independent Gibbs chains, using 10,000 Gibbs steps with
1,000 steps of burn-in. The moment averaged RBMs were trained using PCD. (We used 50,000 updates with a fixed learning rate of 0.01 and no momentum.) In addition, we ran a cheap, inaccurate
moment matching scheme (denoted MAS cheap) where visible moments were estimated from the
empirical MNIST base rate and the hidden moments from the conditional distributions of the hidden
units given the MNIST digits. Intermediate RBMs were fit using 1,000 PCD updates and 100 particles, for a total computational cost far smaller than that of AIS itself. Results of both methods are
² In particular, we computed the biased moments from the conditional distributions of the hidden units given
the MNIST training examples, where each example of digit class i was counted i + 1 times.
Figure 5: Estimates of log Zb for intractable RBMs. Error bars indicate bootstrap 95% confidence intervals.
(Best viewed in color.)
p_a(v)     path              CD1(500) log Ẑ_b / ESS   PCD(500) log Ẑ_b / ESS   CD25(500) log Ẑ_b / ESS
uniform    GA linear         341.53 / 4                417.91 / 169              451.34 / 13
uniform    MAS linear        359.09 / 3076             418.27 / 620              449.22 / 12
uniform    MAS cheap linear  359.09 / 3773             418.33 / 5                450.90 / 30
base rate  GA linear         359.10 / 4924             418.20 / 159              451.27 / 2888
base rate  MAS linear        359.07 / 2203             418.26 / 1460             451.31 / 304
base rate  MAS cheap linear  359.09 / 2465             418.25 / 359              451.14 / 244
Table 2: Comparing estimates of log Z_b and effective sample size (ESS) for intractable RBMs. Results are shown for K = 100,000 intermediate distributions, with 5,000 chains and Gibbs transitions. Bolded values indicate ESS estimates that are not significantly different from the largest value (bootstrap hypothesis test with 1,000 samples at α = 0.05 significance level). The maximum possible ESS is 5,000.
shown in Figure 5 and Table 2. Overall, the MAS results compare favorably with those of GA on
both of our metrics. Performance was comparable under MAS cheap, suggesting that γ_MAS can be
approximated cheaply and effectively. As with the tractable RBMs, we found that optimal binned
schedules made little difference in performance, so we focus here on linear schedules.
The most serious failure was γ_GA for CD1(500) with uniform initialization, which underestimated our best estimates of the log partition function (and hence overestimated held-out likelihood) by nearly 20 nats. The geometric path from uniform to PCD(500) and the moments path from uniform to CD1(500) also resulted in underestimates, though less drastic. The rest of the paths agreed closely with each other on their partition function estimates, although some methods achieved substantially higher ESS values on some RBMs. One conclusion is that it's worth exploring multiple initializations and paths for a given RBM in order to ensure accurate results.
Figure 4 compares samples along γ_GA and γ_MAS for the PCD(500) RBM using the base rate initialization. For a wide range of β values, the γ_GA RBMs assigned most of their probability mass to blank images. As discussed in Section 4.1, γ_GA prefers configurations which are probable under both the initial and target distributions. In this case, the hidden activations were closer to uniform conditioned on a blank image than on a digit, so γ_GA preferred blank images. By contrast, γ_MAS yielded diverse, blurry digits which gradually coalesced into crisper ones.
6
Conclusion
We presented a theoretical analysis of the performance of AIS paths and proposed a novel path
for exponential families based on averaging moments. We gave a variational interpretation of this
path and derived an asymptotically optimal piecewise linear schedule. Moment averages performed
well empirically at estimating partition functions of RBMs. We hope moment averaging can also
improve other path-based sampling algorithms which typically use geometric averages, such as path
sampling [12], parallel tempering [23], and tempered transitions [15].
Acknowledgments
This research was supported by NSERC and Quanta Computer. We thank Geoffrey Hinton for
helpful discussions. We also thank the anonymous reviewers for thorough and helpful feedback.
References
[1] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Trans. on Inf. Theory, 51(7):2282–2312, 2005.
[2] Martin J. Wainwright, Tommi Jaakkola, and Alan S. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313–2335, 2005.
[3] Amir Globerson and Tommi Jaakkola. Approximate inference using conditional entropy decompositions. In 11th International Workshop on AI and Statistics (AISTATS 2007), 2007.
[4] Radford Neal. Annealed importance sampling. Statistics and Computing, 11:125–139, 2001.
[5] John Skilling. Nested sampling for general Bayesian computation. Bayesian Analysis, 1(4):833–859, 2006.
[6] Pierre Del Moral, Arnaud Doucet, and Ajay Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B (Methodology), 68(3):411–436, 2006.
[7] Jascha Sohl-Dickstein and Benjamin J. Culpepper. Hamiltonian annealed importance sampling for partition function estimation. Technical report, Redwood Center, UC Berkeley, 2012.
[8] Lucas Theis, Sebastian Gerwinn, Fabian Sinz, and Matthias Bethge. In all likelihood, deep belief is not enough. Journal of Machine Learning Research, 12:3071–3096, 2011.
[9] Ruslan Salakhutdinov and Ian Murray. On the quantitative analysis of deep belief networks. In Int'l Conf. on Machine Learning, pages 6424–6429, 2008.
[10] Guillaume Desjardins, Aaron Courville, and Yoshua Bengio. On tracking the partition function. In NIPS 24. MIT Press, 2011.
[11] Graham Taylor and Geoffrey Hinton. Products of hidden Markov models: It takes N > 1 to tango. In Uncertainty in Artificial Intelligence, 2009.
[12] Andrew Gelman and Xiao-Li Meng. Simulating normalizing constants: From importance sampling to bridge sampling to path sampling. Statistical Science, 13(2):163–186, 1998.
[13] Christopher Jarzynski. Equilibrium free-energy differences from nonequilibrium measurements: A master-equation approach. Physical Review E, 56:5018–5035, 1997.
[14] Daan Frenkel and Berend Smit. Understanding Molecular Simulation: From Algorithms to Applications. Academic Press, 2nd edition, 2002.
[15] Radford Neal. Sampling from multimodal distributions using tempered transitions. Statistics and Computing, 6:353–366, 1996.
[16] Gundula Behrens, Nial Friel, and Merrilee Hurn. Tuning tempered transitions. Statistics and Computing, 22:65–78, 2012.
[17] Ben Calderhead and Mark Girolami. Estimating Bayes factors via thermodynamic integration and population MCMC. Computational Statistics and Data Analysis, 53(12):4028–4045, 2009.
[18] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[19] Geoffrey E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
[20] Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Intl. Conf. on Machine Learning, 2008.
[21] Shun-ichi Amari and Hiroshi Nagaoka. Methods of Information Geometry. Oxford University Press, 2000.
[22] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[23] Y. Iba. Extended ensemble Monte Carlo. International Journal of Modern Physics C, 12(5):623–656, 2001.
4,286 | 488 | Bayesian Model Comparison and Backprop Nets
David J.C. MacKay?
Computation and Neural Systems
California Institute of Technology 139-14
Pasadena CA 91125
mackay@ras.phy.cam.ac.uk
Abstract
The Bayesian model comparison framework is reviewed, and the Bayesian
Occam's razor is explained. This framework can be applied to feedforward
networks, making possible (1) objective comparisons between solutions
using alternative network architectures; (2) objective choice of magnitude
and type of weight decay terms; (3) quantified estimates of the error bars
on network parameters and on network output. The framework also generates a measure of the effective number of parameters determined by the
data.
The relationship of Bayesian model comparison to recent work on prediction of generalisation ability (Guyon et al., 1992, Moody, 1992) is discussed.
1
BAYESIAN INFERENCE AND OCCAM'S RAZOR
In science, a central task is to develop and compare models to account for the data
that are gathered. Typically, two levels of inference are involved in the task of
data modelling. At the first level of inference, we assume that one of the models
that we invented is true, and we fit that model to the data. Typically a model
includes some free parameters; fitting the model to the data involves inferring what
values those parameters should probably take, given the data. This is repeated for
each model. The second level of inference is the task of model comparison. Here,
?Current address: Darwin College, Cambridge CB3 9EU, U.K.
we wish to compare the models in the light of the data, and assign some sort of
preference or ranking to the alternatives. 1
For example, consider the task of interpolating a noisy data set. The data set
could be interpolated using a splines model, polynomials, or feedforward neural
networks. At the first level of inference, we find for each individual model the best
fit interpolant (a process sometimes known as 'learning'). At the second level of
inference we want to rank the alternative models and state for our particular data
set that, for example, 'splines are probably the best interpolation model', or 'if the
interpolant is modelled as a polynomial, it should probably be a cubic', or 'the best
neural network for this data set has eight hidden units'.
Model comparison is a difficult task because it is not possible simply to choose the
model that fits the data best: more complex models can always fit the data better,
so the maximum likelihood model choice leads us inevitably to implausible overparameterised models which generalise poorly. 'Occam's razor' is the principle that
states that unnecessarily complex models should not be preferred to simpler ones.
Bayesian methods automatically and quantitatively embody Occam's razor (Gull,
1988, Jeffreys, 1939), without the introduction of ad hoc penalty terms. Complex
models are automatically self-penalising under Bayes' rule.
Let us write down Bayes' rule for the two levels of inference described above. Assume each model H_i has a vector of parameters w. A model is defined by its functional form and two probability distributions: a 'prior' distribution P(w|H_i) which states what values the model's parameters might plausibly take; and the predictions P(D|w, H_i) that the model makes about the data D when its parameters have a particular value w. Note that models with the same parameterisation but different priors over the parameters are therefore defined to be different models.
1. Model fitting. At the first level of inference, we assume that one model H_i is true, and we infer what the model's parameters w might be given the data D. Using Bayes' rule, the posterior probability of the parameters w is:
P(w|D, H_i) = P(D|w, H_i) P(w|H_i) / P(D|H_i).   (1)
In words:
Posterior = (Likelihood × Prior) / Evidence.
It is common to use gradient-based methods to find the maximum of the posterior, which defines the most probable value for the parameters, w_MP; it is then common to summarise the posterior distribution by the value of w_MP, and error bars on these best fit parameters. The error bars are obtained from the curvature of the posterior; writing the Hessian A = −∇∇ log P(w|D, H_i) and Taylor-expanding the log posterior with Δw = w − w_MP,
P(w|D, H_i) ≈ P(w_MP|D, H_i) exp(−(1/2) Δw^T A Δw),   (2)
¹ Note that both levels of inference are distinct from decision theory. The goal of inference is, given a defined hypothesis space and a particular data set, to assign probabilities to hypotheses. Decision theory chooses between alternative actions on the basis of these probabilities so as to minimise the expectation of a 'loss function'.
Figure 1: The Occam factor
This figure shows the quantities that determine the Occam factor for a hypothesis H_i having a single parameter w. The prior distribution (dotted line) for the parameter has width Δ⁰w. The posterior distribution (solid line) has a single peak at w_MP with characteristic width Δw. The Occam factor is Δw/Δ⁰w.
we see that the posterior can be locally approximated as a gaussian with covariance matrix (error bars) A⁻¹.
2. Model comparison. At the second level of inference, we wish to infer which
model is most plausible given the data. The posterior probability of each model is:
P(H_i|D) ∝ P(D|H_i) P(H_i)   (3)
Notice that the objective data-dependent term P(D|H_i) is the evidence for H_i, which appeared as the normalising constant in (1). The second term, P(H_i), is a 'subjective' prior over our hypothesis space. Assuming that we have no reason to assign strongly differing priors P(H_i) to the alternative models, models H_i are ranked by evaluating the evidence.
This concept is very general: the evidence can be evaluated for parametric and
'non-parametric' models alike; whether our data modelling task is a regression
problem, a classification problem, or a density estimation problem, the evidence
is the Bayesian's transportable quantity for comparing alternative models. In all
these cases the evidence naturally embodies Occam's razor, as we will now see. The
evidence is the normalising constant for equation (1):
P(D|H_i) = ∫ P(D|w, H_i) P(w|H_i) dw   (4)
For many problems, including interpolation, it is common for the posterior P(w|D, H_i) ∝ P(D|w, H_i) P(w|H_i) to have a strong peak at the most probable parameters w_MP (figure 1). Then the evidence can be approximated by the height of the peak of the integrand P(D|w, H_i) P(w|H_i) times its width, Δw:
P(D|H_i) ≈ P(D|w_MP, H_i) × P(w_MP|H_i) Δw   (5)
     Evidence ≈ Best fit likelihood × Occam factor
Thus the evidence is found by taking the best fit likelihood that the model can achieve and multiplying it by an 'Occam factor' (Gull, 1988), which is a term with magnitude less than one that penalises H_i for having the parameter w.
Interpretation of the Occam factor
The quantity Δw is the posterior uncertainty in w. Imagine for simplicity that the prior P(w|H_i) is uniform on some large interval Δ⁰w (figure 1), so that P(w_MP|H_i) = 1/Δ⁰w; then
Occam factor = Δw / Δ⁰w,
i.e. the ratio of the posterior accessible volume of H_i's parameter space to the prior accessible volume (Gull, 1988, Jeffreys, 1939). The log of the Occam factor can be interpreted as the amount of information we gain about the model H_i when the data arrive.
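As a one-parameter worked example (the numbers are our own illustration, not from the text): if the prior allows w to range over Δ⁰w = 10 while the data pin it down to Δw = 0.1, the Occam factor is 0.1/10 = 0.01, so the model's evidence is its best-fit likelihood discounted by a factor of 100, i.e. by log 100 ≈ 4.6 nats.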
Typically, a complex or flexible model with many parameters, each of which is free
to vary over a large range Δ⁰w, will be penalised with a larger Occam factor than
a simpler model. The Occam factor also penalises models which have to be finely
tuned to fit the data. Which model achieves the greatest evidence is determined
by a trade-off between minimising this natural complexity measure and minimising
the data misfit.
Occam factor for several parameters
If w is k-dimensional, and if the posterior is well approximated by a gaussian, the
Occam factor is given by the determinant of the gaussian's covariance matrix:
Occam factor = P(w_MP|H_i) (2π)^{k/2} det^{−1/2} A,   (6)
where A = −∇∇ log P(w|D, H_i), the Hessian which we already evaluated when we calculated the error bars on w_MP. As the amount of data collected, N, increases,
this gaussian approximation is expected to become increasingly accurate on account
of the central limit theorem.
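As an illustration of the multi-parameter case (a minimal sketch, not from the paper; the isotropic Gaussian prior with per-weight width sigma_w is an assumption chosen to make the prior density at w_MP computable):

import numpy as np

def log_evidence(log_best_fit_likelihood, w_mp, A, sigma_w):
    # log P(D|H) ~ best-fit log likelihood + log Occam factor, Eq. (6):
    # Occam factor = P(w_MP|H) * (2*pi)^(k/2) * det(A)^(-1/2).
    k = len(w_mp)
    log_prior_at_mp = (-0.5 * np.sum((w_mp / sigma_w) ** 2)
                       - k * np.log(sigma_w * np.sqrt(2 * np.pi)))
    _, logdet_A = np.linalg.slogdet(A)
    log_occam = log_prior_at_mp + 0.5 * k * np.log(2 * np.pi) - 0.5 * logdet_A
    return log_best_fit_likelihood + log_occam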
Thus Bayesian model selection is a simple extension of maximum likelihood model
selection: the evidence is obtained by multiplying the best fit likelihood
by the Occam factor. To evaluate the Occam factor all we need is the Hessian
A, if the gaussian approximation is good. Thus the Bayesian method of model
comparison by evaluating the evidence is computationally no more demanding than
the task of finding for each model the best fit parameters and their error bars.
2
THE EVIDENCE FOR NEURAL NETWORKS
Neural network learning procedures include a host of control parameters such as
the number of hidden units and weight decay rates. These parameters are difficult
to set because there is an Occam's razor problem: if we just set the parameters
so as to minimise the error on the training set, we would be led to over-complex
and under-regularised models which over-fit the data. Figure 2a illustrates this
problem by showing the test error versus the training error of a hundred networks
of varying complexity all trained on the same interpolation problem.
Bayesian Model Comparison and Backprop Nets
Of course if we had unlimited resources, we could compare these networks by measuring the error on an unseen test set or by similar cross-validation techniques.
However these techniques may require us to devote a large amount of data to the
test set, and may be computationally demanding. If there are several parameters
like weight decay rates, it is preferable if they can be optimised on line.
Using the Bayesian framework, it is possible for all our data to have a say in both the
model fitting and the model comparison levels of inference. We can rank alternative
neural network solutions by evaluating the 'evidence'. Weight decay rates can also
be optimised by finding the 'most probable' weight decay rate. Alternative weight
decay schemes can be compared using the evidence. The evidence also makes it
possible to compare neural network solutions with other interpolation models, for
example, splines or radial basis functions, and to choose control parameters such
as spline order or RBF kernel. The framework can be applied to classification
networks as well as the interpolation networks discussed here. For details of the
theoretical framework (which is due to Gull and Skilling (1989)) and for more
complete discussion and bibliography, MacKay (1991) should be consulted.
2.1
THE PROBABILISTIC INTERPRETATION
Fitting a backprop network to a data set D = {x, t} often involves minimising an objective function M(w) = βE_D(w; D) + αE_W(w). The first term is the data error, for example E_D = Σ (1/2)(y − t)², and the second term is a regulariser (weight decay term), for example E_W = Σ (1/2)w_i². (There may be several regularisers with independent constants {α_c}. The Bayesian framework also covers that case.) A model H has three components {A, N, R}: The architecture A specifies the functional dependence of the input-output mapping on the network's parameters w. The noise model N specifies the functional form of the data error. Within the probabilistic interpretation (Tishby et al., 1989), the data error is viewed as relating to a likelihood, P(D|w, β, A, N) = exp(−βE_D)/Z_D. For example, a quadratic E_D corresponds to the assumption that the distribution of errors between the data and the true interpolant is Gaussian, with variance σ_ν² = 1/β. Lastly, the regulariser R, with associated regularisation constant α, is interpreted as specifying a prior on the parameters w, P(w|α, A, R) = exp(−αE_W)/Z_W. For example, the use of a plain quadratic regulariser corresponds to a Gaussian prior distribution for the parameters.
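To make the correspondence concrete (a minimal sketch; the tiny linear model is our own illustration): minimising M(w) is exactly maximising the unnormalised log posterior:

import numpy as np

def data_error(w, X, t):
    return 0.5 * np.sum((X @ w - t) ** 2)   # E_D, quadratic data error

def weight_error(w):
    return 0.5 * np.sum(w ** 2)             # E_W, quadratic regulariser

def log_posterior_unnorm(w, X, t, alpha, beta):
    # log P(w|D, alpha, beta, H) = -beta*E_D - alpha*E_W + const = -M(w) + const
    return -(beta * data_error(w, X, t) + alpha * weight_error(w))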
Given this probabilistic interpretation, interpolation with neural networks can then
be decomposed into three levels of inference:
1. Fitting a regularised model:
P(w|D, α, β, H_i) = P(D|w, β, H_i) P(w|α, H_i) / P(D|α, β, H_i)
2a. Setting regularisation constants and estimating noise level:
P(α, β|D, H_i) = P(D|α, β, H_i) P(α, β|H_i) / P(D|H_i)
2. Model comparison, as in equation (3).
Both levels 2a and 2 require Occam's razor. For both levels the key step is to evaluate the evidence P(D|α, β, H), which, within the quadratic approximation
Figure 2: The evidence solves the neural networks' Occam problem
a) Test error vs. data error. Each point represents the performance of a single trained
neural network on the training set and on the test set. This graph illustrates the fact that
the best generalisation is not achieved by the models which fit the training data best.
b) Log Evidence vs. test error.
around w MP , is given by:
log P(D|α, β, H) = −αE_W^MP − βE_D^MP − (1/2) log det A − log Z_W(α) − log Z_D(β) + (k/2) log 2π.   (7)
At level 2a we can find the most probable value for the regularisation constant α and noise level 1/β by differentiating (7) with respect to α and β. The result is
χ_W² ≡ 2αE_W = γ  and  χ_D² ≡ 2βE_D = N − γ,   (8)
where γ is 'the effective number of parameters determined by the data' (Gull, 1989),
γ = k − α Trace A⁻¹ = Σ_{a=1}^{k} λ_a / (λ_a + α),   (9)
where λ_a are the eigenvalues of ∇∇βE_D in the natural basis of E_W. Each term
in the sum is a number between 0 and 1 which measures how well one parameter is determined by the data rather than by the prior. The expressions (8), or
approximations to them, can be used to re-estimate weight decay rates on line.
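The re-estimation implied by (8)-(9) can be sketched as a fixed-point loop (our illustration; `lam` holds the eigenvalues λ_a of ∇∇βE_D and is treated as fixed across iterations, which is an approximation, since the eigenvalues rescale when β changes):

import numpy as np

def reestimate_hyperparams(lam, E_W, E_D, N, alpha=1.0, n_iter=50):
    for _ in range(n_iter):
        gamma = np.sum(lam / (lam + alpha))   # effective parameter count, Eq. (9)
        alpha = gamma / (2.0 * E_W)           # from chi^2_W = 2*alpha*E_W = gamma
    beta = (N - gamma) / (2.0 * E_D)          # from chi^2_D = 2*beta*E_D = N - gamma
    return alpha, beta, gamma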
The central quantity in the evidence and in γ is the inverse Hessian A⁻¹, which
describes the error bars on the parameters w. From this we can also obtain error
bars on the outputs of a network (Denker and Le Cun, 1991, MacKay, 1991). These
error bars are closely related to the predicted generalisation error calculated by
Levin et al.(1989). In (MacKay, 1991) the practical utility of these error bars is
demonstrated for both regression and classification networks.
Figure 2b shows the Bayesian 'evidence' for each of the solutions in figure 2a against
the test error. It can be seen that the correlation between the evidence and the
test error is extremely good. This good correlation depends on the model being
well-matched to the problem; when an inconsistent weight decay scheme was used
(forcing all weights to decay at the same rate), it was found that the correlation between the evidence and the test error was much poorer. Such comparisons between
Bayesian and traditional methods are powerful tools for human learning.
Bayesian Model Comparison and Backprop Nets
3
RELATION TO THEORIES OF GENERALISATION
The Bayesian 'evidence' framework assesses within a well-defined hypothesis space
how probable a set of alternative models are. However, what we really want to
know is how well each model is expected to generalise. Empirically, the correlation
between the evidence and generalisation error is surprisingly good. But a theoretical
connection linking the two is not yet established. Here, a brief discussion is given
of similarities and differences between the evidence and quantities arising in recent
work on prediction of generalisation error.
3.1
RELATION TO MOODY'S 'G.P.E.'
Moody's (1992) 'Generalised Prediction Error' is a generalisation of Akaike's
'F .P.E.' to non-linear regularised models. The F .P.E. is an estimator of generalisation error which can be derived without making assumptions about the distribution
of errors between the data and true interpolant, and without assuming a known
class to which the true interpolant belongs. The difference between F .P.E. and
G.P.E. is that the total number of parameters k in F.P.E. is replaced by an effective
number of parameters, which is in fact identical to the quantity γ arising in the Bayesian analysis (9). If E_D is as defined earlier,
G.P.E. = (E_D + σ_ν² γ) / N.   (10)
Like the log evidence, the G .P.E. has the form of the data error plus a term that
penalises complexity. However, although the same quantity γ arises in the Bayesian
analysis, the Bayesian Occam factor does not have the same scaling behaviour as the
G.P.E. term (see discussion below). And empirically, the G.P.E. is not always a good
predictor of generalisation. The reason for this is that in the derivation of the G.P.E.,
it is assumed that the distribution over x values is well approximated by a sum of
delta functions at the samples in the training set. This is equivalent to assuming test
samples will be drawn only at the x locations at which we have already received data.
This assumption breaks down for over-parameterised and over-flexible models. An
additional distinction between the G.P.E. and the evidence framework is that
the G.P.E. is defined for regression problems only; the evidence can be evaluated
for regression, classification and density estimation models.
3.2
RELATION TO THE EFFECTIVE V-C DIMENSION
Recent work on 'structural risk minimisation' (Guyon et al., 1992) utilises empirical
expressions of the form:
E_gen ≈ E_D/N + [c₁ log(N/γ) + c₂] / (N/γ),   (11)
where γ is the 'effective V-C dimension' of the model, and is identical to the quantity arising in (9). The constants c₁ and c₂ are determined by experiment. The
structural risk theory is currently intended to be applied only to nested families of
classification models (hence the abscence of (3: ED is dimensionless) with monotonic
effective V-C dimension, whereas the evidence can be evaluated for any models.
However, it is very interesting that the scaling behaviour of this expression (11) is
identical to the scaling behaviour of the log evidence (1), subject to the following
assumptions. Assume that the value of the regularisation constant satisfies (8).
Assume furthermore that the significant eigenvalues (λ_a > α) scale as λ_a ∼ N^(a/γ).
(It can be confirmed that this scaling is obtained for example in the interpolation
models consisting of a sequence of steps of independent heights, as we vary the
number of steps.) Then it can be shown that the scaling of the log evidence is:

    -log P(D|α, β, H) ≈ βE_D^MP + ½ (γ log(N/γ) + γ)                 (12)
(Readers familiar with MDL will recognise the dominant log N term; MDL and
Bayes are identical.) Thus the scaling behaviour of the log evidence is identical to
the structural risk minimisation expression (11), if c₁ = 1/2 and c₂ = 1/2. I. Guyon
(personal communication) has confirmed that the empirically determined values for
c₁ and c₂ are indeed close to these Bayesian values. It will be interesting to try and
understand and develop this relationship.
Acknowledgements
This work was supported by studentships from Caltech and SERC, UK.
References
J.S. Denker and Y. Le Cun (1991). 'Transforming neural-net output levels to probability distributions', in Advances in neural information processing systems 3, ed.
R.P. Lippmann et al., 853-859, Morgan Kaufmann.
S.F. Gull (1988). 'Bayesian inductive inference and maximum entropy', in Maximum Entropy and Bayesian Methods in science and engineering, vol. 1: Foundations, G.J. Erickson and C.R. Smith, eds., Kluwer.
S.F. Gull (1989). 'Developments in maximum entropy data analysis', in Maximum
Entropy and Bayesian Methods, J. Skilling, ed., 53-71, Kluwer.
I. Guyon, V.N. Vapnik, B.E. Boser, L.Y. Bottou and S.A. Solla (1992). 'Structural
risk minimization for character recognition', this volume.
H. Jeffreys (1939). Theory of Probability, Oxford Univ. Press.
E. Levin, N. Tishby and S. Solla (1989). 'A statistical approach to learning and
generalization in layered neural networks', in COLT '89: 2nd workshop on computational learning theory, 245-260.
D.J.C. MacKay (1991) 'Bayesian Methods for Adaptive Models', Ph.D. Thesis, Caltech. Also 'Bayesian interpolation', 'A practical Bayesian framework for backprop
networks', 'Information-based objective functions for active data selection', to appear in Neural computation. And 'The evidence framework applied to classification
networks', submitted to Neural computation.
J .E. Moody (1992). 'Generalization, regularization and architecture selection in
nonlinear learning systems', this volume.
N. Tishby, E. Levin and S.A. Solla (1989). 'Consistent inference of probabilities in
layered networks: predictions and generalization', in Proc. IJCNN, Washington.
4,287 | 4,880 | A simple example of Dirichlet process mixture
inconsistency for the number of components
Jeffrey W. Miller
Division of Applied Mathematics
Brown University
Providence, RI 02912
jeffrey [email protected]
Matthew T. Harrison
Division of Applied Mathematics
Brown University
Providence, RI 02912
matthew [email protected]
Abstract
For data assumed to come from a finite mixture with an unknown number of components, it has become common to use Dirichlet process mixtures (DPMs) not
only for density estimation, but also for inferences about the number of components. The typical approach is to use the posterior distribution on the number of
clusters, that is, the posterior on the number of components represented in the
observed data. However, it turns out that this posterior is not consistent: it does
not concentrate at the true number of components. In this note, we give an elementary proof of this inconsistency in what is perhaps the simplest possible setting: a
DPM with normal components of unit variance, applied to data from a "mixture"
with one standard normal component. Further, we show that this example exhibits
severe inconsistency: instead of going to 1, the posterior probability that there is
one cluster converges (in probability) to 0.
1 Introduction
It is well-known that Dirichlet process mixtures (DPMs) of normals are consistent for the density,
that is, given data from a sufficiently regular density p₀ the posterior converges to the point mass at
p₀ (see [1] for details and references). However, it is easy to see that this does not necessarily imply
consistency for the number of components, since for example, a good estimate of the density might
include superfluous components having vanishingly small weight.
Despite the fact that a DPM has infinitely many components with probability 1, it has become
common to apply DPMs to data assumed to come from finitely many components or "populations",
and to apply the posterior on the number of clusters (in other words, the number of components used
in the process of generating the observed data) for inferences about the true number of components;
see [2, 3, 4, 5, 6] for a few prominent examples. Of course, if the data-generating process very
closely resembles the DPM model, then it is fine to use this posterior for inferences about the number
of clusters (but beware of misspecification; see Section 2). However, in the examples cited, the
authors evaluated the performance of their methods on data simulated from a fixed finite number of
components or populations, suggesting that they found this to be more realistic than a DPM for their
applications.
Therefore, it is important to understand the behavior of this posterior when the data comes from a
finite mixture; in particular, does it concentrate at the true number of components? In this note,
we give a simple example in which a DPM is applied to data from a finite mixture and the posterior
distribution on the number of clusters does not concentrate at the true number of components. In
fact, DPMs exhibit this type of inconsistency under very general conditions [7]; however, the aim
of this note is brevity and clarity. To that end, we focus our attention on a special case that is as
Figure 1: Prior (red x) and estimated posterior (blue o) of the number of clusters in the observed
data, for a univariate normal DPM on n i.i.d. samples from (a) N(0, 1), and (b) Σ_{k=-2}^{2} (1/5) N(4k, 1/2).
The DPM had concentration parameter α = 1 and a Normal-Gamma base measure on the mean and
precision: N(μ | 0, 1/(cλ)) Gamma(λ | a, b) with a = 1, b = 0.1, and c = 0.001. Estimates were
made using a collapsed Gibbs sampler, with 10⁴ burn-in sweeps and 10⁵ sample sweeps; traceplots
and running averages were used as convergence diagnostics. Each plot shown is an average over 5
independent runs.
simple as possible: a "standard normal DPM", that is, a DPM using univariate normal components
of unit variance, with a standard normal base measure (prior on component means).
The rest of the paper is organized as follows. In Section 2, we address several pertinent questions
and consider some suggestive experimental evidence. In Section 3, we formally define the DPM
model under consideration. In Section 4, we give an elementary proof of inconsistency in the case
of a standard normal DPM on data from one component, and in Section 5, we show that on standard
normal data, a standard normal DPM is in fact severely inconsistent.
2 Discussion
It should be emphasized that these results do not diminish, in any way, the utility of Dirichlet process
mixtures as a flexible prior on densities, i.e., for Bayesian density estimation. In addition to their
widespread success in empirical studies, DPMs are backed by theoretical guarantees showing that
in many cases the posterior on the density concentrates at the true density at the minimax-optimal
rate, up to a logarithmic factor (see [1] and references therein).

Many researchers (e.g. [8, 9], among others) have empirically observed that the DPM posterior on
the number of clusters tends to overestimate the number of components, in the sense that it tends to
put its mass on a range of values greater or equal to the true number. Figure 1 illustrates this effect for
univariate normals, and similar experiments with different families of component distributions yield
similar results. Thus, while our theoretical results in Sections 4 and 5 (and in [7]) are asymptotic in
nature, experimental evidence suggests that the issue is present even in small samples.

It is natural to think that this overestimation is due to the fact that the prior on the number of clusters
diverges as n → ∞, at a log n rate. However, this does not seem to be the main issue; rather,
the problem is that DPMs strongly prefer having some tiny clusters and will introduce extra clusters
even when they are not needed (see [7] for an intuitive explanation of why this is the case).
In fact, many researchers have observed the presence of tiny extra clusters (e.g. [8, 9]), but the reason
for this has not previously been well understood, often being incorrectly attributed to the difficulty of
detecting components with small weight. These tiny extra clusters are rather inconvenient, especially
in clustering applications, and are often dealt with in an ad hoc way by simply removing them. It
might be possible to consistently estimate the number of components in this way, but this remains
an open question.
A more natural solution is the following: if the number of components is unknown, put a prior on
the number of components. For example, draw the number of components s from a probability
mass function p(s) on {1, 2, . . .} with p(s) > 0 for all s, draw mixing weights π = (π₁, . . . , π_s)
(given s), draw component parameters θ₁, . . . , θ_s i.i.d. (given s and π) from an appropriate prior,
and draw X₁, X₂, . . . i.i.d. (given s, π, and θ_{1:s}) from the resulting mixture. This approach has been
widely used [10, 11, 12, 13]. Under certain conditions, the posterior on the density has been shown
to concentrate at the true density at the minimax-optimal rate, up to a logarithmic factor, for any
sufficiently regular true density [14]. Strictly speaking, as defined, such a model is not identifiable,
but it is fairly straightforward to modify it to be identifiable by choosing one representative from
each equivalence class. Subject to a modification of this sort, it can be shown (see [10]) that under
very general conditions, when the data is from a finite mixture of the chosen family, such models are
(a.e.) consistent for the number of components, the mixing weights, the component parameters, and
the density. Also see [15] for an interesting discussion about estimating the number of components.
A sketch of this generative process appears below.
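The following is a minimal Python sketch of the generative process just described (illustrative only; the geometric prior on s, the symmetric Dirichlet weights, and the N(0, 10²) prior on means are our own example choices, not prescribed by the references).

import numpy as np

rng = np.random.default_rng(0)

def sample_finite_mixture(n, max_s=20):
    """One draw from the finite mixture with a prior on the number of
    components: s ~ p(s), weights ~ Dirichlet, means ~ prior, data ~ mixture."""
    s = min(int(rng.geometric(0.3)), max_s)   # number of components; p(s) > 0 for all s
    pi = rng.dirichlet(np.ones(s))            # mixing weights pi = (pi_1, ..., pi_s)
    theta = rng.normal(0.0, 10.0, size=s)     # component parameters theta_1, ..., theta_s
    z = rng.choice(s, size=n, p=pi)           # latent component assignments
    x = rng.normal(theta[z], 1.0)             # observations from unit-variance components
    return s, pi, theta, x

s, pi, theta, x = sample_finite_mixture(200)
print(s, np.round(pi, 2))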
However, as a practical matter, when dealing with real-world data, one would not expect to find data
coming exactly from a finite mixture of a known family (except, perhaps, in rare circumstances).
Unfortunately, even for a model as in the preceding paragraph, the posterior on the number of components will typically be highly sensitive to misspecification, and it seems likely that in order to
obtain robust estimators, the problem itself may need to be reformulated. We urge researchers interested in the number of components to be wary of this robustness issue, and to think carefully about
whether they really need to estimate the number of components, or whether some other measure of
heterogeneity will suffice.
3 Setup
In this section, we define the Dirichlet process mixture model under consideration.
3.1 Dirichlet process mixture model
The DPM model was introduced by Ferguson [16] and Lo [17] for the purpose of Bayesian density estimation, and was made practical through the efforts of several authors (see [18] and references therein). We will use p(·) to denote probabilities under the DPM model (as opposed to
other probability distributions that will be considered in what follows). The core of the DPM is the
so-called Chinese restaurant process (CRP), which defines a certain probability distribution on partitions. Given n ∈ {1, 2, . . .} and t ∈ {1, . . . , n}, let A_t(n) denote the set of all ordered partitions
(A₁, . . . , A_t) of {1, . . . , n} into t nonempty sets. In other words,

    A_t(n) = { (A₁, . . . , A_t) : A₁, . . . , A_t are disjoint, ∪_{i=1}^t A_i = {1, . . . , n}, |A_i| ≥ 1 ∀i }.

The CRP with concentration parameter α > 0 defines a probability mass function on A(n) =
∪_{t=1}^n A_t(n) by setting

    p(A) = (α^t / (α^{(n)} t!)) ∏_{i=1}^t (|A_i| - 1)!

for A ∈ A_t(n), where α^{(n)} = α(α + 1) · · · (α + n - 1). Note that since t is a function of A,
we have p(A) = p(A, t). (It is more common to see this distribution defined in terms of unordered
partitions {A₁, . . . , A_t}, in which case the t! does not appear in the denominator; however, for our
purposes it is more convenient to use the distribution on ordered partitions (A₁, . . . , A_t) obtained
by uniformly permuting the parts. This does not affect the prior or posterior on t.)
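The CRP prior and its partition pmf are easy to check numerically. The Python sketch below (our own illustration; the seating-based sampler and function names are not from the paper) draws a partition by sequential seating and evaluates the log of the ordered-partition pmf defined above.

import numpy as np
from math import factorial, prod

rng = np.random.default_rng(0)

def crp_sample(n, alpha=1.0):
    """Sample a partition of {0, ..., n-1} by seating customers one at a
    time: join a table with prob. proportional to its size, or start a
    new table with prob. proportional to alpha."""
    tables = []
    for j in range(n):
        weights = np.array([len(t) for t in tables] + [alpha])
        k = rng.choice(len(weights), p=weights / (j + alpha))
        if k == len(tables):
            tables.append([j])
        else:
            tables[k].append(j)
    return tables

def crp_log_pmf_ordered(parts, alpha=1.0):
    """log p(A) for an ORDERED partition, as in the text:
    alpha^t / alpha^(n) * (1/t!) * prod_i (|A_i| - 1)!"""
    n = sum(len(p) for p in parts)
    t = len(parts)
    rising = prod(alpha + i for i in range(n))   # alpha^(n), the rising factorial
    val = alpha**t / rising / factorial(t) * prod(factorial(len(p) - 1) for p in parts)
    return np.log(val)

A = crp_sample(10)
print(len(A), crp_log_pmf_ordered(A))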
Consider the hierarchical model

    p(A, t) = p(A) = (α^t / (α^{(n)} t!)) ∏_{i=1}^t (|A_i| - 1)!,
    p(θ_{1:t} | A, t) = ∏_{i=1}^t π(θ_i), and                           (3.1)
    p(x_{1:n} | θ_{1:t}, A, t) = ∏_{i=1}^t ∏_{j∈A_i} p_{θ_i}(x_j),

where π(θ) is a prior on component parameters θ ∈ Θ, and {p_θ : θ ∈ Θ} is a parametrized family
of distributions on x ∈ X for the components. Typically, X ⊂ R^d and Θ ⊂ R^k for some d and k.
Here, x_{1:n} = (x₁, . . . , x_n) with x_i ∈ X, and θ_{1:t} = (θ₁, . . . , θ_t) with θ_i ∈ Θ. This hierarchical
model is referred to as a Dirichlet process mixture (DPM) model.
The prior on the number of clusters t under this model is p_n(t) = Σ_{A∈A_t(n)} p(A, t). We use T_n
(rather than T) to denote the random variable representing the number of clusters, as a reminder
that its distribution depends on n. Note that we distinguish between the terms "component" and
"cluster": a component is part of a mixture distribution (e.g. a mixture Σ_{i=1}^∞ π_i p_{θ_i} has components
p_{θ₁}, p_{θ₂}, . . .), while a cluster is the set of indices of data points coming from a given component
(e.g. in the DPM model above, A₁, . . . , A_t are the clusters).
Since we are concerned with the posterior distribution p(T_n = t | x_{1:n}) on the number of clusters,
we will be especially interested in the marginal distribution on (x_{1:n}, t), given by

    p(x_{1:n}, T_n = t) = Σ_{A∈A_t(n)} ∫ p(x_{1:n}, θ_{1:t}, A, t) dθ_{1:t}
                        = Σ_{A∈A_t(n)} p(A) ∏_{i=1}^t ∫ ∏_{j∈A_i} p_{θ_i}(x_j) π(θ_i) dθ_i
                        = Σ_{A∈A_t(n)} p(A) ∏_{i=1}^t m(x_{A_i})                  (3.2)

where for any subset of indices S ⊂ {1, . . . , n}, we denote x_S = (x_j : j ∈ S) and let m(x_S)
denote the single-cluster marginal of x_S,

    m(x_S) = ∫ ∏_{j∈S} p_θ(x_j) π(θ) dθ.                                         (3.3)
3.2 Specialization to the standard normal case
In this note, for brevity and clarity, we focus on the univariate normal case with unit variance, with
a standard normal prior on means; that is, for x ∈ R and θ ∈ R,

    p_θ(x) = N(x | θ, 1) = (1/√(2π)) exp(-(x - θ)²/2), and
    π(θ) = N(θ | 0, 1) = (1/√(2π)) exp(-θ²/2).

It is a straightforward calculation to show that the single-cluster marginal is then

    m(x_{1:n}) = (1/√(n + 1)) p₀(x_{1:n}) exp( (Σ_{j=1}^n x_j)² / (2(n + 1)) ),   (3.4)

where p₀(x_{1:n}) = p₀(x₁) · · · p₀(x_n) (and p₀ is the N(0, 1) density). When p_θ(x) and π(θ) are as
above, we refer to the resulting DPM as a standard normal DPM.
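Equation 3.4 can be verified directly. Below is a minimal Python sketch (ours; it assumes SciPy is available) that computes log m(x_{1:n}) in closed form and checks it against brute-force numerical integration over the component mean.

import numpy as np
from scipy.stats import norm

def log_single_cluster_marginal(x):
    """log m(x_{1:n}) for the standard normal DPM, via Equation 3.4:
    m(x) = (n+1)^(-1/2) * p0(x) * exp( (sum_j x_j)^2 / (2(n+1)) )."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_p0 = norm.logpdf(x).sum()
    return -0.5 * np.log(n + 1) + log_p0 + 0.5 * x.sum() ** 2 / (n + 1)

# Sanity check against direct numerical integration over the mean theta.
x = np.array([0.3, -1.2, 0.7])
thetas = np.linspace(-10, 10, 20001)
integrand = norm.pdf(thetas) * np.prod(norm.pdf(x[:, None] - thetas), axis=0)
direct = integrand.sum() * (thetas[1] - thetas[0])
print(np.exp(log_single_cluster_marginal(x)), direct)  # the two should agree closely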
4 Simple example of inconsistency
In this section, we prove the following result, exhibiting a simple example in which a DPM is
inconsistent for the number of components: even when the true number of components is 1 (e.g.
N(μ, 1) data), the posterior probability of T_n = 1 does not converge to 1. Interestingly, the result
applies even when X₁, X₂, . . . are identically equal to a constant c ∈ R. To keep it simple, we set
α = 1; for more general results, see [7].

Theorem 4.1. If X₁, X₂, . . . ∈ R are i.i.d. from any distribution with E|X_i| < ∞, then with
probability 1, under the standard normal DPM with α = 1 as defined above, p(T_n = 1 | X_{1:n}) does
not converge to 1 as n → ∞.
Proof. Let n ∈ {2, 3, . . .}. Let x₁, . . . , x_n ∈ R, A ∈ A₂(n), and a_i = |A_i| for i = 1, 2.
Define s_n = Σ_{j=1}^n x_j and s_{A_i} = Σ_{j∈A_i} x_j for i = 1, 2. Using Equation 3.4 and noting that
1/(n + 1) ≤ 1/(n + 2) + 1/n², we have

    m(x_{1:n}) / p₀(x_{1:n}) = (1/√(n + 1)) exp( s_n² / (2(n + 1)) )
                             ≤ exp( s_n² / (2(n + 2)) ) exp( s_n² / (2n²) ).

The second factor equals exp(x̄_n²/2), where x̄_n = (1/n) Σ_{j=1}^n x_j. By the convexity of x ↦ x²,

    ( s_n / (n + 2) )² ≤ ((a₁ + 1)/(n + 2)) ( s_{A₁}/(a₁ + 1) )² + ((a₂ + 1)/(n + 2)) ( s_{A₂}/(a₂ + 1) )²,

and thus, the first factor is less or equal to

    exp( s_{A₁}²/(2(a₁ + 1)) + s_{A₂}²/(2(a₂ + 1)) ) = √(a₁ + 1) √(a₂ + 1) m(x_{A₁}) m(x_{A₂}) / p₀(x_{1:n}).

Hence,

    m(x_{1:n}) / ( m(x_{A₁}) m(x_{A₂}) ) ≤ ( √(a₁ + 1) √(a₂ + 1) / √(n + 1) ) exp(x̄_n²/2).     (4.1)

Consequently, we have

    p(x_{1:n}, T_n = 2) / p(x_{1:n}, T_n = 1)
      (a) = Σ_{A∈A₂(n)} n p(A) m(x_{A₁}) m(x_{A₂}) / m(x_{1:n})
      (b) ≥ Σ_{A∈A₂(n)} n p(A) ( √(n + 1) / ( √(|A₁| + 1) √(|A₂| + 1) ) ) exp(-x̄_n²/2)
      (c) ≥ Σ_{A∈A₂(n): |A₁|=1} n ( (n - 2)!/(n! 2!) ) ( √(n + 1) / ( √2 √n ) ) exp(-x̄_n²/2)
      (d) ≥ (1/(2√2)) exp(-x̄_n²/2),

where step (a) follows from applying Equation 3.2 to both numerator and denominator, plus using
Equation 3.1 (with α = 1) to see that p(A) = 1/n when A = ({1, . . . , n}), step (b) follows from
Equation 4.1 above, step (c) follows since all the terms in the sum are nonnegative and p(A) =
(n - 2)!/(n! 2!) when |A₁| = 1 (by Equation 3.1, with α = 1), and step (d) follows since there are n
partitions A ∈ A₂(n) such that |A₁| = 1.

If X₁, X₂, . . . ∈ R are i.i.d. with μ = EX_j finite, then by the law of large numbers, X̄_n =
(1/n) Σ_{j=1}^n X_j → μ almost surely as n → ∞. Therefore,

    p(T_n = 1 | X_{1:n}) = p(X_{1:n}, T_n = 1) / Σ_{t=1}^∞ p(X_{1:n}, T_n = t)
                         ≤ p(X_{1:n}, T_n = 1) / ( p(X_{1:n}, T_n = 1) + p(X_{1:n}, T_n = 2) )
                         = 1 / ( 1 + p(X_{1:n}, T_n = 2)/p(X_{1:n}, T_n = 1) )
                         → 1 / ( 1 + (1/(2√2)) exp(-μ²/2) ) < 1   almost surely.

Hence, almost surely, p(T_n = 1 | X_{1:n}) does not converge to 1. ∎
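For small n, the ratio bounded in step (d) can be computed exactly by enumerating all two-block partitions, which makes the bound easy to sanity-check. The Python sketch below is our own illustration (not from the paper); the unordered-partition probability (k-1)!(n-k-1)!/n! for α = 1 follows from Equation 3.1 after summing the two orderings of each pair.

import numpy as np
from math import factorial
from itertools import combinations

def log_m(x):
    # log single-cluster marginal, Equation 3.4 (standard normal DPM)
    x = np.asarray(x, float)
    n = len(x)
    log_p0 = -0.5 * n * np.log(2 * np.pi) - 0.5 * (x ** 2).sum()
    return -0.5 * np.log(n + 1) + log_p0 + 0.5 * x.sum() ** 2 / (n + 1)

def ratio_t2_t1(x):
    """Exact p(x, T_n = 2) / p(x, T_n = 1) for alpha = 1, by enumerating
    all unordered two-block partitions of {0, ..., n-1}."""
    n = len(x)
    idx = set(range(n))
    log_m_full = log_m(x)
    total = 0.0
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            if k == n - k and 0 not in S:
                continue  # keep one representative of each equal-size split
            A1, A2 = list(S), sorted(idx - set(S))
            # unordered-pair probability: (k-1)!(n-k-1)!/n!  (alpha = 1)
            log_pA = (np.log(factorial(k - 1)) + np.log(factorial(n - k - 1))
                      - np.log(factorial(n)))
            total += np.exp(log_pA + log_m(x[A1]) + log_m(x[A2]) - log_m_full)
    return total / (1.0 / n)   # p(A) = 1/n for the one-block partition

rng = np.random.default_rng(1)
x = rng.normal(size=8)
lower = np.exp(-0.5 * x.mean() ** 2) / (2 * np.sqrt(2))
print(ratio_t2_t1(x), ">=", lower)   # the bound from step (d) of the proof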
5 Severe inconsistency
In the previous section, we showed that p(T_n = 1 | X_{1:n}) does not converge to 1 for a standard
normal DPM on any data with finite mean. In this section, we prove that in fact, it converges to 0, at
least on standard normal data. This vividly illustrates that improperly using DPMs in this way can
lead to entirely misleading results. The key step in the proof is an application of Hoeffding's strong
law of large numbers for U-statistics.

Theorem 5.1. If X₁, X₂, . . . ∼ N(0, 1) i.i.d. then

    p(T_n = 1 | X_{1:n}) → 0 in probability as n → ∞

under the standard normal DPM with concentration parameter α = 1.
Proof. For t = 1 and t = 2 define

    R_t(X_{1:n}) = n^{3/2} p(X_{1:n}, T_n = t) / p₀(X_{1:n}).

Our method of proof is as follows. We will show that

    R₂(X_{1:n}) → ∞ in probability

(or in other words, for any B > 0 we have P(R₂(X_{1:n}) > B) → 1 as n → ∞), and we will show
that R₁(X_{1:n}) is bounded in probability:

    R₁(X_{1:n}) = O_P(1)

(or in other words, for any ε > 0 there exists B_ε > 0 such that P(R₁(X_{1:n}) > B_ε) ≤ ε for all
n ∈ {1, 2, . . .}). Putting these two together, we will have

    p(T_n = 1 | X_{1:n}) = p(X_{1:n}, T_n = 1) / Σ_{t=1}^∞ p(X_{1:n}, T_n = t)
                         ≤ p(X_{1:n}, T_n = 1) / p(X_{1:n}, T_n = 2)
                         = R₁(X_{1:n}) / R₂(X_{1:n}) → 0 in probability.

First, let's show that R₂(X_{1:n}) → ∞ in probability. For S ⊂ {1, . . . , n} with |S| ≥ 1, define h(x_S)
by

    h(x_S) = m(x_S) / p₀(x_S) = (1/√(|S| + 1)) exp( (Σ_{j∈S} x_j)² / (2(|S| + 1)) ),

where m is the single-cluster marginal as in Equations 3.3 and 3.4. Note that when 1 ≤ |S| ≤ n - 1,
we have √n h(x_S) ≥ 1. Note also that Eh(X_S) = 1 since

    Eh(X_S) = ∫ h(x_S) p₀(x_S) dx_S = ∫ m(x_S) dx_S = 1,

using the fact that m(x_S) is a density with respect to Lebesgue measure. For k ∈ {1, . . . , n}, define
the U-statistics

    U_k(X_{1:n}) = (1/C(n,k)) Σ_{|S|=k} h(X_S),

where the sum is over all S ⊂ {1, . . . , n} such that |S| = k, and C(n,k) is the binomial coefficient.
By Hoeffding's strong law of large numbers for U-statistics [19],

    U_k(X_{1:n}) → Eh(X_{1:k}) = 1 almost surely as n → ∞

for any k ∈ {1, 2, . . .}. Therefore, using Equations 3.1 and 3.2 we have that for any K ∈ {1, 2, . . .}
and any n > K,

    R₂(X_{1:n}) = n^{3/2} Σ_{A∈A₂(n)} p(A) m(X_{A₁}) m(X_{A₂}) / p₀(X_{1:n})
                = n Σ_{A∈A₂(n)} p(A) h(X_{A₁}) ( √n h(X_{A₂}) )
                ≥ n Σ_{A∈A₂(n)} p(A) h(X_{A₁})
                = n Σ_{k=1}^{n-1} Σ_{|S|=k} ( (k - 1)!(n - k - 1)! / (n! 2!) ) h(X_S)
                = Σ_{k=1}^{n-1} ( n / (2k(n - k)) ) (1/C(n,k)) Σ_{|S|=k} h(X_S)
                = Σ_{k=1}^{n-1} ( n / (2k(n - k)) ) U_k(X_{1:n})
                ≥ Σ_{k=1}^{K} ( n / (2k(n - k)) ) U_k(X_{1:n})
                → Σ_{k=1}^{K} 1/(2k) = H_K/2 > (log K)/2 almost surely,

where H_K is the K-th harmonic number, and the last inequality follows from the standard bounds
[20] on harmonic numbers: log K < H_K ≤ log K + 1. Hence, for any K,

    lim inf_{n→∞} R₂(X_{1:n}) > (log K)/2   almost surely,

and it follows easily that

    R₂(X_{1:n}) → ∞ almost surely.

Convergence in probability is implied by almost sure convergence.

Now, let's show that R₁(X_{1:n}) = O_P(1). By Equations 3.1, 3.2, and 3.4, we have

    R₁(X_{1:n}) = n^{3/2} p(X_{1:n}, T_n = 1) / p₀(X_{1:n}) = √n m(X_{1:n}) / p₀(X_{1:n})
                = √(n/(n + 1)) exp( (n/(n + 1)) ( (1/√n) Σ_{i=1}^n X_i )² / 2 ) ≤ exp(Z_n²/2),

where Z_n = (1/√n) Σ_{i=1}^n X_i ∼ N(0, 1) for each n ∈ {1, 2, . . .}. Since Z_n = O_P(1) then we
conclude that R₁(X_{1:n}) = O_P(1). This completes the proof. ∎
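The lower bound Σ_{k≤K} (n/(2k(n-k))) U_k used above is also straightforward to estimate by Monte Carlo, replacing each exact U-statistic with an average over random size-k subsets. A minimal Python sketch (our own illustration, not from the paper):

import numpy as np

rng = np.random.default_rng(0)

def h(xS):
    """h(x_S) = m(x_S) / p0(x_S) for the standard normal DPM."""
    k = len(xS)
    return np.exp(0.5 * xS.sum() ** 2 / (k + 1)) / np.sqrt(k + 1)

def r2_lower_bound(x, K=10, mc=2000):
    """Monte Carlo estimate of sum_{k<=K} (n / (2k(n-k))) U_k, the lower
    bound on R_2 from the proof; each U_k is approximated by an average
    of h over mc random size-k subsets."""
    n = len(x)
    total = 0.0
    for k in range(1, min(K, n - 1) + 1):
        U_k = np.mean([h(x[rng.choice(n, size=k, replace=False)])
                       for _ in range(mc)])
        total += n * U_k / (2 * k * (n - k))
    return total

for n in [50, 200, 1000]:
    x = rng.normal(size=n)
    # With U_k -> 1, the bound approaches H_K / 2 (about 1.46 for K = 10);
    # letting K grow with n makes it diverge, as in the proof.
    print(n, r2_lower_bound(x))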
Acknowledgments
We would like to thank Stu Geman for raising this question, and the anonymous referees for several
helpful suggestions that improved the quality of this manuscript. This research was supported in part
by the National Science Foundation under grant DMS-1007593 and the Defense Advanced Research
Projects Agency under contract FA8650-11-1-715.
References
[1] S. Ghosal. The Dirichlet process, related priors and posterior asymptotics. In N.L. Hjort,
C. Holmes, P. Müller, and S.G. Walker, editors, Bayesian Nonparametrics, pages 36-83. Cambridge University Press, 2010.
[2] J.P. Huelsenbeck and P. Andolfatto. Inference of population structure under a Dirichlet process
model. Genetics, 175(4):1787-1802, 2007.
[3] M. Medvedovic and S. Sivaganesan. Bayesian infinite mixture model based clustering of gene
expression profiles. Bioinformatics, 18(9):1194-1206, 2002.
[4] E. Otranto and G.M. Gallo. A nonparametric Bayesian approach to detect the number of
regimes in Markov switching models. Econometric Reviews, 21(4):477-496, 2002.
[5] E.P. Xing, K.A. Sohn, M.I. Jordan, and Y.W. Teh. Bayesian multi-population haplotype inference via a hierarchical Dirichlet process mixture. In Proceedings of the 23rd International
Conference on Machine Learning, pages 1049-1056, 2006.
[6] P. Fearnhead. Particle filters for mixture models with an unknown number of components.
Statistics and Computing, 14(1):11-21, 2004.
[7] J.W. Miller and M.T. Harrison. Inconsistency of Pitman-Yor process mixtures for the number
of components. arXiv:1309.0024, 2013.
[8] M. West, P. Müller, and M.D. Escobar. Hierarchical priors and mixture models, with application in regression and density estimation. Institute of Statistics and Decision Sciences, Duke
University, 1994.
[9] A. Onogi, M. Nurimoto, and M. Morita. Characterization of a Bayesian genetic clustering
algorithm based on a Dirichlet process prior and comparison among Bayesian clustering methods. BMC Bioinformatics, 12(1):263, 2011.
[10] A. Nobile. Bayesian Analysis of Finite Mixture Distributions. PhD thesis, Department of
Statistics, Carnegie Mellon University, Pittsburgh, PA, 1994.
[11] S. Richardson and P.J. Green. On Bayesian analysis of mixtures with an unknown number of
components. Journal of the Royal Statistical Society, Series B, 59(4):731-792, 1997.
[12] P.J. Green and S. Richardson. Modeling heterogeneity with and without the Dirichlet process.
Scandinavian Journal of Statistics, 28(2):355-375, June 2001.
[13] A. Nobile and A.T. Fearnside. Bayesian finite mixtures with an unknown number of components: The allocation sampler. Statistics and Computing, 17(2):147-162, 2007.
[14] W. Kruijer, J. Rousseau, and A. Van der Vaart. Adaptive Bayesian density estimation with
location-scale mixtures. Electronic Journal of Statistics, 4:1225-1257, 2010.
[15] P. McCullagh and J. Yang. How many clusters? Bayesian Analysis, 3(1):101-120, 2008.
[16] T.S. Ferguson. Bayesian density estimation by mixtures of normal distributions. In M.H.
Rizvi, J. Rustagi, and D. Siegmund, editors, Recent Advances in Statistics, pages 287-302.
Academic Press, 1983.
[17] A.Y. Lo. On a class of Bayesian nonparametric estimates: I. Density estimates. The Annals of
Statistics, 12(1):351-357, 1984.
[18] M.D. Escobar and M. West. Computing nonparametric hierarchical models. In D. Dey,
P. Müller, and D. Sinha, editors, Practical Nonparametric and Semiparametric Bayesian Statistics, pages 1-22. Springer-Verlag, New York, 1998.
[19] W. Hoeffding. The strong law of large numbers for U-statistics. Institute of Statistics, Univ. of
N. Carolina, Mimeograph Series, 302, 1961.
[20] R.L. Graham, D.E. Knuth, and O. Patashnik. Concrete Mathematics. Addison-Wesley, 1989.
4,288 | 4,881 | Approximate Bayesian Image Interpretation using
Generative Probabilistic Graphics Programs
Vikash K. Mansinghka?
1,2
, Tejas D. Kulkarni?
1,2
, Yura N. Perov1,2,3 , and Joshua B. Tenenbaum1,2
1
Computer Science and Artificial Intelligence Laboratory, MIT
2
Department of Brain and Cognitive Sciences, MIT
3
Institute of Mathematics and Computer Science, Siberian Federal University
Abstract
The idea of computer vision as the Bayesian inverse problem to computer graphics
has a long history and an appealing elegance, but it has proved difficult to directly
implement. Instead, most vision tasks are approached via complex bottom-up
processing pipelines. Here we show that it is possible to write short, simple probabilistic graphics programs that define flexible generative models and to automatically invert them to interpret real-world images. Generative probabilistic graphics
programs (GPGP) consist of a stochastic scene generator, a renderer based on
graphics software, a stochastic likelihood model linking the renderer?s output and
the data, and latent variables that adjust the fidelity of the renderer and the tolerance of the likelihood. Representations and algorithms from computer graphics
are used as the deterministic backbone for highly approximate and stochastic generative models. This formulation combines probabilistic programming, computer
graphics, and approximate Bayesian computation, and depends only on generalpurpose, automatic inference techniques. We describe two applications: reading sequences of degraded and adversarially obscured characters, and inferring
3D road models from vehicle-mounted camera images. Each of the probabilistic
graphics programs we present relies on under 20 lines of probabilistic code, and
yields accurate, approximately Bayesian inferences about real-world images.
1 Introduction
Computer vision has historically been formulated as the problem of producing symbolic descriptions
of scenes from input images [10]. This is usually done by building bottom-up processing pipelines
that isolate the portions of the image associated with each scene element and extract features that
signal its identity. Many pattern recognition and learning techniques can then be used to build
classifiers for individual scene elements, and sometimes to learn the features themselves [11, 7].
This approach has been remarkably successful, especially on problems of recognition. Bottom-up
pipelines that combine image processing and machine learning can identify written characters with
high accuracy and recognize objects from large sets of possibilities. However, the resulting systems
typically require large training corpuses to achieve reasonable levels of accuracy, and are difficult
both to build and modify. For example, the Tesseract system [16] for optical character recognition
is over 10, 000 lines of C++. Small changes to the underlying assumptions frequently necessitates
end-to-end retraining and/or redesign.
Generative models for a range of image parsing tasks are also being explored [17, 4, 18, 22, 20].
These provide an appealing avenue for integrating top-down constraints with bottom-up processing,
* The first two authors contributed equally to this work.
* (vkm, tejask, perov, jbt)@mit.edu. Project URL: http://probcomp.csail.mit.edu/gpgp/
and provide an inspiration for the approach we take in this paper. But like traditional bottom-up
pipelines for vision, these approaches have relied on considerable problem-specific engineering,
chiefly to design and/or learn custom inference strategies, such as MCMC proposals [18, 22] that
incorporate bottom-up cues. Other combinations of top-down knowledge with bottom up processing
have been remarkably powerful [9]. For example, [8] has shown that global, 3D geometric information can significantly improve the performance of bottom-up object detectors.
In this paper, we propose a novel formulation of image interpretation problems, called generative
probabilistic graphics programming (GPGP). GPGP shares a common template: a stochastic scene
generator, an approximate renderer based on existing graphics software, a highly stochastic likelihood model for comparing the renderer?s output with the observed data, and latent variables that
control the fidelity of the renderer and the tolerance of the image likelihood. Our probabilistic
graphics programs are written in Venture, a probabilistic programming language descended from
Church [6]. Each model we introduce requires less than 20 lines of probabilistic code. The renderers and likelihoods for each are based on standard templates written as short Python programs.
Unlike typical generative models for scene parsing, inverting our probabilistic graphics programs requires no custom inference algorithm design. Instead, we rely on the automatic Metropolis-Hastings
(MH) transition operators provided by our probabilistic programming system. The approximations
and stochasticity in our renderer, scene generator and likelihood models serve to implement a variant
of approximate Bayesian computation [19, 12]. This combination can produce a kind of self-tuning
analogue of annealing that facilitates reliable convergence.
To the best of our knowledge, our GPGP framework is the first real-world image interpretation formulation to combine all of the following elements: probabilistic programming, automatic inference,
computer graphics, and approximate Bayesian computation; this constitutes our main contribution.
Our second contribution is to provide demonstrations of the efficacy of this approach on two image interpretation problems: reading snippets of degraded and adversarially obscured alphanumeric
characters, and inferring 3D road models from vehicle mounted cameras. In both cases we quantitatively report the accuracy of our approach on representative test datasets, as compared to standard
bottom-up baselines that have been extensively engineered.
2 Generative Probabilistic Graphics Programs and Approximate Bayesian Inference
GPGP defines generative models for images by combining four components. The first is a stochastic scene generator written as probabilistic code that makes random choices for the location and
configuration of the main elements in the scene. The second is an approximate renderer based on
existing graphics software that maps a scene S and control variables X to an image IR = f (S, X).
The third is a stochastic likelihood model for image data ID that enables scoring of rendered scenes
given the control variables. The fourth is a set of latent variables X that control the fidelity of the
renderer and/or the tolerance in the stochastic likelihood model. These components are described
schematically in Figure 1.
We formulate image interpretation tasks in terms of sampling (approximately) from the posterior
distribution over images:
    P(S | I_D) ∝ ∫ P(S) P(X) δ_{f(S,X)}(I_R) P(I_D | I_R, X) dX
We perform inference over execution histories of our probabilistic graphics programs using a
uniform mixture of generic, single-variable Metropolis-Hastings transitions, without any custom,
bottom-up proposals. We first give a general description of the generative model and inference algorithm induced by our probabilistic graphics programs; in later sections, we describe specific details
for each application.
Let S = {Si } be a decomposition of the scene S into parts Si with independent priors P (Si ). For
example, in our text application, the S_i's include binary indicators for the presence or absence of each
glyph, along with its identity (?A? through ?Z?, plus digits 0-9), and parameters including location,
size and rotation. Also let X = {Xj } be a decomposition of the control variables X into parts Xj
with priors P (Xj ), such as the bandwidths of per-glyph Gaussian spatial blur kernels, the variance
[Figure 1 diagram: Stochastic Scene Generator (S ~ P(S), X ~ P(X)) → Approximate Renderer (I_R = f(S,X)) → Stochastic Comparison P(I_D | I_R, X) against Data I_D]
Figure 1: An overview of the GPGP framework. Each of our models shares a common template: a
stochastic scene generator which samples possible scenes S according to their prior, latent variables
X that control the fidelity of the rendering and the tolerance of the model, an approximate renderer
f(S, X) → I_R based on existing graphics software, and a stochastic likelihood model P(I_D | I_R, X)
that links observed and rendered images. A scene S* sampled from the scene generator according to
P(S) could be rendered onto a single image I_R*. This would be extremely unlikely to exactly match
the data I_D*. Instead of requiring exact matches, our formulation can broaden the renderer's output
P(I_R | S*) and the image likelihood P(I_D* | I_R) via the latent control variables X. Inference over X
mediates the degree of smoothing in the posterior.
of a Gaussian image likelihood, and so on. Our proposals modify single elements of the scene and
control variables at a time, as follows:
    P(S) = ∏_i P(S_i),    q_i(S_i', S_i) = P(S_i')
    P(X) = ∏_j P(X_j),    q_j(X_j', X_j) = P(X_j')
Now let K = |{S_i}| + |{X_j}| be the total number of random variables in each execution. For
simplicity, we describe the case where this number can be bounded above beforehand, i.e. total
a priori scene complexity is limited. At each inference step, we choose a random variable index
k < K uniformly at random. If k corresponds to a scene variable i, then we propose from q_i(S_i', S_i),
so our overall proposal kernel is q((S, X) → (S', X')) = δ_{S_{-i}}(S'_{-i}) P(S_i') δ_X(X'). If k corresponds
to a control variable j, we propose from q_j(X_j', X_j). In both cases we re-render the scene, I_R' =
f(S', X'). We then run the kernel associated with this variable, and accept or reject via the MH
equation:

    α_MH((S, X) → (S', X')) = min( 1, [ P(I_D | f(S', X'), X') P(S') P(X') q((S', X') → (S, X)) ] /
                                      [ P(I_D | f(S, X), X) P(S) P(X) q((S, X) → (S', X')) ] )
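A minimal sketch of this inference loop is below (illustrative Python, not the Venture implementation; render, log_lik, and priors are hypothetical stand-ins). Because each proposal draws a fresh value from the variable's prior, the prior and proposal terms cancel in the MH ratio, leaving a pure likelihood ratio.

import numpy as np

rng = np.random.default_rng(0)

def run_mh(scene, controls, data, render, log_lik, priors, steps=1000):
    """Generic single-variable Metropolis-Hastings over the scene and
    control variables. `scene` and `controls` are dicts with disjoint
    keys; `priors[key]()` samples that variable's prior; render(scene,
    controls) plays the role of f(S, X). With prior proposals,
    q_i(S_i') = P(S_i'), so the acceptance test reduces to comparing
    log-likelihoods."""
    cur_ll = log_lik(data, render(scene, controls), controls)
    slots = [(scene, k) for k in scene] + [(controls, k) for k in controls]
    for _ in range(steps):
        store, key = slots[rng.integers(len(slots))]
        old = store[key]
        store[key] = priors[key]()                 # propose from the prior
        new_ll = log_lik(data, render(scene, controls), controls)
        if np.log(rng.uniform()) < new_ll - cur_ll:
            cur_ll = new_ll                        # accept
        else:
            store[key] = old                       # reject
    return scene, controls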
We implement our probabilistic graphics programs in the Venture probabilistic programming language. The Metropolis-Hastings inference algorithm we use is provided by default in this system;
no custom inference code is required. In the context of our GPGP formulation, this algorithm makes
implicit use of ideas from approximate Bayesian computation (ABC). ABC methods approximate
Bayesian inference over complex generative processes by using an exogenous distance function
to compare sampled outputs with observed data. In the original rejection sampling formulation,
samples are accepted only if they match the data within a hard threshold. Subsequently, combinations of ABC and MCMC were proposed [12], including variants with inference over the threshold
value [15]. Most recently, extensions have been introduced where the hard cutoff is replaced with
a stochastic likelihood model [19]. Our formulation incorporates a combination of these insights:
rendered scenes are only approximately constrained to match the observed image, with the tightness of the match mediated by inference over factors such as the fidelity of the rendering and the
stochasticity in the likelihood. This allows image variability that is unnecessary or even undesirable
to model to be treated in a principled fashion.
Figure 2: Four input images from our CAPTCHA corpus, along with the final results and convergence trajectory of typical inference runs. The first row is a highly cluttered synthetic CAPTCHA
exhibiting extreme letter overlap. The second row is a CAPTCHA from TurboTax, the third row
is a CAPTCHA from AOL, and the fourth row shows an example where our system makes errors
on some runs. Our probabilistic graphics program did not originally support rotation, which was
needed for the AOL CAPTCHAs; adding it required only 1 additional line of probabilistic code. See
the main text for quantitative details, and supplemental material for the full corpus.
3 Generative Probabilistic Graphics in 2D for Reading Degraded Text
We developed a probabilistic graphics program for reading short snippets of degraded text consisting
of arbitrary digits and letters. See Figure 2 for representative inputs and outputs. In this program,
the latent scene S = {Si } contains a bank of variables for each glyph, including whether a potential
letter is present or absent from the scene, what its spatial coordinates and size are, what its identity
is, and how it is rotated:
    P(S_i^pres = 1) = 0.5
    P(S_i^x = x) = 1/w for 0 ≤ x ≤ w, and 0 otherwise
    P(S_i^y = y) = 1/h for 0 ≤ y ≤ h, and 0 otherwise
    P(S_i^{glyph id} = g) = 1/G for 0 ≤ g < G, and 0 otherwise
    P(S_i^θ = θ) = 1/(2θ_max) for -θ_max ≤ θ < θ_max, and 0 otherwise
Our renderer rasterizes each letter independently, applies a spatial blur to each image, composites
the letters, and then blurs the result. We also applied global blur to the original training image
before applying the stochastic likelihood model on the blurred original and rendered images. The
stochastic likelihood model is a multivariate Gaussian whose mean is the blurry rendering; formally,
I_D ∼ N(I_R; σ). The control variables X = {X_j} for the renderer and likelihood consist of per-letter
Gaussian spatial blur bandwidths X_i^blur, a global image blur on the rendered image, a global image
blur on the original test image (each a scaled Beta(1, 2) variable), and the standard deviation of the
Gaussian likelihood σ ∼ Gamma(1, 1) (with the scale constants set to favor small bandwidths). To
make hard classification decisions, we use the sample with lowest pixel reconstruction error from
a set of 5 approximate posterior samples. We also experimented with enabling enumerative (griddy)
Gibbs sampling for uniform discrete variables with 10% probability. The probabilistic code for this
model is shown in Figure 4.
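A minimal sketch of the corresponding render-and-score step is below (illustrative Python using Pillow; the font path, canvas size, and compositing rule are assumptions, not the paper's exact settings).

import numpy as np
from PIL import Image, ImageDraw, ImageFont, ImageFilter

GLYPHS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"

def render(scene, width=200, height=70, font_path="DejaVuSans.ttf"):
    """Rasterize each active glyph, blur it per-letter, and composite,
    as in Figure 4. `scene` is a list of dicts with keys: present (bool),
    x, y (ints), size (int), rot (degrees), glyph (index), blur (radius)."""
    canvas = Image.new("L", (width, height), 0)
    for g in scene:
        if not g["present"]:
            continue
        layer = Image.new("L", (width, height), 0)
        font = ImageFont.truetype(font_path, g["size"])
        ImageDraw.Draw(layer).text((g["x"], g["y"]), GLYPHS[g["glyph"]],
                                   fill=255, font=font)
        layer = layer.rotate(g["rot"], center=(g["x"], g["y"]))
        layer = layer.filter(ImageFilter.GaussianBlur(g["blur"]))
        canvas = Image.composite(layer, canvas, layer)  # glyph over background
    return np.asarray(canvas, dtype=float) / 255.0

def log_likelihood(data_img, rendered, global_blur, sigma):
    """Pixel-wise Gaussian log-likelihood (up to an additive constant),
    after a global blur of the rendering."""
    blurred = Image.fromarray((rendered * 255).astype(np.uint8))
    blurred = np.asarray(blurred.filter(ImageFilter.GaussianBlur(global_blur)),
                         dtype=float) / 255.0
    return -0.5 * np.sum((data_img - blurred) ** 2) / sigma ** 2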
To assess the accuracy of our approach on adversarially obscured text, we developed a corpus consisting of over 40 images from widely used websites such as TurboTax, E-Trade, and AOL, plus
additional challenging synthetic CAPTCHAs with high degrees of letter overlap and superimposed
distractors. Each source of text violates the underlying assumptions of our probabilistic graphics
program in different ways. TurboTax CAPTCHAs incorporate occlusions that break strokes within
Figure 3: Inference over renderer fidelity significantly improves the reliability of inference. (a) Reconstruction errors for 5 runs of two variants of our probabilistic graphics program for text. Without
sufficient stochasticity and approximation in the generative model (that is, with a strong prior over
a purely deterministic, high-fidelity renderer) inference gets stuck in local energy minima (red
lines). With inference over renderer fidelity via per-letter and global blur, the tolerance of the image
likelihood, and the number of letters, convergence improves substantially (blue lines). Many local
minima in the likelihood are escaped over the course of single-variable inference, and the blur variables are automatically adjusted to support localizing and identifying letters. (b) Clockwise from
top left: an input CAPTCHA, two typical local minima, and one correct parse. (c,d,e,f) A representative run, illustrating the convergence dynamics that result from inference over the renderer's
fidelity. From left to right, we show overall log probability, pixel-wise disagreement (many local
minima are escaped over the course of inference), the number of active letters in the scene, and the
per-letter blur variables. Inference automatically adjusts blur so that newly proposed letters are often
blurred out until they are localized and identified accurately.
letters, while AOL CAPTCHAs include per-letter warping. These CAPTCHAs all involve arbitrary
digits and letters, and as a result lack cues from word identity that the best published CAPTCHA
breaking systems depend on [13]. The dynamically-adjustable fidelity of our approximate renderer
and the high stochasticity of our generative model appear to be necessary for inference to robustly
escape local minima. We have observed a kind of self-tuning annealing resulting from inference
over the control variables; see Figure 3 for an illustration. We observe robust character recognition
given enough inference, with an overall character detection rate of 70.6%. To calibrate the difficulty
of our corpus, we also ran the Tesseract optical character recognition engine [16] on our corpus; its
character detection rate was 37.7%.
4
Generative Probabilistic Graphics in 3D: Road Finding.
We have also developed a generative probabilistic graphics program for localizing roads in 3D from
single images. This is an important problem in autonomous driving. As with many perception
problems in robotics, there is clear scene structure to exploit, but also considerable uncertainty
about the scene, as well as substantial image-to-image variability that needs to be robustly ignored.
See Figure 5b for example inputs.
The probabilistic graphics program we use for this problem is shown in Figure 7. The latent
scene S is comprised of the height of the roadway from the ground plane, the road's width and
lane size, and the 3D offset of the corner of the road from the (arbitrary) camera location. The
prior encodes the assumption that the lanes are small relative to the road, and that the road has two
lanes and is very likely to be visible (but may not be centered). This scene is then rendered to
produce a surface-based segmentation image that assigns each input pixel to one of 4 regions
r ∈ R = {left offroad, right offroad, road, lane}. Rendering is done for each scene element separately, followed by compositing, as with our 2D text program. See Figure 5a for random surface-based segmentation images drawn from this prior. Extensions to richer road and ground geometries
are an interesting direction for future work. This model is similar in spirit to [1] but the key differ-
ASSUME is_present (mem (lambda (id) (bernoulli 0.5)))
ASSUME pos_x (mem (lambda (id) (uniform_discrete 0 200)))
ASSUME pos_y (mem (lambda (id) (uniform_discrete 0 200)))
ASSUME size_x (mem (lambda (id) (uniform_discrete 0 100)))
ASSUME size_y (mem (lambda (id) (uniform_discrete 0 100)))
ASSUME rotation (mem (lambda (id) (uniform_continuous -20.0 20.0)))
ASSUME glyph (mem (lambda (id) (uniform_discrete 0 35))) // 26 + 10.
ASSUME blur (mem (lambda (id) (* 7 (beta 1 2))))
ASSUME global_blur (* 7 (beta 1 2))
ASSUME data_blur (* 7 (beta 1 2))
ASSUME epsilon (gamma 1 1)
ASSUME data (load_image "captcha_1.png" data_blur)
ASSUME image (render_surfaces max-num-glyphs global_blur
(pos_x 1) (pos_y 1) (glyph 1) (size_x 1) (size_y 1) (rotation 1) (blur 1)
(is_present 1) (pos_x 2) (pos_y 2) (glyph 2) (size_x 2) (size_y 2)
(rotation 2) (blur 2) (is_present 2) ... (is_present 10))
OBSERVE (incorporate_stochastic_likelihood data image epsilon) True
Figure 4: A generative probabilistic graphics program for reading degraded text. The scene generator chooses letter identity (A-Z and digits 0-9), position, size and rotation at random. These random
variables are fed into the renderer, along with the bandwidths of a series of spatial blur kernels (one
per letter, another for the overall rendered image from generative model and another for the original
input image). These blur kernels control the fidelity of the rendered image. The image returned by
the renderer is compared to the data via a pixel-wise Gaussian likelihood model, whose variance is
also an unknown variable.
ence is that our framework relies on automatic inference techniques, is representationally richer due
to its compact model description, and goes beyond point estimates to report posterior uncertainty.
In our experiments, we used k-means (with k = 20) to cluster RGB values from a randomly chosen
training image. We used these clusters to build a compact appearance model based on cluster-center
histograms, by assigning text image pixels to their nearest cluster. However, we are agnostic to
the particular choice of the appearence model and many feature engineering and feature learning
techniques can be substituted here without the loss of generality. Our stochastic likelihood incorporates these histograms, by multiplying together the appearance probabilities for each image region
r 2 R. These probabilities, denoted ?~r , are smoothed by pseudo-counts ? drawn from a Gamma
distribution. Let Zr be the per-region normalizing constant, and ID(x,y) be the quantized pixel at
coordinates (x, y) in the input image. Then our likelihood model is:
P (ID |IR , ?) =
Y
Y
r2R x,y s.t. IR =r
ID(x,y)
?r
+?
Zr
Figure 5f shows appearance model histograms from one random training frame. Figure 5c shows
the extremely noisy lane/non-lane classifications that result from the appearance model on its own,
without our scene prior; accuracy is extremely low. Other, richer appearance models, such as Gaussian mixtures over RGB values (which could be either hand specified or learned), are compatible
with our formulation; our simple, quantized model was chosen primarily for simplicity. We use the
same generic Metropolis-Hastings strategy for inference in this problem as in our text application.
Although deterministic search strategies for MAP inference could be developed for this particular
program, it is less clear how to build a single deterministic search algorithm that could work on both
of the generative probabilistic graphics programs we present.
In Table 1, we report the accuracy of our approach on one road dataset from the KITTI Vision
Benchmark Suite [5]. To focus on accuracy in the face of visual variability, we do not exploit temporal correspondences. We test on every 5th frame for a total of 80. We report lane/non-lane accuracy
results for maximum likelihood classification over 10 appearance models (from 10 randomly chosen
training images), as well as for the single best appearance model from this set. We use 10 posterior
samples per frame for both. For reference, we include the performance of a sophisticated bottom-up
baseline system from [2]. This baseline system requires significant 3D a priori knowledge, including
6
Figure 5: An illustration of generative probabilistic graphics for 3D road finding. (a) Renderings
of random samples from our scene prior, showing the surface-based image segmentation induced
by each sample. (b) Representative test frames from the KITTI dataset [5]. (c) Maximum likelihood lane/non-lane classification of the images from (b) based solely on the best-performing
single-training-frame appearance model (ignoring latent geometry). Geometric constraints are clearly
needed for reliable road finding. (d) Results from [2]. (e) Typical inference results from the proposed generative probabilistic graphics approach on the images from (b). (f) Appearance model histograms (over quantized RGB values) from the best-performing single-training-frame appearance
model for all four region types: lane, left offroad, right offroad and road.
(a) Lanes superimposed from 30 scenes sampled from our prior. (b) 30 posterior samples on a low accuracy (Frame 199), high uncertainty frame. (c) 30 posterior samples on a high accuracy (Frame 384), low uncertainty frame. (d) Posterior samples of left lane position for both frames.
Figure 6: Approximate Bayesian inference yields samples from a broad, multimodal scene posterior
on a frame that violates our modeling assumptions (note the intersection), but reports less uncertainty
on a frame more compatible with our model (with perceptually reasonable alternatives to the mode).
the intrinsic and extrinsic parameters of the camera, and a rough initial segmentation of each test
image. In contrast, our approach has to infer these aspects of the scene from the image data. We
also show some uncertainty estimates that result from approximate Bayesian inference in Figure 6.
Our probabilistic graphics program for this problem requires under 20 lines of probabilistic code.
5 Discussion
We have shown that it is possible to write short probabilistic graphics programs that use simple
2D and 3D computer graphics techniques as the backbone for highly approximate generative models. Approximate Bayesian inference over the execution histories of these probabilistic graphics
ASSUME road_width (uniform_discrete 5 8) //arbitrary units
ASSUME road_height (uniform_discrete 70 150)
ASSUME lane_pos_x (uniform_continuous -1.0 1.0) //uncentered renderer
ASSUME lane_pos_y (uniform_continuous -5.0 0.0) //coordinate system
ASSUME lane_pos_z (uniform_continuous 1.0 3.5)
ASSUME lane_size (uniform_continuous 0.10 0.35)
ASSUME eps (gamma 1 1)
ASSUME theta_left (list 0.13 ... 0.03)  //quantized-color appearance histograms
ASSUME theta_right (list 0.03 ... 0.02) //for the four region types; entries
ASSUME theta_road (list 0.05 ... 0.07)  //elided in the original figure
ASSUME theta_lane (list 0.01 ... 0.21)
ASSUME data (load_image "frame201.png")
ASSUME surfaces (render_surfaces lane_pos_x lane_pos_y lane_pos_z
road_width road_height lane_size)
OBSERVE (incorporate_stochastic_likelihood theta_left theta_right
theta_road theta_lane data surfaces eps) True
Figure 7: Source code for a generative probabilistic graphics program that infers 3D road models.
Method                                                Accuracy
Aly et al. [2]                                        68.31%
GPGP (Best Single Appearance)                         64.56%
GPGP (Maximum Likelihood over Multiple Appearances)   74.60%

Table 1: Quantitative results for lane detection accuracy on one of the road datasets in the KITTI Vision Benchmark Suite [5]. See main text for details.
programs, automatically implemented via generic, single-variable Metropolis-Hastings transitions using existing rendering libraries and simple likelihoods, then implements a new variation on analysis by synthesis [21]. We have also shown that this approach can yield accurate, globally consistent interpretations of real-world images, and can coherently report posterior uncertainty over latent scenes when appropriate. Our core contributions are the introduction of this conceptual framework and two initial demonstrations of its efficacy.
To scale our inference approach to handle more complex scenes, it will likely be important to consider more complex forms of automatic inference, beyond the single-variable Metropolis-Hastings proposals we currently use. For example, discriminatively trained proposals could help, and in fact could be trained based on forward executions of the probabilistic graphics program. Appearance models derived from modern image features and texture descriptors [14, 7, 11], going beyond the simple quantizations we currently use, could also reduce the burden on inference and improve the generalizability of individual programs. It is important to note that the high dimensionality involved in probabilistic graphics programming does not necessarily mean inference (and even automatic inference) is impossible. For example, approximate inference in models with probabilities bounded away from 0 and 1 can sometimes be provably tractable via sampling techniques, with runtimes that depend on factors other than dimensionality [3]. Exploring the role of stochasticity in facilitating tractability is an important avenue for future work.
The most interesting potential of GPGP lies in bringing graphics representations and algorithms
to bear on the hard modeling and inference problems in vision. For example, to avoid global re-rendering after each inference step, we need to represent and exploit the conditional independencies
between latent scene elements and image regions. Inference in GPGP based on a z-buffer or a layered compositor could potentially do this. We hope the GPGP framework facilitates image analysis
by Bayesian inversion of rich graphics algorithms for scene generation and image synthesis.
Acknowledgments
We are grateful to K. Bonawitz and E. Jonas for preliminary work on CAPTCHA breaking, and to S.
Teller, B. Freeman, T. Adelson, M. James, M. Siegel and anonymous reviewers for helpful feedback
and discussions. T. Kulkarni was graciously supported by the Henry E Singleton (1940) Fellowship.
This research was supported by ONR award N000141310333, ARO MURI W911NF-13-1-2012,
the DARPA UPSIDE program and a gift from Google.
References
[1] José Manuel Álvarez, Theo Gevers, and Antonio M. Lopez. "3D scene priors for road detection". In: Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010, pp. 57-64.
[2] Mohamed Aly. "Real time detection of lane markers in urban streets". In: Intelligent Vehicles Symposium, 2008 IEEE. IEEE, 2008, pp. 7-12.
[3] Paul Dagum and Michael Luby. "An optimal approximation algorithm for Bayesian inference". In: Artificial Intelligence 93.1 (1997), pp. 1-27.
[4] L. Del Pero et al. "Bayesian geometric modeling of indoor scenes". In: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012, pp. 2719-2726.
[5] Andreas Geiger, Philip Lenz, and Raquel Urtasun. "Are we ready for autonomous driving? The KITTI vision benchmark suite". In: Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2012, pp. 3354-3361.
[6] Noah Goodman, Vikash Mansinghka, Daniel Roy, Keith Bonawitz, and Joshua Tenenbaum. "Church: A language for generative models". In: UAI. 2008.
[7] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. "A fast learning algorithm for deep belief nets". In: Neural Computation 18.7 (2006), pp. 1527-1554.
[8] Derek Hoiem, Alexei A. Efros, and Martial Hebert. "Putting objects in perspective". In: Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on. Vol. 2. IEEE, 2006, pp. 2137-2144.
[9] Derek Hoiem, Alexei A. Efros, and Martial Hebert. "Recovering surface layout from an image". In: International Journal of Computer Vision 75.1 (2007), pp. 151-172.
[10] Berthold Klaus Paul Horn. Robot Vision. The MIT Press, 1986.
[11] Yann LeCun and Yoshua Bengio. "Convolutional networks for images, speech, and time series". In: The Handbook of Brain Theory and Neural Networks 3361 (1995).
[12] Paul Marjoram, John Molitor, Vincent Plagnol, and Simon Tavaré. "Markov chain Monte Carlo without likelihoods". In: Proceedings of the National Academy of Sciences 100.26 (2003).
[13] Greg Mori and Jitendra Malik. "Recognizing objects in adversarial clutter: Breaking a visual CAPTCHA". In: Computer Vision and Pattern Recognition, 2003 IEEE Computer Society Conference on. Vol. 1. IEEE, 2003, pp. I-134.
[14] Javier Portilla and Eero P. Simoncelli. "A parametric texture model based on joint statistics of complex wavelet coefficients". In: International Journal of Computer Vision 40.1 (2000).
[15] Oliver Ratmann, Christophe Andrieu, Carsten Wiuf, and Sylvia Richardson. "Model criticism based on likelihood-free inference, with an application to protein network evolution". In: Proceedings of the National Academy of Sciences 106.26 (2009), pp. 10576-10581.
[16] Ray Smith. "An overview of the Tesseract OCR engine". In: Ninth International Conference on Document Analysis and Recognition. Vol. 2. IEEE, 2007, pp. 629-633.
[17] Zhuowen Tu, Xiangrong Chen, Alan L. Yuille, and Song-Chun Zhu. "Image parsing: Unifying segmentation, detection, and recognition". In: International Journal of Computer Vision 63.2 (2005), pp. 113-140.
[18] Zhuowen Tu and Song-Chun Zhu. "Image segmentation by data-driven Markov chain Monte Carlo". In: IEEE Trans. Pattern Anal. Mach. Intell. 24.5 (May 2002).
[19] Richard D. Wilkinson. "Approximate Bayesian computation (ABC) gives exact results under the assumption of model error". In: arXiv preprint arXiv:0811.3355 (2008).
[20] David Wingate, Noah D. Goodman, A. Stuhlmueller, and J. Siskind. "Nonstandard interpretations of probabilistic programs for efficient inference". In: Advances in Neural Information Processing Systems 23 (2011).
[21] Alan Yuille and Daniel Kersten. "Vision as Bayesian inference: analysis by synthesis?" In: Trends in Cognitive Sciences 10.7 (2006), pp. 301-308.
[22] Yibiao Zhao and Song-Chun Zhu. "Image parsing via stochastic scene grammar". In: Advances in Neural Information Processing Systems. 2011.
Dropout Training as Adaptive Regularization
Stefan Wager*, Sida Wang†, and Percy Liang†
Departments of Statistics* and Computer Science†
Stanford University, Stanford, CA-94305
[email protected], {sidaw, pliang}@cs.stanford.edu
Abstract
Dropout and other feature noising schemes control overfitting by artificially corrupting the training data. For generalized linear models, dropout performs a form
of adaptive regularization. Using this viewpoint, we show that the dropout regularizer is first-order equivalent to an L2 regularizer applied after scaling the features
by an estimate of the inverse diagonal Fisher information matrix. We also establish
a connection to AdaGrad, an online learning algorithm, and find that a close relative of AdaGrad operates by repeatedly solving linear dropout-regularized problems. By casting dropout as regularization, we develop a natural semi-supervised
algorithm that uses unlabeled data to create a better adaptive regularizer. We apply this idea to document classification tasks, and show that it consistently boosts
the performance of dropout training, improving on state-of-the-art results on the
IMDB reviews dataset.
1 Introduction
Dropout training was introduced by Hinton et al. [1] as a way to control overfitting by randomly
omitting subsets of features at each iteration of a training procedure.1 Although dropout has proved
to be a very successful technique, the reasons for its success are not yet well understood at a theoretical level.
Dropout training falls into the broader category of learning methods that artificially corrupt training data to stabilize predictions [2, 4, 5, 6, 7]. There is a well-known connection between artificial
feature corruption and regularization [8, 9, 10]. For example, Bishop [9] showed that the effect of
training with features that have been corrupted with additive Gaussian noise is equivalent to a form
of L2 -type regularization in the low noise limit. In this paper, we take a step towards understanding how dropout training works by analyzing it as a regularizer. We focus on generalized linear
models (GLMs), a class of models for which feature dropout reduces to a form of adaptive model
regularization.
Using this framework, we show that dropout training is first-order equivalent to L2-regularization after transforming the input by diag(Î)^{−1/2}, where Î is an estimate of the Fisher information matrix.
This transformation effectively makes the level curves of the objective more spherical, and so balances out the regularization applied to different features. In the case of logistic regression, dropout
can be interpreted as a form of adaptive L2 -regularization that favors rare but useful features.
The problem of learning with rare but useful features is discussed in the context of online learning
by Duchi et al. [11], who show that their AdaGrad adaptive descent procedure achieves better regret
bounds than regular stochastic gradient descent (SGD) in this setting. Here, we show that AdaGrad
* S.W. is supported by a B.C. and E.J. Eaves Stanford Graduate Fellowship.
¹ Hinton et al. introduced dropout training in the context of neural networks specifically, and also advocated omitting random hidden layers during training. In this paper, we follow [2, 3] and study feature dropout as a generic training method that can be applied to any learning algorithm.
and dropout training have an intimate connection: Just as SGD progresses by repeatedly solving
linearized L2 -regularized problems, a close relative of AdaGrad advances by solving linearized
dropout-regularized problems.
Our formulation of dropout training as adaptive regularization also leads to a simple semi-supervised
learning scheme, where we use unlabeled data to learn a better dropout regularizer. The approach
is fully discriminative and does not require fitting a generative model. We apply this idea to several
document classification problems, and find that it consistently improves the performance of dropout
training. On the benchmark IMDB reviews dataset introduced by [12], dropout logistic regression
with a regularizer tuned on unlabeled data outperforms previous state-of-the-art. In follow-up research [13], we extend the results from this paper to more complicated structured prediction, such
as multi-class logistic regression and linear chain conditional random fields.
2 Artificial Feature Noising as Regularization
We begin by discussing the general connections between feature noising and regularization in generalized linear models (GLMs). We will apply the machinery developed here to dropout training in
Section 4.
A GLM defines a conditional distribution over a response y ∈ Y given an input feature vector x ∈ R^d:

  p_β(y | x) := h(y) exp{y x·β − A(x·β)},   ℓ_{x,y}(β) := −log p_β(y | x).   (1)

Here, h(y) is a quantity independent of x and β, A(·) is the log-partition function, and ℓ_{x,y}(β) is the loss function (i.e., the negative log-likelihood); Table 1 contains a summary of notation. Common examples of GLMs include linear (Y = R), logistic (Y = {0, 1}), and Poisson (Y = {0, 1, 2, . . .}) regression.
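For instance, logistic regression fits this template with h(y) = 1 and A(z) = log(1 + e^z), giving ℓ_{x,y}(β) = −y x·β + log(1 + exp(x·β)); linear regression with unit noise variance corresponds to A(z) = z²/2.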
Given n training examples (x_i, y_i), the standard maximum likelihood estimate β̂ ∈ R^d minimizes the empirical loss over the training examples:

  β̂ := argmin_{β ∈ R^d} ∑_{i=1}^n ℓ_{x_i, y_i}(β).   (2)
With artificial feature noising, we replace the observed feature vectors x_i with noisy versions x̃_i = ν(x_i, ξ_i), where ν is our noising function and ξ_i is an independent random variable. We first create many noisy copies of the dataset, and then average out the auxiliary noise. In this paper, we will consider two types of noise:

• Additive Gaussian noise: ν(x_i, ξ_i) = x_i + ξ_i, where ξ_i ∼ N(0, σ² I_{d×d}).
• Dropout noise: ν(x_i, ξ_i) = x_i ⊙ ξ_i, where ⊙ is the elementwise product of two vectors. Each component of ξ_i ∈ {0, (1 − δ)^{−1}}^d is an independent draw from a scaled Bernoulli(1 − δ) random variable. In other words, dropout noise corresponds to setting x̃_ij to 0 with probability δ and to x_ij/(1 − δ) otherwise.²
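As a concrete illustration, the following NumPy sketch (our own naming, not the paper's code) draws both kinds of noised features; note that both schemes satisfy E[x̃] = x.

import numpy as np

def noise_features(X, scheme="dropout", delta=0.5, sigma=1.0, rng=None):
    """Return a noised copy of the design matrix X with E[X_noised] = X."""
    rng = rng or np.random.default_rng()
    if scheme == "gaussian":
        return X + sigma * rng.standard_normal(X.shape)
    # dropout: zero each entry w.p. delta, scale survivors by 1/(1 - delta)
    mask = rng.random(X.shape) >= delta
    return X * mask / (1.0 - delta)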
Integrating over the feature noise gives us a noised maximum likelihood parameter estimate:

  β̂ := argmin_{β ∈ R^d} ∑_{i=1}^n E_ξ[ℓ_{x̃_i, y_i}(β)],   where E_ξ[Z] := E[Z | {x_i, y_i}]   (3)

is the expectation taken with respect to the artificial feature noise ξ = (ξ_1, . . . , ξ_n). Similar expressions have been studied by [9, 10].
For GLMs, the noised empirical loss takes on a simpler form:

  ∑_{i=1}^n E_ξ[ℓ_{x̃_i, y_i}(β)] = ∑_{i=1}^n (−y_i x_i·β + E_ξ[A(x̃_i·β)]) = ∑_{i=1}^n ℓ_{x_i, y_i}(β) + R(β).   (4)

² Artificial noise of the form x_i ⊙ ξ_i is also called blankout noise. For GLMs, blankout noise is equivalent to dropout noise as defined by [1].
Table 1: Summary of notation.

x_i      Observed feature vector       R(β)      Noising penalty (5)
x̃_i      Noised feature vector         R^q(β)    Quadratic approximation (6)
A(x·β)   Log-partition function        ℓ(β)      Negative log-likelihood (loss)
The first equality holds provided that E_ξ[x̃_i] = x_i, and the second is true with the following definition:

  R(β) := ∑_{i=1}^n E_ξ[A(x̃_i·β)] − A(x_i·β).   (5)
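Because R(β) is an expectation over the noise, it can also be estimated by plain Monte Carlo. The sketch below does this for dropout logistic regression, where A(z) = log(1 + e^z); it is a minimal illustration under our own naming, not the authors' code.

import numpy as np

def log1pexp(z):
    return np.logaddexp(0.0, z)        # numerically stable log(1 + exp(z))

def noising_penalty(X, beta, n_samples=100, delta=0.5, seed=0):
    """Monte Carlo estimate of R(beta) = sum_i E[A(x_i~ . beta)] - A(x_i . beta)
    for logistic regression under dropout noise."""
    rng = np.random.default_rng(seed)
    clean = log1pexp(X @ beta).sum()
    total = 0.0
    for _ in range(n_samples):
        Xn = X * (rng.random(X.shape) >= delta) / (1.0 - delta)
        total += log1pexp(Xn @ beta).sum()
    return total / n_samples - clean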
Here, R(β) acts as a regularizer that incorporates the effect of artificial feature noising. In GLMs, the log-partition function A must always be convex, and so R is always positive by Jensen's inequality. The key observation here is that the effect of artificial feature noising reduces to a penalty R(β) that does not depend on the labels {y_i}. Because of this, artificial feature noising penalizes the complexity of a classifier in a way that does not depend on the accuracy of a classifier. Thus, for GLMs, artificial feature noising is a regularization scheme on the model itself that can be compared with other forms of regularization such as ridge (L2) or lasso (L1) penalization. In Section 6, we exploit the label-independence of the noising penalty and use unlabeled data to tune our estimate of R(β).
The fact that R does not depend on the labels has another useful consequence that relates to prediction. The natural prediction rule with artificially noised features is to select ŷ to minimize expected loss over the added noise: ŷ = argmin_y E_ξ[ℓ_{x̃, y}(β̂)]. It is common practice, however, not to noise the inputs and just to output classification decisions based on the original feature vector [1, 3, 14]: ŷ = argmin_y ℓ_{x, y}(β̂). It is easy to verify that these expressions are in general not equivalent, but they are equivalent when the effect of feature noising reduces to a label-independent penalty on the likelihood. Thus, the common practice of predicting with clean features is formally justified for GLMs.
2.1 A Quadratic Approximation to the Noising Penalty
Although the noising penalty R yields an explicit regularizer that does not depend on the labels {y_i}, the form of R can be difficult to interpret. To gain more insight, we will work with a quadratic approximation of the type used by [9, 10]. By taking a second-order Taylor expansion of A around x·β, we get that E_ξ[A(x̃·β)] − A(x·β) ≈ ½ A''(x·β) Var_ξ[x̃·β]. Here the first-order term E_ξ[A'(x·β)(x̃·β − x·β)] vanishes because E_ξ[x̃] = x. Applying this quadratic approximation to (5) yields the following quadratic noising regularizer, which will play a pivotal role in the rest of the paper:

  R^q(β) := ½ ∑_{i=1}^n A''(x_i·β) Var_ξ[x̃_i·β].   (6)
This regularizer penalizes two types of variance over the training examples: (i) A''(x_i·β), which corresponds to the variance of the response y_i in the GLM, and (ii) Var_ξ[x̃_i·β], the variance of the estimated GLM parameter due to noising.³
Accuracy of approximation. Figure 1a compares the noising penalties R and R^q for logistic regression in the case that x̃·β is Gaussian;⁴ we vary the mean parameter p = (1 + e^{−x·β})^{−1} and the noise level σ. We see that R^q is generally very accurate, although it tends to overestimate the true penalty for p ≈ 0.5 and tends to underestimate it for very confident predictions. We give a graphical explanation for this phenomenon in the Appendix (Figure A.1).
The quadratic approximation also appears to hold up on real datasets. In Figure 1b, we compare the evolution during training of both R and R^q on the 20 newsgroups alt.atheism vs soc.religion.christian classification task described in [15]. We see that the quadratic approximation is accurate most of the way through the learning procedure, only deteriorating slightly as the model converges to highly confident predictions.
³ Although R^q is not convex, we were still able (using an L-BFGS algorithm) to train logistic regression with R^q as a surrogate for the dropout regularizer without running into any major issues with local optima.
⁴ This assumption holds a priori for additive Gaussian noise, and can be reasonable for dropout by the central limit theorem.
[Figure 1: two plots. Left: the penalties as functions of the noise level σ ∈ [0, 1.5] for mean parameters p ∈ {0.5, 0.73, 0.82, 0.88, 0.95}. Right: the negative log-likelihood (loss) together with the dropout and quadratic penalties over 150 training iterations.]

(a) Comparison of noising penalties R and R^q for logistic regression with Gaussian perturbations, i.e., (x̃ − x)·β ∼ N(0, σ²). The solid line indicates the true penalty and the dashed one is our quadratic approximation thereof; p = (1 + e^{−x·β})^{−1} is the mean parameter for the logistic model. (b) Comparing the evolution of the exact dropout penalty R and our quadratic approximation R^q for logistic regression on the AthR classification task in [15] with 22K features and n = 1000 examples. The horizontal axis is the number of quasi-Newton steps taken while training with exact dropout.

Figure 1: Validating the quadratic approximation.
In practice, we have found that fitting logistic regression with the quadratic surrogate R^q gives similar results to actual dropout-regularized logistic regression. We use this technique for our experiments in Section 6.
3 Regularization based on Additive Noise
Having established the general quadratic noising regularizer R^q, we now turn to studying the effects of R^q for various likelihoods (linear and logistic regression) and noising models (additive and dropout). In this section, we warm up with additive noise; in Section 4 we turn to our main target of interest, namely dropout noise.
Linear regression. Suppose x̃ = x + ε is generated by adding noise with Var[ε] = σ² I_{d×d} to the original feature vector x. Note that Var_ξ[x̃·β] = σ² ‖β‖₂², and in the case of linear regression A(z) = ½ z², so A''(z) = 1. Applying these facts to (6) yields a simplified form for the quadratic noising penalty:

  R^q(β) = ½ σ² n ‖β‖₂².   (7)

Thus, we recover the well-known result that linear regression with additive feature noising is equivalent to ridge regression [2, 9]. Note that, with linear regression, the quadratic approximation R^q is exact and so the correspondence with L2-regularization is also exact.
Logistic regression. The situation gets more interesting when we move beyond linear regression. For logistic regression, A''(x_i·β) = p_i(1 − p_i) where p_i = (1 + exp(−x_i·β))^{−1} is the predicted probability of y_i = 1. The quadratic noising penalty is then

  R^q(β) = ½ σ² ‖β‖₂² ∑_{i=1}^n p_i(1 − p_i).   (8)

In other words, the noising penalty now simultaneously encourages parsimonious modeling as before (by encouraging ‖β‖₂² to be small) as well as confident predictions (by encouraging the p_i's to move away from ½).
Table 2: Form of the different regularization schemes. These expressions assume that the design matrix has been normalized, i.e., that ∑_i x²_ij = 1 for all j. The p_i = (1 + e^{−x_i·β})^{−1} are mean parameters for the logistic model.

                   Linear Regression   Logistic Regression               GLM
L2-penalization    ‖β‖₂²               ‖β‖₂²                             ‖β‖₂²
Additive Noising   ‖β‖₂²               ‖β‖₂² ∑_i p_i(1 − p_i)            ‖β‖₂² tr(V(β))
Dropout Training   ‖β‖₂²               ∑_{i,j} p_i(1 − p_i) x²_ij β_j²   βᵀ diag(Xᵀ V(β) X) β
4 Regularization based on Dropout Noise
Recall that dropout training corresponds to applying dropout noise to training examples, where the noised features x̃_i are obtained by setting x̃_ij to 0 with some "dropout probability" δ and to x_ij/(1 − δ) with probability 1 − δ, independently for each coordinate j of the feature vector. We can check that:

  Var_ξ[x̃_i·β] = (δ/(1 − δ)) ∑_{j=1}^d x²_ij β_j²,   (9)
and so the quadratic dropout penalty is

  R^q(β) = ½ · δ/(1 − δ) · ∑_{i=1}^n A''(x_i·β) ∑_{j=1}^d x²_ij β_j².   (10)
Letting X ∈ R^{n×d} be the design matrix with rows x_i and V(β) ∈ R^{n×n} be a diagonal matrix with entries A''(x_i·β), we can re-write this penalty as

  R^q(β) = ½ · δ/(1 − δ) · βᵀ diag(Xᵀ V(β) X) β.   (11)
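For logistic regression, where A''(x_i·β) = p_i(1 − p_i), the closed form (11) is cheap to evaluate directly; a minimal NumPy sketch under our own naming:

import numpy as np

def quadratic_dropout_penalty(X, beta, delta=0.5):
    """R^q(beta) = delta / (2 (1 - delta)) * beta^T diag(X^T V X) beta
    for logistic regression, with V = diag(p_i (1 - p_i))."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    v = p * (1.0 - p)                               # diagonal of V(beta)
    diag_xtvx = (v[:, None] * X**2).sum(axis=0)     # diag(X^T V X), one entry per feature
    return delta / (2.0 * (1.0 - delta)) * np.dot(diag_xtvx, beta**2)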
Let β* be the maximum likelihood estimate given infinite data. When computed at β*, the matrix (1/n) Xᵀ V(β*) X = (1/n) ∑_{i=1}^n ∇²ℓ_{x_i, y_i}(β*) is an estimate of the Fisher information matrix I. Thus, dropout can be seen as an attempt to apply an L2 penalty after normalizing the feature vector by diag(I)^{−1/2}. The Fisher information is linked to the shape of the level surfaces of ℓ(β) around β*. If I were a multiple of the identity matrix, then these level surfaces would be perfectly spherical around β*. Dropout, by normalizing the problem by diag(I)^{−1/2}, ensures that while the level surfaces of ℓ(β) may not be spherical, the L2-penalty is applied in a basis where the features have been balanced out. We give a graphical illustration of this phenomenon in Figure A.2.
Linear regression. For linear regression, V is the identity matrix, so the dropout objective is equivalent to a form of ridge regression where each column of the design matrix is normalized before applying the L2 penalty.⁵ This connection has been noted previously by [3].
Logistic regression. The form of dropout penalties becomes much more intriguing once we move beyond the realm of linear regression. The case of logistic regression is particularly interesting. Here, we can write the quadratic dropout penalty from (10) as

  R^q(β) = ½ · δ/(1 − δ) · ∑_{i=1}^n ∑_{j=1}^d p_i(1 − p_i) x²_ij β_j².   (12)

Thus, just like additive noising, dropout generally gives an advantage to confident predictions and small β. However, unlike all the other methods considered so far, dropout may allow for some large p_i(1 − p_i) and some large β_j², provided that the corresponding cross-term x²_ij is small.
Our analysis shows that dropout regularization should be better than L2-regularization for learning weights for features that are rare (i.e., often 0) but highly discriminative, because dropout effectively does not penalize β_j over observations for which x_ij = 0. Thus, in order for a feature to earn a large β_j², it suffices for it to contribute to a confident prediction with small p_i(1 − p_i) each time that it is active.⁶ Dropout training has been empirically found to perform well on tasks such as document classification where rare but discriminative features are prevalent [3]. Our result suggests that this is no mere coincidence.
⁵ Normalizing the columns of the design matrix before performing penalized regression is standard practice, and is implemented by default in software like glmnet for R [16].
⁶ To be precise, dropout does not reward all rare but discriminative features. Rather, dropout rewards those features that are rare and positively co-adapted with other features in a way that enables the model to make confident predictions whenever the feature of interest is active.
Table 3: Accuracy of L2 and dropout regularized logistic regression on a simulated example. The first row indicates results over test examples where some of the rare useful features are active (i.e., where there is some signal that can be exploited), while the second row indicates accuracy over the full test set. These results are averaged over 100 simulation runs, with 75 training examples in each. All tuning parameters were set to optimal values. The sampling error on all reported values is within ±0.01.

Accuracy           L2-regularization   Dropout training
Active Instances   0.66                0.73
All Instances      0.53                0.55
We summarize the relationship between L2-penalization, additive noising and dropout in Table 2. Additive noising introduces a product-form penalty depending on both β and A''. However, the full potential of artificial feature noising only emerges with dropout, which allows the penalty terms due to β and A'' to interact in a non-trivial way through the design matrix X (except for linear regression, in which all the noising schemes we consider collapse to ridge regression).
4.1 A Simulation Example
The above discussion suggests that dropout logistic regression should perform well with rare but
useful features. To test this intuition empirically, we designed a simulation study where all the
signal is grouped in 50 rare features, each of which is active only 4% of the time. We then added
1000 nuisance features that are always active to the design matrix, for a total of d = 1050 features.
To make sure that our experiment was picking up the effect of dropout training specifically and not
just normalization of X, we ensured that the columns of X were normalized in expectation.
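A data generator matching this design might look as follows; the activation scaling and the label rule below are our assumptions for illustration, since the exact simulation details are deferred to the paper's appendix.

import numpy as np

def make_data(n, rng, n_rare=50, n_nuisance=1000, p_active=0.04):
    """Simulated design: 50 rare signal features plus 1000 always-active
    nuisance features, with columns normalized in expectation."""
    rare = (rng.random((n, n_rare)) < p_active).astype(float) / np.sqrt(n * p_active)
    nuisance = rng.standard_normal((n, n_nuisance)) / np.sqrt(n)
    X = np.hstack([rare, nuisance])
    # labels depend only on the rare features (assumed linear logistic rule)
    logits = 5.0 * np.sqrt(n * p_active) * rare.sum(axis=1) - 1.0
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)
    return X, y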
The dropout penalty for logistic regression can be written as a matrix product

  R^q(β) = ½ · δ/(1 − δ) · ( ··· p_i(1 − p_i) ··· ) ( ··· x²_ij ··· ) ( ··· β_j² ··· )ᵀ,   (13)

that is, a row vector of the per-example variance terms, times the n × d matrix of squared features, times a column vector of the squared weights.
We designed the simulation study in such a way that, at the optimal β, the dropout penalty should have the structure

  ( Small (confident prediction) ··· Big (weak prediction) ) ( ··· x²_ij ··· ) ( Big (useful feature) ··· Small (nuisance feature) )ᵀ.   (14)

Here the row vector holds the p_i(1 − p_i) terms, labeled by whether example i is a confident or a weak prediction, and the column vector holds the β_j² terms, labeled by whether feature j is useful or a nuisance.
A dropout penalty with such a structure should be small. Although there are some uncertain predictions with large p_i(1 − p_i) and some big weights β_j², these terms cannot interact because the corresponding terms x²_ij are all 0 (these are examples without any of the rare discriminative features and thus have no signal). Meanwhile, L2-penalization has no natural way of penalizing some β_j more and others less. Our simulation results, given in Table 3, confirm that dropout training outperforms L2-regularization here as expected. See Appendix A.1 for details.
5 Dropout Regularization in Online Learning
There is a well-known connection between L2-regularization and stochastic gradient descent (SGD). In SGD, the weight vector β̂ is updated with β̂_{t+1} = β̂_t − η_t g_t, where g_t = ∇ℓ_{x_t, y_t}(β̂_t) is the gradient of the loss due to the t-th training example. We can also write this update as a linear L2-penalized problem

  β̂_{t+1} = argmin_β { ℓ_{x_t, y_t}(β̂_t) + g_t · (β − β̂_t) + (1/(2η_t)) ‖β − β̂_t‖₂² },   (15)

where the first two terms form a linear approximation to the loss and the third term is an L2-regularizer. Thus, SGD progresses by repeatedly solving linearized L2-regularized problems.
[Figure 2: two plots of test accuracy (roughly 0.80 to 0.90) for dropout+unlabeled, dropout, and L2, against the size of the unlabeled data (left, 0 to 40000) and the size of the labeled data (right, 5000 to 15000).]

Figure 2: Test set accuracy on the IMDB dataset [12] with unigram features. Left: 10000 labeled training examples, and up to 40000 unlabeled examples. Right: 3000-15000 labeled training examples, and 25000 unlabeled examples. The unlabeled data is discounted by a factor α = 0.4.
As discussed by Duchi et al. [11], a problem with classic SGD is that it can be slow at learning weights corresponding to rare but highly discriminative features. This problem can be alleviated by running a modified form of SGD with β̂_{t+1} = β̂_t − η A_t^{−1} g_t, where the transformation A_t is also learned online; this leads to the AdaGrad family of stochastic descent rules. Duchi et al. use A_t = diag(G_t)^{1/2} where G_t = ∑_{i=1}^t g_i g_iᵀ, and show that this choice achieves desirable regret bounds in the presence of rare but useful features. At least superficially, AdaGrad and dropout seem to have similar goals: For logistic regression, they can both be understood as adaptive alternatives to methods based on L2-regularization that favor learning rare, useful features. As it turns out, they have a deeper connection.
The natural way to incorporate dropout regularization into SGD is to replace the penalty term ‖β − β̂_t‖₂²/(2η_t) in (15) with the dropout regularizer, giving us an update rule

  β̂_{t+1} = argmin_β { ℓ_{x_t, y_t}(β̂_t) + g_t · (β − β̂_t) + R^q(β − β̂_t; β̂_t) },   (16)

where R^q(·; β̂_t) is the quadratic noising regularizer centered at β̂_t:⁷

  R^q(β − β̂_t; β̂_t) = ½ (β − β̂_t)ᵀ diag(H_t) (β − β̂_t),   where H_t = ∑_{i=1}^t ∇²ℓ_{x_i, y_i}(β̂_t).   (17)
This implies that dropout descent is first-order equivalent to an adaptive SGD procedure with A_t = diag(H_t). To see the connection between AdaGrad and this dropout-based online procedure, recall that for GLMs both of the expressions

  E_{β*}[∇²ℓ_{x,y}(β*)]   and   E_{β*}[∇ℓ_{x,y}(β*) ∇ℓ_{x,y}(β*)ᵀ]   (18)

are equal to the Fisher information I [17]. In other words, as β̂_t converges to β*, G_t and H_t are both consistent estimates of the Fisher information. Thus, by using dropout instead of L2-regularization to solve linearized problems in online learning, we end up with an AdaGrad-like algorithm.
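Concretely, solving the linearized problem (16) in closed form gives the diagonal-preconditioned step β̂_{t+1} = β̂_t − diag(H_t)^{−1} g_t. The sketch below implements this update for logistic regression; it is a minimal illustration under our own naming, not the authors' code.

import numpy as np

def dropout_sgd(X, y, n_epochs=5, damping=1e-8):
    """First-order dropout-regularized online updates for logistic regression:
    beta <- beta - diag(H_t)^{-1} g_t, with H_t accumulating the per-example
    diagonal curvature p_i (1 - p_i) x_i^2 (cf. eqs. 16-17)."""
    n, d = X.shape
    beta = np.zeros(d)
    H_diag = np.full(d, damping)                 # running diag(H_t)
    for _ in range(n_epochs):
        for i in range(n):
            x, yi = X[i], y[i]
            p = 1.0 / (1.0 + np.exp(-x @ beta))
            g = (p - yi) * x                     # gradient of the logistic loss
            H_diag += p * (1.0 - p) * x**2       # diagonal of the per-example Hessian
            beta -= g / H_diag
    return beta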
Of course, the connection between AdaGrad and dropout is not perfect. In particular, AdaGrad allows for a more aggressive learning rate by using A_t = diag(G_t)^{1/2} instead of diag(G_t). But, at a high level, AdaGrad and dropout appear to both be aiming for the same goal: scaling the features by the Fisher information to make the level-curves of the objective more circular. In contrast, L2-regularization makes no attempt to sphere the level curves, and AROW [18], another popular adaptive method for online learning, only attempts to normalize the effective feature matrix but does not consider the sensitivity of the loss to changes in the model weights. In the case of logistic regression, AROW also favors learning rare features, but unlike dropout and AdaGrad does not privilege confident predictions.
⁷ This expression is equivalent to (11) except that we used β̂_t and not β to compute H_t.
Table 4: Performance of semi-supervised dropout training for document classification.
(a) Test accuracy with and without unlabeled data on different datasets. Each dataset is split into 3 parts of equal sizes: train, unlabeled, and test. Log. Reg.: logistic regression with L2 regularization; Dropout: dropout trained with quadratic surrogate; +Unlabeled: using unlabeled data.

Datasets   Log. Reg.   Dropout   +Unlabeled
Subj       88.96       90.85     91.48
RT         73.49       75.18     76.56
IMDB-2k    80.63       81.23     80.33
XGraph     83.10       84.64     85.41
BbCrypt    97.28       98.49     99.24
IMDB       87.14       88.70     89.21

(b) Test accuracy on the IMDB dataset [12]. Labeled: using just labeled data from each paper/method, +Unlabeled: use additional unlabeled data. Drop: dropout with R^q, MNB: multinomial naive Bayes with semi-supervised frequency estimate from [19],⁸ -Uni: unigram features, -Bi: bigram features.

Methods          Labeled   +Unlabeled
MNB-Uni [19]     83.62     84.13
MNB-Bi [19]      86.63     86.98
Vect.Sent [12]   88.33     88.89
NBSVM [15]-Bi    91.22     -
Drop-Uni         87.78     89.52
Drop-Bi          91.31     91.98
6 Semi-Supervised Dropout Training
Recall that the regularizer R(β) in (5) is independent of the labels {y_i}. As a result, we can use additional unlabeled training examples to estimate it more accurately. Suppose we have an unlabeled dataset {z_i} of size m, and let α ∈ (0, 1] be a discount factor for the unlabeled data. Then we can define a semi-supervised penalty estimate

  R_*(β) := (n/(n + αm)) (R(β) + α R_Unlabeled(β)),   (19)

where R(β) is the original penalty estimate and R_Unlabeled(β) = ∑_i E_ξ[A(z̃_i·β)] − A(z_i·β) is computed using (5) over the unlabeled examples z_i. We select the discount parameter α by cross-validation; empirically, α ∈ [0.1, 0.4] works well. For convenience, we optimize the quadratic surrogate R^q_* instead of R_*. Another practical option would be to use the Gaussian approximation from [3] for estimating R_*(β).
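A minimal sketch of (19) with the quadratic surrogate for dropout logistic regression, under our own naming:

import numpy as np

def semi_supervised_penalty(X_lab, X_unlab, beta, alpha=0.4, delta=0.5):
    """R_*(beta) = n / (n + alpha m) * (R(X_lab) + alpha R(X_unlab)),
    using the quadratic surrogate R^q for logistic regression."""
    def rq(X):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        v = p * (1.0 - p)
        return delta / (2 * (1 - delta)) * np.dot((v[:, None] * X**2).sum(0), beta**2)
    n, m = len(X_lab), len(X_unlab)
    return n / (n + alpha * m) * (rq(X_lab) + alpha * rq(X_unlab))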
Most approaches to semi-supervised learning either rely on using a generative model [19, 20, 21, 22,
23] or various assumptions on the relationship between the predictor and the marginal distribution
over inputs. Our semi-supervised approach is based on a different intuition: we?d like to set weights
to make confident predictions on unlabeled data as well as the labeled data, an intuition shared by
entropy regularization [24] and transductive SVMs [25].
Experiments. We apply this semi-supervised technique to text classification. Results on several datasets described in [15] are shown in Table 4a; Figure 2 illustrates how the use of unlabeled data improves the performance of our classifier on a single dataset. Overall, we see that using unlabeled data to learn a better regularizer R_*(β) consistently improves the performance of dropout training.
Table 4b shows our results on the IMDB dataset of [12]. The dataset contains 50,000 unlabeled
examples in addition to the labeled train and test sets of size 25,000 each. Whereas the train and
test examples are either positive or negative, the unlabeled examples contain neutral reviews as well.
We train a dropout-regularized logistic regression classifier on unigram/bigram features, and use the
unlabeled data to tune our regularizer. Our method benefits from unlabeled data even in the presence
of a large amount of labeled data, and achieves state-of-the-art accuracy on this dataset.
7 Conclusion
We analyzed dropout training as a form of adaptive regularization. This framework enabled us
to uncover close connections between dropout training, adaptively balanced L2 -regularization, and
AdaGrad; and led to a simple yet effective method for semi-supervised training. There seem to be
multiple opportunities for digging deeper into the connection between dropout training and adaptive
regularization. In particular, it would be interesting to see whether the dropout regularizer takes
on a tractable and/or interpretable form in neural networks, and whether similar semi-supervised
schemes could be used to improve on the results presented in [1].
⁸ Our implementation of semi-supervised MNB. MNB with EM [20] failed to give an improvement.
References
[1] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[2] Laurens van der Maaten, Minmin Chen, Stephen Tyree, and Kilian Q. Weinberger. Learning with marginalized corrupted features. In Proceedings of the International Conference on Machine Learning, 2013.
[3] Sida I. Wang and Christopher D. Manning. Fast dropout training. In Proceedings of the International Conference on Machine Learning, 2013.
[4] Yaser S. Abu-Mostafa. Learning from hints in neural networks. Journal of Complexity, 6(2):192-198, 1990.
[5] Chris J. C. Burges and Bernhard Schölkopf. Improving the accuracy and speed of support vector machines. In Advances in Neural Information Processing Systems, pages 375-381, 1997.
[6] Patrice Y. Simard, Yann A. Le Cun, John S. Denker, and Bernard Victorri. Transformation invariance in pattern recognition: Tangent distance and propagation. International Journal of Imaging Systems and Technology, 11(3):181-197, 2000.
[7] Salah Rifai, Yann Dauphin, Pascal Vincent, Yoshua Bengio, and Xavier Muller. The manifold tangent classifier. Advances in Neural Information Processing Systems, 24:2294-2302, 2011.
[8] Kiyotoshi Matsuoka. Noise injection into inputs in back-propagation learning. Systems, Man and Cybernetics, IEEE Transactions on, 22(3):436-440, 1992.
[9] Chris M. Bishop. Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1):108-116, 1995.
[10] Salah Rifai, Xavier Glorot, Yoshua Bengio, and Pascal Vincent. Adding noise to the input of a model trained with a regularized objective. arXiv preprint arXiv:1104.3250, 2011.
[11] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159, 2010.
[12] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 142-150. Association for Computational Linguistics, 2011.
[13] Sida I. Wang, Mengqiu Wang, Stefan Wager, Percy Liang, and Christopher D. Manning. Feature noising for log-linear structured prediction. In Empirical Methods in Natural Language Processing, 2013.
[14] Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In Proceedings of the International Conference on Machine Learning, 2013.
[15] Sida Wang and Christopher D. Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 90-94. Association for Computational Linguistics, 2012.
[16] Jerome Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1, 2010.
[17] Erich Leo Lehmann and George Casella. Theory of Point Estimation. Springer, 1998.
[18] Koby Crammer, Alex Kulesza, Mark Dredze, et al. Adaptive regularization of weight vectors. Advances in Neural Information Processing Systems, 22:414-422, 2009.
[19] Jiang Su, Jelber Sayyad Shirab, and Stan Matwin. Large scale text classification using semi-supervised multinomial naive Bayes. In Proceedings of the International Conference on Machine Learning, 2011.
[20] Kamal Nigam, Andrew Kachites McCallum, Sebastian Thrun, and Tom Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2-3):103-134, May 2000.
[21] G. Bouchard and B. Triggs. The trade-off between generative and discriminative classifiers. In International Conference on Computational Statistics, pages 721-728, 2004.
[22] R. Raina, Y. Shen, A. Ng, and A. McCallum. Classification with hybrid generative/discriminative models. In Advances in Neural Information Processing Systems, Cambridge, MA, 2004. MIT Press.
[23] J. Suzuki, A. Fujino, and H. Isozaki. Semi-supervised structured output learning based on a hybrid generative and discriminative approach. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning, 2007.
[24] Y. Grandvalet and Y. Bengio. Entropy regularization. In Semi-Supervised Learning, United Kingdom, 2005. Springer.
[25] Thorsten Joachims. Transductive inference for text classification using support vector machines. In Proceedings of the International Conference on Machine Learning, pages 200-209, 1999.
4,290 | 4,883 | Stochastic Gradient Riemannian Langevin Dynamics
on the Probability Simplex
Yee Whye Teh
Department of Statistics
University of Oxford
[email protected]
Sam Patterson
Gatsby Computational Neuroscience Unit
University College London
[email protected]
Abstract
In this paper we investigate the use of Langevin Monte Carlo methods on the
probability simplex and propose a new method, Stochastic gradient Riemannian
Langevin dynamics, which is simple to implement and can be applied to large
scale data. We apply this method to latent Dirichlet allocation in an online minibatch setting, and demonstrate that it achieves substantial performance improvements over the state of the art online variational Bayesian methods.
1 Introduction
In recent years there has been increasing interest in probabilistic models where the latent variables
or parameters of interest are discrete probability distributions over K items, i.e. vectors lying in the
probability simplex
\Delta_K = \{(\pi_1, \dots, \pi_K) : \pi_k \ge 0, \textstyle\sum_k \pi_k = 1\} \subset \mathbb{R}^K \qquad (1)
Important examples include topic models like latent Dirichlet allocation (LDA) [BNJ03], admixture
models in genetics like Structure [PSD00], and discrete directed graphical models with a Bayesian
prior over the conditional probability tables [Hec99].
Standard approaches to inference over the probability simplex include variational inference [Bea03,
WJ08] and Markov chain Monte Carlo methods (MCMC) like Gibbs sampling [GRS96]. In the
context of LDA, many methods have been developed, e.g. variational inference [BNJ03], collapsed
variational inference [TNW07, AWST09] and collapsed Gibbs sampling [GS04]. With the increasingly large scale document corpora to which LDA and other topic models are applied, there has
also been developments of specialised and highly scalable algorithms [NASW09]. Most proposed
algorithms are based on a batch learning framework, where the whole document corpus needs to be
stored and accessed for every iteration. For very large corpora, this framework can be impractical.
Most recently, [Sat01, HBB10, MHB12] proposed online Bayesian variational inference algorithms
(OVB), where on each iteration only a small subset (a mini-batch) of the documents is processed
to give a noisy estimate of the gradient, and a stochastic gradient descent algorithm [RM51] is
employed to update the parameters of interest. These algorithms have shown impressive results on
very large corpora like Wikipedia articles, where it is not even feasible to store the whole dataset in
memory. This is achieved by simply fetching the mini-batch articles in an online manner, processing,
and then discarding them after the mini-batch.
In this paper, we are interested in developing scalable MCMC algorithms for models defined over
the probability simplex. In some scenarios, and particularly in LDA, MCMC algorithms have been
shown to work extremely well, and in fact achieve better results faster than variational inference
on small to medium corpora [GS04, TNW07, AWST09]. However current MCMC methodology
1
have mostly been in the batch framework which, as argued above, cannot scale to the very large
corpora of interest. We will make use of a recently developed MCMC method called stochastic
gradient Langevin dynamics (SGLD) [WT11, ABW12] which operates in a similar online minibatch framework as OVB. Unlike OVB and other stochastic gradient descent algorithms, SGLD
is not a gradient descent algorithm. Rather, it is a Hamiltonian MCMC [Nea10] algorithm which
will asymptotically produce samples from the posterior distribution. It achieves this by updating
parameters according to both the stochastic gradients as well as additional noise which forces it to
explore the full posterior instead of simply converging to a MAP configuration.
There are three difficulties that have to be addressed, however, to successfully apply SGLD to LDA
and other models defined on probability simplices. Firstly, the probability simplex (1) is compact
and has boundaries that has to be accounted for when an update proposes a step that brings the
vector outside the simplex. Secondly, the typical Dirichlet priors over the probability simplex place
most of its mass close to the boundaries and corners of the simplex. This is particularly the case for
LDA and other linguistic models, where probability vectors parameterise distributions over a larger
number of words, and it is often desirable to use distributions that place significant mass on only
a few words, i.e. we want distributions over Δ_K which place most of their mass near the boundaries
and corners. This also causes a problem as, depending on the parameterisation used, the gradient
required for Langevin dynamics is inversely proportional to entries in π and hence can blow up
when components of π are close to zero. Finally, again for LDA and other linguistic models, we
would like algorithms that work well in high-dimensional simplices.
These considerations lead us to the first contribution of this paper in Section 3, which is an investigation into different ways to parameterise the probability simplex. This section shows that the
choice of a good parameterisation is not obvious, and that the use of the Riemannian geometry of
the simplex [Ama95, GC11] is important in designing Langevin MCMC algorithms. In particular,
we show that an unnormalized parameterisation, using a mirroring trick to remove boundaries, coupled with a natural gradient update, achieves the best mixing performance. In Section 4, we then
show that the SGLD algorithm, using this parameterisation and natural gradient updates, performs
significantly better than OVB algorithms [HBB10, MHB12]. Section 2 reviews Langevin dynamics,
natural gradients and SGLD to setup the framework used in the paper, and Section 6 concludes.
2 Review
2.1 Langevin dynamics
Suppose we model a data set x = x_1, …, x_N with a generative model p(x | θ) = \prod_{i=1}^N p(x_i | θ),
parameterized by θ ∈ R^D with prior p(θ), and that our aim is to compute the posterior p(θ | x).
Langevin dynamics [Ken90, Nea10] is an MCMC scheme which produces samples from the
posterior by means of gradient updates plus Gaussian noise, resulting in a proposal distribution
q(θ* | θ) as described by Equation 2:

\theta^* = \theta + \frac{\epsilon}{2}\left(\nabla_\theta \log p(\theta) + \sum_{i=1}^N \nabla_\theta \log p(x_i \mid \theta)\right) + \zeta, \qquad \zeta \sim N(0, \epsilon I) \qquad (2)
The mean of the proposal distribution is in the direction of increasing log posterior due to the gradient, while the added noise will prevent the samples from collapsing to a single (local) maximum.
A Metropolis-Hastings correction step is required to correct for discretisation error, with proposals
accepted with probability \min\left(1, \frac{p(\theta^* \mid x)\, q(\theta \mid \theta^*)}{p(\theta \mid x)\, q(\theta^* \mid \theta)}\right) [RS02]. As ε tends to zero, the acceptance ratio
tends to one as the Markov chain tends to a stochastic differential equation which has p(θ | x) as its
stationary distribution [Ken78].
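To make the proposal and correction concrete, the following is a minimal sketch of one Metropolis-adjusted Langevin step for a generic differentiable log-posterior; the function names and the Gaussian example target are our own illustration, not part of the paper.

```python
import numpy as np

def mala_step(theta, log_post, grad_log_post, eps, rng):
    # Langevin proposal: half gradient step plus N(0, eps I) noise (Equation 2).
    prop = theta + 0.5 * eps * grad_log_post(theta) \
           + np.sqrt(eps) * rng.standard_normal(theta.shape)

    # log q(a | b), up to a constant shared by both directions.
    def log_q(a, b):
        m = b + 0.5 * eps * grad_log_post(b)
        return -np.sum((a - m) ** 2) / (2.0 * eps)

    log_accept = (log_post(prop) - log_post(theta)
                  + log_q(theta, prop) - log_q(prop, theta))
    return prop if np.log(rng.uniform()) < log_accept else theta

# Example: sample a standard Gaussian target.
rng = np.random.default_rng(0)
theta = np.zeros(2)
samples = []
for _ in range(1000):
    theta = mala_step(theta, lambda t: -0.5 * np.sum(t ** 2),
                      lambda t: -t, eps=0.5, rng=rng)
    samples.append(theta)
```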
2.2 Riemannian Langevin dynamics
Langevin dynamics has an isotropic proposal distribution leading to slow mixing if the components
of θ have very different scales or if they are highly correlated. Preconditioning can help with this. A
recent approach, the Riemann manifold Metropolis adjusted Langevin algorithm [GC11], uses a user
chosen matrix G(θ) to precondition in a locally adaptive manner. We will refer to their algorithm
as Riemannian Langevin dynamics (RLD) in this paper. The Riemannian manifold in question is
the family of probability distributions p(x | θ) parameterised by θ, for which the expected Fisher
information matrix I_θ defines a natural Riemannian metric tensor. In fact any positive definite matrix
G(θ) defines a valid Riemannian manifold and hence we are not restricted to using G(θ) = I_θ. This
is important in practice as for many models of interest the expected Fisher information is intractable.
As in Langevin dynamics, RLD consists of a Gaussian proposal q(θ* | θ), along with a Metropolis-Hastings correction step. The proposal distribution can be written as

\theta^* = \theta + \frac{\epsilon}{2}\mu(\theta) + G^{-\frac{1}{2}}(\theta)\,\zeta, \qquad \zeta \sim N(0, \epsilon I) \qquad (3)

where the j-th component of μ(θ) is given by

\mu(\theta)_j = \left(G^{-1}(\theta)\left(\nabla_\theta \log p(\theta) + \sum_{i=1}^N \nabla_\theta \log p(x_i \mid \theta)\right)\right)_j - 2\sum_{k=1}^D \left(G^{-1}(\theta)\frac{\partial G(\theta)}{\partial \theta_k}G^{-1}(\theta)\right)_{jk} + \sum_{k=1}^D \left(G^{-1}(\theta)\right)_{jk} \mathrm{Tr}\left(G^{-1}(\theta)\frac{\partial G(\theta)}{\partial \theta_k}\right) \qquad (4)
The first term in Equation 4 is now the natural gradient of the log posterior. Whereas the standard
gradient gives the direction of steepest ascent in Euclidean space, the natural gradient gives the
direction of steepest ascent taking into account the geometry implied by G(θ). The remaining
terms in Equation 4 describe how the curvature of the manifold defined by G(θ) changes for small
changes in θ. The Gaussian noise in Equation 3 also takes the geometry of the manifold into account,
having scale defined by G^{-1/2}(θ).
2.3 Stochastic gradient Riemannian Langevin dynamics
In the Langevin dynamics and RLD algorithms, the proposal distribution requires calculation of the
gradient of the log likelihood w.r.t. θ, which means processing all N items in the data set. For
large data sets this is infeasible, and even for small data sets it may not be the most efficient use of
computation. The stochastic gradient Langevin dynamics (SGLD) algorithm [WT11] replaces the
calculation of the gradient over the full data set with a stochastic approximation based on a subset
of data. Specifically, at iteration t we sample n data items indexed by D_t, uniformly from the full
data set, and replace the exact gradient in Equation 2 with the approximation

\nabla_\theta \log p(x \mid \theta) \approx \frac{N}{|D_t|} \sum_{i \in D_t} \nabla_\theta \log p(x_i \mid \theta) \qquad (5)

Also, SGLD does not use a Metropolis-Hastings correction step, as calculating the acceptance probability would require use of the full data set, hence defeating the purpose of the stochastic gradient
approximation. Convergence to the posterior is still guaranteed as long as decaying step sizes satisfying \sum_{t=1}^\infty \epsilon_t = \infty, \sum_{t=1}^\infty \epsilon_t^2 < \infty are used.
In this paper we combine the use of a preconditioning matrix G(θ) as in RLD with this stochastic
gradient approximation, by replacing the exact gradient in Equation 4 with the approximation from
Equation 5. The resulting algorithm, stochastic gradient Riemannian Langevin dynamics (SGRLD),
avoids the slow mixing problems of Langevin dynamics, while still being applicable in a large scale
online setting due to its use of stochastic gradients and lack of Metropolis-Hastings correction steps.
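A minimal sketch of the SGLD update just described (Equations 2 and 5 combined, with no Metropolis-Hastings correction) and a polynomially decaying step-size schedule; the schedule constants and function signatures here are illustrative assumptions, not the paper's.

```python
import numpy as np

def sgld(theta0, data, grad_log_prior, grad_log_lik, n_iters=10_000,
         batch_size=50, a=0.01, b=1000.0, c=0.55, seed=0):
    """SGLD: at step t, eps_t = a * (1 + t/b)**(-c), which satisfies
    sum(eps_t) = inf and sum(eps_t**2) < inf for 0.5 < c <= 1."""
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    N = len(data)
    for t in range(n_iters):
        eps = a * (1.0 + t / b) ** (-c)
        batch = data[rng.choice(N, size=batch_size, replace=False)]
        # Stochastic gradient of the log posterior (Equation 5).
        grad = grad_log_prior(theta)
        grad += (N / batch_size) * sum(grad_log_lik(theta, x) for x in batch)
        # Langevin update with injected noise; no accept/reject step.
        theta = theta + 0.5 * eps * grad \
                + np.sqrt(eps) * rng.standard_normal(theta.shape)
        yield theta

# Usage: samples = list(sgld(theta0, data, glp, gll))[burn_in:]
```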
3 Riemannian Langevin dynamics on the probability simplex
In this section, we investigate the issues which arise when applying Langevin Monte Carlo methods, specifically the Langevin dynamics and Riemannian Langevin dynamics algorithms, to models
whose parameters lie on the probability simplex. In these experiments, a Metropolis-Hastings correction step was used. Consider the simplest possible model: a K-dimensional probability vector
π with Dirichlet prior p(π) ∝ \prod_k \pi_k^{\alpha_k - 1}, and data x = x_1, …, x_N with p(x_i = k | π) = π_k.
This results in a Dirichlet posterior p(π | x) ∝ \prod_k \pi_k^{n_k + \alpha_k - 1}, where n_k = \sum_{i=1}^N \delta(x_i = k). In
Table 1: Parameterisation Details. For the Dirichlet posterior above, the four parameterisations,
their log-posterior gradients, and the metrics used for RLD are (n_· = Σ_k n_k, vector operations act
elementwise, and 1 is the all-ones vector):

Reduced-Mean (θ_k = π_k, k = 1, …, K − 1):
  ∇_θ log p(θ|x) = (n_{1:K−1} + α − 1)/θ − (n_K + α − 1)/(1 − Σ_k θ_k)
  G(θ) = n_· (diag(θ)^{−1} + (1 − Σ_k θ_k)^{−1} 11^T);  G^{−1}(θ) = (1/n_·)(diag(θ) − θθ^T)
  both curvature sums in Equation 4 equal (Kθ_j − 1)/n_·

Expanded-Mean (π_k = |θ_k| / Σ_k |θ_k|):
  ∇_θ log p(θ|x) = (n + α − 1)/θ − 1
  G(θ) = diag(θ)^{−1};  G^{−1}(θ) = diag(θ)
  both curvature sums in Equation 4 equal −1

Reduced-Natural (θ_k = log(π_k / (1 − Σ_{k'<K} π_{k'}))):
  ∇_θ log p(θ|x) = n_{1:K−1} + α − (n_· + Kα)π
  G(θ) = n_· (diag(π) − ππ^T);  G^{−1}(θ) = (1/n_·)(diag(π)^{−1} + (1 − Σ_k π_k)^{−1} 11^T)

Expanded-Natural (π_k = e^{θ_k} / Σ_k e^{θ_k}):
  ∇_θ log p(θ|x) = n + α − n_· π − e^θ
  G(θ) = diag(e^θ);  G^{−1}(θ) = diag(e^{−θ})
  both curvature sums in Equation 4 equal e^{−θ_j}
our experiments we use a sparse, symmetric prior with α_k = 0.1 ∀k, and sparse count data, setting
K = 10 and n_1 = 90, n_2 = n_3 = 5 and the remaining n_k to zero. This is to replicate the sparse
nature of the posterior in many models of interest. The qualitative conclusions we draw are not
sensitive to the precise choice of hyperparameters and data here.
There are various possible ways to parameterise the probability simplex, and the performance of
Langevin Monte Carlo depends strongly on the choice of parameterisation. We consider both the
mean and natural parameter spaces, and in each of these we try both a reduced (K − 1 dimensional)
and expanded (K dimensional) parameterisation, with details as follows.
Reduced-Mean: in the mean parameter space, the most obvious approach is to set θ = π directly,
but there are two problems with this. Though π has K components, it must lie on the simplex, a
K − 1 dimensional space. Running Langevin dynamics or RLD on the full K-dimensional parameterisation will result in proposals that are off the simplex with probability one. We can incorporate
the constraint that \sum_{k=1}^K \pi_k = 1 by using the first K − 1 components as the parameter θ, and setting \pi_K = 1 - \sum_{k=1}^{K-1} \pi_k. Note however that the proposals can still violate the boundary constraint
0 < π_k < 1, and this is particularly problematic when the posterior has mass close to the boundaries.
Expanded-Mean: we can simplify boundary considerations using a redundant parameterisation.
We take as our parameter θ ∈ R_+^K with prior a product of independent Gamma(α_k, 1) distributions,
p(θ) ∝ \prod_{k=1}^K \theta_k^{\alpha_k - 1} e^{-\theta_k}. π is then given by \pi_k = \theta_k / \sum_k \theta_k, and so the prior on π is still Dirichlet(α).
The boundary conditions 0 < θ_k can be handled by simply taking the absolute value of the proposed
θ*. This is equivalent to letting θ take values in the whole of R^K, with prior given by Gammas
mirrored at 0, p(θ) ∝ \prod_{k=1}^K |\theta_k|^{\alpha_k - 1} e^{-|\theta_k|}, and \pi_k = |\theta_k| / \sum_k |\theta_k|, which again results in a Dirichlet(α)
prior on π. This approach allows us to bypass boundary issues altogether.
Reduced-Natural: in the natural parameter space, the reduced parameterisation takes the form
\pi_k = \frac{e^{\theta_k}}{1 + \sum_{k=1}^{K-1} e^{\theta_k}} for k = 1, …, K − 1. The prior on θ can be obtained from the Dirichlet(α) prior
on π using a change of variables. There are no boundary constraints as the range of θ_k is R.
?k
Expanded-Natural: finally the expanded-natural parameterisation takes the form ?k = PKe e?k
k=1
for k = 1, . . . , K. As in the expanded-mean parameterisation, we use a product of Gamma priors,
in this case for e?k , so that the prior for ? remains Dirichlet(?).
For all parameterisations, we run both Langevin dynamics and RLD. When applying RLD, we
must choose a metric G(θ). For the reduced parameterisations, we can use the expected Fisher
information matrix, but the redundancy in the full parameterisations means that this matrix has rank
K − 1 and hence is not invertible. For these parameterisations we use the expected Fisher information
matrix for a Gamma/Poisson model, which is equivalent to the Dirichlet/Multinomial apart from the
fact that the total number of data items is considered to be random as well.
The details for each parameterisation are summarised in Table 1. In all cases we are interested
in sampling from the posterior distribution on π, while θ is the specific parameterisation being
used. For the mean parameterisations, the θ^{−1} term in the gradient of the log-posterior means
that for components of θ which are close to zero, the proposal distribution for Langevin dynamics
(Equation 2) has a large mean, resulting in unstable proposals with a small acceptance probability.
Due to the form of G(θ)^{−1}, the same argument holds for the RLD proposal distribution for the
natural parameterisations. This leaves us with three possible combinations: RLD on the expanded-mean parameterisation, and Langevin dynamics on each of the natural parameterisations.
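For the expanded-mean parameterisation the drift and noise are available in closed form, so a single update is cheap to write down. The following sketch applies it to the Dirichlet example above, combining the gradient, metric and curvature terms from Table 1; it is our own illustration (with the Metropolis-Hastings correction omitted for brevity), not code from the paper.

```python
import numpy as np

def expanded_mean_rld_step(theta, counts, alpha, eps, rng):
    """One expanded-mean Riemannian Langevin update for a Dirichlet
    posterior with counts n_k. With G(theta) = diag(theta)^(-1), the
    natural gradient plus the curvature terms give the drift
    alpha + n_k - pi_k * n_total - theta_k, and the noise scale is
    sqrt(eps * theta_k)."""
    n_total = counts.sum()
    pi = theta / theta.sum()
    drift = alpha + counts - pi * n_total - theta
    prop = theta + 0.5 * eps * drift \
           + np.sqrt(eps * theta) * rng.standard_normal(theta.shape)
    return np.abs(prop)  # mirror negative proposals back into R_+^K

rng = np.random.default_rng(1)
counts = np.array([90., 5., 5.] + [0.] * 7)   # sparse data from Section 3
theta = rng.gamma(0.1, 1.0, size=10)          # draw from the Gamma(0.1, 1) prior
for _ in range(5000):
    theta = expanded_mean_rld_step(theta, counts, alpha=0.1, eps=0.1, rng=rng)
pi_sample = theta / theta.sum()
```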
[Figure 1, two panels: (a) effective sample size (ESS, log scale) against step size (10^{-5} to 10^{0}) for
Expanded-Mean RLD, Reduced-Natural LD and Expanded-Natural LD, showing the median and
mean ESS over the components of π; (b) traces of the 1000 thinned samples from each of the three
samplers.]
Figure 1: Effective sample size and samples. Burn-in iterations is 10,000; thinning factor 100.
To investigate their relative performances we run a small experiment, producing 110,000 samples
from each of the three remaining parameterisations, discarding 10,000 burn-in samples and thinning
the remaining samples by a factor of 100. For the resulting 1000 thinned samples of θ, we calculate
the corresponding samples of π, and compute the effective sample size for each component of π.
This was done for a range of step sizes ε, and the mean and median effective sample sizes for the
components of π are shown in Figure 1(a).
Figure 1(b) shows the samples from each sampler at their optimal step size of 0.1. The samples
from Langevin dynamics on both natural parameterisations display higher auto-correlation than the
RLD samples produced using the expanded-mean parameterisation, as would be expected from their
lower effective sample sizes. In addition to the increased effective sample size, the expanded-mean
parameterisation RLD sampler has the advantage that it is computationally efficient, as G(θ) is a
diagonal matrix. Hence it is this algorithm that we use when applying these techniques to latent
Dirichlet allocation in Section 4.
4 Applying Riemannian Langevin dynamics to latent Dirichlet allocation
Latent Dirichlet Allocation (LDA) [BNJ03] is a hierarchical Bayesian model, most frequently used
to model topics arising in collections of text documents. The model consists of K topics π_k, which
are distributions over the words in the collection, drawn from a symmetric Dirichlet prior with
hyper-parameter β. A document d is then modelled by a mixture of topics, with mixing proportion
θ_d, drawn from a symmetric Dirichlet prior with hyper-parameter α. The model corresponds to a
generative process where documents are produced by drawing a topic assignment z_{di} i.i.d. from θ_d
for each word w_{di} in document d, and then drawing the word w_{di} from the corresponding topic π_{z_{di}}.
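For concreteness, a small sketch of this generative process (illustrative only; the sizes and hyper-parameters here are arbitrary):

```python
import numpy as np

def generate_lda_corpus(D, K, W, n_words, alpha, beta, rng):
    """Generate documents from the LDA generative process described above."""
    pi = rng.dirichlet(np.full(W, beta), size=K)      # K topics over W words
    docs = []
    for d in range(D):
        theta_d = rng.dirichlet(np.full(K, alpha))    # topic proportions
        z = rng.choice(K, size=n_words, p=theta_d)    # topic assignments
        w = np.array([rng.choice(W, p=pi[k]) for k in z])
        docs.append(w)
    return docs, pi

docs, pi = generate_lda_corpus(D=100, K=10, W=500, n_words=50,
                               alpha=0.1, beta=0.01,
                               rng=np.random.default_rng(0))
```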
We integrate out θ analytically, resulting in the semi-collapsed distribution:

p(w, z, \pi \mid \alpha, \beta) = \prod_{d=1}^D \left[\frac{\Gamma(K\alpha)}{\Gamma(K\alpha + n_{d\cdot\cdot})} \prod_{k=1}^K \frac{\Gamma(\alpha + n_{dk\cdot})}{\Gamma(\alpha)}\right] \prod_{k=1}^K \left[\frac{\Gamma(W\beta)}{\Gamma(\beta)^W} \prod_{w=1}^W \pi_{kw}^{\beta + n_{\cdot kw} - 1}\right] \qquad (6)

where as in [TNW07], n_{dkw} = \sum_{i=1}^{N_d} \delta(w_{di} = w, z_{di} = k) and · denotes summation over the
corresponding index. Conditional on π, the documents are i.i.d., and we can factorise Equation 6:

p(w, z, \pi \mid \alpha, \beta) = p(\pi \mid \beta) \prod_{d=1}^D p(w_d, z_d \mid \pi, \alpha) \qquad (7)

where

p(w_d, z_d \mid \pi, \alpha) = \frac{\Gamma(K\alpha)}{\Gamma(K\alpha + n_{d\cdot\cdot})} \prod_{k=1}^K \left[\frac{\Gamma(\alpha + n_{dk\cdot})}{\Gamma(\alpha)} \prod_{w=1}^W \pi_{kw}^{n_{dkw}}\right] \qquad (8)

4.1 Stochastic gradient Riemannian Langevin dynamics for LDA
As we would like to apply these techniques to large document collections, we use the stochastic gradient version of the Riemannian Langevin dynamics algorithm, as detailed in Section 2.3.
Following the investigation in Section 3 we use the expanded-mean parameterisation. For each
of the K topics π_k, we introduce a W-dimensional unnormalised parameter θ_k with an independent Gamma prior p(θ_k) ∝ \prod_{w=1}^W \theta_{kw}^{\beta - 1} e^{-\theta_{kw}}, and set \pi_{kw} = \theta_{kw} / \sum_w \theta_{kw} for w = 1, …, W.
We use the mirroring idea as well. The metric G(θ) is then the diagonal matrix G(θ) =
diag(θ_{11}, …, θ_{1W}, …, θ_{K1}, …, θ_{KW})^{−1}.
The algorithm runs on mini-batches of documents: at time t it receives a mini-batch of documents
indexed by D_t, drawn at random from the full corpus D. The stochastic gradient of the log posterior
of θ on D_t is shown in Equation 9:

\frac{\partial \log p(\theta \mid w, \alpha, \beta)}{\partial \theta_{kw}} \approx \frac{\beta - 1}{\theta_{kw}} - 1 + \frac{|D|}{|D_t|} \sum_{d \in D_t} E_{z_d \mid w_d, \theta, \alpha}\left[\frac{n_{dkw}}{\theta_{kw}} - \frac{n_{dk\cdot}}{\theta_{k\cdot}}\right] \qquad (9)
For this choice of θ and G(θ), we use Equations 3 and 4 to give the SGRLD update for θ:

\theta_{kw}^* = \left|\theta_{kw} + \frac{\epsilon}{2}\left(\beta - \theta_{kw} + \frac{|D|}{|D_t|} \sum_{d \in D_t} E_{z_d \mid w_d, \theta, \alpha}\left[n_{dkw} - \pi_{kw} n_{dk\cdot}\right]\right) + (\theta_{kw})^{\frac{1}{2}} \zeta_{kw}\right| \qquad (10)

where ζ_{kw} ∼ N(0, ε). Note that the θ^{−1} term in Equation 9 has been replaced with π in Equation 10,
as the −1 cancels with the curvature terms as detailed in Table 1. As discussed in Section 3, we
reflect moves across the boundary 0 < θ_{kw} by taking the absolute value of the proposed update.
Comparing Equation 9 to the gradient for the simple model from Section 3, the observed counts
n_k for the simple model have been replaced with the expectation of the latent topic assignment
counts n_{dkw}. To calculate this expectation we use Gibbs sampling on the topic assignments in each
document separately, using the conditional distributions

p(z_{di} = k \mid w_d, \theta, \alpha) = \frac{\left(\alpha + n_{dk\cdot}^{\backslash i}\right) \pi_{k w_{di}}}{\sum_k \left(\alpha + n_{dk\cdot}^{\backslash i}\right) \pi_{k w_{di}}} \qquad (11)

where \i represents a count excluding the topic assignment variable we are updating.
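Putting Equations 9–11 together, one SGRLD step on a mini-batch can be sketched as follows. This is our own simplified reading of the algorithm — dense arrays, a fixed number of Gibbs sweeps, hypothetical function names — not the authors' released code.

```python
import numpy as np

def gibbs_topic_counts(doc_words, theta, alpha, n_sweeps, rng):
    """Estimate E[n_dkw] for one document (an int array of word ids) by
    Gibbs sampling the assignments z_di (Equation 11); the first half of
    the sweeps is discarded as burn-in."""
    K, W = theta.shape
    pi = theta / theta.sum(axis=1, keepdims=True)
    z = rng.integers(K, size=len(doc_words))
    ndk = np.bincount(z, minlength=K).astype(float)
    mean_ndkw = np.zeros((K, W))
    kept = 0
    for sweep in range(n_sweeps):
        for i, w in enumerate(doc_words):
            ndk[z[i]] -= 1
            p = (alpha + ndk) * pi[:, w]
            z[i] = rng.choice(K, p=p / p.sum())
            ndk[z[i]] += 1
        if sweep >= n_sweeps // 2:
            np.add.at(mean_ndkw, (z, doc_words), 1.0)
            kept += 1
    return mean_ndkw / kept

def sgrld_step(theta, minibatch, n_docs_total, alpha, beta, eps, rng):
    """One stochastic gradient Riemannian Langevin update (Equation 10)."""
    exp_ndkw = np.zeros_like(theta)
    for doc_words in minibatch:
        exp_ndkw += gibbs_topic_counts(doc_words, theta, alpha, 8, rng)
    pi = theta / theta.sum(axis=1, keepdims=True)
    scale = n_docs_total / len(minibatch)
    drift = beta - theta + scale * (exp_ndkw
                                    - pi * exp_ndkw.sum(axis=1, keepdims=True))
    prop = theta + 0.5 * eps * drift \
           + np.sqrt(eps * theta) * rng.standard_normal(theta.shape)
    return np.abs(prop)  # mirror across the boundary at zero
```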
5 Experiments
We investigate the performance of SGRLD, with no Metropolis-Hastings correction step, on two
real-world data sets. We compare it to two online variational Bayesian algorithms developed
for latent Dirichlet allocation: online variational Bayes (OVB) [HBB10] and hybrid stochastic
variational-Gibbs (HSVG) [MHB12]. The difference between these two methods is the form of variational assumption
made. OVB assumes a mean-field variational posterior, q(θ_{1:D}, z_{1:D}, π_{1:K}) =
\prod_d q(\theta_d) \prod_{d,i} q(z_{di}) \prod_k q(\pi_k); in particular this means topic assignment variables within the same
document are assumed to be independent, when in reality they will be strongly coupled. In contrast,
HSVG collapses θ_d analytically and uses a variational posterior of the form q(z_{1:D}, π_{1:K}) =
\prod_d q(z_d) \prod_k q(\pi_k), which allows dependence within the components of z_d. This more complicated
posterior requires Gibbs sampling in the variational update step for z_d, and we combined the code
for OVB [HBB10] with the Gibbs sampling routine from our SGRLD code to implement HSVG.
5.1 Evaluation Method
The predictive performance of the algorithms can be measured by looking at the probability they
assign to unseen data. A metric frequently used for this purpose is perplexity, the exponentiated
cross entropy between the trained model probability distribution and the empirical distribution of
the test data. For a held-out document w_d and a training set W, the perplexity is given by

\mathrm{perp}(w_d \mid \mathcal{W}, \alpha, \beta) = \exp\left(-\frac{\sum_{i=1}^{n_{d\cdot\cdot}} \log p(w_{di} \mid \mathcal{W}, \alpha, \beta)}{n_{d\cdot\cdot}}\right) \qquad (12)
This requires calculating p(w_{di} | W, α, β), which is done by marginalising out the parameters
θ_d, π_1, …, π_K and topic assignments z_d, to give

p(w_{di} \mid \mathcal{W}, \alpha, \beta) = E_{\theta_d, \pi}\left[\sum_k \theta_{dk} \pi_{k w_{di}}\right] \qquad (13)

We use a document completion approach [WMSM09], partitioning the test document w_d into two
sets of words, w_d^{train} and w_d^{test}, using w_d^{train} to estimate θ_d for the test document, then calculating the
perplexity on w_d^{test} using this estimate.
To calculate the perplexity for SGRLD, we integrate θ analytically, so Equation 13 is replaced by

p(w_{di} \mid w_d^{train}, \mathcal{W}, \alpha, \beta) = E_{\pi \mid \mathcal{W}, \beta}\left[E_{z_d^{train} \mid \pi, \alpha}\left[\sum_k \hat{\theta}_{dk} \pi_{k w_{di}}\right]\right] \qquad (14)

where

\hat{\theta}_{dk} := p(z_{di}^{test} = k \mid z_d^{train}, \alpha) = \frac{n_{dk\cdot}^{train} + \alpha}{n_{d\cdot\cdot}^{train} + K\alpha} \qquad (15)
We estimate these expectations using the samples we obtain for π from the Markov chain produced
by SGRLD, and samples for z_d^{train} produced by Gibbs sampling the topic assignments on w_d^{train}.
For OVB and HSVG, we estimate Equation 13 by replacing the true posterior p(θ, π) with q(θ, π):

p(w_{di} \mid \mathcal{W}, \alpha, \beta) = E_{p(\theta_d, \pi \mid \mathcal{W}, \alpha, \beta)}\left[\sum_k \theta_{dk} \pi_{k w_{di}}\right] \approx \sum_k E_{q(\theta_d)}[\theta_{dk}]\, E_{q(\pi_k)}[\pi_{k w_{di}}] \qquad (16)

We estimate the perplexity directly rather than use a variational bound [HBB10] so that we can
compare results of the variational algorithms to those of SGRLD.
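A sketch of the document-completion estimate of Equations 14–15 for a single posterior sample of π; averaging over samples of π is elided, and all names here are our own, not the evaluation code used in the paper.

```python
import numpy as np

def completion_log_prob(train_words, test_words, pi, alpha, rng, n_sweeps=20):
    """log p(w_test | w_train, pi) for one posterior sample of pi:
    Gibbs-sample z_train (Equation 11 with fixed pi), average theta_hat
    (Equation 15) over the second half of the sweeps, then score the
    test words (Equation 14)."""
    K = pi.shape[0]
    z = rng.integers(K, size=len(train_words))
    ntk = np.bincount(z, minlength=K).astype(float)
    theta_hat = np.zeros(K)
    kept = 0
    for sweep in range(n_sweeps):
        for i, w in enumerate(train_words):
            ntk[z[i]] -= 1
            p = (alpha + ntk) * pi[:, w]
            z[i] = rng.choice(K, p=p / p.sum())
            ntk[z[i]] += 1
        if sweep >= n_sweeps // 2:
            theta_hat += (ntk + alpha) / (len(train_words) + K * alpha)
            kept += 1
    theta_hat /= kept
    return np.log(theta_hat @ pi[:, test_words]).sum()

# Perplexity (Equation 12): exp(-total log prob / number of test words);
# the expectation over samples of pi should be taken inside the log.
```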
5.2 Results on NIPS corpus
The first experiment was carried out on the collection of NIPS papers from 1988-2003 [GCPT07].
This corpus contains 2483 documents, which is small enough to run all three algorithms in batch
mode and compare their performance to that of collapsed Gibbs sampling on the full collection.
Each document was split 80/20 into training and test sets, the training portion of all 2483 documents
were used in each update step, and the perplexity was calculated on the test portion of all documents. Hyper-parameters α and β were both fixed to 0.01, and 50 topics were used. A step-size
schedule of the form ε_t = (a · (1 + t/b))^{−c} was used. Perplexities were estimated for a range of step
size parameters, and for 1, 5 and 10 document updates per topic parameter update. For OVB the
document updates are fixed point iterations of q(zd ) while for HSVG and SGRLD they are Gibbs
updates of zd , the first half of which were discarded as burn-in. These numbers of document updates
were chosen as previous investigation of the performance of HSVG for varying numbers of Gibbs
updates has shown that 6-10 updates are sufficient [MHB12] to achieve good performance.
Figure 2(a) shows the lowest perplexities achieved along with the corresponding parameter settings.
As expected, CGS achieves the lowest perplexities. It is surprising that HSVG performs slightly
worse than OVB on this data set. As it uses a less restricted variational distribution it should perform
at least as well. SGRLD improves on the performance of OVB and HSVG, but does not match the
performance of Gibbs sampling.
5.3 Results on Wikipedia corpus
The algorithms? performances in an online scenario was assessed on a set of articles downloaded
at random from Wikipedia, as in [HBB10]. The vocabulary used is again as per [HBB10]; it is
not created from the Wikipedia data set, instead it is taken from the top 10,000 words in Project
Gutenburg texts, excluding all words of less than three characters. This results in vocabulary size
W of approximately 8000 words. 150,000 documents from Wikipedia were used in total, in minibatches of 50 documents each. The perplexities were estimated using the methods discussed in
[Figure 2, two panels of test-set perplexity curves: (a) NIPS corpus — HSVG, OVB, SGRLD and
collapsed Gibbs, with perplexities roughly in the range 1200–2200; (b) Wikipedia corpus — HSVG,
OVB and SGRLD over 150,000 documents, with perplexities roughly in the range 1400–2200.]
Figure 2: Test-set perplexities on NIPS and Wikipedia corpora.
Section 5.1 on a separate holdout set of 1000 documents, split 90/10 training/test. As the corpus size
is large, collapsed Gibbs sampling was not run on this data set.
For each algorithm a grid-search was run on the hyper-parameters, step-size parameters, and number of Gibbs sampling sweeps / variational fixed point iterations per π update. The lowest three
perplexities attained for each algorithm are shown in Figure 2(b). Corresponding parameters are
given in the supplementary material. HSVG achieves better performance than OVB, as expected.
The performance of SGRLD is a substantial improvement on both the variational algorithms.
6 Discussion
We have explored the issues involved in applying Langevin Monte Carlo techniques to a constrained
parameter space such as the probability simplex, and developed a novel online sampling algorithm
which addresses those issues. Using an expanded parametrisation with a reflection trick for negative
proposals removed the need to deal with boundary constraints, and using the Riemannian geometry
of the parameter space dealt with the problem of parameters with differing scales.
Applying the method to Latent Dirichlet Allocation on two data sets produced state of the art predictive performance for the same computational budget as competing methods, demonstrating that
full Bayesian inference using MCMC can be practically applied to models of interest, even when
the data set is large. Python code for our method is available at http://www.stats.ox.ac.uk/~teh/sgrld.html.
Due to the widespread use of models defined on the probability simplex, we believe the methods
developed here for Langevin dynamics on the probability simplex will find further uses beyond latent
Dirichlet allocation and stochastic gradient Monte Carlo methods. A drawback of SGLD algorithms
is the need for decreasing step sizes; it would be interesting to investigate adaptive step sizes and the
approximation entailed when using fixed step sizes (but see [AKW12] for a recent development).
Acknowledgements
We thank the Gatsby Charitable Foundation and EPSRC (grant EP/K009362/1) for generous funding, reviewers and area chair for feedback and support, and [HBB10] for use of their excellent
publicly available source code.
References
[ABW12] Sungjin Ahn, Anoop Korattikara Balan, and Max Welling, Bayesian posterior sampling via stochastic gradient Fisher scoring, ICML, 2012.
[AKW12] S. Ahn, A. Korattikara, and M. Welling, Bayesian posterior sampling via stochastic gradient Fisher scoring, Proceedings of the International Conference on Machine Learning, 2012.
[Ama95] S. Amari, Information geometry of the EM and em algorithms for neural networks, Neural Networks 8 (1995), no. 9, 1379–1408.
[AWST09] A. Asuncion, M. Welling, P. Smyth, and Y. W. Teh, On smoothing and inference for topic models, Proceedings of the International Conference on Uncertainty in Artificial Intelligence, vol. 25, 2009.
[Bea03] M. J. Beal, Variational algorithms for approximate Bayesian inference, Ph.D. thesis, Gatsby Computational Neuroscience Unit, University College London, 2003.
[BNJ03] D. M. Blei, A. Y. Ng, and M. I. Jordan, Latent Dirichlet allocation, Journal of Machine Learning Research 3 (2003), 993–1022.
[GC11] M. Girolami and B. Calderhead, Riemann manifold Langevin and Hamiltonian Monte Carlo methods, Journal of the Royal Statistical Society B 73 (2011), 1–37.
[GCPT07] A. Globerson, G. Chechik, F. Pereira, and N. Tishby, Euclidean embedding of co-occurrence data, The Journal of Machine Learning Research 8 (2007), 2265–2295.
[GRS96] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, Markov chain Monte Carlo in practice, Chapman and Hall, 1996.
[GS04] T. L. Griffiths and M. Steyvers, Finding scientific topics, Proceedings of the National Academy of Sciences, 2004.
[HBB10] M. D. Hoffman, D. M. Blei, and F. Bach, Online learning for latent Dirichlet allocation, Advances in Neural Information Processing Systems, 2010.
[Hec99] D. Heckerman, A tutorial on learning with Bayesian networks, Learning in Graphical Models (M. I. Jordan, ed.), Kluwer Academic Publishers, 1999.
[Ken78] J. Kent, Time-reversible diffusions, Advances in Applied Probability 10 (1978), 819–835.
[Ken90] A. D. Kennedy, The theory of hybrid stochastic algorithms, Probabilistic Methods in Quantum Field Theory and Quantum Gravity, Plenum Press, 1990.
[MHB12] D. Mimno, M. Hoffman, and D. Blei, Sparse stochastic inference for latent Dirichlet allocation, Proceedings of the International Conference on Machine Learning, 2012.
[NASW09] D. Newman, A. Asuncion, P. Smyth, and M. Welling, Distributed algorithms for topic models, Journal of Machine Learning Research (2009).
[Nea10] R. M. Neal, MCMC using Hamiltonian dynamics, Handbook of Markov Chain Monte Carlo (S. Brooks, A. Gelman, G. Jones, and X.-L. Meng, eds.), Chapman & Hall / CRC Press, 2010.
[PSD00] J. K. Pritchard, M. Stephens, and P. Donnelly, Inference of population structure using multilocus genotype data, Genetics 155 (2000), 945–959.
[RM51] H. Robbins and S. Monro, A stochastic approximation method, Annals of Mathematical Statistics 22 (1951), no. 3, 400–407.
[RS02] G. O. Roberts and O. Stramer, Langevin diffusions and Metropolis-Hastings algorithms, Methodology and Computing in Applied Probability 4 (2002), 337–357, 10.1023/A:1023562417138.
[Sat01] M. Sato, Online model selection based on the variational Bayes, Neural Computation 13 (2001), 1649–1681.
[TNW07] Y. W. Teh, D. Newman, and M. Welling, A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation, Advances in Neural Information Processing Systems, vol. 19, 2007, pp. 1353–1360.
[WJ08] M. J. Wainwright and M. I. Jordan, Graphical models, exponential families, and variational inference, Foundations and Trends in Machine Learning 1 (2008), no. 1-2, 1–305.
[WMSM09] Hanna M. Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno, Evaluation methods for topic models, Proceedings of the 26th International Conference on Machine Learning (ICML) (Montreal) (Léon Bottou and Michael Littman, eds.), Omnipress, June 2009, pp. 1105–1112.
[WT11] M. Welling and Y. W. Teh, Bayesian learning via stochastic gradient Langevin dynamics, Proceedings of the International Conference on Machine Learning, 2011.
4,291 | 4,884 | Restricting exchangeable nonparametric distributions
Sinead A. Williamson
University of Texas at Austin
Steven N. MacEachern
The Ohio State University
Eric P. Xing
Carnegie Mellon University
Abstract
Distributions over matrices with exchangeable rows and infinitely many columns
are useful in constructing nonparametric latent variable models. However, the distribution implied by such models over the number of features exhibited by each
data point may be poorly-suited for many modeling tasks. In this paper, we propose a class of exchangeable nonparametric priors obtained by restricting the domain of existing models. Such models allow us to specify the distribution over the
number of features per data point, and can achieve better performance on data sets
where the number of features is not well-modeled by the original distribution.
1 Introduction
The Indian buffet process [IBP, 11] is one of several distributions over matrices with exchangeable
rows and infinitely many columns, only a finite (but random) number of which contain any non-zero
entries. Such distributions have proved useful for constructing flexible latent feature models that do
not require us to specify the number of latent features a priori. In such models, each column of the
random matrix corresponds to a latent feature, and each row to a data point. The non-zero elements
of a row select the subset of features that contribute to the corresponding data point.
However, distributions such as the IBP make certain assumptions about the structure of the data that
may be inappropriate. Specifically, such priors impose distributions on the number of data points that
exhibit a given feature, and on the number of features exhibited by a given data point. For example,
in the IBP, the number of features exhibited by a data point is marginally Poisson-distributed, and
the probability of a data point exhibiting a previously-observed feature is proportional to the number
of times that feature has been seen so far.
These distributional assumptions may not be appropriate for many modeling tasks. For example,
the IBP has been used to model both text [17] and network [13] data. It is well known that word
frequencies in text corpora and degree distributions of networks often exhibit power-law behavior;
it seems reasonable to suppose that this behavior would be better captured by models that assume
a heavy-tailed distribution over the number of latent features, rather than the Poisson distribution
assumed by the IBP and related random matrices.
In certain cases we may instead wish to add constraints on the number of latent features exhibited
per data point, particularly in cases where we expect, or desire, the latent features to correspond
to interpretable features, or causes, of the data [20]. For example, we might believe that each data
point exhibits exactly S features ? corresponding perhaps to speakers in a dialog, members of a
team, or alleles in a genotype ? but be agnostic about the total number of features in our data set. A
model that explicitly encodes this prior expectation about the number of features per data point will
tend to lead to more interpretable and parsimonious results. Alternatively, we may wish to specify
a minimum number of latent features. For example, the IBP has been used to select possible next
states in a hidden Markov model [10]. In such a model, we do not expect to see a state that allows
no transitions (including self-transitions). Nonetheless, because a data point in the IBP can have
zero features with non-zero probability, this situation can occur, resulting in an invalid transition
distribution.
In this paper, we propose a method for modifying the distribution over the number of non-zero elements per row in arbitrary exchangeable matrices, allowing us to control the number of features per
data point in a corresponding latent feature model. We show that our construction yields exchangeable distributions, and present Monte Carlo methods for posterior inference. Our experimental evaluation shows that this approach allows us to incorporate prior beliefs about the number of features
per data point into our model, yielding superior modeling performance.
2 Exchangeability
We say a finite sequence (X_1, …, X_N) is exchangeable [see, for example, 1] if its distribution
is unchanged under any permutation σ of {1, …, N}. Further, we say that an infinite sequence
X1 , X2 , . . . is infinitely exchangeable if all of its finite subsequences are exchangeable. Such distributions are appropriate when we do not believe the order in which we see our data is important. In
such cases, a model whose posterior distribution depends on the order in which we see our data is
not justified. In addition, exchangeable models often yield efficient Gibbs samplers.
De Finetti's law tells us that a sequence is exchangeable iff the observations are i.i.d. given some
latent distribution. This means that we can write the probability of any exchangeable sequence as

P(X_1 = x_1, X_2 = x_2, \dots) = \int_\Theta \prod_i f_\theta(X_i = x_i)\,\nu(d\theta) \qquad (1)

for some probability distribution ν over parameter space Θ, and some parametrized family {f_θ}_{θ∈Θ}
of conditional probability distributions.
Throughout this paper, we will use the notation p(x_1, x_2, …) = P(X_1 = x_1, X_2 = x_2, …) to
represent the joint distribution over an exchangeable sequence x_1, x_2, …; p(x_{n+1} | x_1, …, x_n) to
represent the associated predictive distribution; and p(x_1, …, x_n, θ) := \prod_{i=1}^n f_\theta(X_i = x_i)\,\nu(\theta) to
represent the joint distribution over the observations and the parameter θ.
2.1 Distributions over exchangeable matrices
The Indian buffet process [IBP, 11] is a distribution over binary matrices with exchangeable rows and
infinitely many columns. In the de Finetti representation, the mixing distribution ν is a beta process,
the parameter is a countably infinite measure with atom sizes π_k ∈ (0, 1], and the conditional
distribution is a Bernoulli process [17]. The beta process and the Bernoulli process are both
completely random measures [CRM, 12] – distributions over random measures on some space Ω that
assign independent masses to disjoint subsets of Ω, and that can be written in the form B = \sum_{k=1}^\infty \pi_k \delta_{\omega_k}.
We can think of each atom of B as determining the latent probability for a column of a matrix with
infinitely many columns, and the Bernoulli process as sampling binary values for the entries of that
column of the matrix. The resulting matrix has a finite number of non-zero entries, with the number
of non-zero entries in each row distributed as Poisson(α) and the total number of non-zero columns
in N rows distributed as Poisson(αH_N), where H_N is the Nth harmonic number. The number of
rows with a non-zero entry for a given column exhibits a "rich gets richer" property – a new row has
a one in a given column with probability proportional to the number of times a one has appeared in
that column in the preceding rows.
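The predictive scheme just described translates directly into a sampler; the following is a standard IBP simulation sketch (our illustration, not code from the paper).

```python
import numpy as np

def sample_ibp(n_rows, alpha, rng):
    """Sample a binary matrix from the IBP predictive: row i+1 reuses
    column k with probability m_k/(i+1) and adds Poisson(alpha/(i+1))
    new columns of its own."""
    Z = np.zeros((n_rows, 0), dtype=int)
    for i in range(n_rows):
        m = Z.sum(axis=0)                        # times each column was used
        old = (rng.uniform(size=Z.shape[1]) < m / (i + 1)).astype(int)
        k_new = rng.poisson(alpha / (i + 1))     # brand-new columns
        Z = np.hstack([Z, np.zeros((n_rows, k_new), dtype=int)])
        Z[i, :len(old)] = old
        Z[i, len(old):] = 1
    return Z

Z = sample_ibp(20, alpha=5.0, rng=np.random.default_rng(0))
```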
Different patterns of behavior can be obtained with different choices of CRM. A three-parameter
extension to the IBP [15] replaces the beta process with a completely random measure called the
stable-beta process, which includes the beta process as a special case. The resulting random matrix
exhibits power law behavior: the total number of features exhibited in a data set of size N grows
as O(N s ) for some s > 0, and the number of data points exhibiting each feature also follows
a power law. The number of features per data point, however, remains Poisson-distributed. The
infinite gamma-Poisson process [iGaP, 18] replaces the beta process with a gamma process, and
the Bernoulli process with a Poisson process, to give a distribution over non-negative integer-valued
matrices with infinitely many columns and exchangeable rows. In this model, the sum of each row is
distributed according to a negative binomial distribution, and the number of non-zero entries in each
row is Poisson-distributed. The beta-negative binomial process [21] replaces the Bernoulli process
with a negative binomial process to get an alternative distribution over non-negative integer-valued
matrices.
3 Removing the Poisson assumption
While different choices of CRMs in the de Finetti construction can alter the distribution over the
number of data points that exhibit a feature and (in the case of non-binary matrices) the row sums,
they retain a marginally Poisson distribution over the number of distinct features exhibited by a
given data point. The construction of Caron [4] extends the IBP to allow the number of features in
each row to follow a mixture of Poissons, by assigning data point-specific parameters that have an
effect equivalent to a monotonic transformation on the atom sizes in the underlying beta process;
however conditioned on these parameters, the sum of each row is still Poisson-distributed.
This repeatedly occurring Poisson distribution is a direct result of the construction of a binary matrix
from a combination of CRMs. To elaborate on this, note that, marginally, the distribution over the
value of each element z_ik of a row z_i of the IBP (or a three-parameter IBP) is given by a Bernoulli
distribution. Therefore, by the law of rare events, the sum Σ_k z_ik is distributed according to a
Poisson distribution.
A similar argument applies to integer-valued matrices such as the infinite gamma-Poisson process.
Marginally, the distribution over whether an element z_ik is greater than zero is given by a Bernoulli
distribution, hence the number of non-zero elements, Σ_k (z_ik ∧ 1), is Poisson-distributed. The
distribution over the row sum, Σ_k z_ik, will depend on the choice of CRMs.
It follows that, if we wish to circumvent the requirement of a Poisson (or mixture of Poisson) number
of features per data point in an IBP-like model, we must remove the completely random assumption
on either the de Finetti mixing distribution or the family of conditional distributions. The remainder
of this section discusses how we can obtain arbitrary marginal distributions over the number of
features per row by using conditional distributions that are not completely random.
3.1 Restricting the family of conditional distributions in the de Finetti representation
Recall from Section 2 that any exchangeable sequence can be represented as a mixture over some
family of conditional distributions. The support of this family determines the support of the exchangeable sequence. For example, in the IBP the family of conditional distributions is the Bernoulli
process, which has support in {0, 1}^∞. A sample from the IBP therefore has support in ({0, 1}^∞)^N.
We are familiar with the idea of restricting the support of a distribution to a measurable subset. For
example, a truncated Gaussian is a Gaussian distribution restricted to a contiguous section of the real
line. In general, we can restrict an arbitrary probability distribution ν with support Ω to a measurable
subset A ⊂ Ω by defining ν|_A(x) := ν(x) I(x ∈ A) / ν(A).
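As a concrete (if naive) illustration of this restriction, one can sample from ν|_A by rejection; this sketch assumes only that ν(A) > 0 and that membership in A is checkable.

```python
def sample_restricted(sample_fn, in_A, rng, max_tries=10_000):
    """Draw from nu restricted to A by rejection: propose from nu and
    keep the first draw that lands in A (valid whenever nu(A) > 0)."""
    for _ in range(max_tries):
        x = sample_fn(rng)
        if in_A(x):
            return x
    raise RuntimeError("nu(A) appears too small for rejection sampling")
```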
Theorem 1 (Restricted exchangeable distributions). We can restrict the support of an exchangeable
distribution by restricting the family of conditional distributions {f_θ}_{θ∈Θ} introduced in Equation 1,
to obtain an exchangeable distribution on the restricted space.
Proof. Consider an unrestricted exchangeable model with de Finetti representation
p(x_1, …, x_N, θ) = \prod_{i=1}^N f_\theta(X_i = x_i)\,\nu(\theta). Let p^{|A} be the restriction of p such that
X_i ∈ A, i = 1, 2, …, obtained by restricting the family of conditional distributions {f_θ} to
{f_θ^{|A}} as described above. Then

p^{|A}(x_1, \dots, x_N, \theta) = \prod_{i=1}^N f_\theta^{|A}(X_i = x_i)\,\nu(\theta) = \prod_{i=1}^N \frac{f_\theta(X_i = x_i)\, I(x_i \in A)}{f_\theta(X_i \in A)}\,\nu(\theta),

and

p^{|A}(x_{N+1} \mid x_1, \dots, x_N) \propto I(x_{N+1} \in A) \int_\Theta \frac{\prod_{i=1}^{N+1} f_\theta(X_i = x_i)}{\prod_{i=1}^{N+1} f_\theta(X_i \in A)}\,\nu(d\theta) \qquad (2)

is an exchangeable sequence by construction, according to de Finetti's law.
We give three examples of exchangeable matrices where the number of non-zero entries per row is
restricted to follow a given distribution. While our focus is on exchangeability of the rows, we note
that the following distributions (like their unrestricted counterparts) are invariant under reordering
of the columns, and that the resulting matrices are separately exchangeable [2].
Example 1 (Restriction of the IBP to a fixed number of non-zero entries per row). The family of
conditional distributions in the IBP is given by the Bernoulli process. We can restrict the support
of the Bernoulli process to an arbitrary measurable subset A ⊂ {0, 1}^∞ – for example, the set of
all vectors z ∈ {0, 1}^∞ such that Σ_k z_k = S for some integer S. The conditional distribution of a
matrix Z = {z_1, …, z_N} under such a distribution is given by:

\mu_B^{|S}(\mathbf{Z} = Z) = \frac{\prod_{i=1}^N \mu_B(Z_i = z_i)\, I(\sum_k z_{ik} = S)}{\left(\mu_B(\sum_k Z_{ik} = S)\right)^N} = \frac{\prod_{k=1}^\infty \pi_k^{m_k}(1 - \pi_k)^{N - m_k}}{\mathrm{PoiBin}(S \mid \{\pi_k\}_{k=1}^\infty)^N} \prod_{i=1}^N I\left(\sum_{k=1}^\infty z_{ik} = S\right), \qquad (3)

where m_k = Σ_i z_ik and PoiBin(· | {π_k}_{k=1}^∞) is the infinite limit of the Poisson-binomial distribution
[6], which describes the distribution over the number of successes in a sequence of independent
but non-identical Bernoulli trials. The probability of Z given in Equation 3 is the infinite limit of
the conditional Bernoulli distribution [6], which describes the distribution of the locations of the
successes in such a trial, conditioned on their sum.
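The Poisson-binomial factor in Equation 3 can be evaluated for a finite truncation of the atoms by the standard dynamic-programming recursion; the following sketch (our own, for a truncated π) computes PoiBin and the log of the per-row restricted probability.

```python
import numpy as np

def poisson_binomial_pmf(pi, S):
    """P(sum_k z_k = S) for independent z_k ~ Bernoulli(pi_k), via the
    dynamic program over atoms: O(K * S) time."""
    p = np.zeros(S + 1)
    p[0] = 1.0
    for q in pi:
        # RHS is evaluated on the old p before assignment.
        p[1:S + 1] = p[1:S + 1] * (1 - q) + p[:S] * q
        p[0] *= (1 - q)
    return p[S]

def restricted_row_log_prob(z, pi, S):
    """log of the per-row factor in Equation 3: Bernoulli likelihood of
    binary row z, conditioned on the row having exactly S ones."""
    assert z.sum() == S
    log_bern = np.sum(z * np.log(pi) + (1 - z) * np.log1p(-pi))
    return log_bern - np.log(poisson_binomial_pmf(pi, S))
```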
Example 2 (Restriction of the iGaP to a fixed number of non-zero entries per row). The family of conditional distributions in the iGaP is given by the Poisson process, which has support in
N^∞. Following Example 1, we can restrict this support to the set of all vectors z ∈ N^∞ such that
\sum_k (z_k \wedge 1) = S for some integer S – i.e. the set of all non-negative integer-valued infinite vectors
with S non-zero entries. The conditional distribution of a matrix Z = {z_1, …, z_N} under such a
distribution is given by:

\nu_G^{|S}(\mathbf{Z} = Z) = \frac{\prod_{i=1}^N \nu_G(Z_i = z_i)\, I(\sum_{k=1}^\infty z_{ik} \wedge 1 = S)}{\left(\nu_G(\sum_{k=1}^\infty Z_{ik} \wedge 1 = S)\right)^N} = \frac{\prod_{k=1}^\infty \frac{\lambda_k^{m_k} e^{-N\lambda_k}}{\prod_{i=1}^N z_{ik}!}}{\mathrm{PoiBin}(S \mid \{1 - e^{-\lambda_k}\}_{k=1}^\infty)^N} \prod_{i=1}^N I\left(\sum_{k=1}^\infty z_{ik} \wedge 1 = S\right). \qquad (4)
Example 3 (Restriction of the IBP to a random number of non-zero entries per row). Rather than
specify the number of non-zero entries in each row a priori, we can allow it to be random, with
some arbitrary distribution f(·) over the non-negative integers. A Bernoulli process restricted to
have f-marginals can be described as

\mu_B^{|f}(Z) = \prod_{i=1}^N \mu_B^{|S_i}(Z_i = z_i)\, f(S_i) = \prod_{i=1}^N \frac{f(S_i)\, I(\sum_{k=1}^\infty z_{ik} = S_i)}{\mathrm{PoiBin}(S_i \mid \{\pi_k\}_{k=1}^\infty)} \prod_{k=1}^\infty \pi_k^{m_k}(1 - \pi_k)^{N - m_k}, \qquad (5)

where S_n = \sum_{k=1}^\infty z_{nk}. If we marginalize over B = \sum_{k=1}^\infty \pi_k \delta_{\omega_k}, the resulting distribution is
exchangeable, because mixtures of i.i.d. distributions are i.i.d.
We note that, even if we choose f to be Poisson(α), we will not recover the IBP. The IBP has
Poisson(α) marginals over the number of non-zero elements per row, but the conditional distribution
is described by a Poisson-binomial distribution. The Poisson-restricted IBP, however, will have
Poisson marginal and conditional distributions.
Figure 1 shows some examples of samples from the single-parameter IBP, with parameter α = 5,
with various restrictions applied.
[Figure 1: sample binary matrices from the unrestricted IBP and from IBPs restricted to 1, 5 and 10
non-zero entries per row, to Uniform{1,…,20} row sums, and to power-law (s = 2) row sums.]
Figure 1: Samples from restricted IBPs.
3.2 Direct restriction of the predictive distributions
The construction in Section 3.1 is explicitly conditioned on a draw B from the de Finetti mixing
distribution ν. Since it might be cumbersome to explicitly represent the infinite dimensional
object B, it is tempting to consider constructions that directly restrict the predictive distribution
p(XN +1 |X1 , . . . , XN ), where B has been marginalized out.
Unfortunately, the distribution over matrices obtained by this approach does not (in general ? see the
appendix for a counter-example) correspond to the distribution over matrices obtained by restricting
the family of conditional distributions. Moreover, the resulting distribution will not in general be
exchangeable. This means it is not appropriate for data sets where we have no explicit ordering of
the data, and also means we cannot directly use the predictive distribution to define a Gibbs sampler
(as is possible in exchangeable models).
Theorem 2 (Sequences obtained by directly restricting the predictive distribution of an exchangeable
sequence are not, in general, exchangeable). Let p be the distribution of the unrestricted exchangeable model introduced in the proof of Theorem 1. Let \tilde{p}^{|A} be the distribution obtained by directly
restricting this unrestricted exchangeable model such that X_i ∈ A, i.e.

\tilde{p}^{|A}(x_{N+1} \mid x_1, \dots, x_N) \propto I(x_{N+1} \in A)\, \frac{\int_\Theta \prod_{i=1}^{N+1} f_\theta(X = x_i)\,\nu(d\theta)}{\int_\Theta \prod_{i=1}^{N+1} f_\theta(X \in A)\,\nu(d\theta)}. \qquad (6)
In general, this will not be equal to Equation 2, and cannot be expressed as a mixture of i.i.d.
distributions.
Proof. To demonstrate that this is true, consider the counterexample given in Example 4.
Example 4 (A three-urn buffet). Consider a simple form of the Indian buffet process, with a base
measure consisting of three unit-mass atoms. We can represent the predictive distribution of such
a model using three indexed urns, each containing one red ball (representing a one in the resulting
matrix) and one blue ball (representing a zero in the resulting matrix). We generate a sequence of
ball sequences by repeatedly picking a ball from each urn, noting the ordered sequence of colors,
and returning the balls to their urns, plus one ball of each sampled color.
Proposition 1. The three-urn buffet is exchangeable.
Proof. By using the fact that a sequence is exchangeable iff the predictive distribution of the
N+1st and N+2nd entries, given the first N elements of the sequence, is exchangeable [9], it is trivial to
show that this model is exchangeable and that, for example,

p(X_{N+1} = (r,b,r), X_{N+2} = (r,r,b) \mid X_{1:N}) = \frac{m_1 m_2 (N + 1 - m_3)}{(N+1)^3} \cdot \frac{(m_1 + 1)(N + 1 - m_2)\, m_3}{(N+2)^3} = p(X_{N+1} = (r,r,b), X_{N+2} = (r,b,r) \mid X_{1:N}), \qquad (7)

where m_i is the number of times in the first N samples that the ith ball in a sample has been red.
Proposition 2. The directly restricted three-urn scheme (and, by extension, the directly restricted IBP) is not exchangeable.

Proof. Consider the same scheme, but where the outcome is restricted such that there is one, and only one, red ball per sample. The probability of a sequence in this restricted model is given by

p^{*}(X_{N+1} = x \mid X_{1:N}) = \frac{\sum_{k=1}^{3} \frac{m_k}{N+1-m_k}\, I(x_k = r)}{\sum_{k=1}^{3} \frac{m_k}{N+1-m_k}}

and, for example,

p^{*}(X_{N+1} = (r,b,b), X_{N+2} = (b,r,b) \mid X_{1:N}) = \frac{\frac{m_1}{N+1-m_1}}{\sum_k \frac{m_k}{N+1-m_k}} \cdot \frac{\frac{m_2}{N+2-m_2}}{\frac{m_1+1}{N+1-m_1} + \frac{m_2}{N+2-m_2} + \frac{m_3}{N+2-m_3}} \neq p^{*}(X_{N+1} = (b,r,b), X_{N+2} = (r,b,b) \mid X_{1:N}),   (8)

therefore the restricted model is not exchangeable. By introducing a normalizing constant that depends on the previous samples (corresponding to restricting to a subset of \{0,1\}^3), we have broken the exchangeability of the sequence.
By extension, a model obtained by directly restricting the predictive distribution of the IBP is not
exchangeable.
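The order dependence in equation (8) is easy to verify numerically. The fragment below is an illustrative sketch (our own, not from the paper); it evaluates both orderings of the directly restricted two-step predictive for arbitrary counts m_1, m_2, m_3 and shows they disagree.

def restricted_step(m, N, j):
    """p*(red ball at urn j | counts m after N samples), one red per sample."""
    odds = [m[k] / (N + 1 - m[k]) for k in range(3)]
    return odds[j] / sum(odds)

def two_step(m, N, first, second):
    """Probability of red at urn `first` then red at urn `second`."""
    p1 = restricted_step(m, N, first)
    m2 = list(m)
    m2[first] += 1                      # the chosen urn gains a red ball
    return p1 * restricted_step(m2, N + 1, second)

m, N = [3, 1, 2], 6
print(two_step(m, N, 0, 1))   # (r,b,b) then (b,r,b): ~0.0551
print(two_step(m, N, 1, 0))   # (b,r,b) then (r,b,b): ~0.0600, a different value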
We note that there may well be situations where a non-exchangeable model, such as that described in Proposition 2, is appropriate for our data, for example where there is an explicit ordering on the data. It is not, however, an appropriate model if we believe our data to be exchangeable, or if we are interested in finding a single, stationary latent distribution describing our data. This exchangeable setting is the focus of this paper, so we defer exploration of the distributions of non-exchangeable matrices obtained by restriction of the predictive distribution to future work.
4 Inference
We focus on models obtained by restricting the IBP to have f-marginals over the number of non-zero elements per row, as described in Example 3. Note that when f = \delta_S, this yields the setting described in Example 1. Extensions to other cases, such as the restricted iGaP model of Example 2, are straightforward. We work with a truncated model, where we approximate the countably infinite sequence \{\pi_k\}_{k=1}^{\infty} with a large, but finite, vector \pi := (\pi_1, \ldots, \pi_K), where each atom \pi_k is distributed according to Beta(\alpha/K, 1). An alternative approach would be to develop a slice sampler that uses a random truncation, avoiding the error introduced by the fixed truncation [14, 16]. We assume a likelihood function g(X|Z) = \prod_i g(x_i | z_i).
4.1 Sampling the binary matrix Z
For marginal functions f that assign probability mass to a contiguous, non-singleton subset of \mathbb{N}, we can Gibbs sample each entry of Z according to

p(z_{ik} = 1 \mid x_i, \pi, Z_{-ik}, \textstyle\sum_{j \neq k} z_{ij} = a) \propto \pi_k\, \frac{f(a+1)}{p(\sum_k z_k = a+1 \mid \pi)}\, g(x_i \mid z_{ik} = 1, Z_{-ik})
p(z_{ik} = 0 \mid x_i, \pi, Z_{-ik}, \textstyle\sum_{j \neq k} z_{ij} = a) \propto (1 - \pi_k)\, \frac{f(a)}{p(\sum_k z_k = a \mid \pi)}\, g(x_i \mid z_{ik} = 0, Z_{-ik}).   (9)

Where f = \delta_S, this approach will fail, since any move that changes z_{ik} must change \sum_k z_{ik}. In this setting, instead, we sample the locations of the non-zero entries z_i^{(j)}, j = 1, \ldots, S, of z_i:

p(z_i^{(j)} = k \mid x_i, \pi, z_i^{(-j)}) \propto \pi_k (1 - \pi_k)^{-1}\, g(x_i \mid z_i^{(j)} = k, z_i^{(-j)}).   (10)
To improve mixing, we also include Metropolis-Hastings moves that propose an entire row of Z.
We include details in the supplementary material.
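As an illustration, the following sketch (not the authors' code; the likelihood g, the marginal pmf f and the Poisson-binomial probabilities p_sum are passed in as assumed callables) implements the single-entry Gibbs update of equation (9).

import numpy as np

def gibbs_update_entry(Z, i, k, x_i, pi, f, p_sum, g, rng):
    """Resample Z[i, k] under the restricted model, equation (9).

    f(a)     : marginal pmf over row sums.
    p_sum(a) : prior probability that a Bernoulli(pi) row sums to a
               (a Poisson-binomial probability).
    g(x, z)  : likelihood of observation x given candidate row z.
    """
    a = Z[i].sum() - Z[i, k]              # row sum excluding entry k
    z1, z0 = Z[i].copy(), Z[i].copy()
    z1[k], z0[k] = 1, 0
    w1 = pi[k] * f(a + 1) / p_sum(a + 1) * g(x_i, z1)
    w0 = (1 - pi[k]) * f(a) / p_sum(a) * g(x_i, z0)
    Z[i, k] = int(rng.random() < w1 / (w1 + w0))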
4.2 Sampling the beta process atoms \pi
Conditioned on Z, the distribution of \pi is

\nu(\{\pi_k\}_{k=1}^{K} \mid Z) \propto \nu^{|f}_{\{\pi_k\}}(Z)\, \nu(\{\pi_k\}_{k=1}^{K}) \propto \frac{\prod_{k=1}^{K} \pi_k^{(m_k + \alpha/K - 1)} (1 - \pi_k)^{N - m_k}}{\prod_{i=1}^{N} \mathrm{PoiBin}(S_i \mid \pi)}.   (11)
The Poisson-binomial term can be calculated exactly in O(K \sum_k z_{ik}) using either a recursive algorithm [3, 5] or an algorithm based on the characteristic function that uses the Discrete Fourier Transform [8]. It can also be approximated using a skewed-normal approximation to the Poisson-binomial distribution [19]. We can therefore sample from the posterior of \pi using Metropolis-Hastings steps. Since we believe our posterior will be close to the posterior for the unrestricted model, we use the proposal distribution q(\pi_k | Z) = Beta(\alpha/K + m_k, N + 1 - m_k) to propose new values of \pi_k.
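For concreteness, a standard dynamic-programming recursion for the Poisson-binomial pmf is sketched below (a generic textbook recursion, not code taken from [3, 5]); it returns p(\sum_k z_k = s \mid \pi) for every s in O(K^2) time.

import numpy as np

def poisson_binomial_pmf(pi):
    """pmf of the sum of independent Bernoulli(pi_k) variables.

    After processing k atoms, p[s] = P(sum of first k entries = s)."""
    p = np.zeros(len(pi) + 1)
    p[0] = 1.0
    for k, q in enumerate(pi):
        # atom k either leaves the sum unchanged (prob 1-q) or adds one (prob q)
        p[1:k + 2] = p[1:k + 2] * (1 - q) + p[0:k + 1] * q
        p[0] *= (1 - q)
    return p

pmf = poisson_binomial_pmf(np.array([0.2, 0.5, 0.9]))
# pmf[s] = P(S = s); pmf sums to 1 up to floating point error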
4.3 Evaluating the predictive distribution
In certain cases, we may wish to directly evaluate the predictive distribution p^{|f}(z_{N+1} \mid z_1, \ldots, z_N). Unfortunately, in the case of the IBP, we are unable to perform the integral in Equation 2 analytically. We can, however, estimate the predictive distribution using importance sampling. We sample T measures \pi^{(t)} \sim \nu(\pi \mid Z), where \nu(\pi \mid Z) is the posterior distribution over \pi in the finite approximation to the IBP, and then weight them to obtain the restricted predictive distribution

p^{|f}(z_{N+1} \mid z_1, \ldots, z_N) \approx \frac{\sum_{t=1}^{T} w_t\, \nu^{|f}_{\pi^{(t)}}(z_{N+1})}{\sum_{t=1}^{T} w_t},   (12)
where w_t = \tilde{\nu}^{|f}_{\pi^{(t)}}(z_1, \ldots, z_N) / \nu_{\pi^{(t)}}(z_1, \ldots, z_N), and

\tilde{\nu}^{|f}_{\pi}(Z) \propto \prod_{i=1}^{N} \frac{f(S_i)\, I\big(\sum_{k=1}^{K} z_{ik} = S_i\big)}{\mathrm{PoiBin}(S_i \mid \pi)} \prod_{k=1}^{K} \pi_k^{m_k} (1 - \pi_k)^{N - m_k}.

[Figure 2: Top row: true features. Bottom row: sample data points for S = 2.]

         S = 2             S = 5             S = 8             S = 11            S = 14
IBP   7297.4 ± 2822.8   8982.2 ± 1981.7   7442.8 ± 3602.0   8862.1 ± 3920.2   20244 ± 6809.7
rIBP    57.2 ± 66.4     3469.7 ± 133.7    5963.8 ± 871.4    11413 ± 1992.9    12199 ± 2593.8

Table 1: Structure error on synthetic data with 100 data points and S features per data point.
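Schematically, the self-normalised importance sampling estimate of equation (12) can be computed as follows (an illustrative sketch; log_nu_restricted and log_nu are assumed helpers returning the log of the restricted and unrestricted matrix probabilities).

import numpy as np

def predictive_estimate(pi_samples, Z, z_new, log_nu_restricted, log_nu):
    """Estimate p^{|f}(z_new | z_1, ..., z_N), equation (12).

    pi_samples: posterior draws pi^{(t)} ~ nu(pi | Z).
    Weights w_t are computed in log space for numerical stability."""
    log_w = np.array([log_nu_restricted(p, Z) - log_nu(p, Z)
                      for p in pi_samples])
    log_w -= log_w.max()                    # stabilise before exponentiating
    w = np.exp(log_w)
    vals = np.array([np.exp(log_nu_restricted(p, z_new[None, :]))
                     for p in pi_samples])
    return float(np.sum(w * vals) / np.sum(w))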
5 Experimental evaluation
In this paper, we have described how distributions over exchangeable matrices, such as the IBP,
can be modified to allow more flexible control over the distributions over the number of latent
features. In this section, we perform experiments on both real and synthetic data. The synthetic data
experiments are designed to show that appropriate restriction can yield more interpretable features.
The experiments on real data are designed to show that careful choice of the distribution over the
number of latent features in our models can lead to improved predictive performance.
5.1 Synthetic data
The IBP has been used to discover latent features that correspond to interpretable phenomena, such as latent causes behind patient symptoms [20]. If we have prior knowledge about the number of latent features per data point (for example, the number of players in a team, or the number of speakers in a conversation), we may expect both better predictive performance and more interpretable latent features. In this experiment, we evaluate this hypothesis on synthetic data, where the true latent features are known. We generated images by randomly selecting S of 16 binary features, shown in Figure 2, superimposing them, and adding isotropic Gaussian noise (\sigma^2 = 0.25). We modeled the resulting data using an uncollapsed linear Gaussian model, as described in [7], using both the IBP and the IBP restricted to have S features per row. To compare the generating matrix Z_0 and our posterior estimate Z, we looked at the structure error [20]. This is the sum of absolute differences between the upper triangular portions of Z_0 Z_0^T and E[Z Z^T], and is a general measure of graph dissimilarity.
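The structure error takes only a few lines to compute; the sketch below uses our own notation (not the authors' code) and takes the generating matrix Z0 together with a Monte Carlo estimate of E[ZZ^T].

import numpy as np

def structure_error(Z0, EZZt):
    """Sum of absolute differences between the strictly upper triangles of
    Z0 Z0^T and the posterior expectation E[Z Z^T]."""
    G0 = Z0 @ Z0.T
    iu = np.triu_indices_from(G0, k=1)      # strictly upper triangle
    return float(np.abs(G0[iu] - EZZt[iu]).sum())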
Table 1 shows the structure error obtained using both a standard IBP model (IBP) and an IBP restricted to have the correct number of latent features (rIBP), for varying numbers of features S. In each case, the number of data points is 100, the IBP parameter \alpha is fixed to S, and the model is truncated to 50 features. Each experiment was repeated 10 times on independently generated data sets; we present the mean and standard deviation. All samplers were run for 5000 samples; the first 2500 were discarded as burn-in.
Where the number of features per data point is small relative to the total number of features, the restricted model does a much better job at recovering the 'correct' latent structure. While the IBP may be able to explain the training data set as well as the restricted model, it will not in general recover the desired latent structure, which is important if we wish to interpret the latent structure. Once the number of features per data point increases beyond half the total number of features, the model is ill-specified: it is more parsimonious to represent features via the absence of a bar. As a result, both models perform poorly at recovering the generating structure. The restricted model, and indeed the IBP, should only be expected to recover easily interpretable features where the number of such features per data point is small relative to the total number of features.
n     IBP     rIBP      n     IBP     rIBP
1     0.591   0.622     11    0.961   0.971
2     0.726   0.749     12    0.969   0.978
3     0.796   0.819     13    0.974   0.981
4     0.848   0.864     14    0.978   0.983
5     0.878   0.899     15    0.982   0.988
6     0.905   0.918     16    0.989   0.992
7     0.923   0.935     17    0.991   0.998
8     0.936   0.948     18    0.996   1.000
9     0.952   0.959     19    0.997   1.000
10    0.958   0.966     20    1.000   1.000

Table 2: Proportion correct at n on classifying documents from the 20newsgroup data set.
5.2 Classification of text data
The IBP and its extensions have been used to directly model text data [17, 15]. In such settings, the IBP is used to directly model the presence or absence of words, and so the matrix Z is observed rather than latent, and the total number of features is given by the vocabulary size. We hypothesize that the Poisson assumption made by the IBP is not appropriate for text data, as the statistics of word use in natural language tend to follow a heavier-tailed distribution [22]. To test this hypothesis, we modeled a collection of corpora using both an IBP, and an IBP restricted to have a negative binomial distribution over the number of words. Our corpora were 20 collections of newsgroup postings on various topics (for example, comp.graphics, rec.autos, rec.sport.hockey).¹ No pre-processing of the documents was performed. Since the vocabulary (and hence the feature space) is finite, we truncated both models to the vocabulary size. Due to the very large state space, we restricted our samples such that, in a single sample, atoms with the same posterior distribution were assigned the same value. For each model, \alpha was set to the mean number of words per document in the corresponding group, and the maximum likelihood parameters were used for the negative binomial distribution.
To evaluate the quality of the models, we classified held out documents based on their likelihood
under each of the 20 newsgroups. This experiment is designed to replicate an experiment performed
by [15] to compare the original and three-parameter IBP models. For both models, we estimated the
predictive distribution by generating 1000 samples from the posterior of the beta process in the IBP
model. For the IBP, we used these samples directly to estimate the predictive distribution; for the
restricted model, we used the importance-weighted samples obtained using Equation 12. For each
model, we trained on 1000 randomly selected documents, and tested on a further 1000 documents.
Table 2 shows the fraction of documents correctly classified in the first n labels, i.e. the fraction of documents for which the correct label is one of the n most likely. The restricted IBP (rIBP) performs uniformly better than the unrestricted model.
6 Discussion and future work
The framework explored in this paper allows us to relax the distributional assumptions made by
existing exchangeable nonparametric processes. As future work, we intend to explore which applications and models can most benefit from this greater flexibility.
We note that the model, as posed, suffers from an identifiability issue. Let \tilde{B} = \sum_{k=1}^{\infty} \tilde{\pi}_k \delta_{\theta_k} be the measure obtained by transforming B = \sum_{k=1}^{\infty} \pi_k \delta_{\theta_k} such that \tilde{\pi}_k = \pi_k / (1 - \pi_k). Then, scaling \tilde{B} by a positive scalar does not affect the likelihood of a given matrix Z. We intend to explore the consequences of this in future work.
Acknowledgments
We would like to thank Zoubin Ghahramani for valuable suggestions and discussions throughout
this project. We would also like to thank Finale Doshi-Velez and Ryan Adams for pointing out
the non-identifiability mentioned in Section 6. This research was supported in part by NSF grants
DMS-1209194 and IIS-1111142, AFOSR grant FA95501010247, and NIH grant R01GM093156.
¹ http://people.csail.mit.edu/jrennie/20Newsgroups/
References
[1] D. Aldous. Exchangeability and related topics. École d'Été de Probabilités de Saint-Flour XIII, pages 1-198, 1985.
[2] D. J. Aldous. Representations for partially exchangeable arrays of random variables. Journal of Multivariate Analysis, 11(4):581-598, 1981.
[3] R. E. Barlow and K. D. Heidtmann. Computing k-out-of-n system reliability. IEEE Transactions on Reliability, 33:322-323, 1984.
[4] F. Caron. Bayesian nonparametric models for bipartite graphs. In Neural Information Processing Systems, 2012.
[5] S. X. Chen, A. P. Dempster, and J. S. Liu. Weighted finite population sampling to maximize entropy. Biometrika, 81:457-469, 1994.
[6] S. X. Chen and J. S. Liu. Statistical applications of the Poisson-binomial and conditional Bernoulli distributions. Statistica Sinica, 7:875-892, 1997.
[7] F. Doshi-Velez and Z. Ghahramani. Accelerated Gibbs sampling for the Indian buffet process. In International Conference on Machine Learning, 2009.
[8] M. Fernández and S. Williams. Closed-form expression for the Poisson-binomial probability density function. IEEE Transactions on Aerospace and Electronic Systems, 46:803-817, 2010.
[9] S. Fortini, L. Ladelli, and E. Regazzini. Exchangeability, predictive distributions and parametric models. Sankhyā: The Indian Journal of Statistics, Series A, pages 86-109, 2000.
[10] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. Sharing features among dynamical systems with beta processes. In Neural Information Processing Systems, 2010.
[11] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In Neural Information Processing Systems, 2005.
[12] J. F. C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59-78, 1967.
[13] K. T. Miller, T. L. Griffiths, and M. I. Jordan. Nonparametric latent feature models for link prediction. In Neural Information Processing Systems, 2009.
[14] R. M. Neal. Slice sampling. Annals of Statistics, 31(3):705-767, 2003.
[15] Y. W. Teh and D. Görür. Indian buffet processes with power law behaviour. In Neural Information Processing Systems, 2009.
[16] Y. W. Teh, D. Görür, and Z. Ghahramani. Stick-breaking construction for the Indian buffet process. In Artificial Intelligence and Statistics, 2007.
[17] R. Thibaux and M. I. Jordan. Hierarchical beta processes and the Indian buffet process. In Artificial Intelligence and Statistics, 2007.
[18] M. Titsias. The infinite gamma-Poisson feature model. In Neural Information Processing Systems, 2007.
[19] A. Y. Volkova. A refinement of the central limit theorem for sums of independent random indicators. Theory of Probability and its Applications, 40:791-794, 1996.
[20] F. Wood, T. L. Griffiths, and Z. Ghahramani. A non-parametric Bayesian method for inferring hidden causes. In Uncertainty in Artificial Intelligence, 2006.
[21] M. Zhou, L. A. Hannah, D. B. Dunson, and L. Carin. Beta-negative binomial process and Poisson factor analysis. In Artificial Intelligence and Statistics, 2012.
[22] G. K. Zipf. Selective Studies and the Principle of Relative Frequency in Language. Harvard University Press, 1932.
4,292 | 4,885 | Approximate inference in latent Gaussian-Markov models from continuous time observations

Botond Cseke¹ and Guido Sanguinetti¹ (School of Informatics, University of Edinburgh, U.K.; {bcseke,gsanguin}@inf.ed.ac.uk), Manfred Opper² (Computer Science, TU Berlin, Germany; [email protected])
Abstract
We propose an approximate inference algorithm for continuous time Gaussian Markov
process models with both discrete and continuous time likelihoods. We show that the
continuous time limit of the expectation propagation algorithm exists and results in a
hybrid fixed point iteration consisting of (1) expectation propagation updates for discrete
time terms and (2) variational updates for the continuous time term. We introduce post-inference correction methods that improve on the marginals of the approximation. This
approach extends the classical Kalman-Bucy smoothing procedure to non-Gaussian observations, enabling continuous-time inference in a variety of models, including spiking
neuronal models (state-space models with point process observations) and box likelihood
models. Experimental results on real and simulated data demonstrate high distributional
accuracy and significant computational savings compared to discrete-time approaches in
a neural application.
1 Introduction
Continuous time stochastic processes provide a flexible and popular framework for data modelling in
a broad spectrum of scientific and engineering disciplines. Their intrinsically non-parametric, infinite-dimensional nature also makes them a challenging field for the development of efficient inference algorithms. Recent years have seen several such algorithms being proposed for a variety of models [Opper and Sanguinetti, 2008, Opper et al., 2010, Rao and Teh, 2012]. Most inference work has focused on the scenario when observations are available at a finite set of time points; however, modern technologies are
making effectively continuous time observations increasingly common: for example, high speed imaging
technologies now enable the acquisition of biological data at around 100Hz for extended periods of time.
Other scenarios give intrinsically continuous time observations: for example, sensors monitoring the transit
of a particle through a barrier provide continuous time data on the particle?s position. To the best of our
knowledge, this problem has not been addressed in the statistical machine learning community.
In this paper, we propose an expectation-propagation (EP)-type algorithm [Opper and Winther, 2000,
Minka, 2001] for latent diffusion processes observed in either discrete or continuous time. We derive
fixed-point update equations by considering a continuous time limit of the parallel EP algorithm [e.g. Opper and Winther, 2005, Cseke and Heskes, 2011b]: these fixed point updates naturally become differential
equations in the continuous time limit. Remarkably, we show that, in the presence of continuous time
observations, the update equations for the EP algorithm reduce to updates for a variational Gaussian approximation [Archambeau et al., 2007]. We also generalise to the continuous-time limit the EP correction
scheme of [Cseke and Heskes, 2011b], which enable us to capture some of the non-Gaussian behaviour of
the time marginals.
1
2 Models and methods
We consider dynamical systems described by multivariate stochastic differential equations (SDEs) of Ornstein-Uhlenbeck (OU) type over the [0, 1] time interval

dx_t = (A_t x_t + c_t)\, dt + B_t^{1/2}\, dW_t,   (1)

where \{W_t\}_t is the standard Wiener process [Gardiner, 2002] and A_t, B_t and c_t are time-dependent matrix and vector valued functions respectively, with B_t being positive definite for all t \in [0, 1]. Even though the process does not possess a formulation through density functions (with respect to the Lebesgue measure), in order to be able to symbolically represent and manipulate the variables of the process in the Bayesian formalism, we will use the proxy p_0(\{x_t\}) to denote their distribution.
The process can be observed (noisily) both at discrete time points and over continuous time intervals; we partition the observations into y^d_{t_i}, t_i \in T_d and y^c_t, t \in [0, 1] accordingly. We assume that the likelihood function admits the general formulation

p(\{y^d_{t_i}\}_i, \{y^c_t\} \mid \{x_t\}) \propto \prod_{t_i \in T_d} p(y^d_{t_i} \mid x_{t_i}) \times \exp\Big\{ -\int_0^1 dt\, V(t, y^c_t, x_t) \Big\}.   (2)

We refer to p(y^d_{t_i} \mid x_{t_i}) and V(t, y^c_t, x_t) as the discrete time likelihood term and the continuous time loss function, respectively. We notice that, using Girsanov's theorem and Itô's lemma, non-linear diffusion equations with constant (diagonal) diffusion matrix can be re-written in the form (1)-(2), provided the drift can be obtained as the gradient of a potential function [e.g. Øksendal, 2010].

Our aim is to propose approximate inference methods to compute the marginals p(x_t \mid \{y^d_{t_i}\}_i, \{y^c_t\}) of the posterior distribution

p(\{x_t\}_t \mid \{y^d_{t_i}\}_i, \{y^c_t\}) \propto p(\{y^d_{t_i}\}_i, \{y^c_t\} \mid \{x_t\}) \times p_0(\{x_t\}).
2.1 Exact inference in Gaussian models

We start from the exact case of Gaussian observations and quadratic loss function. The linearity of equation (1) implies that the marginal distributions of the process at every time point are Gaussian (assuming
Gaussian initial conditions). The time evolution of the marginal mean m_t and covariance V_t is governed by the pair of differential equations [Gardiner, 2002]

\frac{d}{dt} m_t = A_t m_t + c_t \quad \text{and} \quad \frac{d}{dt} V_t = A_t V_t + V_t A_t^T + B_t.   (3)
In the case of Gaussian observations and a quadratic loss function V(t, y^c_t, x_t) = \mathrm{const.} - x_t^T h^c_t + \frac{1}{2} x_t^T Q^c_t x_t, these equations, together with their backward analogues, enable an exact recursive inference algorithm, known as the Kalman-Bucy smoother [e.g. Särkkä, 2006]. This algorithm arises because we can recast the loss function as an auxiliary (observation) process

dy^c_t = x_t\, dt + R_t^{1/2}\, dW_t,   (4)

where R_t^{-1} = Q^c_t and R_t^{-1}\, dy^c_t / dt = h^c_t. This follows by the Gaussianity of the observation process and the fundamental property of Itô calculus, dW_t^2 = I\, dt.

The Kalman-Bucy algorithm computes the posterior marginal means and covariances by solving the differential equations in a forward-backward fashion. These can be combined with classical Kalman filtering to account for discrete-time observations. The exact form of the equations as well as the variational derivation of the Kalman-Bucy problem are given in Section B of the Supplementary Material.
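For intuition, the forward moment equations (3) can be integrated with a simple Euler scheme; the fragment below is an illustrative sketch for time-invariant A, c, B (it is the forward pass only, not the full smoother, which additionally needs the backward equations from the Supplementary Material).

import numpy as np

def forward_moments(A, c, B, m0, V0, dt, n_steps):
    """Euler integration of dm/dt = A m + c and dV/dt = A V + V A^T + B."""
    m, V = m0.copy(), V0.copy()
    ms, Vs = [m.copy()], [V.copy()]
    for _ in range(n_steps):
        m = m + dt * (A @ m + c)
        V = V + dt * (A @ V + V @ A.T + B)
        ms.append(m.copy())
        Vs.append(V.copy())
    return np.array(ms), np.array(Vs)

# Example: a stable one-dimensional OU prior.
ms, Vs = forward_moments(np.array([[-1.0]]), np.array([0.5]),
                         np.array([[4.0]]), np.zeros(1), np.eye(1),
                         dt=1e-3, n_steps=1000)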
2.2 Approximate inference

In this section we use an Euler discretisation of the prior and the continuous time likelihood to turn our model into a multivariate latent Gaussian model. We review the EP algorithm for such models and then we show that when taking the limit \Delta t \to 0 the updates of the EP algorithm exist. The resulting approximate posterior process is again an OU process and we compute its parameters. Finally, we show how the corrections to the marginals proposed in [Cseke and Heskes, 2011b] can be extended to the continuous time case.
2.2.1 Euler discretisation
Let T = \{t_1 = 0, t_2, \ldots, t_{K-1}, t_K = 1\} be a discretisation of the [0, 1] interval and let the matrix x = [x_{t_1}, \ldots, x_{t_K}] represent the process \{x_t\}_t using the discretisation given by T. Without loss of generality we can assume that T_d \subseteq T. We assume the Euler-Maruyama approach and approximate p(\{x_t\}) by¹

p_0(x) = N(x_0; m_0, V_0) \prod_k N\big(x_{t_{k+1}}; x_{t_k} + (A_{t_k} x_{t_k} + c_{t_k}) \Delta t_k,\; \Delta t_k B_{t_k}\big)

and in a similar fashion we approximate the continuous time likelihood by

p(y^c \mid x) \approx \exp\Big\{ -\sum_k \Delta t_k\, V(t_k, y^c_{t_k}, x_{t_k}) \Big\},

where y^c is the matrix y^c = [y^c_{t_1}, \ldots, y^c_{t_K}]. Consequently, we approximate our model by the latent Gaussian model

p(\{y^d_{t_i}\}_i, y^c, x) = p_0(x) \times \prod_i p(y^d_{t_i} \mid x_{t_i}) \times \prod_k \exp\big\{ -\Delta t_k\, V(t_k, y^c_{t_k}, x_{t_k}) \big\},

where we remark that the prior p_0 has a block-diagonal precision structure. To simplify notation, in the following we use the aliases \phi^d_i(x_{t_i}) = p(y^d_{t_i} \mid x_{t_i}) and \phi^c_k(x_{t_k}; \Delta t_k) = \exp\big\{ -\Delta t_k\, V(t_k, y^c_{t_k}, x_{t_k}) \big\}.
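The discretised prior above is exactly the Euler-Maruyama scheme, so sampling from it is straightforward; a minimal simulation sketch (illustrative only, with time-invariant coefficients assumed for brevity) is:

import numpy as np

def simulate_ou(A, c, B, x0, dt, n_steps, rng=np.random.default_rng(0)):
    """Sample a path of dx = (A x + c) dt + B^{1/2} dW by Euler-Maruyama."""
    d = len(x0)
    L = np.linalg.cholesky(B)               # B^{1/2} via Cholesky factor
    x = np.empty((n_steps + 1, d))
    x[0] = x0
    for k in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal(d)
        x[k + 1] = x[k] + dt * (A @ x[k] + c) + L @ dW
    return x

path = simulate_ou(np.array([[-1.0]]), np.array([0.0]),
                   np.array([[4.0]]), np.zeros(1), dt=1e-3, n_steps=1000)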
2.2.2 Inference using expectation propagation

Expectation propagation [Opper and Winther, 2000, Minka, 2001] is a well known algorithm that provides good approximations of the posterior marginals in latent Gaussian models. We use here the parallel EP approach [e.g. Cseke and Heskes, 2011b]; similar continuous time limiting arguments can be made for the original (sequential) EP approach. The algorithm approximates the posterior p(x \mid \{y^d_{t_i}\}_i, y^c) by a Gaussian

q_0(x) \propto p_0(x) \prod_i \tilde{\phi}^d_i(x_{t_i}) \prod_k \tilde{\phi}^c_k(x_{t_k}; \Delta t_k),

where \tilde{\phi}^d_i and \tilde{\phi}^c_k are Gaussian functions. When applied to our model, the algorithm proceeds by performing the fixed point iteration

[\tilde{\phi}^d_i(x_{t_i})]^{new} \propto \frac{\mathrm{Collapse}\big(\phi^d_i(x_{t_i})\, \tilde{\phi}^d_i(x_{t_i})^{-1} q_0(x_{t_i}); N\big)}{q_0(x_{t_i})} \times \tilde{\phi}^d_i(x_{t_i}) \quad \text{for all } t_i \in T_d,   (5)

[\tilde{\phi}^c_k(x_{t_k}; \Delta t_k)]^{new} \propto \frac{\mathrm{Collapse}\big(\phi^c_k(x_{t_k}; \Delta t_k)\, \tilde{\phi}^c_k(x_{t_k}; \Delta t_k)^{-1} q_0(x_{t_k}); N\big)}{q_0(x_{t_k})} \times \tilde{\phi}^c_k(x_{t_k}; \Delta t_k) \quad \text{for all } t_k \in T,   (6)

where \mathrm{Collapse}(p(z); N) = \mathrm{argmin}_{q \in N} D[p(z) \| q(z)] denotes the projection of the density p(z) into the Gaussian family denoted by N. In other words, \mathrm{Collapse}(p(z); N) is the Gaussian density that matches the first and second moments of p(z). Readers familiar with the classical formulation of EP [Minka, 2001] will recognise in equation (5) the so-called term updates, where \tilde{\phi}^d_i(x_{t_i})^{-1} q_0(x_{t_i}) is the cavity distribution and \phi^d_i(x_{t_i})\, \tilde{\phi}^d_i(x_{t_i})^{-1} q_0(x_{t_i}) the tilted distribution. Equations (5)-(6) imply that at any fixed point of the iterations we have q(x_{t_i}) = \mathrm{Collapse}(\phi^d_i(x_{t_i})\, \tilde{\phi}^d_i(x_{t_i})^{-1} q_0(x_{t_i}); N) and q(x_{t_k}) = \mathrm{Collapse}(\phi^c_k(x_{t_k}; \Delta t_k)\, \tilde{\phi}^c_k(x_{t_k}; \Delta t_k)^{-1} q_0(x_{t_k}); N). The algorithm can also be derived and justified as a constrained optimisation problem of a Gibbs free energy formulation [Heskes et al., 2005]; this alternative approach can also be shown to extend to the continuous time limit (see Section A.2 of the Supplementary Material) and provides a useful tool for approximate evidence calculations.
Equation (5) does not depend on the time discretisation, and hence provides a valid update equation also working directly with the continuous time process. On the other hand, the quantities in equation (6) depend explicitly on \Delta t_k, and it is necessary to ensure that they remain well defined (and computable) in the continuous time limit. In order to derive the limiting behaviour of (6) we introduce the following notation: (i) we use f(z) = (z, -zz^T/2) to denote the sufficient statistic of a multivariate Gaussian, (ii) we use \lambda^d_{t_i} = (h^d_{t_i}, Q^d_{t_i}) as the canonical parameters corresponding to the Gaussian function \tilde{\phi}^d_i(x_{t_i}) \propto \exp\{\lambda^d_{t_i} \cdot f(x_{t_i})\}², (iii) we use \lambda^c_{t_k} = (h^c_{t_k}, Q^c_{t_k}) as the canonical parameters corresponding to the Gaussian function \tilde{\phi}^c_k(x_{t_k}) \propto \exp\{\Delta t_k\, \lambda^c_{t_k} \cdot f(x_{t_k})\}, and finally, (iv) we use \mathrm{Collapse}(p(z); f) as the canonical parameters corresponding to the density \mathrm{Collapse}(p(z); N). By using this notation we can rewrite (6) as

[\lambda^c_{t_k}]^{new} = \lambda^c_{t_k} + \frac{1}{\Delta t_k}\big[ \mathrm{Collapse}(q_c(x_{t_k}); f) - \mathrm{Collapse}(q_0(x_{t_k}); f) \big]   (7)

with

q_c(x_{t_k}) \propto \exp\big( -\Delta t_k [V(t_k, x_{t_k}) + \lambda^c_{t_k} \cdot f(x_{t_k})] \big)\, q_0(x_{t_k}).   (8)

¹ We remark that one could also integrate the OU process between time steps, yielding an exact finite-dimensional marginal of the prior. In the limit, however, both procedures are equivalent.
² We use '\cdot' as the scalar product for general (concatenated) vector objects; for example, x \cdot y = x^T y when x, y \in R^n.

The approximating density can then be written as

q_0(x) \propto p_0(x) \times \exp\Big\{ \sum_i \lambda^d_{t_i} \cdot f(x_{t_i}) + \sum_k \Delta t_k\, \lambda^c_{t_k} \cdot f(x_{t_k}) \Big\}.   (9)
By direct Taylor expansion of Collapse(qc (xtk ); f ) one can show that the update equation (7) remains
finite when we take the limit ?tk ? 0. A slightly more general perspective however affords greater insight
into the algorithm, as shown below.
2.2.3 Continuous time limit of the update equations
Let \lambda_{t_k} = \mathrm{Collapse}(q_0(x_{t_k}); f) and denote by Z(\lambda_{t_k}, \Delta t_k) and Z(\lambda_{t_k}) the normalisation constants of q_c(x_{t_k}) and q_0(x_{t_k}), respectively. The notation emphasises that q_c(x_{t_k}) differs from q_0(x_{t_k}) by a term dependent on the granularity of the discretisation \Delta t_k. We exploit the well known fact that the derivatives with respect to the canonical parameters of the log normalisation constant of a distribution within the exponential family give the moment parameters of the distribution. From the definition of q_c(x_{t_k}) in equation (8) we then have that its first two moments can be computed as \nabla_{\lambda_{t_k}} \log Z(\lambda_{t_k}, \Delta t_k). The Collapse operation in (7) can then be rewritten as

\mathrm{Collapse}(q_c(x_{t_k}); f) = \psi\big(\nabla_{\lambda_{t_k}} \log Z(\lambda_{t_k}, \Delta t_k)\big),   (10)

where \psi is the function transforming the moment parameters of a Gaussian into its (canonical) parameters.
We now assume \Delta t_k to be small and expand Z(\lambda_{t_k}, \Delta t_k) to first order in \Delta t_k. By using the property that \lim_{\epsilon \to 0^+} \langle g(z)^{\epsilon} \rangle_{p(z)}^{1/\epsilon} = \exp(\langle \log g(z) \rangle_p) for any distribution p(z) and g(z) > 0, one can write

\lim_{\Delta t_k \to 0} \frac{1}{\Delta t_k} \big[\log Z(\lambda_{t_k}, \Delta t_k) - \log Z(\lambda_{t_k})\big] = \log \lim_{\Delta t_k \to 0} \Big\langle \exp\big\{-\Delta t_k [V(t_k, x_{t_k}) + \lambda^c_{t_k} \cdot f(x_{t_k})]\big\} \Big\rangle_{q_0(x_{t_k})}^{1/\Delta t_k}
= -\big\langle V(t_k, x_{t_k}) + \lambda^c_{t_k} \cdot f(x_{t_k}) \big\rangle_{q_0(x_{t_k})}
= -\langle V(t_k, x_{t_k}) \rangle_{q_0(x_{t_k})} - \psi^{-1}(\lambda_{t_k}) \cdot \lambda^c_{t_k},   (11)
where we exploited the fact that \langle f(x_{t_k}) \rangle_{q_0(x_{t_k})} are the moments of the q_0(x_{t_k}) distribution. We can now exploit the fact that \Delta t_k is small and linearise the nonlinear map \psi about the moments of q_0(x_{t_k}) to obtain a first order approximation to equation (10) as
\mathrm{Collapse}(q_c(x_{t_k}); f) \approx \lambda_{t_k} - \Delta t_k\, \lambda^c_{t_k} - \Delta t_k\, J_{\psi}(\lambda_{t_k}) \nabla_{\lambda_{t_k}} \langle V(t_k, x_{t_k}) \rangle_{q_0(x_{t_k})}   (12)

where J_{\psi}(\lambda_{t_k}) denotes the Jacobian matrix of the map \psi evaluated at \lambda_{t_k}. The second term on the r.h.s. of equation (12) follows from the obvious identity \nabla_{\lambda_{t_k}} \psi(\psi^{-1}(\lambda_{t_k})) = I.
By substituting (12) into (7), we take the limit \Delta t_k \to 0 and obtain the update equations

[\lambda^c_t]^{new} = -J_{\psi}(\lambda_t) \nabla_{\lambda_t} \langle V(t, x_t) \rangle_{q_0(x_t)} \quad \text{for all } t \in [0, 1].   (13)

Notice that the updating of \lambda^c_t is somewhat hidden in equation (13); the 'old' parameters are in fact contained in the parameters \lambda_t. Since \lambda^c_t corresponds to the canonical parameters of a multivariate Gaussian, we can use the representation \lambda^c_t = (h^c_t, Q^c_t) and, after some algebra on the moment-canonical transformation of Gaussians, we write the fixed point iteration as

[h^c_t]^{new} = -\nabla_{m_t} \langle V(t, x_t) \rangle_{q_0(x_t)} + 2 \nabla_{V_t} \langle V(t, x_t) \rangle_{q_0(x_t)}\, m_t \quad \text{and} \quad [Q^c_t]^{new} = \nabla_{V_t} \langle V(t, x_t) \rangle_{q_0(x_t)},   (14)
where m_t and V_t are the marginal means and covariances of q_0 in the \Delta t_k \to 0 limit. Algorithmically, computing the marginal moments and covariances of the discretised Gaussian q_0(x) in (9) can be done by solving a
sparse linear system and doing partial matrix inversion using the Cholesky factorisation and the Takahashi
equations as in Cseke and Heskes [2011b]. This corresponds to a junction tree algorithm on a (block) chain
graph [Davis, 2006] which, in the continuous time limit, can be reduced to a set of differential equations
due to the chain structure of the graph. Alternatively, one can notice that, in the continuous time limit,
the structure of q0 (x) in equation (9) defines a posterior process for an OU process p0 ({xt }) observed
at discrete times with Gaussian noise (corresponding to the terms \tilde{\phi}^d_i(x_{t_i}) with canonical parameters \lambda^d_{t_i}) and with a quadratic continuous time loss, which is computed using equation (14). The moments can therefore be computed using the Kalman-Bucy algorithm; details of the algorithm are given in Section B.1 of the Supplementary Material. The derivation above illustrates another interesting characteristic of working with continuous-time likelihoods. Readers familiar with fractional free energies and the power EP algorithm may notice that the time lag \Delta t_k plays a similar role as the fractional or power parameter \alpha. It is a well known property that in the \alpha \to 0 limit the algorithm and the free energy collapse to their variational counterparts [e.g. Wiegerinck and Heskes, 2003, Cseke and Heskes, 2011a] and thus, intuitively, the collapse and the existence of the limit are related to this property.
Overall, we arrive at a hybrid algorithm in which: (i) the canonical parameters (h^d_{t_i}, Q^d_{t_i}) corresponding to the discrete time terms are updated by the usual EP updates in (5); (ii) the canonical parameters (h^c_t, Q^c_t) corresponding to the continuous loss function V(t, x_t) are updated by the variational updates in (14); and (iii) the marginal moment parameters of q_0(x_t) are computed by the forward-backward differential equations referred to in Section 2.1. We can use either a parallel or a forward-backward type scheduling. A more
detailed description of the inference algorithm is given in Section C of the Supplementary Material. The
algorithm performs well in the comfort zone of EP, that is, log-concave discrete likelihood terms and convex
loss. Non-convergence can occur in case of multimodal likelihoods and loss functions and alternative
options to optimise the free energy have to be explored [e.g. Heskes et al., 2005, Archambeau et al., 2007].
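The resulting hybrid scheme can be summarised as the following loop (structural pseudocode in Python; the smoother and the update helpers are assumed, and details will differ from the authors' implementation in the Supplementary Material):

def hybrid_ep_variational(terms_d, lam_c, smoother, ep_update, var_update,
                          n_iters=50):
    """Hybrid fixed-point iteration: EP for discrete terms, variational
    updates (14) for the continuous time loss.

    terms_d : canonical parameters (h^d, Q^d) of the discrete time sites.
    lam_c   : canonical parameters (h^c_t, Q^c_t) of the continuous loss.
    smoother: returns marginal means m_t and covariances V_t of q_0
              (a Kalman-Bucy forward-backward pass).
    """
    for _ in range(n_iters):
        m, V = smoother(terms_d, lam_c)      # marginals of the current q_0
        terms_d = ep_update(terms_d, m, V)   # EP term updates, eq. (5)
        lam_c = var_update(m, V)             # variational updates, eq. (14)
    return terms_d, lam_c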
2.2.4 Parameters of the approximating OU process
The fixed point iteration scheme computes only the marginal means and covariances of q0 ({xt }) and it
does not provide a parametric OU process as an approximation. However, this can be computed by finding
the parameters of an OU process that matches q0 in the moment matching Kullback-Leibler divergence.
That is, if q^*(\{x_t\}) minimises D[q_0(\{x_t\}) \| q^*(\{x_t\})], then the parameters of q^* are given by

A^*_t = A_t - B_t [V^{bw}_t]^{-1}, \quad c^*_t = c_t + B_t [V^{bw}_t]^{-1} m^{bw}_t \quad \text{and} \quad B^*_t = B_t,   (15)

where m^{bw}_t and V^{bw}_t are computed by the backward Kalman-Bucy filtering equations. The computations are somewhat lengthy; a full derivation can be found in Section B.3 of the Supplementary Material.
2.2.5 Corrections to the marginals
In this section we extend the factorised correction method for multivariate latent Gaussian models introduced in Cseke and Heskes [2011b] to continuous time observations. Other correction schemes [e.g. Opper et al., 2009] can in principle also be applied. We start again from the discretised representation and then take the \Delta t_k \to 0 limit. To begin with, we focus on the corrections from the continuous time observation process. By removing the Gaussian terms (with canonical parameters \lambda^c_{t_k}) from the approximate posterior and replacing them with the exact likelihood, we can rewrite the exact discretised posterior as

p(x) \propto q_0(x) \times \exp\Big\{ -\sum_k \Delta t_k [V(t_k, x_{t_k}) + \lambda^c_{t_k} \cdot f(x_{t_k})] \Big\}.
The exact posterior marginal at time t_j is thus given by

p(x_{t_j}) \propto q_0(x_{t_j}) \times \exp\big\{ -\Delta t_j [V(t_j, x_{t_j}) + \lambda^c_{t_j} \cdot f(x_{t_j})] \big\} \times c_T(x_{t_j})

with

c_T(x_{t_j}) = \int dx_{\backslash t_j}\, q_0(x_{\backslash t_j} \mid x_{t_j}) \times \exp\Big\{ -\sum_{k \neq j} \Delta t_k [V(t_k, x_{t_k}) + \lambda^c_{t_k} \cdot f(x_{t_k})] \Big\},

where the subscript \backslash j indicates the whole vector with the j-th entry removed. By approximating the joint conditional q_0(x_{\backslash t_j} \mid x_{t_j}) with a product of its marginals and taking the \Delta t_k \to 0 limit, we obtain

c(x_t) \approx \exp\Big\{ -\int_0^1 ds\, \langle V(s, x_s) + \lambda^c_s \cdot f(x_s) \rangle_{q_0(x_s \mid x_t)} \Big\}.
When combining the continuous part and the factorised discrete time corrections (by adding the discrete time terms to the formalism above) we arrive at the corrected approximate marginal

\hat{p}(x_t) \propto q_0(x_t)\, \exp\Big\{ -\int_0^1 ds\, \langle V(s, x_s) + \lambda^c_s \cdot f(x_s) \rangle_{q_0(x_s \mid x_t)} \Big\} \times \prod_i \Big\langle \frac{p(y^d_{t_i} \mid x_{t_i})}{\exp\{\lambda^d_{t_i} \cdot f(x_{t_i})\}} \Big\rangle_{q_0(x_{t_i} \mid x_t)}.

For any fixed t one can compute the correlations in linear time by using the parametric form of the approximation in (15). The evaluations for a fixed x_t are also linear in time.
[Figure 1 graphic: marginal distributions at t = 0.3351; legend: sampling at \Delta t = 10^{-3}, variational with corrections, variational Gaussian.]
Figure 1: Inference results for the toy model in Section 3.1. The continuous time potential is defined as V(t, x_t) = (2x_t)^8 I_{[1/3, 2/3]}(t) and we assume two hard box discrete likelihood terms I_{[-0.25, 0.25]}(x_{t_1}) and I_{[-0.25, 0.25]}(x_{t_2}) placed at t_1 = 1/3 and t_2 = 2/3. The prior is defined by the parameters a_t = -1, c_t = 4\pi \cos(4\pi t) and b_t = 4. The left panel shows the prior's and the posterior approximation's marginal means and standard deviations. The right panel shows the marginal approximations at t = 0.3351, a region where we expect the corrections to be strongly influenced by both types of likelihoods. Samples were generated by using the lag \Delta t = 10^{-3}; the approximate inference was run using RK4 at \Delta t = 10^{-4}.
3 Experiments

3.1 Inference in a (soft) box
The first example we consider is a mixed discrete-continuous time inference problem, under box and soft box likelihood observations respectively. We consider a diffusing particle on the line under an OU prior process of the form

dx_t = (a x_t + c_t)\, dt + \sqrt{b}\, dW_t

with a = -1, c_t = 4\pi \cos(4\pi t) and b = 4. The likelihood model is given by the loss function V(t, x_t) = (2x_t)^8 for all t \in [1/3, 2/3] and 0 otherwise, effectively confining the process to a narrow strip near zero (soft box). This likelihood is therefore an approximation to physically realistic situations where particles can perform diffusion in a confined environment. The box has hard gates: two discrete time likelihoods given by the indicator functions I_{[-0.25, 0.25]}(x_{t_1}) and I_{[-0.25, 0.25]}(x_{t_2}) placed at the ends of the interval, that is, T_d = \{1/3, 2/3\}. The left panel in Figure 1 shows the prior and approximate posterior processes (mean ± one standard deviation) in pink and cyan respectively: the confinement of the process to the box is
in clear evidence, as well as the narrowing of the confidence intervals corresponding to the two discrete time
observations. The right panel in Figure 1 shows the marginal approximations at a time point shortly after the 'gate' to the box; these are: (i) sampling (grey), (ii) the Gaussian EP approximation (blue line), and (iii) its corrected version (red line). The time point was chosen as we expect the strongest non-Gaussian effects to be felt near the discrete likelihoods; the corrected distribution does indeed show strong skewness. To benchmark the method, we compare it to MCMC sampling obtained by using slice sampling [Murray et al., 2010] on the discretised model with \Delta t = 10^{-3}. We emphasise that this is an approximation to the model, hence the benchmark is not a true gold standard; however, we are not aware of sampling schemes that would be able to perform inference under the exact continuous time likelihood. The histogram in Figure 1 was generated from a sample size of 10^5 following a burn-in of 10^4. The Gaussian EP approach gives a very good reconstruction of the first two moments of the distribution. The corrected EP approximation is very close to the MCMC results.
3.2 Log Gaussian Cox processes
Another family of models where one encounters continuous time likelihoods is point processes; these
processes find wide application in a number of disciplines, from neuroscience Smith and Brown [2003] to
conflict modelling Zammit-Mangion et al. [2012]. We assume that we have a multivariate log Gaussian
Cox process model [Kingman, 1992]: this is defined by a d-variate Ornstein-Uhlenbeck process {xt }t
6
Figure 2: A toy example for the point process model in Section 3.2. The prior is defined by A = [-2, 1, 0, 1; 1, -2, 1, 0; 0, 1, -2, 1; 1, 0, 1, -2], c^i_t = 4i\pi \cos(2i\pi t), B = 4I. We use \mu_i = 0. The prior means and standard deviations, the sampled process path, and the sampled events are shown on the left panel while the posterior approximations are shown on the right panel.
on the [0, 1] interval. Conditioned on \{x_t\}_t we have d Poisson point processes with intensities given by \lambda^i_t = e^{\mu_i + x^i_t} for all i = 1, \ldots, d and t \in [0, 1]. The likelihood of this point process model is formed by both discrete time (point probabilities) and continuous time (void probability) terms and can be written as

\log \prod_i p(Y_i \mid \{x^i_t\}_t) = \sum_i \Big[ -e^{\mu_i} \int_0^1 dt\, e^{x^i_t} + |Y_i|\, \mu_i + \sum_{t_k \in Y_i} x^i_{t_k} \Big],

where Y_i denotes the set of observed event times corresponding to \{x^i_t\}_t. Clearly, the discrete time observations in this model are (degenerate) Gaussians; therefore, one may opt for starting with an OU process with a translated drift; however, for consistency reasons, we treat them as discrete time observations.

In this example we chose d = 4 and A = [-2, 1, 0, 1; 1, -2, 1, 0; 0, 1, -2, 1; 1, 0, 1, -2], thus coupling the various processes. We chose c^i_t = 4i\pi \cos(2i\pi t), B = 4I and \mu_i = 0. We generate a sample path \{\hat{x}_t\}_t, draw observations Y_i based on \{\hat{x}^i_t\}_t and perform inference.
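On a time grid, the point-process log likelihood above is straightforward to evaluate; the sketch below is illustrative only (trapezoidal quadrature for the void-probability integral is our choice, not prescribed by the paper) and computes it for a single latent path.

import numpy as np

def lgcp_loglik(x, t_grid, event_times, mu):
    """log p(Y | {x_t}) = -e^mu * int_0^1 e^{x_t} dt + |Y| mu + sum_{t in Y} x_t."""
    void = -np.exp(mu) * np.trapz(np.exp(x), t_grid)   # void-probability term
    x_at_events = np.interp(event_times, t_grid, x)    # x_t at the event times
    return void + len(event_times) * mu + x_at_events.sum()

t = np.linspace(0.0, 1.0, 1001)
x = np.sin(2 * np.pi * t)
print(lgcp_loglik(x, t, event_times=np.array([0.2, 0.7]), mu=0.0))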
The results are shown in Figure 2, with four colours distinguishing the four processes. The left panel shows the prior processes (mean ± standard deviation), sample paths and (bottom row) the sampled points (i.e. the data). The right panel shows the corresponding posterior process approximations. The results reflect the general pattern characteristic of fitting point process data: in regions with a substantial number of events the sampled path can be inferred with great accuracy (accurate mean, low standard deviation), whereas in regions with no or only a few events the fit reverts to a skewed/shifted prior path, as the void probability dominates.
3.3 Point process modelling of neural spike trains
In a third example we consider continuous time point process inference for spike time recordings from a population of neurons. This type of data is frequently modelled using (discrete time) state-space models with point process observations (SSPP) [Smith and Brown, 2003, Zammit Mangion et al., 2011, Macke et al., 2011]; parameter estimation from such models can reveal biologically relevant facts about the neuron's electrophysiology which are not apparent from the spike trains themselves. We consider a dataset from Di Lorenzo and Victor [2003], available at www.neurodatabase.org, consisting of recordings of spiking patterns of taste response cells in Sprague-Dawley rats during presentation of different taste stimuli. The recordings are 10s each at a resolution of 10^{-3} s, and four different taste stimuli: (i) NaCl, (ii) Quinine, (iii) HCl, and (iv) Sucrose are presented to the subjects for the duration of the first 5s of the 10s recording window. We modelled the spike train recordings by univariate log Gaussian Cox process models (see Section 3.2) with homogeneous OU priors, that is, A_t, c_t and B_t were considered constant. We use the variational EM algorithm (the discrete time likelihoods are Gaussian) to learn the prior
[Figure 3 graphic: scatter of the fitted (c, \mu) parameters, \mu on the vertical axis against c on the horizontal axis, coloured by stimulus (NaCl, Quinine, HCl, Sucrose).]
Figure 3: Inference results on data from cell 9 of the dataset in Section 3.3. The top-left, bottom-left and centre panels show the intensity fit, the event count and the Q-Q plot corresponding to one of the recordings, whereas the right panel shows the learned c and \mu parameters for all spike trains in cell 9.

parameters A, c and B and the initial conditions for each individual recording. We scaled the 10s window into the unit interval [0, 1] and used a 10^{-4} resolution.

Fig 3 shows example results of this procedure. The right panel shows an emergent pattern of stimulus based clustering of \mu and c as in Zammit Mangion et al. [2011]. We observe that discrete-time approaches such as [Smith and Brown, 2003, Zammit Mangion et al., 2011] are usually forced to use a very fine time discretisation by the requirement that at most one spike happens during one time step. This leads to significant computational resources being invested in regions with few spikes. Our continuous time approach, on the other hand, handles uneven observations naturally.
4 Conclusion
Inference methodologies for continuous time stochastic processes are a subject of intense research, both
for fundamental and applied research. This paper contributes a novel approach which allows inference
from both discrete time and continuous time observations. Our results show that the method is effective
in accurately reconstructing marginal posterior distributions, and can be deployed effectively on real world
problems. Furthermore, it has recently been shown [Kappen et al., 2012] that optimal control problems can
be recast in inference terms: in many cases, the relevant inference problem is of the same type as the one
considered here, hence this methodology could in principle also be used in control problems. The method
is based on the parallel EP formulation of Cseke and Heskes [2011b]: interestingly, we show that the
EP updates from continuous time observations collapse to variational updates [Archambeau et al., 2007].
Algorithmically, our approach results in efficient forward-backward updates, compared to the gradient
ascent algorithm of Archambeau et al. [2007]. Furthermore, the EP perspective allows us to compute
corrections to the Gaussian marginals; in our experiments, these turned out to be highly accurate.
Our modelling framework assumes a latent linear diffusion process; however, as mentioned before, some
non-linear diffusion processes are equivalent to posterior processes for OU processes observed in continuous time [?ksendal, 2010]. Our approach, hence, can also be viewed as a method for accurate marginal
computations in (a class of) nonlinear diffusion processes observed with noise. In general, all non-linear
diffusion processes can be recast in a form similar to the one considered here; the important difference
though is that the continuous time likelihood is in general an Ito integral, not a regular integral. In the
future, it would be interesting to explore the extension of this approach to general non-linear diffusion
processes, as well as discrete and hybrid stochastic processes [Rao and Teh, 2012, Ocone et al., 2013].
Acknowledgements

B.Cs. is funded by BBSRC under grant BB/I004777/1. M.O. acknowledges support from EU grant FP7-ICT-270327 (CompLACS). G.S. acknowledges support from the ERC under grant MLCS-306999.
References

C. Archambeau, D. Cornford, M. Opper, and J. Shawe-Taylor. Gaussian process approximations of stochastic differential equations. Journal of Machine Learning Research - Proceedings Track, 1:1-16, 2007.
B. Cseke and T. Heskes. Properties of Bethe free energies and message passing in Gaussian models. Journal of Artificial Intelligence Research, 41:1-24, 2011a.
B. Cseke and T. Heskes. Approximate marginals in latent Gaussian models. Journal of Machine Learning Research, 12:417-457, 2011b.
T. A. Davis. Direct Methods for Sparse Linear Systems (Fundamentals of Algorithms 2). Society for Industrial and Applied Mathematics, Philadelphia, 2006.
P. M. Di Lorenzo and J. D. Victor. Taste response variability and temporal coding in the nucleus of the solitary tract of the rat. Journal of Neurophysiology, 90:1418-1431, 2003.
C. W. Gardiner. Handbook of stochastic methods: for physics, chemistry and the natural sciences. Springer series in synergetics, 13. Springer, 2002.
T. Heskes, M. Opper, W. Wiegerinck, O. Winther, and O. Zoeter. Approximate inference techniques with expectation constraints. Journal of Statistical Mechanics: Theory and Experiment, 2005.
H. J. Kappen, V. Gómez, and M. Opper. Optimal control as a graphical model inference problem. Machine Learning, 87(2):159-182, 2012.
J. F. C. Kingman. Poisson Processes. Oxford Statistical Science Series. Oxford University Press, New York, 1992.
S. L. Lauritzen. Graphical Models. Oxford Statistical Science Series. Oxford University Press, New York, 1996.
J. H. Macke, L. Buesing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani. Empirical models of spiking in neural populations. In Advances in Neural Information Processing Systems 24, pages 1350-1358, 2011.
T. P. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, MIT, 2001.
I. Murray, R. P. Adams, and D. J. C. MacKay. Elliptical slice sampling. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pages 541-548, 2010.
A. Ocone, A. J. Millar, and G. Sanguinetti. Hybrid regulatory models: a statistically tractable approach to model regulatory network dynamics. Bioinformatics, 29(7):910-916, 2013.
B. Øksendal. Stochastic differential equations. Universitext. Springer, 2010.
M. Opper and G. Sanguinetti. Variational inference for Markov jump processes. In Advances in Neural Information Processing Systems 20, 2008.
M. Opper and O. Winther. Gaussian processes for classification: Mean-field algorithms. Neural Computation, 12(11):2655-2684, 2000.
M. Opper and O. Winther. Expectation consistent approximate inference. Journal of Machine Learning Research, 6:2177-2204, 2005.
M. Opper, U. Paquet, and O. Winther. Improving on Expectation Propagation. In Advances in Neural Information Processing Systems 21, pages 1241-1248. MIT, Cambridge, MA, US, 2009.
M. Opper, A. Ruttor, and G. Sanguinetti. Approximate inference in continuous time Gaussian-Jump processes. In Advances in Neural Information Processing Systems 23, pages 1831-1839, 2010.
V. Rao and Y. W. Teh. MCMC for continuous-time discrete-state systems. In Advances in Neural Information Processing Systems 25, pages 710-718, 2012.
S. Särkkä. Recursive Bayesian Inference on Stochastic Differential Equations. PhD thesis, Helsinki University of Technology, 2006.
A. C. Smith and E. N. Brown. Estimating a state-space model from point process observations. Neural Computation, 15(5):965-991, 2003.
W. Wiegerinck and T. Heskes. Fractional Belief Propagation. In Advances in Neural Information Processing Systems 15, pages 438-445, Cambridge, MA, 2003. The MIT Press.
J. S. Yedidia, W. T. Freeman, and Y. Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems 12, pages 689-695, Cambridge, MA, 2000. The MIT Press.
A. Zammit Mangion, K. Yuan, V. Kadirkamanathan, M. Niranjan, and G. Sanguinetti. Online variational inference for state-space models with point-process observations. Neural Computation, 23(8):1967-1999, 2011.
A. Zammit-Mangion, M. Dewar, V. Kadirkamanathan, and G. Sanguinetti. Point process modelling of the Afghan war diary. Proceedings of the National Academy of Sciences, 2012. doi: 10.1073/pnas.1203177109.
Bayesian inference as iterated random functions with
applications to sequential inference in graphical
models
XuanLong Nguyen
Department of Statistics
University of Michigan
Ann Arbor, Michigan 48109
[email protected]
Arash A. Amini
Department of Statistics
University of Michigan
Ann Arbor, Michigan 48109
[email protected]
Abstract
We propose a general formalism of iterated random functions with semigroup
property, under which exact and approximate Bayesian posterior updates can be
viewed as specific instances. A convergence theory for iterated random functions
is presented. As an application of the general theory we analyze convergence
behaviors of exact and approximate message-passing algorithms that arise in a
sequential change point detection problem formulated via a latent variable directed
graphical model. The sequential inference algorithm and its supporting theory are
illustrated by simulated examples.
1 Introduction
The sequential posterior updates play a central role in many Bayesian inference procedures. As an
example, in Bayesian inference one is interested in the posterior probability of variables of interest
given the data observed sequentially up to a given time point. As a more specific example which
provides the motivation for this work, in a sequential change point detection problem [1], the key
quantity is the posterior probability that a change has occurred given the data observed up to present
time. When the underlying probability model is complex, e.g., a large-scale graphical model, the calculation of such quantities in a fast and online manner is a formidable challenge. In such situations
approximate inference methods are required; for graphical models, message-passing variational
inference algorithms present a viable option [2, 3].
In this paper we propose to treat Bayesian inference in a complex model as a specific instance of an
abstract system of iterated random functions (IRF), a concept that originally arises in the study of
Markov chains and stochastic systems [4]. The key technical property of the proposed IRF formalism that enables the connection to Bayesian inference under conditionally independent sampling is
the semigroup property, which shall be defined shortly in the sequel. It turns out that most exact and
approximate Bayesian inference algorithms may be viewed as specific instances of an IRF system.
The goal of this paper is to present a general convergence theory for the IRF with semigroup property. The theory is then applied to the analysis of exact and approximate message-passing inference
algorithms, which arise in the context of distributed sequential change point problems using latent
variable and directed graphical model as the underlying modeling framework.
We wish to note a growing literature on message-passing and sequential inference based on graphical modeling [5, 6, 7, 8]. On the other hand, convergence and error analysis of message-passing
algorithms in graphical models is quite rare and challenging, especially for approximate algorithms,
and they are typically confined to the specific form of belief propagation (sum-product) algorithm
[9, 10, 11]. To the best of our knowledge, there is no existing work on the analysis of message-passing inference algorithms for calculating conditional (posterior) probabilities for latent random
1
variables present in a graphical model. While such an analysis is a byproduct of this work, the viewpoint we put forward here that equates Bayesian posterior updates to a system of iterated random
functions with semigroup property seems to be new and may be of general interest.
The paper is organized as follows. In Sections 2-3, we introduce the general IRF system and
provide our main result on its convergence. The proof is deferred to Section 5. As an example of
the application of the result, we will provide a convergence analysis for an approximate sequential
inference algorithm for the problem of multiple change point detection using graphical models. The
problem setup and the results are discussed in Section 4.
2 Bayesian posterior updates as iterated random functions
In this paper we shall restrict ourselves to multivariate distributions of binary random variables.
To describe the general iteration, let $\mathcal{P}_d := \mathcal{P}(\{0,1\}^d)$ be the space of probability measures on $\{0,1\}^d$. The iteration under consideration recursively produces a random sequence of elements of $\mathcal{P}_d$, starting from some initial value. We think of $\mathcal{P}_d$ as a subset of $\mathbb{R}^{2^d}$ equipped with the $\ell_1$ norm (that is, the total variation norm for discrete probability measures). To simplify, let $m := 2^d$, and for $x \in \mathcal{P}_d$, index its coordinates as $x = (x^0, \ldots, x^{m-1})$. For $\lambda \in \mathbb{R}_+^m$, consider the function $q_\lambda : \mathcal{P}_d \to \mathcal{P}_d$, defined by
$$q_\lambda(x) := \frac{x \circ \lambda}{x^T \lambda} \qquad (1)$$
where $x^T \lambda = \sum_i x^i \lambda^i$ is the usual inner product on $\mathbb{R}^m$ and $x \circ \lambda$ is pointwise multiplication with coordinates $[x \circ \lambda]^i := x^i \lambda^i$, for $i = 0, 1, \ldots, m-1$. This function models the prior-to-posterior update according to the Bayes rule. One can think of $\lambda$ as the likelihood and $x$ as the prior distribution (or the posterior in the previous stage) and $q_\lambda(x)$ as the (new) posterior based on the two. The division by $x^T \lambda$ can be thought of as the division by the marginal to make a valid probability vector. (See Example 1 below.)
We consider the following general iteration
$$Q_n(x) = q_{\lambda_n}\big(T(Q_{n-1}(x))\big), \qquad Q_0(x) = x, \qquad n \ge 1, \qquad (2)$$
for some deterministic operator $T : \mathcal{P}_d \to \mathcal{P}_d$ and an i.i.d. random sequence $\{\lambda_n\}_{n\ge 1} \subset \mathbb{R}_+^m$. By changing the operator $T$, one obtains different iterative algorithms.
Our goal is to find sufficient conditions on $T$ and $\{\lambda_n\}$ for the convergence of the iteration to an extreme point of $\mathcal{P}_d$, which without loss of generality is taken to be $e^{(0)} := (1, 0, 0, \ldots, 0)$. Standard techniques for proving the convergence of iterated random functions are usually based on showing some averaged-sense contraction property for the iteration function [4, 12, 13, 14], which in our case is $q_{\lambda_n}(T(\cdot))$. See [15] for a recent survey. These techniques are not applicable to our problem since $q_{\lambda_n}$ is not in general Lipschitz, in any suitable sense, precluding $q_{\lambda_n}(T(\cdot))$ from satisfying the aforementioned conditions.
Instead, the functions $\{q_{\lambda_n}\}$ have another property which can be exploited to prove convergence; namely, they form a semigroup under pointwise multiplication,
$$q_{\lambda \circ \lambda'} = q_\lambda \circ q_{\lambda'}, \qquad \lambda, \lambda' \in \mathbb{R}_+^m, \qquad (3)$$
where, on the right-hand side, $\circ$ denotes the composition of functions (on the left-hand side it is pointwise multiplication, as before). If $T$ is the identity, this property allows us to write $Q_n(x) = q_{\lambda_1 \circ \cdots \circ \lambda_n}(x)$; this is nothing but the Bayesian posterior update equation, under conditionally independent sampling, while modifying $T$ results in an approximate Bayesian inference procedure. Since after suitable normalization, $\lambda_1 \circ \cdots \circ \lambda_n$ concentrates around a deterministic quantity, by the i.i.d. assumption on $\{\lambda_i\}$, this representation helps in determining the limit of $\{Q_n(x)\}$. The main result of this paper, summarized in Theorem 1, is that the same conclusions can be extended to general Lipschitz maps $T$ having the desired fixed point.
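To make the iteration concrete, here is a minimal numerical sketch of (1) and (2). It is our own illustration, not code from the paper: the toy operator $T$ (a convex pull toward $e^{(0)}$, of the kind that appears in Example 1 below) and the distribution of the $\lambda_n$ are assumptions chosen only to exercise the recursion.

```python
import numpy as np

def q(x, lam):
    """Bayes-rule update q_lambda(x) = (x o lambda) / <x, lambda>, Eq. (1)."""
    y = x * lam
    return y / y.sum()

def iterate_irf(x0, lams, T):
    """General iteration Q_n(x) = q_{lambda_n}(T(Q_{n-1}(x))), Eq. (2)."""
    x = np.asarray(x0, dtype=float)
    for lam in lams:
        x = q(T(x), lam)
    return x

# Toy instance on m = 2 states: T pulls mass toward e^(0) = (1, 0).
rho = 0.2
T = lambda x: rho * np.array([1.0, 0.0]) + (1 - rho) * x
rng = np.random.default_rng(0)
# lambda_n = (1, lambda_n^*) with E[log lambda_n^*] = -0.5 < 0, so the
# sufficient condition L * exp(-I_* + eps) < 1 of Theorem 1 below holds.
lams = np.stack([np.ones(500), np.exp(rng.normal(-0.5, 1.0, size=500))], axis=1)
print(iterate_irf([0.5, 0.5], lams, T))  # concentrates near (1, 0)
```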
3 General convergence theory
Consider a sequence $\{\lambda_n\}_{n\ge 1} \subset \mathbb{R}_+^m$ of i.i.d. random elements, where $m = 2^d$. Let $\lambda_n = (\lambda_n^0, \lambda_n^1, \ldots, \lambda_n^{m-1})$ with $\lambda_n^0 = 1$ for all $n$, and
$$\lambda_n^* := \max_{i=1,2,\ldots,m-1} \lambda_n^i. \qquad (4)$$
The normalization $\lambda_n^0 = 1$ is convenient for showing convergence to $e^{(0)}$. This is without loss of generality, since $q_\lambda$ is invariant to scaling of $\lambda$, that is $q_\lambda = q_{\beta\lambda}$ for any $\beta > 0$.
Assume the sequence $\{\log \lambda_n^*\}$ to be i.i.d. sub-Gaussian with mean $-I_* < 0$ and sub-Gaussian norm $\le \sigma_* \in (0,\infty)$. The sub-Gaussian norm can be taken to be the $\psi_2$ Orlicz norm (cf. [16, Section 2.2]), which we denote by $\|\cdot\|_{\psi_2}$. By definition $\|Y\|_{\psi_2} := \inf\{C > 0 : \mathbb{E}\,\psi_2(|Y|/C) \le 1\}$ where $\psi_2(x) := e^{x^2} - 1$.
Let $\|\cdot\|$ denote the $\ell_1$ norm on $\mathbb{R}^m$. Consider the sequence $\{Q_n(x)\}_{n\ge 0}$ defined in (2) based on $\{\lambda_n\}$ as above, an initial point $x = (x^0, \ldots, x^{m-1}) \in \mathcal{P}_d$ and a Lipschitz map $T : \mathcal{P}_d \to \mathcal{P}_d$. Let $\mathrm{Lip}_T$ denote the Lipschitz constant of $T$, that is $\mathrm{Lip}_T := \sup_{x\ne y} \|T(x) - T(y)\| / \|x - y\|$.
Our main result regarding iteration (2) is the following.
Theorem 1. Assume that $L := \mathrm{Lip}_T \le 1$ and that $e^{(0)}$ is a fixed point of $T$. Then, for all $n \ge 0$ and $\epsilon > 0$,
$$\|Q_n(x) - e^{(0)}\| \le 2\,\frac{1 - x^0}{x^0}\,\big(L\,e^{-I_* + \epsilon}\big)^n \qquad (5)$$
with probability at least $1 - \exp(-c\,n\epsilon^2/\sigma_*^2)$, for some absolute constant $c > 0$.
The proof of Theorem 1 is outlined in Section 5. Our main application of the theorem will be to the
study of convergence of stopping rules for a distributed multiple change point problem endowed with
latent variable graphical models. Before stating that problem, let us consider the classical (single)
change point problem first, and show how the theorem can be applied to analyze the convergence of
the optimal Bayes rule.
Example 1. In the classical Bayesian change point problem [1], one observes a sequence $\{X^1, X^2, X^3, \ldots\}$ of independent data points whose distributions change at some random time $\lambda$. More precisely, given $\lambda = k$, the points $X^1, X^2, \ldots, X^{k-1}$ are distributed according to $g$, and $X^k, X^{k+1}, \ldots$ according to $f$. Here, $f$ and $g$ are densities with respect to some underlying measure. One also assumes a prior $\pi$ on $\lambda$, usually taken to be geometric. The goal is to find a stopping rule $\nu$ which can predict $\lambda$ based on the data points observed so far. It is well-known that a rule based on thresholding the posterior probability of $\lambda$ is optimal (in a Neyman-Pearson sense). To be more specific, let $\mathbf{X}^n := (X^1, X^2, \ldots, X^n)$ collect the data up to time $n$ and let $\gamma^n[n] := \mathbb{P}(\lambda \le n \mid \mathbf{X}^n)$ be the posterior probability of $\lambda$ having occurred before (or at) time $n$. Then, the Shiryayev rule
$$\nu := \inf\{n \in \mathbb{N} : \gamma^n[n] \ge 1 - \alpha\} \qquad (6)$$
is known to asymptotically have the least expected delay, among all stopping rules with false alarm probability bounded by $\alpha$.
Theorem 1 provides a way to quantify how fast the posterior $\gamma^n[n]$ approaches 1, once the change point has occurred, hence providing an estimate of the detection delay, even for a finite number of samples. We should note that our approach here is somewhat independent of the classical techniques normally used for analyzing stopping rule (6). To cast the problem in the general framework of (2), let us introduce the binary variable $Z^n := 1\{\lambda \le n\}$, where $1\{\cdot\}$ denotes the indicator of an event. Let $Q_n$ be the (random) distribution of $Z^n$ given $\mathbf{X}^n$, in other words,
$$Q_n := \big(\mathbb{P}(Z^n = 1 \mid \mathbf{X}^n),\ \mathbb{P}(Z^n = 0 \mid \mathbf{X}^n)\big).$$
Since $\gamma^n[n] = \mathbb{P}(Z^n = 1 \mid \mathbf{X}^n)$, convergence of $\gamma^n[n]$ to 1 is equivalent to the convergence of $Q_n$ to $e^{(0)} = (1, 0)$. We have
$$P(Z^n \mid \mathbf{X}^n) \ \propto_{Z^n}\ P(Z^n, X^n \mid \mathbf{X}^{n-1}) = P(X^n \mid Z^n)\,P(Z^n \mid \mathbf{X}^{n-1}). \qquad (7)$$
Note that $P(X^n \mid Z^n = 1) = f(X^n)$ and $P(X^n \mid Z^n = 0) = g(X^n)$. Let $\lambda_n := \big(1,\ \frac{g(X^n)}{f(X^n)}\big)$ and
$$R_{n-1} := \big(\mathbb{P}(Z^n = 1 \mid \mathbf{X}^{n-1}),\ \mathbb{P}(Z^n = 0 \mid \mathbf{X}^{n-1})\big).$$
Then, (7) implies that $Q_n$ can be obtained by pointwise multiplication of $R_{n-1}$ by $f(X^n)\lambda_n$ and normalization to make a probability vector. Alternatively, we can multiply by $\lambda_n$, since the procedure is scale-invariant, that is, $Q_n = q_{\lambda_n}(R_{n-1})$ using definition (1). It remains to express $R_{n-1}$ in terms of $Q_{n-1}$. This can be done by using the Bayes rule and the fact that $P(\mathbf{X}^{n-1} \mid \lambda = k)$ is the same for $k \in \{n, n+1, \ldots\}$. In particular, after some algebra (see [17]), one arrives at
$$\gamma^{n-1}[n] = \frac{\pi(n)}{\pi[n-1]^c} + \frac{\pi[n]^c}{\pi[n-1]^c}\,\gamma^{n-1}[n-1], \qquad (8)$$
where $\gamma^k[n] := \mathbb{P}(\lambda \le n \mid \mathbf{X}^k)$, $\pi(n)$ is the prior on $\lambda$ evaluated at time $n$, and $\pi[k]^c := \sum_{i=k+1}^{\infty} \pi(i)$. For the geometric prior with parameter $\rho \in [0,1]$, we have $\pi(n) := (1-\rho)^{n-1}\rho$ and $\pi[k]^c = (1-\rho)^k$. The above recursion then simplifies to $\gamma^{n-1}[n] = \rho + (1-\rho)\,\gamma^{n-1}[n-1]$. Expressing in terms of $R_{n-1}$ and $Q_{n-1}$, the recursion reads
$$R_{n-1} = T(Q_{n-1}), \qquad \text{where } T\begin{pmatrix} x^0 \\ x^1 \end{pmatrix} = \rho\begin{pmatrix} 1 \\ 0 \end{pmatrix} + (1-\rho)\begin{pmatrix} x^0 \\ x^1 \end{pmatrix}.$$
In other words, $T(x) = \rho e^{(0)} + (1-\rho)x$ for $x \in \mathcal{P}_2$.
Thus, we have shown that an iterative algorithm for computing $\gamma^n[n]$ (hence determining rule (6)) can be expressed in the form of (2) for appropriate choices of $\{\lambda_n\}$ and operator $T$. Note that $T$ in this case is Lipschitz with constant $1-\rho$, which is always guaranteed to be $\le 1$.
We can now use Theorem 1 to analyze the convergence of $\gamma^n[n]$. Let us condition on $\lambda = k+1$, that is, we assume that the change point has occurred at time $k+1$. Then, the sequence $\{X^n\}_{n\ge k+1}$ is distributed according to $f$, and we have $\mathbb{E}\log\lambda_n^* = \int f \log\frac{g}{f} = -I$, where $I$ is the KL divergence between densities $f$ and $g$. Noting that $\|Q_n - e^{(0)}\| = 2(1 - \gamma^n[n])$, we immediately obtain the following corollary.
Corollary 1. Consider Example 1 and assume that $\log(g(X)/f(X))$, where $X \sim f$, is sub-Gaussian with sub-Gaussian norm $\le \sigma$. Let $I := \int f \log\frac{f}{g}$. Then, conditioned on $\lambda = k+1$, we have for $n \ge 1$,
$$\gamma^{n+k}[n+k] \ \ge\ 1 - \frac{1 - \gamma^k[k]}{\gamma^k[k]}\,\big((1-\rho)\,e^{-I+\epsilon}\big)^n$$
with probability at least $1 - \exp(-c\,n\epsilon^2/\sigma^2)$.
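A compact sketch of this single change point recursion, combining the prior update (8) with the Bayes step (7), is given below. The densities and parameter values are illustrative assumptions of ours (echoing the Gaussian setup of Section 4.3), not the paper's code.

```python
import numpy as np
from scipy.stats import norm

def shiryayev_posterior(xs, f, g, rho):
    """Posterior path gamma^n[n] = P(change <= n | X^n) for the classical
    single change point problem, via recursion (8) and Bayes update (7).
    f, g: post-/pre-change densities of a scalar observation; rho: geometric
    prior parameter."""
    gammas, gam = [], 0.0
    for x in xs:
        prior = rho + (1 - rho) * gam          # gamma^{n-1}[n], Eq. (8)
        num = prior * f(x)                     # Z^n = 1 branch
        den = num + (1 - prior) * g(x)         # plus Z^n = 0 branch
        gam = num / den                        # gamma^n[n]
        gammas.append(gam)
    return np.array(gammas)

# Toy check: pre-change N(1,1), post-change N(0,1), change at time 51.
rng = np.random.default_rng(1)
xs = np.concatenate([rng.normal(1, 1, 50), rng.normal(0, 1, 100)])
path = shiryayev_posterior(xs, norm(0, 1).pdf, norm(1, 1).pdf, rho=0.01)
print(np.argmax(path >= 0.99))  # approximate detection time of rule (6)
```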
4 Multiple change point problem via latent variable graphical models
We now turn to our main application for Theorem 1, in the context of a multiple change point
problem. In [18], graphical model formalism is used to extend the classical change point problem (cf. Example 1) to cases where multiple distributed latent change points are present. Throughout this section, we will use this setup, which we now briefly sketch.
One starts with a network $G = (V, E)$ of $d$ sensors or nodes, each associated with a change point $\lambda_j$. Each node $j$ observes a private sequence of measurements $\mathbf{X}_j = (X_j^1, X_j^2, \ldots)$ which undergoes a change in distribution at time $\lambda_j$, that is,
$$X_j^1, X_j^2, \ldots, X_j^{k-1} \mid \lambda_j = k \ \overset{iid}{\sim}\ g_j, \qquad X_j^k, X_j^{k+1}, \ldots \mid \lambda_j = k \ \overset{iid}{\sim}\ f_j,$$
for densities $g_j$ and $f_j$ (w.r.t. some underlying measure). Each connected pair of nodes shares an additional sequence of measurements. For example, if nodes $s_1$ and $s_2$ are connected, that is, $e = (s_1, s_2) \in E$, then they both observe $\mathbf{X}_e = (X_e^1, X_e^2, \ldots)$. The shared sequence undergoes a change in distribution at some point depending on $\lambda_{s_1}$ and $\lambda_{s_2}$. More specifically, it is assumed that the earlier of the two change points causes a change in the shared sequence, that is, the distribution of $\mathbf{X}_e$ conditioned on $(\lambda_{s_1}, \lambda_{s_2})$ only depends on $\lambda_e := \lambda_{s_1} \wedge \lambda_{s_2}$, the minimum of the two, i.e.,
$$X_e^1, X_e^2, \ldots, X_e^k \mid \lambda_e = k \ \overset{iid}{\sim}\ g_e, \qquad X_e^{k+1}, X_e^{k+2}, \ldots \mid \lambda_e = k \ \overset{iid}{\sim}\ f_e.$$
Letting $\lambda_\star := \{\lambda_j\}_{j\in V}$ and $\mathbf{X}_\star^n = \{\mathbf{X}_j^n, \mathbf{X}_e^n\}_{j\in V, e\in E}$, we can write the joint density of all random variables as
$$P(\lambda_\star, \mathbf{X}_\star^n) = \prod_{j\in V}\pi_j(\lambda_j)\,\prod_{j\in V}P(\mathbf{X}_j^n \mid \lambda_j)\,\prod_{e\in E}P(\mathbf{X}_e^n \mid \lambda_{s_1}, \lambda_{s_2}), \qquad (9)$$
where $\pi_j$ is the prior on $\lambda_j$, which we assume to be geometric with parameter $\rho_j$. Network $G$ induces a graphical model [2] which encodes the factorization (9) of the joint density (cf. Fig. 1).
Suppose now that each node $j$ wants to detect its change point $\lambda_j$, with minimum expected delay, while maintaining a false alarm probability at most $\alpha$. Inspired by the classical change point problem, one is interested in computing the posterior probability that the change point has occurred up to now, that is,
$$\gamma_j^n[n] := \mathbb{P}(\lambda_j \le n \mid \mathbf{X}_\star^n). \qquad (10)$$
The difference with the classical setting is that the conditioning is done on all the data in the network (up to time $n$). It is easy to verify that the natural stopping rule
$$\nu_j = \inf\{n \in \mathbb{N} : \gamma_j^n[n] \ge 1 - \alpha\}$$
satisfies the false alarm constraint. It has also been shown that this rule is asymptotically optimal in terms of expected detection delay. Moreover, an algorithm based on the well-known sum-product [2] has been proposed, which allows the nodes to compute their posterior probabilities (10) by message-passing. The algorithm is exact when $G$ is a tree, and scales linearly in the number of nodes. More precisely, at time $n$, the computational complexity is $O(nd)$. The drawback is the linear dependence on $n$, which makes the algorithm practically infeasible if the change points model rare events (where $n$ could grow large before detecting the change).
In the next section, we propose an approximate message passing algorithm which has computational complexity $O(d)$ at each time step. This circumvents the drawback of the exact algorithm and allows for indefinite run times. We then show how the theory developed in Section 3 can be used to provide convergence guarantees for this approximate algorithm, as well as the exact one.
4.1 Fast approximate message-passing (MP)
We now turn to an approximate message-passing algorithm which, at each time step, has computational complexity $O(d)$. The derivation is similar to that used for the iterative algorithm in Example 1. Let us define binary variables
$$Z_j^n = 1\{\lambda_j \le n\}, \qquad Z_\star^n = (Z_1^n, \ldots, Z_d^n). \qquad (11)$$
The idea is to compute $P(Z_\star^n \mid \mathbf{X}_\star^n)$ recursively based on $P(Z_\star^{n-1} \mid \mathbf{X}_\star^{n-1})$. By Bayes rule, $P(Z_\star^n \mid \mathbf{X}_\star^n)$ is proportional in $Z_\star^n$ to $P(Z_\star^n, X_\star^n \mid \mathbf{X}_\star^{n-1}) = P(X_\star^n \mid Z_\star^n)\,P(Z_\star^n \mid \mathbf{X}_\star^{n-1})$, hence
$$P(Z_\star^n \mid \mathbf{X}_\star^n) \ \propto_{Z_\star^n}\ \Big[\prod_{j\in V}P(X_j^n \mid Z_j^n)\prod_{\{i,j\}\in E}P(X_{ij}^n \mid Z_i^n, Z_j^n)\Big]\,P(Z_\star^n \mid \mathbf{X}_\star^{n-1}), \qquad (12)$$
where we have used the fact that given $Z_\star^n$, $X_\star^n$ is independent of $\mathbf{X}_\star^{n-1}$. To simplify notation, let us extend the edge set to $\tilde{E} := E \cup \{\{j\} : j \in V\}$. This allows us to treat the private data of node $j$, i.e., $\mathbf{X}_j$, as shared data of a self-loop in the extended graph $(V, \tilde{E})$. Let $u_e(z; \xi) := [g_e(\xi)]^{1-z}[f_e(\xi)]^z$ for $e \in \tilde{E}$, $z \in \{0,1\}$. Then, for $i \ne j$,
$$P(X_j^n \mid Z_j^n) = u_j(Z_j^n; X_j^n), \qquad P(X_{ij}^n \mid Z_i^n, Z_j^n) = u_{ij}(Z_i^n \vee Z_j^n; X_{ij}^n). \qquad (13)$$
It remains to express $P(Z_\star^n \mid \mathbf{X}_\star^{n-1})$ in terms of $P(Z_\star^{n-1} \mid \mathbf{X}_\star^{n-1})$. It is possible to do this, exactly, at a cost of $O(2^{|V|})$. For brevity, we omit the exact expression. (See Lemma 1 for some details.) We term the algorithm that employs the exact relationship the "exact algorithm".
In practice, however, the exponential complexity makes the exact recursion of little use for large networks. To obtain a fast algorithm (i.e., $O(\mathrm{poly}(d))$), we instead take a mean-field type approximation:
$$P(Z_\star^n \mid \mathbf{X}_\star^{n-1}) \ \approx\ \prod_{j\in V}P(Z_j^n \mid \mathbf{X}_\star^{n-1}) = \prod_{j\in V}\eta\big(Z_j^n;\ \gamma_j^{n-1}[n]\big), \qquad (14)$$
where $\eta(z; \gamma) := \gamma^z(1-\gamma)^{1-z}$. That is, we approximate a multivariate distribution by the product of its marginals. By an argument similar to that used to derive (8), we can obtain a recursion for the marginals,
$$\gamma_j^{n-1}[n] = \frac{\pi_j(n)}{\pi_j[n-1]^c} + \frac{\pi_j[n]^c}{\pi_j[n-1]^c}\,\gamma_j^{n-1}[n-1], \qquad (15)$$
where we have used the notation introduced earlier in (8). Thus, at time $n$, the RHS of (14) is known based on values computed at time $n-1$ (with initial value $\gamma_j^0[0] = 0$, $j \in V$). Inserting this RHS into (12) in place of $P(Z_\star^n \mid \mathbf{X}_\star^{n-1})$, we obtain a graphical model in variables $Z_\star^n$ (instead of $\lambda_\star$) which has the same form as (9), with $\eta(Z_j^n; \gamma_j^{n-1}[n])$ playing the role of the prior $\pi_j(\lambda_j)$.
In order to obtain the marginals $\gamma_j^n[n] = P(Z_j^n = 1 \mid \mathbf{X}_\star^n)$ and $\gamma_{ij}^n[n]$ with respect to the approximate version of the joint distribution $P(Z_\star^n, X_\star^n \mid \mathbf{X}_\star^{n-1})$, we need to marginalize out the latent variables $Z_j^n$'s, for which a standard sum-product algorithm can be applied (see [2, 3, 18]). The message update equations are similar to those in [18]; the difference is that the messages are now binary and do not grow in size with $n$.
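The sketch below spells out one time step of this approximate scheme for the special case of a star graph (the topology used in Section 4.3); the function and variable names are ours, and equal geometric parameters $\rho_j = \rho$ are assumed for brevity.

```python
import numpy as np

def approx_mp_step(gamma_prev, lik_priv, lik_shared, rho):
    """One step of the O(d) approximate algorithm on a star graph.
    Node 0 is the hub; nodes 1..d-1 are leaves, each sharing one edge
    sequence with the hub.  gamma_prev[j] = gamma_j^{n-1}[n-1];
    lik_priv[j] = (g_j(x_j^n), f_j(x_j^n)); lik_shared[j] likewise for
    edge (0, j), with entry 0 unused.  Returns the new gamma_j^n[n]."""
    d = len(gamma_prev)
    prior = rho + (1 - rho) * np.asarray(gamma_prev)   # recursion (15)
    # Unary potential over z in {0,1}: likelihood times mean-field "prior".
    phi = [np.array([lik_priv[j][0] * (1 - prior[j]),
                     lik_priv[j][1] * prior[j]]) for j in range(d)]
    # Pairwise psi[z_hub, z_leaf] = u_e(z_hub OR z_leaf; x_e), Eq. (13).
    psi = [np.array([[g, f], [f, f]]) for (g, f) in lik_shared[1:]]
    msg = [psi[j - 1] @ phi[j] for j in range(1, d)]   # leaf -> hub
    hub_belief = phi[0] * np.prod(msg, axis=0)
    gamma = np.empty(d)
    gamma[0] = hub_belief[1] / hub_belief.sum()
    for j in range(1, d):                              # hub -> leaf
        back = psi[j - 1].T @ (hub_belief / msg[j - 1])
        leaf_belief = phi[j] * back
        gamma[j] = leaf_belief[1] / leaf_belief.sum()
    return gamma
```

Iterating this map over incoming likelihood values produces approximate posterior paths of the kind shown in Fig. 1.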
4.2 Convergence of MP algorithms
We now turn to the analysis of the approximate algorithm introduced in Section 4.1. In particular, we will look at the evolution of $\{\tilde{P}(Z_\star^n \mid \mathbf{X}_\star^n)\}_{n\in\mathbb{N}}$ as a sequence of probability distributions on $\{0,1\}^d$. Here, $\tilde{P}$ signifies that this sequence is an approximation. In order to make a meaningful comparison, we also look at the algorithm which computes the exact sequence $\{P(Z_\star^n \mid \mathbf{X}_\star^n)\}_{n\in\mathbb{N}}$, recursively. As mentioned before, this we will call the "exact algorithm", the details of which are not of concern to us at this point (cf. Prop. 1 for these details).
Recall that we take $\tilde{P}(Z_\star^n \mid \mathbf{X}_\star^n)$ and $P(Z_\star^n \mid \mathbf{X}_\star^n)$, as distributions for $Z_\star^n$, to be elements of $\mathcal{P}_d \subset \mathbb{R}^m$. To make this correspondence formal and the notation simplified, we use the symbol $:\equiv$ as follows
$$\tilde{y}_n :\equiv \tilde{P}(Z_\star^n \mid \mathbf{X}_\star^n), \qquad y_n :\equiv P(Z_\star^n \mid \mathbf{X}_\star^n), \qquad (16)$$
where now $\tilde{y}_n, y_n \in \mathcal{P}_d$. Note that $\tilde{y}_n$ and $y_n$ are random elements of $\mathcal{P}_d$, due to the randomness of $\mathbf{X}_\star^n$. We have the following description.
Proposition 1. The exact and approximate sequences, $\{y_n\}$ and $\{\tilde{y}_n\}$, follow general iteration (2) with the same random sequence $\{\lambda_n\}$, but with different deterministic operators $T$, denoted respectively $T_{ex}$ and $T_{ap}$. $T_{ex}$ is linear and given by a Markov transition kernel. $T_{ap}$ is a polynomial map of degree $d$. Both maps are Lipschitz and we have
$$\mathrm{Lip}_{T_{ex}} \le L_* := 1 - \prod_{j=1}^d \rho_j, \qquad \mathrm{Lip}_{T_{ap}} \le K_* := \sum_{j=1}^d (1 - \rho_j). \qquad (17)$$
Detailed descriptions of the sequence $\{\lambda_n\}$ and the operators $T_{ex}$ and $T_{ap}$ are given in [17]. As suggested by Theorem 1, a key assumption for the convergence of the approximate algorithm will be $K_* \le 1$. In contrast, we always have $L_* \le 1$.
Recall that $\{\lambda_j\}$ are the change points and their priors are geometric with parameters $\{\rho_j\}$. We analyze the algorithms once all the change points have happened. More precisely, we condition on $M_{n_0} := \{\max_j \lambda_j \le n_0\}$ for some $n_0 \in \mathbb{N}$. Then, one expects the (joint) posterior of $Z_\star^n$ to contract to the point where $Z_j = 1$, for all $j \in V$. In the vectorial notation, we expect both $\{\tilde{y}_n\}$ and $\{y_n\}$ to converge to $e^{(0)}$. Theorem 2 below quantifies this convergence in $\ell_1$ norm (equivalently, total variation for measures).
Recall the pre-change and post-change densities $g_e$ and $f_e$, and let $I_e$ denote their KL divergence, that is, $I_e := \int f_e \log(f_e/g_e)$. We will assume that
$$Y_e := \log(g_e(X)/f_e(X)) \quad \text{with } X \sim f_e \qquad (18)$$
is sub-Gaussian, for all $e \in \tilde{E}$, where $\tilde{E}$ is the extended edge set introduced in Section 4.1. The choice $X \sim f_e$ is in accordance with conditioning on $M_{n_0}$. Note that $\mathbb{E}Y_e = -I_e < 0$. We define
$$\sigma_{\max} := \max_{e\in\tilde{E}} \|Y_e\|_{\psi_2}, \qquad I_{\min} := \min_{e\in\tilde{E}} I_e, \qquad I_*(\epsilon) := I_{\min} - \epsilon\,\sigma_{\max}\sqrt{\log D},$$
where $D := |V| + |E|$. Theorem 1 and Lemma 1 give us the following. (See [17] for the proof.)
Figure 1: Top row illustrates a network (left), which induces a graphical model (middle). Right panel illustrates one stage of message-passing to compute posterior probabilities $\gamma_j^n[n]$. Bottom row illustrates typical examples of posterior paths, $n \mapsto \gamma_j^n[n]$, obtained by exact (EXACT) and approximate (APPROX) message passing, for the subgraph on nodes {1, 2, 3, 4}. The change points are designated with vertical dashed lines.
Theorem 2. There exists an absolute constant $c > 0$ such that the following holds. If $I_*(\epsilon) > 0$, the exact algorithm converges at least geometrically w.h.p., that is, for all $n \ge 1$,
$$\|y_{n+n_0} - e^{(0)}\| \le 2\,\frac{1 - y_{n_0}^0}{y_{n_0}^0}\,\big(L_*\,e^{-I_*(\epsilon)+\epsilon}\big)^n \qquad (19)$$
with probability at least $1 - \exp\big(-c\,n\epsilon^2/(\sigma_{\max}^2 D^2 \log D)\big)$, conditioned on $M_{n_0}$. If in addition $K_* \le 1$, the approximate algorithm also converges at least geometrically w.h.p., i.e., for all $n \ge 1$,
$$\|\tilde{y}_{n+n_0} - e^{(0)}\| \le 2\,\frac{1 - \tilde{y}_{n_0}^0}{\tilde{y}_{n_0}^0}\,\big(K_*\,e^{-I_*(\epsilon)+\epsilon}\big)^n \qquad (20)$$
with the same (conditional) probability as the exact algorithm.
4.3 Simulation results
We present some simulation results to verify the effectiveness of the proposed approximation algorithm in estimating the posterior probabilities $\gamma_j^n[n]$. We consider a star graph on $d = 4$ nodes. This is the subgraph on nodes {1, 2, 3, 4} in Fig. 1. Conditioned on the change points $\lambda_\star$, all data sequences $\mathbf{X}_\star$ are assumed Gaussian with variance 1, pre-change mean 1 and post-change mean zero. All priors are geometric with $\rho_j = 0.1$. We note that higher values of $\rho_j$ yield even faster convergence in the simulations, but we omit these figures due to space constraints. Fig. 1 illustrates typical examples of posterior paths $n \mapsto \gamma_j^n[n]$, for both the exact and approximate MP algorithms. One can observe that the approximate path often closely follows the exact one. In some cases, they might deviate for a while, but as suggested by Theorem 2, they approach one another quickly, once the change points have occurred.
From the theorem and the triangle inequality, it follows that under $I_*(\epsilon) > 0$ and $K_* \le 1$, $\|y_n - \tilde{y}_n\|$ converges to zero, at least geometrically w.h.p. This gives some theoretical explanation for the good tracking behavior of the approximate algorithm as observed in Fig. 1.
5 Proof of Theorem 1
For $x \in \mathbb{R}^m$ (including $\mathcal{P}_d$), we write $x = (x^0, \tilde{x})$ where $\tilde{x} = (x^1, \ldots, x^{m-1})$. Recall that $e^{(0)} = (1, 0, \ldots, 0)$ and $\|x\| = \sum_{i=0}^{m-1}|x^i|$. For $x \in \mathcal{P}_d$, we have $1 - x^0 = \|\tilde{x}\|$, and
$$\|x - e^{(0)}\| = \|(x^0 - 1, \tilde{x})\| = 1 - x^0 + \|\tilde{x}\| = 2(1 - x^0). \qquad (21)$$
For $\lambda = (\lambda^0, \tilde{\lambda}) \in \mathbb{R}_+^m$, let
$$\lambda^* := \|\tilde{\lambda}\|_\infty = \max_{i=1,\ldots,m-1}\lambda^i, \qquad \lambda^\sharp := \big(\lambda^0,\ (\lambda^* L)\,\mathbf{1}_{m-1}\big) \in \mathbb{R}_+^m, \qquad (22)$$
where $\mathbf{1}_{m-1}$ is a vector in $\mathbb{R}^{m-1}$ whose coordinates are all ones. We start by investigating how $\|q_\lambda(x) - e^{(0)}\|$ varies as a function of $\|x - e^{(0)}\|$.
Lemma 1. For $L \le 1$, $\lambda^* > 0$, and $\lambda^0 = 1$,
$$N := \sup_{\substack{x, y \in \mathcal{P}_d, \\ \|x - e^{(0)}\| \le L\|y - e^{(0)}\|}} \frac{\|q_\lambda(x) - e^{(0)}\|}{\|q_{\lambda^\sharp}(y) - e^{(0)}\|} = 1. \qquad (23)$$
Lemma 1 is proved in [17]. We now proceed to the proof of the theorem. Recall that $T : \mathcal{P}_d \to \mathcal{P}_d$ is an $L$-Lipschitz map, and that $e^{(0)}$ is a fixed point of $T$, that is, $T(e^{(0)}) = e^{(0)}$. It follows that for any $x \in \mathcal{P}_d$, $\|T(x) - e^{(0)}\| \le L\|x - e^{(0)}\|$. Applying Lemma 1, we get
$$\|q_\lambda(T(x)) - e^{(0)}\| \le \|q_{\lambda^\sharp}(x) - e^{(0)}\| \qquad (24)$$
for $\lambda \in \mathbb{R}_+^m$ with $\lambda^0 = 1$, and $x \in \mathcal{P}_d$. (This holds even if $\lambda^* = 0$, where both sides are zero.)
Recall the sequence $\{\lambda_n\}_{n\ge 1}$ used in defining the functions $\{Q_n\}$ according to (2), and the assumption that $\lambda_n^0 = 1$, for all $n \ge 1$. Inequality (24) is key in allowing us to peel the operator $T$ and bring successive elements of $\{q_{\lambda_n}\}$ together. Then, we can exploit the semigroup property (3) on adjacent elements of $\{q_{\lambda_n}\}$.
To see this, for each $\lambda_n$, let $\lambda_n^*$ and $\lambda_n^\sharp$ be defined as in (22). Applying (24) with $x$ replaced with $Q_{n-1}(x)$, and $\lambda$ with $\lambda_n$, we can write
$$\begin{aligned}
\|Q_n(x) - e^{(0)}\| &\le \|q_{\lambda_n^\sharp}(Q_{n-1}(x)) - e^{(0)}\| && \text{(by (24))}\\
&= \|q_{\lambda_n^\sharp}\big(q_{\lambda_{n-1}}(T(Q_{n-2}(x)))\big) - e^{(0)}\| \\
&= \|q_{\lambda_n^\sharp \circ \lambda_{n-1}}\big(T(Q_{n-2}(x))\big) - e^{(0)}\| && \text{(by semigroup property (3))}.
\end{aligned}$$
We note that $(\lambda_n^\sharp \circ \lambda_{n-1})^* = L\,\lambda_n^*\lambda_{n-1}^*$ and
$$(\lambda_n^\sharp \circ \lambda_{n-1})^\sharp = \big(1,\ L(\lambda_n^\sharp \circ \lambda_{n-1})^*\,\mathbf{1}_{m-1}\big) = \big(1,\ L^2\lambda_n^*\lambda_{n-1}^*\,\mathbf{1}_{m-1}\big).$$
Here, $*$ and $\sharp$ act on a general vector in the sense of (22). Applying (24) once more, we get
$$\|Q_n(x) - e^{(0)}\| \le \big\|q_{(1,\ L^2\lambda_n^*\lambda_{n-1}^*\mathbf{1}_{m-1})}(Q_{n-2}(x)) - e^{(0)}\big\|.$$
The pattern is clear. Letting $\beta_n := L^n\prod_{k=1}^n \lambda_k^*$, we obtain by induction
$$\|Q_n(x) - e^{(0)}\| \le \big\|q_{(1,\ \beta_n\mathbf{1}_{m-1})}(Q_0(x)) - e^{(0)}\big\|. \qquad (25)$$
Recall that $Q_0(x) := x$. Moreover,
$$\big\|q_{(1,\ \beta_n\mathbf{1}_{m-1})}(x) - e^{(0)}\big\| = 2\big(1 - [q_{(1,\ \beta_n\mathbf{1}_{m-1})}(x)]^0\big) = 2\big(1 - g_{\beta_n}(x^0)\big), \qquad (26)$$
where $g_\beta(r) := r/(r + \beta(1-r))$, the first equality is by (21), and the second is easily verified by noting that all the elements of $(1, \beta_n\mathbf{1}_{m-1})$, except the first, are equal. Putting (25) and (26) together with the bound
$$1 - g_\beta(r) = \frac{\beta(1-r)}{r + \beta(1-r)} \ \le\ \beta\,\frac{1-r}{r},$$
which holds for $\beta > 0$ and $r \in (0, 1]$, we obtain $\|Q_n(x) - e^{(0)}\| \le 2\beta_n\,\frac{1 - x^0}{x^0}$.
By the sub-Gaussianity assumption on $\{\log\lambda_k^*\}$, we have
$$\mathbb{P}\Big(\frac{1}{n}\sum_{k=1}^n \log\lambda_k^* - \mathbb{E}\log\lambda_1^* > \epsilon\Big) \le \exp(-c\,n\epsilon^2/\sigma_*^2), \qquad (27)$$
for some absolute constant $c > 0$. (Recall that $\sigma_*$ is an upper bound on the sub-Gaussian norm $\|\log\lambda_1^*\|_{\psi_2}$.) On the complement of the event in (27), we have $\prod_{k=1}^n \lambda_k^* \le e^{n(-I_* + \epsilon)}$, which completes the proof.
Acknowledgments
This work was supported in part by NSF grants CCF-1115769 and OCI-1047871.
References
[1] A. N. Shiryayev. Optimal Stopping Rules. Springer-Verlag, 1978.
[2] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[3] M. I. Jordan. Graphical models. Statistical Science, 19:140-155, 2004.
[4] P. Diaconis and D. Freedman. Iterated random functions. SIAM Rev., 41(1):45-76, 1999.
[5] O. P. Kreidl and A. Willsky. Inference with minimum communication: a decision-theoretic variational approach. In NIPS, 2007.
[6] M. Cetin, L. Chen, J. W. Fisher III, A. Ihler, R. Moses, M. Wainwright, and A. Willsky. Distributed fusion in sensor networks: A graphical models perspective. IEEE Signal Processing Magazine, July:42-55, 2006.
[7] X. Nguyen, A. A. Amini, and R. Rajagopal. Message-passing sequential detection of multiple change points in networks. In ISIT, 2012.
[8] A. Frank, P. Smyth, and A. Ihler. A graphical model representation of the track-oriented multiple hypothesis tracker. In Proceedings, IEEE Statistical Signal Processing (SSP), August 2012.
[9] A. T. Ihler, J. W. Fisher III, and A. S. Willsky. Loopy belief propagation: Convergence and effects of message errors. Journal of Machine Learning Research, 6:905-936, May 2005.
[10] Alexander Ihler. Accuracy bounds for belief propagation. In Proceedings of UAI 2007, July 2007.
[11] T. G. Roosta, M. Wainwright, and S. S. Sastry. Convergence analysis of reweighted sum-product algorithms. IEEE Trans. Signal Processing, 56(9):4293-4305, 2008.
[12] D. Steinsaltz. Locally contractive iterated function systems. Ann. Probab., 27(4):1952-1979, 1999.
[13] W. B. Wu and M. Woodroofe. A central limit theorem for iterated random functions. J. Appl. Probab., 37(3):748-755, 2000.
[14] W. B. Wu and X. Shao. Limit theorems for iterated random functions. J. Appl. Probab., 41(2):425-436, 2004.
[15] Ö. Stenflo. A survey of average contractive iterated function systems. J. Diff. Equa. and Appl., 18(8):1355-1380, 2012.
[16] A. van der Vaart and J. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer, 1996.
[17] A. A. Amini and X. Nguyen. Bayesian inference as iterated random functions with applications to sequential inference in graphical models. arXiv preprint.
[18] A. A. Amini and X. Nguyen. Sequential detection of multiple change points in networks: a graphical model approach. IEEE Transactions on Information Theory, 59(9):5824-5841, 2013.
Optimizing Instructional Policies
Robert V. Lindsey†, Michael C. Mozer†, William J. Huggins†, Harold Pashler‡
† Department of Computer Science, University of Colorado, Boulder
‡ Department of Psychology, University of California, San Diego
Abstract
Psychologists are interested in developing instructional policies that boost
student learning. An instructional policy specifies the manner and content
of instruction. For example, in the domain of concept learning, a policy
might specify the nature of exemplars chosen over a training sequence. Traditional psychological studies compare several hand-selected policies, e.g.,
contrasting a policy that selects only difficult-to-classify exemplars with a
policy that gradually progresses over the training sequence from easy exemplars to more difficult (known as fading). We propose an alternative to
the traditional methodology in which we define a parameterized space of
policies and search this space to identify the optimal policy. For example,
in concept learning, policies might be described by a fading function that
specifies exemplar difficulty over time. We propose an experimental technique for searching policy spaces using Gaussian process surrogate-based
optimization and a generative model of student performance. Instead of
evaluating a few experimental conditions each with many human subjects,
as the traditional methodology does, our technique evaluates many experimental conditions each with a few subjects. Even though individual subjects provide only a noisy estimate of the population mean, the optimization
method allows us to determine the shape of the policy space and to identify
the global optimum, and is as efficient in its subject budget as a traditional
A-B comparison. We evaluate the method via two behavioral studies, and
suggest that the method has broad applicability to optimization problems
involving humans outside the educational arena.
1 Introduction
What makes a teacher effective? A critical factor is their instructional policy, which specifies
the manner and content of instruction. Electronic tutoring systems have been constructed
that implement domain-specific instructional policies (e.g., J. R. Anderson, Conrad, & Corbett, 1989; Koedinger & Corbett, 2006; Martin & VanLehn, 1995). A tutoring system
decides at every point in a session whether to present some new material, provide a detailed example to illustrate a concept, pose new problems or questions, or lead the student
step-by-step to discover an answer. Prior efforts have focused on higher cognitive domains
(e.g., algebra) in which policies result from an expert-systems approach involving careful
handcrafted analysis and design followed by iterative evaluation and refinement. As a complement to these efforts, we are interested in addressing fundamental questions in the design
of instructional policies that pertain to basic cognitive skills.
Consider a concrete example: training individuals to discriminate between two perceptual
or conceptual categories, such as determining whether mammogram x-ray images are negative or positive for an abnormality. In training from examples, should the instructor tend
to alternate between categories?as in pnpnpnpn for positive and negative examples?or
present a series of instances from the same category?ppppnnnn (Goldstone & Steyvers,
1
2001)? Both of these strategies?interleaving and blocking, respectively?are adopted by
human instructors (Khan, Zhu, & Mutlu, 2011). Reliable advantages between strategies has
been observed (Kang & Pashler, 2011; Kornell & Bjork, 2008) and factors influencing the
relative effectiveness of each have been explored (Carvalho & Goldstone, 2011).
Empirical evaluation of blocking and interleaving policies involves training a set of human
subjects with a fixed-length sequence of exemplars drawn from one policy or the other.
During training, exemplars are presented one at a time, and typically subjects are asked
to guess the category label associated with the exemplar, after which they are told the
correct label. Following training, mean classification accuracy is evaluated over a set of test
exemplars. Such an experiment yields an intrinsically noisy evaluation of the two policies,
limited by the number of subjects and inter-individual variability. Consequently, the goal
of a typical psychological experiment is to find a statistically reliable difference between the
training conditions, allowing the experimenter to conclude that one policy is superior.
Blocking and interleaving are but two points in a space of policies that could be parameterized by the probability, $\beta$, that the exemplar presented on trial $t+1$ is drawn from the same category as the exemplar on trial $t$. Blocking and interleaving correspond to $\beta$ near 1 and 0, respectively. (There are many more interesting ways of constructing a policy space that includes blocking and interleaving, e.g., $\beta$ might vary with $t$ or with a student's running-average classification accuracy, but we will use the simple fixed-$\beta$ policy space for illustration.) Although one would ideally like to explore the policy space exhaustively, limits on the availability of experimental subjects and laboratory resources make it challenging to conduct studies evaluating more than a few candidate policies to the degree necessary to obtain statistically significant differences.
2 Optimizing an instructional policy
Our goal is to discover the optimum in policy space: the policy that maximizes mean
accuracy or another measure of performance over a population of students. (We focus on
optimizing for a population but later discuss how our approach might be used to address
individual differences.) Our challenge is performing optimization on a budget: each subject
tested imposes a time or financial cost. Evaluating a single policy with a degree of certainty
requires testing many subjects to reduce sampling variance due to individual differences,
factors outside of experimental control (e.g., alertness), and imprecise measurement obtained
from brief evaluations and discrete (e.g., correct or incorrect) responses. Consequently,
exhaustive search over the set of distinguishable policies is not feasible.
Past research on optimal teaching (Chi, VanLehn, Litman, & Jordan, 2011; Rafferty, Brunskill, Griffiths, & Shafto, 2011; Whitehill & Movellan, 2010) has investigated reinforcement
learning and POMDP approaches. These approaches are intriguing but are not typically
touted for their data efficiency. To avoid exceeding a subject budget, the flexibility of the
POMDP framework demands additional bias, imposed via restrictions on the class of candidate policies and strong assumptions about the learner. The approach we will propose
likewise requires specification of a constrained policy space, but does not make assumptions
about the internal state of the learner or the temporal dynamics of learning. In contrast
to POMDP approaches, the cognitive agnosticism of our approach allows it to be readily
applied to arbitrary policy optimization problems. Direct optimization methods that accommodate noisy function evaluations have also been proposed, but experimentation with one
such technique (E. J. Anderson & Ferris, 2001) convinced us that the method we propose
here is orders of magnitude more efficient in its required subject budget.
Neither POMDP nor direct-optimization approaches models the policy space explicitly.
In contrast, we propose an approach based on function approximation. From a function-approximation perspective, the goal is to determine the shape and optimum of the function that maps policies to performance; call this the policy performance function or PPF. What sort of experimental design should be used to approximate the PPF? Traditional experimental design, which aims to show a statistically reliable difference between two alternative policies, requires testing many subjects for each policy. However, if our goal is to determine
the shape of the PPF, we may get better value from data collection by evaluating a large
2
Figure 1: A hypothetical 1D instructional policy space. The
solid black line represents an (unknown) policy performance
function. The grey disks indicate the noisy outcome of single-subject experiments conducted at specified points in policy space. (The diameter of the disk represents the number of data points occurring at the disk's location.) The dashed black
line depicts the GP posterior mean, and the coloring of each
vertical strip represents the cumulative density function for
the posterior.
number of points in policy space each with few subjects instead of a small number of points
each with many subjects. This possibility suggests a new paradigm for experimental design
in psychological science. Our vision is a completely automated system that selects points
in policy space to evaluate, runs an experiment (an evaluation of some policy with one or a small number of subjects), and repeats until a budget for data collection is exhausted.
2.1 Surrogate-based optimization using Gaussian process regression
In surrogate-based optimization (e.g., Forrester & Keane, 2009), experimental observations
serve to constrain a surrogate model that approximates the function being optimized. This
surrogate is used both to select additional experiments to run and to estimate the optimum. Gaussian process regression (GPR) has long been used as the surrogate for solving
low-dimensional stochastic optimization problems in engineering fields (Forrester & Keane,
2009; Sacks, Welch, Mitchell, & Wynn, 1989). Like other Bayesian models, GPR makes efficient use of limited data, which is particularly critical to us because our budget is expressed
in terms of the number of subjects required. Further, GPR provides a principled approach
to handling measurement uncertainty, which is a problem in any experimental context but is particularly striking in human experimentation due to the range of factors influencing performance. The primary constraint imposed by the Gaussian Process prior, that of function smoothness, can readily be ensured with the appropriate design of policy spaces. To illustrate GPR in surrogate-based optimization, Figure 1 depicts a hypothetical 1D instructional
policy space, along with the true PPF and the GPR posterior conditioned on the outcome
of a set of single-subject experiments at various points in policy space.
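As a concrete reference for Figure 1, here is a minimal sketch of the GPR posterior over a 1D policy space. The squared-exponential kernel anticipates Section 2.2, while the hyperparameter values and toy data are our own illustrative assumptions, not quantities from the experiments.

```python
import numpy as np

def gp_posterior(x_obs, y_obs, x_grid, sigma2=1.0, ell=0.2, noise=0.25):
    """GP posterior mean/std on a 1D grid with a squared-exponential
    kernel, as depicted in Figure 1.  Hyperparameters are illustrative."""
    k = lambda a, b: sigma2 * np.exp(-(a[:, None] - b[None, :])**2 / (2 * ell**2))
    K = k(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = k(x_grid, x_obs)
    mean = Ks @ np.linalg.solve(K, y_obs)
    var = sigma2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(np.maximum(var, 0))

# Noisy single-subject outcomes at random points of a hypothetical PPF.
x_obs = np.random.default_rng(2).uniform(0, 1, 20)
y_obs = np.sin(3 * x_obs) + np.random.default_rng(3).normal(0, 0.5, 20)
mean, sd = gp_posterior(x_obs, y_obs, np.linspace(0, 1, 200))
```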
2.2 Generative model of student performance
Each instructional policy is presumed to have an inherent effectiveness for a population of
individuals. However, a policy's effectiveness can be observed only indirectly through measurements of subject performance such as the number of correct responses. To determine the
most effective policy from noisy observations, we must specify a generative model of student
performance which relates the inherent effectiveness of instruction to observed performance.
Formally, each subject $s$ is trained under a policy $x_s$ and then tested to evaluate their performance. We posit that each training policy $x$ has a latent population-wide effectiveness $f_x \in \mathbb{R}$ and that how well a subject performs on the test is a noisy function of $f_{x_s}$. We are interested in predicting the effectiveness of a policy $x'$ across a population of students given the observed test scores of $S$ subjects trained under the policies $x_{1:S}$. Conceptually, this involves first inferring the effectiveness $\mathbf{f}$ of policies $x_{1:S}$ from the noisy test data, then interpolating from $\mathbf{f}$ to $f_{x'}$.
Using a standard Bayesian nonparametric approach, we place a mean-zero Gaussian Process prior over the function $f_x$. For the finite set of $S$ observations, this corresponds to the multivariate normal distribution $\mathbf{f} \sim \mathrm{MVN}(\mathbf{0}, \Sigma)$, where $\Sigma$ is a covariance matrix prescribing how smoothly varying we expect $f$ to be across policies. We use the squared-exponential covariance function, so that $\Sigma_{s,s'} = \sigma^2 \exp\big(-\frac{\|x_s - x_{s'}\|^2}{2\ell^2}\big)$, with $\sigma^2$ and $\ell$ as free parameters.
Having specified a prior over policy effectiveness, we turn to specifying a distribution over
observable measures of subject learning conditioned on effectiveness. In this paper, we
measure learning by administering a multiple-choice test to each subject s and observing
the number of correct responses $s$ made, $c_s$, out of $n_s$ questions. We assume the probability that subject $s$ answers any question correctly is a random variable $\theta_s$ whose expected value is related to the policy's effectiveness via the logistic transform: $\mathbb{E}[\theta_s] = \mathrm{logistic}(o + f_{x_s})$, where $o$ is a constant. This is consistent with the observation model
$$\theta_s \mid f_{x_s}, o, \tau \sim \mathrm{Beta}\big(\tau,\ \tau e^{-(o + f_{x_s})}\big), \qquad c_s \mid \theta_s \sim \mathrm{Binomial}\big(g + (1-g)\theta_s;\ n_s\big), \qquad (1)$$
where $\tau$ controls inter-subject variability in $\theta_s$ and $g$ is the probability of answering a question correctly by random guessing. In this paper, we assume $g = .5$. For this special case, the analytic marginalization over $\theta_s$ yields
$$P(c_s \mid f_{x_s}, \tau, o, g{=}.5) = 2^{-n_s}\binom{n_s}{c_s}\sum_{i=0}^{c_s}\binom{c_s}{i}\,\frac{B\big(\tau + i,\ n_s - c_s + \tau e^{-(o+f_{x_s})}\big)}{B\big(\tau,\ \tau e^{-(o+f_{x_s})}\big)} \qquad (2)$$
where $B(a, b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$ is the beta function.
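For the reader who wants the intermediate step, the marginalization behind (2) follows by expanding the binomial likelihood under $g = .5$ and integrating term by term against the Beta prior on $\theta_s$:
$$P(c_s \mid \theta_s) = \binom{n_s}{c_s}\Big(\frac{1+\theta_s}{2}\Big)^{c_s}\Big(\frac{1-\theta_s}{2}\Big)^{n_s - c_s} = 2^{-n_s}\binom{n_s}{c_s}\sum_{i=0}^{c_s}\binom{c_s}{i}\,\theta_s^i(1-\theta_s)^{n_s - c_s},$$
and each term $\mathbb{E}\big[\theta_s^i(1-\theta_s)^{n_s - c_s}\big]$ under $\theta_s \sim \mathrm{Beta}(\tau, \tau e^{-(o+f_{x_s})})$ equals $B(\tau + i,\ n_s - c_s + \tau e^{-(o+f_{x_s})})/B(\tau,\ \tau e^{-(o+f_{x_s})})$, which yields (2).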
Parameters $\phi \equiv \{\tau, o, \sigma^2, \ell\}$ are given vague uniform priors. The effectiveness of a policy $x'$ is estimated via $p(f_{x'} \mid \mathbf{c}) \approx \frac{1}{M}\sum_{m=1}^{M} p(f_{x'} \mid \mathbf{f}^{(m)}, \phi^{(m)})$, where $p(f_{x'} \mid \mathbf{f}^{(m)}, \phi^{(m)})$ is Gaussian with mean and variance determined by the $m$th sample from the posterior $p(\mathbf{f}, \phi \mid \mathbf{c})$. Posterior samples are drawn via elliptical slice sampling, a technique well-suited for models with highly correlated latent Gaussian variables (Murray, Adams, & MacKay, 2010).
We have also explored a more general framework that relaxes the relationship between chance-guessing and test performance and allows for multiple policies to be evaluated per subject. With regard to the latter, subjects may undergo multiple randomly ordered blocks of trials where in each block $b$ a subject $s$ is trained under a policy $x_{bs}$ and then tested. The observation model is altered so that the score in a block is given by $c_{bs} \sim \mathrm{Binomial}(\theta_{bs}; n_{bs})$ where $\theta_{bs} \equiv \mathrm{logistic}(o' + \alpha_s + f_{x_{bs}})$, the factor $\alpha_s \sim \mathrm{Normal}(0, \tau_\alpha^{-1})$ represents the ability of subject $s$ across blocks, and the constant $o'$ subsumes the role of $o$ and $g$ from the original model. In the spirit of item-response theory (Boeck & Wilson, 2004), the model could be extended further to include factors that represent the difficulty of individual test questions and interactions between subject ability and question difficulty.
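A minimal numerical sketch of the marginal likelihood (2), useful for checking the formula, is given below; the function names and the sanity check are ours, and the $g = .5$ case is assumed.

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_binom(n, k):
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

def log_marginal_score(c, n, f, tau, o):
    """log P(c | f, tau, o, g=.5) from Eq. (2): test-score likelihood with
    the Beta-distributed per-subject accuracy theta integrated out."""
    b = tau * np.exp(-(o + f))            # second Beta parameter
    i = np.arange(c + 1)
    terms = log_binom(c, i) + betaln(tau + i, n - c + b) - betaln(tau, b)
    return -n * np.log(2) + log_binom(n, c) + np.logaddexp.reduce(terms)

# Sanity check: probabilities over all possible scores sum to 1.
probs = [np.exp(log_marginal_score(c, 10, f=0.3, tau=2.0, o=0.5))
         for c in range(11)]
print(sum(probs))  # ~1.0
```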
2.3 Active selection
GP optimization requires a strategy for actively selecting the next experiment. (We refer
to this as a ?strategy? instead of as a ?policy? to avoid confusion with instructional policies.)
Many heuristic strategies have been proposed (Forrester & Keane, 2009), including: grid
sampling over the policy space; expanding or contracting a trust region; and goal-setting
approaches that identify regions of policy space where performance is likely to attain some
target level or beat out the current best experiment result. In addition, greedy versus k-step
predictive planning has been considered (Osborne, Garnett, & Roberts, 2009).
Every strategy faces an exploration/exploitation trade off. Exploration involves searching
regions of the function with the maximum uncertainty; exploitation involves concentrating
on the regions of the function that currently appear to be most promising. Each has a
cost. A focus on exploration rapidly exhausts the budget for subjects. A focus on
exploitation leads to selection of local optima.
The upper-confidence bound (UCB) strategy (Forrester & Keane, 2009; Srinivas, Krause,
Kakade, & Seeger, 2010) attempts to avoid these two costs by starting in an exploratory
mode and shifting to exploitation. This strategy chooses the most-promising experiment
from an upper-confidence bound on the GPR: $x_t = \arg\max_x \hat{\mu}_{t-1}(x) + \kappa_t\,\hat{\sigma}_{t-1}(x)$, where $t$ is a time index, $\hat{\mu}$ and $\hat{\sigma}$ are the mean and standard deviation of the GPR, and $\kappa_t$ controls the exploration/exploitation trade off. Large $\kappa_t$ focuses on regions with the greatest uncertainty, but as $\kappa_t \to 0$, the focus shifts to exploitation in the neighborhood of the current best policy. Annealing $\kappa_t$ as a function of $t$ will yield exploration initially, shifting toward exploitation.
We adapt the UCB strategy by transforming the UCB based on the GPR to an expression based on the population accuracy (proportion correct) via $x_t = \arg\max_x P\big(\frac{c_s}{n_s} > \pi_t \mid f_x\big)$, where $\pi_t$ is an accuracy level determining the exploration/exploitation trade off. In simulations, we found that setting $\pi_t = .999$ was effective. Note that in applying the UCB selection strategy, we must search over a set of candidate policies; we applied a fine uniform grid search over policy space to perform this selection.
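Our reading of this selection rule, in a short sketch; it makes the simplifying assumption that "accuracy" is evaluated at the expected value $\mathrm{logistic}(o + f)$, ignoring the per-subject Beta variability, which is one plausible way to implement the criterion.

```python
import numpy as np
from scipy.stats import norm

def select_policy(grid, mu, sd, pi_t, o=0.0):
    """Pick the next policy on a candidate grid, favoring the policy whose
    GP posterior (mean mu, std sd at each grid point) gives the largest
    probability that population accuracy exceeds the level pi_t."""
    # Accuracy > pi_t  <=>  .5 + .5*logistic(o + f) > pi_t  <=>  f > f_crit,
    # using the g = .5 guessing model of Section 2.2.
    p = 2 * pi_t - 1
    f_crit = np.log(p / (1 - p)) - o         # invert the logistic transform
    prob = 1 - norm.cdf((f_crit - mu) / sd)  # P(f > f_crit) under the GP
    return grid[np.argmax(prob)]
```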
strategy, we must search over a set of candidate policies. We applied a fine uniform grid search over policy space to perform this selection.

[Figure 2: (a) Experiment 1 training display; (b) selected Experiment 2 stimuli and their graspability ratings.]

[Figure 3: Experiment 1 results. (a) Posterior density of the PPF with 100 subjects. The red line depicts the GP posterior mean, μ(x) for policy x, and the pink shading is ±2σ(x), where σ(x) is the GP posterior standard deviation. Light grey squares with error bars indicate the results of a traditional comparison among conditions. (b) Prediction of optimum presentation duration as more subjects are run; dashed line is asymptotic value.]

3 Experiment 1: Optimizing presentation rate
de Jonge, Tabbers, Pecher, and Zeelenberg (2012) studied the effect of presentation rate on word-pair learning. During training, each pair was viewed for a total of 16 sec. Viewing was divided into 16/d trials each with a duration of d sec, where d ranged from 1 sec (viewing the pair 16 times) to 16 sec (viewing the pair once). de Jonge et al. found that an intermediate duration yielded better cued recall performance both immediately and following a delay.
We explored a variant of this experiment in which subjects were asked to learn the favorite sporting team of six individuals. During training, each individual's face was shown along with their favorite team, either Jets or Sharks (Figure 2a). The training policy specifies the duration d of each face-team pair. Training was over a 30 second period, with a total of 30/d trials and an average of 5/d presentations per face-team pair. Presentation sequences were blocked, where a block consists of all six individuals in random order. Immediately following training, subjects were tested on each of the six faces in random order and asked to select the corresponding team. The training/testing procedure was repeated for eight rounds each using different faces. In total, each subject responded to 48 faces. The faces were balanced across ethnicity, age, and gender (provided by Minear & Park, 2004).
Using Mechanical Turk, we recruited 100 subjects who were paid $0.30 for their participation. The policy space was defined to be in the logarithm of the duration, i.e., d = e^x, where x ∈ [ln(.25), ln(5)]. The space included only values of x such that 30/d is an integer; i.e., we ensured that no trials were cut short by the 30 second time limit. Subject 1's training policy, x1, was set to the median of the range of admissible values (857 ms). After each subject t completed the experiment, the PPF posterior was reestimated, and the upper-confidence bound strategy was used to select the policy for subject t + 1, xt+1.
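The admissible policy grid is easy to enumerate: durations of the form d = 30/k for integer trial counts k, restricted to the interval [.25, 5] sec. A minimal sketch (function and variable names are illustrative):

import numpy as np

def admissible_durations(total=30.0, d_min=0.25, d_max=5.0):
    # total/d must be an integer, so d = total/k for k = 6, ..., 120.
    min_trials = int(np.ceil(total / d_max))
    max_trials = int(total / d_min)
    return [total / k for k in range(min_trials, max_trials + 1)]

policies = np.log(admissible_durations())  # the policy variable x = ln(d)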
Figure 3a shows the PPF posterior based on 100 subjects. (We include a movie showing
the evolution of the PPF over subjects in the Supplementary Materials.) The diameter of
the grey disks indicate the number of data points observed at that location in the space.
The optimum of the PPF mean is at 1.15 sec, at which duration each face-team pair will be
shown on expectation 4.33 times during training. Though the result seems intuitive, we've
polled colleagues, and predictions for the peak ranged from below 1 sec to 2.5 sec. Figure 3b
uses the PPF mean to estimate the optimum duration, and this duration is plotted against
the number of subjects. Our procedure yields an estimate for the optimum duration that is quite stable after about 40 subjects.

[Figure 4: Expt. 2, trial-dependent fading and repetition policies (left and right, respectively). Colored lines represent specific policies. Axes: relative distance to category boundary (near to far) and repetition probability, by training trial.]
Ideally, one would like to compare the PPF posterior to ground truth. However, obtaining ground truth requires a massive data collection effort. As an alternative, we contrast our result with a more traditional experimental study based on the same number of subjects.
We ran an additional 100 subjects in a standard experimental design involving evaluation of five alternative policies, d ∈ {1, 1.25, 1.667, 2.5, 5}, 20 subjects per policy. (These durations correspond to 1-5 presentations of each face-team pair during training.) The mean score for each policy is plotted in Figure 3a as light grey squares with bars indicating ±2 standard errors of the mean. The result of the traditional experiment is coarsely consistent with the
PPF posterior, but the budget of 100 subjects places a limitation on the interpretability
of the results. When matched on budget, the optimization procedure appears to produce results that are more interpretable and less sensitive to noise in the data. Note that we have biased this comparison in favor of the traditional design by restricting the exploration of the policy space to the region 1 sec ≤ d ≤ 5 sec. Nonetheless, no clear pattern emerges in the shape of the PPF based on the outcome of the traditional design.

4 Experiment 2: Optimizing training example sequence
In Experiment 2, we study concept learning from examples. Subjects are told that martians will teach them the meaning of a martian adjective, glopnor, by presenting a series of example objects, some of which have the property glopnor and others do not. During a training phase, objects are presented one at a time and subjects must classify the object as glopnor or not-glopnor. They then receive feedback as to the correctness of their response. On each trial, the object from the previous trial is shown in the corner of the display along with its correct classification, the reason for which is to facilitate comparison
and contrasting of objects. Following 25 training trials, 24 test trials are administered in which the subject makes a classification but receives no feedback. The training and test trials are roughly balanced in number of positive and negative examples.
The stimuli in this experiment are drawn from a set of 320 objects normed by Salmon, McMullen, and Filliter (2010) for graspability, i.e., how manipulable an object is according to how easy it is to grasp and use the object with one hand. They polled 57 individuals, each of whom rated each of the objects multiple times using a 1-5 scale, where 1 means not graspable and 5 means highly graspable. Figure 2b shows several objects and their
ratings. We divided the objects into two groups by their mean rating, with the not-glopnor group having ratings in [1, 2.75] and the glopnor group having ratings in [3.25,
5]. (We discarded objects with ratings in [2.75, 3.25] because they are too difficult even
if one knows the concept). The classification task is easy if one knows that the concept is
graspability. However, the challenge of inferring the concept is extremely difficult because
there are many dimensions along which these objects vary and any one, or more, could be
the classification dimension(s).
We defined an instructional policy space characterized by two dimensions: fading and blocking. Fading refers to the notion from the animal learning literature that learning is facilitated
by presenting exemplars far from the category boundary initially, and gradually transitioning toward more difficult exemplars over time. Exemplars far from the boundary may help
individuals to attend to the dimension of interest; exemplars near the boundary may help
individuals determine where the boundary lies (Pashler & Mozer, in press). Theorists have
also made computational arguments for the benefit of fading (Bengio, Louradour, Collobert,
& Weston, 2009; Khan et al., 2011). Blocking refers to the issue discussed in the Introduction concerning the sequence of category labels: Should training exemplars be blocked or
interleaved? That is, should the category label on one trial tend to be the same as or
different than the label on the previous trial?
For fading, we considered a family of trial-dependent functions that specify the distance of the chosen exemplar to the category boundary (left panel of Figure 4). This family is parameterized by a single policy variable x2, 0 ≤ x2 ≤ 1, that relates to the distance of an exemplar to the category boundary, d, as follows: d(t, x2) = min(1, 2x2) − (1 − |2x2 − 1|)(t − 1)/(T − 1), where T is the total number of training trials and t is the current trial. For blocking, we also considered a family of trial-dependent functions that vary the probability of a category label repetition over trials (right panel of Figure 4). This family is parameterized by the policy variable x1, 0 ≤ x1 ≤ 1, that relates to the probability of repeating the category label of the previous trial, r, as follows: r(t, x1) = x1 + (1 − 2x1)(t − 1)/(T − 1).
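Both policy dimensions are straightforward to implement; a minimal sketch, assuming T = 25 training trials as in the experiment:

def fading_distance(t, x2, T=25):
    # d(t, x2) = min(1, 2*x2) - (1 - |2*x2 - 1|) * (t - 1) / (T - 1)
    return min(1.0, 2.0 * x2) - (1.0 - abs(2.0 * x2 - 1.0)) * (t - 1) / (T - 1)

def repetition_probability(t, x1, T=25):
    # r(t, x1) = x1 + (1 - 2*x1) * (t - 1) / (T - 1)
    return x1 + (1.0 - 2.0 * x1) * (t - 1) / (T - 1)

For example, x2 = 0.5 yields distances that fade from 1 to 0 over training, while x1 = 0.5 yields a time-invariant repetition probability of 0.5.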
Figure 5a provides a visualization of sample training trial sequences for different points in
the 2D policy space. Each graph represents an instance of a specific (probabilistic) policy.
The abscissa of each graph is an index over the 25 training trials; the ordinate represents
the category label and its distance from the category boundary. Policies in the top and
bottom rows show sequences of all-easy and all-hard examples, respectively; intermediate
rows achieve fading in various forms. Policies in the leftmost column begin training with
many repetitions and end training with many alternations; policies in the rightmost column
begin with alternations and end with repetitions; policies in the middle column have a
time-invariant repetition probability of 0.5.
Regardless of the training sequence, the set of test objects was the same for all subjects.
The test objects spanned the spectrum of distances from the category boundary. During
test, subjects were required to make a forced choice glopnor/not-glopnor judgment.
We seeded the optimization process by running 10 subjects in each of four corners of policy
space as well as in the center point of the space. We then ran 150 additional subjects using
GP-based optimization. Figure 5 shows the PPF posterior mean over the 2D policy space,
along with the selection in policy space of the 200 subjects. Contour map colors indicate
the expected accuracy of the corresponding policy (in contrast to the earlier colored graphs
in which the coloring indicates the cdf). The optimal policy is located at x* = (1, .66).
To validate the outcome of this exploration, we ran 50 subjects at x* as well as policies in the upper corners and the center of Figure 5. Consistent with the prediction of the PPF posterior, mean accuracy at x* is 68.6%, compared to 60.9% for (0, 1), 65.7% for (1, 0), and 66.6% for (.5, .5). Unfortunately, only one of the paired comparisons was statistically reliable by a two-tailed Bonferroni corrected t-test: (0, 1) versus x* (p = .027). However, post-hoc power computation revealed that with 50 subjects and the variability inherent in
the data, the odds of observing a reliable 2% difference in the mean is only .10. Running an
additional 50 subjects would raise the power to only .17. Thus, although we did not observe
a statistically significant improvement at the inferred optimum compared to sensible alternative policies, the results are consistent with our inferred optimum being an improvement
over the type of policies one might have proposed a priori.
5
Discussion
The traditional experimental paradigm in psychology involves comparing a few alternative conditions by testing a large number of subjects in each condition. We've described a novel paradigm in which a large number of conditions are evaluated, each with only one or a few subjects. Our approach achieves an understanding of the functional relationship between conditions and performance, and it lends itself to discovering the conditions that attain optimal performance.

We've focused on the problem of optimizing instruction, but the method described here has broad applicability across issues in the behavioral sciences. For example, one might attempt to maximize a worker's motivation by manipulating rewards, task difficulty, or time pressure.
[Figure 5: Experiment 2 (a) policy space and (b) policy performance function at 200 subjects. Axis labels: Repetition/Alternation (Blocking) Policy by Fading Policy; contour colors span 56%-66% expected accuracy.]
Motivation might be studied in an experimental context with voluntary time on task as a
measure of intrinsic interest level.
Consider problems in a quite different domain, human vision. Optimization approaches
might be used to determine optimal color combinations in a manner more efficient and feasible than exhaustive search (Schloss & Palmer, 2011). Also in the vision domain, one might
search for optimal sequences and parameterizations of image transformations that would
support complex visual tasks performed by experts (e.g., x-ray mammography screening) or
ordinary visual tasks performed by the visually impaired.
From a more applied angle, A-B testing has become an extremely popular technique for fine
tuning web site layout, marketing, and sales (Christian, 2012). With a large web population,
two competing alternatives can quickly be evaluated. Our approach offers a more systematic
alternative in which a space of alternatives can be explored efficiently, leading to discovery
of solutions that might not have been conceived of as candidates a priori.
The present work did not address individual differences or high-dimensional policy spaces,
but our framework can readily be extended. Individual differences can be accommodated
via policies that are parameterized by individual variables (e.g., age, education level, performance on related tasks, recent performance on the present task). For example, one might
adopt a fading policy in which the rate of fading depends in a parametric manner on a running average of performance. High dimensional spaces are in principle no challenge for GPR
given a sensible distance metric. The challenge of high-dimensional spaces comes primarily
from computational overhead in selecting the next policy to evaluate. However, this computational burden can be greatly relaxed by switching from a global optimization perspective
to a local perspective: instead of considering candidate policies in the entire space, active
selection might consider only policies in the neighborhood of previously explored policies.
Acknowledgments
This research was supported by NSF grants BCS-0339103 and BCS-720375 and by an NSF
Graduate Research Fellowship to R. L. We thank Ron Kneusel and Ali Alzabarah for their
invaluable assistance with IT support, and Ponesadat Mortazavi, Vanja Dukic, and Rosie
Cowell for helpful discussions and advice on this work.
References
Anderson, E. J., & Ferris, M. C. (2001). A direct search algorithm for optimization with noisy function evaluations. SIAM Journal of Optimization, 11, 837-857.
Anderson, J. R., Conrad, F. G., & Corbett, A. T. (1989). Skill acquisition and the LISP tutor. Cognitive Science, 13, 467-506.
Bengio, Y., Louradour, J., Collobert, R., & Weston, J. (2009, June). Curriculum learning. In L. Bottou & M. Littman (Eds.), Proceedings of the 26th International Conference on Machine Learning (pp. 41-48). Montreal: Omnipress.
Boeck, P. D., & Wilson, M. (2004). Explanatory item response models: A generalized linear and nonlinear approach. New York: Springer.
Carvalho, P. F., & Goldstone, R. L. (2011, November). Stimulus similarity relations modulate benefits for blocking versus interleaving during category learning. (Presentation at the 52nd Annual Meeting of the Psychonomics Society, Seattle, WA)
Chi, M., VanLehn, K., Litman, D., & Jordan, P. (2011). Empirically evaluating the application of reinforcement learning to the induction of effective and adaptive pedagogical strategies. User Modeling and User-Adapted Interaction, Special Issue on Data Mining for Personalized Educational Systems, 21, 137-180.
Christian, B. (2012). The A/B test: Inside the technology that's changing the rules of business. Wired, 20(4).
de Jonge, M., Tabbers, H. K., Pecher, D., & Zeelenberg, R. (2012). The effect of study time distribution on learning and retention: A goldilocks principle for presentation rate. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38, 405-412.
Forrester, A. I. J., & Keane, A. J. (2009). Recent advances in surrogate-based optimization. Progress in Aerospace Sciences, 45, 50-79.
Goldstone, R. L., & Steyvers, M. (2001). The sensitization and differentiation of dimensions during category learning. Journal of Experimental Psychology: General, 130, 116-139.
Kang, S. H. K., & Pashler, H. (2011). Learning painting styles: Spacing is advantageous when it promotes discriminative contrast. Applied Cognitive Psychology, 26, 97-103.
Khan, F., Zhu, X. J., & Mutlu, B. (2011). How do humans teach: On curriculum learning and teaching dimension. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, & K. Weinberger (Eds.), Advances in NIPS 24 (pp. 1449-1457). La Jolla, CA: NIPS Foundation.
Koedinger, K. R., & Corbett, A. T. (2006). Cognitive tutors: Technology bringing learning science to the classroom. In K. Sawyer (Ed.), The Cambridge Handbook of the Learning Sciences (pp. 61-78). Cambridge, UK: Cambridge University Press.
Kornell, N., & Bjork, R. A. (2008). Learning concepts and categories: Is spacing the enemy of induction? Psychological Science, 19, 585-592.
Martin, J., & VanLehn, K. (1995). Student assessment using Bayesian nets. International Journal of Human-Computer Studies, 42, 575-591.
Minear, M., & Park, D. C. (2004). A lifespan database of adult facial stimuli. Behavior Research Methods, Instruments, and Computers, 36, 630-633.
Murray, I., Adams, R. P., & MacKay, D. J. (2010). Elliptical slice sampling. Journal of Machine Learning Research, 9, 541-548.
Osborne, M. A., Garnett, R., & Roberts, S. J. (2009, January). Gaussian processes for global optimization. In 3rd International Conference on Learning and Intelligent Optimization. Trento, Italy.
Pashler, H., & Mozer, M. C. (in press). Enhancing perceptual category learning through fading: When does it help? Journal of Experimental Psychology: Learning, Memory, and Cognition.
Rafferty, A. N., Brunskill, E. B., Griffiths, T. L., & Shafto, P. (2011). Faster teaching by POMDP planning. In Proceedings of the 15th International Conference on AI in Education.
Sacks, J., Welch, W. J., Mitchell, T. J., & Wynn, H. P. (1989). Design and analysis of computer experiments. Statistical Science, 4, 409-435.
Salmon, J. P., McMullen, P. A., & Filliter, J. H. (2010). Norms for two types of manipulability (graspability and functional usage), familiarity, and age of acquisition for 320 photographs of objects. Behavioral Research Methods, 42, 82-95.
Schloss, K. B., & Palmer, S. E. (2011). Aesthetic response to color combinations: preference, harmony, and similarity. Attention, Perception, & Psychophysics, 73, 551-571.
Srinivas, N., Krause, A., Kakade, S., & Seeger, M. (2010). Gaussian process optimization in the bandit setting: No regret and experimental design. In Proceedings of the 27th International Conference on Machine Learning. Haifa, Israel.
Whitehill, J., & Movellan, J. R. (2010). Optimal teaching machines (Tech. Rep.). La Jolla, CA: Department of Computer Science, UCSD.
4,295 | 4,888 | Linear Decision Rule as Aspiration
for Simple Decision Heuristics
Özgür Şimşek
Center for Adaptive Behavior and Cognition
Max Planck Institute for Human Development
Lentzeallee 94, 14195 Berlin, Germany
[email protected]
Abstract
Several attempts to understand the success of simple decision heuristics have examined heuristics as an approximation to a linear decision rule. This research
has identified three environmental structures that aid heuristics: dominance, cumulative dominance, and noncompensatoriness. This paper develops these ideas
further and examines their empirical relevance in 51 natural environments. The
results show that all three structures are prevalent, making it possible for simple
rules to reach, and occasionally exceed, the accuracy of the linear decision rule,
using less information and less computation.
1
Introduction
The comparison problem asks which of a number of objects has a higher value on an unobserved
criterion. Typically, some attributes of the objects are available as input to the decision. An example
is which of two houses that are currently for sale will have a higher return on investment ten years
from now, given the location, age, lot size, and total living space of each house.
The importance of comparison for intelligent behavior cannot be overstated. Much of human and
animal behavior consists of choosing one object, from among a number of available alternatives,
to act on, with respect to some criterion whose value is unobserved at the time. Examples include
a venture capitalist choosing a company to invest in, a scientist choosing a conference to submit a
paper to, a female tree frog deciding who to mate with, and an ant colony choosing a nest area to
live in.
This paper focuses on paired comparison, in which there are exactly two objects to choose from, and
its solution using linear estimation. Specifically, it is concerned with the environmental structures
that make it possible to mimic the decisions of the linear estimator using less information and less
computation, asking two questions: How much of the linear estimator do we need to know to mimic
its decisions, and under what conditions? How prevalent are these conditions in natural environments? In the following sections, I review several ideas from the literature, develop them further,
and investigate their empirical relevance.
2
Background
A standard approach to the comparison problem is to estimate the criterion as a function of the
attributes of the object, typically as a linear function:
ŷ = w0 + w1 x1 + w2 x2 + ... + wk xk,   (1)
where ŷ is the estimate of the criterion, w0 is the intercept, w1..wk are the weights, and x1..xk are the attribute values. This estimate leads to a decision between objects A and B as follows, where Δxi is used to denote the difference in attribute values between the two objects:
ŷA − ŷB = w1(x1A − x1B) + w2(x2A − x2B) + ... + wk(xkA − xkB)
        = w1 Δx1 + w2 Δx2 + ... + wk Δxk   (2)

Decision rule: Choose object A if w1 Δx1 + ... + wk Δxk > 0; choose object B if w1 Δx1 + ... + wk Δxk < 0; choose randomly if w1 Δx1 + ... + wk Δxk = 0.   (3)
This decision rule does not need the linear estimator in its entirety. The intercept is not used at all.
As for the weights, it suffices to know their sign and relative magnitude. For instance, with two
attributes weighted +0.2 and +0.1, it suffices to know that both weights are positive and that the
first one is twice as high as the other.
The literature on simple decision heuristics [1, 2] has identified several environmental structures
that allow simple rules to make decisions identical to those of the linear decision rule using less
information [3]. These are dominance [4], cumulative dominance [5, 6], and noncompensatoriness [7, 8, 9, 10, 11]. I discuss each in turn in the following sections. I refer to attributes also as
cues and to the signs of the weights as cue directions, as in the heuristics literature. An attribute that
discriminates between two objects is one whose value differs on the two objects. A heuristic that
corresponds to a particular linear decision rule is one whose cue directions, and cue order if it needs
them, are identical to those of the linear decision rule.
The discussion will focus on two successful families of heuristics. The first is unit weighting [12,
13, 14, 15, 16, 17], which uses a linear decision rule with weights of +1 or −1. The second is the
family of lexicographic heuristics [18, 19], which examine cues one at a time, in a specified order,
until a cue is found that discriminates between the objects. The discriminating cue, and that cue
only, is used to make the decision. Lexicographic heuristics are an abstraction of the way words are
ordered in a dictionary, with respect to the alphabetical order of the letters from left to right.
2.1
Dominance
If all terms wi Δxi in Decision Rule 3 are nonnegative, and at least one of them is positive, then object A dominates object B. If all terms wi Δxi are zero, then objects A and B are dominance equivalent. It is easy to see that the linear decision rule chooses the dominant object if there is one. If objects are dominance equivalent, the decision rule chooses randomly.
Dominance is a very strong relationship. When it is present, most decision heuristics choose identically to the linear decision rule if their cue directions match those of the linear rule. These include
unit weighting and lexicographic heuristics, with any ordering of the cues.
To check for dominance, it suffices to know the signs of the weights; the magnitudes of the weights
are not needed. I occasionally refer to dominance as simple dominance to differentiate it from
cumulative dominance, which I discuss next.
2.2
Cumulative dominance
The linear sum in Equation 2 may be written alternatively as follows:
ŷA − ŷB = (w1 − w2)Δx1 + (w2 − w3)(Δx1 + Δx2) + (w3 − w4)(Δx1 + Δx2 + Δx3) + ... + wk(Δx1 + ... + Δxk)
        = w′1 Δx′1 + w′2 Δx′2 + w′3 Δx′3 + ... + w′k Δx′k,   (4)
where (1) Δx′i = Δx1 + ... + Δxi for all i, (2) w′i = wi − wi+1, i = 1, 2, .., k − 1, and (3) w′k = wk.
To this alternative linear sum in Equation 4, we can apply the earlier dominance result, obtaining
a new dominance relationship called cumulative dominance. Cumulative dominance uses an additional piece of information on the weights: their relative ordering.
Object A cumulatively dominates object B if all terms w′i Δx′i are nonnegative and at least one of them is positive. Objects A and B are cumulative-dominance equivalent if all terms w′i Δx′i are zero. The linear decision rule chooses the cumulative-dominant object if there is one. If objects are cumulative-dominance equivalent, the linear decision rule chooses randomly. Note that if weights w1..wk are positive and decreasing, it suffices to examine Δx′i to check for cumulative dominance (because w′i > 0 for all i).
As an example, consider comparing the value of two piles of US coins. The attributes would be the
number of each type of coin in the pile, and the weights would be the financial value of each type
of coin. A pile that contains 6 one-dollar coins, 4 fifty-cent coins, and 2 ten-cent coins cumulatively
dominates (but not simply dominates) a pile containing 3 one-dollar coins, 5 fifty-cent coins, and 1
ten-cent coin: 6 > 3, 6 + 4 > 3 + 5, 6 + 4 + 2 > 3 + 5 + 1.
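Both dominance checks are one-liners once the attribute differences are computed; a minimal sketch, using the coin example for illustration (attributes ordered by nonincreasing coin value, as the cumulative check requires here):

import numpy as np

def dominates(xa, xb):
    # Simple dominance: all differences >= 0 and at least one > 0.
    dx = np.asarray(xa, dtype=float) - np.asarray(xb, dtype=float)
    return bool(np.all(dx >= 0) and np.any(dx > 0))

def cumulatively_dominates(xa, xb):
    # Cumulative dominance: all running sums of differences >= 0, one > 0.
    cum = np.cumsum(np.asarray(xa, dtype=float) - np.asarray(xb, dtype=float))
    return bool(np.all(cum >= 0) and np.any(cum > 0))

print(dominates([6, 4, 2], [3, 5, 1]))               # False: 4 < 5
print(cumulatively_dominates([6, 4, 2], [3, 5, 1]))  # True: 6 > 3, 10 > 8, 12 > 9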
Simple dominance implies cumulative dominance. Cumulative dominance is therefore more likely
to hold than simple dominance. When a cumulative-dominance relationship holds, the linear decision rule, the corresponding lexicographic decision rule, and the corresponding unit-weighting rule
decide identically, with one exception: unit weighting may find a tie where the linear decision rule
does not [5].
2.3
Noncompensatoriness
Without loss of generality, assume that the weights w1, w2, .., wk are nonnegative, which can be satisfied by inverting the attributes when necessary. Consider the linear decision rule as a sequential process, where the terms wi Δxi are added one by one, in order of nonincreasing weights. If we were to stop after the first discriminating attribute, would our decision be identical to the one we would make by processing all attributes? Or would the subsequent attributes reverse this early decision?

The answer is no, it is not possible for subsequent attributes to reverse the early decision, if the attributes are binary, taking values of 0 or 1, and the weights satisfy the set of constraints wi > wi+1 + wi+2 + ... + wk, for i = 1, 2, .., k − 1. Such weights are called noncompensatory. An example is the sequence 1, 0.5, 0.25, 0.125.
With binary attributes and noncompensatory weights, the linear decision rule and the corresponding
lexicographic decision rule decide identically [7, 8].
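The noncompensatoriness condition on the weights is likewise simple to verify; a minimal sketch, assuming the weights are already sorted in nonincreasing order:

def is_noncompensatory(w):
    # Each weight must exceed the sum of all weights that follow it.
    return all(w[i] > sum(w[i + 1:]) for i in range(len(w) - 1))

print(is_noncompensatory([1, 0.5, 0.25, 0.125]))  # True
print(is_noncompensatory([1, 0.6, 0.5]))          # False: 1 <= 0.6 + 0.5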
This concludes the review of the background material. The contributions of the present paper start
in the next section.
3
A probabilistic approach to dominance
To choose between two objects, the linear decision rule examines whether Σi wi Δxi (i = 1, .., k) is above, below, or equal to zero. This comparison can be made with certainty, without knowing the exact values of the weights, if a dominance relationship exists. Here I explore what can be done in the
absence of such certainty. For instance, can we identify conditions under which the comparison can
be made with very high probability? As a motivating example, consider the case where 9 out of 10
attributes favor object A against object B. Although we cannot be certain that the linear decision rule
will select object A, that would be a very good bet.
I make the simplifying assumption that |wi Δxi| are independent, identically distributed samples from the uniform distribution in the interval from 0 to 1. The choice of upper bound of the interval is not consequential because the terms wi Δxi can be rescaled. Let p and n be the number of positive and negative terms wi Δxi, respectively. Using the normal approximation to the sum of uniform variables, we can approximate Σi wi Δxi with the normal distribution with mean (p − n)/2 and variance (p + n)/12. This yields the following estimate of the probability PA that the linear decision rule will select object A: PA ≈ P(X > 0), where X ~ N((p − n)/2, (p + n)/12).
Definition: Object A approximately dominates object B if P(X > 0) ≥ c, where c is a parameter of the approximation (taking values close to 1) and X ~ N((p − n)/2, (p + n)/12).

A similar analysis applies to cumulative dominance.
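The approximate dominance check reduces to one evaluation of the normal survival function; a minimal sketch (the handling of p = n = 0 is an illustrative choice):

from math import sqrt
from scipy.stats import norm

def approximately_dominates(p, n, c=0.99):
    # p, n: counts of positive and negative terms wi*dxi.
    if p + n == 0:
        return False  # dominance equivalent; no discriminating attributes
    mean = (p - n) / 2.0
    sd = sqrt((p + n) / 12.0)
    return norm.sf(0.0, loc=mean, scale=sd) >= c  # P(X > 0)

print(approximately_dominates(9, 1))  # True: the 9-out-of-10 example above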
4
An empirical analysis of relevance
I now turn to the question of whether dominance and noncompensatoriness exist in our environment
in any substantial amount. There are two earlier results on the subject. When binary versions of 20
natural datasets were used to train a multiple linear regression model, at least 3 of the 20 models
were found to have noncompensatory weights [8].1 In the same 20 datasets, with a restriction of 5 on
the maximum number of attributes, the proportion of object pairs that exhibited simple dominance
ranged from 13% to 75% [4].
The present study used 51 natural datasets obtained from a wide variety of sources, including online
data repositories, textbooks, research publications, packages for R statistical software, and individual
scientists collecting field data. The subjects were diverse, including biology, business, computer
science, ecology, economics, education, engineering, environmental science, medicine, political
science, psychology, sociology, sports, and transportation. The datasets varied in size, ranging from
12 to 601 objects, corresponding to 66-180,300 distinct paired comparisons. Number of attributes
ranged from 3 to 21. The datasets are described in detail in the supplementary material.2
I present two sets of results: on the original datasets and on binary versions where numeric attributes
were dichotomized by splitting around the median (assigning the median value to the category with
fewer objects). I refer to the original datasets as numeric datasets but it should be noted that one
dataset had only binary attributes and many datasets had at least one binary attribute. Categorical
attributes were recoded into binary attributes, one for each category, indicating membership in the
category. Objects with missing criterion values were excluded from the analysis. Missing attribute
values were replaced by means across all objects. A decision was considered to be accurate if it
selected an object whose criterion value was equal to the maximum of the criterion values of the
objects being compared.
Cumulative dominance and noncompensatoriness are sensitive to the units of measurement of the
attributes. In this analysis, all attribute values were normalized to lie between 0 and 1, measuring
them relative to the smallest and largest values they take in the dataset.
The linear decision rule was obtained using multiple linear regression with elastic net regularization [21], which contains both a ridge penalty and a lasso penalty. For the regularization parameter
α, which determines the relative proportion of ridge and lasso penalties, the values of 0, 0.2, 0.4, 0.6, 0.8, and 1 were considered. For the regularization parameter λ, which controls the amount of total penalty, the default search path in the R-language package glmnet [22] was used. Both α and λ were selected using cross validation. Specifically, α and λ were set to the values that gave the
minimum mean cross-validation error in the training set. I refer to the linear decision rule learned in
this manner as the base decision rule.
On datasets with fewer than 1000 pairs of objects, a separate linear decision rule was learned for
every pair of objects, using all other objects as the training set. On larger datasets, the pairs of
objects were randomly placed in 1000 folds and a separate model was learned for each fold, training
with all objects not contained in that fold. Five replications were done using different random seeds.
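A minimal sketch of this training procedure using scikit-learn in place of glmnet (the two libraries parameterize the elastic net differently: scikit-learn's l1_ratio plays the role of glmnet's α and its alpha the role of λ; a pure ridge fit, α = 0, would need a separate estimator in scikit-learn):

import numpy as np
from sklearn.linear_model import ElasticNetCV

def fit_base_rule(X_train, y_train):
    # Cross-validate the mixing parameter and the amount of penalty jointly.
    model = ElasticNetCV(l1_ratio=[0.2, 0.4, 0.6, 0.8, 1.0], cv=10)
    return model.fit(X_train, y_train)

def compare(model, xa, xb):
    # Paired comparison: the intercept cancels in the difference of estimates.
    score = np.dot(model.coef_, np.asarray(xa) - np.asarray(xb))
    return "A" if score > 0 else ("B" if score < 0 else "tie")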
Performance of the base decision rule The accuracy of the base decision rule differed substantially
across datasets, ranging from barely above chance to near-perfect. In numeric datasets, accuracy
ranged from 0.56 to 0.98 (mean=0.79). In binary datasets, accuracy was generally lower, ranging
from 0.55 to 0.86 (mean=0.74). Compared to standard multiple linear regression, regularization
improved accuracy in most datasets, occasionally in large amounts (as much as by 19%). Without
regularization, mean accuracy across datasets was lower by 1.17% in numeric datasets and by 0.51%
in binary datasets.
Dominance Figure 1 shows prevalence of dominance, measured by the proportion of object pairs in which one object dominates the other or the two objects are equivalent. The figure shows four types of dominance in each of the datasets. Simple and cumulative dominance are displayed as blue and red lines stacked on top of each other. Recall that simple dominance implies cumulative dominance, so the blue lines show pairs with both simple- and cumulative-dominance relationships. Approximate simple and cumulative dominance are displayed as blue- and red-filled circles, respectively. The datasets are presented in order of decreasing prevalence of simple dominance. The mean, median, minimum, and maximum prevalence of each type of dominance across the datasets are shown in Table 1, along with other performance measures that will be discussed shortly.

1 The authors found 3 datasets in which the weights were noncompensatory and the order of the weights was identical to the cue order of the take-the-best heuristic [19]. It is possible that additional datasets had noncompensatory weights but did not match the take-the-best cue order.
2 The datasets included the 20 datasets in Czerlinski, Gigerenzer & Goldstein [20], which were used to obtain the two sets of earlier results discussed above [8, 4].

[Figure 1: Prevalence of dominance. Blue lines show simple dominance, red lines show cumulative dominance, blue-filled circles show approximate simple dominance, and red-filled circles show approximate cumulative dominance. Panels: numeric datasets (top), binary datasets (bottom).]
Table 1: Descriptive statistics on dominance, cumulative dominance, and noncompensatoriness. Accuracy is shown as a percentage of the accuracy of the base decision rule.

                               NUMERIC DATASETS               BINARY DATASETS
                           Mean  Median   Min    Max      Mean  Median   Min    Max
PREVALENCE
Dom                        0.25   0.16   0.00   0.91      0.51   0.54   0.07   1.00
Dom approx c=0.99          0.35   0.31   0.03   0.91      0.58   0.59   0.22   1.00
Cum dom                    0.58   0.62   0.11   0.94      0.87   0.89   0.61   1.00
Cum dom approx c=0.99      0.74   0.77   0.30   0.94      0.92   0.92   0.76   1.00
Noncompensatory weights      -      -      -      -       0.17   0.00   0.00   1.00
Noncompensation            0.83   0.85   0.49   0.99      0.93   0.96   0.77   1.00
ACCURACY (%)
Dom                        76.8   77.2   56.2   97.5      87.1   89.0   63.9  100.0
Dom approx c=0.99          81.2   82.9   57.0  100.5      90.5   91.2   70.6  100.0
Cum dom                    90.6   93.4   60.8  101.4      98.3   98.9   90.6  103.7
Cum dom approx c=0.99      94.2   96.1   69.5  101.4      99.2   99.6   93.8  103.7
Lexicographic              93.5   96.1   51.4  110.6      97.6   99.6   78.9  104.4
[Figure 2: Accuracy of decisions guided by dominance; numeric datasets (top), binary datasets (bottom). Blue lines show simple dominance, red lines show cumulative dominance, blue-filled circles show approximate simple dominance, and red-filled circles show approximate cumulative dominance. Green circles show the accuracy of the base decision rule for comparison.]
The approximation made a difference in 27-33 of 51 datasets, depending on type of dominance and data
(numeric/binary). As expected, the datasets on which the approximation made a difference were
those that had a larger number of attributes. Specifically, they all had six or more attributes.
Figure 2 shows the accuracy of decisions guided by dominance: choose the dominant object when
there is one; choose randomly otherwise. This accuracy can be higher than the accuracy of the
base decision rule, which happens if choosing randomly is more accurate than the base decision
rule on pairs that exhibit no dominance relationship. Table 1 shows the mean, median, minimum,
and maximum accuracies across the datasets measured as a percentage of the accuracy of the base
decision rule. The accuracies were surprisingly high, more so with binary data. It is worth pointing
out that the accuracy of approximate cumulative dominance in binary datasets ranged from 93.8%
to 103.7% of the accuracy of the base decision rule.
In the results discussed so far, approximate dominance was computed by setting c = 0.99. This
value was selected prior to the analysis based on what this parameter means: 1 − c is the expected
error rate of the approximation, where error rate is the proportion of approximately dominant objects
that are not selected by the linear decision rule.
Figure 3, left panel, shows how well the approximation fared in the 51 datasets with various choices
of the parameter c. The vertical axis shows the mean error rate of the approximation. With numeric
data, the error rates were reasonably close to the expected values. With binary data, error rates were
substantially lower than expected.
6
[Figure 3: Left: Error rates of approximate dominance with various values of the approximation parameter c (observed error rate vs. expected error rate 1 − c, for Dom binary, Cum dom binary, Dom numeric, and Cum dom numeric). Right: Proportion of linear models with noncompensatory weights in each of the binary datasets.]
Noncompensatoriness Let noncompensation be a logical variable that equals TRUE if the decision of the first discriminating cue, when cues are processed in nonincreasing magnitude of the weights, is identical to the decision of the linear decision rule. With binary cues and noncompensatory weights, noncompensation is TRUE with probability 1. Otherwise, its value depends on cue values. If noncompensation is TRUE, the linear decision rule and the corresponding lexicographic rule make identical decisions.
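A minimal sketch of this check for one pair of objects, assuming nonnegative weights (attributes inverted as needed):

import numpy as np

def noncompensation(w, xa, xb):
    # TRUE if the first discriminating cue, in nonincreasing weight order,
    # decides the same way as the full linear decision rule.
    w = np.asarray(w, dtype=float)
    dx = np.asarray(xa, dtype=float) - np.asarray(xb, dtype=float)
    linear_sign = np.sign(np.dot(w, dx))
    for i in np.argsort(-w):
        if dx[i] != 0:
            return np.sign(dx[i]) == linear_sign
    return linear_sign == 0  # no cue discriminates: both rules tie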
Figure 3, right panel, shows the proportion of base decision rules with noncompensatory weights
in binary datasets. Recall that a large number of base decision rules were learned on each dataset,
using different training sets and random seeds. The proportion of base decision rules with noncompensatory weights ranged from 0 to 1, with a mean of 0.17 across datasets. Nine datasets had values
greater than 0.50. Thirty-two datasets had values less than 0.01.
Figure 4 shows noncompensation in each dataset, together with the accuracies of the base decision
rule and the corresponding lexicographic rule. The accuracies on the same dataset are connected
by a line segment. The figure reveals overwhelmingly large levels of noncompensation, particularly
for binary data. Median noncompensation was 0.85 in numeric datasets and 0.96 in binary datasets.
Consequently, the accuracy of the lexicographic rule was very close to that of the linear decision
rule: its median accuracy relative to the base decision rule was 96% in numeric datasets and 100%
in binary datasets. In summary, although noncompensatory weights were not particularly prevalent
in the datasets, actual levels of noncompensation were very high.
5 Discussion
It is fair to conclude that all three environmental structures are prevalent in natural environments to
such a high degree that decisions guided by these structures approach, and occasionally exceed, the
base decision model in predictive accuracy.
We have not examined the performance of any particular decision heuristic, which depends on the cue directions and cue order it uses. These will not necessarily match those of the linear decision rule.3 The results here show that it is possible for decision heuristics to succeed in natural environments by imitating the decisions of the linear model using less information and less computation, because the conditions that make it possible are prevalent, but not that they necessarily do so.
3 When this is the case, it should be noted, the decision heuristic may have a higher predictive accuracy than the linear model.
[Figure 4 appears here: two scatter panels, NUMERIC and BINARY, plotting accuracy against noncompensation for each dataset.]
Figure 4: Prevalence of noncompensation. For each dataset, the proportion of decisions in which noncompensation took place is plotted against the accuracy of the base decision rule (displayed in green circles) and the accuracy of the corresponding lexicographic rule (displayed in blue plus signs). Accuracies on the same dataset are connected by a line segment.
When decision heuristics are examined through the lens of bias-variance decomposition [23, 24, 25],
the three environmental structures examined here are particularly relevant for the bias component
of the prediction error. The results presented here suggest that while simple decision heuristics
examine a tiny fraction of the set of linear models, in natural environments, they may do so without
introducing much additional bias.
It is sometimes argued that the environmental structures discussed here, noncompensatoriness in
particular, are relevant for model fitting but not for prediction on unseen data. This is not accurate.
The results reviewed in Sections 2.1–2.3 apply to a linear model regardless of how the linear model
was trained. If we are comparing objects that were not used to train the model, as we have done
here, the discussion pertains to predictive accuracy.
The probabilistic approximations of dominance and of cumulative dominance introduced in this
paper can be used as decision heuristics themselves, combined with any method of estimating cue
directions and cue order. I leave detailed examination of their performance for future work but note
that the results here are encouraging.
Finally, I hope that these results will stimulate further research in statistical properties of decision
environments, as well as cognitive models that exploit them, for further insights into higher cognition.
Acknowledgments
I am grateful to all those who made their datasets available for this study. Thanks to Gerd Gigerenzer, Konstantinos Katsikopoulos, Amit Kothiyal, and three anonymous reviewers for comments on
earlier versions of this manuscript, and to Marcus Buckmann for his help in gathering the datasets.
This work was supported by Grant SI 1732/1-1 to Özgür Şimşek from the Deutsche Forschungsgemeinschaft (DFG) as part of the priority program "New Frameworks of Rationality" (SPP 1516).
References
[1] G. Gigerenzer, P. M. Todd, and the ABC Research Group. Simple heuristics that make us smart. Oxford University Press, New York, 1999.
[2] G. Gigerenzer, R. Hertwig, and T. Pachur, editors. Heuristics: The Foundations of Adaptive Behavior. Oxford University Press, New York, 2011.
[3] K. V. Katsikopoulos. Psychological heuristics for making inferences: Definition, performance, and the emerging theory and practice. Decision Analysis, 8(1):10–29, 2011.
[4] R. M. Hogarth and N. Karelaia. "Take-the-best" and other simple strategies: Why and when they work "well" with binary cues. Theory and Decision, 61(3):205–249, 2006.
[5] M. Baucells, J. A. Carrasco, and R. M. Hogarth. Cumulative dominance and heuristic performance in binary multiattribute choice. Operations Research, 56(5):1289–1304, 2008.
[6] J. A. Carrasco and M. Baucells. Tight upper bounds for the expected loss of lexicographic heuristics in binary multi-attribute choice. Mathematical Social Sciences, 55(2):156–189, 2008.
[7] L. Martignon and U. Hoffrage. Why does one-reason decision making work? In G. Gigerenzer, P. M. Todd, and the ABC Research Group, editors, Simple heuristics that make us smart, pages 119–140. Oxford University Press, New York, 1999.
[8] L. Martignon and U. Hoffrage. Fast, frugal, and fit: Simple heuristics for paired comparison. Theory and Decision, 52(1):29–71, 2002.
[9] R. M. Hogarth and N. Karelaia. Simple models for multiattribute choice with many alternatives: When it does and does not pay to face trade-offs with binary attributes. Management Science, 51(12):1860–1872, 2005.
[10] K. V. Katsikopoulos and L. Martignon. Naïve heuristics for paired comparisons: Some results on their relative accuracy. Journal of Mathematical Psychology, 50(5):488–494, 2006.
[11] K. V. Katsikopoulos. Why do simple heuristics perform well in choices with binary attributes? Decision Analysis, 10(4):327–340, 2013.
[12] S. S. Wilks. Weighting systems for linear functions of correlated variables when there is no dependent variable. Psychometrika, 3(1):23–40, 1938.
[13] F. L. Schmidt. The relative efficiency of regression and simple unit weighting predictor weights in applied differential psychology. Educational and Psychological Measurement, 31:699–714, 1971.
[14] R. M. Dawes and B. Corrigan. Linear models in decision making. Psychological Bulletin, 81(2):95–106, 1974.
[15] R. M. Dawes. The robust beauty of improper linear models in decision making. American Psychologist, 34(7):571–582, 1979.
[16] H. J. Einhorn and R. M. Hogarth. Unit weighting schemes for decision making. Organizational Behavior and Human Performance, 13(2):171–192, 1975.
[17] C. P. Davis-Stober. A geometric analysis of when fixed weighting schemes will outperform ordinary least squares. Psychometrika, 76(4):650–669, 2011.
[18] P. C. Fishburn. Lexicographic orders, utilities and decision rules: A survey. Management Science, 20(11):1442–1471, 1974.
[19] G. Gigerenzer and D. G. Goldstein. Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103(4):650–669, 1996.
[20] J. Czerlinski, G. Gigerenzer, and D. G. Goldstein. How good are simple heuristics? In G. Gigerenzer, P. M. Todd, and the ABC Research Group, editors, Simple heuristics that make us smart, pages 97–118. Oxford University Press, New York, 1999.
[21] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301–320, 2005.
[22] J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1–22, 2010.
[23] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4(1):1–58, 1992.
[24] H. Brighton and G. Gigerenzer. Bayesian brains and cognitive mechanisms: Harmony or dissonance? In N. Chater and M. Oaksford, editors, The probabilistic mind: Prospects for Bayesian cognitive science, pages 189–208. Oxford University Press, New York, 2008.
[25] G. Gigerenzer and H. Brighton. Homo Heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1(1):107–143, 2009.
| 4888 |@word repository:1 version:3 proportion:10 consequential:1 simplifying:1 decomposition:1 asks:1 contains:2 series:1 comparing:2 si:1 assigning:1 written:1 subsequent:2 cue:21 fewer:2 selected:4 xk:7 dawes:2 location:1 five:1 mathematical:2 along:1 differential:1 replication:1 consists:1 fitting:1 manner:1 expected:6 behavior:5 mpg:1 examine:3 themselves:1 fared:1 multi:1 brain:1 decreasing:2 company:1 actual:1 encouraging:1 psychometrika:2 estimating:1 bounded:1 deutsche:1 panel:2 what:3 substantially:2 emerging:1 textbook:1 unobserved:2 certainty:2 every:1 collecting:1 act:1 tie:1 exactly:1 sale:1 unit:7 control:1 grant:1 planck:1 positive:5 scientist:2 engineering:1 todd:3 oxford:5 path:2 approximately:2 plus:1 twice:1 frog:1 examined:4 acknowledgment:1 thirty:1 investment:1 alphabetical:1 practice:1 differs:1 x3:1 prevalence:6 area:1 empirical:3 w4:1 word:1 suggest:1 cannot:2 close:3 selection:1 live:1 intercept:2 restriction:1 equivalent:5 reviewer:1 center:1 transportation:1 missing:2 educational:1 economics:1 regardless:1 survey:1 splitting:1 rule:56 examines:2 estimator:3 insight:1 financial:1 his:1 coordinate:1 rationality:2 exact:1 us:3 pa:2 particularly:3 carrasco:2 geman:1 observed:1 wj:1 connected:2 improper:1 ordering:2 trade:1 rescaled:1 prospect:1 substantial:1 discriminates:2 environment:7 aspiration:1 dom:12 trained:1 grateful:1 gigerenzer:10 segment:2 smart:3 tight:1 predictive:3 dilemma:1 efficiency:1 various:2 train:2 stacked:1 distinct:1 fast:2 choosing:5 whose:4 heuristic:30 supplementary:1 larger:2 otherwise:2 favor:1 statistic:1 unseen:1 online:1 differentiate:1 sequence:1 descriptive:1 net:2 took:1 relevant:2 stober:1 venture:1 invest:1 perfect:1 leave:1 object:49 help:1 depending:1 develop:1 colony:1 measured:2 xka:1 x0i:4 p2:2 strong:1 entirety:1 implies:2 direction:5 guided:3 attribute:31 human:3 material:2 education:1 argued:1 suffices:4 anonymous:1 hold:2 around:1 considered:2 normal:2 deciding:1 lentzeallee:1 cognition:2 seed:2 pointing:1 dictionary:1 early:2 smallest:1 estimation:1 hoffrage:2 harmony:1 currently:1 katsikopoulos:4 sensitive:1 largest:1 weighted:1 hope:1 offs:1 lexicographic:12 beauty:1 bet:1 publication:1 overwhelmingly:1 chater:1 focus:2 prevalent:4 check:2 political:1 dollar:2 am:1 ozg:2 inference:2 dependent:1 abstraction:1 membership:1 typically:2 bienenstock:1 germany:1 among:1 development:1 animal:1 equal:3 field:1 dissonance:1 identical:6 biology:1 mimic:2 future:1 develops:1 intelligent:1 randomly:6 ve:1 individual:1 dfg:1 cumulatively:2 replaced:1 attempt:1 ecology:1 friedman:1 investigate:1 homo:1 inary:1 nonincreasing:2 accurate:3 necessary:1 tree:1 filled:4 circle:7 plotted:1 dichotomized:1 sociology:1 psychological:4 instance:2 earlier:4 asking:1 measuring:1 ordinary:1 introducing:1 organizational:1 uniform:2 predictor:1 successful:1 motivating:1 answer:1 chooses:4 combined:1 thanks:1 discriminating:3 probabilistic:3 together:1 na:1 w1:10 einhorn:1 satisfied:1 management:2 containing:1 choose:8 fishburn:1 nest:1 priority:1 cognitive:4 ek:2 american:1 return:1 de:1 wk:11 satisfy:1 depends:2 piece:1 lot:1 red:5 start:1 overstated:1 contribution:1 gerd:1 square:1 accuracy:34 variance:3 who:2 yield:1 identify:1 ant:1 bayesian:2 worth:1 reach:1 definition:2 against:2 martignon:3 stop:1 dataset:7 logical:1 recall:2 goldstein:3 manuscript:1 higher:5 improved:1 done:3 generality:1 until:1 stimulate:1 ranged:5 normalized:1 wi0:4 regularization:7 excluded:1 davis:1 noted:2 wilks:1 criterion:7 generalized:1 brighton:2 ridge:2 
hogarth:4 reasoning:1 ranging:3 discussed:4 ims:2 refer:4 measurement:2 approx:4 language:1 had:7 base:16 dominant:4 female:1 reverse:2 x03:1 occasionally:4 certain:1 binary:29 success:1 mpib:1 w30:1 minimum:3 additional:3 greater:1 x02:1 living:1 multiple:3 match:3 cross:2 karelaia:2 paired:4 prediction:2 regression:4 sometimes:1 background:2 interval:2 median:8 source:1 w2:6 fifty:2 doursat:1 exhibited:1 biased:1 comment:1 subject:2 near:1 exceed:2 easy:1 concerned:1 identically:4 variety:1 xj:1 fit:1 psychology:3 w3:2 gave:1 identified:2 lasso:2 hastie:2 idea:2 knowing:1 konstantinos:1 whether:2 six:1 utility:1 penalty:4 york:5 nine:1 generally:1 detailed:1 amount:3 ten:3 wk0:2 processed:1 category:3 outperform:1 exist:1 percentage:2 sign:4 tibshirani:1 blue:8 diverse:1 multiattribute:2 dominance:62 group:3 four:1 fraction:1 year:1 sum:3 package:2 letter:1 place:1 family:2 decide:2 decision:86 bound:2 pay:1 fold:3 nonnegative:3 constraint:1 x2:4 software:2 min:2 across:6 ur:2 wi:11 making:6 happens:1 ozgur:1 psychologist:1 imitating:1 gathering:1 equation:2 discus:2 turn:2 mechanism:1 needed:1 know:4 mind:2 available:3 operation:1 apply:2 alternative:3 coin:9 shortly:1 schmidt:1 cent:4 original:2 top:1 include:2 medicine:1 exploit:1 amit:1 w20:1 society:1 question:2 added:1 strategy:1 exhibit:1 separate:2 berlin:2 w0:2 topic:1 barely:1 reason:1 marcus:1 relationship:6 hertwig:1 x2b:1 negative:1 recoded:1 perform:1 upper:2 vertical:1 datasets:49 mate:1 descent:1 displayed:4 varied:1 frugal:2 introduced:1 inverting:1 pair:9 specified:1 learned:4 below:1 spp:1 program:1 max:3 including:2 green:2 royal:1 natural:7 business:1 examination:1 scheme:2 oaksford:1 axis:1 concludes:1 categorical:1 review:3 literature:3 prior:1 geometric:1 relative:7 x0k:1 loss:2 age:1 validation:2 foundation:1 x01:1 degree:1 editor:4 tiny:1 pi:1 pile:4 summary:1 placed:1 surprisingly:1 supported:1 bias:4 allow:1 understand:1 institute:1 wide:1 taking:2 face:1 bulletin:1 distributed:1 default:1 numeric:13 cumulative:27 author:1 made:5 adaptive:2 far:1 social:1 approximate:9 reveals:1 conclude:1 xi:9 alternatively:1 search:1 why:4 table:3 reviewed:1 reasonably:1 robust:1 elastic:2 obtaining:1 necessarily:2 zou:1 rue:3 submit:1 did:1 pk:3 cum:6 n2:3 fair:1 x1:10 differed:1 aid:1 lie:1 house:2 weighting:8 dominates:6 exists:1 sequential:1 importance:1 magnitude:3 czerlinski:2 simply:1 likely:1 explore:1 glmnet:1 ordered:1 contained:1 sport:1 applies:1 corresponds:1 environmental:7 determines:1 chance:1 abc:3 w10:1 succeed:1 consequently:1 buckmann:1 x2a:1 absence:1 included:1 specifically:3 total:2 called:2 lens:1 exception:1 select:2 indicating:1 pertains:1 relevance:3 correlated:1 |
4,296 | 4,889 | Scoring Workers in Crowdsourcing: How Many
Control Questions are Enough?
Qiang Liu
Dept. of Computer Science
Univ. of California, Irvine
[email protected]
Mark Steyvers
Dept. of Cognitive Sciences
Univ. of California, Irvine
[email protected]
Alexander Ihler
Dept. of Computer Science
Univ. of California, Irvine
[email protected]
Abstract
We study the problem of estimating continuous quantities, such as prices, probabilities, and point spreads, using a crowdsourcing approach. A challenging aspect
of combining the crowd's answers is that workers' reliabilities and biases are usually unknown and highly diverse. Control items with known answers can be used
to evaluate workers' performance, and hence improve the combined results on the
target items with unknown answers. This raises the problem of how many control
items to use when the total number of items each worker can answer is limited:
more control items evaluate the workers better, but leave fewer resources for the
target items that are of direct interest, and vice versa. We give theoretical results
for this problem under different scenarios, and provide a simple rule of thumb for
crowdsourcing practitioners. As a byproduct, we also provide theoretical analysis
of the accuracy of different consensus methods.
1 Introduction
The recent rise of crowdsourcing has provided a fast and inexpensive way to collect human knowledge and intelligence, as illustrated by human intelligence marketplaces such as Amazon Mechanical Turk, games with a purpose like ESP and reCAPTCHA, and crowd-based forecasting for politics
and sports. One of the philosophies behind these successes is the wisdom of crowds phenomenon:
properly combining a group of untrained people can be better than the average performance of the
individuals, and even as good as the experts in many application domains (e.g., Surowiecki, 2005,
Sheng et al., 2008). Unfortunately, it is not always obvious how best to combine the crowd, because
the (often anonymous) workers have unknown and diverse levels of expertise, and potentially systematic biases across the crowd. Naïve consensus methods which simply take uniform averages or
the majority answer of the workers have been known to perform poorly. This raises the problem of
scoring the workers, that is, estimating their expertise, bias and any other associated parameters, in
order to combine their answers more effectively.
One direct way to address this problem is to score workers using their past performance on similar
problems. However, this is not always practical, since historical records are hard to maintain for
anonymous workers, and their past tasks may be very different from the current one. An alternative
is the idea behind reCAPTCHA: "seed" some control items with known answers into the assigned
tasks (without telling workers which are control items), score the workers using these control items,
and weight their answers accordingly on the unknown target items. Similar ideas have been widely
used in existing crowdsourcing systems. CrowdFlower, for example, provides interfaces and tools
to allow requesters to explicitly specify and analyze a set of control items (sometimes called "gold
data"). The reCAPTCHA example is a particularly simple case, where workers answer exactly one
control and one target item. However, in general crowdsourcing, the workers may answer hundreds
of questions, raising the question of how many control items should be used. There is a clear tradeoff: having workers answer more control items gives better estimates of their performance and any
potential systematic bias, but leaves fewer resources for the target items that are of direct interest.
However, using few control items gives poor estimates of workers' performance and their biases,
also leading to bad results. A deep understanding of the value of control items may provide useful
guidance for crowdsourcing practitioners.
On the other hand, a line of research has studied more advanced consensus methods that are able to simultaneously estimate the workers' performance and items' answers without any ground truth on the items, by building joint statistical models of the workers and labels (e.g., Dawid and Skene, 1979, Whitehill et al., 2009, Karger et al., 2011, Liu et al., 2012, Zhou et al., 2012). The basic idea is to score the workers by their agreement with other workers, assuming that the majority of workers are correct. Perhaps surprisingly, the worker reliabilities estimated by these "unsupervised" consensus methods can be almost as good as those estimated when the true labels of all the items are known, and are much better than self-evaluated worker reliability (Romney et al., 1987, Lee et al., 2012). Control items can also be incorporated into these methods: but how much can we expect them to improve results, or does an "unsupervised" method suffice?
The goal of this paper is to study the value of control items, and provide practical guidance on how
many control items are enough under different scenarios. We give both theoretical and empirical
results for this problem, and provide some rules of thumb that are easy to use in practice.
We develop our theory on a class of Gaussian models for estimating continuous quantities, such as
forecasting probabilities and point spreads in sports games, and show how it extends to more general
models. As a byproduct, we also provide analytic results of the accuracy of different consensus
algorithms. Important practical issues such as the impact of model misspecification, systematic
biases and heteroscedasticity are also highlighted on real datasets.
2 Background
Assume there is a set $\mathcal{T}$ of target items, associated with a set of labels $\mu_\mathcal{T} := \{\mu_i : i \in \mathcal{T}\}$ whose true values $\mu_\mathcal{T}^*$ we want to estimate. In addition, we have a set $\mathcal{C}$ of control (or training) items whose true labels $\mu_\mathcal{C}^* := \{\mu_i^* : i \in \mathcal{C}\}$ are known. We denote the set of workers by $\mathcal{W}$; each worker $j$ is associated with a parameter $\eta_j^*$ that characterizes their expertise, bias, and any other relevant features. We denote the complete vector of worker parameters by $\eta^* := \{\eta_j^* : j \in \mathcal{W}\}$. Both $\mu$ and $\eta$ are assumed to be continuous variables in this paper. Denote by $n_t$ the number of target items and $m$ the number of workers.

Let $\partial_i$ be the set of workers assigned to item $i$, and $\partial_j^t$ (and $\partial_j^c$) the set of target (and control) items labeled by worker $j$. The assignment relationship between the workers and the target items can be represented by a bipartite graph $G_t = (\mathcal{T}, \mathcal{W}, E_t)$, where there is an edge $(ij) \in E_t$ iff item $i$ is assigned to worker $j$. Let $r_i$ be the number of workers assigned to the $i$-th target item, and let $\ell_j^t$ (and $\ell_j^c$) be the number of target (and control) items assigned to the $j$-th worker. Note that $\{r_i\}$ and $\{\ell_j^t\}$ are the two degree sequences of the bipartite graph $G_t$.

Denote by $x_{ij}$ the label we collect from worker $j$ for item $i$. In general, we can assume that $x_{ij}$ is a random variable drawn from a probabilistic distribution $p(x_{ij} \mid \mu_i^*, \eta_j^*)$. The computational question is then to construct an estimator $\hat\mu_\mathcal{T}$ of the true labels $\mu_\mathcal{T}^*$ based on the crowdsourced labels $\{x_{ij}\}$, such that the expected mean square error (MSE) on the target items, $\mathbb{E}[\|\hat\mu_\mathcal{T} - \mu_\mathcal{T}^*\|^2]$, is minimized.

Gaussian Model. We focus on a class of simple Gaussian models on the labels $x_{ij}$:

$x_{ij} = \mu_i^* + b_j^* + \xi_{ij}, \qquad \xi_{ij} \sim \mathcal{N}(0, \sigma^{*2}),$ (1)

where $\mu_i^*$ is the quantity of interest of item $i$, $b_j^*$ is the bias of worker $j$, and $\sigma^{*2}$ is the variance. For some quantities, like probabilities and prices, proper transforms should be applied before using such Gaussian models. Model (1) is equivalent to the two-way fixed effects model in statistics (e.g., Chamberlain, 1982). It captures heterogeneous biases across workers that are commonly observed in practice, for example in workers' judgments on probabilities and prices, and which can have significant effects on the estimate accuracy. This model also has nice theoretical properties and will play an important role in our theoretical analysis. Note that the biases are not identifiable solely from the crowdsourced labels $\{x_{ij}\}$, making it necessary to add some control items or other information when decoding the answers.
An extension of model (1) is to introduce heteroscedasticity, allowing different workers to have different levels of Gaussian noise: that is, $x_{ij} = \mu_i^* + b_j^* + \sigma_j^* \xi_{ij}$, where $\xi_{ij} \sim \mathcal{N}(0, 1)$ and $\sigma_j^{*2}$ is a variance parameter of worker $j$. We will refer to this extension as the bias-variance model, and Model (1) as the bias-only model. We will also consider another special case, $x_{ij} = \mu_i^* + \sigma_j^* \xi_{ij}$, which assumes the workers all have zero bias but different variances (the variance-only model). Theoretical analysis of the bias-variance and variance-only models is significantly more difficult due to the presence of the variance parameters, but is still possible under asymptotic assumptions.
2.1 Consensus Algorithms With Partial Ground Truth
Control items with known true labels can be used to estimate workers' parameters, and hence improve the estimation accuracy. In this section, we introduce two types of consensus methods that incorporate the control items in different ways: one simply scores the workers based on their performance on the control items, while the other uses a joint maximum likelihood estimator that scores the workers based on their answers on both control items and target items. We present both methods in terms of a general model $p(x_{ij} \mid \mu_i, \eta_j)$ here; the updates for the Gaussian models can be easily derived, but are omitted for space.
Two-stage Estimator: the workers' parameters are first estimated using the control items, and are then used to predict the target items. That is,

Scoring workers: $\hat\eta_j = \arg\max_{\eta_j} \sum_{i \in \partial_j^c} \log p(x_{ij} \mid \mu_i^*, \eta_j)$, for all $j \in \mathcal{W}$, (2)

Predicting target items: $\hat\mu_i = \arg\max_{\mu_i} \sum_{j \in \partial_i} \log p(x_{ij} \mid \mu_i, \hat\eta_j)$, for all $i \in \mathcal{T}$, (3)
where we use the maximum likelihood estimator as a general procedure for estimating parameters.
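For the bias-only Gaussian model, both steps have the closed form used in the proof of Theorem 3.1 below. A minimal sketch (numpy; the array layout with `x[i, j]` holding worker j's label for item i and NaN for unassigned pairs is our own convention, not from the paper):

```python
import numpy as np

def two_stage_bias_only(x, control, mu_control):
    """Two-stage estimator for x_ij = mu_i + b_j + noise.
    x: (n_items, n_workers) array with NaN where no label was collected.
    control: indices of control items; mu_control: their known labels.
    Returns the target indices and the estimated target labels."""
    n_items, _ = x.shape
    # Step 1 (Eq. 2): score each worker on the control items.
    resid = x[control] - np.asarray(mu_control)[:, None]
    b_hat = np.nanmean(resid, axis=0)            # one bias estimate per worker
    # Step 2 (Eq. 3): predict target items from the de-biased labels.
    target = np.setdiff1d(np.arange(n_items), control)
    mu_hat = np.nanmean(x[target] - b_hat[None, :], axis=1)
    return target, mu_hat
```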
Joint Estimator: we directly maximize the joint likelihood of the crowdsourced labels $\{x_{ij}\}$ of both target and control items, with $\mu_\mathcal{C}$ of the control items clamped to the true values $\mu_\mathcal{C}^*$. That is,

$[\hat\mu_\mathcal{T}, \hat\eta] = \arg\max_{[\mu_\mathcal{T}, \eta]} \Big\{ \sum_{i \in \mathcal{C}} \sum_{j \in \partial_i} \log p(x_{ij} \mid \mu_i^*, \eta_j) + \sum_{i \in \mathcal{T}} \sum_{j \in \partial_i} \log p(x_{ij} \mid \mu_i, \eta_j) \Big\},$ (4)

which can be solved by block coordinate descent, alternately optimizing $\mu_\mathcal{T}$ and $\eta$. Compared to the two-stage estimator, the joint estimator estimates the workers' parameters based on both the control items and the target items, even though their true labels are unknown. This is because the labels $x_{ij}$ provide information on $\mu_i^*$ through the model assumption $p(x_{ij} \mid \mu_i^*, \eta_j^*)$. Therefore, the joint estimator may be much more efficient than the two-stage estimator when the model assumptions are satisfied, but may perform poorly if the model is misspecified.
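For the bias-only model, each block update in the coordinate descent is an exact closed-form average. A sketch under the same (our own) array conventions as above, with the control labels clamped:

```python
import numpy as np

def joint_bias_only(x, control, mu_control, iters=50):
    """Block coordinate descent for Eq. (4) under the bias-only model.
    Alternates b-updates and mu-updates (target items only; control
    items stay clamped); each step exactly maximizes the quadratic
    log-likelihood in its block."""
    n_items, n_workers = x.shape
    mu = np.zeros(n_items)
    mu[control] = mu_control                     # clamped to the true values
    b = np.zeros(n_workers)
    target = np.setdiff1d(np.arange(n_items), control)
    for _ in range(iters):
        b = np.nanmean(x - mu[:, None], axis=0)           # over all items a worker did
        mu[target] = np.nanmean(x[target] - b[None, :], axis=1)
    return mu[target], b
```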
3 How many control items are enough?
We now consider the central question: assuming each worker answers $\ell$ items (we refer to $\ell$ as the budget), including $k$ control items and $\ell - k$ target items, what is the optimal choice of $k$ to minimize the expected MSE? To be concrete, here we assume all the workers (items) are assigned to the same number of randomly selected items (workers), and hence the assignment graph $G_t$ is a random semi-regular bipartite graph, which can be generated by the configuration model (e.g., Karger et al., 2011). We assume $r$ is the number of labels received by each target item, so that $r = m(\ell - k)/n_t$.

Obviously, the optimal number of control items should depend on their usage in the subsequent consensus method. We will show that the two-stage and joint estimators exploit control items in fundamentally different ways, and yield very different optimal values of $k$. Roughly speaking, the optimal $k$ should scale as $O(\sqrt{\ell})$ when using a two-stage estimator, compared to $O(\ell/\sqrt{n_t})$ when using joint estimators. We now discuss these two methods separately.
3.1 Optimal k for Two-stage Estimator
We first address the problem on the bias-only model, which has a particularly simple analytic solution. We then extend our results to more general models.
Theorem 3.1. (i) For the bias-only model with $x_{ij} = \mu_i^* + b_j^* + \xi_{ij}$, where $\xi_{ij}$ are i.i.d. noise drawn from $\mathcal{N}(0, \sigma^{*2})$, the expected mean square error (MSE) of the two-stage estimator in (2)-(3) is

$\mathbb{E}\Big[\sum_{i \in \mathcal{T}} \|\hat\mu_i - \mu_i^*\|^2 / n_t\Big] = \frac{\sigma^{*2}}{r}\Big(1 + \frac{1}{k}\Big).$ (5)

(ii) Note that $r = m(\ell - k)/n_t$, and the optimal $k$ that minimizes the expected MSE in (5) is $k^* = \lceil \sqrt{\ell + 5/4} - 3/2 \rceil \approx \sqrt{\ell}$, where $\lceil z \rceil$ denotes the smallest integer no less than $z$.
Proof. The solution of the two-stage estimator has a simple linear form under the bias-only model:

$\hat\mu_i = \frac{1}{r} \sum_{j \in \partial_i} (x_{ij} - \hat b_j), \qquad \hat b_j = \frac{1}{k} \sum_{i \in \partial_j^c} (x_{ij} - \mu_i^*), \qquad \text{for all } i \in \mathcal{T},\ j \in \mathcal{W}.$

Since the $x_{ij}$ are Gaussian, the $\hat\mu_i$ are also Gaussian. Calculating the mean and variance of $\hat\mu_i$, we have that $\mathbb{E}\hat\mu_i = \mu_i^*$, and $\mathrm{Var}(\hat\mu_i)$ as in (5). The remaining steps are straightforward.
Remarks. (i) Eq. (5) shows that the MSE is inversely proportional to the number $r$ of workers per target item, while the number $k$ of control items per worker only refines the multiplicative constant. Therefore, the resources assigned to the control items are much less "useful" than those assigned directly to the target items, suggesting the optimal $k$ should be much less than the budget $\ell$.

(ii) On the other hand, if $k$ is too small, the multiplicative constant becomes large, which also degrades the MSE. In the extreme, if $k = 0$ then the bias is unidentifiable, and the MSE grows to infinity. In addition, if the budget $\ell$ grows to infinity, the optimal $k$ should also grow to infinity; otherwise the multiplicative constant is strictly larger than one, which is suboptimal. One can readily see that $k = O(\sqrt{\ell})$ achieves the desired balance of trade-offs.
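The trade-off in these remarks is easy to see numerically; a small sketch evaluating Eq. (5) over $k$ (the problem sizes are illustrative placeholders of our own):

```python
import math

def mse_two_stage(ell, k, n_t, m, sigma2=1.0):
    """Expected MSE of Eq. (5): sigma^2 / r * (1 + 1/k), with r = m*(ell-k)/n_t."""
    r = m * (ell - k) / n_t
    return sigma2 / r * (1.0 + 1.0 / k)

ell, n_t, m = 50, 100, 100
best = min(range(1, ell), key=lambda k: mse_two_stage(ell, k, n_t, m))
rule = math.ceil(math.sqrt(ell + 1.25) - 1.5)   # Theorem 3.1 (ii)
print(best, rule)   # both give 6 here, close to sqrt(50) ~ 7.1
```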
General Models. The bias-only model is simple enough to give closed form solutions. It turns
out that we can obtain similar results for more general models such as the bias-variance and the
variance-only model, but only in the asymptotic regime.
To set up, assume $\{\mu_i\}$ and $\{\eta_j\}$ are drawn from prior distributions $Q_\mu$ and $Q_\eta$, respectively. Assume $\log p(x_{ij} \mid \mu_i, \eta_j)$ is twice differentiable w.r.t. $\mu_i$ and $\eta_j$ for all $x_{ij}$. Define the Fisher information matrix $H_{\mu\mu} = -\mathbb{E}_x[\nabla^2_{\mu\mu} \log p(x \mid \mu, \eta)]$, and similarly for $H_{\eta\eta}$ and $H_{\mu\eta}$. Note that $H_{\mu\mu}$ is a random variable dependent on $\mu$ and $\eta$, and denote by $\mathbb{E}_{\mu\eta}[H_{\mu\mu}]$ its expectation w.r.t. $Q_\mu$ and $Q_\eta$.

Theorem 3.2. (i) Assume the crowdsourced labels $\{x_{ij}\}$ are drawn from $p(x_{ij} \mid \mu_i^*, \eta_j^*)$, where $\{\mu_i^*\}$ and $\{\eta_j^*\}$ are drawn from priors $Q_\mu$ and $Q_\eta$, respectively. The asymptotic expected MSE of the two-stage estimator defined in (2)-(3), as both $r$ and $k$ grow to infinity, is

$\mathbb{E}\Big[\sum_{i \in \mathcal{T}} \|\hat\mu_i - \mu_i^*\|^2 / n_t\Big] = \frac{\bar\sigma^2}{r}\Big(1 + \frac{a}{k}\Big),$ (6)

where $\bar\sigma^2 = \mathbb{E}_{\mu\eta}[\mathrm{tr}(H_{\mu\mu}^{-1})]$, $J_{\mu\mu} = \mathbb{E}_{x,\eta}[\nabla^2_{\mu\eta} \log p(x \mid \mu, \eta)\, H_{\eta\eta}^{-1}\, \nabla^2_{\mu\eta} \log p(x \mid \mu, \eta)^T]$, and $a = \mathbb{E}_{\mu\eta}[\mathrm{tr}(H_{\mu\mu}^{-1} J_{\mu\mu} H_{\mu\mu}^{-1})] / \mathbb{E}_{\mu\eta}[\mathrm{tr}(H_{\mu\mu}^{-1})]$.

(ii) Note that $r = m(\ell - k)/n_t$, and the optimal $k$ that minimizes the asymptotic MSE in (6) is $k^* = \lceil \sqrt{a\ell + a^2 + 1/4} - a - 1/2 \rceil \approx \sqrt{a\ell}$, where $\lceil k \rceil$ denotes the smallest integer no less than $k$.

Proof. Similar to Theorem 3.1, except that asymptotic normality of M-estimators (e.g., Van der Vaart, 2000) should be used.
Remarks. (i) The result in Theorem 3.2 is parallel to that in Theorem 3.1 for bias-only models, except that the contribution from uncertainty on the workers' parameters is scaled by a model-dependent factor $a$, and correspondingly, the optimal $k$ is scaled by $\sqrt{a}$. Calculation yields $a = 2$ for the variance-only model, and $a = 3$ for the bias-variance model, for any choice of priors $Q_\mu$ and $Q_\eta$.

(ii) Letting $k$ take continuous values, the optimal $k$ to minimize (6) is $k^* = \sqrt{a\ell + a^2} - a$, which achieves a minimum MSE of $\frac{n_t}{m} \cdot \bar\sigma^2/(\ell - 2k^*)$. For comparison, the MSE would be $\frac{n_t}{m} \cdot \bar\sigma^2/(\ell - k^*)$ if the worker parameters were known exactly. So, the uncertainty in the workers' parameters creates an effective extra loss of $k^*$ labels for each target item. Note that this rule is universal, in that it remains true for any $a$ (and hence any model).
3.2 Optimal k for Joint Estimator
The two-stage estimator is easy to analyze in that its accuracy is independent of the structure of the
bipartite assignment graph beyond the degree r and k. This is not true for the joint estimator, whose
accuracy depends on the topological structure of the assignment graph in a non-trivial way. In this
section we study the properties of the joint estimator, again starting with the simple bias-only model,
then discussing its extension to more general cases.
We first introduce some matrix notation. Let $A_t$ be the adjacency matrix of $G_t$. Let $R_t := \mathrm{diag}(\{r_i : i \in \mathcal{T}\})$ be the diagonal matrix formed by the degree sequence of the target items, and similarly define $L_t = \mathrm{diag}(\{\ell_j^t : j \in \mathcal{W}\})$ and $L_c = \mathrm{diag}(\{\ell_j^c : j \in \mathcal{W}\})$.
Theorem 3.3. (i) For the bias-only model with $x_{ij} = \mu_i^* + b_j^* + \xi_{ij}$, where $\xi_{ij}$ are i.i.d. noise drawn from $\mathcal{N}(0, \sigma^{*2})$, the expected MSE of the joint estimator defined in (4) is

$\mathbb{E}\Big[\sum_{i \in \mathcal{T}} \|\hat\mu_i - \mu_i^*\|^2 / n_t\Big] = \sigma^{*2}\, \mathrm{tr}\big((R_t - A_t (L_t + L_c)^{-1} A_t^T)^{-1}\big)/n_t.$ (7)

If $A_t$ is regular, with $R_t = rI$ and $L_t = (\ell - k)I$, this simplifies:

$\mathbb{E}\Big[\sum_{i \in \mathcal{T}} \|\hat\mu_i - \mu_i^*\|^2 / n_t\Big] = \frac{\sigma^{*2}}{r}\, \mathrm{tr}\Big(\big(I - \tfrac{\ell-k}{\ell} W\big)^{-1}\Big)/n_t, \quad \text{where } W = R_t^{-1} A_t L_t^{-1} A_t^T.$ (8)

Proof. Assume $B := I - R_t^{-1} A_t (L_t + L_c)^{-1} A_t^T$ is invertible. The solution of the joint estimator on the bias-only model is $\hat\mu_\mathcal{T} = \mu_\mathcal{T}^* + B^{-1} z_\mathcal{T}$, where $z_i = \frac{1}{r_i}\sum_{j \in \partial_i}(\xi_{ij} - \bar\xi_j)$, $\bar\xi_j = \frac{1}{\ell_j^c + \ell_j^t}\sum_{i \in \partial_j^c \cup \partial_j^t} \xi_{ij}$, and $\xi_{ij} = x_{ij} - \mu_i^* - b_j^*$ for all $i \in \mathcal{T}$. We obtain (7) by calculating $\mathrm{Var}(\hat\mu_\mathcal{T})$.
Remarks. (ii) Equation (8) establishes an explicit connection between the MSE and the spectral structure of the bipartite graph $G_t$. Consider the eigenvalues $1 = \lambda_1 \geq \lambda_2 \geq \cdots \geq 0$ of $W := R_t^{-1} A_t L_t^{-1} A_t^T$, where the second largest eigenvalue $\lambda_2$ famously characterizes the connectivity of the graph $G_t$. Roughly speaking, $G_t$ has better connectivity if $\lambda_2$ is small, and vice versa. Observe that

$\mathrm{tr}\Big(\big(I - \tfrac{\ell-k}{\ell} W\big)^{-1}\Big) = \sum_{i=1}^{n_t} \Big(1 - \tfrac{\ell-k}{\ell}\lambda_i\Big)^{-1} \leq \frac{\ell}{k} + \frac{n_t - 1}{1 - \tfrac{\ell-k}{\ell}\lambda_2}.$ (9)

Therefore, the joint estimator performs better when $\lambda_2$ is small, i.e., when the graph is strongly connected. Intuitively, better connectivity "couples" the items and workers more tightly together, making it easier not to make mistakes during inference.
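A quick numerical check of Eqs. (8)-(9) on a random regular bipartite graph; this is only a sketch, with a permutation-sum construction and constants of our own choosing (multi-edges from coinciding permutations are ignored, which still preserves the row and column degrees):

```python
import numpy as np

rng = np.random.default_rng(0)
n, ell, k = 200, 20, 4            # n_t = m = n, worker/item degree d = ell - k
d = ell - k
# Random d-regular bipartite graph as a sum of d random permutation matrices.
A = sum(np.eye(n)[rng.permutation(n)] for _ in range(d))
W = A @ A.T / d**2                # W = R^{-1} A L^{-1} A^T with R = L = d*I
lam = np.sort(np.linalg.eigvalsh(W))[::-1]     # lam[0] == 1
c = (ell - k) / ell
lhs = np.sum(1.0 / (1.0 - c * lam))            # tr((I - c W)^{-1}), Eq. (8)
bound = ell / k + (n - 1) / (1.0 - c * lam[1]) # right side of Eq. (9)
print(lhs <= bound + 1e-9, lhs, bound)
```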
Besides hoping for small error, one may also want the assignment graph to be sparse, i.e., use fewer
labels. Graphs that are both sparse and strongly connected are known as expander graphs, and have
been found universally important in areas like robust computer networks, error correcting codes, and
communication networks; see Hoory et al. (2006) for a review. It is well known that large sparse
random regular graphs are good expanders (e.g., Friedman et al., 1989), and hence a near-optimal
allocation strategy for crowdsourcing (Karger et al., 2011). On such graphs, we can also estimate
the optimal k in a simple form.
Theorem 3.4. Assume $A_t$ is a random regular bipartite graph, and $n_t = m$. We have that

$\mathbb{E}\Big[\sum_{i \in \mathcal{T}} \|\hat\mu_i - \mu_i^*\|^2 / n_t\Big] = \frac{\sigma^{*2}}{\ell - k}\Big[\frac{n_t - 1}{n_t}\Big(1 + O\big(\tfrac{1}{\ell}\big)\Big) + \frac{\ell}{n_t k}\Big],$ (10)

with probability one as $n_t \to \infty$. If in addition $\ell \to \infty$, the optimal $k$ that minimizes (10) is $k^* = \lceil \sqrt{\ell^2/n_t + \ell^2/n_t^2 + 1/4} - \ell/n_t - 1/2 \rceil \approx \ell/\sqrt{n_t}$.

Proof. Use (9) and the bound in Puder (2012) for $\lambda_2$ of large random regular bipartite graphs.
Remarks. (i) Perhaps surprisingly, the optimal $k$ of the joint estimator scales linearly w.r.t. the budget $\ell$, in contrast to the square-root rule of two-stage estimators. However, since usually $\ell \ll n_t$, we have $\ell/\sqrt{n_t} \ll \sqrt{\ell}$, that is, the joint estimator requires fewer control items than the two-stage estimator.
(ii) In addition, the optimal $k$ for the joint estimator also decreases as the total number $n_t$ of target items increases. Because $n_t$ is usually quite large in practice, the number of control items is usually very small. In particular, as $n_t \to \infty$, we have $k^* = 1$; that is, there is no need for control items beyond fixing the unidentifiability issue of the biases.
General Models. The joint estimator on general models is more involved to analyze, but it is still possible to give a rough estimate by analyzing the Fisher information matrix of the likelihood. For notation, let $\mathcal{H}_{\mu\mu} = R_t \otimes \mathbb{E}_{\mu\eta}(H_{\mu\mu})$ and $\mathcal{H}_{\eta\eta} = (L_t + L_c) \otimes \mathbb{E}_{\mu\eta}(H_{\eta\eta})$, where $\otimes$ is the Kronecker product, and $\mathcal{H}_{\mu\eta} = [H_{\mu_i\eta_j}]_{ij}$ is a block matrix, where block $H_{\mu_i\eta_j}$ for $(ij) \in E_t$ is a random copy of $-\nabla^2_{\mu\eta} \log p(x \mid \mu, \eta)$ with random $x$, $\mu$ and $\eta$, and $H_{\mu_i\eta_j} = 0$ for $(ij) \notin E_t$. Assuming the joint maximum likelihood estimator in (4) is asymptotically consistent (in terms of large $\ell$ and $r$), we can estimate its asymptotic MSE by the inverse of the Fisher information matrix,

$\mathbb{E}\Big[\sum_{i \in \mathcal{T}} \|\hat\mu_i - \mu_i^*\|^2 / n_t\Big] \approx \mathbb{E}\big[\mathrm{tr}\big((\mathcal{H}_{\mu\mu} - \mathcal{H}_{\mu\eta} \mathcal{H}_{\eta\eta}^{-1} \mathcal{H}_{\mu\eta}^T)^{-1}\big)\big]/n_t,$

where the expectation on the right side is w.r.t. the randomness of $\mathcal{H}_{\mu\eta}$. This parallels (7) in Theorem 3.3, except that the adjacency matrices are replaced by corresponding Hessian matrices. Unfortunately, it is more challenging to give a simple estimate of the optimal $k$ as in Theorem 3.4, even when $A_t$ is a random bipartite graph, because the spectral properties of the random matrix are complicated by the blockwise structure, and may depend on the prior distribution $Q(\eta)$. However, experimentally the optimal $k$ follows the trend $\ell\sqrt{a/n_t}$, where the constant $a$ depends on both the model assumption and the choice of $Q(\eta)$, and can be numerically estimated by simulation.
4 Experiments
We show that our theoretical predictions match closely to the results on simulated data and two real
datasets for estimating prices and point spreads. The experiments also highlight important practical
issues such as the impact of model misspecification, biases, and heteroskedasticity.
Datasets and Setup. The simulated data are generated by the Gaussian models defined in Section 2, where $\mu_i$ and $b_j$ are i.i.d. drawn from $\mathcal{N}(1, 1)$, and $\sigma_j$ from a $\chi^2$-distribution with degree 4 for the heteroskedastic versions. The price dataset consists of 80 household items collected from stores like Amazon and Costco, whose prices are estimated by 155 undergraduate students at UC Irvine. A log transform is performed on the prices before using the Gaussian models. The National Football League (NFL) forecasting data was collected by Massey et al. (2011), where 386 participants were asked to predict the point difference of 245 NFL games. We use the point spreads determined by professional bookmakers as the truth values in our experiments.
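A sketch of this generative setup for the simulated data (numpy; the function name, sizes, and the full-matrix layout are our own, with assignment masking left to the caller):

```python
import numpy as np

def simulate_labels(n_items, n_workers, heteroskedastic=False, seed=0):
    """Simulated data for the Gaussian models of Section 2:
    mu_i, b_j ~ N(1, 1); for the heteroskedastic (bias-variance) version,
    sigma_j is drawn from a chi-square distribution with 4 degrees of freedom."""
    rng = np.random.default_rng(seed)
    mu = rng.normal(1.0, 1.0, n_items)
    b = rng.normal(1.0, 1.0, n_workers)
    sigma = rng.chisquare(4, n_workers) if heteroskedastic else np.ones(n_workers)
    noise = rng.standard_normal((n_items, n_workers)) * sigma[None, :]
    x = mu[:, None] + b[None, :] + noise     # x_ij = mu_i + b_j + sigma_j * xi_ij
    return x, mu, b
```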
For all the experiments, we first construct the set of target items and control items by randomly partitioning items, and then randomly assign each worker $k$ control items and $\ell - k$ target items, for varying values of $\ell$ and $k$. The MSE is estimated by averaging over 500 random trials. The optimal $k$ is estimated by minimizing the averaged MSE over 300 randomly subsampled trials, and then taking the average over 20 random subsamples.
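Putting the pieces together, a sketch of such an evaluation loop; it reuses the hypothetical `simulate_labels` and `two_stage_bias_only` sketches above, masks unassigned labels with NaN, and uses a simplified random assignment rather than the exact semi-regular construction:

```python
import numpy as np

def mse_for_k(k, ell, n_t, m, trials=100, seed=0):
    """Average MSE of the two-stage estimator for a given number k of
    control items, each worker answering ell items in total."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(trials):
        x_full, mu, _ = simulate_labels(n_t + k, m, seed=int(rng.integers(1 << 30)))
        control = np.arange(k)                  # first k items serve as controls
        mask = np.full_like(x_full, np.nan)
        for j in range(m):                      # each worker: k controls + ell-k targets
            tgt = rng.choice(np.arange(k, n_t + k), ell - k, replace=False)
            rows = np.concatenate([control, tgt])
            mask[rows, j] = x_full[rows, j]
        target, mu_hat = two_stage_bias_only(mask, control, mu[control])
        errs.append(np.mean((mu_hat - mu[target]) ** 2))
    return float(np.mean(errs))
```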
Optimal Number of Control Items. See Figure 1 for the results of the bias-only model when the data are simulated from the correct model. Figure 1(a) shows the empirical MSE of the two-stage estimator when varying the number $k$ of control items. A clear trade-off appears: MSE is large both when $k$ is too small to estimate workers' parameters accurately, and when $k$ is too large to leave a sufficient number of labels for the target items. The MSE of the joint estimator in Figure 1(b) follows a similar trend, but the gain by using control items is less significant (the left parts of the curves are flatter). This is because the joint estimator leverages the labels on the target items (whose true values are unknown), and relies less on the control items. In particular, as the number $n_t$ of target items increases, the optimal value of $k$ for the joint estimator decreases at a rate of $1/\sqrt{n_t}$ (see Figure 1(d)), but that of the two-stage estimator stays the same. Overall, the empirical optimal $k$ of the two-stage and joint estimators aligns closely with our theoretical prediction (Figure 1(c)-(d)).
We show in Figure 2(a) the result of the bias-variance model when data are simulated from the correct model. The optimal $k$ of the two-stage estimator aligns closely with $\sqrt{a\ell}$ with $a = 3$, matching the asymptotic result in Theorem 3.2, while that of the joint estimator scales like the line $\ell\sqrt{a/n_t}$ with $a \approx 3$, matching our hypothesis in Section 3.2.
[Figure 1 appears here: (a) Two-stage Estimator and (b) Joint Estimator, MSE vs. $k$ for budgets $\ell = 7, 10, 15, 20, 50$; (c) Optimal $k$ vs. $\ell$; (d) Optimal $k$ vs. $n_t$.]
Figure 1: Results of the bias-only model on data simulated from the same model. (a)-(b) The MSE of the two-stage and joint estimators with varying $\ell$ and $k$ and fixed $n_t = 100$. The stars and circles denote the empirically and theoretically optimal $k$, respectively. (c) The optimal $k$ with varying $\ell$, but fixed $n_t = 100$. (d) The optimal $k$ with varying $n_t$, but fixed $\ell = 50$. We set $m = n_t$ here.
Model misspecification. Real datasets are not expected to match the model assumptions perfectly.
It is important, but difficult, to understand how the theory should be modified to compensate for the
violation of assumptions. We provide some insights on this by constructing model misspecification
artificially. Figure 2(b)-(c) shows the results when the data are simulated from a bias-variance
model with non-zero biases, but we use the variance-only model (with zero bias) in the consensus
algorithm. We see in Figure 2(b) that the optimal k of the two-stage estimator still aligns closely to
our theoretical prediction, but that of the joint estimator is much larger than one would expect (almost
half of the budget $\ell$). In addition, the MSE of the joint estimator in this case is significantly worse
than that of the two-stage estimator (see Figure 2(c)), which is not expected if the model assumption
holds. Therefore, the joint estimator seems to be more sensitive to model misspecification than the
two-stage estimator, suggesting that caution should be taken when it is applied in practice.
Real Datasets. Figure 3 shows the results of the bias-only model on the two real datasets; our
prediction of the optimal k matches the empirical results surprisingly well on the NFL dataset (Figure 3(d)-(f)), while our theoretically optimal values of k on the price dataset tend to be smaller than
the actual values (Figure 3(a)-(c)), perhaps caused by some unknown model misspecification. However, our bias on the estimated k does not cause a significant increase in MSE, because the scale in
Figure 3(a)-(b) is relatively small compared to that in Figure 4(a).
Interestingly, the two real datasets have opposite properties in terms of the importance of bias and
heteroskedasticity (see Figure 4): In the price dataset, all the workers tend to underestimate the prices
of the products, i.e., $b_j$ are negative for all workers, and the bias-only model performs much better
than the zero-bias variance-only model. In contrast, the participants in the NFL dataset exhibit no
systematic bias but seem to have different individual variances, and the variance-only model works
better than the bias-only model. In both cases, the full bias-variance model works best if the budget $\ell$
is large, but is not necessarily best if the budget is small and over-fitting is an issue.
5 Conclusion
The problem of how many control questions to use is unlikely to yield a definitive answer, since real
data are always likely to be more complicated than any model. However, our results highlight several
issues and provide insights and rules of thumb that can help crowdsourcing practitioners make their own decisions. In particular, we show that the optimal number of control items should be $O(\sqrt{\ell})$ for the two-stage estimator and $O(\ell/\sqrt{n_t})$ for the joint estimator. Because the number $n_t$ of target items
is usually large in practice, it is reasonable to recommend using a minimal number of control items,
just enough to fix potential unidentifiability issues, assuming the model assumptions hold well.
However, the joint estimator may require significantly more control items if model misspecification
exists; in this case one might better switch to the more robust two-stage estimator, or search for
better models. The control items can also be used to do model selection, an issue which deserves
further discussion in the future.
Acknowledgements. Work supported in part by NSF IIS-1065618 and IIS-1254071 and a Microsoft
Research Fellowship. Thanks to Tobias Johnson for discussion on random matrix theory.
[Figure 2 appears here: (a) Bias-variance Model, showing the empirically and theoretically optimal $k$ vs. the budget for the two-stage and joint estimators; (b)-(c) Model Misspecification, showing the optimal $k$ and MSE curves.]
Figure 2: (a) Results of the bias-variance model on data simulated from the same model. (b)-(c) Results when the data are simulated from the bias-variance model with non-zero biases, but we use the variance-only model (with zero bias) in the consensus algorithm. With this model misspecification, the joint estimator requires significantly more control items than one would expect (almost half of the budget $\ell$), and performs worse than the two-stage estimator.
[Figure 3 appears here. Price dataset: (a) Two-stage Estimator, (b) Joint Estimator, (c) Optimal $k$ vs. $\ell$. NFL dataset: (d) Two-stage Estimator, (e) Joint Estimator, (f) Optimal $k$ vs. $\ell$.]
Figure 3: Results on the real datasets when using the bias-only model. (a)-(b) and (d)-(e) The MSE when using the two-stage and joint estimators, respectively. (c) and (f) The empirically and theoretically optimal $k$ as the budget $\ell$ varies. Here we fix $n_t = 50$ for the price dataset and $n_t = 200$ for the NFL dataset.
[Figure 4 appears here: MSE vs. budget on (a) Price Dataset and (b) NFL Dataset, comparing Uniform Mean with the bias-only, bias-variance, and variance-only models under the joint and two-stage estimators.]
Figure 4: Comparison of different models and consensus methods on the two real datasets. (a)-(b) The MSE when selecting the best possible $k$ as the budget $\ell$ varies. The workers in the price dataset have systematic bias, and the bias-only model works better than the variance-only model, while the workers in the NFL dataset have no bias but different individual variances, and the variance-only model is better than bias-only. In both datasets, the full bias-variance model works best if the budget $\ell$ is large, but is not necessarily best if the budget is small when over-fitting is an issue.
References
James Surowiecki. The wisdom of crowds. Anchor, 2005.
Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeirotis. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proc. SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining, pages 614–622. ACM, 2008.
A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, pages 20–28, 1979.
Jacob Whitehill, Paul Ruvolo, Tingfan Wu, Jacob Bergsma, and Javier Movellan. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems (NIPS), pages 2035–2043, 2009.
D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. In Advances in Neural Information Processing Systems (NIPS), pages 1953–1961, 2011.
Qiang Liu, Jian Peng, and Alexander Ihler. Variational inference for crowdsourcing. In Advances in Neural Information Processing Systems (NIPS), pages 701–709, 2012.
Dengyong Zhou, John Platt, Sumit Basu, and Yi Mao. Learning from the wisdom of crowds by minimax entropy. In Advances in Neural Information Processing Systems (NIPS), pages 2204–2212, 2012.
A. Kimball Romney, William H. Batchelder, and Susan C. Weller. Recent applications of cultural consensus theory. American Behavioral Scientist, 31(2):163–177, 1987.
Michael D. Lee, Mark Steyvers, Mindy de Young, and Brent Miller. Inferring expertise in knowledge and prediction ranking tasks. Topics in Cognitive Science, 4(1):151–163, 2012.
Gary Chamberlain. Multivariate regression models for panel data. Journal of Econometrics, 18(1):5–46, 1982.
Aad W. Van der Vaart. Asymptotic statistics, volume 3. Cambridge University Press, 2000.
Shlomo Hoory, Nathan Linial, and Avi Wigderson. Expander graphs and their applications. Bulletin of the American Mathematical Society, 43(4):439–561, 2006.
Joel Friedman, Jeff Kahn, and Endre Szemeredi. On the second eigenvalue of random regular graphs. In Proc. ACM Symp. on Theory of Computing, pages 587–598. ACM, 1989.
Doron Puder. Expansion of random graphs: New proofs, new results. arXiv preprint arXiv:1212.5216, 2012.
Cade Massey, Joseph P. Simmons, and David A. Armor. Hope over experience: Desirability and the persistence of optimism. Psychological Science, 22(2):274–281, 2011.
| 4889 |@word trial:2 version:1 seems:1 simulation:1 jacob:2 tr:7 liu:3 configuration:1 score:5 karger:4 att:2 selecting:1 interestingly:1 past:2 existing:1 current:1 nt:42 readily:1 john:1 refines:1 subsequent:1 shlomo:1 analytic:2 hoping:1 update:1 unidentifiability:2 v:4 intelligence:2 leaf:2 fewer:4 item:95 selected:1 accordingly:1 half:2 ruvolo:1 record:1 provides:1 mathematical:1 direct:3 doron:1 consists:1 combine:2 fitting:2 behavioral:1 symp:1 introduce:3 theoretically:3 peng:1 expected:8 roughly:2 actual:1 becomes:1 provided:1 estimating:5 notation:2 suffice:1 cultural:1 panel:1 what:1 minimizes:3 caution:1 exactly:2 scaled:2 platt:1 control:56 partitioning:1 before:2 scientist:1 esp:1 mistake:1 analyzing:1 solely:1 might:1 twice:1 studied:1 collect:2 challenging:2 limited:1 averaged:1 practical:4 practice:5 block:3 movellan:1 procedure:1 area:1 empirical:14 universal:1 significantly:4 matching:2 persistence:1 regular:6 get:1 selection:1 equivalent:1 straightforward:1 starting:1 amazon:2 correcting:1 rule:5 estimator:59 insight:2 oh:1 steyvers:3 crowdsourcing:11 coordinate:1 requester:1 simmons:1 target:31 play:1 us:1 hypothesis:1 agreement:1 dawid:2 trend:2 particularly:2 econometrics:1 labeled:1 observed:1 role:1 preprint:1 solved:1 capture:1 susan:1 connected:2 trade:2 decrease:2 asked:1 tobias:1 cade:1 raise:2 depend:2 heteroscedasticity:2 creates:1 bipartite:8 linial:1 easily:1 joint:47 represented:1 univ:3 fast:1 effective:1 armor:1 marketplace:1 avi:1 crowd:7 whose:6 quite:1 widely:1 larger:2 otherwise:1 football:1 statistic:3 vaart:2 highlighted:1 transform:1 noisy:1 obviously:1 subsamples:1 sequence:2 differentiable:1 eigenvalue:3 product:2 uci:3 combining:2 relevant:1 iff:1 poorly:2 gold:1 leave:1 help:1 develop:1 dengyong:1 fixing:1 ij:16 received:1 eq:1 closely:4 correct:3 human:2 adjacency:2 require:1 assign:1 fix:2 anonymous:2 extension:3 strictly:1 hold:2 ic:1 ground:2 seed:1 predict:2 bj:4 achieves:2 smallest:2 crowdflower:1 omitted:1 purpose:1 a2:1 estimation:2 proc:2 panagiotis:1 label:21 sensitive:1 largest:1 vice:1 establishes:1 tool:1 hope:1 offs:1 rough:1 always:3 gaussian:10 desirability:1 modified:1 zhou:2 varying:5 kimball:1 derived:1 focus:1 properly:1 likelihood:6 contrast:2 sigkdd:1 romney:2 inference:2 dependent:2 unlikely:1 kahn:1 issue:8 arg:3 overall:1 special:1 integration:1 uc:1 construct:2 having:1 qiang:2 unsupervised:2 future:1 minimized:1 recommend:1 fundamentally:1 few:1 randomly:4 simultaneously:1 ve:1 tightly:1 individual:3 national:1 subsampled:1 replaced:1 maintain:1 microsoft:1 friedman:2 william:1 interest:3 highly:1 mining:2 joel:1 violation:1 extreme:1 behind:2 tj:3 hoory:2 edge:1 worker:61 byproduct:2 necessary:1 partial:1 experience:1 desired:1 circle:1 guidance:2 theoretical:9 minimal:1 psychological:1 assignment:5 deserves:1 uniform:2 hundred:1 johnson:1 sumit:1 too:3 weller:1 answer:16 varies:2 combined:1 thanks:1 stay:1 systematic:5 lee:2 probabilistic:1 decoding:1 invertible:1 michael:1 together:1 off:1 concrete:1 na:1 connectivity:3 again:1 central:1 satisfied:1 worse:2 cognitive:2 conf:1 expert:1 american:2 leading:1 brent:1 suggesting:2 potential:2 de:1 star:1 student:1 flatter:1 int:1 jc:2 explicitly:1 ranking:1 depends:2 caused:1 multiplicative:3 root:1 performed:1 closed:1 observer:1 analyze:3 characterizes:2 crowdsourced:4 parallel:2 complicated:2 participant:2 contribution:1 minimize:2 square:3 formed:1 accuracy:6 variance:32 miller:1 judgment:1 wisdom:3 yield:3 thumb:3 accurately:1 expertise:5 randomness:1 aligns:3 
evaluates:1 inexpensive:1 verse:1 underestimate:1 turk:1 involved:1 obvious:1 james:1 associated:3 ihler:3 proof:5 batchelder:1 couple:1 irvine:4 gain:1 dataset:13 knowledge:3 cj:2 javier:1 appears:1 specify:1 evaluated:1 though:1 unidentifiable:1 strongly:2 just:1 stage:40 sheng:2 hand:2 quality:1 perhaps:3 grows:2 building:1 effect:2 usage:1 true:10 hence:5 assigned:8 illustrated:1 game:3 self:1 during:1 complete:1 performs:3 interface:1 variational:1 misspecified:1 empirically:2 volume:1 extend:1 numerically:1 significant:3 refer:2 versa:2 cambridge:1 league:1 similarly:2 reliability:3 gt:7 add:1 heteroskedasticity:2 labelers:2 bergsma:1 own:1 recent:2 multivariate:1 optimizing:1 scenario:2 store:1 success:1 discussing:1 der:2 yi:1 scoring:3 victor:1 minimum:1 maximize:1 semi:1 ii:8 full:2 multiple:1 match:3 calculation:1 compensate:1 dept:3 impact:2 prediction:5 basic:1 regression:1 heterogeneous:1 expectation:2 arxiv:2 sometimes:1 background:1 want:2 addition:5 separately:1 fellowship:1 grow:2 jian:1 extra:1 nmt:2 tend:2 expander:2 seem:1 practitioner:3 integer:2 near:1 presence:1 leverage:1 enough:5 easy:2 ture:1 switch:1 zi:1 perfectly:1 suboptimal:1 opposite:1 idea:3 simplifies:1 tradeoff:1 politics:1 nfl:8 optimism:1 forecasting:3 speaking:2 hessian:1 cause:1 remark:4 deep:1 useful:2 clear:2 transforms:1 xij:27 nsf:1 estimated:8 per:2 diverse:2 group:1 drawn:7 massey:2 graph:20 asymptotically:1 inverse:1 uncertainty:2 extends:1 almost:3 reasonable:1 wu:1 decision:1 bound:1 topological:1 identifiable:1 infinity:4 kronecker:1 ri:5 aspect:1 nathan:1 relatively:1 skene:2 poor:1 endre:1 across:2 smaller:1 em:1 joseph:1 making:2 intuitively:1 taken:1 resource:3 equation:1 remains:1 discus:1 turn:1 count:1 letting:1 observe:1 spectral:2 alternative:1 struct:1 professional:1 shah:1 assumes:1 denotes:2 remaining:1 wigderson:1 household:1 calculating:2 exploit:1 society:1 question:6 quantity:4 degrades:1 strategy:1 rt:7 heteroskedastic:1 diagonal:1 exhibit:1 simulated:8 majority:2 topic:1 collected:2 consensus:12 trivial:1 assuming:4 besides:1 code:1 relationship:1 balance:1 minimizing:1 difficult:2 unfortunately:2 setup:1 potentially:1 recaptcha:3 blockwise:1 whitehill:2 negative:1 rise:1 proper:1 zt:1 unknown:8 perform:2 allowing:1 datasets:10 descent:1 incorporated:1 misspecification:9 communication:1 provost:1 david:1 mechanical:1 connection:1 raising:1 california:3 nip:4 address:2 able:1 beyond:2 usually:5 regime:1 max:3 including:1 reliable:1 predicting:1 ipeirotis:1 advanced:1 normality:1 minimax:1 improve:3 inversely:1 surowiecki:2 nice:1 understanding:1 prior:4 review:1 acknowledgement:1 discovery:1 asymptotic:8 loss:1 expect:3 highlight:2 proportional:1 allocation:1 var:2 qliu1:1 degree:4 sufficient:1 consistent:1 foster:1 famously:1 surprisingly:3 supported:1 copy:1 bias:56 allow:1 side:1 telling:1 understand:1 basu:1 aad:1 taking:1 correspondingly:1 bulletin:1 sparse:3 van:2 curve:1 commonly:1 universally:1 historical:1 anchor:1 assumed:1 alternatively:1 continuous:4 search:1 iterative:1 robust:2 chamberlain:2 improving:1 expansion:1 mse:33 untrained:1 necessarily:2 artificially:1 constructing:1 domain:1 diag:3 spread:4 linearly:1 noise:3 expanders:1 definitive:1 paul:1 lc:4 mao:1 inferring:1 explicit:1 clamped:1 young:1 theorem:10 bad:1 jt:1 exists:1 undergraduate:1 effectively:1 importance:1 budget:19 easier:1 entropy:1 lt:5 simply:2 likely:1 sport:2 gary:1 truth:3 relies:1 acm:3 goal:1 jeff:1 price:14 fisher:3 hard:1 experimentally:1 determined:1 except:3 averaging:1 total:2 
called:1 vote:1 mark:3 people:1 alexander:2 philosophy:1 incorporate:1 evaluate:1 phenomenon:1 ex:2 |
4,297 | 489 | Estimating Average-Case Learning Curves
Using Bayesian, Statistical Physics and
VC Dimension Methods
David Haussler
University of California
Santa Cruz, California
Michael Kearns*
AT&T Bell Laboratories
Murray Hill, New Jersey
Manfred Opper
Institut für Theoretische Physik
Universität Giessen, Germany
Robert Schapire
AT&T Bell Laboratories
Murray Hill, New Jersey
Abstract
In this paper we investigate an average-case model of concept learning, and
give results that place the popular statistical physics and VC dimension
theories of learning curve behavior in a common framework.
1 INTRODUCTION

In this paper we study a simple concept learning model in which the learner attempts to infer an unknown target concept $f$, chosen from a known concept class $\mathcal{F}$ of $\{0,1\}$-valued functions over an input space $X$. At each trial $i$, the learner is given a point $x_i \in X$ and asked to predict the value of $f(x_i)$. If the learner predicts $f(x_i)$ incorrectly, we say the learner makes a mistake. After making its prediction, the learner is told the correct value.
This simple theoretical paradigm applies to many areas of machine learning, including much of the research in neural networks. The quantity of fundamental interest in this setting is the learning curve, which is the function of $m$ defined as the probability the learning algorithm makes a mistake predicting $f(x_{m+1})$, having already seen the examples $(x_1, f(x_1)), \ldots, (x_m, f(x_m))$.

*Contact author. Address: AT&T Bell Laboratories, 600 Mountain Avenue, Room 2A-423, Murray Hill, New Jersey 07974. Electronic mail: [email protected].
In this paper we study learning curves in an average-case setting that admits a prior
distribution over the concepts in F. We examine learning curve behavior for the
optimal Bayes algorithm and for the related Gibbs algorithm that has been studied
in statistical physics analyses of learning curve behavior. For both algorithms we
give new upper and lower bounds on the learning curve in terms of the Shannon
information gain.
The main contribution of this research is in showing that the average-case or
Bayesian model provides a unifying framework for the popular statistical physics
and VC dimension theories of learning curves. By beginning in an average-case setting and deriving bounds in information-theoretic terms, we can gradually recover
a worst-case theory by removing the averaging in favor of combinatorial parameters
that upper bound certain expectations.
Due to space limitations, the paper is technically dense and almost all derivations
and proofs have been omitted. We strongly encourage the reader to refer to our
longer and more complete versions [4, 6] for additional motivation and technical
detail.
2 NOTATIONAL CONVENTIONS
Let X be a set called the instance space. A concept class F over X is a (possibly
infinite) collection of subsets of X. We will find it convenient to view a concept
f ∈ F as a function f : X → {0, 1}, where we interpret f(x) = 1 to mean that
x ∈ X is a positive example of f, and f(x) = 0 to mean x is a negative example.
The symbols P and V are used to denote probability distributions. The distribution
P is over F, and V is over X. When F and X are countable we assume that these
distributions are defined as probability mass functions. For uncountable F and X
they are assumed to be probability measures over some appropriate IT-algebra. All
of our results hold for both countable and uncountable F and X.
We use the notation E_{f∈P}[X(f)] for the expectation of the random variable X with respect to the distribution P, and Pr_{f∈P}[cond(f)] for the probability with respect to the distribution P of the set of all f satisfying the predicate cond(f). Everything
that needs to be measurable is assumed to be measurable.
3 INFORMATION GAIN AND LEARNING
Let F be a concept class over the instance space X. Fix a target concept f ∈ F and an infinite sequence of instances x = x_1, ..., x_m, x_{m+1}, ... with x_m ∈ X for all m. For now we assume that the fixed instance sequence x is known in advance to the learner, but that the target concept f is not. Let P be a probability distribution
over the concept class F. We think of P in the Bayesian sense as representing the
prior beliefs of the learner about which target concept it will be learning.
In our setting, the learner receives information about f incrementally via the label sequence f(x_1), ..., f(x_m), f(x_{m+1}), .... At time m, the learner receives the label
f(x_m). For any m ≥ 1 we define (with respect to x, f) the mth version space

F_m(x, f) = {ĵ ∈ F : ĵ(x_1) = f(x_1), ..., ĵ(x_m) = f(x_m)}

and the mth volume V_m(x, f) = P[F_m(x, f)]. We define F_0(x, f) = F for all x and f, so V_0(x, f) = 1. The version space at time m is simply the class of all concepts in F consistent with the first m labels of f (with respect to x), and the mth volume is the measure of this class under P. For the first part of the paper, the infinite instance sequence x and the prior P are fixed; thus we simply write F_m(f) and V_m(f). Later, when the sequence x is chosen randomly, we will reintroduce this dependence explicitly. We adopt this notational practice of omitting any dependence on a fixed x in many other places as well.
For each m ≥ 0 let us define the mth posterior distribution P_m(x, f) = P_m by restricting P to the mth version space F_m(f); that is, for all (measurable) S ⊆ F, P_m[S] = P[S ∩ F_m(f)]/P[F_m(f)] = P[S ∩ F_m(f)]/V_m(f).
Having already seen f(x_1), ..., f(x_m), how much information (assuming the prior P) does the learner expect to gain by seeing f(x_{m+1})? If we let I_{m+1}(x, f) (abbreviated I_{m+1}(f) since x is fixed for now) be a random variable whose value is the (Shannon) information gained from f(x_{m+1}), then it can be shown that the expected information is

E_{f∈P}[I_{m+1}(f)] = E_{f∈P}[−log (V_{m+1}(f)/V_m(f))] = E_{f∈P}[−log χ_{m+1}(f)]   (1)

where we define the (m+1)st volume ratio by χ_{m+1}(x, f) = χ_{m+1}(f) = V_{m+1}(f)/V_m(f).
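To make Equation (1) concrete, here is a minimal Python sketch, assuming a small finite concept class with a uniform prior; the class, prior, and sizes are illustrative and not taken from the paper:

import numpy as np

rng = np.random.default_rng(1)
# Illustrative finite class: rows are concepts, columns are instances x_1..x_10.
concepts = rng.integers(0, 2, size=(32, 10))
prior = np.full(32, 1.0 / 32)            # the prior P over the class

def volume(m, f):
    """V_m(f): prior mass of the version space after the first m labels of f."""
    in_version_space = np.all(concepts[:, :m] == concepts[f, :m], axis=1)
    return prior[in_version_space].sum()

# Expected information gained from the (m+1)st label, averaged over f ~ P:
# E[I_{m+1}] = E[-log chi_{m+1}(f)] with chi = V_{m+1}(f)/V_m(f).
m = 4
gain = sum(prior[f] * -np.log2(volume(m + 1, f) / volume(m, f))
           for f in range(len(prior)))
print(f"expected information gain at trial {m + 1}: {gain:.3f} bits")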
We now return to our learning problem, which we define to be that of predicting the label f(x_{m+1}) given only the previous labels f(x_1), ..., f(x_m). The first learning algorithm we consider is called the Bayes optimal classification algorithm, or the Bayes algorithm for short. For any m and b ∈ {0, 1}, define F_m^b(x, f) = F_m^b(f) = {ĵ ∈ F_m(x, f) : ĵ(x_{m+1}) = b}. Then the Bayes algorithm is:

If P_m[F_m^1(f)] > P_m[F_m^0(f)], predict f(x_{m+1}) = 1.
If P_m[F_m^1(f)] < P_m[F_m^0(f)], predict f(x_{m+1}) = 0.
If P_m[F_m^1(f)] = P_m[F_m^0(f)], flip a fair coin to predict f(x_{m+1}).
It is well known that if the target concept f is drawn at random according to the prior distribution P, then the Bayes algorithm is optimal in the sense that it minimizes the probability that f(x_{m+1}) is predicted incorrectly. Furthermore, if we let Bayes_{m+1}(x, f) (abbreviated Bayes_{m+1}(f) since x is fixed for now) be a random variable whose value is 1 if the Bayes algorithm predicts f(x_{m+1}) incorrectly and 0 otherwise, then it can be shown that the probability of a mistake for a random f is
(2)
Despite the optimality of the Bayes algorithm, it suffers the drawback that its
hypothesis at any time m may not be a member of the target class F. (Here we
define the hypothesis of an algorithm at time m to be the (possibly probabilistic) mapping ĵ : X → {0, 1} obtained by letting ĵ(x) be the prediction of the algorithm when x_{m+1} = x.) This drawback is absent in our second learning algorithm, which we call the Gibbs algorithm [6]:

Given f(x_1), ..., f(x_m), choose a hypothesis concept ĵ randomly from P_m. Given x_{m+1}, predict f(x_{m+1}) = ĵ(x_{m+1}).
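The two prediction rules are easy to state in code. The following sketch, under the same kind of illustrative finite-class setup as above (all names and sizes are ours), computes the mth posterior P_m and then the Bayes and Gibbs predictions for x_{m+1}:

import numpy as np

rng = np.random.default_rng(0)
# Illustrative finite setting: 64 concepts labeling 12 instances, random prior.
concepts = rng.integers(0, 2, size=(64, 12))
prior = rng.dirichlet(np.ones(64))

def posterior(seen, labels):
    """Restrict the prior to the version space consistent with the labels seen."""
    consistent = np.all(concepts[:, seen] == labels, axis=1)
    p = prior * consistent
    return p / p.sum()

def bayes_predict(post, x):
    """Bayes algorithm: majority vote of the version space, weighted by P_m."""
    p1 = post @ concepts[:, x]           # P_m[f(x) = 1]
    if np.isclose(p1, 0.5):
        return int(rng.integers(0, 2))   # fair coin on a tie
    return int(p1 > 0.5)

def gibbs_predict(post, x):
    """Gibbs algorithm: draw one hypothesis from P_m and use its label."""
    j = rng.choice(len(post), p=post)
    return int(concepts[j, x])

f = rng.choice(len(prior), p=prior)      # target drawn from the prior
m = 5
post = posterior(np.arange(m), concepts[f, :m])
print(bayes_predict(post, m), gibbs_predict(post, m), concepts[f, m])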
The Gibbs algorithm is the "zero-temperature" limit of the learning algorithm studied in several recent papers [2, 3, 8, 9]. If we let Gibbs_{m+1}(x, f) (abbreviated Gibbs_{m+1}(f) since x is fixed for now) be a random variable whose value is 1 if the Gibbs algorithm predicts f(x_{m+1}) incorrectly and 0 otherwise, then it can be shown that the probability of a mistake for a random f is
(3)
Note that by the definition of the Gibbs algorithm, Equation (3) is exactly the
average probability of mistake of a consistent hypothesis, using the distribution on
F defined by the prior. Thus bounds on this expectation provide an interesting
contrast to those obtained via VC dimension analysis, which always gives bounds
on the probability of mistake of the worst consistent hypothesis.
4 THE MAIN INEQUALITY
In this section we state one of our main results: a chain of inequalities that upper and lower bounds the expected error for both the Bayes and Gibbs algorithms by simple functions of the expected information gain. More precisely, using the characterizations of the expectations in terms of the volume ratio χ_{m+1}(f) given by Equations (1), (2) and (3), we can prove the following, which we refer to as the main inequality:

H^{-1}(E_{f∈P}[I_{m+1}(f)]) ≤ E_{f∈P}[Bayes_{m+1}(f)] ≤ E_{f∈P}[Gibbs_{m+1}(f)] ≤ (1/2) E_{f∈P}[I_{m+1}(f)].   (4)
Here we have defined an inverse to the binary entropy function H(p) = −p log p − (1 − p) log(1 − p) by letting H^{-1}(q), for q ∈ [0, 1], be the unique p ∈ [0, 1/2] such that H(p) = q. Note that the bounds given depend on properties of the particular prior P, and on properties of the particular fixed sequence x. These upper and lower bounds are equal (and therefore tight) at both extremes E_{f∈P}[I_{m+1}(f)] = 1 (maximal information gain) and E_{f∈P}[I_{m+1}(f)] = 0 (minimal information gain). To obtain a weaker but perhaps more convenient lower bound, it can also be shown that there is a constant c_0 > 0 such that for all p > 0, H^{-1}(p) ≥ c_0 p / log(2/p).

Finally, if all that is wanted is a direct comparison of the performances of the Gibbs and Bayes algorithms, we can also show:

E_{f∈P}[Gibbs_{m+1}(f)] ≤ 2 E_{f∈P}[Bayes_{m+1}(f)].   (5)
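Since H is strictly increasing on [0, 1/2], its inverse can be evaluated numerically by bisection. A small sketch, assuming base-2 logarithms so that H(1/2) = 1; the function names and tolerance are ours:

import numpy as np

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def inv_binary_entropy(q, tol=1e-12):
    """H^{-1}(q): the unique p in [0, 1/2] with H(p) = q, found by bisection
    since H is strictly increasing on [0, 1/2]."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if binary_entropy(mid) < q else (lo, mid)
    return 0.5 * (lo + hi)

print(inv_binary_entropy(1.0))    # -> 0.5, the maximal-information extreme
print(inv_binary_entropy(0.08))   # small q: on the order of c0 * q / log(2/q)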
5 THE MAIN INEQUALITY: CUMULATIVE VERSION
In this section we state a cumulative version of the main inequality: namely, bounds
on the expected cumulative number of mistakes made in the first m trials (rather
than just the instantaneous expectations).
First, for the cumulative information gain, it can be shown that E_{f∈P}[Σ_{i=1}^m I_i(f)] = E_{f∈P}[−log V_m(f)]. This expression has a natural interpretation. The first m instances x_1, ..., x_m of x induce a partition Π_m^F(x) of the concept class F defined by Π_m^F(x) = Π_m^F = {F_m(x, f) : f ∈ F}. Note that |Π_m^F| is always at most 2^m, but may be considerably smaller, depending on the interaction between F and x_1, ..., x_m. It is clear that E_{f∈P}[−log V_m(f)] = −Σ_{π∈Π_m^F} P[π] log P[π]. Thus the expected cumulative information gained from the labels of x_1, ..., x_m is simply the entropy of the partition Π_m^F under the distribution P. We shall denote this entropy by H_P(Π_m^F(x)) = H_m^P(x) = H_m^P. Now analogous to the main inequality for the instantaneous case (Inequality (4)), we can show:
c_0 H_m^P / log(2m/H_m^P) ≤ m H^{-1}(H_m^P / m) ≤ E_{f∈P}[Σ_{i=1}^m Bayes_i(f)] ≤ E_{f∈P}[Σ_{i=1}^m Gibbs_i(f)] ≤ (1/2) H_m^P   (6)
Here we have applied the inequality H^{-1}(p) ≥ c_0 p / log(2/p) in order to give the lower bound in more convenient form. As in the instantaneous case, the upper and lower bounds here depend on properties of the particular P and x. When the cumulative information gain is maximum (H_m^P = m), the upper and lower bounds are tight.
These bounds on learning performance in terms of a partition entropy are of special
importance to us, since they will form the crucial link between the Bayesian setting
and the Vapnik-Chervonenkis dimension theory.
6 MOVING TO A WORST-CASE THEORY: BOUNDING THE INFORMATION GAIN BY THE VC DIMENSION
Although we have given upper bounds on the expected cumulative number of mistakes for the Bayes and Gibbs algorithms in terms of H_m^P(x), we are still left with the problem of evaluating this entropy, or at least obtaining reasonable upper bounds on it. We can intuitively see that the "worst case" for learning occurs when the partition entropy H_m^P(x) is as large as possible. In our context, the entropy is qualitatively maximized when two conditions hold: (1) the instance sequence x induces a partition of F that is the largest possible, and (2) the prior P gives equal weight
to each element of this partition.
In this section, we move away from our Bayesian average-case setting to obtain
worst-case bounds by formalizing these two conditions in terms of combinatorial
parameters depending only on the concept class F. In doing so, we form the link
between the theory developed so far and the VC dimension theory.
The second of the two conditions above is easily quantified. Since the entropy of
a partition is at most the logarithm of the number of classes in it, a trivial upper
bound on the entropy which holds for all priors P is H_m^P(x) ≤ log |Π_m^F(x)|. VC dimension theory provides an upper bound on log |Π_m^F(x)| as follows.
For any sequence x = x_1, x_2, ... of instances and for m ≥ 1, let dim_m(F, x) denote the largest d ≥ 0 such that there exists a subsequence x_{i_1}, ..., x_{i_d} of x_1, ..., x_m with |Π_d^F((x_{i_1}, ..., x_{i_d}))| = 2^d; that is, for every possible labeling of x_{i_1}, ..., x_{i_d} there is some target concept in F that gives this labeling. The Vapnik-Chervonenkis (VC) dimension of F is defined by dim(F) = max{dim_m(F, x) : m ≥ 1 and x_1, x_2, ... ∈ X}. It can be shown [7, 10] that for all x and m ≥ d ≥ 1,

log |Π_m^F(x)| ≤ (1 + o(1)) dim_m(F, x) log (m / dim_m(F, x))   (7)

where o(1) is a quantity that goes to zero as α = m/dim_m(F, x) goes to infinity.
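For a small finite class, both |Π_m^F(x)| and dim_m(F, x) can be computed by brute force, which gives a rough numerical feel for the two sides of (7). A sketch under illustrative sizes (the exhaustive search is exponential and only feasible for toy classes; base-2 logs assumed, and the (1 + o(1)) factor is dropped, so this is a comparison rather than a verified inequality):

import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
concepts = rng.integers(0, 2, size=(64, 8))   # illustrative finite class F

def partition_size(idx):
    """|Pi_m(x)|: distinct labelings F induces on the instances in idx."""
    return len({tuple(row) for row in concepts[:, idx]})

def dim_m(m):
    """Largest d with some d-subsequence of x_1..x_m shattered (2^d labelings)."""
    for d in range(m, 0, -1):
        if any(partition_size(list(sub)) == 2 ** d
               for sub in combinations(range(m), d)):
            return d
    return 0

m = 6
pi, d = partition_size(list(range(m))), dim_m(m)
bound = d * np.log2(m / d) if d else 0.0
print(f"|Pi_m| = {pi}, dim_m = {d}, log2|Pi_m| = {np.log2(pi):.2f}, "
      f"d log2(m/d) = {bound:.2f}")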
In all of our discussions so far, we have assumed that the instance sequence x is fixed in advance, but that the target concept f is drawn randomly according to P. We now move to the completely probabilistic model, in which f is drawn according to P, and each instance x_m in the sequence x is drawn randomly and independently according to a distribution V over the instance space X (this infinite sequence of draws from V will be denoted x ∈ V*). Under these assumptions, it follows from Inequalities (6) and (7), and the observation above that H_m^P(x) ≤ log |Π_m^F(x)|, that for any P and any V,

E_{f∈P, x∈V*}[Σ_{i=1}^m Bayes_i(x, f)] ≤ E_{f∈P, x∈V*}[Σ_{i=1}^m Gibbs_i(x, f)] ≤ (1/2) E_{x∈V*}[log |Π_m^F(x)|] ≤ ((1 + o(1))/2) E_{x∈V*}[dim_m(F, x) log (m / dim_m(F, x))] ≤ ((1 + o(1))/2) dim(F) log (m / dim(F))   (8)

The expectation E_{x∈V*}[log |Π_m^F(x)|] is the VC entropy defined by Vapnik and Chervonenkis in their seminal paper on uniform convergence [11].
In terms of instantaneous mistake bounds, using more sophisticated techniques [4], we can show that for any P and any V,

E_{f∈P, x∈V*}[Bayes_m(x, f)] ≤ E_{x∈V*}[dim_m(F, x)/m] ≤ dim(F)/m   (9)

E_{f∈P, x∈V*}[Gibbs_m(x, f)] ≤ E_{x∈V*}[2 dim_m(F, x)/m] ≤ 2 dim(F)/m   (10)

Haussler, Littlestone and Warmuth [5] construct specific V, P and F for which the last bound given by Inequality (8) is tight to within a factor of 1/ln(2) ≈ 1.44; thus this bound cannot be improved by more than this factor in general.¹ Similarly, the bound given by Inequality (9) cannot be improved by more than a factor of 2 in general.

¹It follows that the expected total number of mistakes of the Bayes and the Gibbs algorithms differ by a factor of at most about 1.44 in each of these cases; this was not previously known.
For specific V, P and F, however, it is possible to improve the general bounds
given in Inequalities (8), (9) and (10) by more than the factors indicated above.
We calculate the instantaneous mistake bounds for the Bayes and Gibbs algorithms
in the natural case that F is the set of homogeneous linear threshold functions
on R^d and both the distribution V and the prior P on possible target concepts (represented also by vectors in R^d) are uniform on the unit sphere in R^d. This class has VC dimension d. In this case, under certain reasonable assumptions used in statistical mechanics, it can be shown that for m ≥ d ≥ 1,

E_{f∈P, x∈V*}[Bayes_m(x, f)] ≈ 0.44d/m

(compared with the upper bound of d/m given by Inequality (9) for any class of VC dimension d) and

E_{f∈P, x∈V*}[Gibbs_m(x, f)] ≈ 0.62d/m

(compared with the upper bound of 2d/m in Inequality (10)). The ratio of these asymptotic bounds is √2. We can also show that this performance advantage of Bayes over Gibbs is quite robust even when P and V vary, and there is noise in the examples [6].
7 OTHER RESULTS AND CONCLUSIONS
We have a number of other results, and briefly describe here one that may be of
particular interest to neural network researchers. In the case that the class F has
infinite VC dimension (for instance, if F is the class of all multi-layer perceptrons
of finite size), we can still obtain bounds on the number of cumulative mistakes by
decomposing F into F_1, F_2, ..., F_i, ..., where each F_i has finite VC dimension, and by decomposing the prior P over F as a linear sum P = Σ_{i=1}^∞ a_i P_i, where each P_i is an arbitrary prior over F_i, and Σ_{i=1}^∞ a_i = 1. A typical decomposition might let F_i be all multi-layer perceptrons of a given architecture with at most i weights, in which case d_i = O(i log i) [1]. Here we can show an upper bound on the cumulative mistakes during the first m examples of roughly H{a_i} + [Σ_{i=1}^∞ a_i d_i] log m for both the Bayes and Gibbs algorithms, where H{a_i} = −Σ_{i=1}^∞ a_i log a_i. The quantity Σ_{i=1}^∞ a_i d_i plays the role of an "effective VC dimension" relative to the prior weights {a_i}. In the case that x is also chosen randomly, we can bound the probability of mistake on the mth trial by roughly (1/m)(H{a_i} + [Σ_{i=1}^∞ a_i d_i] log m).
In our current research we are working on extending the basic theory presented
here to the problems of learning with noise (see Opper and Haussler [6]), learning
multi-valued functions, and learning with other loss functions.
Perhaps the most important general conclusion to be drawn from the work presented here is that the various theories of learning curves based on diverse ideas
from information theory, statistical physics and the VC dimension are all in fact
closely related, and can be naturally and beneficially placed in a common Bayesian
framework.
Acknowledgements
We are greatly indebted to Ron Rivest for his valuable suggestions and guidance,
and to Sara Solla and Naftali Tishby for insightful ideas in the early stages of this
investigation. We also thank Andrew Barron, Andy Kahn, Nick Littlestone, Phil
Long, Terry Sejnowski and Haim Sompolinsky for stimulating discussions on these
topics. This research was supported by ONR grant N00014-91-J-1162, AFOSR grant AFOSR-89-0506, ARO grant DAAL03-86-K-0171, DARPA contract N00014-89-J-1988, and a grant from the Siemens Corporation. This research was conducted
in part while M. Kearns was at the M.I.T. Laboratory for Computer Science and the
International Computer Science Institute, and while R. Schapire was at the M.I.T.
Laboratory for Computer Science and Harvard University.
References
[1] E. Baum and D. Haussler. What size net gives valid generalization? Neural
Computation, 1(1):151-160, 1989.
[2] J. Denker, D. Schwartz, B. Wittner, S. Solla, R. Howard, L. Jackel, and J. Hopfield. Automatic learning, rule extraction and generalization. Complex Systems,
1:877-922, 1987.
[3] G. Györgyi and N. Tishby. Statistical theory of learning a rule. In Neural
Networks and Spin Glasses. World Scientific, 1990.
[4] D. Haussler, M. Kearns, and R. Schapire. Bounds on the sample complexity of Bayesian learning using information theory and the VC dimension. In
Computational Learning Theory: Proceedings of the Fourth Annual Workshop.
Morgan Kaufmann, 1991.
[5] D. Haussler, N. Littlestone, and M. Warmuth. Predicting {O, 1}-functions on
randomly drawn points. Technical Report UCSC-CRL-90-54, University of
California Santa Cruz, Computer Research Laboratory, Dec. 1990.
[6] M. Opper and D. Haussler. Calculation of the learning curve of Bayes optimal
classification algorithm for learning a perceptron with noise. In Computational
Learning Theory: Proceedings of the Fourth Annual Workshop. Morgan Kaufmann, 1991.
[7] N. Sauer. On the density of families of sets. Journal of Combinatorial Theory
(Series A), 13:145-147, 1972.
[8] H. Sompolinsky, N. Tishby, and H. Seung. Learning from examples in large
neural networks. Physics Review Letters, 65:1683-1686, 1990.
[9] N. Tishby, E. Levin, and S. Solla. Consistent inference of probabilities in
layered networks: predictions and generalizations. In IJCNN International
Joint Conference on Neural Networks, volume II, pages 403-409. IEEE, 1989.
[10] V. N. Vapnik. Estimation of Dependences Based on Empirical Data. SpringerVerlag, New York, 1982.
[11] V. N. Vapnik and A. Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its
Applications, 16(2):264-80, 1971.
4,298 | 4,890 | Bayesian Inference and Online Experimental Design
for Mapping Neural Microcircuits
Ben Shababo *
Department of Biological Sciences
Columbia University, New York, NY 10027
[email protected]
Brooks Paige *
Department of Engineering Science
University of Oxford, Oxford OX1 3PJ, UK
[email protected]
Ari Pakman
Department of Statistics,
Center for Theoretical Neuroscience,
& Grossman Center for the Statistics of Mind
Columbia University, New York, NY 10027
[email protected]
Liam Paninski
Department of Statistics,
Center for Theoretical Neuroscience,
& Grossman Center for the Statistics of Mind
Columbia University, New York, NY 10027
[email protected]
Abstract
With the advent of modern stimulation techniques in neuroscience, the opportunity arises to map neuron to neuron connectivity. In this work, we develop
a method for efficiently inferring posterior distributions over synaptic strengths
in neural microcircuits. The input to our algorithm is data from experiments in
which action potentials from putative presynaptic neurons can be evoked while
a subthreshold recording is made from a single postsynaptic neuron. We present
a realistic statistical model which accounts for the main sources of variability in
this experiment and allows for significant prior information about the connectivity
and neuronal cell types to be incorporated if available. Due to the technical challenges and sparsity of these systems, it is important to focus experimental time
stimulating the neurons whose synaptic strength is most ambiguous, therefore we
also develop an online optimal design algorithm for choosing which neurons to
stimulate at each trial.
1 Introduction
A major goal of neuroscience is the mapping of neural microcircuits at the scale of hundreds to
thousands of neurons [1]. By mapping, we specifically mean determining which neurons synapse
onto each other and with what weight. One approach to achieving this goal involves the simultaneous
stimulation and observation of populations of neurons. In this paper, we specifically address the
mapping experiment in which a set of putative presynaptic neurons are optically stimulated while
an electrophysiological trace is recorded from a designated postsynaptic neuron. It should be noted
that the methods we present are general enough that most stimulation and subthreshold monitoring
technology would be well fit by our model with only minor changes. These types of experiments
have been implemented with some success [2, 3, 6], yet there are several issues which prevent
efficient, large scale mapping of neural microcircuitry. For example, while it has been shown that
multiple neurons can be stimulated simultaneously [4, 5], successful mapping experiments have thus
far only stimulated a single neuron per trial which increases experimental time [2, 3, 6]. Stimulating
multiple neurons simultaneously and with high accuracy requires well-tuned hardware, and even
then some level of stimulus uncertainty may remain. In addition, a large portion of connection weights are small, which has meant that determining these weights is difficult and that many trials must be performed. Due to the sparsity of neural connectivity, potentially useful trials are spent on unconnected pairs instead of refining weight estimates for connected pairs when the stimuli are chosen non-adaptively. In this paper, we address these issues by developing a procedure for sparse Bayesian inference and information-based experimental design which can reconstruct neural microcircuits accurately and quickly despite the issues listed above.

*These authors contributed equally to this work.
2 A realistic model of neural microcircuits
In this section we propose a novel and thorough statistical model which is specific enough to capture
most of the relevant variability in these types of experiments while being flexible enough to be used
with many different hardware setups and biological preparations.
2.1 Stimulation
In our experimental setup, at each trial, n = 1, . . . , N , the experimenter stimulates R of K possible
presynaptic neurons. We represent the chosen set of neurons for each trial with the binary vector
zn ? {0, 1}K , which has a one in each of the the R entries corresponding to the stimulated neurons
on that trial. One of the difficulties of optical stimulation lies in the experimenter?s inability to
stimulate a specific neuron without possibly failing to stimulate the target neuron or engaging other
nearby neurons. In general, this is a result of the fact that optical excitation does not stimulate a
single point in space but rather has a point spread function that is dependent on the hardware and the
biological tissue. To complicate matters further, each neuron has a different rheobase (a measure of
how much current is needed to generate an action potential) and expression level of the optogenetic
protein. While some work has shown that it may be possible to stimulate exact sets of neurons,
this setup requires very specific hardware and fine tuning [4, 5]. In addition, even if a neuron
fires, there is some probability that synaptic transmission will not occur. Because these events are
difficult or impossible to observe, we model this uncertainty by introducing a second binary vector
x_n ∈ {0, 1}^K denoting the neurons that actually release neurotransmitter in trial n. The conditional
distribution of xn given zn can be chosen by the experimenter to match their hardware settings and
understanding of synaptic transmission rates in their preparation.
2.2 Sparse connectivity
Numerous studies have collected data to estimate both connection probabilities and synaptic weight
distributions as a function of distance and cell identity [2, 3, 6, 7, 8, 9, 10, 11, 12]. Generally, the
data show that connectivity is sparse and that most synaptic weights are small with a heavy tail of
strong connections. To capture the sparsity of neural connectivity, we place a "spike-and-slab" prior on the synaptic weights w_k [13, 14, 15], for each presynaptic neuron k = 1, ..., K; these priors are
designed to place non-zero probability on the event that a given weight wk is exactly zero. Note that
we do not need to restrict the ?slab? distributions (the conditional distributions of wk given that wk
is nonzero) to the traditional Gaussian choice, and in fact each weight can have its own parameters.
For example, log-normal [12] or exponential [8, 10] distributions may be used in conjunction with
information about cell type and location to assign highly informative priors.¹
2.3 Postsynaptic response
In our model a subthreshold response is measured from a designated postsynaptic neuron. Here we
assume the measurement is a one-dimensional trace y_n ∈ R^T, where T is the number of samples in
the trace. The postsynaptic response for each synaptic event in a given trial can be modeled using an
appropriate template function fk (?) for each presynaptic neuron k. For this paper we use an alpha
function to model the shape of each neuron?s contribution to the postsynaptic current, parameterized
by time constants ?k which define the rise and decay time. As with the synaptic weight priors, the
template functions could be designed based on the cells? identities. The onset of each postsynaptic
¹A cell's identity can be general such as excitatory or inhibitory, or more specific such as VIP- or PV-interneurons. These identities can be identified by driving the optogenetic channel with a particular promotor
unique to that cell type or by coexpressing markers for various cell types along with the optogenetic channel.
[Figure 1 panels: "Location of presynaptic neurons and stimuli"; "Presynaptic weights" (weight vs. neuron k); "Postsynaptic current trace" (current [pA] vs. time [samples], 0-200).]
Figure 1: A schematic of the model experiment. The left figure shows the relative location of
100 presynaptic neurons; inhibitory neurons are shown in yellow, and excitatory neurons in purple.
Neurons marked with a black outline have a nonzero connectivity to the postsynaptic neuron (shown
as a blue star, in the center). The blue circles show the diffusion of the stimulus through the tissue.
The true connectivity weights are shown on the upper right, with blue vertical lines marking the five
neurons which were actually fired as a result of this stimulus. The resulting time series postsynaptic
current trace is shown in the bottom right. The connected neurons which fired are circled in red, the
triangle and star marking their weights and corresponding postsynaptic events in the plots at right.
response may be jittered such that each event starts at some time d_{nk} after t = 0, where the delays could be conditionally distributed on the parameters of the stimulation and cells. Finally, at each time step the signal is corrupted by zero mean Gaussian noise with variance σ². This noise distribution is
chosen for simplicity; however, the model could easily handle time-correlated noise.
2.4 Full definition of model
The full model can be summarized by the likelihood

p(Y | w, X, D) = ∏_{n=1}^{N} ∏_{t=1}^{T} N(y_{nt} | Σ_k w_k x_{nk} f_k(t − d_{nk}, τ_k), σ²)   (1)

with the general spike-and-slab prior

p(γ_k) = Bernoulli(a_k),   p(w_k | γ_k) = γ_k p(w_k | γ_k = 1) + (1 − γ_k) δ_0(w_k)   (2a, 2b)

where Y ∈ R^{N×T}, X ∈ {0, 1}^{N×K}, and D ∈ R^{N×K} are composed of the responses, latent neural activity, and delays, respectively; γ_k is a binary variable indicating whether or not neuron k is connected.
is connected.
We restate that the key to this model is that it captures the main sources of uncertainty in the experiment while providing room for particulars regarding hardware and the anatomy and physiology of
the system to be incorporated. To infer the marginal distribution of the synaptic weights, one can
use standard Bayesian methods such as Gibbs sampling or variational inference, both of which are
discussed below. An example set of neurons and connectivity weights, along with the set of stimuli
and postsynaptic current trace for a single trial, is shown in Figure 1.
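A minimal simulation of this generative model can be sketched as follows; the sizes, failure probability, template time constants, and slab distribution below are illustrative stand-ins, not the values used in Appendix A:

import numpy as np

rng = np.random.default_rng(0)
K, T, sigma = 20, 200, 2.5
a = np.full(K, 0.1)                       # prior connection probabilities a_k
gamma = rng.random(K) < a                 # spike-and-slab sparsity pattern
w = gamma * rng.normal(0.0, 10.0, K)      # slab weights (zero where gamma = 0)
tau_rise, tau_decay = 2.0, 20.0

def alpha_template(t, d):
    """Alpha-function PSC template with onset delay d (one choice of f_k)."""
    s = np.clip(t - d, 0, None)
    return (np.exp(-s / tau_decay) - np.exp(-s / tau_rise)) * (t >= d)

def simulate_trial(z, fail_prob=0.2):
    """One trial: each targeted neuron may fail to fire with prob fail_prob."""
    x = z & (rng.random(K) > fail_prob)   # latent firing vector x_n
    d = rng.integers(5, 15, K)            # per-neuron onset jitter d_nk
    t = np.arange(T)
    y = sum(w[k] * alpha_template(t, d[k]) for k in range(K) if x[k])
    return x, y + rng.normal(0.0, sigma, T)

z = np.zeros(K, dtype=bool)
z[rng.choice(K, 4, replace=False)] = True
x, y = simulate_trial(z)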
3 Inference
Throughout the remainder of the paper, all simulated data is generated from the model presented
above. As mentioned, any free hyperparameters or distribution choices can be chosen intelligently
from empirical evidence. Biological parameters may be specific and chosen on a cell by cell basis
or left general for the whole system. We show in our results that inference and optimal design still
perform well when general priors are used. Details regarding data simulation as well as specific
choices we make in our experiments are presented in Appendix A.
3.1 Charge as synaptic strength
To reduce the space over which we perform inference, we collapse the variables w_k and τ_k into a single variable c_k = Σ_t w_k f_k(t − d_{nk}, τ_k), which quantifies the charge transfer during the synaptic event and can be used to define the strength of a connection. Integrating over time also eliminates any dependence on the delays d_{nk}. In this context, we reparameterize the likelihood as a function of y_n = Σ_{t=0}^{T} y_{nt} and σ̃ = σT^{1/2}, and the resulting likelihood is

p(y | X, c) = ∏_n N(y_n | x_n^⊤ c, σ̃²).   (3)
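The scaling σ̃ = σT^{1/2} follows because the sum of T independent N(0, σ²) noise samples has variance σ²T. A quick numerical check, with made-up values of T and σ:

import numpy as np

rng = np.random.default_rng(3)
T, sigma, reps = 200, 2.5, 10000
# Summing T iid N(0, sigma^2) samples gives std sigma * sqrt(T).
sums = rng.normal(0.0, sigma, (reps, T)).sum(axis=1)
print(sums.std(), sigma * np.sqrt(T))   # both close to ~35.36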
We found that naïve MCMC sampling over the posterior of w, τ, σ, X, and D insufficiently explored the support and inference was unsuccessful. In this effort to make the inference procedure
computationally tractable, we discard potentially useful temporal information in the responses. An
important direction for future work is to experiment with samplers that can more efficiently explore
the full posterior (e.g., using Wang-Landau or simulated tempering methods).
3.2 Gibbs sampling

The reparameterized posterior p(c, γ, X | Z, y) can be inferred using a simple Gibbs sampler. We approximate the prior over c as a spike-and-slab with Gaussian slabs, where the slabs could be truncated if the cells' excitatory or inhibitory identity is known. Each x_{nk} can be sampled by computing the odds ratio, and following [15] we draw each c_k, γ_k from the joint distribution p(c_k, γ_k | Z, y, X, {c_j, γ_j | j ≠ k}) by sampling first γ_k from p(γ_k | Z, y, X, {c_j | j ≠ k}), then p(c_k | Z, y, X, {c_j | j ≠ k}, γ_k).
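A sketch of one sweep of such a sampler over (c_k, γ_k) under the collapsed likelihood of Eq. (3) is given below, holding X fixed for brevity (the full sampler would also resample each x_{nk} from its odds ratio). The Gaussian-slab algebra is standard; the function and variable names are ours:

import numpy as np

def gibbs_sweep(y, X, c, gamma, a, mu0, v0, noise_var, rng):
    """One sweep over (c_k, gamma_k) given scalar responses y (N,), the binary
    stimulus/firing matrix X (N, K), and a spike-and-slab prior
    c_k ~ gamma_k N(mu0, v0) + (1 - gamma_k) delta_0. X is held fixed here."""
    for k in range(len(c)):
        xk = X[:, k].astype(float)
        r = y - X @ c + xk * c[k]                 # residual excluding neuron k
        prec = xk @ xk / noise_var + 1.0 / v0
        v_post = 1.0 / prec
        m_post = v_post * (xk @ r / noise_var + mu0 / v0)
        # log marginal-likelihood ratio of slab vs. spike for this coordinate
        log_ratio = 0.5 * (np.log(v_post / v0)
                           + m_post ** 2 / v_post - mu0 ** 2 / v0)
        odds = a[k] / (1.0 - a[k]) * np.exp(log_ratio)
        gamma[k] = rng.random() < odds / (1.0 + odds)
        c[k] = rng.normal(m_post, np.sqrt(v_post)) if gamma[k] else 0.0
    return c, gamma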
3.3 Variational Bayes

As stated earlier we do not only want to recover the parameters of the system, but want to perform optimal experimental design, which is a closed-loop process. One essential aspect of the design procedure is that decisions must be returned to the experimenter quickly, on the order of a few seconds. This means that we must be able to perform inference of the posterior as well as choose the next stimulus extremely quickly. For realistically sized systems with hundreds to thousands of neurons, Gibbs sampling will be too slow, and we have to explore other options for speeding up inference.

To achieve this decrease in runtime, we approximate the posterior distribution of c and γ using a variational approach [16]. The use of variational inference for spike-and-slab regression models has been explored in [17, 18], and we follow their methods with some minor changes. If we, for now, assume that X is known and let the spike-and-slab prior on c have untruncated Gaussian slabs, then this variational approach finds the best fully-factorized approximation to the true posterior

p(c, γ | x_{1:n}, y_{1:n}) ≈ ∏_k q(c_k, γ_k)   (4)

where the functional form of q(c_k, γ_k) is itself restricted to a spike-and-slab distribution

q(c_k, γ_k) = α_k N(c_k | μ_k, s_k²) if γ_k = 1;  (1 − α_k) δ_0(c_k) otherwise.   (5)

The variational parameters α_k, μ_k, s_k for k = 1, ..., K are found by minimizing the KL-divergence KL(q‖p) between the left and right hand sides of Eq. 4 with respect to these values. As is the case with fully-factorized variational distributions, updating the posterior involves an iterative algorithm which cycles through the parameters for each factor.
The factorized variational approximation is reasonable when the number of simultaneous stimuli, R, is small. Note that if we examine the posterior distributions of the weights

p(c | y, X) ∝ ∏_n N(y_n | x_n^⊤ c, σ̃²) ∏_k [a_k N(c_k | μ_k, σ_k²) + (1 − a_k) δ_0(c_k)]   (6)

we see that if each x_n contains only one nonzero value then each factor in the likelihood is dependent on only one of the K weights and can be multiplied into the corresponding kth spike-and-slab. Therefore, since the product of a spike-and-slab and a Gaussian is still a spike-and-slab, if we stimulate only one neuron at each trial then this posterior is also spike-and-slab, and the variational approximation becomes exact in this limit.
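This exactness is easy to verify for a single trial: multiplying a spike-and-slab prior by a Gaussian likelihood in one coordinate yields another spike-and-slab with updated parameters. A sketch of that rank-one update (names are ours; the algebra is the standard Gaussian conjugate update plus a marginal-likelihood ratio for the inclusion probability):

import numpy as np

def update_single_stim(y, a, mu, s2, noise_var):
    """Exact posterior for one spike-and-slab weight after a trial in which
    only that neuron fired, so y ~ N(c, noise_var) given the weight c.
    Prior: p(c) = a N(mu, s2) + (1 - a) delta_0; the posterior has the same
    form, so the update returns the new (a, mu, s2)."""
    def norm_pdf(x, m, v):
        return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    v_post = 1.0 / (1.0 / s2 + 1.0 / noise_var)      # Gaussian x Gaussian
    mu_post = v_post * (mu / s2 + y / noise_var)
    like_slab = norm_pdf(y, mu, s2 + noise_var)      # marginal under the slab
    like_spike = norm_pdf(y, 0.0, noise_var)         # marginal under the spike
    a_post = a * like_slab / (a * like_slab + (1 - a) * like_spike)
    return a_post, mu_post, v_post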
Since we do not directly observe X, we must take the expectation of the variational parameters α_k, μ_k, s_k with respect to the distribution p(X | Z, y). We Monte Carlo approximate this integral in a manner similar to the approach used for integrating over the hyperparameters in [17]; however, here we further approximate by sampling over potential stimuli x_{nk} from p(x_{nk} = 1 | z_n). In practice we will see this approximation suffices for experimental design, with the overall variational approach performing nearly as well for posterior weight reconstruction as Gibbs sampling from the true posterior.
4 Optimal experimental design
The preparations needed to perform these types of experiments tend to be short-lived, and indeed, the very act of collecting data (that is, stimulating and probing cells) can compromise the health of
the preparation further. Also, one may want to use the connectivity information to perform additional
experiments. Therefore it becomes critical to complete the mapping phase of the experiment as
quickly as possible. We are thus strongly motivated to optimize the experimental design: to choose
the optimal subset of neurons zn to stimulate at each trial to minimize N , the overall number of
trials required for good inference.
The Bayesian approach to the optimization of experimental design has been explored in [19, 20,
21]. In this paper, we maximize the mutual information I(θ; D) between the model parameters θ and the data D; however, other objective functions could be explored. Mutual information can be decomposed into a difference of entropies, one of which does not depend on the data. Therefore the optimization reduces to the intuitive objective of minimizing the posterior entropy with respect to the data. Because the previous data D_{n−1} = {(z_1, y_1), ..., (z_{n−1}, y_{n−1})} are fixed and y_n is dependent on the stimulus z_n, our problem is reduced to choosing the optimal next stimulus, denoted z*_n, in expectation over y_n:

z*_n = argmax_{z_n} E_{y_n|z_n}[I(θ; D)] = argmin_{z_n} E_{y_n|z_n}[H(θ | D)].   (7)
5 Experimental design procedure
The optimization described in Section 4 entails performing a combinatorial optimization over zn ,
where for each zn we consider an expectation over all possible yn . In order to be useful to experimenters in an online setting, we must be able to choose the next stimulus in only one or two seconds.
For any realistically sized system, an exact optimization is computationally infeasible; therefore in
the following section we derive a fast method for approximating the objective function.
5.1 Computing the objective function
The variational posterior distribution of c_k, γ_k can be used to characterize our general objective function described in Section 4. We define the cost function J to be the right-hand side of Equation 7,

J ≡ E_{y_n|z_n}[H(c, γ | D)]   (8)

such that the optimal next stimulus z*_n can be found by minimizing J. We benefit immediately from the factorized approximation of the variational posterior, since we can rewrite the joint entropy as

H[c, γ | D] ≈ Σ_k H[c_k, γ_k | D]   (9)

allowing us to optimize over the sum of the marginal entropies instead of having to compute the (intractable) entropy over the full posterior. Using the conditional entropy identity H[c_k, γ_k | D] = H[c_k | γ_k, D] + H[γ_k | D], we see that the entropy of each spike-and-slab is the sum of a weighted Gaussian entropy and a Bernoulli entropy, and we can write out the approximate objective function as

J ≈ Σ_k E_{y_n|z_n}[ (α_{k,n}/2)(1 + log(2π s²_{k,n})) − α_{k,n} log α_{k,n} − (1 − α_{k,n}) log(1 − α_{k,n}) ].   (10)
Here, we have introduced additional notation, using α_{k,n}, μ_{k,n}, and s_{k,n} to refer to the parameters of the variational posterior distribution given the data through trial n. Intuitively, we see that equation 10 represents a balance between minimizing the sparsity pattern entropy H[γ_k] of each neuron and minimizing the weight entropy H[c_k | γ_k = 1] proportional to the probability α_k that the presynaptic neuron is connected. As p(γ_k = 1) → 1, the entropy of the Gaussian slab distribution grows to dominate. In algorithm behavior, we see when the probability that a neuron is connected increases, we spend time stimulating it to reduce the uncertainty in the corresponding nonzero slab distribution.
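The per-factor entropy appearing in Equation 10 is cheap to evaluate from the variational parameters. A small helper, assuming natural logarithms (the clipping constant is ours):

import numpy as np

def spike_slab_entropy(alpha, s2):
    """H[c_k, gamma_k] for a spike-and-slab factor: the Bernoulli entropy of
    the inclusion indicator plus alpha times the Gaussian slab entropy."""
    a = np.clip(alpha, 1e-12, 1.0 - 1e-12)        # guard the log terms
    h_bern = -a * np.log(a) - (1.0 - a) * np.log(1.0 - a)
    h_slab = 0.5 * (1.0 + np.log(2.0 * np.pi * s2))
    return h_bern + a * h_slab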
To perform this optimization we must compute the expected joint entropy with respect to p(yn |zn ).
For any particular candidate zn , this can be Monte Carlo approximated by first sampling yn from the
posterior distribution p(y_n | z_n, c, D_{n−1}), where c is drawn from the variational posterior inferred at trial n − 1. Each sampled y_n may be used to estimate the variational parameters α_{k,n} and s_{k,n} with which we evaluate H[c_k, γ_k]; we average over these evaluations of the entropy from each sample to compute an estimate of J in Eq. 10.

Once we have chosen z*_n, we execute the actual trial and run the variational inference procedure on the full data to obtain the updated variational posterior parameters α_{k,n}, μ_{k,n}, and s_{k,n} which are
needed for optimization. Once the experiment has concluded, Gibbs sampling can be run, though
we found only a limited gain when comparing Gibbs sampling to variational inference.
5.2 Fast optimization
The major cost to the algorithm is in the stimulus selection phase. It is not feasible to evaluate the
right-hand side of equation 10 for every zn because as K grows there is a combinatorial explosion
of possible stimuli. To avoid an exhaustive search over possible z_n, we adopt a greedy approach for choosing which R of the K locations to stimulate. First we rank the K neurons based on an approximation of the objective function. To do this, we propose K hypothetical stimuli, ẑ_n^k, each all zeros except the kth entry equal to 1; that is, we examine only the K stimuli which represent stimulating a single location. We then set z*_{nk} = 1 for the R neurons corresponding to the ẑ_n^k which give the smallest values for the objective function, and all other entries of z*_n to zero. We found that the neurons selected by a brute force approach are most likely to be the neurons that the greedy selection process chooses (see Figure 1 in the Appendix).
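Putting the pieces together, a toy version of the greedy selection step might look as follows. All parameter values are made up, the hypothetical responses are drawn from the current factorized posterior, and the rank-one update mirrors the single-stimulus algebra sketched in Section 3.3:

import numpy as np

rng = np.random.default_rng(4)
K, R, L = 50, 4, 10                       # sites, stimuli per trial, y samples
alpha = rng.random(K)                     # current inclusion probabilities
mu, s2 = rng.normal(0, 5, K), np.full(K, 25.0)
noise_var = 6.25

def entropy(a, v):
    a = np.clip(a, 1e-12, 1 - 1e-12)
    return (-a * np.log(a) - (1 - a) * np.log(1 - a)
            + a * 0.5 * (1 + np.log(2 * np.pi * v)))

def norm_pdf(x, m, v):
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def posterior_after(k, y):
    """Rank-one update of factor k after a scalar response y from stimulating
    only site k (Gaussian conjugacy plus a marginal-likelihood odds ratio)."""
    v_post = 1 / (1 / s2[k] + 1 / noise_var)
    m_post = v_post * (mu[k] / s2[k] + y / noise_var)
    num = alpha[k] * norm_pdf(y, mu[k], s2[k] + noise_var)
    a_post = num / (num + (1 - alpha[k]) * norm_pdf(y, 0.0, noise_var))
    return a_post, v_post

base = entropy(alpha, s2).sum()
scores = np.empty(K)
for k in range(K):
    # sample hypothetical responses y ~ p(y | z-hat_k) under the current posterior
    c = (rng.random(L) < alpha[k]) * rng.normal(mu[k], np.sqrt(s2[k]), L)
    ys = c + rng.normal(0, np.sqrt(noise_var), L)
    exp_H = np.mean([entropy(*posterior_after(k, y)) for y in ys])
    scores[k] = base - entropy(alpha[k], s2[k]) + exp_H   # expected total entropy

z_next = np.zeros(K, dtype=bool)
z_next[np.argsort(scores)[:R]] = True     # greedily stimulate the R best sites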
For large systems of neurons, even the above is too slow to perform in an online setting. For each of the K proposed stimuli ẑ_n^k, to approximate the expected entropy we must compute the variational posterior for M samples of [X_{1:n−1}^⊤ x̂_n^⊤]^⊤ and L samples of y_n (where x̂_n is the random variable corresponding to p(x̂_n | ẑ_n)). Therefore we run the variational inference procedure on the full data on the order of O(MKL) times at each trial. As the system size grows, running the variational inference procedure this many times becomes intractable because the number of iterations needed to converge the coordinate ascent algorithm is dependent on the correlations between the rows of X. This is implicitly dependent on both N, the number of trials, and R, the number of stimulus locations (see Figure 2 in the Appendix). Note that the stronger dependence here is on R; when R = 1 the variational parameter updates become exact and independent across the neurons, and therefore no coordinate ascent is necessary and the runtime becomes linear in K.
We therefore take one last measure to speed up the optimization process by implementing an online
Bayesian approach to updating the variational posterior (in the stimulus selection phase only). Since
the variational posterior of ck and ?k takes the same form as the prior distribution, we can use the
posterior from trial n ? 1 as the prior at trial n, allowing us to effectively summarize the previous
data. In this online setting, when we stimulate only one neuron, only the parameters of that specific
? kn = z
?kn , this results in explicit
neuron change. If during optimization we temporarily assume that x
updates for each variational parameter, with no coordinate ascent iterations required.
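As a usage note, this online scheme amounts to chaining the hypothetical update_single_stim helper from the Section 3.3 sketch, feeding each trial's posterior back in as the next prior (the response values below are made up):

# Chaining the single-site update across trials: each trial's posterior
# becomes the prior for the next.
a, mu, s2 = 0.1, 0.0, 100.0
for y in [4.2, 3.7, 5.1]:
    a, mu, s2 = update_single_stim(y, a, mu, s2, noise_var=6.25)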
In total, the resulting optimization algorithm has a runtime O(KL) with no coordinate ascent algorithms needed. The combined accelerations described in this section result in a speed up of
several orders of magnitude which allows the full inference and optimization procedure to be run
in real time, running at approximately one second per trial in our computing environment for
K = 500, R = 8. It is worth mentioning here that there are several points at which parallelization
could be implemented in the full algorithm. We chose to parallelize over M which distributes the
sampling of X and the running of variational inference for each sample. (Formulae and step-by-step
implementation details are found in Appendix B.)
[Figure 2 panels: a grid of NRE-of-E[c] vs. trial n (0-800) plots, with columns R = 2, 4, 8, 16 and rows σ = 1.0, 2.5, 5.0.]
Figure 2: A comparison of normalized reconstruction error (NRE) over 800 trials in a system with
500 neurons, between random stimulus selection (red, magenta) and our optimal experimental design approach (blue, cyan). The heavy red and blue lines indicate the results when running the
Gibbs sampler at that point in the experiment, and the thinner magenta and cyan lines indicate the
results from variational inference. Results are shown over three noise levels σ = 1, 2.5, 5, and for
multiple numbers of stimulus locations per trial, R = 2, 4, 8, 16. Each plot shows the median and
quartiles over 50 experiments. The error decreases much faster in the optimal design case, over a
wide parameter range.
6 Experiments and results
We ran our inference and optimal experimental design algorithm on data sets generated from the
model described in Section 2. We benchmarked our optimal design algorithm against a sequence
of randomly chosen stimuli, measuring performance by normalized reconstruction error, defined as
kE[c] ? ck2 /kck2 ; we report the variation in our experiments by plotting the median and quartiles.
Baseline results are shown in Figure 2, over a range of values for stimulations per trial R and
baseline postsynaptic noise levels ?. The results here use an informative prior, where we assume the
excitatory or inhibitory identity is known, and we set individual prior connectivity probabilities for
each neuron based on that neuron?s identity and distance from the postsynaptic cell. We choose to
let X be unobserved and let the stimuli Z produce Gaussian ellipsoids which excite neurons that are
located nearby. All model parameters are given in Appendix A.
We see that inference in general performs well. The optimal procedure was able to achieve equivalent reconstruction quality as a random stimulation paradigm in significantly fewer trials when the
number of stimuli per trial and response noise were in an experimentally realistic range (R = 4
and σ = 2.5 being reasonable values). Interestingly, the approximate variational inference methods
performed about as well as the full Gibbs sampler here (at much less computational cost), although
Gibbs sampling seems to break down when R grows too large and the noise level is small, which
may be a consequence of strong, local peaks in the posterior.
As the number of stimuli per trial R increases, we start to see improved weight estimates and faster convergence but a decrease in the relative benefit of optimal design; the random approach "catches up" to the optimal approach as R becomes large. This is consistent with the results of [22], who argue that optimal design can provide only modest gains in performing sparse reconstructions,
if the design vectors x are unconstrained. (Note that these results do not apply directly in our setting if R is small, since in this case x is constrained to be highly sparse, and this is exactly where we see major gains from optimal online designs.)

[Figure 3 panels: "General Prior" and "X Observed", each plotting NRE of E[c] against trial n (0-800).]

Figure 3: The results of inference and optimal design (A) with a single spike-and-slab prior for all connections (prior connection probability of .1, and each slab Gaussian with mean 0 and standard deviation 31.4); and (B) with X observed. Both experiments show median and quartiles range with R = 4 and σ = 2.5.
Finally, we see that we are still able to recover the synaptic strengths when we use a more general
prior as in Figure 3A where we placed a single spike-and-slab prior across all the connections. Since
we assumed the cells' identities were unknown, we used a zero-centered Gaussian for the slab and a prior connection probability of .1. While we allow for stimulus uncertainty, it will likely soon be possible to stimulate multiple neurons with high accuracy. In Figure 3B we see that, as expected, performance improves.
It is helpful to place this observation in the context of [23], which proposed a compressed-sensing
algorithm to infer microcircuitry in experiments like those modeled here. The algorithms proposed
by [23] are based on computing a maximum a posteriori (MAP) estimate of the weights w; note
that to pursue the optimal Bayesian experimental design methods proposed here, it is necessary
to compute (or approximate) the full posterior distribution, not just the MAP estimate. (See, e.g.,
[24] for a related discussion.) In the simulated experiments of [23], stimulating roughly 30 of
500 neurons per trial is found to be optimal; extrapolating from Fig. 2, we would expect a limited
difference between optimal and random designs in this range of R. That said, large values of R
lead to some experimental difficulties: first, stimulating large populations of neurons with high
spatial resolution requires very fined tuned hardware (note that the approach of [23] has not yet
been applied to experimental data, to our knowledge); second, if R is sufficiently large then the
postsynaptic neuron can be easily driven out of a physiologically realistic regime, which in turn
means that the basic linear-Gaussian modeling assumptions used here and in [23] would need to be
modified. We plan to address these issues in more depth in our future work.
7 Future Work
There are several improvements we would like to explore in developing this model and algorithm
further. First, the implementation of an inference algorithm which performs well on the full model
such that we can recover the synaptic weights, the time constants, and the delays would allow us to
avoid compressing the responses to scalar values and recover more information about the system.
Also, it may be necessary to improve the noise model as we currently assume that there are no
spontaneous synaptic events which will confound the determination of each connection?s strength.
Finally, in a recent paper, [25], a simple adaptive compressive sensing algorithm was presented
which challenges the results of [22]. It would be worth exploring whether their algorithm would be
applicable to our problem.
Acknowledgements
This material is based upon work supported by, or in part by, the U. S. Army Research Laboratory
and the U. S. Army Research Office under contract number W911NF-12-1-0594 and an NSF CAREER grant. We would also like to thank Rafael Yuste and Jan Hirtz for helpful discussions, and
our anonymous reviewers.
References
[1] R. Reid, "From Functional Architecture to Functional Connectomics," Neuron, vol. 75, pp. 209-217, July 2012.
[2] M. Ashby and J. Isaac, "Maturation of a recurrent excitatory neocortical circuit by experience-dependent unsilencing of newly formed dendritic spines," Neuron, vol. 70, no. 3, pp. 510-521, 2011.
[3] E. Fino and R. Yuste, "Dense Inhibitory Connectivity in Neocortex," Neuron, vol. 69, pp. 1188-1203, Mar. 2011.
[4] V. Nikolenko, K. E. Poskanzer, and R. Yuste, "Two-photon photostimulation and imaging of neural circuits," Nat Meth, vol. 4, pp. 943-950, Nov. 2007.
[5] A. M. Packer, D. S. Peterka, J. J. Hirtz, R. Prakash, K. Deisseroth, and R. Yuste, "Two-photon optogenetics of dendritic spines and neural circuits," Nat Meth, vol. 9, pp. 1202-1205, Dec. 2012.
[6] A. M. Packer and R. Yuste, "Dense, unspecific connectivity of neocortical parvalbumin-positive interneurons: A canonical microcircuit for inhibition?," The Journal of Neuroscience, vol. 31, no. 37, pp. 13260-13271, 2011.
[7] B. Barbour, N. Brunel, V. Hakim, and J.-P. Nadal, "What can we learn from synaptic weight distributions?," Trends in Neurosciences, vol. 30, pp. 622-629, Dec. 2007.
[8] C. Holmgren, T. Harkany, B. Svennenfors, and Y. Zilberter, "Pyramidal cell communication within local networks in layer 2/3 of rat neocortex," The Journal of Physiology, vol. 551, no. 1, pp. 139-153, 2003.
[9] J. Kozloski, F. Hamzei-Sichani, and R. Yuste, "Stereotyped position of local synaptic targets in neocortex," Science, vol. 293, no. 5531, pp. 868-872, 2001.
[10] R. B. Levy and A. D. Reyes, "Spatial profile of excitatory and inhibitory synaptic connectivity in mouse primary auditory cortex," The Journal of Neuroscience, vol. 32, no. 16, pp. 5609-5619, 2012.
[11] R. Perin, T. K. Berger, and H. Markram, "A synaptic organizing principle for cortical neuronal groups," Proceedings of the National Academy of Sciences, vol. 108, no. 13, pp. 5419-5424, 2011.
[12] S. Song, P. J. Sjöström, M. Reigl, S. Nelson, and D. B. Chklovskii, "Highly nonrandom features of synaptic connectivity in local cortical circuits," PLoS Biology, vol. 3, p. e68, Mar. 2005.
[13] E. I. George and R. E. McCulloch, "Variable selection via Gibbs sampling," Journal of the American Statistical Association, vol. 88, no. 423, pp. 881-889, 1993.
[14] T. J. Mitchell and J. J. Beauchamp, "Bayesian variable selection in linear regression," Journal of the American Statistical Association, vol. 83, no. 404, pp. 1023-1032, 1988.
[15] S. Mohamed, K. A. Heller, and Z. Ghahramani, "Bayesian and L1 approaches to sparse unsupervised learning," CoRR, vol. abs/1106.1157, 2011.
[16] C. M. Bishop, Pattern Recognition and Machine Learning. Springer, 2007.
[17] P. Carbonetto and M. Stephens, "Scalable variational inference for Bayesian variable selection in regression, and its accuracy in genetic association studies," Bayesian Analysis, vol. 7, no. 1, pp. 73-108, 2012.
[18] M. Titsias and M. Lázaro-Gredilla, "Spike and Slab Variational Inference for Multi-Task and Multiple Kernel Learning," in Advances in Neural Information Processing Systems 24, pp. 2339-2347, 2011.
[19] Y. Dodge, V. Fedorov, and H. Wynn, eds., Optimal Design and Analysis of Experiments. North Holland, 1988.
[20] D. J. C. MacKay, "Information-based objective functions for active data selection," Neural Comput., vol. 4, pp. 590-604, July 1992.
[21] L. Paninski, "Asymptotic Theory of Information-Theoretic Experimental Design," Neural Comput., vol. 17, pp. 1480-1507, July 2005.
[22] E. Arias-Castro, E. J. Candès, and M. A. Davenport, "On the fundamental limits of adaptive sensing," IEEE Transactions on Information Theory, vol. 59, no. 1, pp. 472-481, 2013.
[23] T. Hu, A. Leonardo, and D. Chklovskii, "Reconstruction of Sparse Circuits Using Multi-neuronal Excitation (RESCUME)," in Advances in Neural Information Processing Systems 22, pp. 790-798, 2009.
[24] S. Ji and L. Carin, "Bayesian compressive sensing and projection optimization," in Proceedings of the 24th International Conference on Machine Learning, ICML '07, (New York, NY, USA), pp. 377-384, ACM, 2007.
[25] M. Malloy and R. D. Nowak, "Near-optimal adaptive compressed sensing," CoRR, vol. abs/1306.6239, 2013.
Sparse Overlapping Sets Lasso for Multitask
Learning and its Application to fMRI Analysis
Nikhil S. Rao†
[email protected]
Christopher R. Cox#
[email protected]
Robert D. Nowak†
[email protected]
Timothy T. Rogers#
[email protected]
† Department of Electrical and Computer Engineering, # Department of Psychology
University of Wisconsin–Madison
Abstract
Multitask learning can be effective when features useful in one task are also useful
for other tasks, and the group lasso is a standard method for selecting a common
subset of features. In this paper, we are interested in a less restrictive form of multitask learning, wherein (1) the available features can be organized into subsets
according to a notion of similarity and (2) features useful in one task are similar, but not necessarily identical, to the features best suited for other tasks. The
main contribution of this paper is a new procedure called Sparse Overlapping Sets
(SOS) lasso, a convex optimization that automatically selects similar features for
related learning tasks. Error bounds are derived for SOSlasso and its consistency
is established for squared error loss. In particular, SOSlasso is motivated by multisubject fMRI studies in which functional activity is classified using brain voxels
as features. Experiments with real and synthetic data demonstrate the advantages
of SOSlasso compared to the lasso and group lasso.
1 Introduction
Multitask learning exploits the relationships between several learning tasks in order to improve
performance, which is especially useful if a common subset of features is useful for all tasks at
hand. The group lasso (Glasso) [19, 8] is naturally suited for this situation: if a feature is selected
for one task, then it is selected for all tasks. This may be too restrictive in many applications, and
this motivates a less rigid approach to multitask feature selection. Suppose that the available features
can be organized into overlapping subsets according to a notion of similarity, and that the features
useful in one task are similar, but not necessarily identical, to those best suited for other tasks. In
other words, a feature that is useful for one task suggests that the subset it belongs to may contain
the features useful in other tasks (Figure 1).
In this paper, we introduce the sparse overlapping sets lasso (SOSlasso), a convex program to recover the sparsity patterns corresponding to the situations explained above. SOSlasso generalizes
lasso [16] and Glasso, effectively spanning the range between these two well-known procedures.
SOSlasso is capable of exploiting the similarities between useful features across tasks, but unlike
Glasso it does not force different tasks to use exactly the same features. It produces sparse solutions,
but unlike lasso it encourages similar patterns of sparsity across tasks. Sparse group lasso [14] is
a special case of SOSlasso that only applies to disjoint sets, a significant limitation when features
cannot be easily partitioned, as is the case of our motivating example in fMRI. The main contribution of this paper is a theoretical analysis of SOSlasso, which also covers sparse group lasso as a
special case (further differentiating us from [14]). The performance of SOSlasso is analyzed, error
bounds are derived for general loss functions, and its consistency is shown for squared error loss.
Experiments with real and synthetic data demonstrate the advantages of SOSlasso relative to lasso
and Glasso.
1.1 Sparse Overlapping Sets
SOSlasso encourages sparsity patterns that are similar, but not identical, across tasks. This is accomplished by decomposing the features of each task into groups G1 . . . GM , where M is the same
for each task, and Gi is a set of features that can be considered similar across tasks. Conceptually,
SOSlasso first selects subsets that are most useful for all tasks, and then identifies a unique sparse
solution for each task drawing only from features in the selected subsets. In the fMRI application
discussed later, the subsets are simply clusters of adjacent spatial data points (voxels) in the brains of
multiple subjects. Figure 1 shows an example of the patterns that typically arise in sparse multitask
learning applications, where rows indicate features and columns correspond to tasks.
Past work has focused on recovering variables that exhibit within and across group sparsity, when
the groups do not overlap [14], finding application in genetics, handwritten character recognition
[15] and climate and oceanography [2]. Along related lines, the exclusive lasso [21] can be used
when it is explicitly known that variables in certain sets are negatively correlated.
[Figure 1 panels: (a) sparse; (b) group sparse; (c) group sparse plus sparse; (d) group sparse and sparse.]
Figure 1: A comparison of different sparsity patterns. (a) shows a standard sparsity pattern. An
example of the group sparse patterns promoted by Glasso [19] is shown in (b). In (c), we show the
patterns considered in [6]. Finally, in (d), we show the patterns that interest us in this paper.
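To make this taxonomy concrete, the following small sketch (our illustration, not code from the paper) generates the four pattern types of Figure 1 as binary feature-by-task masks; disjoint groups are used purely to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(1)
p, T, gsize = 20, 6, 5
groups = [np.arange(i, i + gsize) for i in range(0, p, gsize)]  # disjoint groups for illustration

sparse = rng.random((p, T)) < 0.15                  # (a) independent nonzeros per task
group_sparse = np.zeros((p, T), dtype=bool)
for g in rng.choice(len(groups), size=2, replace=False):
    group_sparse[groups[g], :] = True               # (b) whole groups active in every task
gs_plus_sparse = group_sparse | sparse              # (c) group sparse plus sparse, as in [6]
gs_and_sparse = group_sparse & (rng.random((p, T)) < 0.5)  # (d) sparse within shared active groups
```

Pattern (d) is the one SOSlasso targets: tasks agree on which groups matter but differ in the exact features used within them.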
1.2 fMRI Applications
In psychological studies involving fMRI, multiple participants are scanned while subjected to exactly the same experimental manipulations. Cognitive neuroscientists are interested in identifying
the patterns of activity associated with different cognitive states, and construct a model of the activity
that accurately predicts the cognitive state evoked on novel trials. In these datasets, it is reasonable
to expect that the same general areas of the brain will respond to the manipulation in every participant. However, the specific patterns of activity in these regions will vary, both because neural codes
can vary by participant [4] and because brains vary in size and shape, rendering neuroanatomy only
an approximate guide to the location of relevant information across individuals. In short, a voxel
useful for prediction in one participant suggests the general anatomical neighborhood where useful
voxels may be found, but not the precise voxel. While logistic Glasso [17], lasso [13], and the elastic net penalty [12] have been applied to neuroimaging data, these methods do not explicitly take
into account both the common macrostructure and the differences in microstructure across brains.
SOSlasso, in contrast, lends itself well to such a scenario, as we will see from our experiments.
1.3 Organization
The rest of the paper is organized as follows: in Section 2, we outline the notations that we will
use and formally set up the problem. We also introduce the SOSlasso regularizer. We derive certain key properties of the regularizer in Section 3. In Section 4, we specialize the problem to the
multitask linear regression setting (2), and derive consistency rates for the same, leveraging ideas
from [9]. We outline experiments performed on simulated data in Section 5. In this section, we also
perform logistic regression on fMRI data, and argue that the use of the SOSlasso yields interpretable
multivariate solutions compared to Glasso and lasso.
2 Sparse Overlapping Sets Lasso
We formalize the notation used in the sequel. Lowercase and uppercase bold letters indicate vectors and matrices, respectively. We assume a multitask learning framework, with a data matrix $\Phi_t \in \mathbb{R}^{n \times p}$ for each task $t \in \{1, 2, \ldots, T\}$. We assume there exists a vector $x^\star_t \in \mathbb{R}^p$ such that the measurements obtained are of the form $y_t = \Phi_t x^\star_t + \eta_t$, with $\eta_t \sim \mathcal{N}(0, \sigma^2 I)$. Let $X^\star := [x^\star_1\ x^\star_2\ \cdots\ x^\star_T] \in \mathbb{R}^{p \times T}$. Suppose we are given $M$ (possibly overlapping) groups $\tilde{\mathcal{G}} = \{\tilde{G}_1, \tilde{G}_2, \ldots, \tilde{G}_M\}$, so that $\tilde{G}_i \subset \{1, 2, \ldots, p\}$ for all $i$, of maximum size $B$. These groups contain sets of "similar" features, the notion of similarity being application dependent. We assume that all but $k \ll M$ groups are identically zero. Among the active groups, we further assume that at most a fraction $\alpha \in (0, 1)$ of the coefficients per group are nonzero. We consider the following optimization program in this paper:

$$\widehat{X} = \arg\min_{x} \left\{ \sum_{t=1}^{T} \mathcal{L}_{\Phi_t}(x_t) + \lambda_n h(x) \right\} \qquad (1)$$
where $x = [x_1^T\ x_2^T\ \cdots\ x_T^T]^T$, $h(x)$ is a regularizer, and $\mathcal{L}_t := \mathcal{L}_{\Phi_t}(x_t)$ denotes the loss function, whose value depends on the data matrix $\Phi_t$. We consider least squares and logistic loss functions. In the least squares setting, we have $\mathcal{L}_t = \frac{1}{2n}\|y_t - \Phi_t x_t\|^2$. We reformulate the optimization problem (1) with the least squares loss as

$$\widehat{x} = \arg\min_{x}\ \frac{1}{2n}\|y - \Phi x\|_2^2 + \lambda_n h(x) \qquad (2)$$

where $y = [y_1^T\ y_2^T\ \cdots\ y_T^T]^T$ and the block diagonal matrix $\Phi$ is formed by block concatenating the $\Phi_t$'s. We use this reformulation for ease of exposition (see also [8] and references therein). Note that $x \in \mathbb{R}^{Tp}$, $y \in \mathbb{R}^{Tn}$, and $\Phi \in \mathbb{R}^{Tn \times Tp}$. We also define $\mathcal{G} = \{G_1, G_2, \ldots, G_M\}$ to be the set of groups defined on $\mathbb{R}^{Tp}$ formed by aggregating the rows of $X$ that were originally in $\tilde{\mathcal{G}}$, so that $x$ is composed of groups $G \in \mathcal{G}$.
We next define a regularizer $h$ that promotes sparsity both within and across overlapping sets of similar features:

$$h(x) = \inf_{\mathcal{W}} \sum_{G \in \mathcal{G}} \big( \mu_G \|w_G\|_2 + \|w_G\|_1 \big) \quad \text{s.t.} \quad \sum_{G \in \mathcal{G}} w_G = x \qquad (3)$$

where the $\mu_G > 0$ are constants that balance the tradeoff between the group norms and the $\ell_1$ norm. Each $w_G$ has the same size as $x$, with support restricted to the variables indexed by group $G$. $\mathcal{W}$ is a set of vectors, where each vector has support restricted to one of the groups $G \in \mathcal{G}$:

$$\mathcal{W} = \{ w_G \in \mathbb{R}^{Tp} \mid [w_G]_i = 0 \ \text{if}\ i \notin G \}$$

where $[w_G]_i$ is the $i$th coefficient of $w_G$. The SOSlasso is the optimization in (1) with $h(x)$ as defined in (3).
We say the set of vectors $\{w_G\}$ is an optimal decomposition of $x$ if it achieves the infimum in (3). The objective function in (3) is convex and coercive; hence, for every $x$, an optimal decomposition always exists. As $\mu_G \to \infty$, the $\ell_1$ term becomes redundant, reducing $h(x)$ to the overlapping group lasso penalty introduced in [5] and studied in [10, 11]. When $\mu_G \to 0$, the overlapping group lasso term vanishes and $h(x)$ reduces to the lasso penalty. We consider $\mu_G = 1$ for all $G$. All the results in the paper can be easily modified to incorporate different settings of the $\mu_G$.
Support | Values | $\sum_G \|x_G\|_2$ | $\|x\|_1$ | $\sum_G (\|x_G\|_2 + \|x_G\|_1)$
{1, 4, 9} | {3, 4, 7} | 12 | 14 | 26
{1, 2, 3, 4, 5} | {2, 5, 2, 4, 5} | 8.602 | 18 | 26.602
{1, 3, 4} | {3, 4, 7} | 8.602 | 14 | 22.602

Table 1: Different instances of a 10-d vector and their corresponding norms.
The example in Table 1 gives an insight into the kind of sparsity patterns preferred by the function
h(x). The optimization problems (1) and (2) will prefer solutions that have a small value of h(·).
Consider the 3 instances of $x \in \mathbb{R}^{10}$ in Table 1 and the corresponding group lasso, $\ell_1$, and h(x) values. The vector is assumed to be made up of two groups, G1 = {1, 2, 3, 4, 5} and G2 = {6, 7, 8, 9, 10}. h(x) is smallest when the support set is sparse within groups and when only one of the two groups is selected. The $\ell_1$ norm does not take into account sparsity across groups, while the group lasso norm does not take into account sparsity within groups.
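For intuition, the sketch below (ours, not the authors' code) reproduces the h(x) column of Table 1. Because the two groups in this toy example do not overlap, the optimal decomposition in (3) is unique: each $w_G$ is simply $x$ restricted to $G$, so the infimum reduces to a direct sum. With overlapping groups this shortcut no longer applies.

```python
import numpy as np

def h_disjoint(x, groups):
    # h(x) = sum_G (||x_G||_2 + ||x_G||_1); valid only for disjoint groups,
    # where the infimum in Eq. (3) is attained by w_G = x restricted to G.
    return sum(np.linalg.norm(x[g]) + np.abs(x[g]).sum() for g in groups)

groups = [np.arange(0, 5), np.arange(5, 10)]   # G1 = {1,...,5}, G2 = {6,...,10} (0-indexed)
rows = [([0, 3, 8], [3, 4, 7]),                # support {1, 4, 9}
        ([0, 1, 2, 3, 4], [2, 5, 2, 4, 5]),    # support {1, 2, 3, 4, 5}
        ([0, 2, 3], [3, 4, 7])]                # support {1, 3, 4}
for support, values in rows:
    x = np.zeros(10)
    x[support] = values
    print(round(h_disjoint(x, groups), 3))     # 26.0, 26.602, 22.602
```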
To solve (1) and (2) with the regularizer proposed in (3), we use the covariate duplication method of [5] to reduce the problem to a non-overlapping sparse group lasso problem. We then use proximal point methods [7], in conjunction with the MALSAR [20] package, to solve the optimization problem.
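A minimal sketch of this pipeline (our reconstruction; MALSAR's actual interface differs, and all function names here are ours): after duplicating each covariate once per group that contains it, the groups are disjoint, and the proximal operator of the non-overlapping penalty factors over groups into coordinate-wise soft-thresholding followed by group-wise shrinkage.

```python
import numpy as np

def prox_sparse_group(w, groups, lam, mu=1.0):
    # prox of lam * sum_G (||w_G||_1 + mu * ||w_G||_2) for DISJOINT groups:
    # soft-threshold each coordinate, then soft-threshold each group norm.
    z = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)
    for g in groups:
        ng = np.linalg.norm(z[g])
        z[g] = 0.0 if ng <= lam * mu else (1.0 - lam * mu / ng) * z[g]
    return z

def soslasso_ls(Phi, y, groups, lam, step, iters=500):
    # proximal gradient descent for (1/2n)||y - Phi w||^2 + lam * h(w)
    # on the duplicated-covariate problem, where `groups` are disjoint.
    n, p = Phi.shape
    w = np.zeros(p)
    for _ in range(iters):
        grad = Phi.T @ (Phi @ w - y) / n
        w = prox_sparse_group(w - step * grad, groups, step * lam)
    return w
```

Mapping the duplicated solution back to the original coordinates sums the copies of each replicated feature, recovering $x = \sum_G w_G$.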
3 Error Bounds for SOSlasso with General Loss Functions
We derive certain key properties of the regularizer h(·) in (3), independent of the loss function used.

Lemma 3.1 The function h(x) in (3) is a norm.

The proof follows from basic properties of norms. The only subtlety is the triangle inequality: if $\{w_G\}$ and $\{v_G\}$ are optimal decompositions of $x$ and $y$, respectively, then $\{w_G + v_G\}$ need not be an optimal decomposition of $x + y$; it is, however, a feasible one, which suffices. For a detailed proof, please refer to the supplementary material.
The dual norm of h(x) can be bounded as follows:

$$
\begin{aligned}
h^*(u) &= \max_x \, x^T u \quad \text{s.t.}\ h(x) \le 1 \\
&= \max_{\mathcal{W}} \sum_{G \in \mathcal{G}} w_G^T u_G \quad \text{s.t.}\ \sum_{G \in \mathcal{G}} \big(\|w_G\|_2 + \|w_G\|_1\big) \le 1 \\
&\overset{(i)}{\le} \max_{\mathcal{W}} \sum_{G \in \mathcal{G}} w_G^T u_G \quad \text{s.t.}\ \sum_{G \in \mathcal{G}} 2\|w_G\|_2 \le 1 \\
&= \max_{\mathcal{W}} \sum_{G \in \mathcal{G}} w_G^T u_G \quad \text{s.t.}\ \sum_{G \in \mathcal{G}} \|w_G\|_2 \le \tfrac{1}{2}
\end{aligned}
$$

$$\Rightarrow\quad h^*(u) \le \frac{1}{2} \max_{G \in \mathcal{G}} \|u_G\|_2 \qquad (4)$$
(i) follows from the fact that the constraint set in (i) is a superset of the constraint set in the previous line, since $\|a\|_2 \le \|a\|_1$. (4) follows from noting that the maximum is attained by setting $w_{G^\star} = \frac{u_{G^\star}}{2\|u_{G^\star}\|_2}$, where $G^\star = \arg\max_{G \in \mathcal{G}} \|u_G\|_2$. The bound (4) is far more tractable than the actual dual norm, and will be useful in our derivations below. Since h(·) is a norm, we can apply methods developed in [9] to derive consistency rates for the optimization problems (1) and (2). We will use the same notation as [9] wherever possible.
Definition 3.2 A norm h(·) is decomposable with respect to the subspace pair $sA \subseteq sB$ if $h(a + b) = h(a) + h(b)$ for all $a \in sA$, $b \in sB^\perp$.
Lemma 3.3 Let $x^\star \in \mathbb{R}^p$ be a vector that can be decomposed into (overlapping) groups with within-group sparsity. Let $\mathcal{G}^\star \subseteq \mathcal{G}$ be the set of active groups of $x^\star$. Let $S = \mathrm{supp}(x^\star)$ denote the support set of $x^\star$. Let $sA$ be the subspace spanned by the coordinates indexed by $S$, and let $sB = sA$. Then the norm in (3) is decomposable with respect to $(sA, sB)$.
The result follows in a straightforward way from noting that the supports of decompositions of vectors in $sA$ and $sB^\perp$ do not overlap. We defer the proof to the supplementary material.
Definition 3.4 Given a subspace $sB$, the subspace compatibility constant with respect to a norm $\|\cdot\|$ is given by

$$\Psi(sB) = \sup_{x \in sB \setminus \{0\}} \frac{h(x)}{\|x\|}.$$
Lemma 3.5 Consider a vector $x$ that can be decomposed into active groups $\mathcal{G}^\star \subseteq \mathcal{G}$. Suppose the maximum group size is $B$, and assume that a fraction $\alpha \in (0, 1)$ of the coordinates in each active group is nonzero. Then

$$h(x) \le \big(1 + \sqrt{B\alpha}\big)\sqrt{|\mathcal{G}^\star|}\;\|x\|_2.$$

Proof For any vector $x$ with $\mathrm{supp}(x) \subseteq \mathcal{G}^\star$, there exists a representation $x = \sum_{G \in \mathcal{G}^\star} w_G$ such that the supports of the different $w_G$ do not overlap. Then

$$h(x) \le \sum_{G \in \mathcal{G}^\star} \big(\|w_G\|_2 + \|w_G\|_1\big) \le \big(1 + \sqrt{B\alpha}\big)\sum_{G \in \mathcal{G}^\star} \|w_G\|_2 \le \big(1 + \sqrt{B\alpha}\big)\sqrt{|\mathcal{G}^\star|}\;\|x\|_2,$$

where the second inequality uses $\|w_G\|_1 \le \sqrt{B\alpha}\,\|w_G\|_2$ for a group with at most $B\alpha$ nonzeros, and the last follows from the Cauchy–Schwarz inequality together with the disjointness of the supports.
We see that $(1 + \sqrt{B\alpha})\sqrt{|\mathcal{G}^\star|}$ (Lemma 3.5) gives an upper bound on the subspace compatibility constant, with respect to the $\ell_2$ norm, for the subspace indexed by the support of the vector, which is contained in the span of the union of the groups in $\mathcal{G}^\star$.
Definition 3.6 For a given set $S$ and a given vector $x^\star$, the loss function $\mathcal{L}_\Phi(x)$ satisfies the Restricted Strong Convexity (RSC) condition with parameter $\kappa$ and tolerance $\tau$ if

$$\mathcal{L}_\Phi(x^\star + \Delta) - \mathcal{L}_\Phi(x^\star) - \langle \nabla\mathcal{L}_\Phi(x^\star), \Delta \rangle \ge \kappa \|\Delta\|_2^2 - \tau^2(x^\star) \quad \forall\, \Delta \in S.$$

In this paper, we consider vectors $x^\star$ that lie in exactly $k \ll M$ groups and display within-group sparsity. This implies that the tolerance $\tau(x^\star) = 0$, and we will ignore this term henceforth.
We also define the following set, which will be used in the sequel:

$$C(sA, sB, x^\star) := \{\Delta \in \mathbb{R}^p \mid h(\Pi_{sB^\perp}\Delta) \le 3h(\Pi_{sB}\Delta) + 4h(\Pi_{sA^\perp}x^\star)\} \qquad (5)$$

where $\Pi_{sA}(\cdot)$ denotes the projection onto the subspace $sA$. Based on the results above, we can now apply a result from [9] to the SOSlasso:
Theorem 3.7 (Corollary 1 in [9]) Consider a convex and differentiable loss function such that RSC holds with constant $\kappa$ and $\tau = 0$ over (5), and a norm h(·) decomposable over the sets $sA$ and $sB$. For the optimization program in (1), using the parameter $\lambda_n \ge 2h^*(\nabla \mathcal{L}_\Phi(x^\star))$, any optimal solution $\widehat{x}_{\lambda_n}$ of (1) satisfies

$$\|\widehat{x}_{\lambda_n} - x^\star\|^2 \le \frac{9\lambda_n^2}{\kappa^2}\,\Psi^2(sB).$$
The result above is a general bound on the error of the lasso with sparse overlapping sets. Note that the regularization parameter $\lambda_n$ as well as the RSC constant $\kappa$ depend on the loss function $\mathcal{L}_\Phi(x)$. Convergence for logistic regression settings may be derived using the methods in [1]. In the next section, we consider the least squares loss (2) and show that the SOSlasso estimate is consistent.
4 Consistency of SOSlasso with Squared Error Loss
We first need to bound the dual norm of the gradient of the loss function, so as to bound $\lambda_n$. Consider $\mathcal{L} := \mathcal{L}_\Phi(x) = \frac{1}{2n}\|y - \Phi x\|^2$. The gradient of the loss function with respect to $x$ is given by $\nabla\mathcal{L} = \frac{1}{n}\Phi^T(\Phi x - y) = \frac{1}{n}\Phi^T \eta$, where $\eta = [\eta_1^T\ \eta_2^T\ \cdots\ \eta_T^T]^T$ (see Section 2). Our goal now is to find an upper bound on the quantity $h^*(\nabla\mathcal{L})$, which from (4) satisfies

$$h^*(\nabla\mathcal{L}) \le \frac{1}{2}\max_{G \in \mathcal{G}} \|\nabla\mathcal{L}_G\|_2 = \frac{1}{2n}\max_{G \in \mathcal{G}} \|\Phi_G^T \eta\|_2,$$

where $\Phi_G$ is the matrix $\Phi$ restricted to the columns indexed by the group $G$. We will prove an upper bound on this quantity in the course of the results that follow.
Since $\eta \sim \mathcal{N}(0, \sigma^2 I)$, we have $\Phi_G^T \eta \sim \sigma\,\mathcal{N}(0, \Phi_G^T \Phi_G)$. Defining $\sigma_{mG} := \sigma_{\max}\{\Phi_G\}$ to be the maximum singular value of $\Phi_G$, we have $\|\Phi_G^T \eta\|_2^2 \le \sigma^2 \sigma_{mG}^2 \|\gamma\|_2^2$, where $\gamma \sim \mathcal{N}(0, I_{|G|})$, so that $\|\gamma\|_2^2 \sim \chi^2_{|G|}$, where $\chi^2_d$ denotes a chi-squared random variable with $d$ degrees of freedom. This allows us to work with the more tractable chi-squared random variable when bounding the dual norm of $\nabla\mathcal{L}$. The next lemma provides a bound on the maximum of $\chi^2$ random variables.
Lemma 4.1 Let $z_1, z_2, \ldots, z_M$ be chi-squared random variables with $d$ degrees of freedom. Then, for any constant $c > 1$,

$$P\Big(\max_{i=1,2,\ldots,M} z_i \le c^2 d\Big) \ge 1 - \exp\Big(\log(M) - \frac{(c-1)^2 d}{2}\Big).$$

Proof From the chi-squared tail bound in [3], $P(z_i \ge c^2 d) \le \exp\big(-\frac{(c-1)^2 d}{2}\big)$. The result follows from a union bound and inverting the expression.
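A quick Monte Carlo sanity check of Lemma 4.1 (our own illustration; the constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, c, trials = 500, 120, 1.5, 2000
max_z = rng.chisquare(d, size=(trials, M)).max(axis=1)  # max of M chi-squared draws, per trial
empirical = np.mean(max_z <= c**2 * d)
bound = 1.0 - np.exp(np.log(M) - (c - 1.0)**2 * d / 2.0)
print(empirical, bound)  # the empirical frequency should dominate the bound
```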
Lemma 4.2 Consider the loss function $\mathcal{L} := \frac{1}{2n}\sum_{t=1}^{T} \|y_t - \Phi_t x_t\|^2 = \frac{1}{2n}\|y - \Phi x\|^2$, with the $\Phi_t$'s deterministic and the measurements corrupted by AWGN of variance $\sigma^2$. For the regularizer in (3), the dual norm of the gradient of the loss function is bounded as

$$h^*(\nabla\mathcal{L})^2 \le \frac{\sigma^2 \sigma_m^2}{4}\,\frac{\log(M) + TB}{n}$$

with probability at least $1 - c_1\exp(-c_2 n)$, for $c_1, c_2 > 0$, and where $\sigma_m = \max_{G \in \mathcal{G}} \sigma_{mG}$.
Proof Let $\zeta \sim \chi^2_{T|G|}$, and note that $T|G| \le TB$. We begin with the upper bound (4) on the dual norm of the regularizer:

$$
h^*(\nabla\mathcal{L})^2 \overset{(i)}{\le} \frac{1}{4}\max_{G \in \mathcal{G}} \frac{1}{n^2}\|\Phi_G^T \eta\|_2^2 \overset{(ii)}{\le} \frac{\sigma^2}{4}\max_{G \in \mathcal{G}} \frac{\sigma_{mG}^2}{n^2}\,\zeta \le \frac{\sigma^2\sigma_m^2}{4}\,\frac{\zeta}{n^2} \overset{(iii)}{\le} \frac{\sigma^2\sigma_m^2}{4}\,\frac{c_n^2\, TB}{n^2}
\quad \text{w.p. } 1 - \exp\Big(\log(M) - \frac{(c_n - 1)^2\, TB}{2}\Big),
$$

where (i) follows from the formula for the gradient of the loss function and the fact that the square of the maximum of nonnegative numbers is the maximum of their squares; in (ii) we have defined $\sigma_m = \max_G \sigma_{mG}$; and (iii) uses Lemma 4.1. We then set

$$c_n^2 = \frac{n\,(\log(M) + TB)}{TB}$$

to obtain the result.
We combine the results developed so far to derive the following consistency result for the SOSlasso with the least squares loss function.
Theorem 4.3 Suppose we obtain linear measurements of a sparse overlapping grouped matrix $X^\star \in \mathbb{R}^{p \times T}$, corrupted by AWGN of variance $\sigma^2$. Suppose the matrix $X^\star$ can be decomposed into $M$ possibly overlapping groups of maximum size $B$, out of which $k$ are active. Furthermore, assume that a fraction $\alpha \in (0, 1]$ of the coefficients are nonzero in each active group. Consider the following vectorized SOSlasso multitask regression problem (2):

$$\widehat{x} = \arg\min_x\ \Big\{\frac{1}{2n}\|y - \Phi x\|_2^2 + \lambda_n h(x)\Big\},
\qquad h(x) = \inf_{\mathcal{W}} \sum_{G \in \mathcal{G}} \big(\|w_G\|_2 + \|w_G\|_1\big)\ \ \text{s.t.}\ \sum_{G \in \mathcal{G}} w_G = x.$$

Suppose the data matrices $\Phi_t$ are nonrandom, and the loss function satisfies the restricted strong convexity assumption with parameter $\kappa$. Then, for

$$\lambda_n \ge \sqrt{\frac{\sigma^2\sigma_m^2\,\big(\log(M) + TB\big)}{4n}},$$

the following holds with probability at least $1 - c_1\exp(-c_2 n)$, with $c_1, c_2 > 0$:

$$\|\widehat{x} - x^\star\|^2 \le \frac{9}{4}\cdot\frac{\sigma^2\sigma_m^2\,\big(1 + \sqrt{TB\alpha}\big)^2\, k\,\big(\log(M) + TB\big)}{n\,\kappa^2},$$

where we define $\sigma_m := \max_{G \in \mathcal{G}} \sigma_{\max}\{\Phi_G\}$.
Proof Follows from substituting the results of Lemma 3.5 and Lemma 4.2 into Theorem 3.7.

From [9], we see that the convergence rate matches that of the group lasso, with an additional multiplicative factor $\alpha$. This stems from the fact that the signal has a sparse structure "embedded" within a group sparse structure. Visualizing the optimization problem as solving a lasso within a group lasso framework lends some intuition to this result. Note that since $\alpha < 1$, this bound is much smaller than that of the standard group lasso.
5 Experiments and Results
5.1 Synthetic Data, Gaussian Linear Regression
For T = 20 tasks, we define an N = 2002 element vector divided into M = 500 groups of size B = 6. Each group overlaps with its neighboring groups ($G_1 = \{1, 2, \ldots, 6\}$, $G_2 = \{5, 6, \ldots, 10\}$, $G_3 = \{9, 10, \ldots, 14\}$, ...). 20 of these groups were activated uniformly at random and populated from a uniform $[-1, 1]$ distribution. A proportion $\alpha$ of these coefficients with largest magnitude were retained as the true signal. For each task, we obtain 250 linear measurements using an $\mathcal{N}(0, \frac{1}{250}I)$ matrix. We then corrupt each measurement with Additive White Gaussian Noise (AWGN) and assess signal recovery in terms of Mean Squared Error (MSE). The regularization parameter was clairvoyantly picked to minimize the MSE over a range of parameter values. The results of applying lasso, the standard latent group lasso [5, 10], and our SOSlasso to these data are plotted in Figure 2(a) (varying $\sigma$, with $\alpha = 0.2$) and Figure 2(b) (varying $\alpha$, with $\sigma = 0.1$). Each point in Figures 2(a) and 2(b) is the average of 100 trials, where each trial is based on a new random instance of $X^\star$ and the Gaussian data matrices.
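A sketch of this data-generation process (our reconstruction under the stated parameters; the authors' exact script is not available) is:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M, B, stride, sigma = 20, 2002, 500, 6, 4, 0.1
groups = [np.arange(s, s + B) for s in range(0, stride * M, stride)]  # {1..6}, {5..10}, ... (0-indexed)
active = rng.choice(M, size=20, replace=False)  # active groups shared across tasks

def make_task(alpha, n=250):
    x = np.zeros(N)
    for g in active:
        vals = rng.uniform(-1.0, 1.0, size=B)
        keep = np.argsort(np.abs(vals))[-max(1, round(alpha * B)):]  # largest-magnitude fraction alpha
        x[groups[g][keep]] = vals[keep]
    Phi = rng.normal(0.0, np.sqrt(1.0 / n), size=(n, N))
    y = Phi @ x + sigma * rng.normal(size=n)
    return Phi, x, y

tasks = [make_task(alpha=0.2) for _ in range(T)]
```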
[Figure 2: MSE curves for lasso, Glasso, and SOSlasso; panel (a) varies the noise level $\sigma$ (x-axis 0 to 0.2), panel (b) varies $1 - \alpha$ (x-axis 0 to 1), and panel (c) shows a sample sparsity pattern.]
Figure 2: As the noise is increased (a), our proposed penalty function (SOSlasso) allows us to recover the true coefficients more accurately than the group lasso (Glasso). Also, when $\alpha$ is large, the active groups are not sparse, and the standard overlapping group lasso outperforms the other methods. However, as $\alpha$ decreases, the method we propose outperforms the group lasso (b). (c) shows a toy sparsity pattern, with different colors denoting different overlapping groups.
5.2 The SOSlasso for fMRI
In this experiment, we compared SOSlasso, lasso, and Glasso in an analysis of the star-plus dataset [18]. Six subjects made judgements that involved processing 40 sentences and 40 pictures while their brains were scanned at half-second intervals using fMRI.¹ We retained the 16 time points following each stimulus, yielding 1280 measurements at each voxel. The task is to distinguish, at each point in time,
which stimulus a subject was processing. [18] showed that there exists cross-subject consistency in
the cortical regions useful for prediction in this task. Specifically, experts partitioned each dataset
into 24 non-overlapping regions of interest (ROIs), then reduced the data by discarding all but 7 ROIs and, for each subject, averaging the BOLD response across voxels within each ROI, and showed that a classifier trained on data from 5 subjects generalized when applied to data from a 6th.
We assessed whether SOSlasso could leverage this cross-individual consistency to aid in the discovery of predictive voxels without requiring expert pre-selection of ROIs, or data reduction, or
any alignment of voxels beyond that existing in the raw data. Note that, unlike [18], we do not
aim to learn a solution that generalizes to a withheld subject. Rather, we aim to discover a group
sparsity pattern that suggests a similar set of voxels in all subjects, before optimizing a separate
solution for each individual. If SOSlasso can exploit cross-individual anatomical similarity from
this raw, coarsely-aligned data, it should show reduced cross-validation error relative to the lasso
applied separately to each individual. If the solution is sparse within groups and highly variable
across individuals, SOSlasso should show reduced cross-validation error relative to Glasso. Finally,
if SOSlasso is finding useful cross-individual structure, the features it selects should align at least
somewhat with the expert-identified ROIs shown by [18] to carry consistent information.
¹ Data and documentation available at http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-81/www/
Figure 3: Results from fMRI experiments. (a) Aggregated sparsity patterns for a single brain slice. (b) Cross-validation error obtained with each method; lines connect data for a single subject. (c) The full sparsity pattern obtained with SOSlasso.
[Figure 3 panels: (a) sparsity maps for lasso, Glasso, and SOSlasso; (b) per-subject error rates, roughly 0.24 to 0.36; (c) legend: picture only, sentence only, picture and sentence.]
Method | % ROI | t(5), p
lasso | 46.11 | 6.08, 0.001
Glasso | 50.89 | 5.65, 0.002
SOSlasso | 70.31 | (reference)

Table 2: Proportion of selected voxels in the 7 relevant ROIs, aggregated over subjects, and corresponding two-tailed significance levels for the contrast of lasso and Glasso to SOSlasso.
We trained 3 classifiers using 4-fold cross-validation to select the regularization parameter, considering all available voxels without preselection. We grouped regions of 5 × 5 × 1 voxels and considered overlapping groups "shifted" by 2 voxels in the first 2 dimensions² (a sketch of this grouping appears after this paragraph). Figure 3(b) shows the individual
error rates across the 6 subjects for the three methods. Across subjects, SOSlasso had a significantly
lower cross-validation error rate (27.47 %) than individual lasso (33.3 %; within-subjects t(5) = 4.8;
p = 0.004 two-tailed), showing that the method can exploit anatomical similarity across subjects to
learn a better classifier for each. SOSlasso also showed significantly lower error rates than Glasso
(31.1%; t(5) = 2.92; p = 0.03 two-tailed), suggesting that the signal is sparse within selected regions
and variable across subjects.
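A minimal sketch of this overlapping spatial grouping (our illustration; the brain-mask handling and the function name are assumptions based on the footnote below, not the authors' code):

```python
import numpy as np

def spatial_groups(nx, ny, nz, size=5, stride=2):
    # Overlapping size x size x 1 voxel groups, shifted by `stride` voxels in x and y.
    # Returns a list of flat voxel-index arrays into an (nx, ny, nz) volume.
    idx = np.arange(nx * ny * nz).reshape(nx, ny, nz)
    groups = []
    for x0 in range(0, nx - size + 1, stride):
        for y0 in range(0, ny - size + 1, stride):
            for z0 in range(nz):  # depth-1 groups: one slice each
                groups.append(idx[x0:x0 + size, y0:y0 + size, z0].ravel())
    return groups

groups = spatial_groups(64, 64, 8)  # 64 x 64 in-plane, 8 slices, per the footnote
```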
Figure 3(a) presents a sample of the sparsity patterns obtained from the different methods, aggregated over all subjects. Red points indicate voxels that contributed positively to picture classification in at least one subject, but never to sentences; blue points have the opposite interpretation. Purple
points indicate voxels that contributed positively to picture and sentence classification in different
subjects. The remaining slices for the SOSlasso are shown in Figure 3(c). There are three things to
note from Figure 3(a). First, the Glasso solution is fairly dense, with many voxels signaling both
picture and sentence across subjects. We believe this "purple haze" demonstrates why Glasso is ill-suited for fMRI analysis: a voxel selected for one subject must also be selected for all others. This
approach will not succeed if, as is likely, there exists no direct voxel-to-voxel correspondence or if
the neural code is variable across subjects. Second, the lasso solution is less sparse than the SOSlasso
because it allows any task-correlated voxel to be selected. It leads to a higher cross-validation error,
indicating that the ungrouped voxels are inferior predictors (Figure 3(b)). Third, the SOSlasso not
only yields a sparse solution, but a clustered one. To assess how well these clusters align with the anatomical regions thought a priori to be involved in sentence and picture representation, we calculated the proportion of selected voxels falling within the 7 ROIs identified by [18] as relevant to the
classification task (Table 2). For SOSlasso an average of 70% of identified voxels fell within these
ROIs, significantly more than for lasso or Glasso.
6 Conclusions and Extensions
We have introduced SOSlasso, a function that recovers sparsity patterns that are a hybrid of overlapping group sparse and sparse patterns when used as a regularizer in convex programs, and proved
its theoretical convergence rates when minimizing the least squares loss. The SOSlasso succeeds in a multitask fMRI analysis, where it both makes better inferences and discovers more theoretically plausible brain regions than lasso and Glasso do. Future work involves experimenting with different parameters for the group and $\ell_1$ penalties, and using other similarity groupings, such as functional connectivity
in fMRI.
² The irregular group size compensates for voxels being larger and scanner coverage being smaller in the z-dimension (only 8 slices relative to 64 in the x- and y-dimensions).
References
[1] F. Bach. Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression. arXiv preprint arXiv:1303.6149, 2013.
[2] S. Chatterjee, A. Banerjee, and A. Ganguly. Sparse group lasso for regression on land climate variables. In Data Mining Workshops (ICDMW), 2011 IEEE 11th International Conference on, pages 1–8. IEEE, 2011.
[3] S. Dasgupta, D. Hsu, and N. Verma. A concentration theorem for projections. arXiv preprint arXiv:1206.6813, 2012.
[4] E. Feredoes, G. Tononi, and B. R. Postle. The neural bases of the short-term storage of verbal information are anatomically variable across individuals. The Journal of Neuroscience, 27(41):11003–11008, 2007.
[5] L. Jacob, G. Obozinski, and J. P. Vert. Group lasso with overlap and graph lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 433–440. ACM, 2009.
[6] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. Advances in Neural Information Processing Systems, 23:964–972, 2010.
[7] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach. Proximal methods for hierarchical sparse coding. arXiv preprint arXiv:1009.2139, 2010.
[8] K. Lounici, M. Pontil, A. B. Tsybakov, and S. van de Geer. Taking advantage of sparsity in multi-task learning. arXiv preprint arXiv:0903.1468, 2009.
[9] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[10] G. Obozinski, L. Jacob, and J. P. Vert. Group lasso with overlaps: The latent group lasso approach. arXiv preprint arXiv:1110.0413, 2011.
[11] N. Rao, B. Recht, and R. Nowak. Universal measurement bounds for structured sparse signal recovery. In Proceedings of AISTATS, volume 2102, 2012.
[12] I. Rish, G. A. Cecchi, K. Heuton, M. N. Baliki, and A. V. Apkarian. Sparse regression analysis of task-relevant information distribution in the brain. In Proceedings of SPIE, volume 8314, page 831412, 2012.
[13] S. Ryali, K. Supekar, D. A. Abrams, and V. Menon. Sparse logistic regression for whole-brain classification of fMRI data. NeuroImage, 51(2):752, 2010.
[14] N. Simon, J. Friedman, T. Hastie, and R. Tibshirani. A sparse-group lasso. Journal of Computational and Graphical Statistics, 2012.
[15] P. Sprechmann, I. Ramirez, G. Sapiro, and Y. Eldar. Collaborative hierarchical sparse modeling. In Information Sciences and Systems (CISS), 2010 44th Annual Conference on, pages 1–6. IEEE, 2010.
[16] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pages 267–288, 1996.
[17] M. van Gerven, C. Hesse, O. Jensen, and T. Heskes. Interpreting single trial data using groupwise regularisation. NeuroImage, 46(3):665–676, 2009.
[18] X. Wang, T. M. Mitchell, and R. Hutchinson. Using machine learning to detect cognitive states across multiple subjects. CALD KDD project paper, 2003.
[19] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006.
[20] J. Zhou, J. Chen, and J. Ye. MALSAR: Multi-task learning via structural regularization, 2012.
[21] Y. Zhou, R. Jin, and S. C. Hoi. Exclusive lasso for multi-task feature selection. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.