Fast High-dimensional Kernel Summations Using the Monte Carlo Multipole Method
Dongryeol Lee
Computational Science and Engineering
Georgia Institute of Technology
Atlanta, GA 30332
[email protected]
Alexander Gray
Computational Science and Engineering
Georgia Institute of Technology
Atlanta, GA 30332
[email protected]
Abstract
We propose a new fast Gaussian summation algorithm for high-dimensional
datasets with high accuracy. First, we extend the original fast multipole-type methods to use approximation schemes with both hard and probabilistic error. Second,
we utilize a new data structure called subspace tree which maps each data point in
the node to its lower dimensional mapping as determined by any linear dimension
reduction method such as PCA. This new data structure is suitable for reducing
the cost of each pairwise distance computation, the most dominant cost in many
kernel methods. Our algorithm guarantees probabilistic relative error on each kernel sum, and can be applied to high-dimensional Gaussian summations which are
ubiquitous inside many kernel methods as the key computational bottleneck. We
provide empirical speedup results on low to high-dimensional datasets up to 89
dimensions.
1 Fast Gaussian Kernel Summation
In this paper, we propose new computational techniques for efficiently approximating the following sum for each query point q_i ∈ Q:

    \Phi(q_i, R) = \sum_{r_j \in R} e^{-\|q_i - r_j\|^2 / (2h^2)}    (1)
where R is the reference set; each reference point is associated with a Gaussian function with a smoothing parameter h (the "bandwidth"). This form of summation is ubiquitous in many statistical learning methods, including kernel density estimation, kernel regression, Gaussian process regression, radial basis function networks, spectral clustering, support vector machines, and kernel PCA [1, 4]. Cross-validation in all of these methods requires evaluating Equation 1 for multiple values of h. Kernel density estimation, for example, requires |R| density estimates based on |R| − 1 points, yielding a brute-force computational cost that scales quadratically (that is, O(|R|^2)).
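To make the baseline concrete, here is a minimal NumPy sketch of the brute-force evaluation of Equation 1 (the paper's own implementation is in C/C++; all names here are illustrative):

    import numpy as np

    def naive_gaussian_sum(Q, R, h):
        """Brute-force Equation 1: Phi[i] = sum_j exp(-||q_i - r_j||^2 / (2 h^2)).
        Q: (|Q|, D) queries, R: (|R|, D) references, h: bandwidth.
        Cost is O(|Q| |R| D) -- the quadratic scaling the paper sets out to beat."""
        Phi = np.zeros(len(Q))
        for i, q in enumerate(Q):
            sq_dists = np.sum((R - q) ** 2, axis=1)
            Phi[i] = np.sum(np.exp(-sq_dists / (2.0 * h ** 2)))
        return Phi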
Error bounds. Due to this expensive computational cost, many algorithms approximate the Gaussian kernel sums at the expense of reduced precision. Therefore, it is natural to discuss error bound criteria which measure the quality of the approximations with respect to their corresponding true values. The following error bound criteria are common in the literature:
Definition 1.1. An algorithm guarantees an absolute error bound ε if, for each exact value Φ(q_i, R) for q_i ∈ Q, it computes $\tilde{\Phi}(q_i, R)$ such that $|\tilde{\Phi}(q_i, R) - \Phi(q_i, R)| \leq \epsilon$.

Definition 1.2. An algorithm guarantees a relative error bound ε if, for each exact value Φ(q_i, R) for q_i ∈ Q, it computes $\tilde{\Phi}(q_i, R) \in \mathbb{R}$ such that $|\tilde{\Phi}(q_i, R) - \Phi(q_i, R)| \leq \epsilon |\Phi(q_i, R)|$.
Bounding the relative error (e.g., the percentage deviation) is much harder because the error bound criterion is in terms of the initially unknown exact quantity. As a result, many previous methods [7] have focused on bounding the absolute error. The relative error bound criterion is preferred to the absolute error bound criterion in statistical applications in which high accuracy is desired. Our new algorithm will enforce the following "relaxed" form of the relative error bound criterion, whose motivation will be discussed shortly.
Definition 1.3. An algorithm guarantees a (1 − α) probabilistic relative error bound ε if, for each exact value Φ(q_i, R) for q_i ∈ Q, it computes $\tilde{\Phi}(q_i, R) \in \mathbb{R}$ such that, with probability at least 1 − α (0 < 1 − α < 1), $|\tilde{\Phi}(q_i, R) - \Phi(q_i, R)| \leq \epsilon |\Phi(q_i, R)|$.
Previous work. The most successful class of acceleration methods employs "higher-order divide and conquer" or generalized N-body algorithms (GNA) [4]. This approach can use any spatial partitioning tree, such as kd-trees or ball-trees, for both the query set Q and the reference data R, and performs a simultaneous recursive descent on both trees. GNA with relative error bounds (Definition 1.2) [5, 6, 11, 10] utilized bounding boxes and additional cached sufficient statistics, such as the higher-order moments needed for series expansion. [5, 6] utilized bounding-box based error bounds, which tend to be very loose and resulted in slow empirical performance around suboptimally small and large bandwidths. [11, 10] extended GNA-based Gaussian summations with series expansion, which provided tighter bounds; this showed enormous performance improvements, but only in low-dimensional settings (up to D = 5), since the number of required terms in the series expansion increases exponentially with respect to D.
[9] introduces an iterative sampling based GNA for accelerating the computation of nested sums (a related, easier problem). Its speedup is achieved by replacing the pessimistic error bounds provided by bounding boxes with normal-based confidence intervals from Monte Carlo sampling. [9] demonstrates speedups of many orders of magnitude over the previous state of the art in the context of computing aggregates over the queries (such as the LSCV score for selecting the optimal bandwidth). However, the authors did not discuss the sampling-based approach for computations that require per-query estimates, such as those required for kernel density estimation.
None of the previous approaches for kernel summations addresses the issue of reducing the computational cost of each distance computation, which incurs O(D) cost. However, the intrinsic dimensionality d of most high-dimensional datasets is much smaller than the explicit dimension D (that is, d ≪ D). [12] proposed tree structures using a global dimension reduction method, such as random projection, as a preprocessing step for efficient (1 + ε)-approximate nearest neighbor search. Similarly, we develop a new data structure for kernel summations; our new data structure is constructed in a top-down fashion to perform the initial spatial partitioning in the original input space R^D and performs a local dimension reduction on a localized subset of the data in a bottom-up fashion.
This paper. We propose a new fast Gaussian summation algorithm that enables speedup in higher dimensions. Our approach utilizes: (1) probabilistic relative error bounds (Definition 1.3) on kernel sums provided by Monte Carlo estimates, and (2) a new tree structure, called a subspace tree, for reducing the computational cost of each distance computation. The former can be seen as relaxing the strict requirement of guaranteeing a hard relative bound on very small quantities, as done in [5, 6, 11, 10]. The latter was mentioned as a possible way of ameliorating the effects of the curse of dimensionality in [14], a pioneering paper in this area.
Notations. Each query point and reference point (a D-dimensional vector) is indexed by natural numbers i, j ∈ ℕ, and denoted q_i and r_j respectively. For any set S, |S| denotes the number of elements in S. The entities related to the left and the right child are denoted with superscripts L and R; an internal node N has the child nodes N^L and N^R.
2 Gaussian Summation by Monte Carlo Sampling
Here we describe the extension needed for probabilistic computation of kernel summations satisfying Definition 1.3. The main routine for the probabilistic kernel summation is shown in Algorithm 1. The function MCMM takes the query node Q and the reference node R (each initially called with the roots of the query tree and the reference tree, Q_root and R_root) and β (initially called with the value α, which controls the probability guarantee that each kernel sum is within ε relative error).
Algorithm 1 The core dual-tree routine for probabilistic Gaussian kernel summation.

 1: MCMM(Q, R, β)
 2:   if CANSUMMARIZEEXACT(Q, R, ε) then
 3:     SUMMARIZEEXACT(Q, R)
 4:   else if CANSUMMARIZEMC(Q, R, ε, β) then
 5:     SUMMARIZEMC(Q, R, ε, β)
 6:   else
 7:     if Q is a leaf node then
 8:       if R is a leaf node then
 9:         MCMMBASE(Q, R)
10:       else
11:         MCMM(Q, R^L, β/2), MCMM(Q, R^R, β/2)
12:     else
13:       if R is a leaf node then
14:         MCMM(Q^L, R, β), MCMM(Q^R, R, β)
15:       else
16:         MCMM(Q^L, R^L, β/2), MCMM(Q^L, R^R, β/2)
17:         MCMM(Q^R, R^L, β/2), MCMM(Q^R, R^R, β/2)
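The recursion itself is short; the following is a hypothetical Python rendering of Algorithm 1 in which the summarization routines are supplied as callables (the paper defines them separately: finite difference for the exact case and Algorithm 2 for the Monte Carlo case):

    def mcmm(Q, R, beta, eps, rules):
        """Driver of Algorithm 1. `rules` is assumed to bundle the routines
        can_exact/exact/can_mc/mc/base; nodes expose .is_leaf/.left/.right."""
        if rules.can_exact(Q, R, eps):
            rules.exact(Q, R)
        elif rules.can_mc(Q, R, eps, beta):
            rules.mc(Q, R, eps, beta)
        elif Q.is_leaf and R.is_leaf:
            rules.base(Q, R)                          # line 9: exhaustive base case
        elif Q.is_leaf:                               # line 11: split the reference
            mcmm(Q, R.left, beta / 2, eps, rules)
            mcmm(Q, R.right, beta / 2, eps, rules)
        elif R.is_leaf:                               # line 14: split the query
            mcmm(Q.left, R, beta, eps, rules)
            mcmm(Q.right, R, beta, eps, rules)
        else:                                         # lines 16-17: split both
            for qc in (Q.left, Q.right):
                mcmm(qc, R.left, beta / 2, eps, rules)
                mcmm(qc, R.right, beta / 2, eps, rules)

Note how β is halved exactly when the reference node is split, matching the proof of Theorem 2.1 below.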
The idea of Monte Carlo sampling used in the new algorithm is similar to the one in [9], except that the sampling is done per query and we also use approximations that provide hard error bounds (i.e., finite difference, and the exhaustive base case MCMMBASE). This means that the approximation has less variance than the pure Monte Carlo approach used in [9]. Algorithm 1 first attempts approximations with hard error bounds, which are computationally cheaper than sampling-based approximations. For example, the finite-difference scheme [5, 6] can be used for the CANSUMMARIZEEXACT and SUMMARIZEEXACT functions in any general dimension.
The CANSUMMARIZEMC function takes two parameters that specify the accuracy, the relative error ε and its probability guarantee β, and decides whether to use Monte Carlo sampling for the given pair of nodes. If the reference node R contains too few points, it may be more efficient to process it using exact methods that use error bounds based on bounding primitives on the node pair or exhaustive pairwise evaluations; this is determined by the condition τ · m_initial ≤ |R|, where τ > 1 controls the minimum number of reference points needed for Monte Carlo sampling to proceed.
If the reference node does contain enough points, then for each query point q ∈ Q, the SAMPLE routine samples m_initial terms over the terms in the summation

    \Phi(q, R) = \sum_{r_{j_n} \in R} K_h(\|q - r_{j_n}\|)

where Φ(q, R) denotes the exact contribution of R to q's kernel sum. Basically, we are interested in estimating Φ(q, R) by $\tilde{\Phi}(q, R) = |R|\mu_S$, where μ_S is the sample mean of S. From the Central Limit Theorem, given enough samples m, $\mu_S \sim N(\mu, \sigma_S^2/m)$, where Φ(q, R) = |R|μ (i.e., μ is the average of the kernel value between q and any reference point r ∈ R); this implies that $|\mu_S - \mu| \leq z_{\beta/2}\,\sigma_S/\sqrt{m}$ with probability 1 − β. The pruning rule we have to enforce for each query point for the contribution of R is:

    z_{\beta/2} \frac{\sigma_S}{\sqrt{m}} \leq \frac{\epsilon\,\Phi(q, R)}{|R|}

where σ_S is the sample standard deviation of S. Since Φ(q, R) is one of the unknown quantities we want to compute, we instead enforce the following:

    z_{\beta/2} \frac{\sigma_S}{\sqrt{m}} \leq \frac{\epsilon \left( \Phi^l(q, R) + |R| \left( \mu_S - z_{\beta/2} \frac{\sigma_S}{\sqrt{m}} \right) \right)}{|R|}    (2)

where Φ^l(q, R) is the currently running lower bound on the sum computed using exact methods, and $|R|(\mu_S - z_{\beta/2}\,\sigma_S/\sqrt{m})$ is the probabilistic component contributed by R. Denoting $\Phi^{l,new}(q, R) = \Phi^l(q, R) + |R|(\mu_S - z_{\beta/2}\,\sigma_S/\sqrt{|S|})$, the minimum number of samples for q needed to achieve the target error (the right side of the inequality in Equation 2) with at least probability 1 − β is:

    m \geq z_{\beta/2}^2\,\sigma_S^2\,\frac{(|R| + |R|)^2}{\epsilon^2 \left( \Phi^l(q, R) + |R|\mu_S \right)^2}
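To illustrate this bound, the sketch below computes how many additional samples one query point needs, mirroring the last two lines of the SAMPLE routine in Algorithm 2; the function name and the use of SciPy's normal quantile are assumptions for illustration:

    import numpy as np
    from scipy.stats import norm

    def additional_samples_needed(kernel_samples, num_refs, phi_lower, eps, beta):
        """kernel_samples: kernel values K_h(||q - r||) drawn so far (the set S);
        phi_lower: the running lower bound Phi^l(q, R) from exact prunes.
        Returns m_thresh - |S|; sampling stops once this is <= 0."""
        z = norm.ppf(1.0 - beta / 2.0)            # z_{beta/2}
        mu_S = np.mean(kernel_samples)
        var_S = np.var(kernel_samples, ddof=1)    # unbiased sample variance
        denom = eps * (phi_lower + num_refs * mu_S)
        m_thresh = z ** 2 * var_S * (num_refs + num_refs) ** 2 / denom ** 2
        return m_thresh - len(kernel_samples)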
If the given query node and reference node pair cannot be pruned using either the non-probabilistic or the probabilistic approximations, then we recurse on smaller subsets of the two sets. In particular, when dividing over the reference node R, we recurse with half of the β value¹. We now state the probabilistic error guarantee of our algorithm as a theorem.
Theorem 2.1. After calling MCMM with Q = Q_root, R = R_root, and β = α, Algorithm 1 approximates each Φ(q, R) with $\tilde{\Phi}(q, R)$ such that Definition 1.3 holds.
Proof. For a query/reference pair (Q, R) and 0 < β < 1, MCMMBASE and SUMMARIZEEXACT compute estimates for q ∈ Q such that $|\tilde{\Phi}(q, R) - \Phi(q, R)| < \epsilon\,\Phi(q, R)|R|/|R|$ with probability 1 > 1 − β. By Equation 2, SUMMARIZEMC computes estimates for q ∈ Q such that $|\tilde{\Phi}(q, R) - \Phi(q, R)| < \epsilon\,\Phi(q, R)|R|/|R|$ with probability 1 − β.

We now induct on |Q ∪ R|. Line 11 of Algorithm 1 divides over the reference, whose subcalls compute estimates that satisfy $|\tilde{\Phi}(q, R^L) - \Phi(q, R^L)| \leq \epsilon\,\Phi(q, R)|R^L|/|R|$ and $|\tilde{\Phi}(q, R^R) - \Phi(q, R^R)| \leq \epsilon\,\Phi(q, R)|R^R|/|R|$, each with at least 1 − β/2 probability, by the induction hypothesis. For q ∈ Q, $\tilde{\Phi}(q, R) = \tilde{\Phi}(q, R^L) + \tilde{\Phi}(q, R^R)$, which means $|\tilde{\Phi}(q, R) - \Phi(q, R)| \leq \epsilon\,\Phi(q, R)|R|/|R|$ with probability at least 1 − β.

Line 14 divides over the query, and each subcall computes estimates that hold with at least probability 1 − β for q ∈ Q^L and q ∈ Q^R. Lines 16 and 17 divide both over the query and the reference, and the correctness can be proven similarly. Therefore, MCMM(Q_root, R_root, α) computes estimates satisfying Definition 1.3.
"Reclaiming" probability. We note that the assigned probability β for a query/reference pair computed with exact bounds (SUMMARIZEEXACT and MCMMBASE) is not used. This portion of the probability can be "reclaimed" in a similar fashion as done in [10] and re-used to prune more aggressively in the later stages of the algorithm. All experiments presented in this paper benefited from this simple modification.
3 Subspace Tree
A subspace tree is basically a space-partitioning tree with a set of orthogonal bases associated with each node N: N.S = (μ, U, Λ, d), where μ is the mean, U is a D × d matrix whose columns consist of d eigenvectors, and Λ is the corresponding set of eigenvalues. The orthogonal basis set is constructed using a linear dimension reduction method such as PCA. It is constructed in a top-down manner using the PARTITIONSET function, which divides the given set of points into two (for example, along the dimension with the highest variance in the case of a kd-tree), with the subspace in each node formed in a bottom-up manner. Algorithm 3 shows a PCA tree (a subspace tree using PCA as the dimension reduction) for a 3-D dataset. The subspace of each leaf node is computed using PCABASE, which can use exact PCA [3] or a stochastic one [2]. For an internal node, the subspaces of the child nodes, N^L.S = (μ^L, U^L, Λ^L, d^L) and N^R.S = (μ^R, U^R, Λ^R, d^R), are approximately merged using the MERGESUBSPACES function, which involves solving a (d^L + d^R + 1) × (d^L + d^R + 1) eigenvalue problem [8] and runs in O((d^L + d^R + 1)^3) ≪ O(D^3) given that the dataset is sparse. In addition, each data point x in each node N is mapped to its new lower-dimensional coordinate using the orthogonal basis set of N: x_proj = U^T(x − μ). The L2 norm reconstruction error is given by ‖x_recon − x‖_2^2 = ‖(U x_proj + μ) − x‖_2^2.
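A minimal sketch of this per-node subspace computation, assuming exact PCA via the SVD (the paper also allows a stochastic variant [2]); all names are illustrative:

    import numpy as np

    def node_subspace(points, d):
        """PCA_BASE: mean mu and top-d principal directions U (a D x d matrix)."""
        mu = points.mean(axis=0)
        _, _, Vt = np.linalg.svd(points - mu, full_matrices=False)
        return mu, Vt[:d].T

    def project(x, mu, U):
        return U.T @ (x - mu)                  # x_proj = U^T (x - mu)

    def reconstruction_error(x, mu, U):
        x_recon = U @ project(x, mu, U) + mu
        return np.sum((x_recon - x) ** 2)      # ||x_recon - x||_2^2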
Monte Carlo sampling using a subspace tree. Consider the CANSUMMARIZEMC function in Algorithm 2. The "outer loop" of this algorithm is over the query set Q, and it would make sense to project each query point q ∈ Q onto the subspace owned by the reference node R. Let U and μ be the orthogonal basis system for R consisting of d basis vectors. For each q ∈ Q, consider the squared distance

¹We could also divide β such that the node that may be harder to approximate gets a lower value.
Algorithm 2 Monte Carlo sampling based approximation routines.

CANSUMMARIZEMC(Q, R, ε, β)
  return τ · m_initial ≤ |R|

SUMMARIZEMC(Q, R, ε, β)
  for q_i ∈ Q do
    S ← ∅, m ← m_initial
    repeat
      SAMPLE(q_i, R, ε, β, S, m)
    until m ≤ 0
    Φ̃(q_i, R) ← Φ̃(q_i, R) + |R| · MEAN(S)

SAMPLE(q, R, ε, β, S, m)
  for k = 1 to m do
    r ← random point in R
    S ← S ∪ {K_h(‖q − r‖)}
  μ_S ← MEAN(S), σ_S² ← VARIANCE(S)
  Φ^{l,new}(q, R) ← Φ^l(q, R) + |R|(μ_S − z_{β/2} σ_S / √|S|)
  m_thresh ← z_{β/2}² σ_S² (|R| + |R|)² / (ε² (Φ^l(q, R) + |R|μ_S)²)
  m ← m_thresh − |S|
‖(q − μ) − r_proj‖² (where (q − μ) is q's coordinates expressed in terms of the coordinate system of R) as shown in Figure 1. For the Gaussian kernel, each pairwise kernel value is approximated as:

    e^{-\|q - r\|^2/(2h^2)} \approx e^{-\|q - q_{recon}\|^2/(2h^2)} \, e^{-\|q_{proj} - r_{proj}\|^2/(2h^2)}    (3)

where q_recon = U q_proj + μ and q_proj = U^T(q − μ). For a fixed query point q, $e^{-\|q - q_{recon}\|^2/(2h^2)}$ can be precomputed (which takes d dot products between two D-dimensional vectors) and re-used for every distance computation between q and any reference point r ∈ R, whose cost is now O(d) ≪ O(D). Therefore, we can take more samples efficiently. For a total of sufficiently large m samples, the computational cost is O(d(D + m)) ≪ O(D · m) for each query point.
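The factorization in Equation 3 is what makes cheap sampling possible: the first factor depends only on q and is computed once, while the second factor costs only O(d) per reference point. A minimal sketch of this evaluation (the function and variable names are illustrative, not from the paper's C/C++ code):

    import numpy as np

    def approx_kernel_values(q, mu, U, R_proj, h):
        """Equation 3 for one query against all points of a reference node.
        U is the node's D x d basis, mu its mean, and R_proj holds the
        precomputed d-dimensional coordinates U^T (r - mu) of each reference
        point. Sketch only."""
        q_proj = U.T @ (q - mu)                       # d dot products, done once
        q_recon = U @ q_proj + mu
        q_factor = np.exp(-np.sum((q - q_recon) ** 2) / (2 * h ** 2))
        sq = np.sum((q_proj - R_proj) ** 2, axis=1)   # O(d) per reference point
        return q_factor * np.exp(-sq / (2 * h ** 2))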
Increased variance comes at the cost of inexact distance computations, however. Each distance computation incurs at most a squared L2 norm error of ‖r_recon − r‖_2^2. That is,

    \|q - r_{recon}\|_2^2 - \|q - r\|_2^2 \leq \|r_{recon} - r\|_2^2.

Nevertheless, the sample variance for each query point plus the inexactness due to dimension reduction, σ_S², can be shown to be bounded for the Gaussian kernel as follows (where each $s = e^{-\|q - r_{recon}\|^2/(2h^2)}$):

    \frac{1}{m-1}\left(\sum_{s \in S} s^2 - m\,\mu_S^2\right) + \sigma_S^2 \leq \frac{1}{m-1}\left(\sum_{s \in S} s^2 \min\left\{1,\; \max_{r \in R} e^{\|r_{recon}-r\|_2^2/h^2}\right\} - m\left(\mu_S \min_{r \in R} e^{-\|r_{recon}-r\|_2^2/(2h^2)}\right)^2\right)
Exhaustive computations using a subspace tree. Now suppose we have built subspace trees for the query and the reference sets. We can project either each query point onto the reference subspace, or each reference point onto the query subspace, depending on which subspace has a smaller dimension and on the number of points in each node. The subspaces formed in the leaf nodes are usually highly numerically accurate, since each leaf contains very few points compared to the extrinsic dimensionality D.
4 Experimental Results
We empirically evaluated the runtime performance of our algorithm on seven real-world datasets, scaled to fit in the [0, 1]^D hypercube, for approximating the Gaussian sum at every query point with a range of bandwidths. This experiment is motivated by many kernel methods that require computing the Gaussian sum at different bandwidth values (according to the standard least-squares cross-validation scores [15]). Nevertheless, we emphasize that the acceleration results are applicable to other kernel methods that require efficient Gaussian summation.

In this paper, the reference set equals the query set. All datasets have 50K points so that the exact exhaustive method can be tractably computed. All times are in seconds and include the time needed to build the trees. Codes are in C/C++ and run on a dual Intel Xeon 3 GHz with 8 GB of main memory. The measurements in the second to eighth columns are obtained by running the algorithms at the bandwidth kh*, where 10^{-3} ≤ k ≤ 10^3 is the constant in the corresponding column header. The last column (Σ) denotes the total time needed to run on all seven bandwidth values.
Algorithm 3 PCA tree building routine.

BUILDPCATREE(P)
  if CANPARTITION(P) then
    {P^L, P^R} ← PARTITIONSET(P)
    N ← empty node
    N^L ← BUILDPCATREE(P^L)
    N^R ← BUILDPCATREE(P^R)
    N.S ← MERGESUBSPACES(N^L.S, N^R.S)
  else
    N ← BUILDPCATREEBASE(P)
    N.S ← PCABASE(P)
  N.P_proj ← PROJECT(P, N.S)
  return N
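A compact recursive rendering of Algorithm 3 (parameters assumed for illustration); for simplicity it re-runs PCA at internal nodes rather than performing the approximate eigenspace merge of [8] that MERGESUBSPACES would use:

    import numpy as np

    def build_pca_tree(points, leaf_size=32, d=2):
        node = {}
        if len(points) > leaf_size:                      # CAN_PARTITION(P)
            dim = int(np.argmax(points.var(axis=0)))     # split on highest variance
            mask = points[:, dim] <= np.median(points[:, dim])
            if 0 < mask.sum() < len(points):             # guard a degenerate split
                node["left"] = build_pca_tree(points[mask], leaf_size, d)
                node["right"] = build_pca_tree(points[~mask], leaf_size, d)
        mu = points.mean(axis=0)                         # PCA_BASE / node subspace
        _, _, Vt = np.linalg.svd(points - mu, full_matrices=False)
        U = Vt[:min(d, Vt.shape[0])].T
        node["mu"], node["U"] = mu, U
        node["proj"] = (points - mu) @ U                 # PROJECT(P, N.S)
        return node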
Each table has results for five algorithms: the naive algorithm and four approximate algorithms. The algorithms with p = 1 denote the previous state-of-the-art (finite-difference with error redistribution) [10], while those with p < 1 denote our probabilistic version. Each entry has the running time and the percentage of the query points that did not satisfy the relative error ε.

Analysis. Readers should focus on the last column (Σ), containing the total time needed for evaluating the Gaussian sum at all points for seven different bandwidth values. As expected, on low-dimensional datasets (below 6 dimensions), the algorithm using series-expansion based bounds gives a two to three times speedup compared to our approach that uses Monte Carlo sampling. Multipole moments are an effective form of compression in low dimensions, with analytical error bounds that can be evaluated; our Monte Carlo-based method has an asymptotic error bound which must be "learned" through sampling.

As we go to 7 dimensions and beyond, series expansion cannot be done efficiently because of its slow convergence. Our probabilistic algorithm (p = 0.9) using Monte Carlo consistently performs better than the algorithm using exact bounds (p = 1) by at least a factor of two. Compared to naive, it achieves a maximum speedup of about nine times on a 16-dimensional dataset; on an 89-dimensional dataset, it is at least three times as fast as the naive method. Note that all the datasets contain only 50K points, and the speedup will be more dramatic as we increase the number of points.
5 Conclusion
We presented an extension to fast multipole methods that uses approximation methods with both hard and probabilistic bounds. Our experimental results show speedup over the previous state-of-the-art on high-dimensional datasets. Our future work will include possible improvements inspired by recent work in the FMM community using a matrix-factorization formulation [13].
Figure 1: Left: A PCA-tree for a 3-D dataset. Right: The squared Euclidean distance between
a given query point and a reference point projected onto a subspace can be decomposed into two
components: the orthogonal component and the component in the subspace.
mockgalaxy-D-1M-rnd (cosmology: positions), D = 3, N = 50000, h* = 0.000768201

Algorithm \ scale         | 0.001 | 0.01 | 0.1 | 1   | 10  | 100 | 1000 | Σ
Naive                     | 182   | 182  | 182 | 182 | 182 | 182 | 182  | 1274
MCMM (ε = 0.1, p = 0.9)   | 3     | 3    | 5   | 10  | 26  | 48  | 2    | 97
  % failing ε             | 1%    | 1%   | 1%  | 1%  | 1%  | 1%  | 5%   |
DFGT (ε = 0.1, p = 1)     | 2     | 2    | 2   | 2   | 6   | 19  | 3    | 36
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 0%  | 0%  | 0%   |
MCMM (ε = 0.01, p = 0.9)  | 3     | 3    | 4   | 11  | 27  | 58  | 21   | 127
  % failing ε             | 0%    | 0%   | 1%  | 1%  | 1%  | 1%  | 7%   |
DFGT (ε = 0.01, p = 1)    | 2     | 2    | 2   | 2   | 7   | 30  | 5    | 50
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 0%  | 0%  | 0%   |

bio5-rnd (biology: drug activity), D = 5, N = 50000, h* = 0.000567161

Algorithm \ scale         | 0.001 | 0.01 | 0.1 | 1   | 10  | 100 | 1000 | Σ
Naive                     | 214   | 214  | 214 | 214 | 214 | 214 | 214  | 1498
MCMM (ε = 0.1, p = 0.9)   | 4     | 4    | 6   | 144 | 149 | 65  | 1    | 373
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 1%  | 0%  | 1%   |
DFGT (ε = 0.1, p = 1)     | 4     | 4    | 5   | 24  | 96  | 65  | 2    | 200
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 0%  | 0%  | 0%   |
MCMM (ε = 0.01, p = 0.9)  | 4     | 4    | 6   | 148 | 165 | 126 | 1    | 454
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 1%  | 0%  | 1%   |
DFGT (ε = 0.01, p = 1)    | 4     | 4    | 5   | 25  | 139 | 126 | 4    | 307
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 0%  | 0%  | 0%   |

pall7-rnd, D = 7, N = 50000, h* = 0.00131865

Algorithm \ scale         | 0.001 | 0.01 | 0.1 | 1   | 10  | 100 | 1000 | Σ
Naive                     | 327   | 327  | 327 | 327 | 327 | 327 | 327  | 2289
MCMM (ε = 0.1, p = 0.9)   | 3     | 3    | 3   | 3   | 63  | 224 | <1   | 300
  % failing ε             | 0%    | 0%   | 0%  | 1%  | 1%  | 12% | 0%   |
DFGT (ε = 0.1, p = 1)     | 10    | 10   | 11  | 14  | 84  | 263 | 223  | 615
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 0%  | 0%  | 0%   |
MCMM (ε = 0.01, p = 0.9)  | 3     | 3    | 3   | 3   | 70  | 265 | 5    | 352
  % failing ε             | 0%    | 0%   | 0%  | 1%  | 2%  | 1%  | 8%   |
DFGT (ε = 0.01, p = 1)    | 10    | 10   | 11  | 14  | 85  | 299 | 374  | 803
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 0%  | 0%  | 0%   |

covtype-rnd, D = 10, N = 50000, h* = 0.0154758

Algorithm \ scale         | 0.001 | 0.01 | 0.1 | 1   | 10  | 100 | 1000 | Σ
Naive                     | 380   | 380  | 380 | 380 | 380 | 380 | 380  | 2660
MCMM (ε = 0.1, p = 0.9)   | 11    | 11   | 13  | 39  | 318 | <1  | <1   | 381
  % failing ε             | 0%    | 0%   | 0%  | 1%  | 0%  | 0%  | 0%   |
DFGT (ε = 0.1, p = 1)     | 26    | 27   | 38  | 177 | 390 | 244 | <1   | 903
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 0%  | 0%  | 0%   |
MCMM (ε = 0.01, p = 0.9)  | 11    | 11   | 13  | 77  | 362 | 2   | <1   | 477
  % failing ε             | 0%    | 0%   | 0%  | 1%  | 1%  | 10% | 0%   |
DFGT (ε = 0.01, p = 1)    | 26    | 27   | 38  | 180 | 427 | 416 | <1   | 1115
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 0%  | 0%  | 0%   |

CoocTexture-rnd, D = 16, N = 50000, h* = 0.0263958

Algorithm \ scale         | 0.001 | 0.01 | 0.1 | 1   | 10  | 100 | 1000 | Σ
Naive                     | 472   | 472  | 472 | 472 | 472 | 472 | 472  | 3304
MCMM (ε = 0.1, p = 0.9)   | 10    | 11   | 22  | 189 | 109 | <1  | <1   | 343
  % failing ε             | 0%    | 0%   | 0%  | 1%  | 8%  | 0%  | 0%   |
DFGT (ε = 0.1, p = 1)     | 22    | 26   | 82  | 240 | 452 | 66  | <1   | 889
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 0%  | 0%  | 0%   |
MCMM (ε = 0.01, p = 0.9)  | 10    | 11   | 22  | 204 | 285 | <1  | <1   | 534
  % failing ε             | 0%    | 0%   | 1%  | 1%  | 10% | 4%  | 0%   |
DFGT (ε = 0.01, p = 1)    | 22    | 26   | 83  | 254 | 543 | 230 | <1   | 1159
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 0%  | 0%  | 0%   |

LayoutHistogram-rnd, D = 32, N = 50000, h* = 0.0609892

Algorithm \ scale         | 0.001 | 0.01 | 0.1 | 1   | 10  | 100 | 1000 | Σ
Naive                     | 757   | 757  | 757 | 757 | 757 | 757 | 757  | 5299
MCMM (ε = 0.1, p = 0.9)   | 32    | 32   | 54  | 168 | 583 | 8   | 8    | 885
  % failing ε             | 0%    | 0%   | 1%  | 1%  | 1%  | 0%  | 0%   |
DFGT (ε = 0.1, p = 1)     | 153   | 159  | 221 | 492 | 849 | 212 | <1   | 2087
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 0%  | 0%  | 0%   |
MCMM (ε = 0.01, p = 0.9)  | 32    | 45   | 60  | 183 | 858 | 8   | 8    | 1246
  % failing ε             | 0%    | 0%   | 1%  | 6%  | 1%  | 0%  | 0%   |
DFGT (ε = 0.01, p = 1)    | 153   | 159  | 222 | 503 | 888 | 659 | <1   | 2585
  % failing ε             | 0%    | 0%   | 0%  | 0%  | 0%  | 0%  | 0%   |

CorelCombined-rnd, D = 89, N = 50000, h* = 0.0512583

Algorithm \ scale         | 0.001 | 0.01 | 0.1 | 1    | 10   | 100  | 1000 | Σ
Naive                     | 1716  | 1716 | 1716 | 1716 | 1716 | 1716 | 1716 | 12012
MCMM (ε = 0.1, p = 0.9)   | 384   | 418  | 575 | 428  | 1679 | 17   | 17   | 3518
  % failing ε             | 0%    | 0%   | 0%  | 1%   | 10%  | 0%   | 0%   |
DFGT (ε = 0.1, p = 1)     | 659   | 677  | 864 | 1397 | 1772 | 836  | 17   | 6205
  % failing ε             | 0%    | 0%   | 0%  | 0%   | 0%   | 0%   | 0%   |
MCMM (ε = 0.01, p = 0.9)  | 401   | 419  | 575 | 437  | 1905 | 17   | 17   | 3771
  % failing ε             | 0%    | 0%   | 0%  | 1%   | 2%   | 0%   | 0%   |
DFGT (ε = 0.01, p = 1)    | 659   | 677  | 865 | 1425 | 1794 | 1649 | 17   | 7086
  % failing ε             | 0%    | 0%   | 0%  | 0%   | 0%   | 0%   | 0%   |
References
[1] Nando de Freitas, Yang Wang, Maryam Mahdaviani, and Dustin Lang. Fast Krylov methods for N-body learning. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 251-258. MIT Press, Cambridge, MA, 2006.
[2] P. Drineas, R. Kannan, and M. Mahoney. Fast Monte Carlo algorithms for matrices III: Computing a compressed approximate matrix decomposition, 2004.
[3] G. Golub. Matrix Computations, Third Edition. The Johns Hopkins University Press, 1996.
[4] A. Gray and A. W. Moore. N-body problems in statistical learning. In Todd K. Leen, Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing Systems 13 (December 2000). MIT Press, 2001.
[5] Alexander G. Gray and Andrew W. Moore. Nonparametric density estimation: Toward computational tractability. In SIAM International Conference on Data Mining, 2003.
[6] Alexander G. Gray and Andrew W. Moore. Very fast multivariate kernel density estimation via computational geometry. In Joint Statistical Meeting, 2003. To be submitted to JASA.
[7] L. Greengard and J. Strain. The fast Gauss transform. SIAM Journal of Scientific and Statistical Computing, 12(1):79-94, 1991.
[8] Peter Hall, David Marshall, and Ralph Martin. Merging and splitting eigenspace models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(9):1042-1049, 2000.
[9] Michael Holmes, Alexander Gray, and Charles Isbell. Ultrafast Monte Carlo for statistical summations. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 673-680. MIT Press, Cambridge, MA, 2008.
[10] Dongryeol Lee and Alexander Gray. Faster Gaussian summation: Theory and experiment. In Proceedings of the Twenty-second Conference on Uncertainty in Artificial Intelligence, 2006.
[11] Dongryeol Lee, Alexander Gray, and Andrew Moore. Dual-tree fast Gauss transforms. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 747-754. MIT Press, Cambridge, MA, 2006.
[12] Ting Liu, Andrew W. Moore, and Alexander Gray. Efficient exact k-NN and nonparametric classification in high dimensions. In Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[13] P. G. Martinsson and Vladimir Rokhlin. An accelerated kernel-independent fast multipole method in one dimension. SIAM J. Scientific Computing, 29(3):1160-1178, 2007.
[14] A. W. Moore, J. Schneider, and K. Deng. Efficient locally weighted polynomial regression predictions. In D. Fisher, editor, Proceedings of the Fourteenth International Conference on Machine Learning, pages 196-204, San Francisco, 1997. Morgan Kaufmann.
[15] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall/CRC, 1986.
On The Circuit Complexity of Neural Networks
K. Y. Siu
Information Systems Laboratory
Stanford University
Stanford, CA, 94305

V. P. Roychowdhury
Information Systems Laboratory
Stanford University
Stanford, CA, 94305

A. Orlitsky
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ, 07974

T. Kailath
Information Systems Laboratory
Stanford University
Stanford, CA, 94305
Abstract
We introduce a geometric approach for investigating the power of threshold circuits. Viewing n-variable boolean functions as vectors in R^{2^n}, we invoke tools from linear algebra and linear programming to derive new results on the realizability of boolean functions using threshold gates.

Using this approach, one can obtain: (1) upper bounds on the number of spurious memories in Hopfield networks, and on the number of functions implementable by a depth-d threshold circuit; (2) a lower bound on the number of orthogonal input functions required to implement a threshold function; (3) a necessary condition for an arbitrary set of input functions to implement a threshold function; (4) a lower bound on the error introduced in approximating boolean functions using sparse polynomials; (5) a limit on the effectiveness of the only known lower-bound technique (based on computing correlations among boolean functions) for the depth of threshold circuits implementing boolean functions; and (6) a constructive proof that every boolean function f of n input variables is a threshold function of polynomially many input functions, none of which is significantly correlated with f. Some of these results lead to generalizations of key results concerning threshold circuit complexity, particularly those that are based on the so-called spectral or harmonic analysis approach. Moreover, our geometric approach yields simple proofs, based on elementary results from linear algebra, for many of these earlier results.
1 Introduction
An S-input threshold gate is characterized by S real weights w_1, …, w_S. It takes S inputs: x_1, …, x_S, each either +1 or −1, and outputs +1 if the linear combination Σ_{i=1}^S w_i x_i is positive and −1 if the linear combination is negative. Threshold gates were recently used to implement several functions of practical interest (including Parity, Addition, Multiplication, Division, and Comparison) with fewer gates and reduced depth than conventional circuits using AND, OR, and NOT gates [12, 4, 11]. This success has led to a considerable amount of research on the power of threshold circuits [1, 10, 9, 11, 3, 13]. However, even simple questions remain unanswered. It is not known, for example, whether there is a function that can be computed by a depth-3 threshold circuit with polynomially many gates but cannot be computed by any depth-2 circuit with polynomially many threshold gates.

Geometric approaches have proven useful for analyzing threshold gates. An S-input threshold gate corresponds to a hyperplane in R^S. This has been used, for example, to count the number of boolean functions computable by a single threshold gate [6], and also to determine functions that cannot be implemented by a single threshold gate. However, threshold circuits of depth two or more do not carry a simple geometric interpretation in R^S. The inputs to gates in the second level are themselves threshold functions, hence the linear combination computed at the second level is a non-linear function of the inputs. Lacking a geometric view, researchers [5, 3] have used indirect approaches, applying harmonic-analysis techniques to analyze threshold gates. These techniques, apart from their complexity, restricted the input functions of the gates to be of very special types: input variables or parities of the input variables, thus not applying even to depth-two circuits.
In this paper, we describe a simple geometric relation between the output function of a threshold gate and its set of input functions. This applies to arbitrary sets of input functions. Using this relation, we can prove the following results: (1) upper bounds on (a) the number of threshold functions of any set of input functions, (b) the number of spurious memories in a Hopfield network, and (c) the number of functions implementable by threshold circuits of depth d; (2) a lower bound on the number of orthogonal input functions required to implement a threshold function; (3) a quantifiable necessary condition for a set of functions to implement a threshold function; (4) a lower bound on the error in approximating boolean functions using sparse polynomials; (5) a limit on the effectiveness of the correlation method used in [7] to prove that a certain function cannot be implemented by depth-two circuits with polynomially many gates and polynomially bounded weights; (6) a proof that every function f is a threshold function of polynomially many input functions, none of which is significantly correlated with f.

Special cases of some of these results, where the input functions to a threshold gate are restricted to the input variables, or parities of the input variables, were proven in [5, 3] using harmonic-analysis tools. Our technique shows that these tools are not needed, providing simpler proofs for more general results.

Due to space limitations, we cannot present the full details of our results. Instead, we shall introduce the basic definitions followed by a technical summary of the results; the emphasis will be on pointing out the motivation and relating our results with those in the literature. The proofs and other technical details will appear in a complete journal paper.
2 Definitions and Background
An n-variable boolean function is a mapping f : {−1, +1}^n → {−1, +1}. We view f as a (column) vector in R^{2^n}. Each of f's 2^n components is either −1 or +1 and represents f(x) for a distinct value assignment x of the n boolean variables. We view the S weights of an S-input threshold gate as a weight vector w = (w_1, …, w_S)^T in R^S.

Let the functions f_1, …, f_S be the inputs of a threshold gate w. The gate computes a function f (or f is the output of the gate) if the following vector equation holds:

    f = \mathrm{sgn}\left( \sum_{i=1}^{S} f_i w_i \right)    (1)

where

    \mathrm{sgn}(x) = \begin{cases} +1 & \text{if } x > 0, \\ -1 & \text{if } x < 0, \\ \text{undefined} & \text{if } x = 0. \end{cases}

Note that this definition requires that all components of \sum_{i=1}^{S} f_i w_i be nonzero. It is convenient to write Equation (1) in a matrix form:

    f = \mathrm{sgn}(Yw)

where the input matrix Y = [f_1 ⋯ f_S] is a 2^n × S matrix whose columns are the input functions. The function f is a threshold function of f_1, …, f_S if there exists a threshold gate (i.e., w) with inputs f_1, …, f_S that computes f.
These definitions form the basis of our approach. Each function, being a ±1 vector in R^{2^n}, determines an orthant in R^{2^n}. A function f is the output of a threshold gate whose input functions are f_1, …, f_S if and only if the linear combination \sum_{i=1}^{S} f_i w_i defined by the gate lies inside the orthant determined by f.

Definition 1 The correlation of two n-variable boolean functions f_1 and f_2 is:

    C_{f_1 f_2} = (f_1^T f_2) / 2^n;

the two functions are uncorrelated or orthogonal if C_{f_1 f_2} = 0.

Note that C_{f_1 f_2} = 1 − 2 d_H(f_1, f_2)/2^n, where d_H(f_1, f_2) is the Hamming distance between f_1 and f_2; thus, the correlation can be interpreted as a measure of how 'close' the two functions are.
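Both forms of the correlation are one-liners; the sketch below (names assumed) makes the equivalence explicit:

    import numpy as np

    def correlation(f1, f2):
        """C_{f1 f2} = (f1^T f2) / 2^n for +/-1 vectors of length 2^n."""
        return float(np.dot(f1, f2)) / len(f1)

    def correlation_via_hamming(f1, f2):
        """Equivalent form: 1 - 2 * d_H(f1, f2) / 2^n."""
        d_h = int(np.sum(np.asarray(f1) != np.asarray(f2)))
        return 1.0 - 2.0 * d_h / len(f1)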
Fix the input functions f_1, …, f_S to a threshold gate. The correlation vector of a function f with the input functions is

    C_{fY} = (Y^T f)/2^n = (C_{f f_1}, C_{f f_2}, \ldots, C_{f f_S})^T.

Next, we define C̄ as the maximum in magnitude among the correlation coefficients, i.e., C̄ = max{|C_{f f_i}| : 1 ≤ i ≤ S}.
3 Summary of Results
The correlation between two n-variable functions is a multiple of 2^{−(n−1)}, bounded between −1 and 1, hence can assume 2^n + 1 values. The correlation vector C_{fY} = (C_{f f_1}, …, C_{f f_S})^T can therefore assume at most (2^n + 1)^S different values. There are 2^{2^n} boolean functions of n boolean variables, hence many share the same correlation vector. However, the next theorem says that a threshold function of f_1, …, f_S does not share its correlation vector with any other function.

Uniqueness Theorem Let f be a threshold function of f_1, …, f_S. Then, for all g ≠ f, C_{gY} ≠ C_{fY}.

Corollary 1 There are at most (2^n + 1)^S threshold functions of any set of S input functions.
The special case of the Uniqueness Theorem where the functions f_1, …, f_S are the input variables had been proven in [5, 9]. The proof used harmonic-analysis tools such as Parseval's theorem. It relied on the mutual orthogonality of the input functions (namely, C_{x_i x_j} = 0 for all i ≠ j). Another special case, where the input functions are parities of the input variables, was proven in [3]. The same proof was used; see e.g., pages 419-422 of [9]. Our proof shows that the harmonic-analysis tools and assumptions are not needed, thereby (1) significantly simplifying the proof, and (2) showing that the functions f_1, …, f_S need not be orthogonal: the Uniqueness Theorem holds for all collections of functions. The more general result of the Uniqueness Theorem can be applied to obtain the following two new counting results.
Corollary 2 The number of stable states in a Hopfield network with n elements which is programmed by the outer product rule to store s given vectors is at most 2^{s log(n+1)}.
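For concreteness, the outer-product (Hebbian) rule referenced in Corollary 2, and the stability test whose fixed points the corollary counts, can be sketched as follows (illustrative only):

    import numpy as np

    def hopfield_weights(patterns):
        """Outer-product rule: W = sum_k x_k x_k^T with zero diagonal,
        for s patterns in {-1,+1}^n."""
        X = np.asarray(patterns, dtype=float)
        W = X.T @ X
        np.fill_diagonal(W, 0.0)
        return W

    def is_stable(W, x):
        """x is a stable state when every unit agrees with the sign of its field
        (a zero field is treated as unstable in this sketch)."""
        return bool(np.all(np.sign(W @ x) == x))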
Corollary 3 Let F_n(S(n), d) be the number of n-variable boolean functions computed by depth-d threshold circuits with fan-in bounded by S(n) (we assume S(n) ≥ n). Then, for all d, n ≥ 1,
It follows easily from our geometric framework that if C_{fY} = 0 then f is not a threshold function of f_1, …, f_S: every linear combination of f_1, …, f_S is orthogonal to f, hence cannot intersect the orthant determined by f.

Next, we consider the case where C_{fY} ≠ 0. Define the generalized spectrum to be the S-dimensional vector:

    \beta = (\beta_1, \ldots, \beta_S)^T = (Y^T Y)^{-1} Y^T f

(the reason for the definition and the name will be clarified soon).
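Numerically, β is just the least-squares coefficient vector of f on the columns of Y, which is one way to anticipate why the name "generalized spectrum" is apt. A sketch, assuming Y has full column rank:

    import numpy as np

    def generalized_spectrum(Y, f):
        """beta = (Y^T Y)^{-1} Y^T f, computed as the least-squares solution of
        Y beta ~ f. For mutually orthogonal inputs (e.g., parities) this reduces
        to C_{fY}, the ordinary spectrum."""
        beta, *_ = np.linalg.lstsq(Y, np.asarray(f, dtype=float), rcond=None)
        return beta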
Spectral-Bound Theorem If f is a linear threshold function of f_1, …, f_S, then

    \sum_{i=1}^{S} |\beta_i| \geq 1, \quad \text{hence} \quad S \geq 1/\bar{\beta}, \quad \text{where } \bar{\beta} = \max\{|\beta_i| : 1 \leq i \leq S\}.
The Spectral-Bound Theorem provides a way of lower bounding the number S of input functions. Specifically, if β_i is exponentially small (in n) for all i ∈ {1, …, S}, then S must be exponentially large.

In the special case where the input functions are parities of the input variables, all input functions are orthogonal; hence Y^T Y = 2^n I_S and

    \beta = \frac{1}{2^n} Y^T f = C_{fY}.

Note that every parity function p is a basis function of the Hadamard transform, hence C_{fp} is the spectral coefficient corresponding to p in the transform (see [8, 2] for more details on the spectral representation of boolean functions). Therefore, the generalized spectrum in this case is the real spectrum of f. In that case, the Spectral-Bound Theorem implies that S ≥ 1/max{|C_{f f_i}| : 1 ≤ i ≤ S}. Therefore, the number of input functions needed is at least the reciprocal of the maximum magnitude among the spectral coefficients (i.e., C̄). This special case was proved in [3]. Again, their proofs used harmonic-analysis tools and assumptions that we prove are unnecessary, thereby generalizing them to arbitrary input functions. Moreover, our geometric approach considerably simplifies the exposition by presenting simple proofs based on elementary results from linear algebra.
In general, we can show that if the input functions f_i are orthogonal (i.e., C_{f_i f_j} = 0 for i ≠ j) or asymptotically orthogonal (i.e., lim_{n→∞} C_{f_i f_j} = 0), then the number of input functions satisfies S ≥ 1/C̄, where C̄ is the largest (in magnitude) correlation of the output function with any of its input functions.
We can also use the generalized spectrum to derive a lower bound on the error incurred in approximating a boolean function f using a set of basis functions. The lower bound can then be applied to show that the Majority function cannot be closely approximated by a sparse polynomial. In particular, it can be shown that if a polynomial of the input variables with only polynomially many (in n) monomials is used to approximate an n-variable Majority function, then the approximation error is Ω(1/(log log n)^{3/2}). This provides a direct spectral approach for proving lower bounds on the approximation error.
The method of proving lower bounds on S in terms of the correlation coefficients C_{f f_i} of f with the possible input functions can be termed the method of correlations. Hajnal et al. [7] used a different aspect of this method¹ to prove a lower bound on the depth of a threshold circuit that computes the Inner-product-mod-2 function.

¹They did not exactly use the correlation approach introduced in this paper, but rather an equivalent framework.
Our techniques can be applied to investigate the method of correlations in more detail and prove some limits to its effectiveness. We can show that the number, S, of input functions need not be inversely proportional to the largest correlation coefficient C̄. In particular, we give two constructive procedures showing that any function f is a threshold function of O(n) input functions, each having an exponentially small correlation with f: |C_{f f_i}| ≤ 2^{−(n−1)}.

Construction 1 Every boolean function f of n variables (for n even) can be expressed as a threshold function of 3n boolean functions f_1, f_2, …, f_{3n} such that (1) C_{f f_i} = 0 for all 1 ≤ i ≤ 3n − 1, and (2) C_{f f_{3n}} = 2^{−(n−1)}.

Construction 2 Every boolean function f of n variables can be expressed as a threshold function of 2n boolean functions f_1, f_2, …, f_{2n} such that (1) C_{f f_i} = 0 for all 1 ≤ i ≤ 2n − 2, and (2) C_{f f_{2n−1}} = C_{f f_{2n}} = 2^{−(n−1)}.
The results of the above constructions are surprising. For example, in Construction 1, the output function of the threshold gate is uncorrelated with all but one of the input functions, and the only non-zero correlation is the smallest possible (= 2^{−(n−1)}). Note that f is not a threshold function of a set of input functions, each of which is orthogonal to f.

The above results thus provide a comprehensive understanding of the so-called method of correlations. In particular: (1) If the input functions are mutually orthogonal (or asymptotically orthogonal), then the method of correlations is effective even if exponential weights are allowed; i.e., if a function has exponentially small correlation with every function from a pool of possible input functions, then one would require exponentially many inputs to implement the given function using a threshold gate. (2) If the input functions are not mutually orthogonal, then the method of correlations need not be effective; i.e., one can construct examples where the output function has exponentially small correlation with every input function, and yet it can be implemented as a threshold function of polynomially many input functions.

Furthermore, the constructive procedures can also be considered as constituting a preliminary answer to the following question: Given an n-variable boolean function f, are there efficient procedures for expressing it as a threshold function of polynomially many (in n) input functions? A procedure for so decomposing a given function f will be referred to as a threshold-decomposition procedure; moreover, a decomposition procedure can be considered efficient if the input functions have simpler threshold implementations than f (i.e., easier to implement or requiring less depth/size). Constructions 1 and 2 present two such threshold-decomposition procedures. At present, the efficiency of these constructions is not clear and further work is necessary. We hope, however, that the general methodology introduced here may lead to subsequent work resulting in more efficient threshold-decomposition procedures.
4 Concluding Remarks
We have outlined a new geometric approach for investigating the properties of threshold circuits. In the process, we have developed a unified framework where many of the previous results can be derived simply as special cases, and without introducing too many seemingly difficult concepts. Moreover, we have derived several new results that quantify the input/output relationships of threshold gates, derive lower bounds on the number of input functions required to implement a given function using a threshold gate, and also analyze the limitations of a well-known lower bound technique for threshold circuits.
Acknowledgements

This work was supported in part by the Joint Services Program at Stanford University (US Army, US Navy, US Air Force) under Contract DAAL03-88-C-0011, the SDIO/IST, managed by the Army Research Office under Contract DAAL03-90-G-0108, and the Department of the Navy, NASA Headquarters, Center for Aeronautics and Space Information Sciences under Grant NAGW-419-S6.
References
[1] E. Allender. A note on the power of threshold circuits. IEEE Symp. Found. Comp. Sci., 30, 1989.
[2] Y. Brandman, A. Orlitsky, and J. Hennessy. A spectral lower bound technique for the size of decision trees and two-level AND/OR circuits. IEEE Trans. on Computers, 39(2):282-287, February 1990.
[3] J. Bruck. Harmonic analysis of polynomial threshold functions. SIAM Journal on Discrete Mathematics, May 1990.
[4] A. K. Chandra, L. Stockmeyer, and U. Vishkin. Constant depth reducibility. SIAM J. Comput., 13:423-439, 1984.
[5] C. K. Chow. On the characterization of threshold functions. In Proc. Symp. on Switching Circuit Theory and Logical Design, pages 34-38, 1961.
[6] T. M. Cover. Geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. IEEE Trans. on Electronic Computers, EC-14:326-334, 1965.
[7] A. Hajnal, W. Maass, P. Pudlak, M. Szegedy, and G. Turan. Threshold circuits of bounded depth. IEEE Symp. Found. Comp. Sci., 28:99-110, 1987.
[8] R. J. Lechner. Harmonic analysis of switching functions. In A. Mukhopadhyay, editor, Recent Developments in Switching Theory. Academic Press, 1971.
[9] P. M. Lewis and C. L. Coates. Threshold Logic. John Wiley & Sons, Inc., 1967.
[10] I. Parberry and G. Schnitger. Parallel computation with threshold functions. Journal of Computer and System Sciences, 36(3):278-302, 1988.
[11] J. Reif. On threshold circuits and polynomial computation. In Structure in Complexity Theory Symp., pages 118-123, 1987.
[12] K. Y. Siu and J. Bruck. On the power of threshold circuits with small weights. To appear in SIAM J. Discrete Math.
[13] K. Y. Siu, V. P. Roychowdhury, and T. Kailath. Computing with almost optimal size threshold circuits. Submitted to JCSS, 1990.
Shape-Based Object Localization
for Descriptive Classification
Geremy Heitz1,*
Gal Elidan2,3,*
Ben Packer2,*
Daphne Koller2
1
Department of Electrical Engineering, Stanford University
2
Department of Computer Science, Stanford University
3
Department of Statistics, Hebrew University, Jerusalem
{gaheitz,bpacker,koller}@cs.stanford.edu
[email protected]
Abstract
Discriminative tasks, including object categorization and detection, are central
components of high-level computer vision. Sometimes, however, we are interested in more refined aspects of the object in an image, such as pose or particular
regions. In this paper we develop a method (LOOPS) for learning a shape and
image feature model that can be trained on a particular object class, and used to
outline instances of the class in novel images. Furthermore, while the training data
consists of uncorresponded outlines, the resulting LOOPS model contains a set of
landmark points that appear consistently across instances, and can be accurately
localized in an image. Our model achieves state-of-the-art results in precisely outlining objects that exhibit large deformations and articulations in cluttered natural
images. These localizations can then be used to address a range of tasks, including
descriptive classification, search, and clustering.
1 Introduction
Discriminative questions such as "What is it?" (categorization) and "Where is it?" (detection) are
central to machine vision and have received much attention in recent years. In many cases, we are
also interested in more refined descriptive questions with regards to an object such as "What is it
doing?", "What is its pose?", or "What color is its tail?". For example, we may wish to determine
whether a cheetah is running, or whether a giraffe is bending over to drink. In a shopping scenario,
we might be interested in searching for lamps that have a particular type of lampshade.
In theory it is possible to convert some descriptive questions into discriminative classification tasks
given the appropriate labels. Nevertheless, it is preferable to have a single framework in which we
can answer a range of questions, some of which may not be known at training time, or may not be
discriminative in nature. Intuitively, if we have a good model of what objects in a particular class
"look like" and the range of variation that they exhibit, we can make these descriptive distinctions
more readily, with a small number of training instances. Furthermore, such a model allows us the
flexibility to perform clustering, search, and other forms of exploration of the data.
In this paper, we address the goal of finding precise, corresponded localizations of object classes
in cluttered images while allowing for large deformations. The Localizing Object Outlines using
Probabilistic Shape (LOOPS) method constructs a unified probabilistic model that combines global
shape with appearance-based boosted detectors to define a joint distribution over the location of the
constituent elements on the object. We can then leverage the object?s shape, an important characteristic that can be used for many descriptive distinctions [9], to address our descriptive tasks. The
main challenge is to correspond this model to a novel image while accounting for the possibility of
object deformation and articulation.
Contour-based methods such as active shape/appearance models (AAMs) [4] were developed with
this goal in mind, but typically require good initial guesses and are applied to images with significantly less clutter than real-life photographs. As a result, AAMs have not been successfully used for
* These authors contributed equally to this manuscript.
Figure 1: The stages of LOOPS. The shape model is depicted via principal components corresponding to the
neck and legs, and the ellipse marks one standard deviation from the mean. Red circles show the location of
sample instances in this space. Descriptive tasks other than classification (right box) are described in Section 4.
class-level object recognition/analysis. Some works use geometry as a means toward object classification or detection [11, 2, 17, 21]). Since, for example, a misplaced leg has a negligible effect on
classification, these works do not attempt to optimize localization. Other works (e.g., [3, 12]) do attempt to accurately localize objects in photographs but only allow for relatively rigid configurations,
and cannot capture large deformations such as the articulation of the giraffe?s neck. To the best of
our knowledge, no work uses the consistent localization of parts for descriptive tasks.
Having a representation of the constituent elements of an object should aid in answering descriptive
questions. For example, to decide whether a giraffe is standing upright or bending down to drink, we
can use a specific representation of the head, neck, body, and legs in order to consider their relative
location. We adopt the AAM-like strategy of representing the shape of an object class via an ordered
set of N landmark points that together constitute a piecewise linear contour.
Obtaining corresponded training outlines, however, requires painstaking supervision and we would
like to be able to use readily available simple outlines such as those in the LabelMe dataset. Therefore, before we begin, we need to automatically augment the simple training outlines with a corresponded labeling. That is, we want to transform arbitrary outlines into useful training instances with
consistent elements as depicted in the pipeline of our LOOPS method (Figure 1, first two boxes).
The method we use for this step is reminiscent of Hill and Taylor [14]; we omit the details for lack
of space. Once we have corresponded training outlines, each with N consistent landmarks, we can
construct a distribution of the geometry of the objects? outline as depicted in Figure 1(middle) and
augment this with appearance based features to form a LOOPS model, as described in Section 2.
Given a model, we face the computational challenge of localizing the landmarks in test images in the
face of clutter, large deformations, and articulations (Figure 1, fourth box). In order to overcome the
problem of local maxima faced by contour propagation methods (e.g., [4, 20]), we develop a twostage scheme. We first consider a tractable global search space, consisting of candidate landmark
assignments. This allows a discrete probabilistic inference technique to achieve rough but accurate
localization that robustly explores the multimodal set of solutions allowed by our large deformation
model. We then refine our localization using a continuous hill-climbing approach. This hybrid approach allows LOOPS to deal effectively with complex images of natural scenes, without requiring
a good initialization. Preliminary investigations showed that a simpler approach that does a purely
local search, similar to the AAMs of Cootes et al. [4], was unable to deal with the challenges of
our data. The localization of outlines in test images is described in detail in Section 3. We demonstrate in Section 4 that this localization achieves state-of-the-art results for objects with significant
deformation and articulation in natural images.
Finally, with the localized outlines in hand, we can readily perform a range of descriptive tasks
(classification, ranking, clustering), based on the predicted location of landmarks in test images as
well as appearance characteristics in the vicinity of those landmarks. We demonstrate how this is
carried out for several descriptive tasks in Section 4. We explore the space of applications facilitated
by the LOOPS model across two principal axes. The first concerns the machine learning application:
we present results for classification, search (ranking), and clustering. The second axis varies the
components that are extracted from the LOOPS outlines for these tasks: we show examples that use
the entire object shape, a subcomponent of the object shape, and the appearance of a specific part
of the object. The LOOPS framework allows us to approach any of these tasks with a single model
without the need for retraining.
2 The LOOPS Model
Given a set of training instances, each with N corresponded landmarks, the LOOPS object class
model combines two components: an explicit representation of the object?s shape (2D silhouette),
and a set of image-based features. We define the shape of a class of objects via the locations of
the N object landmarks, each of which is assigned to one of the image pixels. We represent such
an assignment as a 2N vector of image coordinates which we denote by L. Using the language of
Markov random fields [18], the LOOPS model defines a conditional probability distribution over L:
    P(L | I, ω) = (1/Z(I)) P_Shape(L; μ, Σ) Π_i exp( w_i F_i^det(l_i; I) ) Π_{i,j} exp( w_ij F_ij^grad(l_i, l_j; I) )    (1)
where ω = {μ, Σ, w} are the model parameters, and i and j index the model landmarks. P_Shape encodes the (unnormalized) distribution over the object shape (outline), F_i^det(l_i) is a landmark-specific
detector, and F_ij^grad(l_i, l_j; I) encodes a preference for aligning outline segments along image edges.
results are quite robust to the choice of weights and that learning them provides no clear benefit. We
note that our MRF formulation is quite general, and allows for both the incorporation of (possibly
weighted) additional features. For instance, we might want to capture the notion that internal line
segments (lines entirely contained within the object) should have low color variability. This can
naturally be posed as a pairwise feature over landmarks on opposite sides of the object.
We model the shape component of Eq. (1) as a multivariate Gaussian distribution over landmark
locations with mean μ and covariance Σ. The Gaussian parametric form has many attractive properties, and has been used successfully to model shape distributions in a variety of applications
(e.g., [4, 1]). In our context, one particularly useful property is that the Gaussian distribution decomposes into a product of quadratic terms over pairs of variables:
    P_Shape(L | μ, Σ) = (1/Z) Π_{i,j} exp( -(1/2) (x_i - μ_i) Σ^{-1}_{ij} (x_j - μ_j) ) = (1/Z) Π_{i,j} ψ_{i,j}(x_i, x_j; μ, Σ),
where Z is the normalization term. As this equation illustrates, we can specify potentials ψ_{i,j} over
only singletons and pairs of variables and still manage to represent the full shape distribution. This
allows Eq. (1) to take an appealing form in which all terms are defined over at most two variables.
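As a concrete illustration of this factorization, the following minimal numpy sketch (all names are ours, not from the LOOPS implementation) evaluates the unnormalized log of P_Shape once as a full quadratic form and once as a sum of pairwise terms over 2 × 2 blocks of Σ^{-1}; the two values agree:

```python
import numpy as np

def shape_log_potential(L, mu, Sigma_inv):
    # Full quadratic form: -1/2 (L - mu)^T Sigma^{-1} (L - mu), with the
    # 2N landmark coordinates stacked into a single vector.
    d = (L - mu).reshape(-1)
    return -0.5 * d @ Sigma_inv @ d

def shape_log_potential_pairwise(L, mu, Sigma_inv):
    # The same value, accumulated over 2x2 blocks of Sigma^{-1}, one block
    # per landmark pair (i, j) -- the psi_{i,j} factorization in the text.
    d = L - mu                                   # (N, 2) deviations
    total = 0.0
    for i in range(len(d)):
        for j in range(len(d)):
            block = Sigma_inv[2*i:2*i+2, 2*j:2*j+2]
            total += -0.5 * d[i] @ block @ d[j]
    return total

rng = np.random.default_rng(0)
N = 5
A = rng.normal(size=(2*N, 2*N))
Sigma_inv = A @ A.T + np.eye(2*N)                # positive definite
L, mu = rng.normal(size=(N, 2)), rng.normal(size=(N, 2))
assert np.isclose(shape_log_potential(L, mu, Sigma_inv),
                  shape_log_potential_pairwise(L, mu, Sigma_inv))
```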
As we discuss below in Section 3, the procedure to locate the model landmarks in an image first
involves discrete global inference using the LOOPS model, followed by a local refinement stage.
Even if we limit ourselves to pairwise terms, performing discrete inference in a densely connected
MRF may be computationally impractical. Unfortunately, a general multivariate Gaussian includes
pairwise terms between all landmarks. Thus, during the discrete inference stage, we limit the number
of pairwise elements by approximating the shape distribution with a sparse multivariate Gaussian.
(During the final refinement stage, we use the full distribution.) To obtain the sparsity pattern, we
choose a linear number of landmark pairs whose relative locations have the lowest variance across
the training instances (and require that neighbor pairs be included), promoting shape stability. The
sparse Gaussian is then obtained by using a gradient method to minimize the KL distance to the full
distribution subject to the entries corresponding to the chosen pairs being 0.
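A minimal sketch of this pair-selection heuristic (illustrative code with our own names; the actual system additionally forces neighboring pairs to be included):

```python
import numpy as np

def lowest_variance_pairs(train_L, num_pairs):
    """Pick landmark pairs whose relative positions vary least across the
    training set. train_L: (T, N, 2) corresponded training landmarks.
    Returns the num_pairs pairs (i, j), i < j, with the smallest total
    variance of the offset vector l_i - l_j.
    """
    T, N, _ = train_L.shape
    scores = []
    for i in range(N):
        for j in range(i + 1, N):
            offsets = train_L[:, i, :] - train_L[:, j, :]   # (T, 2)
            scores.append((offsets.var(axis=0).sum(), (i, j)))
    scores.sort(key=lambda s: s[0])
    return [pair for _, pair in scores[:num_pairs]]
```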
To construct detector features F^det, we build on the success of boosting in state-of-the-art object
detection methods [17, 22]. Specifically, we use boosting to learn a strong detector (classifier) H_i
for each landmark i. We then define the feature value in the conditional MRF for the assignment of
landmark i to pixel l_i to be F_i^det(l_i; I) = H_i(l_i).
For weak detectors we use features that are based on our shape model as well as other features that
have proven useful for the task of object detection: shape templates [5], boundary fragments [17],
filter response patches [22], and SIFT descriptors [16]. The weak detector h_i^t(l_i) is one of these
features chosen at round t of boosting that best predicts whether landmark i is at a particular pixel
l_i. Boosting yields a strong detector of the form H_i(l_i) = Σ_{t=1}^T α_t h_i^t(l_i).
The pairwise feature F_ij^grad(l_i, l_j; I) = Σ_{r ∈ l_i l_j} |g(r) · n(l_i, l_j)| sums over the segment between
adjacent landmarks, where g(r) is the image gradient at point r, and n(l_i, l_j) is the segment normal.
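This feature is straightforward to compute by sampling points along the segment; a sketch, assuming precomputed gradient maps gx and gy (function and argument names are ours):

```python
import numpy as np

def grad_feature(li, lj, gx, gy, num_samples=20):
    """Edge-alignment feature F_grad between adjacent landmarks (a sketch).

    Sums |g(r) . n| over points r sampled on the segment li -> lj, where n
    is the segment normal; li, lj are (x, y) pixel coordinates.
    """
    li, lj = np.asarray(li, float), np.asarray(lj, float)
    seg = lj - li
    n = np.array([-seg[1], seg[0]])              # segment normal
    n /= np.linalg.norm(n) + 1e-12
    total = 0.0
    for t in np.linspace(0.0, 1.0, num_samples):
        x, y = np.round(li + t * seg).astype(int)
        y = np.clip(y, 0, gx.shape[0] - 1)
        x = np.clip(x, 0, gx.shape[1] - 1)
        total += abs(gx[y, x] * n[0] + gy[y, x] * n[1])   # |g(r) . n|
    return total
```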
Figure 2: Example outlines predicted using (a) the top detection for each landmark independently (Candidate), (b) discrete inference (Discrete), and (c) a continuous refinement of (b) (Refinement).

3 Localization of Object Outlines
We now address our central computational challenge: assigning the landmarks of a LOOPS model to
test image pixels while allowing for large deformations and articulations. Recall that the conditional
MRF defines a distribution (Eq. (1)) over assignments of model landmarks to pixels. This allows us
to outline objects by using probabilistic inference to find the most probable such assignment:
    L* = argmax_L P(L | I, w)
Because, in principle, each landmark can be assigned to any pixel, finding L* is computationally
prohibitive. One option is to use an approach analogous to active shape models, using a greedy
method to deform the model from a fixed starting point. However, unlike most applications of active
shape/appearance models (e.g., [4]), our images have significant clutter, and such an approach will
quickly get trapped in an inferior local maxima. A possible solution to this problem is to consider
a series of starting points. Preliminary experiments along these lines (not shown for lack of space),
however, showed that such an approach requires a computationally prohibitive number of starting
points to effectively localize even rigid objects. Furthermore, large articulations were not captured
even with the "correct" starting point (placing the mean shape in the center of the true location). To
overcome these limitations, we propose an alternative two step method, depicted in Figure 2: we
first approximate our problem and find a coarse solution using discrete inference; we then refine our
solution using continuous optimization and the full objective defined by Eq. (1).
We cannot directly perform inference over the entire search space of assignments of the N model
landmarks to P pixels. To prune this space, we first assume that landmarks will fall on "interesting" points, and consider only candidate pixels (typically 1000-2000 per image) found by the
SIFT interest operator [16]. We then use the appearance-based features F_i^det to rank the pixel candidates and choose the top K (25) candidate pixels for each landmark. Even with this pruned space,
the inference problem is quite daunting, so we further approximate our objective by sparsifying the
multivariate Gaussian shape distribution, as mentioned in Section 2. The only pairwise feature functions we use are over neighboring pairs of landmarks (as described in Section 2), which does not add
to the density of the MRF construction, thus allowing the inference procedure to be tractable. We
perform approximate max-product inference using the Residual Belief Propagation (RBP) algorithm
[6] to find the most likely assignment of landmarks to pixels L* in the pruned space.
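The pruning step that precedes the discrete inference can be written compactly; a sketch (names are ours; the max-product belief propagation over the resulting N × K candidate grid is not shown):

```python
import numpy as np

def top_candidates(det_scores, interest_points, K=25):
    """Keep the K interest points with the highest detector response for
    each landmark. det_scores: (N, P) responses of N landmark detectors at
    P interest points; interest_points: (P, 2) pixel coordinates.
    """
    order = np.argsort(-det_scores, axis=1)[:, :K]   # (N, K) best indices
    return interest_points[order]                    # (N, K, 2) candidates
```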
Given the best assignment L* predicted in the discrete stage, we perform a refinement stage in which
we reintroduce the entire pixel domain and use the full shape distribution. Refinement involves a
greedy hill-climbing algorithm in which we iterate across each landmark, moving it to the best candidate location using one of two types of moves, while holding the other landmarks fixed. In a local
move, each landmark picks the best pixel in a small window around its current location. In a global
move, each landmark can move to its mean location given all the other landmark assignments; this
location is the mean of the conditional Gaussian P_Shape(l_i | L \ l_i), easily computed from the joint
shape Gaussian. In a typical refinement, the global moves dominate the early iterations, correcting large mistakes made by the discrete stage and that resulted in an unlikely shape. In the later
iterations, local moves do most of the work by carefully adapting to the local image characteristics.
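The global move relies on the standard conditional mean of a Gaussian; a minimal sketch under our own variable names:

```python
import numpy as np

def conditional_mean(i, L, mu, Sigma):
    """Mean of the conditional Gaussian of landmark i given all the other
    landmarks, used for the 'global move'. Coordinates are stacked into
    2N-vectors, with landmark i occupying entries 2i and 2i+1.
    """
    idx = np.array([2 * i, 2 * i + 1])
    rest = np.setdiff1d(np.arange(len(mu)), idx)
    S_ir = Sigma[np.ix_(idx, rest)]
    S_rr = Sigma[np.ix_(rest, rest)]
    return mu[idx] + S_ir @ np.linalg.solve(S_rr, L[rest] - mu[rest])
```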
4 Experimental Results
Our experimental evaluation is aimed at demonstrating the ability of a single LOOPS model to
perform a range of tasks based on corresponded localization of objects. In the experiments in the
following sections, we train on 20 instances of each class and test on the rest, and report results averaged over 5 random train/test partitions. For the "airplane" image class, we selected examples from
the Caltech airplanes image set [8]; the other classes were gathered for this paper. More detailed
results, including more object classes and scenes, and an analysis of outline accuracy, appear in [13].
Full image results appear at http://ai.stanford.edu/~gaheitz/Research/Loops.
Figure 3: Randomly selected outlines produced by LOOPS and its two competitors (panels: LOOPS, OBJ CUT, kAS Detector), displaying the variation in the four classes considered in our descriptive classification experiments.
Class      LOOPS   OBJ CUT   kAS
Airplane    2.0      6.0      3.9
Cheetah     5.2     12.7     11.9
Giraffe     2.9     11.7      8.9
Lamp        2.9      7.5      5.8

Table 1: Normalized symmetric root mean squared (rms) outline error. We report the rms of the distance from each point on the outline to the nearest point on the groundtruth (and vice versa), as a percentage of the groundtruth bounding box diagonal.
Accurate Outline Localization
In order for a LOOPS model to achieve its goals of classification, search and clustering based on
characteristics of the shape or shape-localized appearance, it is necessary for our localization to be
accurate at a more refined level than the bounding box prediction that is typical in the literature. We
first evaluate the ability of our model to produce accurate outlines in which the model?s landmarks
are positioned consistently across test images.
We compare LOOPS to two state-of-the-art methods that seek to produce accurate object outlines
in cluttered images: the OBJ CUT model of Prasad and Fitzgibbon [19] and the kAS Detector of
Ferrari et al. [12]. Both methods were updated to fit our data with help from the authors (P. Kumar,
V. Ferrari; personal communications). Unlike both OBJ CUT and LOOPS, the kAS Detector only
requires bounding box supervision for the training images rather than full outlines. To provide a
quantitative evaluation of the outlines, we measured the symmetric root mean squared (rms) distance
between the produced outlines and the hand-labeled groundtruth. As we can see both qualitatively in
Figure 3 and quantitatively in Table 1, LOOPS produces significantly more accurate outlines than its
competitors. Figure 3 shows two example test images with the outlines for each of the four classes
we considered here. While in some cases the LOOPS outline is not perfect at the pixel level, it
usually captures the correct articulation, pose, and shape of the object.
Descriptive Classification with LOOPS Outlines
Our goal is to use the predicted LOOPS outlines for distinguishing between two configurations of
an object. To accomplish this, we first train the joint shape and appearance model and perform
inference to localize outlines in the test images, all without knowledge of the classification task or
any labels. Representing each instance as a corresponded outline provides information that can be
leveraged much more easily than the pixel-based representation. We then incorporate the labels to
train a descriptive classifier given a corresponded localization.
To classify a test image, we used a nearest neighbor classifier, based on chamfer distance. The
distance is computed efficiently by converting the training contours into an "ideal" edge image and
computing the distance transform of this edge image. The LOOPS outlines are then classified based
on their mean distance to each training contour. In addition, we include a GROUND measure
that uses the landmark coordinates of manually corresponded groundtruth outlines as features in a
logistic regression classifier. This serves as a rough upper bound on the performance achievable by relying on outlines. In practice, LOOPS can outperform GROUND if the classifier picks up on signals from the automatically chosen landmarks.

Figure 4: Descriptive classification results. LOOPS is compared to the Naive Bayes and boosted Centroid classifier baselines as well as the state-of-the-art OBJ CUT and kAS Detector methods. GROUND uses manually labeled outlines and approximately upper bounds the performance achievable from outlines. For both lamp tasks, the same LOOPS, OBJ CUT, and kAS Detector models and localizations are used. Note that unlike the other methods, the kAS Detector requires only bounding box supervision rather than full outlines.
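The chamfer-based nearest-neighbor scoring described above can be implemented with a distance transform; a sketch assuming SciPy is available (helper names are ours):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(outline_pts, train_edge_mask):
    """Mean distance from predicted outline points to the nearest
    training-contour pixel, via a precomputed distance transform.

    outline_pts: (M, 2) integer (row, col) points sampled on the LOOPS
    outline; train_edge_mask: boolean image, True on the training contour.
    """
    dt = distance_transform_edt(~train_edge_mask)   # distance to nearest edge
    return dt[outline_pts[:, 0], outline_pts[:, 1]].mean()
```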
In addition to the kAS Detector and OBJ CUT competitors, we introduce two baseline techniques
for comparison. The first is a Naive Bayes classifier that uses a codebook of SIFT features as in
[7]. The second uses a discriminative approach based on the Centroid detector described above,
which is similar to the detector used by [22]; we train the descriptive classifier based on the vector
of feature responses at the predicted object centroid.
Figure 4 (top) shows the classification results for three tasks: giraffes standing vs. bending down;
cheetahs running vs. standing; and airplanes taking off vs. flying horizontally. The first two tasks
depend on the articulation of the object, while the third depends on its pose. (In this last task, where
rotation is the key feature, we only normalize for translation and scale when performing Procrustes
alignment.) Classification performance is shown as a function of the number of labeled instances.
For all three tasks, LOOPS (solid blue) outperforms both baselines as well as the state-of-the-art
competitors. Importantly, by making use of the outline predicted in a cluttered image, we surpass
the fully supervised baselines (rightmost on the graphs) with as little as a single supervised instance
(leftmost on the graphs).
Once we have outlined instances, an important benefit of the LOOPS method is that we can in fact
perform multiple descriptive tasks with the same object model. We demonstrate this with a pair
of classification tasks for the lamp object class, presented in Figure 4(bottom). The tasks differ in
which "part" of the object we consider for classification: triangular vs. rectangular lamp shade; and
thin vs. fat lamp base. By including a few examples in the labeled set, our classifier can learn to
consider only the relevant portion of the shape. We stress that both the learned lamp model and
the test localizations predicted by LOOPS are the same for both tasks. Only the label set and the
resulting nearest-neighbor classifier change. The consequences of this result are promising: we can
do most of the work once, and then readily perform a range of descriptive classification tasks.
Shape Similarity Search
The second descriptive application area that we consider is similarity search, which involves the
ranking of test instances based on their similarity to a search query. A shopping website, for example, might wish to allow a user to organize the examples in a database according to similarity to
a query product. The similarity measure can be any feature that is easily extracted from the image
while leveraging the predicted LOOPS outline. The experimental setup is as follows. Offline, we
train a LOOPS model for the object class and localize corresponded outlines in the test images. Online, a user chooses a test instance to serve as a "query" image and a similarity metric to use. We search for the test images that are most similar to the query, and return the ranked list of images.

Figure 5: Clustering airplanes. (a) A sample cluster using color features of the entire airplane. (b,c) The clusters containing the first two instances of (a) when using only the colors of the tail as predicted by LOOPS.

Figure 6: (left) Object similarity search using the LOOPS output to determine the location of the lamp landmarks. (top row) Searching the test database using full shape similarity to the query object on the left; (second row) evaluating similarity only using the landmarks that correspond to the lamp shade; (third row) search focused only on the lamp base; (bottom row) using color similarity of the lamp shade to rank the search results.
Figure 6 shows an example from the lamp dataset. Users select a query lamp instance, a subset
of landmarks (possibly all), and whether to use shape or color. Each instance in the dataset is
then ranked based on Euclidean distance to the query in shape PCA space or LAB color space as
appropriate. The top row shows a full-shape search, where the left-most image is the query instance
and the others are ordered by decreasing similarity. The second row shows the ranking when the user
decides to focus on the lampshade landmarks, yielding only triangular lamp shades, and the third
row focuses on the lamp base, returning only wide bases. Finally, the bottom row shows a search
based on the color of the shade. In all of these examples, by projecting the images into LOOPS
outlines, similarity search desiderata were easily specified and effectively taken into account. The
similarity of interest in all of these cases is hard to specify without a predicted outline.
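Once instances are projected onto the LOOPS outline features, the ranking itself is trivial; a sketch:

```python
import numpy as np

def rank_by_similarity(query_vec, db_vecs):
    """Return database indices ordered from most to least similar to the
    query, by Euclidean distance in the chosen feature space (shape-PCA
    coordinates or mean LAB color of the selected landmarks).
    """
    return np.argsort(np.linalg.norm(db_vecs - query_vec, axis=1))
```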
Descriptive Clustering
Finally, we consider clustering a database by leveraging on the LOOPS predicted outlines. As
an example, we consider a large database of airplane images, and wish to group our images into
"similar looking" sets of airplanes. Clustering based on shape might produce clusters corresponding
to passenger jets, fighter jets, and small propeller airplanes. In this section, we consider an outline
and appearance based clustering where the feature vector for each airplane includes the mean color
values in the LAB color space for all pixels inside the airplane boundary (or in a region bounded by
a user-selected set of landmarks). To cluster images based on this vector, we use standard K-means.
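A sketch of this clustering pipeline, assuming LAB-converted images and binary region masks derived from the LOOPS outlines (all helper names are ours):

```python
import numpy as np

def mean_lab_features(lab_images, masks):
    # Mean (L, a, b) inside each outline-derived mask; masks may be
    # restricted to a user-selected set of landmarks, e.g. the tail region.
    return np.stack([lab[m].mean(axis=0) for lab, m in zip(lab_images, masks)])

def kmeans(X, k, iters=50, seed=0):
    # Standard K-means on the color feature vectors.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```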
Figure 5(left) shows 12 examples from one cluster that results from clustering using the entire plane,
for a database of 770 images from the Caltech airplanes image set [8]. Despite the fact that the cluster
is coherent when considering the whole plane (not shown), zooming in on the tails reveals that the
tails are quite heterogeneous in appearance. Figure 5(middle) and (right) show the tails for the two
clusters that contain the first two instances from Figure 5(left), when using only the tail region for
clustering. The coherence of the tail appearance is apparent in this case, and both clusters group
many tails from the same airlines. In order to perform such coherent clustering of airplane tails,
one needs first to accurately localize the tail in test images. Even more than the table lamp ranking
task presented above, this example highlights the ability of LOOPS to leverage localize appearance,
opening the door for many additional shape and appearance based descriptive tasks.
5 Discussion and Future Work
In this work we presented the Localizing Object Outlines using Probabilistic Shape (LOOPS) approach for obtaining accurate, corresponded outlines of objects in test images, with the goal of
performing a variety of descriptive tasks. Our approach relies on a coherent probabilistic model in
which shape is combined with discriminative detectors. We showed how the produced outlines can
be used to perform descriptive classification, search, and clustering based on shape and localized
appearance, and we evaluated the error of our outlines compared to two state-of-the-art competitors.
For the classification tasks, we showed that our method is superior to fully supervised competitors
with as little as a single labeled example.
Our contribution is threefold. First, we introduce a model that combines both generative and discriminative elements, allowing us to localize precise outlines of highly articulated objected in cluttered
natural images. Second, in order to achieve this localization, we present a hybrid global-discrete then
local-continuous optimization approach to the model-to-image correspondence problem. Third, we
demonstrate that precise localization is of value for a range of descriptive tasks, including those that
are based on appearance.
Several existing methods produce outlines either as a by-product of detection (e.g., [3, 17, 21]) or
as a targeted goal (e.g., [12, 19]). In experiments above, we compared LOOPS to two state-of-the-art methods. We showed that LOOPS produces far more accurate outlines when dealing with
significant object deformation and articulation, and demonstrated that it is able to translate this into
superior classification rates for descriptive tasks. No other work that considers object classes in
natural images has demonstrated a combination of accurate localization and shape analysis that has
solved these problems.
There are further directions to pursue. We would like to automatically learn coherent parts of objects (e.g., the neck of the giraffe) as a set of landmarks that articulate together, and achieve better
localization by estimating a distribution over part articulation (e.g., synchronized legs). A natural
extension of our model is a scene-level variant in which each object is treated as a ?landmark.? The
geometry of such a model will then capture relative spatial location and orientations so that we can
answer questions such as whether a man is walking the dog, or whether the dog is chasing the man.
Acknowledgements This work was supported by the DARPA Transfer Learning program under contract number FA8750-05-2-0249 and the Multidisciplinary University Research Initiative (MURI),
contract number N000140710747, managed by the Office of Naval Research. We would also like to
thank Vittorio Ferrari and Pawan Kumar for providing us code and helping us to get their methods
working on our data.
References
[1] D. Anguelov, P. Srinivasan, D. Koller, S. Thrun, J. Rodgers, and J. Davis. Scape: shape completion and animation of people. SIGGRAPH, '05.
[2] A. Bar-Hillel, T. Hertz, and D. Weinshall. Efficient learning of relational object class models. ICCV, '05.
[3] A. Berg, T. Berg, and J. Malik. Shape matching and object recognition using low distortion correspondence. CVPR, '05.
[4] T. Cootes, G. Edwards, and C. Taylor. Active appearance models. ECCV, '98.
[5] G. Elidan, G. Heitz, and D. Koller. Learning object shape: From cartoons to images. CVPR, '06.
[6] G. Elidan, I. McGraw, and D. Koller. Residual belief propagation: Informed scheduling for async. message passing. UAI, '06.
[7] L. Fei-Fei and P. Perona. A bayesian hier. model for learning natural scene categories. CVPR, '05.
[8] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: an incremental bayesian approach tested on 101 object categories. CVPR, '04.
[9] P. Felzenszwalb and D. Huttenlocher. Efficient matching of pictorial structures. CVPR, '00.
[10] P. Felzenszwalb and J. Schwartz. Hierarchical matching of deformable shapes. CVPR, '07.
[11] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. CVPR, '03.
[12] V. Ferrari, F. Jurie, and C. Schmid. Accurate object detection with deformable shape models learnt from images. CVPR, '07.
[13] G. Heitz, G. Elidan, B. Packer, and D. Koller. Shape-based object localization for descriptive classification. Technical report, available at http://ai.stanford.edu/~gaheitz/Research/Loops/TR.pdf
[14] A. Hill and C. Taylor. Non-rigid corresp. for automatic landmark identification. BMVC, '96.
[15] A. Holub and P. Perona. A discriminative framework for modeling object class. CVPR, '05.
[16] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, '03.
[17] A. Opelt, A. Pinz, and A. Zisserman. Incremental learning of object detectors using a visual shape alphabet. CVPR, '06.
[18] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, '88.
[19] M. Prasad and A. Fitzgibbon. Single view reconstruction of curved surfaces. CVPR, '06.
[20] J. Sethian. Level Set Methods and Fast Marching Methods. Cambridge, '98.
[21] J. Shotton, A. Blake, and R. Cipolla. Contour-based learning for object detection. ICCV, '05.
[22] A. Torralba, K. Murphy, and W. Freeman. Contextual models for object detection using boosted random fields. NIPS, '05.
2,803 | 3,541 | Deep Learning with Kernel Regularization
for Visual Recognition
Kai Yu
Wei Xu
Yihong Gong
NEC Laboratories America, Cupertino, CA 95014, USA
{kyu, wx, ygong}@sv.nec-labs.com
Abstract
In this paper we aim to train deep neural networks for rapid visual recognition.
The task is highly challenging, largely due to the lack of a meaningful regularizer on the functions realized by the networks. We propose a novel regularization
method that takes advantage of kernel methods, where an oracle kernel function
represents prior knowledge about the recognition task of interest. We derive an efficient algorithm using stochastic gradient descent, and demonstrate encouraging
results on a wide range of recognition tasks, in terms of both accuracy and speed.
1 Introduction
Visual recognition remains a challenging task for machines. This difficulty stems from the large
pattern variations under which a recognition system must operate. The task is extremely easy for a
human, largely due to the expressive deep architecture employed by human visual cortex systems.
Deep neural networks (DNNs) are argued to have a greater capacity to recognize a larger variety of
visual patterns than shallow models, because they are considered biologically plausible.
However, training deep architectures is difficult because the large number of parameters to be tuned
necessitates an enormous amount of labeled training data that is often unavailable. Several authors
have recently proposed training methods by using unlabeled data. These methods perform a greedy
layer-wise pre-training using unlabeled data, followed by a supervised fine-tuning [9, 4, 15]. Even
though the strategy notably improves the performance, to date, the best reported recognition accuracy on popular benchmarks such as Caltech101 by deep models is still largely behind the results of
shallow models.
Beside using unlabeled data, in this paper we tackle the problem by leveraging additional prior
knowledge. In the last few decades, researchers have developed successful kernel-based systems
for a wide range of visual recognition tasks. Those sensibly-designed kernel functions provide
an extremely valuable source of prior knowledge, which we believe should be exploited in deep
learning. In this paper, we propose an informative kernel-based regularizer, which makes it possible
to train DNNs with prior knowledge about the recognition task.
Computationally, we propose to solve the learning problem using stochastic gradient descent (SGD),
as it is the de facto method for neural network training. To this end we transform the kernel regularizer into a loss function represented as a sum of costs by individual examples. This results in a
simple multi-task architecture where a number of extra nodes at the output layer are added to fit a
set of auxiliary functions automatically constructed from the kernel function.
We apply the described method to train convolutional neural networks (CNNs) for a wide range of
visual recognition tasks, including handwritten digit recognition, gender classification, ethnic origin
recognition, and object recognition. Overall our approach exhibits excellent accuracy and speed on
all of these tasks. Our results show that incorporation of prior knowledge can boost the performance
of CNNs by a large margin when the training set is small or the learning problem is difficult.
2 DNNs with Kernel Regularization
In our setting, the learning model, a deep neural network (DNN), aims to learn a predictive function
f : X → R that can achieve a low expected discrepancy E[ℓ(y, f(x))] over the distribution p(x, y).
In the simplest case Y = {-1, 1} and ℓ(·, ·) is a differentiable hinge loss. Based on a set of labeled
examples [(x_i, y_i)]_{i=1}^n, the learning is by minimizing a regularized loss

    L(θ, β) = Σ_{i=1}^n ℓ(y_i, β_1^T φ_i + β_0) + γ ||β_1||^2    (1)
where φ_i = φ(x_i; θ) maps x_i to q-dimensional hidden units via a nonlinear deep architecture
with parameters θ, including the connection weights and biases of all the intermediate layers;
β = {β_1, β_0}, where β_1 includes all the parameters of the transformation from the last hidden layer to
the output layer and β_0 is a bias term; γ > 0, and ||a||^2 = tr(a^T a) is the usual weight decay regularization. Applying the well-known representer theorem, we derive the equivalence to a kernel
system:1
    L(α, β_0, θ) = Σ_{i=1}^n ℓ(y_i, Σ_{j=1}^n α_j K_{i,j} + β_0) + γ Σ_{i,j=1}^n α_i α_j K_{i,j}    (2)

where the kernel is computed by

    K_{i,j} = <φ(x_i; θ), φ(x_j; θ)> = φ_i^T φ_j
We assume the network is provided with some prior knowledge, in the form of an m × m kernel
matrix Ω, computed on the n labeled training data, plus possibly additional m - n unlabeled data if
m > n. We exploit this prior knowledge via imposing a kernel regularization on K(θ) = [K_{i,j}]_{i,j=1}^m,
such that the learning problem seeks
Problem 2.1.

    min_{θ,β} L(θ, β) + λ R(θ)    (3)

where λ > 0 and the regularizer R(θ) is defined by

    R(θ) = tr[K(θ)^{-1} Ω] + log det[K(θ)]    (4)
This is a case of semi-supervised learning if m > n. Though R is non-convex w.r.t. K, it has a
unique minimum at K = Ω if Ω ≻ 0, suggesting that minimizing R(θ) encourages K to approach
Ω. The regularization can be explained from an information-theoretic perspective. Let p(f | K) and
p(f | Ω) be two Gaussian distributions N(0, K) and N(0, Ω).2 R(θ) is related to the KL-divergence
D_KL[p(f | Ω) || p(f | K)]. Therefore, minimizing R(θ) forces the two distributions to be close. We
note that the regularization does not require Ω to be positive definite; it can be semidefinite.
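Numerically, the regularizer and its minimum at K = Ω are easy to verify; a minimal numpy sketch (the name R follows our reconstruction of the notation):

```python
import numpy as np

def kernel_regularizer(K, Omega):
    # R = tr(K^{-1} Omega) + log det K; up to additive constants this equals
    # 2 * KL( N(0, Omega) || N(0, K) ), so it is minimized at K = Omega.
    _, logdet = np.linalg.slogdet(K)
    return np.trace(np.linalg.solve(K, Omega)) + logdet

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)); Omega = A @ A.T + np.eye(6)
B = rng.normal(size=(6, 6)); K = B @ B.T + np.eye(6)
assert kernel_regularizer(Omega, Omega) <= kernel_regularizer(K, Omega)
```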
3 Kernel Regularization via Stochastic Gradient Descent
The learning problem in Eq. (3) can be solved by using gradient-based methods. In this paper we
emphasize large-scale optimizations using stochastic gradient descent (SGD), because the method
is fast when the size m of total data is large and backpropagation, a typical SGD, has been the de
facto method to train neural networks for large-scale learning tasks.
SGD considers the problem where the optimization cost is the sum of the local cost of each individual training example. A standard batch gradient descent updates the model parameters by using
the true gradient summed over the whole training set, while SGD approximates the true gradient by
the gradient caused by a single random training example. Therefore, the parameters of the model
are updated after each training example. For large data sets, SGD is often much faster than batch gradient descent.

1 In this paper we slightly abuse the notation, i.e., we use L to denote different loss functions. However, their meanings should be uniquely identified by checking the input parameters.
2 From a Gaussian process point of view, a kernel function defines the prior distribution of a function f, such that the marginal distribution of the function values f on any finite set of inputs is a multivariate Gaussian.
However, because the regularization term defined by Eq. (4) does not consist of a cost function that
can be expressed as a sum (or an average) over data examples, SGD is not directly applicable. Our
idea is to transform the problem into an equivalent formulation that can be optimized stochastically.
3.1 Shrinkage on the Kernel Matrix
We consider a large-scale problem where the data size m may grow over time, while the size of the
last hidden layer (q) of the DNN is fixed. Therefore the computed kernel K can be rank deficient.
In order to ensure that the trace term in R(θ) is well-defined, and that the log-determinant term is
bounded from below, we instead use K + εI to replace K in R(θ), where ε > 0 is a small shrinkage
parameter and I is an identity matrix. Thus the log-determinant acts on a much smaller q × q matrix:3

    log det(K + εI) = log det(Φ^T Φ + εI) + const

where Φ = [φ_1, ..., φ_m]^T and const = (m - q) log ε. Omitting all the irrelevant constants, we
then turn the kernel regularization into

    R(θ) = tr[(Φ Φ^T + εI)^{-1} Ω] + log det(Φ^T Φ + εI)    (5)
The kernel shrinkage not only remedies the ill-posedness, but also yields other conveniences in our
later development.
3.2 Transformation of the Log-determinant Term
By noticing that Φ^T Φ = Σ_{i=1}^m φ_i φ_i^T is a sum of quantities over data examples, we move it outside
of the log determinant for the convenience of SGD.
Theorem 3.1. Consider min_θ {L(θ) = h(θ) + g(a)}, where g(·) is concave and a ≡ a(θ) is a
function of θ; if its local minimum w.r.t. θ exists, then the problem is equivalent to

    min_{θ,Λ} L(θ, Λ) = h(θ) + a(θ)^T Λ - g*(Λ)    (6)

where g*(Λ) is the conjugate function of g(a), i.e. g*(Λ) = min_a {Λ^T a - g(a)}.4

Proof. For a concave function g(a), the conjugate function of its conjugate function is itself,
i.e., g(a) = min_Λ {a^T Λ - g*(Λ)}. Since g*(Λ) is concave, a^T Λ - g*(Λ) is convex w.r.t. Λ
and has the unique minimum g(a). Therefore minimizing L(θ, Λ) w.r.t. θ and Λ is equivalent to
minimizing L(θ) w.r.t. θ.
Since the log-determinant is concave for q × q positive definite matrices A, the conjugate function
of log det(A) is log det(Λ) + q. We can use the above theorem to transform any loss function
containing log det(A) into another loss, which is an upper bound and involves A in a linear term.
Therefore the log-determinant in Eq. (5) is turned into a variational representation

    log det(Φ^T Φ + εI) = min_{Λ ∈ S_q^+} [ Σ_{i=1}^m φ_i^T Λ φ_i + ε tr(Λ) - log det(Λ) ] + const

where Λ ∈ S_q^+ is a q × q positive definite matrix, and const = -q. As we can see, the upper bound
is a convex function of the auxiliary variable Λ and, more importantly, it amounts to a sum of local
quantities caused by each of the m data examples.
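The bound and its tightness at Λ = (Φ^T Φ + εI)^{-1} can be checked numerically; a sketch:

```python
import numpy as np

def logdet_bound(Phi, eps, Lam):
    # sum_i phi_i^T Lam phi_i + eps*tr(Lam) - log det(Lam) - q; an upper
    # bound on log det(Phi^T Phi + eps*I), tight at Lam = (Phi^T Phi + eps*I)^{-1}.
    q = Phi.shape[1]
    quad = np.einsum('ij,jk,ik->', Phi, Lam, Phi)
    _, ld = np.linalg.slogdet(Lam)
    return quad + eps * np.trace(Lam) - ld - q

rng = np.random.default_rng(1)
Phi, eps = rng.normal(size=(40, 6)), 0.1
A = Phi.T @ Phi + eps * np.eye(6)
_, exact = np.linalg.slogdet(A)
assert np.isclose(logdet_bound(Phi, eps, np.linalg.inv(A)), exact)  # tight
assert logdet_bound(Phi, eps, np.eye(6)) >= exact                   # bound
```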
3 Hereafter in this paper, with a slight abuse of notation, we use "const" in equations to summarize the terms irrelevant to the variables of interest.
4 If g(a) is convex, its conjugate function is g*(Λ) = max_a {Λ^T a - g(a)}.
3.3 Transformation of the Trace Term
We assume that the kernel matrix Ω is presented in a decomposed form Ω = U U^T, with U =
[u_1, ..., u_m]^T, u_i ∈ R^p, and p ≤ m. We have found that the trace term can be cast as a variational
problem by introducing a q × p auxiliary variable matrix B.
Proposition 3.1. The trace term in Eq. (5) is equivalent to a convex variational representation

    tr[(Φ Φ^T + εI)^{-1} Ω] = min_{B ∈ R^{q×p}} Σ_{i=1}^m (1/ε) ||u_i - B^T φ_i||^2 + ||B||_F^2
Proof. We first obtain the analytical solution B* = (Φ^T Φ + εI)^{-1} Φ^T U, where the variational
representation reaches its unique minimum. Then, plugging it back into the function, we have

    (1/ε) ||U - Φ B*||_F^2 + ||B*||_F^2 = (1/ε) tr[ U^T U - U^T Φ (Φ^T Φ + εI)^{-1} Φ^T U ]
                                        = tr[ (Φ Φ^T + εI)^{-1} U U^T ]

where the last step is derived by applying the Woodbury matrix identity.
Again, we note that the upper bound is a convex function of B, and consists of a sum of local costs
over data examples.
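The identity in Proposition 3.1 is easy to confirm numerically; a sketch:

```python
import numpy as np

def trace_via_regression(Phi, U, eps):
    # Closed-form minimum of the least-squares surrogate; equals
    # tr[(Phi Phi^T + eps*I)^{-1} U U^T] by the Woodbury identity.
    q = Phi.shape[1]
    B = np.linalg.solve(Phi.T @ Phi + eps * np.eye(q), Phi.T @ U)
    return (np.linalg.norm(U - Phi @ B) ** 2) / eps + np.linalg.norm(B) ** 2

rng = np.random.default_rng(2)
Phi, U, eps = rng.normal(size=(50, 8)), rng.normal(size=(50, 5)), 0.1
direct = np.trace(np.linalg.solve(Phi @ Phi.T + eps * np.eye(50), U @ U.T))
assert np.isclose(direct, trace_via_regression(Phi, U, eps))
```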
3.4 An Equivalent Learning Framework
Combining the previous results, we obtain the convex upper bound for the kernel regularization
Eq. (5), which amounts to a sum of costs over examples under some regularization:

    R(θ) ≤ L(θ, B, Λ) = Σ_{i=1}^m [ (1/ε) ||u_i - B^T φ_i||^2 + φ_i^T Λ φ_i ] + ||B||_F^2 + ε tr(Λ) - log det(Λ)
where we omit all the terms irrelevant to θ, B and Λ. L(θ, B, Λ) is convex w.r.t. B and Λ, and has
a unique minimum R(θ), hence we can replace R(θ) by instead minimizing the upper bound and
formulate an equivalent learning problem

    min_{θ,β,B,Λ} [ L(θ, β) + λ L(θ, B, Λ) ]    (7)

Clearly this new optimization can be solved by SGD.
When applying the SGD method, each step based on one example needs to compute the inverse of
Λ. This can be computationally unaffordable when the dimensionality is large (e.g., q > 1000), since
the efficiency of SGD depends on each stochastic update being lightweight. Our next result suggests
that we can dramatically reduce this complexity from O(q^3) to O(q).
Proposition 3.2. Eq. (5) is equivalent to the convex variational problem

    R(θ) = min_{B,η} Σ_{i=1}^m [ (1/ε) ||u_i - B^T φ_i||^2 + η^T φ_i^2 ] + ||B||_F^2 + ε η^T e - Σ_{k=1}^q log η_k    (8)

where η = [η_1, ..., η_q]^T, φ_i^2 denotes the element-wise square of φ_i, and e = [1, ..., 1]^T.
Proof. There is an ambiguity of the solutions up to rotations. Suppose {φ*, β*, B*, Λ*} is an optimal solution set; the transformation φ* → R φ*, β* → R β*, B* → R B*, and Λ* → R Λ* R^T results in the same optimality if R^T R = I. Since there always exists an R to diagonalize Λ*, we
can pre-restrict Λ to be a diagonal positive definite matrix Λ = diag[η_1, ..., η_q], which does not
change our problem and gives rise to Eq. (8).
We note that the variational form is convex w.r.t. the auxiliary variables B and η. Therefore we can
formulate the whole learning problem as
Problem 3.1.

    min_{θ,β,B,η} L(θ, β, B, η) = (1/n) L_1(θ, β) + (λ/(mn)) L_2(θ, B) + (λ/(mn)) L_3(θ, η)    (9)

where L_1(θ, β) is defined by Eq. (1), and

    L_2(θ, B) = Σ_{i=1}^m (1/ε) ||u_i - B^T φ_i||^2 + ||B||_F^2

    L_3(θ, η) = Σ_{i=1}^m η^T φ_i^2 + ε η^T e - Σ_{k=1}^q log η_k
To ensure the estimator of θ and β is consistent, the effect of regularization should vanish as n → ∞.
Therefore we intentionally normalize L_2(θ, B) and L_3(θ, η) by 1/m. The overall loss function is
averaged over the n labeled examples, consisting of three loss functions: the main classification task
L_1(θ, β), an auxiliary least-squares regression problem L_2(θ, B), and an additional regularization
term L_3(θ, η), which can be interpreted as another least-squares problem. Since each of the loss
functions amounts to a summation of local costs caused by individual data examples, the whole
learning problem can be conveniently implemented by SGD, as described in Algorithm 1.
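For intuition about what each stochastic step touches, the sketch below computes the per-example contributions to L_2 and L_3 (names are ours; the terms shared across examples are noted in comments):

```python
import numpy as np

def per_example_reg_losses(phi, u, B, eta, eps):
    """Per-example contributions to L2 and L3 (a sketch; phi is the last
    hidden layer for one example, u its row of the factor U, Omega = U U^T).
    """
    r = u - B.T @ phi
    L2_i = (r @ r) / eps      # the term ||B||_F^2 is shared across examples
    L3_i = eta @ (phi ** 2)   # shared: eps*eta.sum() - np.log(eta).sum()
    return L2_i, L3_i
```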
In practice, the kernel matrix Ω = U U^T that represents domain knowledge can be obtained in
three different ways: (i) In the easiest case, U is directly available by computing some hand-crafted
features from the input data, which corresponds to the case of a linear kernel function; (ii)
U can be the result of some unsupervised learning (e.g., the self-taught learning [14] based on sparse
coding), applied on a large set of unlabeled data; (iii) If a nonlinear kernel function is available, U
can be obtained by applying incomplete Cholesky decomposition on the m × m kernel matrix Ω. In
the third case, when m is so large that the matrix decomposition cannot be computed in main
memory, we apply the Nyström method [19]: We first randomly sample m_1 examples, p < m_1 < m,
such that the computed kernel matrix Ω_1 can be decomposed in memory. Let V D V^T be the p-rank
eigenvalue decomposition of Ω_1; then the p-rank decomposition of Ω can be approximated by
Ω ≈ U U^T, U = Ω_{:,1} V D^{-1/2}, where Ω_{:,1} is the m × m_1 kernel matrix between all the m examples
and the subset of size m_1.
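A sketch of the Nyström factorization step (kernel_fn and the other names are ours):

```python
import numpy as np

def nystrom_factor(kernel_fn, X, m1, p, seed=0):
    """Nystrom low-rank factor U with Omega ~ U U^T (a sketch). Samples m1
    landmark points, eigendecomposes their kernel, and extends to all m
    points via U = Omega[:, S] V D^{-1/2}, as in the text.
    """
    rng = np.random.default_rng(seed)
    S = rng.choice(len(X), size=m1, replace=False)
    K11 = kernel_fn(X[S], X[S])                    # (m1, m1)
    w, V = np.linalg.eigh(K11)
    keep = np.argsort(w)[::-1][:p]                 # p largest eigenpairs
    w, V = np.maximum(w[keep], 1e-12), V[:, keep]
    return kernel_fn(X, X[S]) @ V / np.sqrt(w)     # (m, p)
```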
Algorithm 1 Stochastic Gradient Descent
repeat
  Generate a number a from the uniform distribution on [0, 1]
  if a < n/(m+n) then
    Randomly pick a sample i ∈ {1, ..., n} for L_1, and update the parameters by
      [θ, β] ← [θ, β] - τ ∂L_1(x_i, θ, β) / ∂[θ, β]
  else
    Randomly pick a sample i ∈ {1, ..., m} for L_2 and L_3, and update the parameters by
      [θ, B, η] ← [θ, B, η] - (τλ/m) ∂[L_2(x_i, θ, B) + L_3(x_i, θ, η)] / ∂[θ, B, η]
  end if
until convergence
(τ denotes the learning rate.)
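A schematic Python driver for Algorithm 1 (a sketch, not the authors' code; the gradient callables and the τλ/m step scaling are our reading of the update rules above):

```python
import numpy as np

def run_sgd(params, grad_main, grad_reg, n, m, lr, lam, num_steps, seed=0):
    """grad_main(i, params) returns the gradient dict of the per-example
    supervised loss L1; grad_reg(i, params) that of L2 + L3. The lam/m
    scaling of the regularization step mirrors the weights in Eq. (9).
    """
    rng = np.random.default_rng(seed)
    for _ in range(num_steps):
        if rng.uniform() < n / (m + n):
            g, scale = grad_main(int(rng.integers(n)), params), lr
        else:
            g, scale = grad_reg(int(rng.integers(m)), params), lr * lam / m
        params = {k: v - scale * g[k] for k, v in params.items()}
    return params
```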
4 Visual Recognition by Deep Learning with Kernel Regularization
In the following, we apply the proposed strategy to train a class of deep models and convolutional
neural networks (CNNs, [11]) for a range of visual recognition tasks including digit recognition on
MNIST dataset, gender and ethnicity classification on the FRGC face dataset, and object recognition
on the Caltech101 dataset. In each of these tasks, we choose a kernel function that has been reported
to have state-of-the-art or otherwise good performances in the literature. We will see whether a
kernel-regularizer can improve the recognition accuracy of the deep models, and how it is compared
with the support vector machine (SVM) using the exactly the same kernel.
Table 1: Percentage error rates of handwritten digit recognition on MNIST

Training Size             100     600    1000    3000   60000
SVM (RBF)               22.73    8.53    6.58    3.91    1.41
SVM (RBF, Nyström)      24.73    9.15    6.92    5.51    5.16
SVM (Graph)              5.21    3.74    3.46    3.01    2.23
SVM (Graph, Cholesky)    7.17    6.47    5.75    4.28    2.87
CNN                     19.40    6.40    5.50    2.75    0.82
kCNN (RBF)              14.49    3.85    3.40    1.88    0.73
kCNN (Graph)             4.28    2.36    2.05    1.75    0.64
CNN (Pretrain) [15]         -    3.21       -       -    0.64
EmbedO CNN [18]         11.73    3.42    3.34    2.28       -
EmbedI5 CNN [18]         7.75    3.82    2.73    1.83       -
EmbedA1 CNN [18]         7.87    3.82    2.76    2.07       -
Throughout all the experiments, "kCNN" denotes CNNs regularized by nonlinear kernels, processed
by either Cholesky or Nyström approximation, with parameters p = 600, m_1 = 5000, and m the
size of each whole data set. The obtained u_i are normalized to have unit length. ε and γ are
fixed to 1. The remaining two hyperparameters are the learning rate τ ∈ {10^-3, 10^-4, 10^-5} and
the kernel regularization weight λ ∈ {10^2, 10^3, 10^4, 10^5}. Their values are set once for each of the
4 recognition tasks based on a 5-fold cross validation using 500 labeled examples.
4.1 Handwritten Digit Recognition on MNIST Dataset
The data contains a training set with 60000 examples and a test set with 10000 examples. The CNN
employs 50 filters of size 7 × 7 on 34 × 34 input images, followed by down-sampling by 1/2, then
128 filters of size 5 × 5, followed by down-sampling by 1/2, and then 200 filters of size 5 × 5, giving
rise to 200-dimensional features that are fed to the output layer. Two nonlinear kernels are used: (1)
the RBF kernel, and (2) the graph kernel on a 10 nearest neighbor graph [6]. We perform a 600-dimension
Cholesky decomposition on the whole 70000 × 70000 graph kernel because it is very sparse.
In addition to using the whole training set, we train the models on 100, 600, 1000 and 3000 random
examples from the training set, evaluate the classifiers on the whole test set, and repeat each
setting 5 times independently. The results are given in Tab. 1. kCNNs effectively improve over
CNNs by leveraging the prior knowledge, and also outperform SVMs that use the same kernels. The
results are competitive with the state-of-the-art results of [15] and of [18], which uses a different architecture.
4.2 Gender and Ethnicity Recognition on FRGC Dataset
The FRGC 2.0 dataset [13] contains 568 individuals' 14,714 face images under various lighting
conditions and backgrounds. Beside person identities, each image is annotated with gender and
ethnicity, which we put into 3 classes: "white", "asian", and "other". We fix 114 persons' 3,014
images (randomly chosen) as the testing set, and randomly select 5%, 10%, 20%, 50%, and "All"
images from the remaining 454 individuals' 11,700 images. For each training size, we randomize the
training data 5 times and report the average error rates.
In this experiment, CNNs operate on images represented by R/G/B planes plus horizontal and
vertical gradient maps of gray intensities. The 5 input planes of size 140 × 140 are processed by 16
convolution filters of size 16 × 16, followed by max pooling within each disjoint 5 × 5 neighborhood.
The obtained 16 feature maps of size 25 × 25 are connected to the next layer by 256
filters of size 6 × 6, with 50% random sparse connections, followed by max pooling within each
5 × 5 neighborhood. The resulting 256 × 4 × 4 features are fed to the output layer. The nonlinear
kernel used in this experiment is the RBF kernel computed directly on images, which has demonstrated
state-of-the-art accuracy for gender recognition [3]. The results shown in Tab. 2 and Tab. 3
demonstrate that kCNNs significantly boost the recognition accuracy of CNNs for both gender and
ethnicity recognition. The difference is prominent when small training sets are presented.
4.3 Object Recognition on Caltech101 Dataset
Caltech101 [7] contains 9144 images from 101 object categories and a background category. It is
considered one of the most diverse object databases available today, and is probably the most popular
benchmark for object recognition. We follow the common setting to train on 15 and 30 images per
class and test on the rest. Following [10], we limit the number of test images to 30 per class.
Table 2: Percentage error rates of gender recognition on FRGC

Training Size          5%    10%    20%    50%    All
SVM (RBF)            16.7   13.4   11.3    9.1    8.6
SVM (RBF, Nyström)   20.2   14.3   11.6    9.1    8.8
CNN                  61.5   17.2    8.4    6.6    5.9
kCNN                 17.1    7.2    5.8    5.0    4.4
Table 3: Percentage error rates of ethnicity recognition on FRGC

Training Size          5%    10%    20%    50%    All
SVM (RBF)            22.9   16.9   14.1   11.3   10.2
SVM (RBF, Nyström)   24.7   20.6   15.8   11.9   11.1
CNN                  30.0   13.9   10.0    8.2    6.3
kCNN                 15.6    8.7    7.3    6.2    5.8
The recognition accuracy was normalized by class sizes and evaluated over 5 random data splits. The
CNN has the same architecture as the one used in the FRGC experiment. The nonlinear kernel is the
spatial pyramid matching (SPM) kernel developed in [10].
Tab. 4 shows our results together with those reported in [12, 15] using deep hierarchical architectures.
The task is much more challenging for CNNs than the previous three tasks, because in each
category the data size is very small while the visual patterns are highly diverse. Thanks to the
regularization by the SPM kernel, kCNN dramatically improves the accuracy of CNN, and outperforms
the SVM using the same kernel. This is perhaps the best performance by (trainable and hand-crafted)
deep hierarchical models on the Caltech101 dataset. Some filters trained with and without kernel
regularization are visualized in Fig. 1, which helps to understand the difference made by kCNN.
5 Related Work, Discussion, and Conclusion
Recent work on deep visual recognition models includes [17, 12, 15]. In [17] and [12] the first layer
consisted of hard-wired Gabor filters, and then a large number of patches were sampled from the
second layer and used as the basis of the representation which was then used to train a discriminative
classifier.
Deep models are powerful in representing complex functions but very difficult to train. Hinton and
his coworkers proposed training deep belief networks with layer-wise unsupervised pre-training,
followed by supervised fine-tuning [9]. The strategy was subsequently studied for other deep models like CNNs [15], autoassociators [4], and for document coding [16]. In recent work [18], the
authors proposed training a deep model jointly with an unsupervised embedding task, which led to
improved results as well. Though using unlabeled data too, our work differs from previous work at
the emphasis on leveraging the prior knowledge, which suggests that it can be combined with those
approaches, including neighborhood component analysis [8], to further enhance the deep learning.
This work is also related to transfer learning [2] that used auxiliary learning tasks to learn a linear
feature mapping, and more directly, our previous work [1], which created pseudo auxiliary tasks
based on hand-craft image features to train nonlinear deep networks.
One may ask: why bother training with kCNN, instead of simply combining two independently
trained CNN and SVM systems? The reason is computational speed: kCNN pays an extra cost to
exploit a kernel matrix in the training phase, but in the prediction phase the system uses the CNN alone.
Figure 1: First-layer filters on the B channel, learned from Caltech101 (30 examples per class). (a) CNN-Caltech101; (b) kCNN-Caltech101.
Table 4: Percentage accuracy on Caltech101

Training Size          15     30
SVM (SPM) [10]       54.0   64.6
SVM (SPM, Nyström)   52.1   63.1
HMAX [12]            51.0   56.0
CNN (Pretrain) [15]     —   54.0
CNN                  26.5   43.6
kCNN                 59.2   67.4
In our Caltech101 experiment, the SVM (SPM) needed several seconds to process a new image on
a PC with a 3.0 GHz processor, while kCNN can process about 40 images per second. The latest
record on Caltech101 was based on combining multiple kernels [5]. We conjecture that kCNN could
be further improved by using multiple kernels without sacrificing recognition speed.
To conclude, we proposed using kernels to improve the training of deep models. The approach was
implemented by stochastic gradient descent, and demonstrated excellent performance on a range of
visual recognition tasks. Our experiments showed that prior knowledge could significantly improve
the performance of deep models when insufficient labeled data were available in hard recognition
problems. The trained model was much faster than kernel systems for making predictions.
Acknowledgment: We thank the reviewers and Douglas Gray for helpful comments.
References
[1] A. Ahmed, K. Yu, W. Xu, Y. Gong, and E. P. Xing. Training hierarchical feed-forward visual recognition models using transfer learning from pseudo tasks. European Conference on Computer Vision, 2008.
[2] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 2005.
[3] S. Baluja and H. Rowley. Boosting sex identification performance. International Journal of Computer Vision, 2007.
[4] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. Neural Information Processing Systems, 2007.
[5] A. Bosch, A. Zisserman, and X. Muñoz. Image classification using ROIs and multiple kernel learning. Submitted to International Journal of Computer Vision, 2008.
[6] O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. Neural Information Processing Systems, 2003.
[7] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. CVPR Workshop, 2004.
[8] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. Neural Information Processing Systems, 2005.
[9] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504-507, July 2006.
[10] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[12] J. Mutch and D. G. Lowe. Multiclass object recognition with sparse, localized features. IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[13] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, and W. Worek. Preliminary face recognition grand challenge results. IEEE Conference on Automatic Face and Gesture Recognition, 2006.
[14] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from unlabeled data. International Conference on Machine Learning, 2007.
[15] M. Ranzato, F.-J. Huang, Y.-L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[16] M. Ranzato and M. Szummer. Semi-supervised learning of compact document representations with deep networks. International Conference on Machine Learning, 2008.
[17] T. Serre, L. Wolf, and T. Poggio. Object recognition with features inspired by visual cortex. IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[18] J. Weston, F. Ratle, and R. Collobert. Deep learning via semi-supervised embedding. International Conference on Machine Learning, 2008.
[19] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. Neural Information Processing Systems, 2001.
Diffeomorphic Dimensionality Reduction
Christian Walder and Bernhard Schölkopf
Max Planck Institute for Biological Cybernetics
72076 Tübingen, Germany
[email protected]
Abstract
This paper introduces a new approach to constructing meaningful lower dimensional representations of sets of data points. We argue that constraining the mapping between the high and low dimensional spaces to be a diffeomorphism is a
natural way of ensuring that pairwise distances are approximately preserved. Accordingly we develop an algorithm which diffeomorphically maps the data near to
a lower dimensional subspace and then projects onto that subspace. The problem
of solving for the mapping is transformed into one of solving for an Eulerian flow
field which we compute using ideas from kernel methods. We demonstrate the
efficacy of our approach on various real world data sets.
1 Introduction
The problem of visualizing high dimensional data often arises in the context of exploratory data
analysis. For many real world data sets this is a challenging task, as the spaces in which the data
lie are often too high dimensional to be visualized directly. If the data themselves lie on a lower
dimensional subspace however, dimensionality reduction techniques may be employed, which aim
to meaningfully represent the data as elements of this lower dimensional subspace.
The earliest approaches to dimensionality reduction are the linear methods known as principal components analysis (PCA) and factor analysis (Duda et al., 2000). More recently however, the majority of research has focussed on non-linear methods, in order to overcome the limitations of linear
approaches; for an overview and numerical comparison see, e.g., (Venna, 2007) and (van der Maaten
et al., 2008), respectively. In an effort to better understand the numerous methods which have been
proposed, various categorizations have been proposed. In the present case, it is pertinent to make
the distinction between methods which focus on properties of the mapping to the lower dimensional
space, and methods which focus on properties of the mapped data, in that space. A canonical example of the latter is multidimensional scaling (MDS), which in its basic form finds the minimizer
with respect to y1 , y2 , . . . , ym of (Cox & Cox, 1994)
    Σ_{i,j=1}^{m} ( ‖x_i − x_j‖ − ‖y_i − y_j‖ )²,                               (1)
where here, as throughout the paper, the x_i ∈ R^a are input or high dimensional points, and the
y_i ∈ R^b are output or low dimensional points, so that b < a. Note that the above term is a
function only of the input points and the corresponding mapped points, and is designed to preserve
the pairwise distances of the data set.
The methods which focus on the mapping itself (from the higher to the lower dimensional space,
which we refer to as the downward mapping, or the upward mapping which is the converse) are less
common, and form a category into which the present work falls. Both auto-encoders (DeMers &
Cottrell, 1993) and the Gaussian process latent variable model (GP-LVM) (Lawrence, 2004) also
fall into this category, but we focus on the latter as it provides an appropriate transition into the
main part of the paper. The GP-LVM places a Gaussian process (GP) prior over each high dimensional
component of the upward mapping, and optimizes, with respect to the set of low dimensional
points (which can be thought of as hyper-parameters of the model), the likelihood of the high
dimensional points. Hence the GP-LVM constructs a regular (in the sense of regularization, i.e. likely
under the GP prior) upward mapping. By doing so, the model guarantees that nearby points in
the low dimensional space should be mapped to nearby points in the high dimensional space, an
intuitive idea for dimensionality reduction which is also present in the MDS objective (1), above.
The converse is not guaranteed in the original GP-LVM however, and this has led to the more recent development of the so-called back-constrained GP-LVM (Lawrence & Candela, 2006), which
essentially places an additional GP prior over the downward mapping. By guaranteeing in this way
that (the modes of the posterior distributions over) both the upward and downward mappings are
regular, the back constrained GP-LVM induces something reminiscent of a diffeomorphic mapping
between the two spaces. This leads us to the present work, in which we derive our new algorithm,
Diffeomap, by explicitly casting the dimensionality reduction problem as one of constructing a diffeomorphic mapping between the low dimensional space and the subspace of the high dimensional
space on which the data lie.
2 Diffeomorphic Mappings and their Practical Construction
In this paper we use the following definition:
Definition 2.1. Let U and V be open subsets of R^a and R^b, respectively. The mapping F : U → V
is said to be a diffeomorphism if it is bijective (i.e. one to one), smooth (i.e. belonging to C^∞), and
has a smooth inverse map F^{-1}.
We note in passing the connection between this definition, our discussion of the GP-LVM, and
dimensionality reduction. The GP-LVM constructs a regular upward mapping (analogous to F^{-1})
which ensures that points nearby in R^b will be mapped to points nearby in R^a, a property referred
to as similarity preservation in (Lawrence & Candela, 2006). The back constrained GP-LVM
simultaneously ensures that the downward mapping (analogous to F) is regular, thereby additionally
implementing what its authors refer to as dissimilarity preservation. Finally, the similarity between
smoothness (required of F and F^{-1} in Definition 2.1) and regularity (imposed on the downward and
upward mappings by the GP prior in the back constrained GP-LVM) completes the analogy. There is
also an alternative, more direct motivation for diffeomorphic mappings in the context of dimensionality reduction, however. In particular, a diffeomorphic mapping has the property that it does not
lose any information. That is, given the mapping itself and the lower dimensional representation of
the data set, it is always possible to reconstruct the original data.
There has been significant interest from within the image processing community, in the construction
of diffeomorphic mappings for the purpose of image warping (Dupuis & Grenander, 1998; Joshi
& Miller, 2000; Karaçali & Davatzikos, 2003). The reason for this can be understood as follows.
Let I : U → R³ represent the RGB values of an image, where U ⊂ R² is the image plane. If we
now define the warped version of I to be I ∘ W, then we can guarantee that the warp is topology
preserving, i.e. that it does not "tear" the image, by ensuring that W is a diffeomorphism U → U.
The following two main approaches to constructing such diffeomorphisms have been taken by the
image processing community, the first of which we mention for reference, while the second forms
the basis of Diffeomap. It is a notable aside that there seem to be no image warping algorithms
analogous to the back constrained GP-LVM, in which regular forward and inverse mappings are
simultaneously constructed.
1. Enforcement of the constraint that |J(W )|, the determinant of the Jacobian of the mapping, be positive everywhere. This approach has been successfully applied to the problem
of warping 3D magnetic resonance images (Karaçali & Davatzikos, 2003), for example,
but a key ingredient of that success was the fact that the authors defined the mapping W
numerically on a regular grid. For the high dimensional cases relevant to dimensionality
reduction however, such a numerical grid is highly computationally unattractive.
2. Recasting the problem of constructing W as an Eulerian flow problem (Dupuis & Grenander, 1998; Joshi & Miller, 2000). This approach is the focus of the next section.
Figure 1: The relationship between v(·, ·), φ(·, ·) and ψ(·) for the one dimensional case ψ : R → R.
2.1 Diffeomorphisms via Flow Fields
The idea here is to indirectly define the mapping of interest, call it ψ : R^a → R^a, by way of a "time"-indexed
velocity field v : R^a × R → R^a. In particular we write ψ(x) = φ(x, 1), where

    φ(x, t) = x + ∫_{s=0}^{t} v(φ(x, s), s) ds.                                 (2)
This choice of φ satisfies the following Eulerian transport equation with boundary conditions:

    ∂φ(x, s)/∂s = v(φ(x, s), s),    φ(x, 0) = x.                                (3)
The role of v is to transport a given point x from its original location at time 0 to its mapped location
φ(x, 1) by way of a trajectory whose position and tangent vector at time s are given by φ(x, s) and
v(φ(x, s), s), respectively (see Figure 1). The point of this construction is that if v satisfies certain
regularity properties, then the mapping ψ will be a diffeomorphism. This fact has been proven in a
number of places; one particularly accessible example is (Dupuis & Grenander, 1998), where the
necessary conditions are provided for the three dimensional case along with a proof that the induced
mapping is a diffeomorphism. Generalizing the result to higher dimensions is straightforward; this
fact is stated in (Dupuis & Grenander, 1998) along with the basic idea of how to do so.
We now offer an intuitive argument for the result. Consider Figure 1, and imagine adding a new
starting point x′, along with its associated trajectory. It is clear that for the mapping ψ to be a
diffeomorphism, the trajectories associated with any such pair of points x and x′ must not
collide. This is because the two trajectories would be identical after the collision, x and x′ would
map to the same point, and hence the mapping would not be invertible. But if v is sufficiently regular
then such collisions cannot occur.
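To give a feel for how such a flow induces a map, the sketch below (ours) integrates a user-supplied velocity field with forward-Euler steps; the field and step count are placeholder choices, and the field must of course satisfy the regularity conditions above for the resulting map to be a diffeomorphism.

```python
import numpy as np

def flow_map(x, v, steps=20):
    # Approximate psi(x) = phi(x, 1) by Euler integration of
    # d phi / ds = v(phi, s) with phi(x, 0) = x.
    phi, ds = np.asarray(x, dtype=float), 1.0 / steps
    for k in range(steps):
        phi = phi + ds * v(phi, k * ds)
    return phi

# Example: a smooth rotational field; flow_map([1, 0], rotate)
# is approximately (cos 1, sin 1).
rotate = lambda p, s: np.array([-p[1], p[0]])
```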
3 Diffeomorphic Dimensionality Reduction
The framework of Eulerian flow fields which we have just introduced provides an elegant means
of constructing diffeomorphic mappings R^a → R^a, but for dimensionality reduction we require
additional ingredients, which we now introduce. The basic idea is to construct a diffeomorphic
mapping in such a way that it maps our data set near to a subspace of R^a, and then to project onto
this subspace. The subspace we use, call it S_b, is the b-dimensional one spanned by the first b
canonical basis vectors of R^a. Let P_{(a→b)} : R^a → R^b be the projection operator which extracts the
first b components of the vector it is applied to, i.e.

    P_{(a→b)} x = (I  Z) x,                                                     (4)

where I ∈ R^{b×b} is the identity matrix and Z ∈ R^{b×(a−b)} is a matrix of zeros. We can now write the
mapping π : R^a → R^b which we propose for dimensionality reduction as

    π(x) = P_{(a→b)} φ(x, 1),                                                   (5)
where φ is given by (2). We choose each component of v at each time to belong to a reproducing
kernel Hilbert space (RKHS) H, so that v(·, t) ∈ H^a, t ∈ [0, 1]. If we define the norm¹

    ‖v(·, t)‖²_{H^a} = Σ_{j=1}^{a} ‖ [v(·, t)]_j ‖²_H,                          (6)

then ‖v(·, t)‖²_{H^a} < ∞, ∀t ∈ [0, 1] is a sufficient condition which guarantees that ψ is a
diffeomorphism, provided that some technical conditions are satisfied (Dupuis & Grenander, 1998; Joshi
& Miller, 2000). In particular v need not be regular in its second argument. For dimensionality
reduction we propose to construct v as the minimizer of

    O = λ ∫_{t=0}^{1} ‖v(·, t)‖²_{H^a} dt + Σ_{j=1}^{m} L(ψ(x_j)),              (7)

where λ ∈ R⁺ is a regularization parameter. Here, L measures the squared distance to our b-dimensional
linear subspace of interest S_b, i.e.

    L(x) = Σ_{d=b+1}^{a} [x]_d².                                                (8)

Note that this places special importance on the first b dimensions of the input space of interest;
accordingly we make the natural and important preprocessing step of applying PCA such that as
much as possible of the variance of the data is captured in these first b dimensions.
3.1 Implementation
One can show that the minimizer in v of (7) takes the form

    [v(·, t)]_d = Σ_{j=1}^{m} [α_d(t)]_j k(φ(x_j, t), ·),   d = 1, ..., a,      (9)

where k is the reproducing kernel of H and α_d is a function [0, 1] → R^m. This was proven directly
for a similar specific case (Joshi & Miller, 2000), but we note in passing that it follows immediately
from the celebrated representer theorem of RKHSs (Schölkopf et al., 2001), by considering a fixed
time t. Hence, we have simplified the problem of determining v to one of determining m trajectories
φ(x_j, ·). This is because not only does (9) hold, but we can use standard manipulations (in the
context of kernel ridge regression, for example) to determine that for a given set of such trajectories,

    α_d(t) = K(t)^{-1} u_d(t),   d = 1, 2, ..., a,                              (10)

where t ∈ [0, 1], K(t) ∈ R^{m×m}, u_d(t) ∈ R^m, and we have let [K(t)]_{j,k} = k(φ(x_j, t), φ(x_k, t))
along with [u_d(t)]_j = ∂_t [φ(x_j, t)]_d. Note that the invertibility of K(t) is guaranteed for certain kernel
functions (including the Gaussian kernel which we employ in all our experiments, see Section 4),
provided that the points φ(x_j, t) are distinct. Hence, one can verify using (9), (10) and the reproducing
property of k in H (i.e. the fact that ⟨f, k(x, ·)⟩_H = f(x), ∀f ∈ H), that for the optimal v,

    ‖v(·, t)‖²_{H^a} = Σ_{d=1}^{a} u_d(t)^T K(t)^{-1} u_d(t).                   (11)

This allows us to write our objective (7) in terms of the m trajectories mentioned above:

    O = λ ∫_{t=0}^{1} Σ_{d=1}^{a} u_d(t)^T K(t)^{-1} u_d(t) dt + Σ_{j=1}^{m} Σ_{d=b+1}^{a} [φ(x_j, 1)]_d².   (12)
So far no approximations have been made, and we have constructed an optimal finite dimensional
basis for v(·, t). The second argument of v is not so easily dealt with however, so we approximate
by discretizing the interval [0, 1]. In particular, we let t_k = kδ, k = 0, 1, ..., p, where δ = 1/p,
and make the approximation ∂_t|_{t=t_k} φ(x_j, t) = (φ(x_j, t_k) − φ(x_j, t_{k−1}))/δ. By making the further
¹ Square brackets with subscripts denote matrix elements, and colons denote entire rows or columns.
Figure 2: Dimensionality reduction of motion capture data. (a) The data mapped from 102 to
2 dimensions using Diffeomap (the line shows the temporal order in which the input data were
recorded). (b)-(d) Three rendered input points corresponding to the marked locations in (a).
approximation ∫_{t=t_{k−1}}^{t_k} K(t)^{-1} dt = δ K(t_{k−1})^{-1}, and substituting into (12) we obtain the first form
of our problem which is finite dimensional and hence readily optimized, i.e. the minimization of

    (λ/δ) Σ_{d=1}^{a} Σ_{k=1}^{p} (φ_{k,d} − φ_{k−1,d})^T K(t_{k−1})^{-1} (φ_{k,d} − φ_{k−1,d}) + Σ_{d=b+1}^{a} ‖φ_{p,d}‖²   (13)

with respect to φ_{k,d} ∈ R^m for k = 1, 2, ..., p and d = 1, 2, ..., a, where [φ_{k,d}]_j = [φ(x_j, t_k)]_d.
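A direct numpy transcription of objective (13) might read as follows; here phi holds the trajectory points and kern the kernel-matrix function, both stand-ins of ours for whatever the surrounding implementation provides, and the naive matrix inverse is kept for clarity.

```python
import numpy as np

def objective(phi, kern, lam, b):
    # phi: array of shape (p+1, m, a) with phi[k, j] = phi(x_j, t_k),
    # phi[0] being the original data; kern(P) returns the m x m kernel
    # matrix of the rows of P; lam and b are lambda and the target dim.
    p = phi.shape[0] - 1
    reg = 0.0
    for k in range(1, p + 1):
        Kinv = np.linalg.inv(kern(phi[k - 1]))
        diff = phi[k] - phi[k - 1]                     # m x a
        reg += np.einsum('jd,jl,ld->', diff, Kinv, diff)
    # lam / delta = lam * p, since delta = 1/p.
    return (lam * p) * reg + (phi[p][:, b:] ** 2).sum()
```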
3.2 A Practical Reduced Set Implementation
A practical problem with (13) is the computationally expensive matrix inverse. In practice we reduce
this burden by employing a reduced set expansion which replaces the sum over 1, 2, ..., m in (9)
with a sum over a randomly selected subset I, thereby using |I| = n basis functions to represent
v(·, t). In this case it is possible to show using the reproducing property of k(·, ·) that the resulting
objective function is identical to (13), but with the matrix K(t_k)^{-1} replaced by the expression

    K_{m,n} (K_{n,m} K_{m,n})^{-1} K_{n,n} (K_{n,m} K_{m,n})^{-1} K_{n,m},      (14)

where K_{m,n} = K_{n,m}^T ∈ R^{m×n} is the sub-matrix of K(t_k) formed by taking all of the rows, but
only those columns given by I. Similarly, K_{n,n} ∈ R^{n×n} is the square sub-matrix of K(t_k) formed
by taking a subset of both the rows and columns, namely those given by I. For optimization we
also use the gradients of the above expression, the derivation of which we have omitted for brevity.
Note however that by factorizing appropriately, the computation of the objective function and its
gradients can be performed with an asymptotic time complexity of n²(m + a).
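The replacement (14) for K(t_k)^{-1} can be formed as in the following sketch (again an illustration under our own assumptions; a practical implementation would reuse factorizations rather than call inv directly):

```python
import numpy as np

def reduced_set_inverse(K, idx):
    # K: full m x m kernel matrix at time t_k; idx: the n indices in I.
    # Returns the m x m expression (14) that stands in for K^{-1}.
    Kmn = K[:, idx]                    # K_{m,n}, shape m x n
    Knn = K[np.ix_(idx, idx)]          # K_{n,n}, shape n x n
    G = np.linalg.inv(Kmn.T @ Kmn)     # (K_{n,m} K_{m,n})^{-1}
    return Kmn @ G @ Knn @ G @ Kmn.T
```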
4 Experiments
It is difficult to objectively compare dimensionality reduction algorithms, as there is no universally
agreed upon measure of performance. Algorithms which are generalizations or variations of older
ones may be compared side by side with their predecessors, but this is not the case with our new
algorithm, Diffeomap. Hence, in this section we attempt to convince the reader of the utility of our
approach by visually presenting our results on as many and as varied realistic problems as space
permits, while providing pointers to comparable results from other authors. For all experiments
we fixed the parameters which trade off between computational speed and accuracy, i.e. we set the
temporal resolution p = 20 and the number of basis functions n = 300. We used a Gaussian kernel
function k(x, y) = exp(−‖x − y‖²/(2σ²)), and tuned the σ parameter manually along with the
regularization parameter λ. For optimization we used a conjugate gradient type method² fixed to
1000 iterations and with starting point [φ_{k,d}]_j = [x_j]_d, k = 1, 2, ..., p.
² Carl Rasmussen's minimize.m, which is freely available from http://www.kyb.mpg.de/~carl.
Figure 3: Vowel data mapped from 24 to 2 dimensions using (a) PCA and (b)-(c) Diffeomap. Plots
(b) and (c) differ only in the parameter settings of Diffeomap, with (b) corresponding to minimal
one nearest neighbor errors in the low dimensional space; see Section 4.2 for details.
4.1 Motion Capture Data
The first data set we consider consists of the coordinates in R³ of a set of markers placed on a person
breaking into a run, sampled at a constant frequency, resulting in m = 217 data points in a = 102
dimensions, which we mapped to b = 2 dimensions using Diffeomap (see Figure 2). This data set
is freely available from http://accad.osu.edu/research/mocap/mocap_data.htm
as Figure 1 Run, and was also considered in (Lawrence & Candela, 2006), where it was shown
that while the original GP-LVM fails to correctly discover the periodic component of the sequence,
the back constrained version maps poses in the same part of the subject's step cycle nearby to
each other, while simultaneously capturing variations in the inclination of the subject. Diffeomap
also succeeded in this sense, and produced results which are competitive with those of the back
constrained GP-LVM.
4.2 Vowel Data
In this next example we consider a data set of a = 24 features (cepstral coefficients and delta
cepstral coefficients) of a single speaker performing nine different vowels 300 times per vowel,
acquired as training data for a vocal joystick system (Bilmes et al., 2006), and publicly available
in pre-processed form from http://www.dcs.shef.ac.uk/~neil/fgplvm/. Once again
we used Diffeomap to map the data to b = 2 dimensions, as depicted in Figure 3. We also depict
the poor result of linear PCA, in order to rule out the hypothesis that it is merely the PCA based
initialization of Diffeomap (mentioned after equation (8) on page 4) which does most of the work.
The results in Figure 3 are directly comparable to those provided in (Lawrence & Candela, 2006)
for the GP-LVM, back constrained GP-LVM, and Isomap (Tenenbaum et al., 2000). Visually, the
Diffeomap result appears to be superior to those of the GP-LVM and Isomap, and comparable to the
back constrained GP-LVM. We also measured the performance of a one nearest neighbor classifier
applied to the mapped data in R². For the best choice of the parameters σ and λ, Diffeomap made
140 errors, which is favorable to the figures quoted for Isomap (458), the GP-LVM (226) and the
back constrained GP-LVM (155) in (Lawrence & Candela, 2006). We emphasize however that this
measure of performance is at best a rough one, since by manually varying our choice of the parameters
σ and λ, we were able to obtain a result (Figure 3(c)) which, although it leads to a significantly
higher number of such errors (418), is arguably superior from a qualitative perspective to the result
with minimal errors (Figure 3(b)).
4.3 USPS Handwritten Digits
We now consider the USPS database of handwritten digits (Hull, 1994). Following the methodology of the stochastic neighbor embedding (SNE) and GP-LVM papers (Hinton & Roweis, 2003;
Lawrence, 2004), we take 600 images per class from the five classes corresponding to digits 0, 1, 2,
3, 4. Since the images are in gray scale at a resolution of 16 by 16 pixels, this results in a data set
of m = 3000 examples in a = 256 dimensions, which we again mapped to b = 2 dimensions as
depicted in Figure 4. The figure shows the individual points color coded according to class, along
Figure 4: USPS handwritten digits 0-4 mapped to 2 dimensions using Diffeomap. (a) Mapped points
color coded by class label. (b) A composite image of the mapped data; see Section 4.3 for details.
with a composite image formed by sequentially drawing each digit in random order at its mapped
location, but only if it would not obscure a previously drawn digit. Diffeomap manages to arrange
the data in a manner which reveals such image properties as digit angle and stroke thickness. At the
same time the classes are reasonably well separated, with the exception of the ones which are split
into two clusters depending on the angle. Although unfortunate, we believe that this splitting can
be explained by the fact that (a) the left- and right-pointing ones are rather dissimilar in input space,
and (b) the number of fairly vertical ones which could help to connect the left- and right-pointing
ones is rather small. Diffeomap seems to produce a result which is superior to that of the GP-LVM
(Lawrence, 2004), for example, but may be inferior to that of the SNE (Hinton & Roweis, 2003). We
believe this is due to the fact that the nearest neighbor graph used by SNE is highly appropriate to the
USPS data set. This is indicated by the fact that a nearest neighbor classifier in the 256 dimensional
input space is known to perform strongly, with numerous authors having reported error rates of less
than 5% on the ten class classification problem.
4.4 NIPS Text Data
Finally, we present results on the text data of papers from the NIPS conference proceedings volumes
0-12, which can be obtained from http://www.cs.toronto.edu/~roweis/data.html.
This experiment is intended to address the natural concern that by working in the input space rather
than on a nearest neighbor graph, for example, Diffeomap may have difficulty with very high dimensional data. Following (Hinton & Roweis, 2003; Song et al., 2008) we represent the data as a word
frequency vs. document matrix in which the author names are treated as words but weighted up by
a factor 20 (i.e. an author name is worth 20 words). The result is a data set of m = 1740 papers
represented in a = 13649 words + 2037 authors = 15686 dimensions. Note however that the input
dimensionality is effectively reduced by the PCA preprocessing step to m − 1 = 1739, that being
the rank of the centered covariance matrix of the data.
As this data set is difficult to visualize without taking up large amounts of space, we have included
the results in the supplementary material which accompanies our NIPS submission. In particular,
we provide a first figure which shows the data mapped to b = 2 dimensions, with certain authors (or
groups of authors) color coded; the choice of authors and their corresponding color codes follows
precisely those of (Song et al., 2008). A second figure shows a plain marker drawn at the mapped
locations corresponding to each of the papers. This second figure also contains the paper title and
authors of the corresponding papers, which are revealed when the user moves the mouse
over the marked locations. Hence, this second figure allows one to browse the NIPS collection
contextually. Since the mapping may be hard to judge, we note in passing that the correct classification
rate of a one nearest neighbor classifier applied to the result of Diffeomap was 48%, which compares
favorably to the rate of 33% achieved by linear PCA (which we use for preprocessing). To compute
this score we treated authors as classes, and considered only those authors who were color coded
both in our supplementary figure and in (Song et al., 2008).
5 Conclusion
We have presented an approach to dimensionality reduction which is based on the idea that the mapping between the lower and higher dimensional spaces should be diffeomorphic. We provided a
justification for this approach, by showing that the common intuition that dimensionality reduction
algorithms should approximately preserve pairwise distances of a given data set is closely related to
the idea that the mapping induced by the algorithm should be a diffeomorphism. This realization
allowed us to take advantage of established mathematical machinery in order to convert the dimensionality reduction problem into a so called Eulerian flow problem, the solution of which is guaranteed to generate a diffeomorphism. Requiring that the mapping and its inverse both be smooth is
reminiscent of the GP-LVM algorithm (Lawrence & Candela, 2006), but has the advantage in terms
of statistical strength that we need not separately estimate a mapping in each direction. We showed
results of our algorithm, Diffeomap, on a relatively small motion capture data set, a larger vowel
data set, the USPS image data set, and finally the rather high dimensional data set derived from the
text corpus of NIPS papers, with successes in all cases. Since our new approach performs well in
practice while being significantly different to all previous approaches to dimensionality reduction, it
has the potential to lead to a significant new direction in the field.
References
Bilmes, J., et al. (2006). The Vocal Joystick. Proc. IEEE Intl. Conf. on Acoustics, Speech and Signal Processing. Toulouse, France.
Cox, T., & Cox, M. (1994). Multidimensional scaling. London, UK: Chapman & Hall.
DeMers, D., & Cottrell, G. (1993). Non-linear dimensionality reduction. NIPS 5 (pp. 580-587). Morgan Kaufmann, San Mateo, CA.
Duda, R. O., Hart, P. E., & Stork, D. G. (2000). Pattern classification. New York: Wiley. 2nd edition.
Dupuis, P., & Grenander, U. (1998). Variational problems on flows of diffeomorphisms for image matching. Quarterly of Applied Mathematics, LVI, 587-600.
Hinton, G., & Roweis, S. (2003). Stochastic neighbor embedding. In S. Becker, S. Thrun and K. Obermayer (Eds.), Advances in neural information processing systems 15, 833-840. Cambridge, MA: MIT Press.
Hull, J. J. (1994). A database for handwritten text recognition research. IEEE Trans. Pattern Anal. Mach. Intell., 16, 550-554.
Joshi, S. C., & Miller, M. I. (2000). Landmark matching via large deformation diffeomorphisms. IEEE Transactions on Image Processing, 9, 1357-1370.
Karaçali, B., & Davatzikos, C. (2003). Topology preservation and regularity in estimated deformation fields. Information Processing in Medical Imaging (pp. 426-437).
Lawrence, N. D. (2004). Gaussian process latent variable models for visualisation of high dimensional data. In S. Thrun, L. Saul and B. Schölkopf (Eds.), NIPS 16. Cambridge, MA: MIT Press.
Lawrence, N. D., & Candela, J. Q. (2006). Local distance preservation in the GP-LVM through back constraints. In International conference on machine learning, 513-520. ACM.
Schölkopf, B., Herbrich, R., & Smola, A. J. (2001). A generalized representer theorem. Proc. of the 14th Annual Conf. on Computational Learning Theory (pp. 416-426). London, UK: Springer-Verlag.
Song, L., Smola, A., Borgwardt, K., & Gretton, A. (2008). Colored maximum variance unfolding. In J. Platt, D. Koller, Y. Singer and S. Roweis (Eds.), NIPS 20, 1385-1392. Cambridge, MA: MIT Press.
Tenenbaum, J. B., de Silva, V., & Langford, J. C. (2000). A global geometric framework for nonlinear dimensionality reduction. Science, 290, 2319-2323.
van der Maaten, L. J. P., Postma, E., & van den Herik, H. (2008). Dimensionality reduction: A comparative review. In T. Ertl (Ed.), Submitted to neurocognition. Elsevier.
Venna, J. (2007). Dimensionality reduction for visual exploration of similarity structures. Doctoral dissertation, Helsinki University of Technology.
Bayesian Network Score Approximation using a Metagraph Kernel
Benjamin Yackley
Department of Computer Science
University of New Mexico
Eduardo Corona
Courant Institute of Mathematical Sciences
New York University
Terran Lane
Department of Computer Science
University of New Mexico
Abstract
Many interesting problems, including Bayesian network structure-search,
can be cast in terms of finding the optimum value of a function over the
space of graphs. However, this function is often expensive to compute
exactly. We here present a method derived from the study of Reproducing
Kernel Hilbert Spaces which takes advantage of the regular structure of the
space of all graphs on a fixed number of nodes to obtain approximations
to the desired function quickly and with reasonable accuracy. We then test
this method on both a small testing set and a real-world Bayesian network;
the results suggest that not only is this method reasonably accurate, but
that the BDe score itself varies quadratically over the space of all graphs.
1 Introduction
The problem we address in this paper is, broadly speaking, function approximation. Specifically, the application we present here is that of estimating scores on the space of Bayesian
networks as a first step toward a quick way to obtain a network which is optimal given a set
of data. Usually, the search process requires a full recomputation of the posterior likelihood
of the graph at every step, and is therefore slow. We present a new approach to the problem
of approximating functions such as this one, where the mapping is of an object (the graph,
in this particular case) to a real number (its BDe score). In other words, we have a function
f : ?n ? R (where ?n is the set of all directed graphs on n nodes) from which we have a
small number of samples, and we would like to interpolate the rest. The technique hinges
on the set ?n having a structure which can be factored into a Cartesian product, as well as
on the function we approximate being smooth over this structure.
Although Bayesian networks are by definition acyclic, our approximation technique applies
to the general directed-graph case. Because a given directed graph has n² possible edges, we
can imagine the set of all graphs as itself being a Hamming cube of degree n², a "metagraph"
with 2^{n²} nodes, since each edge can be independently present or absent. We say that two
graphs are connected with an edge in our metagraph if they differ in one and only one edge.
We can similarly identify each graph with a bit string by "unraveling" the adjacency matrix
into a long string of zeros and ones. However, if we know beforehand an ordering on the
nodes of our graph to which all directed graphs must stay consistent (to enforce acyclicity),
then there are only C(n, 2) = n(n−1)/2 possible edges, and the size of our metagraph drops
to 2^{C(n,2)}. The same correspondence can then be made between these graphs and bit
strings of length C(n, 2).
Since the eigenvectors of the Laplacian of a graph form a basis for all smooth functions on
the graph, we can use our known sampled values (which correspond to a mapping from
a subset of nodes on our metagraph to the real numbers) to interpolate the others. Despite
the incredible size of the metagraph, we show that this problem is by no means intractable,
and functions can in fact be approximated in polynomial time. We also demonstrate this
technique both on a small network, for which we can exhaustively compute the score of every
possible directed acyclic graph, and on a larger real-world network. The results show
that the method is accurate, and additionally suggest that the BDe scoring metric used is
quadratic over the metagraph.
2 Spectral Properties of the Hypercube
2.1 The Kronecker Product and Kronecker Sum
The matrix operators known as the Kronecker product and Kronecker sum, denoted $\otimes$ and
$\oplus$ respectively, play a key role in the derivation of the spectral properties of the hypercube.
Given matrices $A \in \mathbb{R}^{i \times j}$ and $B \in \mathbb{R}^{k \times l}$, $A \otimes B$ is the matrix in $\mathbb{R}^{ik \times jl}$ such that:
$$A \otimes B = \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1j}B \\ a_{21}B & a_{22}B & & a_{2j}B \\ \vdots & & \ddots & \\ a_{i1}B & a_{i2}B & \cdots & a_{ij}B \end{pmatrix}$$
The Kronecker sum is defined over a pair of square matrices $A \in \mathbb{R}^{m \times m}$ and $B \in \mathbb{R}^{n \times n}$ as
$A \oplus B = A \otimes I_n + I_m \otimes B$, where $I_n$ denotes an $n \times n$ identity matrix [8].
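As a concrete check of these definitions, the following Python sketch (ours, with arbitrary example matrices) builds the Kronecker sum directly from NumPy's Kronecker product routine:

import numpy as np

def kronecker_sum(A, B):
    # A (+) B = A (x) I_n + I_m (x) B for square A (m x m) and B (n x n).
    m, n = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(n)) + np.kron(np.eye(m), B)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(kronecker_sum(A, B))   # a 4 x 4 matrix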
2.2 Cartesian Products of Graphs
The Cartesian product of two graphs $G_1$ and $G_2$, denoted $G_1 \square G_2$, is intuitively defined as
the result of replacing every node in $G_1$ with a copy of $G_2$ and connecting corresponding
edges together. More formally, if the product is the graph $G = G_1 \square G_2$, then the vertex
set of $G$ is the Cartesian product of the vertex sets of $G_1$ and $G_2$. In other words, for any
vertex $v_1$ in $G_1$ and any vertex $v_2$ in $G_2$, there exists a vertex $(v_1, v_2)$ in $G$. Additionally,
the edge set of $G$ is such that, for any edge $(u_1, u_2) \sim (v_1, v_2)$ in $G$, either $u_1 = v_1$ and
$u_2 \sim v_2$ is an edge in $G_2$, or $u_2 = v_2$ and $u_1 \sim v_1$ is an edge in $G_1$ [7].
In particular, the set of hypercube graphs (or, identically, the set of Hamming cubes) can be
derived using the Cartesian product operator. If we denote the graph of an $n$-dimensional
hypercube as $Q_n$, then $Q_{n+1} = Q_n \square Q_1$, where the graph $Q_1$ is a two-node graph with a
single bidirectional edge.
2.3 Spectral Properties of Cartesian Products
The Cartesian product has the property that, if we denote the adjacency matrix of a graph
$G$ as $A(G)$, then $A(G_1 \square G_2) = A(G_1) \oplus A(G_2)$. Additionally, if $A(G_1)$ has $m$ eigenvectors
$\phi_k$ and corresponding eigenvalues $\lambda_k$ (with $k = 1 \ldots m$) while $A(G_2)$ has $n$ eigenvectors $\psi_l$
with corresponding eigenvalues $\mu_l$ (with $l = 1 \ldots n$), then the full spectral decomposition of
$A(G_1 \square G_2)$ is simple to obtain by the properties of the Kronecker sum; $A(G_1 \square G_2)$ will
have $mn$ eigenvectors, each of them of the form $\phi_k \otimes \psi_l$ for every possible $\phi_k$ and $\psi_l$ in the
original spectra, and each of them having the corresponding eigenvalue $\lambda_k + \mu_l$ [2].
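This spectral claim is easy to verify numerically; the sketch below (our own check, using a path and a triangle as arbitrary factor graphs) confirms that each Kronecker product of factor eigenvectors is an eigenvector of the Kronecker sum, with the eigenvalues adding:

import numpy as np

A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path on 3 nodes
A2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)  # triangle

lam, phi = np.linalg.eigh(A1)
mu, psi = np.linalg.eigh(A2)
S = np.kron(A1, np.eye(3)) + np.kron(np.eye(3), A2)            # A1 (+) A2

for k in range(3):
    for l in range(3):
        v = np.kron(phi[:, k], psi[:, l])
        assert np.allclose(S @ v, (lam[k] + mu[l]) * v)        # eigenvalue lam_k + mu_l
print("all 9 eigenpairs of the Kronecker sum verified")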
It should also be noted that, because hypercubes are all $k$-regular graphs (in particular,
the hypercube $Q_n$ is $n$-regular), the form of the normalized Laplacian becomes simple. The
usual formula for the normalized Laplacian is:
$$\tilde{L} = I - D^{-1/2} A D^{-1/2}$$
However, since the graph is regular, we have $D = kI$, and so
$$\tilde{L} = I - (kI)^{-1/2} A (kI)^{-1/2} = I - \frac{1}{k} A.$$
Also note that, because the formula for the combinatorial Laplacian is $L = D - A$, we also
have $\tilde{L} = \frac{1}{k} L$.
The Laplacian also distributes over graph products, as shown in the following theorem.
Theorem 1 Given two simple, undirected graphs $G_1 = (V_1, E_1)$ and $G_2 = (V_2, E_2)$, with
combinatorial Laplacians $L_{G_1}$ and $L_{G_2}$, the combinatorial Laplacian of the Cartesian product
graph $G_1 \square G_2$ is then given by:
$$L_{G_1 \square G_2} = L_{G_1} \oplus L_{G_2}$$
Proof.
$$L_{G_1} = D_{G_1} - A(G_1), \qquad L_{G_2} = D_{G_2} - A(G_2)$$
Here, $D_G$ denotes the degree diagonal matrix of the graph $G$. Now, by the definition of the
Laplacian,
$$L_{G_1 \square G_2} = D_{G_1 \square G_2} - A(G_1) \oplus A(G_2)$$
However, the degree of any vertex $uv$ in the Cartesian product is $\deg(u) + \deg(v)$, because
all edges incident to a vertex will either be derived from one of the original graphs or the
other, leading to corresponding nodes in the product graph. So, we have
$$D_{G_1 \square G_2} = D_{G_1} \oplus D_{G_2}$$
Substituting this in, we obtain
$$L_{G_1 \square G_2} = D_{G_1} \oplus D_{G_2} - A(G_1) \oplus A(G_2)$$
$$= D_{G_1} \otimes I_m + I_n \otimes D_{G_2} - A(G_1) \otimes I_m - I_n \otimes A(G_2)$$
$$= D_{G_1} \otimes I_m - A(G_1) \otimes I_m + I_n \otimes D_{G_2} - I_n \otimes A(G_2)$$
Because the Kronecker product is distributive over addition [8],
$$L_{G_1 \square G_2} = (D_{G_1} - A(G_1)) \otimes I_m + I_n \otimes (D_{G_2} - A(G_2)) = L_{G_1} \oplus L_{G_2}$$
Additionally, if $G_1 \square G_2$ is $k$-regular,
$$\tilde{L}_{G_1 \square G_2} = \tilde{L}_{G_1} \oplus \tilde{L}_{G_2} = \frac{1}{k}\left(L_{G_1} \oplus L_{G_2}\right)$$
Therefore, since the combinatorial Laplacian operator distributes across a Kronecker sum,
we can easily find the spectra of the Laplacian of an arbitrary hypercube through a recursive
process if we just find the spectrum of the Laplacian of Q1 .
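As a small numerical illustration of Theorem 1 (our own check, not from the paper), the sketch below builds the 3-cube by repeated Kronecker sums of $A(Q_1)$ and compares the Laplacian computed directly against the Kronecker sum of the factor Laplacians:

import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def kron_sum(X, Y):
    return np.kron(X, np.eye(Y.shape[0])) + np.kron(np.eye(X.shape[0]), Y)

A_Q1 = np.array([[0, 1], [1, 0]], dtype=float)
A_Q3 = kron_sum(kron_sum(A_Q1, A_Q1), A_Q1)     # adjacency of the 3-cube

# Laplacian of the product vs. Kronecker sum of the factor Laplacians.
L1 = laplacian(A_Q1)
assert np.allclose(laplacian(A_Q3), kron_sum(kron_sum(L1, L1), L1))
print(np.round(np.linalg.eigvalsh(laplacian(A_Q3)), 6))   # 0, 2, 2, 2, 4, 4, 4, 6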
2.4 The Spectrum of the Hypercube $Q_n$
First, consider that
$$A(Q_1) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$
This is a $k$-regular graph with $k = 1$. So,
$$L_{Q_1} = I - \frac{1}{k} A(Q_1) = \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}$$
Its eigenvectors and eigenvalues can be easily computed; it has the eigenvector $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$ with
eigenvalue 0 and the eigenvector $\begin{bmatrix} 1 \\ -1 \end{bmatrix}$ with eigenvalue 2. We can use these to compute the
four eigenvectors of $L_{Q_2}$, the Laplacian of the 2-dimensional hypercube; $L_{Q_2} = L_{Q_1 \square Q_1} = L_{Q_1} \oplus L_{Q_1}$,
so the four possible Kronecker products are $[1\ 1\ 1\ 1]^T$, $[1\ 1\ {-1}\ {-1}]^T$, $[1\ {-1}\ 1\ {-1}]^T$,
and $[1\ {-1}\ {-1}\ 1]^T$, with corresponding eigenvalues 0, 1, 1, and 2 (renormalized by a factor
of $\frac{1}{k} = \frac{1}{2}$ to take into account that our new hypercube is now degree 2 instead of degree 1;
the combinatorial Laplacian would require no normalization). It should be noted here that
an $n$-dimensional hypercube graph will have $2^n$ eigenvalues with only $n + 1$ distinct values;
they will be the values $\frac{2k}{n}$ for $k = 0, \ldots, n$, each of which will have multiplicity $\binom{n}{k}$ [4].
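The sketch below (ours) verifies this spectrum statement for $n = 4$, constructing $A(Q_n)$ by the same repeated Kronecker sums as above:

import numpy as np
from math import comb
from collections import Counter

n = 4
edge = np.array([[0.0, 1.0], [1.0, 0.0]])
A = edge
for _ in range(n - 1):
    A = np.kron(A, np.eye(2)) + np.kron(np.eye(A.shape[0]), edge)

L_norm = np.eye(2 ** n) - A / n                        # Q_n is n-regular
counts = Counter(np.round(np.linalg.eigvalsh(L_norm), 8))
for k in range(n + 1):
    assert counts[round(2 * k / n, 8)] == comb(n, k)   # multiplicity C(n, k)
print(sorted(counts.items()))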
If we arrange these columns in the proper order as a matrix, a familiar shape emerges:
$$\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{pmatrix}$$
This is, in fact, the Hadamard matrix of order 4, just as placing our original two eigenvectors
side-by-side creates the order-2 Hadamard matrix. In fact, the eigenvectors of the Laplacian
on a hypercube are simply the columns of a Hadamard matrix of the appropriate size; this
can be seen by the recursive definition of the Hadamard matrix in terms of the Kronecker
product:
$$H_{2^{n+1}} = H_{2^n} \otimes H_2$$
Recall that the eigenvectors of the Kronecker sum of two matrices are themselves all possible
Kronecker products of eigenvectors of those matrices. Since hypercubes can be recursively
constructed using Kronecker sums, the basis for smooth functions on hypercubes (i.e. the
set of eigenvectors of their graph Laplacian) is the Hadamard basis. Consequently, there is
no need to ever compute a full eigenvector explicitly; there is an explicit formula for a given
entry of any Hadamard matrix:
$$(H_{2^n})_{ij} = (-1)^{\langle b_i, b_j \rangle}$$
The notation $b_x$ here means "the $n$-bit binary expansion of $x$ interpreted as a vector of 0s
and 1s". This is the key to computing our kernel efficiently, not only because it takes very
little time to compute arbitrary elements of eigenvectors, but because we are free to compute
only the elements we need instead of entire eigenvectors at once.
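Since $\langle b_i, b_j \rangle \bmod 2$ is just the parity of the bitwise AND of $i$ and $j$, a single entry costs one popcount. The sketch below (ours) checks this closed form against the recursive Kronecker construction:

import numpy as np

def hadamard_entry(i, j):
    # (H_{2^n})_{ij} = (-1)^{<b_i, b_j>}, via the parity of popcount(i AND j).
    return -1 if bin(i & j).count("1") % 2 else 1

n = 3
H = np.array([[1]])
for _ in range(n):
    H = np.kron(H, np.array([[1, 1], [1, -1]]))     # H_{2^{n+1}} = H_{2^n} (x) H_2

assert all(H[i, j] == hadamard_entry(i, j)
           for i in range(2 ** n) for j in range(2 ** n))
print("closed form matches the Kronecker construction for n =", n)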
3 The Metagraph Kernel
3.1 The Optimization Framework
Given the above, we now formulate the regression problem that will allow us to approximate
our desired function at arbitrary points. Given a set of $k$ observations $\{y_i\}_{i=1}^{k}$ corresponding
to nodes $x_i$ in the metagraph, we wish to find the $\hat{f}$ which minimizes the squared error
between our estimate and all observed points and also which is a sufficiently smooth function
on the graph to avoid overfitting. In other words,
$$\hat{f} = \arg\min_f \left\{ \frac{1}{k} \sum_{i=1}^{k} \| f(x_i) - y_i \|^2 + c\, f^T L^m f \right\}$$
The variable $m$ in this expression controls the type of smoothing; if $m = 1$, then we are
penalizing first-differences (i.e. the gradient of the function). We will take $m = 2$ in our
experiments, to penalize second-differences (the usual case when using spline interpolation) [6].
This problem can be formulated and solved within the Reproducing Kernel Hilbert Space
framework [9]; consider the space of functions on our metagraph as the sum of two orthogonal
spaces, one (called $\mathcal{H}_0$) consisting of functions which are not penalized by our regularization
term (which is $c \hat{f}^T L^m \hat{f}$), and one (called $\mathcal{H}_1$) consisting of functions orthogonal to those. In
the case of our hypercube graph, $\mathcal{H}_0$ turns out to be particularly simple; it consists only of
constant functions (i.e. vectors of the form $\mathbf{1}^T d$, where $\mathbf{1}$ is a vector of all ones). Meanwhile,
the space $\mathcal{H}_1$ is formulated under the RKHS framework as a set of columns of the kernel
matrix (denoted $K_1$). Consequently, we can write $\hat{f} = \mathbf{1}^T d + K_1 e$, and so our formulation
becomes:
becomes:
( k
)
2
1 X
T
T
?
f = arg min
(1 d + K1 e)(xi ) ? yi + ce K1 e
f
k i=1
The solution to this optimization problem is for our coefficients $d$ and $e$ to be linear estimates
on $y$, our vector of observed values. In other words, there exist matrices $\Lambda_d(c, m)$ and
$\Lambda_e(c, m)$, dependent on our smoothing coefficient $c$ and our exponent $m$, such that:
$$\hat{d} = \Lambda_d(c, m)\, y, \qquad \hat{e} = \Lambda_e(c, m)\, y, \qquad \hat{f} = \mathbf{1}^T \hat{d} + K_1 \hat{e} = \Lambda(c, m)\, y$$
$\Lambda(c, m) = \mathbf{1}^T \Lambda_d(c, m) + K_1 \Lambda_e(c, m)$ is the influence matrix [9] which provides the function
estimate over the entire graph. Because $\Lambda(c, m)$ is entirely dependent on the two matrices
$\Lambda_d$ and $\Lambda_e$ as well as our kernel matrix, we can calculate an estimate for any set of nodes
in the graph by explicitly calculating only those rows of $\Lambda$ which correspond to those nodes
and then simply multiplying that sub-matrix by the vector $y$. Therefore, if we have an
efficient way to compute arbitrary entries of the kernel matrix $K_1$, we can estimate functions
anywhere in the graph.
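The paper does not spell out the linear system behind $\Lambda_d$ and $\Lambda_e$; one standard choice, following Wahba's smoothing-spline treatment [9], is sketched below. The system, the function names, and the interface are our assumptions for illustration, not the authors' implementation:

import numpy as np

def fit_metagraph_kernel(K_obs, y, c):
    # Assumed smoothing-spline system: with T = ones(k, 1) spanning the
    # constant (unpenalized) functions, solve
    #   (K_obs + k*c*I) e + T d = y,   T^T e = 0.
    k = len(y)
    T = np.ones((k, 1))
    system = np.vstack([np.hstack([K_obs + k * c * np.eye(k), T]),
                        np.hstack([T.T, np.zeros((1, 1))])])
    sol = np.linalg.solve(system, np.concatenate([y, [0.0]]))
    return sol[k], sol[:k]                  # d (scalar), e (length k)

def predict(K_query_obs, d, e):
    # Estimate at query nodes: f_hat = d + K(query, observed) e.
    return d + K_query_obs @ e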
3.2 Calculating entries of $K_1$
First, we must choose an order $r \in \{1, 2, \ldots, n\}$; this is equivalent to selecting the degree of a
polynomial used to perform standard interpolation on the hypercube. The effect that $r$ will
have on our problem will be to select the set of basis functions we consider; the eigenvectors
corresponding to a given eigenvalue $\frac{2k}{n}$ are the eigenvectors which divide the space
into identically-valued regions which are themselves $(n - k)$-dimensional hypercubes. For
example, the 3 eigenfunctions on the 3-dimensional hypercube which correspond to the
eigenvalue $\frac{2}{3}$ (so $k = 1$) are those which separate the space into a positive plane and a
negative plane along each of the three axes. Because these eigenfunctions are all equivalent
apart from rotation, there is no reason to choose one to be in our basis over another, and
so we can say that the total number of eigenfunctions we use in our approximation is equal
to $\sum_{k=0}^{r} \binom{n}{k}$ for our chosen value of $r$.
All eigenvectors can be identified with a number $l$ corresponding to its position in the
natural-ordered Hadamard matrix; the columns where $l$ is an exact power of 2 are ones that
alternate in identically-sized blocks of +1 and -1, while the others are element-wise products
of the columns corresponding to the ones in $l$'s binary expansion. Therefore, if we use the
notation $|x|_1$ to mean "the number of ones in the binary expansion of $x$", then choosing the
order $r$ is equivalent to choosing a basis of eigenvectors $\phi_l$ such that $|l|_1$ is less than or equal
to $r$. Therefore, we have:
$$(K_1)_{ij} = \frac{1}{n} \sum_{1 \le |l|_1 \le r} \left( \frac{n}{2k} \right)^m H_{il} H_{jl}$$
Because $k$ is equal to $|l|_1$, and because we already have an explicit form for any $H_{xy}$, we
can write
$$(K_1)_{ij} = \frac{1}{n} \sum_{1 \le |l|_1 \le r} \left( \frac{n}{2|l|_1} \right)^m (-1)^{\langle b_i, l \rangle + \langle b_j, l \rangle} = \frac{1}{n} \sum_{k=1}^{r} \left( \frac{n}{2k} \right)^m \sum_{|l|_1 = k} (-1)^{\langle b_i \oplus b_j, l \rangle}$$
The $\oplus$ symbol here denotes exclusive-or, which is equivalent to addition mod 2 in this
domain. The justification for this is that only the parity of the exponent (odd or even)
matters, and locations in the bit strings $b_i$ and $b_j$ which are both zero or both one contribute
no change to the overall parity. Notably, this shows that the value of the kernel between
any two bit strings $b_i$ and $b_j$ is dependent only on $b_i \oplus b_j$, the key result which allows us to
compute these values quickly. If we let $S_k(b_i, b_j) = \sum_{|l|_1 = k} (-1)^{\langle b_i \oplus b_j, l \rangle}$, there is a recursive
formulation for the computation of $S_k(b_i, b_j)$ in terms of $S_{k-1}(b_i, b_j)$, which is the method
used in the experiments due to its speed and feasibility of computation.
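The recursion for $S_k$ is left implicit above; the sketch below (ours) instead evaluates $S_k$ through an equivalent closed form derived directly from the definition. If $d = |b_i \oplus b_j|_1$, then choosing $j$ of the $k$ ones of $l$ among the $d$ positions where $b_i \oplus b_j$ is 1 flips the sign $j$ times, giving $S_k = \sum_{j} (-1)^j \binom{d}{j} \binom{n-d}{k-j}$ (a Krawtchouk polynomial). This formulation is our own, not the paper's stated recursion:

from math import comb

def S(k, d, n):
    # S_k depends on b_i and b_j only through d = |b_i xor b_j|_1.
    return sum((-1) ** j * comb(d, j) * comb(n - d, k - j) for j in range(k + 1))

def kernel_entry(i, j, n, r, m):
    d = bin(i ^ j).count("1")
    return sum((n / (2 * k)) ** m * S(k, d, n) for k in range(1, r + 1)) / n

# Example on a 6-edge metagraph (two 4-node graphs encoded as 6-bit strings).
print(kernel_entry(0b000011, 0b000110, n=6, r=2, m=2))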
4 Experiments
4.1 The 4-node Bayesian Network
The first set of experiments we performed was on a four-node Bayesian network. We
generated a random "base truth" network and sampled it 1000 times, creating a data set.
We then created an exhaustive set of $2^6 = 64$ directed graphs; there are six possible edges
in a four-node graph, assuming we already have some sort of node ordering that allows us
to orient edges, and so this represented all possibilities. Because we chose the node ordering
to be consistent with our base network, one of these graphs was in fact the correct network.
We then gave each of the set of 64 graphs a log-marginal-likelihood score (i.e. the BDe
score) based on the generated data. As expected, the correct network came out to have
the greatest likelihood. Additionally, computation of the Rayleigh quotient shows that the
function is a globally smooth one over the graph topology. We then performed a set of
experiments using the metagraph kernel.
4.1.1 Randomly Drawn Observations
First, we partitioned the set of 64 observations randomly into two groups. The training
group ranged in size from 3 to 63 samples, with the rest used as the testing group. We
then used the training group as the set of observations, and queried the metagraph kernel to
predict the values of the networks in the testing group. We repeated this process 50 times for
each of the different sizes of the training group, and the results were averaged to obtain Figure 1.
Note that order 3 performs the best overall for large numbers of observations, overtaking the
order-2 approximation at 41 values observed and staying the best until the end. However,
order 1 performs the best for small numbers of observations (perhaps due to overfitting
errors caused by the higher orders) and order 2 performs the best in the middle. The data
suggests that the proper order at which to compute the kernel to obtain the best
approximations is a function of both the size of the space and the number of observations
made within that space.
4.1.2 Best/worst-case Observations
Secondly, we performed experiments where the observations were obtained from networks
which were in the neighborhood around the known true maximum, as well as ones from
networks which were as far from it as possible. These results are Figures 2 and 3. Despite
small differences in shape, the results are largely identical, indicating that the distribution
of the samples throughout $\Omega_n$ matters very little.
4.2 The Alarm Network
The Alarm Bayesian network [1] contains 37 nodes, and has been used in much Bayes-net-related
research [3]. We first generated data according to the true network, sampling it 1000
times, then generated random directed graphs over the 37 nodes to see if their scores could
be predicted as well as in the smaller four-node case. We generated two sets of these graphs:
a set of 100, and a set of 1000. We made no attempt to enforce an ordering; although
the graphs were all acyclic, we placed no assumption on the graphs being consistent with
the same node-ordering as the original. The scores of these sets, calculated using the data
drawn from the true network, served as our observed data.

[Figure 1: Randomly-drawn samples. Root-mean-squared error against the number of observed nodes, for kernel orders 1 through 5.]
[Figure 2: Samples drawn near the maximum. Root-mean-squared error against the number of observed nodes, for kernel orders 1 through 6.]
[Figure 3: Samples drawn near the minimum. Root-mean-squared error against the number of observed nodes, for kernel orders 1 through 6.]
[Figure 4: Samples from the ALARM network. Root-mean-squared error against the order of approximation, for 100 and 1000 observations, with the mean of the sampled scores as a baseline.]

We then used the kernel to approximate, given the observed scores, the score of an additional 100
randomly-generated graphs, with the order of the kernel varying from 1 to 20. The results, with
root-mean-squared error plotted against the order of the kernel, are shown in Figure 4. Additionally,
we calculated a baseline by taking the mean of the 1000 sampled scores and calling that the
estimated score for every graph in our testing set.
The results show that the metagraph approximation method performs significantly better
than the baseline for low orders of approximation with higher amounts of sampled data.
This makes intuitive sense; the more data there is, the better the approximation should be.
Additionally, the spike at order 2 suggests that the BDe score itself varies quadratically over
the metagraph. To our knowledge, we are the first to make this observation. In current work,
we are analyzing the BDe in an attempt to analytically validate this empirical observation.
If true, this observation may lead to improved optimization techniques for finding the BDe-maximizing Bayesian network. Note, however, that, even if true, exact optimization is still
unlikely to be polynomial-time because the quadratic form is almost certainly indefinite and,
therefore, NP-hard to optimize.
5 Conclusion
Functions from graphs to real numbers, such as the posterior likelihood of a Bayesian network
given a set of data, can be approximated to a high degree of accuracy by taking advantage of
a hybercubic ?metagraph? structure. Because the metagraph is regular, standard techniques
of interpolation can be used in a straightforward way to obtain predictions for the values at
unknown points.
6 Future Work
Although this technique allows for quick and accurate prediction of function values on the
metagraph, it offers no hints (as of yet) as to where the maximum of the function might
be. This could, for instance, allow one to generate a Bayesian network which is likely to be
close to optimal, and if true optimality is required, that approximate graph could be used
as a starting point for a stepwise method such as MCMC. Even without a direct way to
find such an optimum, though, it may be worth using this approximation technique inside
an MCMC search instead of the usual exact-score computation in order to quickly converge
on a something close to the desired optimum.
Also, many other problems have a similar flavor. In fact, this technique should be able
to be used unchanged on any problem which involves the computation of a real-valued
function over bit strings. For other objects, however, the structure is not necessarily a
hypercube. For example, one may desire an approximation to a function of permutations
of some number of elements to real numbers. The set of permutations of a given number
of elements, denoted Sn , has a similarly regular structure (which can be seen as a graph in
which two permutations are connected if a single swap leads from one to the other), but not
a hypercubic one. The structure-search problem on Bayes Nets can also be cast as a search
over orderings of nodes alone[5], so a way to approximate a function over permutations
would be useful there as well.
Other domains have this ability to be turned into regular graphs; the integers mod $n$ with
edges between numbers that differ by 1 form a loop, for example. It should be possible to
apply a similar trick to obtain function approximations not only on these domains, but on
arbitrary Cartesian products of them. So, for instance, remembering that the directions of
the edges of a Bayesian network are completely specified given an ordering on the nodes, the
network structure search problem on $n$ nodes can be recast as a function approximation
over the set $S_n \times Q_{\binom{n}{2}}$. Many problems can be cast into the metagraph framework; we have
only just scratched the surface here.
Acknowledgments
The authors would like to thank Curtis Storlie and Joshua Neil from the UNM Department of
Mathematics and Statistics, as well as everyone in the Machine Learning Reading Group at
UNM. This work was supported by NSF grant #IIS-0705681 under the Robust Intelligence
program.
References
[1] I. Beinlich, H.J. Suermondt, R. Chavez, G. Cooper, et al. The ALARM monitoring
system: A case study with two probabilistic inference techniques for belief networks.
Proceedings of the Second European Conference on Artificial Intelligence in Medicine,
256, 1989.
[2] D.S. Bernstein. Matrix Mathematics: Theory, Facts, and Formulas with Application to
Linear Systems Theory. Princeton University Press, 2005.
[3] D.M. Chickering, D. Heckerman, and C. Meek. A Bayesian approach to learning Bayesian
networks with local structure. UAI'97, pages 80-89, 1997.
[4] Fan R. K. Chung. Spectral Graph Theory. Conference Board of the Mathematical
Sciences. AMS, 1997.
[5] N. Friedman and D. Koller. Being Bayesian about network structure. Machine Learning,
50(1-2):95-125, 2003.
[6] Chong Gu. Smoothing Spline ANOVA Models. Springer Verlag, 2002.
[7] G. Sabidussi. Graph multiplication. Mathematische Zeitschrift, 72(1):446-457, 1959.
[8] Kathrin Schacke. On the Kronecker product. Master's thesis, University of Waterloo,
2004.
[9] Grace Wahba. Spline Models for Observational Data. CBMS-NSF Regional Conference
Series in Applied Mathematics. SIAM, 1990.
Inferring rankings under constrained sensing
Srikanth Jagabathula
Devavrat Shah
Laboratory of Information and Decision Systems,
Massachusetts Institute of Technology,
Cambridge, MA 02139.
{jskanth, devavrat}@mit.edu
Abstract
Motivated by applications like elections, web-page ranking, revenue maximization etc., we consider the question of inferring popular rankings using constrained
data. More specifically, we consider the problem of inferring a probability distribution over the group of permutations using its first order marginals. We first
prove that it is not possible to recover more than O(n) permutations over n elements with the given information. We then provide a simple and novel algorithm
that can recover up to O(n) permutations under a natural stochastic model; in this
sense, the algorithm is optimal. In certain applications, the interest is in recovering only the most popular (or mode) ranking. As a second result, we provide
an algorithm based on the Fourier Transform over the symmetric group to recover
the mode under a natural majority condition; the algorithm turns out to be a maximum weight matching on an appropriately defined weighted bipartite graph. The
questions considered are also thematically related to Fourier Transforms over the
symmetric group and the currently popular topic of compressed sensing.
1 Introduction
We consider the question of determining a real-valued function on the space of permutations of
n elements with very limited observations. Such a question naturally arises in many applications
including efficient web-page rank aggregation, choosing the winner in a sport season, setting odds
in gambling for revenue maximization, estimating popularity of candidates pre-election and the list
goes on (for example, see references [1], [2], [3]). In what follows, we give a motivating example
for the pursuit of this quest.
A motivating example. Consider a pre-election scenario in a democratic country with n potential
candidates. Each person (or voter) has certain ranking of these candidates in mind (consciously or
sub-consciously). For example, let n = 3 and the candidates be A, B and C. Each person believes
in one of the 3! = 6 possible ordering of these candidates. For example, let 50% of people believe
in A > B > C, 30% of people believe in B > A > C and 20% of people believe in C > A > B.
We wish to infer these preferences of population by means of a limited set of questions.
Specifically, suppose we can interview a representative collection (i.e. reasonably large random
collection) of people for this purpose. However, in the interview we may not be able to ask them
their complete ranking of all candidates. This may be because a person may not be able to articulate
it clearly. Or, in situations (e.g. gambling) where there is a financial significance associated with
information of complete ranking, an individual may not be ready to provide that information. In
such a situation, we will have to settle with restricted questions of the following type: what will be
the rank of candidate A in your opinion? or, whom would you rank second?
Given answers to such restricted questions, we would like to infer what fraction of the population
prefers which ordering of candidates. Clearly, such restricted information cannot lead to any useful
1
inference of prevalent ordering of candidates in the population if there are too many of them (for
large n). Now, in a real world scenario, it is likely that people decide rankings of candidates based on
a few issues such as war, abortion, economy and gay marriage. That is, an individual will decide the
ranking of the candidates based on the opinions of candidates on these issues. Therefore, irrespective
of the number of candidates, the number of distinct rankings that prevail in the population are likely
to be very few.
In this paper, we are interested in inferring such few prevalent rankings of candidates and their
popularity based on the restricted (or partial) information as explained above. Thematically, this
question is similar to the pursuit of compressed sensing. However, as we explain in Section 2,
standard compressed sensing does not apply under this setting. We also discuss a natural relation
between the available information and the Fourier coefficients of the Fourier transformation based
on group representation (see Proposition 1). It turns out that the problem we consider is equivalent
to that of recovery of a function over a symmetric group using the first order Fourier coefficients.
Thus, our problem is thematically related to the recovery of functions over non-commutative groups
using a limited set of Fourier coefficients. As we show in Section 2, a naive recovery by setting the
unknown Fourier coefficients to zero yields a very bad result. Hence, our approach has potential
applications to yielding a better recovery.
In many applications, one is specifically interested in finding out the most popular ranking (or mode)
rather than all the prevalent rankings. For this, we consider an approximation based on Fourier transformation as a surrogate to find the mode. We establish that under the natural majority condition,
our algorithm finds the correct mode (see Theorem 2). Interestingly enough, our algorithm to find
an estimate of the mode corresponds to finding a maximum weight matching in a weighted bipartite
graph of n nodes.
Organization. We start describing the setup, the problem statement, and the relation to compressed
sensing and Fourier transform based approaches in Section 2. In Section 3, we provide precise
statements of the main results. In the remaining Sections, we prove these results and discuss the
relevant algorithms.
2 Background and preliminaries
Setup. Let $S_n = \{\sigma_1, \ldots, \sigma_N\}$ denote the set of all possible $N = n!$ permutations (orderings) of
$n$ elements. $S_n$ is also known as the symmetric group of degree $n$. Let $f : S_n \to [0, 1]$ denote a
mapping from the symmetric group to the interval $[0, 1]$. We assume that the function $f$ is normalized,
i.e., $\|f\|_{\ell_1} = 1$, where $\|\cdot\|_{\ell_1}$ denotes the $\ell_1$ norm. Let $p_k$ denote the value $f(\sigma_k)$, for $1 \le k \le N$.
Without loss of generality we assume that the permutations are labeled such that $p_k \ge p_m$ for
$k < m$. We write $f(\cdot)$ to denote the function and $f$ to denote the vector $(f(\sigma_k))_{N \times 1}$. The set of
permutations for which $f(\cdot)$ is non-zero will be called the support of $f(\cdot)$; also, the cardinality of the
support will be called the sparsity of $f$ and is denoted by $K$, i.e., $K = \|f\|_{\ell_0}$. Each permutation $\sigma$ will
be represented by its corresponding permutation matrix denoted by $P^\sigma$, i.e., $P^\sigma_{ij} = \mathbf{1}_{\{\sigma(j)=i\}}$, where
$\mathbf{1}_E$ is the indicator variable of the event $E$. For brevity, we write $P^\sigma$ to mean both the $n \times n$ matrix
and the $n^2 \times 1$ vector. We use the terms permutation and permutation matrix interchangeably. We
think of permutations as complete matchings in a bipartite graph. Specifically, we consider an $n \times n$
bipartite graph and each permutation corresponds to a complete matching in the graph. The edges
in a permutation will refer to the edges in the corresponding bipartite matching. For $1 \le i, j \le n$,
let
$$q_{ij} := \sum_{\sigma \in S_n : \sigma(j) = i} f(\sigma) \qquad (1)$$
Let $Q$ denote both the matrix $(q_{ij})_{n \times n}$ and the vector $(q_{ij})_{n^2 \times 1}$. It is easy to note that $Q$ can be
equivalently written as $\sum_{\sigma \in S_n} f(\sigma) P^\sigma$. From the definition, it also follows that $Q$ is a doubly
stochastic matrix. The matrix $Q$ corresponds to the first order information about the function $f(\cdot)$.
In the election example, it is easy to see that $q_{ij}$ denotes the fraction of voters that have ranked
candidate $j$ in the $i$th position.
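As a concrete illustration (ours, with an arbitrary toy distribution over $S_4$), the sketch below assembles $Q$ from a 3-sparse $f$ and checks that it is doubly stochastic:

import numpy as np

def perm_matrix(sigma):
    # P[i, j] = 1 iff sigma(j) = i, with sigma given as a tuple sigma[j] = i.
    n = len(sigma)
    P = np.zeros((n, n))
    for j, i in enumerate(sigma):
        P[i, j] = 1.0
    return P

support = {(0, 1, 2, 3): 0.6, (1, 0, 2, 3): 0.3, (2, 0, 1, 3): 0.1}
Q = sum(p * perm_matrix(s) for s, p in support.items())
print(Q)
print(Q.sum(axis=0), Q.sum(axis=1))   # all ones: Q is doubly stochastic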
Problem statement and result. The basic objective is to determine the values of the function $f(\cdot)$
precisely, using only the values of the matrix $Q$. We will first prove, using information theoretic
techniques, that recovery is asymptotically reliable (average probability of error goes to zero as
$n \to \infty$) only if $K = O(n)$. We then provide a novel algorithm that recovers prevalent rankings and
their popularity exactly under minimal (essentially necessary) conditions; under a natural stochastic
model, this algorithm recovers up to $O(n)$ permutations. In this sense, our algorithm is optimal.
It is often the case that the full knowledge of functional values at all permutations is not required.
Specifically, in scenarios such as ranked elections, interest is in finding the most likely permutation,
i.e., $\arg\max_\sigma f(\sigma)$. Theorem 2 proves that the max-weight matching yields the most likely
permutation under a natural majority assumption.
2.1 Relation to Fourier Transform
The question we consider is thematically related to harmonic analysis of functions over noncommutative groups. As we shall show soon, the matrix Q is related to the first two Fourier coefficients of the Fourier Transform of the distribution over the permutation group. Thus, the problem
we are considering can be restated as that of reconstructing a distribution over the permutation group
from its first two Fourier coefficients. Reconstructing distributions over the permutation group from
a limited number of Fourier coefficients has several applications. Specifically, there has been some
recent work on multi-object tracking (see [4] and [3]), in which they approach the daunting task of
maintaining a distribution over the permutation group by approximating it using the first few Fourier
coefficients. This requires reconstructing the function from a limited number of Fourier coefficients,
where our solution can be potentially applied.
We will now discuss the Fourier Transform of a function on the permutation group, which provides
another possible approach for recovery of f . Interestingly enough, the first order Fourier transform
of $f$ can be constructed using the information in $Q = (q_{ij})$. As we shall find, this approach fails
to recover a sparse $f$ as it has a tendency to "spread" the mass on all $n!$ elements given $Q$. However,
as established in Theorem 2, this leads to recovery of the mode or most likely assignment of $f$ under
a natural majority condition.
Next, some details on what the Fourier transform (an interested reader is requested to check [5] for
missing details) based approach is, how Q can be used to obtain an approximation of f and why it
does not recover f exactly. The details relevant to recovery of mode of f will be associated with
Theorem 2.
Fourier Transform: Definition. We can obtain a solution to the set of linear equations in (8)
using the Fourier Transforms at symmetric group representations. For a function $h : G \to \mathbb{R}$ on a
group $G$, its Fourier Transform at a representation $\rho$ of $G$ is defined as $\hat{h}_\rho = \sum_\sigma h(\sigma) \rho(\sigma)$. The
collection of Fourier Transforms of $h(\cdot)$ at a complete set of inequivalent irreducible representations
of $G$ completely determines the function. This follows from the following expression for the inverse
Fourier Transform:
$$h(\sigma) = \frac{1}{|G|} \sum_k d_{\rho_k} \mathrm{Tr}\left[ \hat{h}_{\rho_k}^T \rho_k(\sigma) \right] \qquad (2)$$
where $|G|$ denotes the cardinality of $G$, $d_{\rho_k}$ denotes the degree of representation $\rho_k$ and $k$ indexes
over the complete set of inequivalent irreducible representations of $G$. The trivial representation of
a group is the 1-dimensional representation $\rho_0(\sigma) = 1$, $\forall \sigma \in G$. Therefore, the Fourier Transform
of $h(\cdot)$ at $\rho_0$ is $\sum_\sigma h(\sigma)$.
Fourier Transform: Approximation. The above naturally suggests an approximation based on a
limited number of Fourier coefficients with respect to a certain subset of irreducible representations.
We will show that, indeed, the information matrix $Q$ corresponds to the Fourier coefficient with
respect to the first-order representation of the symmetric group $S_n$. Therefore, it yields a natural
approximation.
It is known [5] that the first order permutation representation of $S_n$, denoted by $\tau_1$, has degree $n$
and maps every permutation $\sigma$ to its corresponding permutation matrix $P^\sigma$. In other words, we
have $\tau_1(\sigma) = P^\sigma$. Thus, $\hat{f}_{\tau_1} = \sum_{\sigma \in S_n} f(\sigma)\,\tau_1(\sigma) = Q$. Reconstruction of $f$ requires Fourier
Transforms at irreducible representations. Even though $\tau_1$ is not an irreducible representation, it is
known [5] that every representation of a group is equivalent to the direct sum of irreducible representations. In particular, $\tau_1$ can be decomposed into $\tau_1 = \rho_0 \oplus \rho_1$, where $\rho_0$ is the aforementioned
trivial representation of degree 1 and $\rho_1$ is an irreducible representation of degree $n - 1$. It is worth
pointing out to a familiar reader that what we call $\rho_1$ is more appropriately denoted by $\rho_{(n-1,1)}$ in
the literature; but we will stick to $\rho_1$ for brevity. Thus, $Q$ is related to the Fourier Transforms of the
irreducible representations $\rho_0$ and $\rho_1$. We now have the following proposition:
Proposition 1. Consider a function $f : S_n \to \mathbb{R}$. Suppose that $\|f\|_{\ell_1} = 1$ and we are given
its corresponding $Q$. Then, its natural Fourier approximation obtained by looking at the Fourier
coefficients of the relevant irreducible representations is given by the function $\hat{f} : S_n \to \mathbb{R}$ defined
as:
$$\hat{f}(\sigma) = (n - 1) \frac{\langle Q, P^\sigma \rangle}{N} - \frac{n - 2}{N} \qquad (3)$$
for $\sigma \in S_n$, with $N = n!$, $\|f\|_{\ell_1} = \|\hat{f}\|_{\ell_1}$ and $\sum_{\sigma \in S_n} \hat{f}(\sigma) P^\sigma = Q$.
Proof. We have:
$$Q = \sum_{\sigma \in S_n} f(\sigma)\, \tau_1(\sigma) = \sum_{\sigma \in S_n} f(\sigma)\, (\rho_0 \oplus \rho_1)(\sigma) = \hat{f}_{\rho_0} \oplus \hat{f}_{\rho_1}. \qquad (4)$$
Therefore,
$$\langle Q, P^\sigma \rangle = \mathrm{Tr}\left[ Q^T P^\sigma \right] = \mathrm{Tr}\left[ \left( \hat{f}_{\rho_0}^T \oplus \hat{f}_{\rho_1}^T \right) \left( \rho_0(\sigma) \oplus \rho_1(\sigma) \right) \right] \qquad (5)$$
Since $\mathrm{Tr}$ is independent of the basis, choosing an appropriate basis we can write:
$$\langle Q, P^\sigma \rangle = \mathrm{Tr}\left[ \hat{f}_{\rho_0}^T \rho_0(\sigma) \right] + \mathrm{Tr}\left[ \hat{f}_{\rho_1}^T \rho_1(\sigma) \right] = 1 + \mathrm{Tr}\left[ \hat{f}_{\rho_1}^T \rho_1(\sigma) \right] \qquad (6)$$
(6) is true because $\rho_0(\sigma) = 1$, $\forall \sigma \in S_n$, and $\|f\|_{\ell_1} = 1$.
$\hat{f}$ is obtained by truncating the Inverse Fourier Transform expression to the first two terms. Thus,
from (2), it follows that:
$$\hat{f}(\sigma) = \frac{1}{N} \left[ \hat{f}_{\rho_0}^T \rho_0(\sigma) + (n - 1)\, \mathrm{Tr}\left[ \hat{f}_{\rho_1}^T \rho_1(\sigma) \right] \right] \qquad (7)$$
Using the fact that $\rho_0(\sigma) = 1$ $\forall \sigma \in S_n$, $\hat{f}_{\rho_0} = 1$, and plugging (6) into (7) gives the result of the
proposition.
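The proposition is easy to exercise numerically; the sketch below (ours, reusing the toy distribution from the earlier sketch) evaluates $\hat{f}$ on all of $S_4$, checks that it sums to one, and shows the mass-spreading discussed next:

import numpy as np
from itertools import permutations
from math import factorial

def perm_matrix(sigma):
    n = len(sigma)
    P = np.zeros((n, n))
    for j, i in enumerate(sigma):
        P[i, j] = 1.0
    return P

n, N = 4, factorial(4)
support = {(0, 1, 2, 3): 0.6, (1, 0, 2, 3): 0.3, (2, 0, 1, 3): 0.1}
Q = sum(p * perm_matrix(s) for s, p in support.items())

# Equation (3): f_hat(sigma) = (n - 1) <Q, P^sigma> / N - (n - 2) / N.
f_hat = {s: (n - 1) * np.sum(Q * perm_matrix(s)) / N - (n - 2) / N
         for s in permutations(range(n))}
print(round(sum(f_hat.values()), 10))   # 1.0, but spread over all 24 permutations
print(max(f_hat, key=f_hat.get))        # (0, 1, 2, 3): the true mode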
Summary. Thus, the Fourier Transform technique yields a solution to the problem. Unfortunately,
the solution is not sparse and the "mass" is distributed over all the permutations, yielding values
of $O(1/N)$ for all permutations. In summary, a naive approach to the reconstruction of a sparse
distribution gives unsatisfactory results and requires a different approach.
2.2 Relation to Compressed Sensing
Here we discuss the relation of the above stated question to the recently popular topic of compressed
sensing. Indeed, both share the commonality in the sense that the ultimate goal is to recover a sparse
function (or vector) based on few samples. However, as we shall show, the setup of our work here
is quite different. This is primarily because in the standard compressed sensing setup, samples are
chosen as "random projections" while here samples are highly constrained and provide information
matrix Q. Next, we provide details of this.
Our problem can be formulated as a solution to a set of linear equations by defining a matrix $A$ as
the $n^2 \times N$ matrix with column vectors $P^{\sigma_k}$, $1 \le k \le N$. Then, $f$ is a solution to the following
set of linear equations:
$$A x = Q \qquad (8)$$
Candes and Tao (2005) [6] provide an approach to solve this problem. They require the vector $f$ to
be sparse, i.e., $\|f\|_{\ell_0} = \alpha N$, for some $\alpha > 0$. As discussed earlier, this is a reasonable assumption
in our case because: (a) the total number of permutations $N$ can be very large even for a reasonably
sized $n$ and (b) most functions $f(\cdot)$ that arise in practice are determined by a small (when compared
to $N$) number of parameters. Under a restriction on the isometry constants of the matrix $A$, Candes
and Tao prove that the solution $f$ is the unique minimizer to the LP:
$$\min \|x\|_{\ell_1} \quad \text{s.t.} \quad A x = Q \qquad (9)$$
Unfortunately, the approach of Candes and Tao cannot be directly applied to our problem because
the isometry constants of the matrix $A$ do not satisfy the required conditions.
We now take a closer look at the isometry constants of $A$. Gaussian random matrices form an
important class of matrices with good isometry constants. Unfortunately, neither is our matrix $A$
random nor is there a straightforward random formulation of our problem. To see why the matrix
$A$ has bad isometry constants, we take a simple example. For any $n \ge 4$ consider the following 4
permutations: $\sigma_1 = \mathrm{id}$, $\sigma_2 = (12)$, $\sigma_3 = (34)$ and $\sigma_4 = (12)(34)$. Here, $\mathrm{id}$ refers to the identity
permutation and the permutations are represented using the cycle notation. It is easy to see that:
$$P^{\sigma_1} + P^{\sigma_4} = P^{\sigma_2} + P^{\sigma_3} \qquad (10)$$
For any integer $1 \le S \le N$, the $S$-restricted isometry constant $\delta_S$ of $A$ is defined as the smallest
quantity such that $A_T c$ obeys:
$$(1 - \delta_S) \|c\|_{\ell_2}^2 \le \|A_T c\|_{\ell_2}^2 \le (1 + \delta_S) \|c\|_{\ell_2}^2 \qquad (11)$$
for all $T \subset \{1, 2, \ldots, N\}$ of cardinality at most $S$ and all real vectors $c$. Here, $A_T c$ denotes
$\sum_{k \in T} c_k P^{\sigma_k}$. From this definition and (10), it follows that $\delta_S \ge 1$ $\forall S \ge 4$. Theorem 1.4 requires $\delta_S < 1$ for perfect reconstruction of $f$ when $\|f\|_{\ell_0} \le S$. Therefore, the compressed sensing
approach of Candes and Tao does not guarantee the unique reconstruction of $f$ if $\|f\|_{\ell_0} \ge 4$.
3 Main results
Exact recovery. The main result of this paper is about the exact recovery of $f$ from the given
constrained information matrix $Q = (q_{ij})$ under the hypothesis that $f$ is sparse or has small $\|f\|_{\ell_0}$.
We provide an algorithm that recovers $f$ exactly if the underlying support and probabilities have the
following two properties:
Property 1 (P1). Suppose the function $f(\cdot)$ is $K$ sparse. Let $p_1, p_2, \ldots, p_K$ be the function values.
The following is true:
$$\sum_{j \in J} p_j \ne \sum_{j \in J'} p_j \quad \forall\, J, J' \subset \{1, 2, \ldots, K\} \text{ s.t. } J \cap J' = \emptyset$$
Property 2 (P2). Let $\{\sigma_1, \sigma_2, \ldots, \sigma_K\}$ be the support of $f(\cdot)$. For each $1 \le i \le K$, there exists an
index $1 \le \ell_i \le n$ such that $\sigma_i(\ell_i) \ne \sigma_j(\ell_i)$ $\forall j \ne i$. In other words, each permutation has at least one edge that is
different from all the others.
When properties P1 and P2 are satisfied, the equation $Q = Af$ has a unique solution, which can indeed
be recovered; we will provide an algorithm for such recovery. The following is the formal statement
of this result and will be proved later.
Theorem 1. Consider a function $f : S_n \to [0, 1]$ such that $\|f\|_{\ell_0} = L$, $\|f\|_{\ell_1} = 1$, and the functional values and the support possess properties P1 and P2. Then, the matrix $Q$ is sufficient to reconstruct $f(\cdot)$ precisely.
Random model, sparsity and Theorem 1.
Theorem 1 asserts that when properties P1 and P2 are satisfied, exact recovery is possible. However,
it is not clear why they are reasonable. We will now provide some motivation and prove that the
algorithm is indeed optimal in terms of the maximum sparsity it can recover.
Let's go back to the counter-example we mentioned before: For any $n \ge 4$ consider the 4 permutations $\sigma_1 = \mathrm{id}$, $\sigma_2 = (12)$, $\sigma_3 = (34)$ and $\sigma_4 = (12)(34)$. We have $P^{\sigma_1} + P^{\sigma_4} = P^{\sigma_2} + P^{\sigma_3}$. Now,
consider 4 values $p_1, p_2, p_3$ and $p_4$. Without loss of generality suppose that $p_1 \le p_4$ and $p_2 \le p_3$.
Using the equation $P^{\sigma_1} + P^{\sigma_4} = P^{\sigma_2} + P^{\sigma_3}$, we can write the following:
$$Q = p_1 P^{\sigma_1} + p_2 P^{\sigma_2} + p_3 P^{\sigma_3} + p_4 P^{\sigma_4}$$
$$= (p_1 + p_2) P^{\sigma_1} + (p_4 + p_2) P^{\sigma_4} + (p_3 - p_2) P^{\sigma_3}$$
$$= (p_1 + p_2) P^{\sigma_2} + (p_1 + p_3) P^{\sigma_3} + (p_4 - p_1) P^{\sigma_4}.$$
Thus, under the above setup, there is no unique solution to $Q = Af$. In addition, from the last two
equalities, we can conclude that even the sparsest solution is not unique. Hence, there is no hope of
recovering $f$ given only $Q$ in this setup.
The question we now ask is whether the above counter-example is contrived and specially constructed, or is it more prevalent. For that, we consider a random model which puts a uniform measure on all the permutations. The hope is that under this model, situations like the counter-example
occur with a vanishing probability. We will now describe the random model and then state important
results on the sparsity of $f$ that can be recovered from $Q$.
Random Model. Under the random model, we assume that the function $f$ with sparsity $K$ is constructed as follows: Choose $K$ permutations uniformly at random and let them have any non-trivial
real functional values chosen uniformly at random from a bounded interval and then normalized.
We call an algorithm producing an estimate $\hat{f}$ of $f$ asymptotically reliable if $\Pr\left[ f \ne \hat{f} \right] = \varepsilon(n)$,
where $\varepsilon(n) \to 0$ as $n \to \infty$. We now have the following two important results:
Lemma 1. Consider a function $f : S_n \to \mathbb{R}$ with sparsity $K$. Given the matrix $Q = Af$, and no
additional information, the recovery will be asymptotically reliable only if $K \le 4n$.
First note that a trivial bound of $(n - 1)^2$ can be readily obtained as follows: Since $Q$ is doubly
stochastic, it can be written as a convex combination of permutation matrices [7], which form a
space of dimension $(n - 1)^2$. Lemma 1 says that this bound is loose. It can be proved using standard
arguments in Information Theory by considering $A$ as a channel with input $f$ and output $Q$.
Lemma 2. Consider a function $f : S_n \to \mathbb{R}$ with sparsity $K$ constructed according to the random
model described above. Then, the support and functional values of $f$ possess properties P1 and P2
with probability $1 - o(1)$ as long as $K \le 0.6n$.
It follows from Lemma 2 and Theorem 1 that $f$ can be recovered exactly from $Q$ if the sparsity
$K = O(n)$. Coupled with Lemma 1 we conclude that our algorithm is optimal in the sense that it
achieves the sparsity bound of $O(n)$.
Recovery of Mode. As mentioned before, often we are interested in obtaining only limited information about $f(\cdot)$. One such scenario is when we would like to find just the most likely permutation.
For this purpose, we use the Fourier approximation $\hat{f}$ (cf. Proposition 1) in place of $f$: that is, the
mode of $f$ is estimated as the mode of $\hat{f}$. The following result states the correctness of this approximation under the majority condition.
Theorem 2. Consider a function $f : S_n \to [0, 1]$ such that $\|f\|_{\ell_0} = L$ and $\|f\|_{\ell_1} = 1$. Suppose the
majority condition holds, that is $\max_{\sigma \in S_n} f(\sigma) > 1/2$. Then,
$$\arg\max_{\sigma \in S_n} f(\sigma) = \arg\max_{\sigma \in S_n} \hat{f}(\sigma) = \arg\max_{\sigma \in S_n} \langle P^\sigma, Q \rangle.$$
The mode of $\hat{f}$, or the maximizer of $\langle P^\sigma, Q \rangle$, is essentially the maximum weight matching in a weighted
bipartite graph: consider a complete bipartite graph $G = ((V_1, V_2), E)$ with $V_1 = V_2 = \{1, \ldots, n\}$
and $E = V_1 \times V_2$ with edge $(i, j) \in E$ having weight $q_{ij}$. Then, the weight of a matching (equivalently, a
permutation $\sigma$) is indeed $\langle P^\sigma, Q \rangle$. The problem of finding the maximum weight matching is classical.
It can be solved in $O(n^3)$ using the algorithm due to Edmonds and Karp [8] or max-product belief
propagation by Bayati, Shah and Sharma [9]. Thus, this is an approximation that can be evaluated.
4 Theorem 1: Proof and Algorithm
Here, we present a constructive proof of Theorem 1. Specifically, we will describe an algorithm to
determine the function values from $Q$, which will recover the original $f$ as long as properties P1 and P2
are satisfied.
Let $p_1, p_2, \ldots, p_L$ denote the non-zero functional values. Let $\sigma_1, \sigma_2, \ldots, \sigma_L$ denote the corresponding permutations, i.e., $f(\sigma_k) = p_k$. Without loss of generality assume that the permutations are
labeled such that $p_i \le p_j$ for $i < j$. Let $q_1, q_2, \ldots, q_M$, where $M = n^2$, denote the values of the matrix
$Q$ arranged in ascending order.
Given this sorted version, we have $q_i \le q_j$ for $i < j$. Let $e_i$ denote the edge $(u, v)$ such that
$q_i = q_{e_i} = q_{uv}$, where recall that
$$q_{uv} = \sum_{k : \sigma_k(u) = v} f(\sigma_k) = \sum_{k : \sigma_k(u) = v} p_k.$$
Let $A_k$ denote the set of edges corresponding to permutation $\sigma_k$, $1 \le k \le L$. That is, $A_k =
\{(u, \sigma_k(u)) : 1 \le u \le n\}$. The algorithm stated below will itself determine $L$, and $(A_k, p_k)$, $1 \le
k \le L$, using the information $Q$. The algorithm works when properties P1 and P2 are satisfied.
Algorithm:
initialization: $p_0 = 0$, $k(0) = 0$ and $A_k = \emptyset$, $1 \le k \le M$.
for $i = 1$ to $M$
    if $q_i = \sum_{j \in J} p_j$ for some $J \subseteq \{0, 1, \ldots, k(i-1)\}$
        $k(i) = k(i-1)$
        $A_j = A_j \cup \{e_i\}$ $\forall j \in J$
    else
        $k(i) = k(i-1) + 1$
        $p_{k(i)} = q_i$
        $A_{k(i)} = A_{k(i)} \cup \{e_i\}$
    end if
end for
Output $L = k(M)$ and $(p_k, A_k)$, $1 \le k \le L$.
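A direct Python transcription of this pseudocode is sketched below (ours). Exact subset-sum matches are replaced by a small numerical tolerance, the brute-force subset search is only meant for small supports, and zero entries of $Q$ (the $J = \{0\}$ case, since $p_0 = 0$) are skipped. The toy $Q$ encodes three permutations with weights 0.6, 0.3 and 0.1, which satisfy P1 and P2:

from itertools import combinations

def recover(Q):
    n = len(Q)
    entries = sorted((Q[u][v], (u, v)) for u in range(n) for v in range(n))
    p, A = [], []                          # recovered weights and edge sets
    for q, edge in entries:
        if q < 1e-9:                       # edge lies in no support permutation
            continue
        matched = None
        for size in range(1, len(p) + 1):  # brute-force subset-sum test
            for J in combinations(range(len(p)), size):
                if abs(sum(p[j] for j in J) - q) < 1e-9:
                    matched = J
                    break
            if matched:
                break
        if matched:                        # edge shared by every sigma_j, j in J
            for j in matched:
                A[j].add(edge)
        else:                              # a new permutation starts at this edge
            p.append(q)
            A.append({edge})
    return list(zip(p, A))

# Q[i][j]: fraction ranking candidate j at position i (see the Setup).
Q = [[0.6, 0.4, 0.0, 0.0],
     [0.3, 0.6, 0.1, 0.0],
     [0.1, 0.0, 0.9, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
for weight, edges in recover(Q):
    print(weight, sorted(edges))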
By property P2, there exists at least one $q_i$ such that it is equal to $p_k$, for each $1 \le k \le L$. The
property P1 ensures that whenever $q_i = p_{k(i)}$, the condition in the "if" statement of the pseudocode
is not satisfied. Therefore, the algorithm correctly assigns values to each of the $p_k$'s.
Note that the condition in the "if" statement being true implies that edge $e_i$ is present in all the
permutations $\sigma_j$ such that $j \in J$. Property P1 ensures that such a $J$, if it exists, is unique. Therefore,
when the condition is satisfied, the only permutations that contain edge $e_i$ are $\sigma_j$, $j \in J$.
When the condition in the "if" statement fails, again from properties P1 and P2 it follows that edge
$e_i$ is contained only in permutation $\sigma_{k(i)}$. From this discussion we can conclude that at the end of the
iterations, each of the $A_i$'s contains complete information about its corresponding permutation.
The algorithm thus completely determines the function $f(\cdot)$. Finally, note that the algorithm does
not require the knowledge of $\|f\|_{\ell_0}$.
5 Theorem 2: Proof and Algorithm
Here, our interest is in finding the mode of $f$. The algorithm we have proposed is to use the mode of
$\hat{f}$ as an estimate of the mode of $f$. We wish to establish that when $\max_{\sigma \in S_n} f(\sigma) > 1/2$ then
$\hat{\sigma}^* = \sigma^*$, where
$$\hat{\sigma}^* = \arg\max_{\sigma \in S_n} \hat{f}(\sigma); \qquad \sigma^* = \arg\max_{\sigma \in S_n} f(\sigma).$$
Since we have assumed that $f(\sigma^*) > 1/2$ and $\|f\|_{\ell_1} = 1$, we should have $\sum_{\sigma \in S} f(\sigma) < 1/2$,
where $S \subset S_n$ is such that $\sigma^* \notin S$. Therefore, there is exactly one entry in each column of the matrix $Q$
that is $> 1/2$, and the corresponding edge should be a part of $\sigma^*$. Thus, keeping only those edges
$(i, j)$ such that $Q_{ij} > 1/2$, we should obtain the matching $\sigma^*$. It is clear from the construction that $\sigma^*$
indeed has the maximum weight of all the other matchings. The result now follows.
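Concretely, the mode estimate is one call to an off-the-shelf assignment solver. The sketch below (ours) applies SciPy's Hungarian-method routine to the toy $Q$ used earlier, whose mode has weight $0.6 > 1/2$, so the majority condition holds:

import numpy as np
from scipy.optimize import linear_sum_assignment

Q = np.array([[0.6, 0.4, 0.0, 0.0],
              [0.3, 0.6, 0.1, 0.0],
              [0.1, 0.0, 0.9, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

rows, cols = linear_sum_assignment(Q, maximize=True)   # max-weight matching
sigma_star = {j: i for i, j in zip(rows, cols)}        # sigma*(j) = i
print([sigma_star[j] for j in range(4)])               # [0, 1, 2, 3]: the identity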
6 Conclusion
In summary, we considered the problem of inferring popular rankings from highly constrained information. Since ranking data naturally arises in several diverse practical situations, an answer to this
question has wide ranging implications.
7
Specifically, we considered the problem of inferring a sparse normalized function on the symmetric
group using only the first order information about the function. In the election example this first
order information corresponds to the fraction of people who have ranked candidate i in the j th
position. We provide a novel algorithm to precisely recover the permutations and the associated
popularity under minimal, and essentially necessary, conditions. We provide justification to the
necessity of our assumptions and consider a natural random model to quantify the sparsity that can
be supported.
We also provide an algorithm, based on Fourier transform approximation, to determine the most
popular ranking (mode of the function). The algorithm is essentially a max-weight matching with
weights given by the entries $q_{ij}$ of $Q$. Under a natural majority assumption, the algorithm finds the correct mode.
The question considered is thematically related to harmonic analysis of functions over the symmetric group and also the currently popular topic of compressed sensing. The problem we consider can
be restated as the reconstruction of a function using its first order Fourier representation, which has
several applications particularly in the multi-object tracking problem. On the other hand, the parallels to the to the standard compressed sensing setup are limited because the available information is
highly constrained. Thus, the existing approaches of compressed sensing cannot be applied to the
problem.
Next Steps. We concentrated on the recovery of the distribution from its first order marginals. A
possible next step would be to consider recovery under different forms of partial information. More
specifically, practical applications motivate considering the recovery of distribution from pair-wise
information: probability of candidate i being ranked above candidate j. Another natural practical
consideration would be to address the presence of noise in the available information. Understanding
recovery of distributions with the above considerations are natural next steps.
References
[1] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation revisited. In Proceedings
of WWW10, 2001.
[2] Yiling Chen, Lance Fortnow, Evdokia Nikolova, and David M. Pennock. Betting on permutations. In EC '07: Proceedings of the 8th ACM conference on Electronic commerce, pages
326-335, New York, NY, USA, 2007. ACM.
[3] J. Huang, C. Guestrin, and L. Guibas. Efficient Inference for Distributions on Permutations. In
Advances in Neural Information Processing Systems (NIPS), 2007.
[4] R. Kondor, A. Howard, and T. Jebara. Multi-object tracking with representations of the symmetric group. In Proceedings of the Eleventh International Conference on Artificial Intelligence
and Statistics, 2007.
[5] P. Diaconis. Group Representations in Probability and Statistics. IMS Lecture Notes-Monograph
Series, 11, 1988.
[6] E.J. Candes and T. Tao. Decoding by linear programming. Information Theory, IEEE Transactions on, 51(12):4203-4215, Dec. 2005.
[7] G. Birkhoff. Tres observaciones sobre el algebra lineal. Univ. Nac. Tucuman Rev. Ser. A, 5:147-151, 1946.
[8] J. Edmonds and R. Karp. Theoretical improvements in algorithmic efficiency for network flow
problems. Jour. of the ACM, 19:248?264, 1972.
[9] M. Bayati, D. Shah, and M. Sharma. Max-product for maximum weight matching: convergence,
correctness and LP duality. IEEE Transactions on Information Theory, March 2008.
2,807 | 3,545 | Policy Search for Motor Primitives in Robotics
Jens Kober, Jan Peters
Max Planck Institute for Biological Cybernetics
Spemannstr. 38
72076 T?bingen, Germany
{jens.kober,jan.peters}@tuebingen.mpg.de
Abstract
Many motor skills in humanoid robotics can be learned using parametrized motor
primitives as done in imitation learning. However, most interesting motor learning problems are high-dimensional reinforcement learning problems often beyond
the reach of current methods. In this paper, we extend previous work on policy
learning from the immediate reward case to episodic reinforcement learning. We
show that this results in a general, common framework also connected to policy gradient methods and yielding a novel algorithm for policy learning that is
particularly well-suited for dynamic motor primitives. The resulting algorithm is
an EM-inspired algorithm applicable to complex motor learning tasks. We compare this algorithm to several well-known parametrized policy search methods and
show that it outperforms them. We apply it in the context of motor learning and
show that it can learn a complex Ball-in-a-Cup task using a real Barrett WAM™
robot arm.
1 Introduction
Policy search, also known as policy learning, has become an accepted alternative to value function-based reinforcement learning [2]. In high-dimensional domains with continuous states and actions,
such as robotics, this approach has previously proven successful as it allows the usage of domainappropriate pre-structured policies, the straightforward integration of a teacher?s presentation as
well as fast online learning [2, 3, 10, 18, 5, 6, 4]. In this paper, we will extend the previous work
in [17, 18] from the immediate reward case to episodic reinforcement learning and show how it
relates to policy gradient methods [7, 8, 11, 10]. Although many real-world motor learning tasks are essentially episodic [14], episodic reinforcement learning [1] remains a largely understudied topic.
The resulting framework allows us to derive a new algorithm called Policy Learning by Weighting
Exploration with the Returns (PoWER) which is particularly well-suited for learning of trial-based
tasks in motor control. We are especially interested in a particular kind of motor control policies also
known as dynamic motor primitives [22, 23]. In this approach, dynamical systems are being used in
order to encode a policy, i.e., we have a special kind of parametrized policy which is well-suited for
robotics problems.
We show that the presented algorithm works well when employed in the context of learning dynamic
motor primitives in four different settings, i.e., the two benchmark problems from [10], the Underactuated Swing-Up [21] and the complex task of Ball-in-a-Cup [24, 20]. Both the Underactuated
Swing-Up as well as the Ball-in-a-Cup are achieved on a real Barrett WAM™ robot arm. Please also
refer to the video on the first author?s website. Looking at these tasks from a human motor learning
perspective, we have a human acting as teacher presenting an example for imitation learning and,
subsequently, the policy will be improved by reinforcement learning. Since such tasks are inherently
single-stroke movements, we focus on the special class of episodic reinforcement learning. In our
experiments, we show how a presented movement is recorded using kinesthetic teach-in and, subsequently, how a Barrett WAM™ robot arm learns the behavior by a combination of imitation and
reinforcement learning.
2 Policy Search for Parameterized Motor Primitives
Our goal is to find reinforcement learning techniques that can be applied to a special kind of prestructured parametrized policies called motor primitives [22, 23], in the context of learning highdimensional motor control tasks. In order to do so, we first discuss our problem in the general
context of reinforcement learning and introduce the required notation in Section 2.1. Using a generalization of the approach in [17, 18], we derive a new EM-inspired algorithm called Policy Learning
by Weighting Exploration with the Returns (PoWER) in Section 2.3 and show how the general
framework is related to policy gradients methods in 2.2. [12] extends the [17] algorithm to episodic
reinforcement learning for discrete states; we use continuous states. Subsequently, we discuss how
we can turn the parametrized motor primitives [22, 23] into explorative [19], stochastic policies.
2.1 Problem Statement & Notation
In this paper, we treat motor primitive learning problems in the framework of reinforcement learning
with a strong focus on the episodic case [1]. We assume that at time t there is an actor in a state s_t who chooses an appropriate action a_t according to a stochastic policy \pi(a_t | s_t, t). Such a policy is a probability distribution over actions given the current state. The stochastic formulation allows a natural incorporation of exploration and, in the case of hidden state variables, the optimal time-invariant policy has been shown to be stochastic [8]. Upon the completion of the action, the actor transfers to a state s_{t+1} and receives a reward r_t. As we are interested in learning complex motor tasks consisting of a single stroke [23], we focus on finite horizons of length T with episodic restarts
[1] and learn the optimal parametrized, stochastic policy for such reinforcement learning problems.
We assume an explorative version of the dynamic motor primitives [22, 23] as parametrized policy \pi with parameters \theta \in \mathbb{R}^n. However, in this section, we will keep most derivations sufficiently general that they would transfer to various other parametrized policies. The general goal in reinforcement learning is to optimize the expected return of the policy \pi with parameters \theta defined by

J(\theta) = \int_{\mathbb{T}} p(\tau) R(\tau) \, d\tau,   (1)
where \mathbb{T} is the set of all possible paths; a rollout \tau = [s_{1:T+1}, a_{1:T}] (also called an episode or trial) denotes a path of states s_{1:T+1} = [s_1, s_2, \ldots, s_{T+1}] and actions a_{1:T} = [a_1, a_2, \ldots, a_T]. The probability of rollout \tau is denoted by p(\tau) while R(\tau) refers to its return. Using the standard assumptions of Markovness and additive accumulated rewards, we can write

p(\tau) = p(s_1) \prod_{t=1}^{T} p(s_{t+1} | s_t, a_t) \, \pi(a_t | s_t, t), \qquad R(\tau) = T^{-1} \sum_{t=1}^{T} r(s_t, a_t, s_{t+1}, t),   (2)
where p(s_1) denotes the initial state distribution, p(s_{t+1} | s_t, a_t) the next state distribution conditioned on the last state and action, and r(s_t, a_t, s_{t+1}, t) denotes the immediate reward.
While episodic Reinforcement Learning (RL) problems with finite horizons are common in motor control, few methods exist in the RL literature, e.g., Episodic REINFORCE [7], the Episodic
Natural Actor Critic eNAC [10] and model-based methods using differential-dynamic programming
[21]. Nevertheless, in the analytically tractable cases, it has been studied deeply in the optimal
control community where it is well-known that for a finite horizon problem, the optimal solution
is non-stationary [15] and, in general, cannot be represented by a time-independent policy. The
motor primitives based on dynamical systems [22, 23] are a particular type of time-variant policy
representation as they have an internal phase which corresponds to a clock with additional flexibility
(e.g., for incorporating coupling effects, perceptual influences, etc.), thus, they can represent optimal
solutions for finite horizons. We embed this internal clock or movement phase into our state and,
thus, from optimal control perspective have ensured that the optimal solution can be represented.
2.2 Episodic Policy Learning
In this section, we discuss episodic reinforcement learning in policy space which we will refer to
as Episodic Policy Learning. For doing so, we first discuss the lower bound on the expected return
suggested in [17] for guaranteeing that policy update steps are improvements. In [17, 18] only the
immediate reward case is being discussed, we extend their framework to episodic reinforcement
learning and, subsequently, derive a general update rule which yields the policy gradient theorem
[8], a generalization of the reward-weighted regression [18] as well as the novel Policy learning by
Weighting Exploration with the Returns (PoWER) algorithm.
2.2.1 Bounds on Policy Improvements
Unlike in reinforcement learning, other machine learning branches have focused on optimizing lower
bounds, e.g., resulting in expectation-maximization (EM) algorithms [16]. The reasons for this preference apply in policy learning: if the lower bound also becomes an equality for the sampling policy,
we can guarantee that the policy will be improved by optimizing the lower bound. Surprisingly, results from supervised learning can be transferred with ease. For doing so, we follow the scenario
suggested in [17], i.e., generate rollouts \tau using the current policy with parameters \theta, which we weight with the returns R(\tau) and subsequently match with a new policy parametrized by \theta'. This matching of the success-weighted path distribution is equivalent to minimizing the Kullback-Leibler divergence D(p_{\theta'}(\tau) \| p_{\theta}(\tau) R(\tau)) between the new path distribution p_{\theta'}(\tau) and the reward-weighted previous one p_{\theta}(\tau) R(\tau). As shown in [17, 18], this results in a lower bound on the expected return using Jensen's inequality and the concavity of the logarithm, i.e.,

\log J(\theta') = \log \int_{\mathbb{T}} p_{\theta'}(\tau) R(\tau) \, d\tau \geq \int_{\mathbb{T}} p_{\theta}(\tau) R(\tau) \log \frac{p_{\theta'}(\tau)}{p_{\theta}(\tau)} \, d\tau + \text{const}   (3)

\propto -D\left(p_{\theta}(\tau) R(\tau) \,\|\, p_{\theta'}(\tau)\right) = L_{\theta}(\theta'),   (4)

where D(p(\tau) \| q(\tau)) = \int p(\tau) \log(p(\tau)/q(\tau)) \, d\tau is the Kullback-Leibler divergence, which is considered a natural distance measure between probability distributions, and the constant is needed for tightness of the bound. Note that p_{\theta}(\tau) R(\tau) is an improper probability distribution as pointed out in [17]. The policy improvement step is equivalent to maximizing the lower bound on the expected return L_{\theta}(\theta'), and we show how it relates to previous policy learning methods.
2.2.2 Resulting Policy Updates
In the following part, we will discuss three different policy updates which directly result from Section 2.2.1. First, we show that policy gradients [7, 8, 11, 10] can be derived from the lower bound
L_{\theta}(\theta') (as was to be expected from supervised learning, see [13]). Subsequently, we show that
natural policy gradients can be seen as an additional constraint regularizing the change in the path
distribution resulting from a policy update when improving the policy incrementally. Finally, we
will show how expectation-maximization (EM) algorithms for policy learning can be generated.
Policy Gradients. When differentiating the function L_{\theta}(\theta') that defines the lower bound on the expected return, we directly obtain

\nabla_{\theta'} L_{\theta}(\theta') = \int_{\mathbb{T}} p_{\theta}(\tau) R(\tau) \nabla_{\theta'} \log p_{\theta'}(\tau) \, d\tau,   (5)

where \mathbb{T} is the set of all possible paths and \nabla_{\theta'} \log p_{\theta'}(\tau) = \sum_{t=1}^{T} \nabla_{\theta'} \log \pi(a_t | s_t, t) denotes the log-derivative of the path distribution. As this log-derivative only depends on the policy, we can estimate a gradient from rollouts without having a model by simply replacing the expectation by a sum; when \theta' is close to \theta, we have the policy gradient estimator which is widely known as Episodic REINFORCE [7], i.e., we have \lim_{\theta' \to \theta} \nabla_{\theta'} L_{\theta}(\theta') = \nabla_{\theta} J(\theta). Obviously, a reward which precedes an action in a rollout can neither be caused by the action nor cause an action in the same rollout. Thus, when inserting Equations (2) into Equation (5), all cross-products between r_t and \nabla_{\theta} \log \pi(a_{t+\delta t} | s_{t+\delta t}, t+\delta t) for \delta t > 0 become zero in expectation [10]. Therefore, we can omit these terms and rewrite the estimator as
\nabla_{\theta'} L_{\theta}(\theta') = E\left\{ \sum_{t=1}^{T} \nabla_{\theta'} \log \pi(a_t | s_t, t) \, Q^{\pi}(s, a, t) \right\},   (6)

where Q^{\pi}(s, a, t) = E\{ \sum_{\tilde{t}=t}^{T} r(s_{\tilde{t}}, a_{\tilde{t}}, s_{\tilde{t}+1}, \tilde{t}) \mid s_t = s, a_t = a \} is called the state-action value function [1]. Equation (6) is equivalent to the policy gradient theorem [8] for \theta' \to \theta in the infinite horizon case where the dependence on time t can be dropped.
The derivation results in the Natural Actor Critic as discussed in [9, 10] when adding an additional
punishment to prevent large steps away from the observed path distribution. This can be achieved by
restricting the amount of change in the path distribution and, subsequently, determining the steepest
descent for a fixed step away from the observed trajectories. Change in probability distributions
is naturally measured using the Kullback-Leibler divergence; thus, the natural gradient follows after adding the additional constraint D(p_{\theta'}(\tau) \| p_{\theta}(\tau)) \approx 0.5 (\theta' - \theta)^T F(\theta) (\theta' - \theta) = \epsilon, using a second-order expansion as approximation, where F(\theta) denotes the Fisher information matrix [9, 10].
Policy Search via Expectation Maximization. One major drawback of gradient-based approaches is the learning rate, an open parameter which can be hard to tune in control problems
but is essential for good performance. Expectation-Maximization algorithms are well-known to
avoid this problem in supervised learning while even yielding faster convergence [16]. Previously,
similar ideas have been explored in immediate reinforcement learning [17, 18]. In general, an EM algorithm would choose the next policy parameters \theta_{n+1} such that \theta_{n+1} = \arg\max_{\theta'} L_{\theta}(\theta'). In the case where \pi(a_t | s_t, t) belongs to the exponential family, the next policy can be determined analytically by setting Equation (6) to zero, i.e.,

E\left\{ \sum_{t=1}^{T} \nabla_{\theta'} \log \pi(a_t | s_t, t) \, Q^{\pi}(s, a, t) \right\} = 0,   (7)
Algorithm 1 Policy learning by Weighting Exploration with the Returns (PoWER) for Motor Primitives
Input: initial policy parameters \theta_0
repeat
  Sample: Perform rollout(s) using a = (\theta + \epsilon_t)^T \phi(s, t) with [\epsilon_t]_{ij} \sim N(0, \sigma_{ij}^2) as stochastic policy and collect all (t, s_t, a_t, s_{t+1}, \epsilon_t, r_{t+1}) for t = \{1, 2, \ldots, T+1\}.
  Estimate: Use the unbiased estimate \hat{Q}^{\pi}(s, a, t) = \sum_{\tilde{t}=t}^{T} r(s_{\tilde{t}}, a_{\tilde{t}}, s_{\tilde{t}+1}, \tilde{t}).
  Reweight: Compute importance weights and reweight rollouts; discard low-importance rollouts.
  Update the policy using \theta_{k+1} = \theta_k + \left\langle \sum_{t=1}^{T} \epsilon_t \hat{Q}^{\pi}(s, a, t) \right\rangle_{w(\tau)} \Big/ \left\langle \sum_{t=1}^{T} \hat{Q}^{\pi}(s, a, t) \right\rangle_{w(\tau)}.
until convergence \theta_{k+1} \approx \theta_k
and solving for \theta'. Depending on the choice of stochastic policy, we will obtain different solutions
and different learning algorithms. It allows the extension of the reward-weighted regression to larger
horizons as well as the introduction of the Policy learning by Weighting Exploration with the Returns
(PoWER) algorithm.
2.3 Policy learning by Weighting Exploration with the Returns (PoWER)
In most learning control problems, we attempt to have a deterministic mean policy \bar{a} = \theta^T \phi(s, t) with parameters \theta and basis functions \phi. In Section 3, we will introduce the basis functions of the motor primitives. When learning motor primitives, we turn this deterministic mean policy \bar{a} = \theta^T \phi(s, t) into a stochastic policy using additive exploration \epsilon(s, t) in order to make model-free reinforcement learning possible, i.e., we always intend to have a policy \pi(a_t | s_t, t) which can be brought into the form a = \theta^T \phi(s, t) + \epsilon(\phi(s, t)). Previous work in this context [7, 4, 10, 18], with the notable exception of [19], has focused on state-independent, white Gaussian exploration, i.e., \epsilon(\phi(s, t)) \sim N(0, \Sigma). It is straightforward to obtain the Reward-Weighted Regression for episodic RL by solving Equation (7) for \theta', which naturally yields a weighted regression method with the state-action values Q^{\pi}(s, a, t) as weights. This form of exploration has resulted in various applications in robotics such as T-Ball batting, Peg-In-Hole, humanoid robot locomotion, constrained reaching movements, and operational space control; see [4, 10, 18] for both reviews and their own applications.
However, such unstructured exploration at every step has a multitude of disadvantages: it causes a large variance which grows with the number of time-steps [19, 10], it perturbs actions too frequently, "washing out" their effects, and it can damage the system executing the trajectory. As a result, all methods relying on this state-independent exploration have proven too fragile for learning the Ball-in-a-Cup task on a real robot system. Alternatively, as introduced by [19], one could generate a form of structured, state-dependent exploration \epsilon(\phi(s, t)) = \epsilon_t^T \phi(s, t) with [\epsilon_t]_{ij} \sim N(0, \sigma_{ij}^2), where the \sigma_{ij}^2 are meta-parameters of the exploration that can also be optimized. This argument results in the policy a \sim \pi(a_t | s_t, t) = N(a \mid \theta^T \phi(s, t), \phi(s, t)^T \hat{\Sigma} \phi(s, t)). Inserting the resulting policy into Equation (7), we obtain the optimality condition and can derive the update rule

\theta' = \theta + E\left\{ \sum_{t=1}^{T} Q^{\pi}(s, a, t) W(s, t) \right\}^{-1} E\left\{ \sum_{t=1}^{T} Q^{\pi}(s, a, t) W(s, t) \epsilon_t \right\},   (8)

with W(s, t) = \phi(s, t) \phi(s, t)^T / (\phi(s, t)^T \phi(s, t)). Note that for our motor primitives W reduces to a diagonal, constant matrix and cancels out; hence the simplified form in Algorithm 1. In order to reduce the number of rollouts in this on-policy scenario, we reuse the rollouts through importance sampling as described in the context of reinforcement learning in [1]. To avoid the fragility sometimes resulting from importance sampling in reinforcement learning, samples with very small importance weights are discarded. The expectations E\{\cdot\} are replaced by the importance sampler denoted by \langle \cdot \rangle_{w(\tau)}. The resulting algorithm is shown in Algorithm 1. As we will see in Section 3, this PoWER method outperforms all other described methods significantly.
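To make the simplified update of Algorithm 1 concrete, the following minimal sketch (Python with NumPy; not the authors' original code) implements the PoWER parameter update for a policy linear in its parameters, assuming a constant diagonal exploration covariance so that W cancels as noted above. The rollout data structure and the reduction to return-to-go sums are our own illustrative assumptions.

```python
import numpy as np

def power_update(theta, rollouts):
    """Simplified PoWER update: theta' = theta + <sum_t eps_t Qhat_t> / <sum_t Qhat_t>.

    Each rollout is a dict with 'eps' (T x n perturbation matrix) and 'r'
    (length-T reward array). Assumes a constant, diagonal exploration
    covariance, so the weighting matrix W cancels (cf. Algorithm 1).
    """
    num = np.zeros_like(theta)
    den = 0.0
    for ro in rollouts:
        eps, r = ro["eps"], ro["r"]
        # Unbiased return-to-go estimate: Qhat_t = sum_{t' >= t} r_{t'}
        q = np.cumsum(r[::-1])[::-1]
        num += (eps * q[:, None]).sum(axis=0)
        den += q.sum()
    return theta + num / max(den, 1e-10)
```

In practice, the importance weighting of Algorithm 1 amounts to keeping only the best few rollouts observed so far when forming these sums; the sketch above treats all supplied rollouts with equal weight.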
3 Application to Motor Primitive Learning for Robotics
In this section, we demonstrate the effectiveness of the algorithm presented in Section 2.3 in the
context of motor primitive learning for robotics. For doing so, we will first give a quick overview
how the motor primitives work and how the algorithm can be used to adapt them. As first evaluation,
we will show that the novel presented PoWER algorithm outperforms many previous well-known
methods, i.e., "Vanilla" Policy Gradients, Finite Difference Gradients, the Episodic Natural Actor
Critic and the generalized Reward-Weighted Regression on the two simulated benchmark problems
suggested in [10] and a simulated Underactuated Swing-Up [21]. Real robot applications are done
with our best benchmarked method, the PoWER method. Here, we first show PoWER can learn
the Underactuated Swing-Up [21] even on a real robot. As a significantly more complex motor
learning task, we show how the robot can learn a high-speed Ball-in-a-Cup [24] movement with
motor primitives for all seven degrees of freedom of our Barrett WAM™ robot arm.
3.1 Using the Motor Primitives in Policy Search
The motor primitive framework [22, 23] can be described as two coupled differential equations, i.e., we have a canonical system \dot{y} = f(y, z) with movement phase y and possible external coupling to z, as well as a nonlinear system \ddot{x} = g(x, \dot{x}, y, \theta) which yields the current action for the system. Both dynamical systems are chosen to be stable and to have the right properties so that they are useful for the desired class of motor control problems. In this paper, we focus on single-stroke movements as they frequently appear in human motor control [14, 23] and, thus, we will always choose the point attractor version of the motor primitives exactly as presented in [23] and not the older one in [22]. The biggest advantage of the motor primitive framework of [22, 23] is that the function g is linear in the policy parameters \theta and, thus, well-suited for imitation learning as well as for our presented reinforcement learning algorithm. For example, if we had to learn only a motor primitive for a single degree of freedom q_i, then we could use a motor primitive of the form \ddot{q}_i = g(q_i, \dot{q}_i, y, \theta) = \phi(s)^T \theta, where s = [q_i, \dot{q}_i, y] is the state and where time is implicitly embedded in y. We use the output \ddot{q}_i = \phi(s)^T \theta = \bar{a} as the policy mean. The perturbed accelerations \ddot{q}_i = a = \bar{a} + \epsilon are given to the system. The details of \phi are given in [23].
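As an illustration only, the following sketch shows a point-attractor primitive of this general flavor with a forcing term linear in \theta; the specific gains, basis widths, and Euler integration are our own assumptions and not the exact formulation of [23].

```python
import numpy as np

def dmp_step(q, qd, y, theta, g, dt=0.002,
             alpha_z=25.0, beta_z=6.25, alpha_y=8.0):
    """One Euler step of a schematic point-attractor motor primitive.

    The forcing term is linear in the parameters theta, f(y) = phi(y)^T theta,
    which is what makes both imitation learning and the PoWER update tractable.
    Gains, basis widths, and step size are illustrative assumptions.
    """
    centers = np.linspace(0.0, 1.0, theta.size)
    phi = np.exp(-50.0 * (y - centers) ** 2)
    phi = y * phi / (phi.sum() + 1e-10)   # phase-gated, normalized basis
    qdd = alpha_z * (beta_z * (g - q) - qd) + phi @ theta
    # Canonical system: the phase y decays from 1 toward 0
    return q + dt * qd, qd + dt * qdd, y - dt * alpha_y * y
```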
In Sections 3.3 and 3.4, we use imitation learning for the initialization. For imitations, we follow [22]: first, extract the duration of the movement from the initial and final zero velocities and use it to adjust the time constants. Second, use locally-weighted regression to solve for an imitation from a single example.
3.2 Benchmark Comparison
Figure 1: The mean performance of all compared methods on the two benchmark tasks, (a) minimum motor command and (b) passing through a point, plotting average return against the number of rollouts, averaged over twenty learning runs with the error bars indicating the standard deviation. Policy learning by Weighting Exploration with the Returns (PoWER) clearly outperforms Finite Difference Gradients (FDG), "Vanilla" Policy Gradients (VPG), the Episodic Natural Actor Critic (eNAC) and the adapted Reward-Weighted Regression (RWR) for both tasks.

As a benchmark comparison, we intend to follow a previously studied scenario in order to evaluate which method is best-suited for our problem class. For doing so, we perform our evaluations on the exact same benchmark problems as [10] and use
two tasks commonly studied in motor control literature for which the analytic solutions are known, i.e., a reaching task where a goal
has to be reached at a certain time while the used motor commands have to be minimized and a
reaching task of the same style with an additional via-point. In this comparison, we mainly want to
show the suitability of our algorithm and show that it outperforms previous methods such as Finite
Difference Gradient (FDG) methods [10], "Vanilla" Policy Gradients (VPG) with optimal baselines
[7, 8, 11, 10], the Episodic Natural Actor Critic (eNAC) [9, 10], and the episodic version of the
Reward-Weighted Regression (RWR) algorithm [18]. For both tasks, we use the same rewards as in
[10] but we use the newer form of the motor primitives from [23]. All open parameters were manually optimized for each algorithm in order to maximize the performance while not destabilizing the
convergence of the learning process.
When applied in the episodic scenario, Policy learning by Weighting Exploration with the Returns
(PoWER) clearly outperformed the Episodic Natural Actor Critic (eNAC), ?Vanilla? Policy Gradient
(VPG), Finite Difference Gradient (FDG) and the adapted Reward-Weighted Regression (RWR)
for both tasks. The episodic Reward-Weighted Regression (RWR) is outperformed by all other
algorithms suggesting that this algorithm does not generalize well from the immediate reward case.
Figure 2: This figure shows the time series of the Underactuated Swing-Up where only a single joint
of the robot is moved with a torque limit ensured by limiting the maximal motor current of that joint.
The resulting motion requires the robot to (i) first move away from the target to limit the maximal
required torque during the swing-up in (ii-iv) and subsequent stabilization (v). The performance of
the PoWER method on the real robot is shown in (vi).
While FDG gets stuck on a plateau, both eNAC and VPG converge to the same, good final solution. PoWER finds the same (or even slightly better) solution while achieving it noticeably faster. The results are presented in Figure 1. Note that this plot has logarithmic scales on both axes; thus a unit difference corresponds to an order of magnitude. The omission of the first twenty rollouts was necessary to cope with the log-log presentation.
0.9
As additional simulated benchmark and for the realrobot evaluations, we employed the Underactuated
0.8
FDG
Swing-Up [21]. Here, only a single degree of freeRWR
VPG
0.7
PoWER
dom is represented by the motor primitive as described
eNAC
in Section 3.1. The goal is to move a hanging heavy
0.6
pendulum to an upright position and stabilize it there
50
100
150
200
number of rollouts
in minimum time and with minimal motor torques.
By limiting the motor current for that degree of free- Figure 3: This figure shows the perfordom, we can ensure that the torque limits described in mance of all compared methods for the
[21] are maintained and directly moving the joint to swing-up in simulation and show the mean
the right position is not possible. Under these torque performance averaged over 20 learning
limits, the robot needs to (i) first move away from the runs with the error bars indicating the stantarget to limit the maximal required torque during the dard deviation. PoWER outperforms the
swing-up in (ii-iv) and subsequent stabilization (v) as other algorithms from 50 rollouts on and
illustrated in Figure 2 (i-v). This problem is similar to finds a significantly better policy.
a mountain-car problem where the car would have to stop on top or experience a failure.
The applied torque limits were the same as in [21] and so was the reward function was the except
that the complete return of the trajectory was transformed by an exp(?) to ensure positivity. Again all
open parameters were manually optimized. The motor primitive with nine shape parameters and one
goal parameter was initialized by imitation learning from a kinesthetic teach-in. Subsequently, we
compared the other algorithms as previously considered in Section 3.2 and could show that PoWER
would again outperform them. The results are given in Figure 3. As it turned out to be the best
performing method, we then used it successfully for learning optimal swing-ups on a real robot. See
Figure 2 (vi) for the resulting real-robot performance.
3.4 Ball-in-a-Cup on a Barrett WAM™
The most challenging application in this paper is the children's game Ball-in-a-Cup [24] where a
small cup is attached at the robot's end-effector and this cup has a small wooden ball hanging down
from the cup on a 40cm string. Initially, the ball is hanging down vertically. The robot needs
to move fast in order to induce a motion at the ball through the string, swing it up and catch it
with the cup, a possible movement is illustrated in Figure 4 (top row). The state of the system is
described in joint angles and velocities of the robot and the Cartesian coordinates of the ball. The
actions are the joint space accelerations where each of the seven joints is represented by a motor
primitive. All motor primitives are perturbed separately but employ the same joint final reward
given by r(t_c) = \exp(-\alpha (x_c - x_b)^2 - \alpha (y_c - y_b)^2) while r(t) = 0 for all other t \neq t_c, where t_c is the moment when the ball passes the rim of the cup with a downward direction, the cup position is denoted by [x_c, y_c, z_c] \in \mathbb{R}^3, the ball position by [x_b, y_b, z_b] \in \mathbb{R}^3, and the scaling parameter is \alpha = 100. The task is quite complex as the reward is determined not solely by the movements of the cup but foremost by the movements of the ball, and the movements of the ball are very sensitive to changes in the movement. A small perturbation of the initial condition or during the trajectory will drastically
change the movement of the ball and hence the outcome of the rollout.
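A direct transcription of this final reward into code could look as follows (Python; the detection of the crossing moment t_c is assumed to be handled by the caller):

```python
import numpy as np

def ball_in_cup_reward(t, t_c, cup_xy, ball_xy, alpha=100.0):
    """Final reward of Section 3.4: r(t_c) = exp(-alpha*dx^2 - alpha*dy^2),
    nonzero only at the moment t_c when the ball crosses the rim of the cup
    moving downward; zero at all other time steps."""
    if t != t_c:
        return 0.0
    dx = cup_xy[0] - ball_xy[0]
    dy = cup_xy[1] - ball_xy[1]
    return float(np.exp(-alpha * dx**2 - alpha * dy**2))
```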
Figure 4: This figure shows schematic drawings of the Ball-in-a-Cup motion, the final learned robot
motion as well as a kinesthetic teach-in. The green arrows show the directions of the current movements in that frame. The human cup motion was taught to the robot by imitation learning with
31 parameters per joint for an approximately 3 seconds long trajectory. The robot manages to reproduce the imitated motion quite accurately, but the ball misses the cup by several centimeters.
After ca. 75 iterations of our Policy learning by Weighting Exploration with the Returns (PoWER)
algorithm the robot has improved its motion so that the ball goes in the cup. Also see Figure 5.
Due to the complexity of the task, Ball-in-a-Cup is even a hard motor learning task for children, who usually only succeed at it by observing another person playing combined with a lot of improvement by trial-and-error. Mimicking how children learn to play Ball-in-a-Cup, we first initialize the motor primitives by imitation and, subsequently, improve them by reinforcement learning. We recorded the motions of a human player by kinesthetic teach-in in order to obtain an example for imitation as shown in Figure 4 (middle row). From the imitation, it can be determined by cross-validation that 31 parameters per motor primitive are needed. As expected, the robot fails to reproduce the presented behavior, and reinforcement learning is needed for self-improvement. Figure 5 shows the expected return over the number of rollouts, where convergence to a maximum is clearly recognizable. The robot regularly succeeds at bringing the ball into the cup after approximately 75 iterations.

Figure 5: The expected return of the learned policy in the Ball-in-a-Cup evaluation averaged over 20 runs.
4 Conclusion
In this paper, we have presented a new perspective on policy learning methods and an application
to a highly complex motor learning task on a real Barrett WAM™ robot arm. We have generalized
the previous work in [17, 18] from the immediate reward case to the episodic case. In the process,
we could show that policy gradient methods are a special case of this more general framework.
During initial experiments, we realized that the form of exploration highly influences the speed of
the policy learning method. This empirical insight resulted in a novel policy learning algorithm,
Policy learning by Weighting Exploration with the Returns (PoWER), an EM-inspired algorithm
that outperforms several other policy search methods both on standard benchmarks as well as on a
simulated Underactuated Swing-Up.
We successfully applied this novel PoWER algorithm in the context of learning two tasks on a
physical robot, i.e., the Underactuated Swing-Up and Ball-in-a-Cup. Due to the curse of dimensionality,
we cannot start with an arbitrary solution. Instead, we mimic the way children learn Ball-in-a-Cup
and first present an example for imitation learning which is recorded using kinesthetic teach-in.
Subsequently, our reinforcement learning algorithm takes over and learns how to move the ball into
7
the cup reliably. After a realistically small number of episodes, the task can be regularly fulfilled and the robot shows very good average performance.
References
[1] R. Sutton and A. Barto. Reinforcement Learning. MIT Press, 1998.
[2] J. Bagnell, S. Kakade, A. Ng, and J. Schneider. Policy search by dynamic programming. In
Advances in Neural Information Processing Systems (NIPS), 2003.
[3] A. Ng and M. Jordan. PEGASUS: A policy search method for large MDPs and POMDPs. In
International Conference on Uncertainty in Artificial Intelligence (UAI), 2000.
[4] F. Guenter, M. Hersch, S. Calinon, and A. Billard. Reinforcement learning for imitating constrained reaching movements. RSJ Advanced Robotics, 21, 1521-1544, 2007.
[5] M. Toussaint and C. Goerick. Probabilistic inference for structured planning in robotics. In
International Conference on Intelligent Robots and Systems (IROS), 2007.
[6] M. Hoffman, A. Doucet, N. de Freitas, and A. Jasra. Bayesian policy learning with transdimensional MCMC. In Advances in Neural Information Processing Systems (NIPS), 2007.
[7] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229–256, 1992.
[8] R. S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing
Systems (NIPS), 2000.
[9] J. Bagnell and J. Schneider. Covariant policy search. In International Joint Conference on
Artificial Intelligence (IJCAI), 2003.
[10] J. Peters and S. Schaal. Policy gradient methods for robotics. In International Conference on
Intelligent Robots and Systems (IROS), 2006.
[11] G. Lawrence, N. Cowan, and S. Russell. Efficient gradient estimation for motor control learning. In International Conference on Uncertainty in Artificial Intelligence (UAI), 2003.
[12] H. Attias. Planning by probabilistic inference. In Ninth International Workshop on Artificial
Intelligence and Statistics (AISTATS), 2003.
[13] J. Binder, D. Koller, S. Russell, and K. Kanazawa. Adaptive probabilistic networks with hidden
variables. Machine Learning, 29:213–244, 1997.
[14] G. Wulf. Attention and motor skill learning. Human Kinetics, Champaign, IL, 2007.
[15] D. E. Kirk. Optimal control theory. Prentice-Hall, Englewood Cliffs, New Jersey, 1970.
[16] G. J. McLachlan and T. Krishnan. The EM Algorithm and Extensions. Wiley Series in Probability and Statistics. John Wiley & Sons, 1997.
[17] P. Dayan and G. E. Hinton. Using expectation-maximization for reinforcement learning. Neural Computation, 9(2):271–278, 1997.
[18] J. Peters and S. Schaal. Reinforcement learning by reward-weighted regression for operational
space control. In International Conference on Machine Learning (ICML), 2007.
[19] T. Rückstieß, M. Felder, and J. Schmidhuber. State-dependent exploration for policy gradient
methods. In European Conference on Machine Learning (ECML), 2008.
[20] M. Kawato, F. Gandolfo, H. Gomi, and Y. Wada. Teaching by showing in kendama based on
optimization principle. In International Conference on Artificial Neural Networks, 1994.
[21] C. G. Atkeson. Using local trajectory optimizers to speed up global optimization in dynamic
programming. In Advances in Neural Information Processing Systems (NIPS), 1994.
[22] A. Ijspeert, J. Nakanishi, and S. Schaal. Learning attractor landscapes for learning motor
primitives. In Advances in Neural Information Processing Systems (NIPS), 2003.
[23] S. Schaal, P. Mohajerian, and A. Ijspeert. Dynamics systems vs. optimal control ? a unifying
view. Progress in Brain Research, 165(1):425–445, 2007.
[24] Wikipedia, May 31, 2008. http://en.wikipedia.org/wiki/Ball_in_a_cup
[25] J. Kober, B. Mohler, and J. Peters. Learning perceptual coupling for motor primitives. In
International Conference on Intelligent RObots and Systems (IROS), 2008.
2,808 | 3,546 | Nonparametric Bayesian Learning of Switching
Linear Dynamical Systems
Emily B. Fox
Electrical Engineering & Computer Science, Massachusetts Institute of Technology
[email protected]
Erik B. Sudderth*, Michael I. Jordan*†
*Electrical Engineering & Computer Science and †Statistics, University of California, Berkeley
{sudderth, jordan}@eecs.berkeley.edu
Alan S. Willsky
Electrical Engineering & Computer Science, Massachusetts Institute of Technology
[email protected]
Abstract
Many nonlinear dynamical phenomena can be effectively modeled by a system
that switches among a set of conditionally linear dynamical modes. We consider two such models: the switching linear dynamical system (SLDS) and the
switching vector autoregressive (VAR) process. Our nonparametric Bayesian approach utilizes a hierarchical Dirichlet process prior to learn an unknown number
of persistent, smooth dynamical modes. We develop a sampling algorithm that
combines a truncated approximation to the Dirichlet process with efficient joint
sampling of the mode and state sequences. The utility and flexibility of our model
are demonstrated on synthetic data, sequences of dancing honey bees, and the
IBOVESPA stock index.
1 Introduction
Linear dynamical systems (LDSs) are useful in describing dynamical phenomena as diverse as human motion [9], financial time-series [4], maneuvering targets [6, 10], and the dance of honey bees
[8]. However, such phenomena often exhibit structural changes over time and the LDS models
which describe them must also change. For example, a coasting ballistic missile makes an evasive
maneuver; a country experiences a recession, a central bank intervention, or some national or global
event; a honey bee changes from a waggle to a turn right dance. Some of these changes will appear frequently, while others are only rarely observed. In addition, there is always the possibility
of a new, previously unseen dynamical behavior. These considerations motivate us to develop a
nonparametric Bayesian approach for learning switching LDS (SLDS) models. We also consider
a special case of the SLDS, the switching vector autoregressive (VAR) process, in which direct
observations of the underlying dynamical process are assumed available. Although a special case of
the general linear systems framework, autoregressive models have simplifying properties that often
make them a practical choice in applications.
One can view switching dynamical processes as an extension of hidden Markov models (HMMs)
in which each HMM state, or mode, is associated with a dynamical process. Existing methods for
learning SLDSs and switching VAR processes rely on either fixing the number of HMM modes,
such as in [8], or considering a change-point detection formulation where each inferred change is
to a new, previously unseen dynamical mode, such as in [14]. In this paper we show how one can
remain agnostic about the number of dynamical modes while still allowing for returns to previously
exhibited dynamical behaviors.
Hierarchical Dirichlet processes (HDP) can be used as a prior on the parameters of HMMs with
unknown mode space cardinality [2, 12]. In this paper we make use of a variant of the HDP-HMM, the sticky HDP-HMM of [5], which provides improved control over the number of modes
inferred by the HDP-HMM; such control is crucial for the problems we examine. Although the
HDP-HMM and its sticky extension are very flexible time series models, they do make a strong
Markovian assumption that observations are conditionally independent given the HMM mode. This
assumption is often insufficient for capturing the temporal dependencies of the observations in real
data. Our nonparametric Bayesian approach for learning switching dynamical processes extends the
sticky HDP-HMM formulation to learn an unknown number of persistent, smooth dynamical modes
and thereby capture a wider range of temporal dependencies.
2 Background: Switching Linear Dynamic Systems
A state space (SS) model provides a general framework for analyzing many dynamical phenomena.
The model consists of an underlying state, x_t \in \mathbb{R}^n, with linear dynamics observed via y_t \in \mathbb{R}^d. A linear time-invariant SS model, in which the dynamics do not depend on time, is given by

x_t = A x_{t-1} + e_t \qquad y_t = C x_t + w_t,   (1)

where e_t and w_t are independent Gaussian noise processes with covariances \Sigma and R, respectively. An order-r VAR process, denoted by VAR(r), with observations y_t \in \mathbb{R}^d, can be defined as

y_t = \sum_{i=1}^{r} A_i y_{t-i} + e_t \qquad e_t \sim N(0, \Sigma).   (2)
Here, the observations depend linearly on the previous r observation vectors. Every VAR(r) process
can be described in SS form by, for example, the following transformation:
x_t = \begin{bmatrix} A_1 & A_2 & \cdots & A_r \\ I & 0 & \cdots & 0 \\ & \ddots & \ddots & \vdots \\ 0 & \cdots & I & 0 \end{bmatrix} x_{t-1} + \begin{bmatrix} I \\ 0 \\ \vdots \\ 0 \end{bmatrix} e_t \qquad y_t = [\,I \;\; 0 \;\; \cdots \;\; 0\,]\, x_t.   (3)
Note that there are many such equivalent minimal SS representations that result in the same input-output relationship, where minimality implies that there does not exist a realization with lower state dimension. On the other hand, not every SS model may be expressed as a VAR(r) process for finite r [1]. We can thus conclude that considering a class of SS models with state dimension r \cdot d and arbitrary dynamic matrix A subsumes the class of VAR(r) processes.
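For illustration, the following sketch (Python with NumPy; our own helper, not from the paper) builds the companion-form matrices of Eq. (3) from the VAR(r) coefficient matrices:

```python
import numpy as np

def var_to_companion(A_list, d):
    """Build the (r*d x r*d) companion-form dynamics matrix of Eq. (3) from
    VAR(r) coefficient matrices A_1, ..., A_r (each d x d), together with
    the measurement matrix C that extracts y_t from the stacked state."""
    r = len(A_list)
    A = np.zeros((r * d, r * d))
    A[:d, :] = np.hstack(A_list)           # top block row [A_1 ... A_r]
    A[d:, :-d] = np.eye((r - 1) * d)       # shifted identity blocks
    C = np.hstack([np.eye(d), np.zeros((d, (r - 1) * d))])  # y_t = [I 0 ... 0] x_t
    return A, C
```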
The dynamical phenomena we examine in this paper exhibit behaviors better modeled as switches
between a set of linear dynamical models. Due to uncertainty in the mode of the process, the overall
model is nonlinear. We define a switching linear dynamical system (SLDS) by

x_t = A^{(z_t)} x_{t-1} + e_t(z_t) \qquad y_t = C x_t + w_t.   (4)

The first-order Markov process z_t indexes the mode-specific LDS at time t, which is driven by Gaussian noise e_t(z_t) \sim N(0, \Sigma^{(z_t)}). We similarly define a switching VAR(r) process by

y_t = \sum_{i=1}^{r} A_i^{(z_t)} y_{t-i} + e_t(z_t) \qquad e_t(z_t) \sim N(0, \Sigma^{(z_t)}).   (5)
Note that the underlying state dynamics of the SLDS are equivalent to a switching VAR(1) process.
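As a small illustration of Eq. (5), the following sketch (our own, in Python with NumPy) simulates a switching VAR(1) process given mode-specific dynamics, noise covariances, and a Markov transition matrix:

```python
import numpy as np

def simulate_switching_var1(A_modes, Sigma_modes, Pi, T, rng=None):
    """Simulate a switching VAR(1) process (Eq. (5) with r = 1): the mode z_t
    follows a Markov chain with row-stochastic transition matrix Pi, and each
    mode k has its own dynamics A^{(k)} and noise covariance Sigma^{(k)}."""
    rng = np.random.default_rng() if rng is None else rng
    K, d = len(A_modes), A_modes[0].shape[0]
    z = np.zeros(T, dtype=int)
    y = np.zeros((T, d))
    for t in range(1, T):
        z[t] = rng.choice(K, p=Pi[z[t - 1]])
        e = rng.multivariate_normal(np.zeros(d), Sigma_modes[z[t]])
        y[t] = A_modes[z[t]] @ y[t - 1] + e
    return y, z
```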
3 Background: Dirichlet Processes and the Sticky HDP-HMM
A Dirichlet process (DP), denoted by DP(\gamma, H), is a distribution on discrete measures

G_0 = \sum_{k=1}^{\infty} \beta_k \delta_{\theta_k} \qquad \theta_k \sim H   (6)

on a parameter space \Theta. The weights are generated via a stick-breaking construction [11]:

\beta_k = \beta_k' \prod_{\ell=1}^{k-1} (1 - \beta_\ell') \qquad \beta_k' \sim \text{Beta}(1, \gamma).   (7)
Figure 1: For all graphs, \beta \sim \text{GEM}(\gamma) and \theta_k \sim H(\lambda). (a) DP mixture model in which z_i \sim \beta and y_i \sim f(y \mid \theta_{z_i}). (b) HDP mixture model with \pi_j \sim DP(\alpha, \beta), z_{ji} \sim \pi_j, and y_{ji} \sim f(y \mid \theta_{z_{ji}}). (c)-(d) Sticky HDP-HMM prior on switching VAR(2) and SLDS processes with the mode evolving as z_{t+1} \sim \pi_{z_t} for \pi_k \sim DP(\alpha + \kappa, (\alpha\beta + \kappa\delta_k)/(\alpha + \kappa)). The dynamical processes are as in Eq. (13).
We denote this distribution by \beta \sim \text{GEM}(\gamma). The DP is commonly used as a prior on the parameters of a mixture model, resulting in a DP mixture model (see Fig. 1(a)). To generate observations, we choose \bar{\theta}_i \sim G_0 and y_i \sim F(\bar{\theta}_i). This sampling process is often described via a discrete variable z_i \sim \beta indicating which component generates y_i \sim F(\theta_{z_i}).
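For illustration, a truncated version of the stick-breaking construction of Eq. (7) can be sampled as follows (Python with NumPy; the handling of the leftover stick mass under truncation is our own assumption):

```python
import numpy as np

def stick_breaking(gamma, L, rng=None):
    """Draw the first L weights of beta ~ GEM(gamma) via Eq. (7); the leftover
    mass beyond L is folded into the last component so the weights sum to one."""
    rng = np.random.default_rng() if rng is None else rng
    v = rng.beta(1.0, gamma, size=L)                      # beta'_k ~ Beta(1, gamma)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    beta = v * remaining                                  # beta_k = beta'_k prod(1 - beta'_l)
    beta[-1] += 1.0 - beta.sum()                          # truncation correction
    return beta
```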
The hierarchical Dirichlet process (HDP) [12] extends the DP to cases in which groups of data are produced by related, yet distinct, generative processes. Taking a hierarchical Bayesian approach, the HDP draws G_0 from a Dirichlet process prior DP(\gamma, H), and then draws group-specific distributions G_j \sim DP(\alpha, G_0). Here, the base measure G_0 acts as an "average" distribution (E[G_j \mid G_0] = G_0) encoding the frequency of each shared, global parameter:

G_j = \sum_{t=1}^{\infty} \tilde{\pi}_{jt} \delta_{\tilde{\theta}_{jt}} \qquad \tilde{\pi}_j \sim \text{GEM}(\alpha)   (8)

\;\;\;\; = \sum_{k=1}^{\infty} \pi_{jk} \delta_{\theta_k} \qquad \pi_j \sim DP(\alpha, \beta).   (9)

Because G_0 is discrete, multiple \tilde{\theta}_{jt} \sim G_0 may take identical values \theta_k. Eq. (9) aggregates these probabilities, allowing an observation y_{ji} to be directly associated with the unique global parameters via an indicator random variable z_{ji} \sim \pi_j. See Fig. 1(b).
An alternative, non-constructive characterization of samples G_0 \sim DP(\gamma, H) from a Dirichlet process states that for every finite partition \{A_1, \ldots, A_K\} of \Theta,

(G_0(A_1), \ldots, G_0(A_K)) \sim \text{Dir}(\gamma H(A_1), \ldots, \gamma H(A_K)).   (10)

Using this expression, it can be shown that the following finite, hierarchical mixture model converges in distribution to the HDP as L \to \infty [7, 12]:

\beta \sim \text{Dir}(\gamma/L, \ldots, \gamma/L) \qquad \pi_j \sim \text{Dir}(\alpha\beta_1, \ldots, \alpha\beta_L).   (11)

This weak limit approximation is used by the sampler of Sec. 4.2.
The HDP can be used to develop an HMM with a potentially infinite mode space [2, 12]. For this HDP-HMM, each HDP group-specific distribution, \pi_j, is a mode-specific transition distribution and, due to the infinite mode space, there are infinitely many groups. Let z_t denote the mode of the Markov chain at time t. For discrete Markov processes z_t \sim \pi_{z_{t-1}}, so that z_{t-1} indexes the group to which y_t is assigned. The current HMM mode z_t then indexes the parameter \theta_{z_t} used to generate observation y_t. See Fig. 1(c), ignoring the direct correlation in the observations.
By sampling \pi_j \sim DP(\alpha, \beta), the HDP prior encourages modes to have similar transition distributions (E[\pi_{jk} \mid \beta] = \beta_k). However, it does not differentiate self-transitions from moves between modes. When modeling dynamical processes with mode persistence, the flexible nature of the HDP-HMM prior allows for mode sequences with unrealistically fast dynamics to have large posterior probability. Recently, it has been shown [5] that one may mitigate this problem by instead considering a sticky HDP-HMM where \pi_j is distributed as follows:

\pi_j \sim DP\left(\alpha + \kappa, \; \frac{\alpha\beta + \kappa\delta_j}{\alpha + \kappa}\right).   (12)

Here, (\alpha\beta + \kappa\delta_j) indicates that an amount \kappa > 0 is added to the j-th component of \alpha\beta. The measure of \pi_j over a finite partition (Z_1, \ldots, Z_K) of the positive integers \mathbb{Z}_+, as described by Eq. (10), adds an amount \kappa only to the arbitrarily small partition containing j, corresponding to a self-transition. When \kappa = 0 the original HDP-HMM is recovered. We place a vague prior on \kappa and learn the self-transition bias from the data.
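Under the weak limit approximation of Eq. (11), the sticky transition rows of Eq. (12) become finite Dirichlet draws. The following sketch (our own, in Python with NumPy) illustrates this:

```python
import numpy as np

def sticky_transitions(beta, alpha, kappa, rng=None):
    """Weak-limit draw of the mode-transition rows combining Eqs. (11)-(12):
    pi_j ~ Dir(alpha * beta + kappa * delta_j), where kappa > 0 biases
    self-transitions and kappa = 0 recovers the ordinary HDP-HMM."""
    rng = np.random.default_rng() if rng is None else rng
    L = beta.size
    Pi = np.empty((L, L))
    for j in range(L):
        conc = alpha * beta + kappa * (np.arange(L) == j)
        Pi[j] = rng.dirichlet(conc)
    return Pi
```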
4 The HDP-SLDS and HDP-AR-HMM Models
For greater modeling flexibility, we take a nonparametric approach in defining the mode space of
our switching dynamical processes. Specifically, we develop extensions of the sticky HDP-HMM
for both the SLDS and switching VAR models. For the SLDS, we consider conditionally-dependent
emissions of which only noisy observations are available (see Fig. 1(d)). For this model, which we
refer to as the HDP-SLDS, we place a prior on the parameters of the SLDS and infer their posterior
from the data. We do, however, fix the measurement matrix, C, for reasons of identifiability. Let
\tilde{C} \in \mathbb{R}^{d \times n}, n \geq d, be the measurement matrix associated with a dynamical system defined by \tilde{A}, and assume \tilde{C} has full row rank. Then, without loss of generality, we may consider C = [I \;\; 0] since there exists an invertible transformation T such that the pair C = \tilde{C}T = [I \;\; 0] and A = T^{-1}\tilde{A}T defines an equivalent input-output system. The dimensionality of I is determined by that of the data.
Our choice of the number of columns of zeros is, in essence, a choice of model order.
The previous work of Fox et al. [6] considered a related, yet simpler formulation for modeling a
maneuvering target as a fixed LDS driven by a switching exogenous input. Since the number of
maneuver modes was assumed unknown, the exogenous input was taken to be the emissions of a
HDP-HMM. This work can be viewed as an extension of the work by Caron et. al. [3] in which
the exogenous input was an independent noise process generated from a DP mixture model. The
HDP-SLDS is a major departure from these works since the dynamic parameters themselves change
with the mode and are learned from the data, providing a much more expressive model.
The switching VAR(r) process can similarly be posed as an HDP-HMM in which the observations
are modeled as conditionally VAR(r). This model is referred to as the HDP-AR-HMM and is depicted in Fig. 1(c). The generative processes for these two models are summarized as follows:
                        HDP-AR-HMM                                          HDP-SLDS
Mode dynamics:          z_t \sim \pi_{z_{t-1}}                              z_t \sim \pi_{z_{t-1}}
Observation dynamics:   y_t = \sum_{i=1}^{r} A_i^{(z_t)} y_{t-i} + e_t(z_t)  x_t = A^{(z_t)} x_{t-1} + e_t(z_t);  y_t = C x_t + w_t   (13)

Here, \pi_j is as defined in Sec. 3 and the additive noise processes are as in Sec. 2.
4.1 Posterior Inference of Dynamic Parameters
In this section we focus on developing a prior to regularize the learning of different dynamical modes
conditioned on a fixed mode assignment z1:T . For the SLDS, we analyze the posterior distribution of
the dynamic parameters given a fixed, known state sequence x1:T . Methods for learning the number
of modes and resampling the sequences x1:T and z1:T are discussed in Sec. 4.2.
Conditioned on the mode sequence, one may partition the observations into K different linear regression problems, where K = |{z1 , . . . , zT }|. That is, for each mode k, we may form a matrix
Y(k) with Nk columns consisting of the observations y t with zt = k. Then,
Y^{(k)} = A^{(k)} \bar{Y}^{(k)} + E^{(k)},   (14)

where A^{(k)} = [A_1^{(k)} \cdots A_r^{(k)}], \bar{Y}^{(k)} is a matrix of lagged observations, and E^{(k)} the associated noise vectors. Let D^{(k)} = \{Y^{(k)}, \bar{Y}^{(k)}\}. The posterior distribution over the VAR(r) parameters associated with the k-th mode decomposes as follows:

p(A^{(k)}, \Sigma^{(k)} \mid D^{(k)}) = p(A^{(k)} \mid \Sigma^{(k)}, D^{(k)}) \, p(\Sigma^{(k)} \mid D^{(k)}).   (15)
(k)
(k)
We place a conjugate matrix-normal inverse-Wishart prior on the parameters {A , ? } [13],
providing a reasonable combination of flexibility and analytical convenience. A matrix A ∈ R^{d×m}
has a matrix-normal distribution MN(A; M, V, K) if

p(A) = ( |K|^{d/2} / |2πV|^{m/2} ) exp{ −(1/2) tr[ (A − M)^T V^{−1} (A − M) K ] },   (16)

where M is the mean matrix and V and K^{−1} are the covariances along the rows and columns,
respectively. A vectorization of the matrix A results in

p(vec(A)) = N( vec(M), K^{−1} ⊗ V ),   (17)

where ⊗ denotes the Kronecker product. The resulting posterior is derived as

p(A^{(k)} | Σ^{(k)}, D^{(k)}) = MN( A^{(k)}; S_yȳ^{(k)} S_ȳȳ^{−(k)}, Σ^{(k)}, S_ȳȳ^{(k)} ),   (18)

with B^{−(k)} denoting (B^{(k)})^{−1} for a given matrix B, and

S_ȳȳ^{(k)} = Ỹ^{(k)} Ỹ^{(k)T} + K     S_yȳ^{(k)} = Y^{(k)} Ỹ^{(k)T} + M K     S_yy^{(k)} = Y^{(k)} Y^{(k)T} + M K M^T.

We place an inverse-Wishart prior IW(S_0, n_0) on Σ^{(k)}. Then,

p(Σ^{(k)} | D^{(k)}) = IW( S_{y|ȳ}^{(k)} + S_0, N_k + n_0 ),   (19)

where S_{y|ȳ}^{(k)} = S_yy^{(k)} − S_yȳ^{(k)} S_ȳȳ^{−(k)} S_yȳ^{(k)T}. When A is simply a vector, the matrix-normal
inverse-Wishart prior reduces to the normal inverse-Wishart prior with scale parameter K.
For the HDP-SLDS, we additionally place an IW(R0 , r0 ) prior on the measurement noise covariance
R, which is shared between modes. The posterior distribution is given by
p(R | y_{1:T}, x_{1:T}) = IW( S_R + R_0, T + r_0 ),   (20)

with S_R = Σ_{t=1}^T (y_t − C x_t)(y_t − C x_t)^T. Further details are provided in supplemental Appendix I.
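The conjugate updates in Eqs. (17)-(20) reduce to a few matrix products. Below is a minimal sketch of the sufficient statistics and the conditional draws; the vec-based matrix-normal sampler follows Eq. (17) directly and is meant for exposition rather than efficiency, and the function names are ours.

import numpy as np
from scipy.stats import invwishart

def mniw_posterior(Y, Ybar, M, K, S0, n0):
    """Posterior statistics of Eqs. (17)-(19); Y is d x N_k, Ybar is the lagged m x N_k."""
    Sbb = Ybar @ Ybar.T + K                     # S_{ybar ybar}
    Syb = Y @ Ybar.T + M @ K                    # S_{y ybar}
    Syy = Y @ Y.T + M @ K @ M.T                 # S_{y y}
    Sbb_inv = np.linalg.inv(Sbb)
    S_cond = Syy - Syb @ Sbb_inv @ Syb.T        # S_{y | ybar}
    return Syb @ Sbb_inv, Sbb, S_cond + S0, n0 + Y.shape[1]

def sample_var_parameters(Y, Ybar, M, K, S0, n0, rng=np.random.default_rng(2)):
    M_post, Sbb, S_iw, n_iw = mniw_posterior(Y, Ybar, M, K, S0, n0)
    Sigma = invwishart(df=n_iw, scale=S_iw).rvs(random_state=rng)       # Eq. (19)
    cov = np.kron(np.linalg.inv(Sbb), Sigma)    # vec(A) ~ N(vec(M_post), Sbb^-1 kron Sigma)
    vecA = rng.multivariate_normal(M_post.flatten(order="F"), cov)      # Eq. (17)
    return vecA.reshape(M_post.shape, order="F"), Sigma

def measurement_noise_posterior(Y, X, C, R0, r0):
    """Eq. (20): IW posterior on the shared measurement noise R; columns index time."""
    E = Y - C @ X
    return E @ E.T + R0, Y.shape[1] + r0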
4.2 Gibbs Sampler
For the switching VAR(r) process, our sampler iterates between sampling the mode sequence, z1:T ,
and both the dynamic and sticky HDP-HMM parameters. The sampler for the SLDS is identical to
that of a switching VAR(1) process with the additional step of sampling the state sequence, x1:T ,
and conditioning on the state sequence when resampling dynamic parameters. The resulting Gibbs
sampler is described below and further elaborated upon in supplemental Appendix II.
Sampling Dynamic Parameters Conditioned on a sample of the mode sequence, z_{1:T}, and the observations, y_{1:T}, or state sequence, x_{1:T}, we can sample the dynamic parameters θ = {A^{(k)}, Σ^{(k)}}
from the posterior density described in Sec. 4.1. For the HDP-SLDS, we additionally sample R.
Sampling z1:T As shown in [5], the mixing rate of the Gibbs sampler for the HDP-HMM can
be dramatically improved by using a truncated approximation to the HDP, such as the weak limit
approximation, and jointly sampling the mode sequence using a variant of the forward-backward
algorithm. Specifically, we compute backward messages m_{t+1,t}(z_t) ∝ p(y_{t+1:T} | z_t, y_{t−r+1:t}, π, θ)
and then recursively sample each z_t conditioned on z_{t−1} from

p(z_t | z_{t−1}, y_{1:T}, π, θ) ∝ p(z_t | π_{z_{t−1}}) p(y_t | y_{t−r:t−1}, A^{(z_t)}, Σ^{(z_t)}) m_{t+1,t}(z_t),   (21)

where p(y_t | y_{t−r:t−1}, A^{(z_t)}, Σ^{(z_t)}) = N( Σ_{i=1}^r A_i^{(z_t)} y_{t−i}, Σ^{(z_t)} ). Joint sampling of the mode sequence is especially important when the observations are directly correlated via a dynamical process
since this correlation further slows the mixing rate of the direct assignment sampler of [12]. Note
that the approximation of Eq. (11) retains the HDP's nonparametric nature by encouraging the use
of fewer than L components while allowing the generation of new components, upper bounded by
L, as new data are observed.
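A minimal sketch of this blocked mode-sequence update for the VAR(1) case is given below; it assumes a uniform initial mode distribution and renormalizes the unnormalized messages for numerical stability (our choices, not details from the paper).

import numpy as np
from scipy.stats import multivariate_normal

def sample_mode_sequence(y, pi, A, Sigma, rng=np.random.default_rng(3)):
    """Backward messages plus forward sampling of z_{1:T}, following Eq. (21)."""
    T, L = len(y), len(pi)
    lik = np.ones((T, L))               # lik[t, k] = p(y_t | y_{t-1}, A^{(k)}, Sigma^{(k)})
    for t in range(1, T):
        for k in range(L):
            lik[t, k] = multivariate_normal.pdf(y[t], A[k] @ y[t - 1], Sigma[k])
    m = np.ones((T, L))                 # m[t] holds m_{t+1,t}(z_t)
    for t in range(T - 2, -1, -1):
        m[t] = pi @ (lik[t + 1] * m[t + 1])
        m[t] /= m[t].sum()
    z = np.zeros(T, dtype=int)
    w = lik[0] * m[0]                   # assumes a uniform initial mode distribution
    z[0] = rng.choice(L, p=w / w.sum())
    for t in range(1, T):
        w = pi[z[t - 1]] * lik[t] * m[t]   # Eq. (21)
        z[t] = rng.choice(L, p=w / w.sum())
    return z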
Sampling x_{1:T} (HDP-SLDS only) Conditioned on the mode sequence z_{1:T} and the set of dynamic parameters θ, our dynamical process simplifies to a time-varying linear dynamical system. We can then block sample x_{1:T} by first running a backward filter to compute m_{t+1,t}(x_t) ∝ p(y_{t+1:T} | x_t, z_{t+1:T}, θ) and then recursively sampling each x_t conditioned on x_{t−1} from

p(x_t | x_{t−1}, y_{1:T}, z_{1:T}, θ) ∝ p(x_t | x_{t−1}, A^{(z_t)}, Σ^{(z_t)}) p(y_t | x_t, R) m_{t+1,t}(x_t).   (22)

The messages are given in information form by m_{t,t−1}(x_{t−1}) ∝ N^{−1}(x_{t−1}; θ_{t,t−1}, Λ_{t,t−1}), where
the information parameters are recursively defined as

θ_{t,t−1} = A^{(z_t)T} Σ^{−(z_t)} ( Σ^{−(z_t)} + C^T R^{−1} C + Λ_{t+1,t} )^{−1} ( C^T R^{−1} y_t + θ_{t+1,t} )
Λ_{t,t−1} = A^{(z_t)T} Σ^{−(z_t)} A^{(z_t)} − A^{(z_t)T} Σ^{−(z_t)} ( Σ^{−(z_t)} + C^T R^{−1} C + Λ_{t+1,t} )^{−1} Σ^{−(z_t)} A^{(z_t)}.   (23)

See supplemental Appendix II for a more numerically stable version of this recursion.
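The recursion in Eq. (23) translates almost line by line into code. The sketch below implements the plain (unstabilized) information-form backward filter; as noted above, a numerically stable variant is given in the paper's Appendix II, so this version is for exposition only.

import numpy as np

def backward_information_filter(y, z, A, Sigma, C, R):
    """Unstabilized backward recursion of Eq. (23); theta[t], Lam[t] hold the
    parameters (theta_{t,t-1}, Lambda_{t,t-1}) of the message m_{t,t-1}(x_{t-1})."""
    T, n = len(y), A[0].shape[0]
    Rinv = np.linalg.inv(R)
    theta = np.zeros((T + 1, n))
    Lam = np.zeros((T + 1, n, n))
    for t in range(T - 1, 0, -1):
        Si = np.linalg.inv(Sigma[z[t]])
        J = np.linalg.inv(Si + C.T @ Rinv @ C + Lam[t + 1])
        theta[t] = A[z[t]].T @ Si @ J @ (C.T @ Rinv @ y[t] + theta[t + 1])
        Lam[t] = A[z[t]].T @ (Si - Si @ J @ Si) @ A[z[t]]
    return theta, Lam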
[Figure 2 panels: observation sequences over time (top row) and Normalized Hamming Distance versus Gibbs iteration for each model.]
Figure 2: (a) Observation sequence (blue, green, red) and associated mode sequence (magenta) for a 5-mode
switching VAR(1) process (top), 3-mode switching AR(2) process (middle), and 3-mode SLDS (bottom). The
associated 10th, 50th, and 90th Hamming distance quantiles over 100 trials are shown for the (b) HDP-VAR(1)-HMM, (c) HDP-VAR(2)-HMM, (d) HDP-SLDS with C = I (top and bottom) and C = [1 0] (middle), and
(e) sticky HDP-HMM using first difference observations.
5 Results
Synthetic Data In Fig. 2, we compare the performance of the HDP-VAR(1)-HMM, HDP-VAR(2)-HMM, HDP-SLDS, and a baseline sticky HDP-HMM on three sets of test data (see Fig. 2(a)). The
Hamming distance error is calculated by first choosing the optimal mapping of indices maximizing overlap between the true and estimated mode sequences. For the first scenario, the data were
generated from a 5-mode switching VAR(1) process. The three switching linear dynamical models
provide comparable performance since both the HDP-VAR(2)-HMM and HDP-SLDS with C = I
contain the class of HDP-VAR(1)-HMMs. Note that the HDP-SLDS sampler is slower to mix since
the hidden, three-dimensional continuous state is also sampled. In the second scenario, the data were
generated from a 3-mode switching AR(2) process. The HDP-AR(2)-HMM has significantly better
performance than the HDP-AR(1)-HMM while the performance of the HDP-SLDS with C = [1 0]
is comparable after burn-in. As shown in Sec. 2, this HDP-SLDS model encompasses the class of
HDP-AR(2)-HMMs. The data in the third scenario were generated from a 3-mode SLDS model
with C = I. Here, we clearly see that neither the HDP-VAR(1)-HMM nor HDP-VAR(2)-HMM is
equivalent to the HDP-SLDS. Together, these results demonstrate both the differences between our
models as well as the models' ability to learn switching processes with varying numbers of modes.
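The Hamming-distance metric used above requires first matching estimated mode labels to true ones. A minimal sketch of that computation, using the Hungarian algorithm to choose the overlap-maximizing index mapping (our implementation, assuming integer-coded label arrays), is:

import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_hamming(z_true, z_est):
    """Normalized Hamming distance after the overlap-maximizing label mapping."""
    K = int(max(z_true.max(), z_est.max())) + 1
    overlap = np.zeros((K, K))
    for a, b in zip(z_true, z_est):
        overlap[a, b] += 1
    rows, cols = linear_sum_assignment(-overlap)   # negate to maximize total overlap
    relabel = dict(zip(cols, rows))
    return np.mean(np.array([relabel[b] for b in z_est]) != z_true)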
Finally, note that all of the switching models yielded significant improvements relative to the baseline sticky HDP-HMM, even when the latter was given first differences of the observations. This
input representation, which is equivalent to an HDP-VAR(1)-HMM with random walk dynamics
(A(k) = I for all k), is more effective than using raw observations for HDP-HMM learning, but still
much less effective than richer models which switch among learned LDS.
IBOVESPA Stock Index We test the HDP-SLDS model on the IBOVESPA stock index (Sao
Paulo Stock Exchange) over the period of 01/03/1997 to 01/16/2001. There are ten key world
events shown in Fig. 3 and cited in [4] as affecting the emerging Brazilian market during this time
period. In [4], a 2-mode Markov switching stochastic volatility (MSSV) model is used to identify
periods of higher volatility in the daily returns. The MSSV assumes that the log-volatilities follow an
AR(1) process with a Markov switching mean. This underlying process is observed via conditionally
independent and normally distributed daily returns. The HDP-SLDS is able to infer very similar
change points to those presented in [4]. Interestingly, the HDP-SLDS consistently identifies three
regimes of volatility versus the assumed 2-mode model. In Fig. 3, the overall performance of the
HDP-SLDS is compared to that of the HDP-AR(1)-HMM, HDP-AR(2)-HMM, and HDP-SLDS
with no bias for self-transitions (i.e., κ = 0). The ROC curves shown in Fig. 3(d) are calculated
by windowing the time axis and taking the maximum probability of a change point in each window.
These probabilities are then used as the confidence of a change point in that window. We clearly
see the advantage of using an SLDS model combined with the sticky HDP-HMM prior on the mode
sequence. Without the sticky extension, the HDP-SLDS over-segments the data and rapidly switches
between redundant states, which leads to a dramatically larger number of inferred change points.

Figure 3: (a) IBOVESPA stock index daily returns from 01/03/1997 to 01/16/2001. (b) Plot of the estimated
probability of a change point on each day using 3000 Gibbs samples for the HDP-SLDS. The 10 key events are
indicated with red lines. (c) Similar plot for the non-sticky HDP-SLDS with no bias towards self-transitions.
(d) ROC curves for the HDP-SLDS, non-sticky HDP-SLDS, HDP-AR(1)-HMM, and HDP-AR(2)-HMM.
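A sketch of the windowed ROC construction described above, with the window length and the labeling of which windows contain an annotated event left as user-supplied assumptions:

import numpy as np

def window_scores(p_change, window):
    """Confidence per window: the maximum change-point probability inside it."""
    return np.array([p_change[s:s + window].max()
                     for s in range(0, len(p_change), window)])

def roc_points(scores, labels):
    """Detection vs. false-alarm rate as the confidence threshold is swept;
    labels[i] = 1 if window i contains one of the annotated events."""
    order = np.argsort(-scores)
    hit, miss = labels[order], 1 - labels[order]
    return np.cumsum(miss) / max(miss.sum(), 1), np.cumsum(hit) / max(hit.sum(), 1)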
Dancing Honey Bees We test the HDP-VAR(1)-HMM on a set of six dancing honey bee sequences, aiming to segment the sequences into the three dances displayed in Fig. 4. (Note that we
did not see performance gains by considering the HDP-SLDS, so we omit showing results for that
architecture.) The data consist of measurements y_t = [cos(θ_t) sin(θ_t) x_t y_t]^T, where (x_t, y_t)
denotes the 2D coordinates of the bee's body and θ_t its head angle. We compare our results to
those of Xuan and Murphy [14], who used a change-point detection technique for inference on this
dataset. As shown in Fig. 4(d)-(e), our model achieves a superior segmentation compared to the
change-point formulation in almost all cases, while also identifying modes which reoccur over time.
Oh et al. [8] also presented an analysis of the honey bee data, using an SLDS with a fixed number of
modes. Unfortunately, that analysis is not directly comparable to ours, because [8] used their SLDS
in a supervised formulation in which the ground truth labels for all but one of the sequences are
employed in the inference of the labels for the remaining held-out sequence, and in which the kernels
used in the MCMC procedure depend on the ground truth labels. (The authors also considered a
"parameterized segmental SLDS (PS-SLDS)," which makes use of domain knowledge specific to
honey bee dancing and requires additional supervision during the learning process.) Nonetheless,
in Table 1 we report the performance of these methods as well as the median performance (over
100 trials) of the unsupervised HDP-VAR(1)-HMM to provide a sense of the level of performance
achievable without detailed, manual supervision. As seen in Table 1, the HDP-VAR(1)-HMM yields
very good performance on sequences 4 to 6 in terms of the learned segmentation and number of
modes (see Fig. 4(a)-(c)); the performance approaches that of the supervised method. For sequences
1 to 3?which are much less regular than sequences 4 to 6?the performance of the unsupervised
procedure is substantially worse. This motivated us to also consider a partially supervised variant
of the HDP-VAR(1)-HMM in which we fix the ground truth mode sequences for five out of six of
the sequences, and jointly infer both a combined set of dynamic parameters and the left-out mode
sequence. As we see in Table 1, this considerably improved performance for these three sequences.
Not depicted in the plots in Fig. 4 is the extreme variation in head angle during the waggle dances
of sequences 1 to 3. This dramatically affects our performance since we do not use domain-specific
information. Indeed, our learned segmentations consistently identify turn-right and turn-left modes,
but often create a new, sequence-specific waggle dance mode. Many of our errors can be attributed to
creating multiple waggle dance modes within a sequence. Overall, however, we are able to achieve
reasonably good segmentations without having to manually input domain-specific knowledge.
6 Discussion
In this paper, we have addressed the problem of learning switching linear dynamical models with
an unknown number of modes for describing complex dynamical phenomena.
[Figure 4 panels: bee trajectories (1)-(6), estimated mode sequences over time (a)-(c), and ROC curves (d)-(e), detection rate versus false alarm rate.]
Figure 4: (top) Trajectories of the dancing honey bees for sequences 1 to 6, colored by waggle (red), turn
right (blue), and turn left (green) dances. (a)-(c) Estimated mode sequences representing the median error for
sequences 4, 5, and 6 at the 200th Gibbs iteration, with errors indicated in red. (d)-(e) ROC curves for the
unsupervised HDP-VAR-HMM, partially supervised HDP-VAR-HMM, and change-point formulation of [14]
using the Viterbi sequence for segmenting datasets 1-3 and 4-6, respectively.
Sequence                               1     2     3     4     5     6
HDP-VAR(1)-HMM unsupervised           46.5  44.1  45.6  83.2  93.2  88.7
HDP-VAR(1)-HMM partially supervised   65.9  88.5  79.2  86.9  92.3  89.1
SLDS DD-MCMC                          74.0  86.1  81.3  93.4  90.2  90.4
PS-SLDS DD-MCMC                       75.9  92.4  83.1  93.4  90.4  91.0
Table 1: Median label accuracy of the HDP-VAR(1)-HMM using unsupervised and partially supervised Gibbs
sampling, compared to accuracy of the supervised PS-SLDS and SLDS procedures, where the latter algorithms
were based on a supervised MCMC procedure (DD-MCMC) [8].
We presented a nonparametric Bayesian approach and demonstrated both the utility and versatility of the developed
HDP-SLDS and HDP-AR-HMM on real applications. Using the same parameter settings, in one
case we are able to learn changes in the volatility of the IBOVESPA stock exchange while in another case we learn segmentations of data into waggle, turn-right, and turn-left honey bee dances.
An interesting direction for future research is learning models of varying order for each mode.
References
[1] M. Aoki and A. Havenner. State space modeling of multiple time series. Econ. Rev., 10(1):1–59, 1991.
[2] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In NIPS, 2002.
[3] F. Caron, M. Davy, A. Doucet, E. Duflos, and P. Vanheeghe. Bayesian inference for dynamic models with
Dirichlet process mixtures. In Int. Conf. Inf. Fusion, July 2006.
[4] C. Carvalho and H. Lopes. Simulation-based sequential analysis of Markov switching stochastic volatility
models. Comp. Stat. & Data Anal., 2006.
[5] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. An HDP-HMM for systems with state
persistence. In ICML, 2008.
[6] E. B. Fox, E. B. Sudderth, and A. S. Willsky. Hierarchical Dirichlet processes for tracking maneuvering
targets. In Int. Conf. Inf. Fusion, July 2007.
[7] H. Ishwaran and M. Zarepour. Exact and approximate sum-representations for the Dirichlet process. Can.
J. Stat., 30:269–283, 2002.
[8] S. Oh, J. Rehg, T. Balch, and F. Dellaert. Learning and inferring motion patterns using parametric segmental switching linear dynamic systems. IJCV, 77(1–3):103–124, 2008.
[9] V. Pavlović, J. M. Rehg, and J. MacCormick. Learning switching linear models of human motion. In
NIPS, 2000.
[10] X. Rong Li and V. Jilkov. Survey of maneuvering target tracking. Part V: Multiple-model methods. IEEE
Trans. Aerosp. Electron. Syst., 41(4):1255–1321, 2005.
[11] J. Sethuraman. A constructive definition of Dirichlet priors. Stat. Sinica, 4:639–650, 1994.
[12] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. J. Amer. Stat.
Assoc., 101(476):1566–1581, 2006.
[13] M. West and J. Harrison. Bayesian Forecasting and Dynamic Models. Springer, 1997.
[14] X. Xuan and K. Murphy. Modeling changing dependency structure in multivariate time series. In ICML,
2007.
Goal-directed decision making in prefrontal
cortex: A computational framework
Matthew Botvinick
Princeton Neuroscience Institute and
Department of Psychology, Princeton University
Princeton, NJ 08540
[email protected]
James An
Computer Science Department
Princeton University
Princeton, NJ 08540
[email protected]
Abstract
Research in animal learning and behavioral neuroscience has distinguished
between two forms of action control: a habit-based form, which relies on
stored action values, and a goal-directed form, which forecasts and
compares action outcomes based on a model of the environment. While
habit-based control has been the subject of extensive computational
research, the computational principles underlying goal-directed control in
animals have so far received less attention. In the present paper, we
advance a computational framework for goal-directed control in animals
and humans. We take three empirically motivated points as founding
premises: (1) Neurons in dorsolateral prefrontal cortex represent action
policies, (2) Neurons in orbitofrontal cortex represent rewards, and (3)
Neural computation, across domains, can be appropriately understood as
performing structured probabilistic inference. On a purely computational
level, the resulting account relates closely to previous work using Bayesian
inference to solve Markov decision problems, but extends this work by
introducing a new algorithm, which provably converges on optimal plans.
On a cognitive and neuroscientific level, the theory provides a unifying
framework for several different forms of goal-directed action selection,
placing emphasis on a novel form, within which orbitofrontal reward
representations directly drive policy selection.
1 Goal-directed action control
In the study of human and animal behavior, it is a long-standing idea that reward-based
decision making may rely on two qualitatively different mechanisms. In habit-based
decision making, stimuli elicit reflex-like responses, shaped by past reinforcement [1]. In
goal-directed or purposive decision making, on the other hand, actions are selected based on
a prospective consideration of possible outcomes and future lines of action [2]. Over the past
twenty years or so, the attention of cognitive neuroscientists and computationally minded
psychologists has tended to focus on habit-based control, due in large part to interest in
potential links between dopaminergic function and temporal-difference algorithms for
reinforcement learning. However, a resurgence of interest in purposive action selection is
now being driven by innovations in animal behavior research, which have yielded powerful
new behavioral assays [3], and revealed specific effects of focal neural damage on goal-directed behavior [4].
In discussing some of the relevant data, Daw, Niv and Dayan [5] recently pointed out the
close relationship between purposive decision making, as understood in the behavioral
sciences, and model-based methods for the solution of Markov decision problems (MDPs),
where action policies are derived from a joint analysis of a transition function (a mapping
from states and actions to outcomes) and a reward function (a mapping from states to
rewards). Beyond this important insight, little work has yet been done to characterize the
computations underlying goal-directed action selection (though see [6, 7]). As discussed
below, a great deal of evidence indicates that purposive action selection depends critically on
a particular region of the brain, the prefrontal cortex. However, it is currently a critical, and
quite open, question what the relevant computations within this part of the brain might be.
Of course, the basic computational problem of formulating an optimal policy given a model
of an MDP has been extensively studied, and there is no shortage of algorithms one might
consider as potentially relevant to prefrontal function (e.g., value iteration, policy iteration,
backward induction, linear programming, and others). However, from a cognitive and
neuroscientific perspective, there is one approach to solving MDPs that it seems particularly
appealing to consider. In particular, several researchers have suggested methods for solving
MDPs through probabilistic inference [8-12]. The interest of this idea, in the present
context, derives from a recent movement toward framing human and animal information
processing, as well as the underlying neural computations, in terms of structured
probabilistic inference [13, 14]. Given this perspective, it is inviting to consider whether
goal-directed action selection, and the neural mechanisms that underlie it, might be
understood in those same terms.
One challenge in investigating this possibility is that previous research furnishes no "off-the-shelf" algorithm for solving MDPs through probabilistic inference that both provably yields
optimal policies and aligns with what is known about action selection in the brain. We
endeavor here to start filling in that gap. In the following section, we introduce an account
of how goal-directed action selection can be performed based on probabilisitic inference,
within a network whose components map grossly onto specific brain structures. As part of
this account, we introduce a new algorithm for solving MDPs through Bayesian inference,
along with a convergence proof. We then present results from a set of simulations
illustrating how the framework would account for a variety of behavioral phenomena that
are thought to involve purposive action selection.
2 Computational model
As noted earlier, the prefrontal cortex (PFC) is believed to play a pivotal role in purposive
behavior. This is indicated by a broad association between prefrontal lesions and
impairments in goal-directed action in both humans (see [15]) and animals [4]. Single-unit
recording and other data suggest that different sectors of PFC make distinct contributions.
In particular, neurons in dorsolateral prefrontal cortex (DLPFC) appear to encode task-specific mappings from stimuli to responses (e.g., [16]): "task representations," in the
language of psychology, or "policies" in the language of dynamic programming. Although
there is some understanding of how policy representations in DLPFC may guide action
execution [15], little is yet known about how these representations are themselves selected.
Our most basic proposal is that DLPFC policy representations are selected in a prospective,
model-based fashion, leveraging information about action-outcome contingencies (i.e., the
transition function) and about the incentive value associated with specific outcomes or states
(the reward function). There is extensive evidence to suggest that state-reward associations
are represented in another area of the PFC, the orbitofrontal cortex (OFC) [17, 18]. As for
the transition function, although it is clear that the brain contains detailed representations of
action-outcome associations [19], their anatomical localization is not yet entirely clear.
However, some evidence suggests that the enviromental effects of simple actions may be
represented in inferior fronto-parietal cortex [20], and there is also evidence suggesting that
medial temporal structures may be important in forecasting action outcomes [21].
As detailed in the next section, our model assumes that policy representations in DLPFC,
reward representations in OFC, and representations of states and actions in other brain
regions, are coordinated within a network structure that represents their causal or statistical
interdependencies, and that policy selection occurs, within this network, through a process of
probabilistic inference.
2.1 Architecture
The implementation takes the form of a directed graphical model [22], with the layout shown
in Figure 1. Each node represents a discrete random variable. State variables (s),
representing the set of m possible world states, serve the role played by parietal and medial
temporal cortices in representing action outcomes. Action variables (a) representing the set
of available actions, play the role of high-level cortical motor areas involved in the programming of
action sequences. Policy variables (π), each representing the set of all deterministic policies
associated with a specific state, capture the representational role of DLPFC. Local and global
utility variables, described further below, capture the role of OFC in representing incentive value. A
separate set of nodes is included for each discrete time-step up to the planning horizon.

Fig 1. Left: Single-step decision. Right: Sequential decision. Each time-slice includes a set of m policy nodes.
The conditional probabilities associated with each variable are represented in tabular form.
State probabilities are based on the state and action variables in the preceding time-step, and
thus encode the transition function. Action probabilities depend on the current state and its
associated policy variable. Utilities depend only on the current state. Rather than
representing reward magnitude as a continuous variable, we adopt an approach introduced by
[23], representing reward through the posterior probability of a binary variable (u). States
associated with large positive reward raise p(u) (i.e., p(u=1|s)) near to one; states associated
with large negative rewards reduce p(u) to near zero. In the simulations reported below, we
used a simple linear transformation to map from scalar reward values to p(u):
p(u | s_i) = (1/2) ( R(s_i)/r_max + 1 ),   r_max ≥ max_j |R(s_j)|.   (1)
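For concreteness, Eq. (1) amounts to the following one-line mapping (the state names here are hypothetical):

def reward_to_prob(R):
    """Eq. (1): map scalar rewards to p(u = 1 | s), with r_max >= max_j |R(s_j)|."""
    r_max = max(abs(r) for r in R.values())
    return {s: 0.5 * (r / r_max + 1.0) for s, r in R.items()}

print(reward_to_prob({"preferred_food": 2.0, "other_food": 1.0}))
# {'preferred_food': 1.0, 'other_food': 0.75}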
In situations involving sequential actions, expected returns from different time-steps must be
integrated into a global representation of expected value. In order to accomplish this, we
employ a technique proposed by [8], introducing a "global" utility variable (u_G).¹ Like u, this
is a binary random variable, but associated with a posterior probability determined as:

p(u_G) = (1/N) Σ_i p(u_i),   (2)
where N is the number of u nodes. The network as whole embodies a generative model for
instrumental action. The basic idea is to use this model as a substrate for probabilistic
inference, in order to arrive at optimal policies. There are three general methods for
accomplishing this, which correspond three forms of query. First, a desired outcome state
can be identified, by treating one of the state variables (as well as the initial state variable)
as observed (see [9] for an application of this approach). Second, the expected return for
specific plans can be evaluated and compared by conditioning on specific sets of values over
the policy nodes (see [5, 21]). However, our focus here is on a less obvious possibility,
which is to condition directly on the utility variable u G , as explained next.
2.2 Policy selection by probabilistic inference: an iterative algorithm
Cooper [23] introduced the idea of inferring optimal decisions in influence diagrams by
treating utility nodes as binary random variables and then conditioning on these variables.
Although this technique has been adopted in some more recent work [9, 12], we are aware of
no application that guarantees optimal decisions, in the expected-reward sense, in multi-step
tasks. We introduce here a simple algorithm that does furnish such a guarantee. The
procedure is as follows: (1) Initialize the policy nodes with any set of non-deterministic priors.²
(2) Treating the initial state and u_G as observed variables (u_G = 1), use standard belief
propagation (or a comparable algorithm) to infer the posterior distributions over all policy
nodes. (3) Set the prior distributions over the policy nodes to the values (posteriors)
obtained in step 2. (4) Go to step 2. The next two sections present proofs of monotonicity
and convergence for this algorithm.

¹ Note that temporal discounting can be incorporated into the framework through minimal modifications to Equation 2.
² In the single-action situation, where there is only one u node, it is this variable that is treated as observed (u = 1).
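For the single-step, deterministic case, where exact inference is a one-line Bayes update, steps (1)-(4) reduce to repeatedly conditioning on u = 1 and promoting the posterior to the new prior. The sketch below runs this loop for the two-lever example of Section 3.1; the reward-derived utilities come from Eq. (1) with r_max = 2, and the loop length is an arbitrary choice of ours.

import numpy as np

p_u = np.array([1.0, 0.75])        # p(u=1 | press left), p(u=1 | press right), via Eq. (1)
rewards = np.array([2.0, 1.0])
p_policy = np.array([0.5, 0.5])    # step (1): nondeterministic prior

for step in range(12):
    posterior = p_u * p_policy               # step (2): condition on u = 1
    p_policy = posterior / posterior.sum()   # step (3): posterior becomes the new prior
    ev = float(p_policy @ rewards)           # marginal expected value under the policy
    print(step, p_policy.round(4), round(ev, 4))
# p_policy -> [1, 0]: the iteration converges on the optimal press-left policy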
2.2.1 Monotonicity
We show first that, at each policy node, the probability associated with the optimal policy will rise
on every iteration. Define π* as follows:

p(u_G | π*, π⁺) > p(u_G | π, π⁺),   ∀π ≠ π*,   (3)

where π⁺ is the current set of probability distributions at all policy nodes on subsequent time-steps.
(Note that we assume here, for simplicity, that there is a unique optimal policy.) The objective is
to establish that:

p_t(π*) > p_{t−1}(π*),   (4)

where t indexes processing iterations. The dynamics of the network entail that

p_t(π) = p_{t−1}(π | u_G),   (5)

where π represents any value (i.e., policy) of the decision node being considered. Substituting this
into (4) gives

p_{t−1}(π* | u_G) > p_{t−1}(π*).   (6)

From this point on the focus is on a single iteration, which permits us to omit the relevant
subscripts. Applying Bayes' law to (6) yields

p(u_G | π*) p(π*) / Σ_π p(u_G | π) p(π) > p(π*).   (7)

Canceling, and bringing the denominator up, this becomes

p(u_G | π*) > Σ_π p(u_G | π) p(π).   (8)

Rewriting the left hand side, we obtain

p(u_G | π*) Σ_π p(π) > Σ_π p(u_G | π) p(π).   (9)

Subtracting and further rearranging:

Σ_π [ p(u_G | π*) − p(u_G | π) ] p(π) > 0   (10)

[ p(u_G | π*) − p(u_G | π*) ] p(π*) + Σ_{π≠π*} [ p(u_G | π*) − p(u_G | π) ] p(π) > 0   (11)

Σ_{π≠π*} [ p(u_G | π*) − p(u_G | π) ] p(π) > 0.   (12)

Note that this last inequality (12) follows from the definition of π*.

Remark: Of course, the identity of π* depends on π⁺. In particular, the policy π* will only be part
of a globally optimal plan if the set of choices π⁺ is optimal. Fortunately, this requirement is
guaranteed to be met, as long as no upper bound is placed on the number of processing cycles.
Recalling that we are considering only finite-horizon problems, note that for policies leading to
states with no successors, π⁺ is empty. Thus π* at the relevant policy nodes is fixed, and is
guaranteed to be part of the optimal policy. The proof above shows that p(π*) will continuously rise.
Once it reaches a maximum, π* at immediately preceding decisions will perforce fit with the
globally optimal policy. The process works backward, in the fashion of backward induction.
2.2.2 Convergence
Continuing with the same notation, we show now that

lim_{t→∞} p_t(π* | u_G) = 1.   (13)

Note that, if we apply Bayes' law recursively,

p_t(π | u_G) = p(u_G | π) p_t(π) / p_t(u_G)
             = p(u_G | π)² p_{t−1}(π) / ( p_t(u_G) p_{t−1}(u_G) )
             = p(u_G | π)³ p_{t−2}(π) / ( p_t(u_G) p_{t−1}(u_G) p_{t−2}(u_G) ) = ⋯   (14)

Thus,

p_1(π | u_G) = p(u_G | π) p_1(π) / p_1(u_G),
p_2(π | u_G) = p(u_G | π)² p_1(π) / ( p_2(u_G) p_1(u_G) ),
p_3(π | u_G) = p(u_G | π)³ p_1(π) / ( p_3(u_G) p_2(u_G) p_1(u_G) ),   (15)

and so forth. Thus, what we wish to prove is

lim_{t→∞} [ p(u_G | π*)^t p_1(π*) ] / [ Π_{τ=1}^t p_τ(u_G) ] = 1   (16)

or, rearranging,

Π_{t=1}^∞ [ p_t(u_G) / p(u_G | π*) ] = p_1(π*).   (17)

Note that, given the stipulated relationship between p(π) on each processing iteration and p(π | u_G)
on the previous iteration,

p_t(u_G) = Σ_π p(u_G | π) p_t(π) = Σ_π p(u_G | π) p_{t−1}(π | u_G)
         = Σ_π p(u_G | π)² p_{t−1}(π) / p_{t−1}(u_G)
         = Σ_π p(u_G | π)³ p_{t−2}(π) / ( p_{t−1}(u_G) p_{t−2}(u_G) )
         = Σ_π p(u_G | π)⁴ p_{t−3}(π) / ( p_{t−1}(u_G) p_{t−2}(u_G) p_{t−3}(u_G) ) = ⋯   (18)

With this in mind, we can rewrite the left hand side product in (17) as follows:

[ p_1(u_G) / p(u_G | π*) ] ·
[ Σ_π p(u_G | π)² p_1(π) / ( p(u_G | π*) p_1(u_G) ) ] ·
[ Σ_π p(u_G | π)³ p_1(π) / ( p(u_G | π*) p_1(u_G) p_2(u_G) ) ] ·
[ Σ_π p(u_G | π)⁴ p_1(π) / ( p(u_G | π*) p_1(u_G) p_2(u_G) p_3(u_G) ) ] ⋯   (19)

Note that, given (18), the numerator in each factor of (19) cancels with the denominator in the
subsequent factor, leaving only p(u_G | π*) in that denominator. The expression can thus be rewritten
as

lim_{t→∞} Σ_π [ p(u_G | π) / p(u_G | π*) ]^t p_1(π).   (20)

The objective is then to show that the above equals p_1(π*). It proceeds directly from the definition
of π* that, for all π other than π*,

p(u_G | π) / p(u_G | π*) < 1.   (21)

Thus, all but one of the terms in the sum above approach zero, and the remaining term equals
p_1(π*). Thus,

lim_{t→∞} Σ_π [ p(u_G | π) / p(u_G | π*) ]^t p_1(π) = p_1(π*).   (22)
3 Simulations

3.1 Binary choice
We begin with a simulation of a simple incentive choice situation. Here, an animal faces
two levers. Pressing the left lever reliably yields a preferred food (r = 2), the right a less
preferred food (r = 1). Representing these contingencies in a network structured as in Fig. 1
(left) and employing the iterative algorithm described in section 2.2 yields the results in
Figure 2A. Shown here are the posterior probabilities for the policies press left and press
right, along with the marginal value of p(u = 1) under these posteriors (labeled EV for
expected value). The dashed horizontal line indicates the expected value for the optimal
plan, to which the model obviously converges.
A key empirical assay for purposive behavior involves outcome devaluation. Here, actions
yielding a previously valued outcome are abandoned after the incentive value of the outcome
is reduced, for example by pairing with an aversive event (e.g., [4]). To simulate this within
the binary choice scenario just described, we reduced to zero the reward value of the food
yielded by the left lever (fL), by making the appropriate change to p(u|fL). This yielded a
reversal in lever choice (Fig. 2B).
Another signature of purposive actions is that they are abandoned when their causal
connection with rewarding outcomes is removed (contingency degradation, see [4]). We
simulated this by starting with the model from Fig. 2A and changing conditional
probabilities at s for t=2 to reflect a decoupling of the left action from the fL outcome. The
resulting behavior is shown in Fig. 2C.
Fig 2. Simulation results, binary choice.
3.2 Stochastic outcomes
A critical aspect of the present modeling paradigm is that it yields reward-maximizing
choices in stochastic domains, a property that distinguishes it from some other recent
approaches using graphical models to do planning (e.g., [9]). To illustrate, we used the
architecture in Figure 1 (left) to simulate a choice between two fair coins. A "left" coin
yields $1 for heads, $0 for tails; a "right" coin $2 for heads but for tails a $3 loss. As
illustrated in Fig. 2D, the model maximizes expected value by opting for the left coin.
Fig 3. Simulation results, two-step sequential choice.
3.3 Sequential decision
Here, we adopt the two-step T-maze scenario used by [24] (Fig. 3A). Representing the task
contingencies in a graphical model based on the template from Fig 1 (right), and using the
reward values indicated in Fig. 3A, yields the choice behavior shown in Figure 3B.
Following [24], a shift in motivational state from hunger to thirst can be represented in the
graphical model by changing the reward function (R(cheese) = 2, R(X) = 0, R(water) = 4,
R(carrots) = 1). Imposing this change at the level of the u variables yields the choice
behavior shown in Fig. 3C. The model can also be used to simulate effort-based decision.
Starting with the scenario in Fig. 2A, we simulated the insertion of an effort-demanding
scalable barrier at S2 (R(S2) = -2) by making appropriate changes to p(u|s). The resulting
behavior is shown in Fig. 3D.
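The same iterative loop extends to the sequential case by scoring each candidate plan with the global utility of Eq. (2). The sketch below enumerates the joint two-step plans for the T-maze of Fig. 3A under the thirst rewards quoted above (water = 4, cheese = 2, carrots = 1, X = 0); for clarity it iterates over the joint plan distribution rather than over the separate per-state policy nodes used in the full model, and the branch layout is our assumption about the maze.

import numpy as np

rewards = np.array([[2.0, 0.0],   # go left, then: cheese or X
                    [4.0, 1.0]])  # go right, then: water or carrots
p_u_leaf = 0.5 * (rewards / np.abs(rewards).max() + 1.0)  # Eq. (1)
p_uG = 0.5 * (0.5 + p_u_leaf)     # Eq. (2): average of mid-state (R = 0) and leaf utilities

prior = np.full((2, 2), 0.25)     # joint prior over (first action, second action)
for _ in range(25):
    post = p_uG * prior           # condition on u_G = 1
    prior = post / post.sum()
print(prior.round(3))             # mass concentrates on (right, water), the optimal plan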
A famous empirical demonstration of purposive control involves detour behavior. Using a
maze like the one shown in Fig. 4A, with a food reward placed at s5 , Tolman [2] found that
rats reacted to a barrier at location A by taking the upper route, but to a barrier at B by taking
the longer lower route. We simulated this experiment by representing the corresponding
transition and reward functions in a graphical model of the form shown in Fig. 1 (right),³
representing the insertion of barriers by appropriate changes to the transition function. The
resulting choice behavior at the critical juncture s2 is shown in Fig. 4.
Fig 4. Simulation results, detour behavior. B: No barrier. C: Barrier at A. D: Barrier at B.
Another classic empirical demonstration involves latent
learning. Blodgett [25] allowed rats to explore the maze
shown in Fig. 5. Later insertion of a food reward at s13
was followed immediately by dramatic reductions in the
running time, reflecting a reduction in entries into blind
alleys. We simulated this effect in a model based on the
template in Fig. 1 (right), representing the maze layout
via an appropriate transition function. In the absence of
a reward at s13, random choices occurred at each
intersection. However, setting R(s13 ) = 1 resulted in the
set of choices indicated by the heavier arrows in Fig. 5.
Fig 5. Latent learning.

4 Relation to previous work
Initial proposals for how to solve decision problems through probabilistic inference in
graphical models, including the idea of encoding reward as the posterior probability of a
random utility variable, were put forth by Cooper [23]. Related ideas were presented by
Shachter and Peot [12], including the use of nodes that integrate information from multiple
utility nodes. More recently, Attias [11] and Verma and Rao [9] have used graphical models
to solve shortest-path problems, leveraging probabilistic representations of rewards, though
not in a way that guaranteed convergence on optimal (reward maximizing) plans. More
closely related to the present research is work by Toussaint and Storkey [10], employing the
EM algorithm. The iterative approach we have introduced here has a certain resemblance to
the EM procedure, which becomes evident if one views the policy variables in our models as
parameters on the mapping from states to actions. It seems possible that there may be a
formal equivalence between the algorithm we have proposed and the one reported by [10].
As a cognitive and neuroscientific proposal, the present work bears a close relation to recent
work by Hasselmo [6], addressing the prefrontal computations underlying goal-directed
action selection (see also [7]). The present efforts are tied more closely to normative
principles of decision-making, whereas the work in [6] is tied more closely to the details of
neural circuitry. In this respect, the two approaches may prove complementary, and it will
be interesting to further consider their interrelations.
³ In this simulation and the next, the set of states associated with each state node was limited to the set of reachable states for the relevant time-step, assuming an initial state of s1.
Acknowledgments
Thanks to Andrew Ledvina, David Blei, Yael Niv, Nathaniel Daw, and Francisco Pereira for
useful comments.
References
[1] Hull, C.L., Principles of Behavior. 1943, New York: Appleton-Century.
[2] Tolman, E.C., Purposive Behavior in Animals and Men. 1932, New York: Century.
[3] Dickinson, A., Actions and habits: the development of behavioral autonomy. Philosophical
Transactions of the Royal Society (London), Series B, 1985. 308: p. 67-78.
[4] Balleine, B.W. and A. Dickinson, Goal-directed instrumental action: contingency and incentive
learning and their cortical substrates. Neuropharmacology, 1998. 37: p. 407-419.
[5] Daw, N.D., Y. Niv, and P. Dayan, Uncertainty-based competition between prefrontal and striatal
systems for behavioral control. Nature Neuroscience, 2005. 8: p. 1704-1711.
[6] Hasselmo, M.E., A model of prefrontal cortical mechanisms for goal-directed behavior. Journal of
Cognitive Neuroscience, 2005. 17: p. 1115-1129.
[7] Schmajuk, N.A. and A.D. Thieme, Purposive behavior and cognitive mapping. A neural network
model. Biological Cybernetics, 1992. 67: p. 165-174.
[8] Tatman, J.A. and R.D. Shachter, Dynamic programming and influence diagrams. IEEE
Transactions on Systems, Man and Cybernetics, 1990. 20: p. 365-379.
[9] Verma, D. and R.P.N. Rao. Planning and acting in uncertain enviroments using probabilistic
inference. in IEEE/RSJ International Conference on Intelligent Robots and Systems. 2006.
[10] Toussaint, M. and A. Storkey. Probabilistic inference for solving discrete and continuous state
markov decision processes. in Proceedings of the 23rd International Conference on Machine
Learning. 2006. Pittsburgh, PA.
[11] Attias, H. Planning by probabilistic inference. in Proceedings of the 9th Int. Workshop on
Artificial Intelligence and Statistics. 2003.
[12] Shachter, R.D. and M.A. Peot. Decision making using probabilistic inference methods. in
Uncertainty in artificial intelligence: Proceedings of the Eighth Conference (1992). 1992. Stanford
University: M. Kaufmann.
[13] Chater, N., J.B. Tenenbaum, and A. Yuille, Probabilistic models of cognition: conceptual
foundations. Trends in Cognitive Sciences, 2006. 10(7): p. 287-291.
[14] Doya, K., et al., eds. The Bayesian Brain: Probabilistic Approaches to Neural Coding. 2006, MIT
Press: Cambridge, MA.
[15] Miller, E.K. and J.D. Cohen, An integrative theory of prefrontal cortex function. Annual Review
of Neuroscience, 2001. 24: p. 167-202.
[16] Asaad, W.F., G. Rainer, and E.K. Miller, Task-specific neural activity in the primate prefrontal
cortex. Journal of Neurophysiology, 2000. 84: p. 451-459.
[17] Rolls, E.T., The functions of the orbitofrontal cortex. Brain and Cognition, 2004. 55: p. 11-29.
[18] Padoa-Schioppa, C. and J.A. Assad, Neurons in the orbitofrontal cortex encode economic value.
Nature, 2006. 441: p. 223-226.
[19] Gopnik, A., et al., A theory of causal learning in children: causal maps and Bayes nets.
Psychological Review, 2004. 111: p. 1-31.
[20] Hamilton, A.F.d.C. and S.T. Grafton, Action outcomes are represented in human inferior
frontoparietal cortex. Cerebral Cortex, 2008. 18: p. 1160-1168.
[21] Johnson, A., M.A.A. van der Meer, and D.A. Redish, Integrating hippocampus and striatum in
decision-making. Current Opinion in Neurobiology, 2008. 17: p. 692-697.
[22] Jensen, F.V., Bayesian Networks and Decision Graphs. 2001, New York: Springer Verlag.
[23] Cooper, G.F. A method for using belief networks as influence diagrams. in Fourth Workshop on
Uncertainty in Artificial Intelligence. 1988. University of Minnesota, Minneapolis.
[24] Niv, Y., D. Joel, and P. Dayan, A normative perspective on motivation. Trends in Cognitive
Sciences, 2006. 10: p. 375-381.
[25] Blodgett, H.C., The effect of the introduction of reward upon the maze performance of rats.
University of California Publications in Psychology, 1929. 4: p. 113-134.
Nonlinear causal discovery with additive noise models
Patrik O. Hoyer
University of Helsinki
Finland
Dominik Janzing
MPI for Biological Cybernetics
Tübingen, Germany
Joris Mooij
MPI for Biological Cybernetics
Tübingen, Germany
Bernhard Schölkopf
MPI for Biological Cybernetics
Tübingen, Germany
Jonas Peters
MPI for Biological Cybernetics
Tübingen, Germany
Abstract
The discovery of causal relationships between a set of observed variables is a fundamental problem in science. For continuous-valued data linear acyclic causal
models with additive noise are often used because these models are well understood and there are well-known methods to fit them to data. In reality, of course,
many causal relationships are more or less nonlinear, raising some doubts as to
the applicability and usefulness of purely linear methods. In this contribution we
show that the basic linear framework can be generalized to nonlinear models. In
this extended framework, nonlinearities in the data-generating process are in fact a
blessing rather than a curse, as they typically provide information on the underlying causal system and allow more aspects of the true data-generating mechanisms
to be identified. In addition to theoretical results we show simulations and some
simple real data experiments illustrating the identification power provided by nonlinearities.
1 Introduction
Causal relationships are fundamental to science because they enable predictions of the consequences
of actions [1]. While controlled randomized experiments constitute the primary tool for identifying
causal relationships, such experiments are in many cases either unethical, too expensive, or technically impossible. The development of causal discovery methods to infer causal relationships from
uncontrolled data constitutes an important current research topic [1, 2, 3, 4, 5, 6, 7, 8]. If the observed data is continuous-valued, methods based on linear causal models (aka structural equation
models) are commonly applied [1, 2, 9]. This is not necessarily because the true causal relationships
are really believed to be linear, but rather it reflects the fact that linear models are well understood
and easy to work with. A standard approach is to estimate a so-called Markov equivalence class of
directed acyclic graphs (all representing the same conditional independencies) from the data [1, 2, 3].
For continuous variables, the independence tests often assume linear models with additive Gaussian
noise [2]. Recently, however, it has been shown that for linear models, non-Gaussianity in the data
can actually aid in distinguishing the causal directions and allow one to uniquely identify the generating graph under favourable conditions [7]. Thus the practical case of non-Gaussian data which
long was considered a nuisance turned out to be helpful in the causal discovery setting.
In this contribution we show that nonlinearities can play a role quite similar to that of non-Gaussianity: When causal relationships are nonlinear it typically helps break the symmetry between
the observed variables and allows the identification of causal directions. As Friedman and Nachman have pointed out [10], non-invertible functional relationships between the observed variables
can provide clues to the generating causal model. However, we show that the phenomenon is much
more general; for nonlinear models with additive noise almost any nonlinearities (invertible or not)
will typically yield identifiable models. Note that other methods to select among Markov equivalent
DAGs [11, 8] have (so far) mainly focussed on mixtures of discrete and continuous variables.
In the next section, we start by defining the family of models under study, and then, in Section 3
we give theoretical results on the identifiability of these models from non-interventional data. We
describe a practical method for inferring the generating model from a sample of data vectors in
Section 4, and show its utility in simulations and on real data (Section 5).
2 Model definition
We assume that the observed data has been generated in the following way: Each observed variable
xi is associated with a node i in a directed acyclic graph G, and the value of xi is obtained as a
function of its parents in G, plus independent additive noise ni , i.e.
xi := fi(xpa(i)) + ni    (1)
where fi is an arbitrary function (possibly different for each i), xpa(i) is a vector containing the
elements xj such that there is an edge from j to i in the DAG G, the noise variables ni may
have arbitrary probability densities pni(ni), and the noise variables are jointly independent, that
is pn(n) = ∏i pni(ni), where n denotes the vector containing the noise variables ni. Our data then
consists of a number of vectors x sampled independently, each using G, the same functions fi , and
the ni sampled independently from the same densities pni (ni ).
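As a concrete illustration, data from this model can be sampled by visiting the vertices of G in topological order. The sketch below is ours: the three-vertex chain, the functions and the noise densities are arbitrary choices, not fixed by the model.

    import numpy as np

    rng = np.random.default_rng(0)
    m = 500  # number of i.i.d. data vectors

    # small DAG x1 -> x2 -> x3; each xi := fi(x_pa(i)) + ni as in Equation (1)
    x1 = rng.normal(size=m)                              # root vertex: pure noise
    x2 = np.tanh(2.0 * x1) + 0.3 * rng.normal(size=m)    # Gaussian noise
    x3 = x2 ** 2 + rng.uniform(-0.5, 0.5, size=m)        # uniform noise

    data = np.column_stack([x1, x2, x3])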
Note that this model includes the special case when all the fi are linear and all the pni are Gaussian,
yielding the standard linear–Gaussian model family [2, 3, 9]. When the functions are linear but the
densities pni are non-Gaussian we obtain the linear–non-Gaussian models described in [7].
The goal of causal discovery is, given the data vectors, to infer as much as possible about the generating mechanism; in particular, we seek to infer the generating graph G. In the next section we
discuss the prospects of this task in the theoretical case where the joint distribution px (x) of the
observed data can be estimated exactly. Later, in Section 4, we experimentally tackle the practical
case of a finite-size data sample.
3 Identifiability
Our main theoretical results concern the simplest non-trivial graph: the case of two variables. The
experimental results will, however, demonstrate that the basic principle works even in the general
case of N variables.
Figure 1 illustrates the basic identifiability principle for the two-variable model. Denoting the two
variables x and y, we are considering the generative model y := f (x) + n where x and n are
[Figure 1 plots omitted: panels (a)–(c) show the joint density p(x, y) and the conditionals p(y | x) and p(x | y) for the linear case; panels (d)–(f) show the same for the nonlinear case; panel (g) shows a joint density that admits a backward model. See the caption below.]
Figure 1: Identification of causal direction based on constancy of conditionals. See main text for
a detailed explanation of (a)–(f). (g) shows an example of a joint density p(x, y) generated by a
causal model x → y with y := f(x) + n where f is nonlinear, the supports of the densities px(x)
and pn(n) are compact regions, and the function f is constant on each connected component of the
support of px. The support of the joint density is now given by the two gray squares. Note that the
input distribution px, the noise distribution pn and f can in fact be chosen such that the joint density
is symmetrical with respect to the two variables, i.e. p(x, y) = p(y, x), making it obvious that there
will also be a valid backward model.
both Gaussian and statistically independent. In panel (a) we plot the joint density p(x, y) of the
observed variables, for the linear case of f (x) = x. As a trivial consequence of the model, the
conditional density p(y | x) has identical shape for all values of x and is simply shifted by the
function f (x); this is illustrated in panel (b). In general, there is no reason to believe that this
relationship would also hold for the conditionals p(x | y) for different values of y but, as is well
known, for the linear–Gaussian model this is actually the case, as illustrated in panel (c). Panels (d)–(f)
show the corresponding joint and conditional densities for the corresponding model with a nonlinear
function f(x) = x + x³. Notice how the conditionals p(x | y) look different for different values
of y, indicating that a reverse causal model of the form x := g(y) + ñ (with y and ñ statistically
independent) would not be able to fit the joint density. As we will show in this section, this will in
fact typically be the case, however, not always.
To see the latter, we first show that there exist models other than the linear–Gaussian and the independent case which admit both a forward x → y and a backward x ← y model. Panel (g) of
Figure 1 presents a nonlinear functional model with additive non-Gaussian noise and non-Gaussian
input distributions that nevertheless admits a backward model. The functions and probability densities can be chosen to be (arbitrarily many times) differentiable. Note that the example of panel
(g) in Figure 1 is somewhat artificial: p has compact support, and x, y are independent inside the
connected components of the support. Roughly speaking, the nonlinearity of f does not matter since
it occurs where p is zero, an artificial situation which is avoided by the requirement that from now
on, we will assume that all probability densities are strictly positive. Moreover, we assume that all
functions (including densities) are three times differentiable. In this case, the following theorem
shows that for generic choices of f , px (x), and pn (n), there exists no backward model.
Theorem 1 Let the joint probability density of x and y be given by
p(x, y) = pn(y − f(x)) px(x) ,    (2)
where pn, px are probability densities on R. If there is a backward model of the same form, i.e.,
p(x, y) = pñ(x − g(y)) py(y) ,    (3)
then, denoting ν := log pn and ξ := log px, the triple (f, px, pn) must satisfy the following differential equation for all x, y with ν''(y − f(x)) f'(x) ≠ 0:
ξ''' = ξ''(−ν'''f'/ν'' + f''/f') − 2ν''f''f' + ν'f''' + ν'ν'''f''f'/ν'' − ν'(f'')²/f' ,    (4)
where we have skipped the arguments y − f(x), x, and x for ν, ξ, and f and their derivatives,
respectively. Moreover, if for a fixed pair (f, ν) there exists y ∈ R such that ν''(y − f(x)) f'(x) ≠ 0
for all but a countable set of points x ∈ R, the set of all px for which p has a backward model is
contained in a 3-dimensional affine space.
Loosely speaking, the statement that the differential equation for ξ has a 3-dimensional space of
solutions (while a priori, the space of all possible log-marginals ξ is infinite dimensional) amounts
to saying that in the generic case, our forward model cannot be inverted.
A simple corollary is that if both the marginal density px(x) and the noise density pn(y − f(x)) are
Gaussian then the existence of a backward model implies linearity of f:
Corollary 1 Assume that ν''' = ξ''' = 0 everywhere. If a backward model exists, then f is linear.
The proofs of Theorem 1 and Corollary 1 are provided in the Appendix.
Finally, we note that even when f is linear and pn and px are non-Gaussian, although a linear backward model has previously been ruled out [7], there exist special cases where there is a nonlinear
backward model with independent additive noise. One such case is when f(x) = −x and px and
pn are Gumbel distributions: px(x) = exp(−x − exp(−x)) and pn(n) = exp(−n − exp(−n)).
Then taking py(y) = exp(−y − 2 log(1 + exp(−y))), pñ(ñ) = exp(−2ñ − exp(−ñ)) and
g(y) = log(1 + exp(−y)) one obtains p(x, y) = pn(y − f(x)) px(x) = pñ(x − g(y)) py(y).
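This identity is easy to check numerically; the grid of evaluation points below is an arbitrary choice.

    import numpy as np

    x = np.linspace(-2.0, 3.0, 7)[:, None]   # column of x values
    y = np.linspace(-3.0, 2.0, 7)[None, :]   # row of y values

    # forward factorization p_n(y - f(x)) p_x(x) with f(x) = -x
    p_x = np.exp(-x - np.exp(-x))                      # Gumbel density
    p_n = lambda n: np.exp(-n - np.exp(-n))
    forward = p_n(y + x) * p_x

    # backward factorization with g(y) = log(1 + exp(-y))
    g = np.log(1.0 + np.exp(-y))
    p_y = np.exp(-y - 2.0 * np.log(1.0 + np.exp(-y)))
    p_nt = lambda n: np.exp(-2.0 * n - np.exp(-n))     # backward noise density
    backward = p_nt(x - g) * p_y

    assert np.allclose(forward, backward)   # both factorizations agree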
Although the above results strictly only concern the two-variable case, there are strong reasons to
believe that the general argument also holds for larger models. In this brief contribution we do not
pursue any further theoretical results, rather we show empirically that the estimation principle can
be extended to networks involving more than two variables.
4 Model estimation
Section 3 established for the two-variable case that given knowledge of the exact densities, the true
model is (in the generic case) identifiable. We now consider practical estimation methods which
infer the generating graph from sample data.
Again, we begin by considering the case of two observed scalar variables x and y. Our basic method
is straightforward: First, test whether x and y are statistically independent. If they are not, we
continue as described in the following manner: We test whether a model y := f (x) + n is consistent
with the data, simply by doing a nonlinear regression of y on x (to get an estimate f̂ of f), calculating
the corresponding residuals n̂ = y − f̂(x), and testing whether n̂ is independent of x. If so, we
accept the model y := f(x) + n; if not, we reject it. We then similarly test whether the reverse
model x := g(y) + n fits the data.
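A minimal sketch of this procedure follows. For simplicity it substitutes a Nadaraya–Watson kernel smoother for the Gaussian-process regression used later in the paper, and a permutation-based HSIC test for the gamma approximation; the bandwidths, the number of permutations and the significance level are illustrative assumptions.

    import numpy as np

    def rbf_gram(v, h):
        # Gaussian kernel Gram matrix of a one-dimensional sample
        d2 = (v[:, None] - v[None, :]) ** 2
        return np.exp(-d2 / (2.0 * h ** 2))

    def hsic_pvalue(a, b, n_perm=200, seed=0):
        # biased HSIC statistic trace(K H L H) / m^2 with median-heuristic
        # bandwidths; the null is approximated by permuting one sample
        rng, m = np.random.default_rng(seed), len(a)
        ha = np.median(np.abs(a[:, None] - a[None, :])) + 1e-12
        hb = np.median(np.abs(b[:, None] - b[None, :])) + 1e-12
        K, L = rbf_gram(a, ha), rbf_gram(b, hb)
        H = np.eye(m) - 1.0 / m
        stat = np.trace(K @ H @ L @ H) / m ** 2
        null = [np.trace(K @ H @ L[np.ix_(p, p)] @ H) / m ** 2
                for p in (rng.permutation(m) for _ in range(n_perm))]
        return float(np.mean(np.array(null) >= stat))

    def residuals(cause, effect, h=0.5):
        # Nadaraya-Watson estimate of E[effect | cause], then subtract it
        W = rbf_gram(cause, h)
        return effect - W @ effect / W.sum(axis=1)

    def consistent(cause, effect, level=0.02):
        # accept effect := f(cause) + n iff residuals look independent of cause
        return hsic_pvalue(cause, residuals(cause, effect)) > level

Running consistent(x, y) and consistent(y, x) then produces one of the outcomes described below; exactly one accepted direction is the informative case.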
The above procedure will result in one of several possible scenarios. First, if x and y are deemed
mutually independent we infer that there is no causal relationship between the two, and no further
analysis is performed. On the other hand, if they are dependent but both directional models are
accepted we conclude that either model may be correct but we cannot infer it from the data. A
more positive result is when we are able to reject one of the directions and (tentatively) accept the
other. Finally, it may be the case that neither direction is consistent with the data, in which case we
conclude that the generating mechanism is more complex and cannot be described using this model.
This procedure could be generalized to an arbitrary number N of observed variables, in the following
way: For each DAG Gi over the observed variables, test whether it is consistent with the data by
constructing a nonlinear regression of each variable on its parents, and subsequently testing whether
the resulting residuals are mutually independent. If any independence test is rejected, Gi is rejected.
On the other hand, if none of the independence tests are rejected, Gi is consistent with the data.
The above procedure is obviously feasible only for very small networks (roughly N ≤ 7 or so) and
also suffers from the problem of multiple hypothesis testing; an important future improvement would
be to take this properly into account. Furthermore, the above algorithm returns all DAGs consistent
with the data, including all those for which consistent subgraphs exist. Our current implementation
removes any such unnecessarily complex graphs from the output.
The selection of the nonlinear regressor and of the particular independence tests is not constrained.
Any prior information on the types of functional relationships or the distributions of the noise should
optimally be utilized here. In our implementation, we perform the regression using Gaussian Processes [12] and the independence tests using kernel methods [13]. Note that one must take care to
avoid overfitting, as overfitting may lead one to falsely accept models which should be rejected.
5 Experiments
To show the ability of our method to find the correct model when all the assumptions hold we have
applied our implementation to a variety of simulated and real data.
For the regression, we used the GPML code from [14] corresponding to [12], using a Gaussian kernel
and independent Gaussian noise, optimizing the hyperparameters for each regression individually.1
In principle, any regression method can be used; we have verified that our results do not depend
significantly on the choice of the regression method by comparing with ν-SVR [15] and with thin-plate
a Gaussian kernel, where we used the gamma distribution as an approximation for the distribution
of the HSIC under the null hypothesis of independence in order to calculate the p-value of the test
result.
Simulations. The main results for the two-variable case are shown in Figure 2. We simulated data
using the model y = x + bx³ + n; the random variables x and n were sampled from a Gaussian
distribution and their absolute values were raised to the power q while keeping the original sign.
¹The assumption of Gaussian noise is somewhat inconsistent with our general setting where the residuals
are allowed to have any distribution (we even prefer the noise to be non-Gaussian); in practice however, the
regression yields acceptable results as long as the noise is sufficiently similar to Gaussian noise. In case of
significant outliers, other regression methods may yield better results.
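A sketch of this data-generating process; the sign-preserving power transform is what controls the non-Gaussianity.

    import numpy as np

    def signed_power(v, q):
        # raise |v| to the power q while keeping the original sign
        return np.sign(v) * np.abs(v) ** q

    def simulate(b, q, m=300, seed=0):
        rng = np.random.default_rng(seed)
        x = signed_power(rng.normal(size=m), q)   # q = 1: Gaussian input
        n = signed_power(rng.normal(size=m), q)   # q > 1 / q < 1: super-/sub-Gaussian
        return x, x + b * x ** 3 + n              # b = 0: linear case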
[Figure 2 plots omitted: panel (a) shows p_accept against q for b = 0, panel (b) shows p_accept against b for q = 1, each with curves for the correct and the reverse model. See the caption below.]
Figure 2: Results of simulations (see main text for details): (a) The proportion of times the forward
and the reverse model were accepted, paccept , as a function of the non-Gaussianity parameter q (for
b = 0), and (b) as a function of the nonlinearity parameter b (for q = 1).
The parameter b controls the strength of the nonlinearity of the function, b = 0 corresponding to the
linear case. The parameter q controls the non-Gaussianity of the noise: q = 1 gives a Gaussian, while
q > 1 and q < 1 produce super-Gaussian and sub-Gaussian distributions, respectively. We used 300
(x, y) samples for each trial and used a significance level of 2% for rejecting the null hypothesis of
independence of residuals and cause. For each b value (or q value) we repeated the experiment 100
times in order to estimate the acceptance probabilities. Panel (a) shows that our method can solve the
well-known linear but non-Gaussian special case [7]. By plotting the acceptance probability of the
correct and the reverse model as a function of non-Gaussianity we can see that when the distributions
are sufficiently non-Gaussian the method is able to infer the correct causal direction. Then, in panel
(b) we similarly demonstrate that we can identify the correct direction for the Gaussian marginal and
Gaussian noise model when the functional relationship is sufficiently nonlinear. Note in particular,
that the model is identifiable also for positive b in which case the function is invertible, indicating
that non-invertibility is not a necessary condition for identification.
We also did experiments for 4 variables w, x, y and z with a diamond-like causal structure (edges
w → x, w → y, x → z and y → z). We took w ∼ U(−3, 3), x = w² + nx with nx ∼ U(−1, 1),
y = 4√|w| + ny with ny ∼ U(−1, 1), and z = 2 sin x + 2 sin y + nz with nz ∼ U(−1, 1).
We sampled 500 (w, x, y, z) tuples from the model and applied the algorithm described in Section 4
in order to reconstruct the DAG structure. The simplest DAG that was consistent with the data (with
significance level 2% for each test) turned out to be precisely the true causal structure. All five other
DAGs for which the true DAG is a subgraph were also consistent with the data.
Real-world data. Of particular empirical interest is how well the proposed method performs on
real world datasets for which the assumptions of our method might only hold approximately. Due
to space constraints we only discuss three real world datasets here.
The first dataset, the "Old Faithful" dataset [17], contains data about the duration of an eruption and
the time interval between subsequent eruptions of the Old Faithful geyser in Yellowstone National
Park, USA. Our method obtains a p-value of 0.5 for the (forward) model "current duration causes
next interval length" and a p-value of 4.4 × 10⁻⁹ for the (backward) model "next interval length
causes current duration". Thus, we accept the model where the time interval between the current
and the next eruption is a function of the duration of the current eruption, but reject the reverse
model. This is in line with the chronological ordering of these events. Figure 3 illustrates the data,
the forward and backward fit and the residuals for both fits. Note that for the forward model, the
residuals seem to be independent of the duration, whereas for the backward model, the residuals are
clearly dependent on the interval length. Time-shifting the data by one time step, we obtain for the
(forward) model "current interval length causes next duration" a p-value smaller than 10⁻¹⁵ and for
the (backward) model "next duration causes current interval length" a p-value of 1.8 × 10⁻⁸.
Hence, our simple nonlinear model with independent additive noise is not consistent with the data
in either direction.
The second dataset, the "Abalone" dataset from the UCI ML repository [18], contains measurements
of the number of rings in the shell of abalone (a group of shellfish), which indicate their age, and the
length of the shell. Figure 4 shows the results for a subsample of 500 datapoints. The correct model
"age causes length" leads to a p-value of 0.19, while the reverse model "length causes age" comes
[Figure 3 plots omitted; see the caption below.]
Figure 3: The Old Faithful Geyser data: (a) forward fit corresponding to "current duration causes
next interval length"; (b) residuals for forward fit; (c) backward fit corresponding to "next interval
length causes current duration"; (d) residuals for backward fit.
[Figure 4 plots omitted; see the caption below.]
Figure 4: Abalone data: (a) forward fit corresponding to "age (rings) causes length"; (b) residuals for
forward fit; (c) backward fit corresponding to "length causes age (rings)"; (d) residuals for backward
fit.
[Figure 5 plots omitted; see the caption below.]
Figure 5: Altitude–temperature data. (a) forward fit corresponding to "altitude causes temperature";
(b) residuals for forward fit; (c) backward fit corresponding to "temperature causes altitude"; (d)
residuals for backward fit.
with p < 10⁻¹⁵. This is in accordance with our intuition. Note that our method favors the correct
direction although the assumption of independent additive noise is only approximately correct here;
indeed, the variance of the length is dependent on age.
Finally, we assay the method on a simple example involving two observed variables: the altitude
above sea level (in meters) and the local yearly average outdoor temperature in centigrade, for 349
weather stations in Germany, collected over the time period of 1961–1990 [19]. The correct model
"altitude causes temperature" leads to p = 0.017, while "temperature causes altitude" can clearly be
rejected (p = 8 × 10⁻¹⁵), in agreement with common understanding of causality in this case. The
results are shown in Figure 5.
6 Conclusions
In this paper, we have shown that the linear–non-Gaussian causal discovery framework can be generalized to admit nonlinear functional dependencies as long as the noise on the variables remains
additive. In this approach nonlinear relationships are in fact helpful rather than a hindrance, as they
tend to break the symmetry between the variables and allow the correct causal directions to be identified. Although there exist special cases which admit reverse models we have shown that in the
generic case the model is identifiable. We have illustrated our method on both simulated and real
world datasets.
Acknowledgments
We thank Kun Zhang for pointing out an error in the original manuscript. This work was supported
in part by the IST Programme of the European Community, under the PASCAL2 Network of Excellence, IST-2007-216886. P.O.H. was supported by the Academy of Finland and by University of
Helsinki Research Funds.
A Proof of Theorem 1
Set
π(x, y) := log p(x, y) = ν(y − f(x)) + ξ(x) ,    (5)
and ν̃ := log pñ, η := log py. If eq. (3) holds, then π(x, y) = ν̃(x − g(y)) + η(y), implying
∂²π/(∂x∂y) = −ν̃''(x − g(y)) g'(y)   and   ∂²π/∂x² = ν̃''(x − g(y)) .
We conclude
(∂/∂x) [ (∂²π/∂x²) / (∂²π/(∂x∂y)) ] = 0 .    (6)
Using eq. (5) we obtain
∂²π/(∂x∂y) = −ν''(y − f(x)) f'(x) ,    (7)
and
∂²π/∂x² = (∂/∂x)(−ν'(y − f(x)) f'(x) + ξ'(x)) = ν''(f')² − ν'f'' + ξ'' ,    (8)
where we have dropped the arguments for convenience. Combining eqs. (7) and (8) yields
(∂/∂x) [ (∂²π/∂x²) / (∂²π/(∂x∂y)) ]
  = −2f'' + ν'f'''/(ν''f') + ν'ν'''f''/(ν'')² − ν'(f'')²/(ν''(f')²)
    − ξ'''/(ν''f') − ξ''ν'''/(ν'')² + ξ''f''/(ν''(f')²) .
Due to eq. (6) this expression must vanish and we obtain DE (4) by term reordering. Given f, ν, we
obtain for every fixed y a linear inhomogeneous DE for ξ:
ξ'''(x) = ξ''(x) G(x, y) + H(x, y) ,    (9)
where G and H are defined by
G := −ν'''f'/ν'' + f''/f'   and   H := −2ν''f''f' + ν'f''' + ν'ν'''f''f'/ν'' − ν'(f'')²/f' .
Setting z := ξ'' we have z'(x) = z(x) G(x, y) + H(x, y). Given that such a function z exists, it is
given by
z(x) = z(x₀) exp(∫_{x₀}^{x} G(x̃, y) dx̃) + ∫_{x₀}^{x} exp(∫_{x̃}^{x} G(x̂, y) dx̂) H(x̃, y) dx̃ .    (10)
Let y be fixed such that ν''(y − f(x)) f'(x) ≠ 0 holds for all but countably many x. Then z is
determined by z(x₀) since we can extend eq. (10) to the remaining points. The set of all functions
ξ satisfying the linear inhomogeneous DE (9) is a 3-dimensional affine space: once we have fixed
ξ(x₀), ξ'(x₀), ξ''(x₀) for some arbitrary point x₀, ξ is completely determined. Given fixed f and ν,
the set of all ξ admitting a backward model is contained in this subspace.
B Proof of Corollary 1
Similarly to how (6) was derived, under the assumption of the existence of a reverse model one can
derive
(∂²π/(∂x∂y)) · (∂/∂x)(∂²π/∂x²) = (∂²π/∂x²) · (∂/∂x)(∂²π/(∂x∂y)) .
Now using (7) and (8), we obtain
(−ν''f') (∂/∂x)(ν''(f')² − ν'f'' + ξ'') = (ν''(f')² − ν'f'' + ξ'') (∂/∂x)(−ν''f') ,
which reduces to
−2(ν''f')²f'' + ν''f'ν'f''' − ν''f'ξ''' = −ν'f''ν'''(f')² + ξ''ν'''(f')² + ν''ν'(f'')² − ν''f''ξ'' .
Substituting the assumptions ν''' = 0 and ξ''' = 0 (and hence ν'' = C everywhere with C ≠ 0, since
otherwise ν cannot be a proper log-density) yields
ν'(y − f(x)) · (f'f''' − (f'')²) = 2C(f')²f'' − f''ξ'' .
Since C ≠ 0 there exists an α such that ν'(α) = 0. Then, restricting ourselves to the submanifold
{(x, y) ∈ R² : y − f(x) = α} on which ν' = 0, we have
0 = f''(2C(f')² − ξ'') .
Therefore, for all x in the open set [f'' ≠ 0], we have (f'(x))² = ξ''/(2C) which is a constant, so
f'' = 0 on [f'' ≠ 0]: a contradiction. Therefore, f'' = 0 everywhere.
References
[1] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[2] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. Springer-Verlag, 1993. (2nd ed. MIT Press 2000).
[3] D. Geiger and D. Heckerman. Learning Gaussian networks. In Proc. of the 10th Annual Conference on Uncertainty in Artificial Intelligence, pages 235–243, 1994.
[4] D. Heckerman, C. Meek, and G. Cooper. A Bayesian approach to causal discovery. In C. Glymour and G. F. Cooper, editors, Computation, Causation, and Discovery, pages 141–166. MIT Press, 1999.
[5] T. Richardson and P. Spirtes. Automated discovery of linear feedback models. In C. Glymour and G. F. Cooper, editors, Computation, Causation, and Discovery, pages 253–304. MIT Press, 1999.
[6] R. Silva, R. Scheines, C. Glymour, and P. Spirtes. Learning the structure of linear latent variable models. Journal of Machine Learning Research, 7:191–246, 2006.
[7] S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. J. Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003–2030, 2006.
[8] X. Sun, D. Janzing, and B. Schölkopf. Distinguishing between cause and effect via kernel-based complexity measures for conditional probability densities. Neurocomputing, pages 1248–1256, 2008.
[9] K. A. Bollen. Structural Equations with Latent Variables. John Wiley & Sons, 1989.
[10] N. Friedman and I. Nachman. Gaussian process networks. In Proc. of the 16th Annual Conference on Uncertainty in Artificial Intelligence, pages 211–219, 2000.
[11] X. Sun, D. Janzing, and B. Schölkopf. Causal inference by choosing graphs with most plausible Markov kernels. In Proceedings of the 9th Int. Symp. on Art. Int. and Math., Fort Lauderdale, Florida, 2006.
[12] C. E. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[13] A. Gretton, R. Herbrich, A. Smola, O. Bousquet, and B. Schölkopf. Kernel methods for measuring independence. Journal of Machine Learning Research, 6:2075–2129, 2005.
[14] GPML code. http://www.gaussianprocess.org/gpml/code.
[15] B. Schölkopf, A. J. Smola, and R. Williamson. Shrinking the tube: A new support vector regression algorithm. In Advances in Neural Information Processing 11 (Proc. NIPS*1998). MIT Press, 1999.
[16] G. Wahba. Spline Models for Observational Data. Series in Applied Math., Vol. 59, SIAM, Philadelphia, 1990.
[17] A. Azzalini and A. W. Bowman. A look at some data on the Old Faithful Geyser. Applied Statistics, 39(3):357–365, 1990.
[18] A. Asuncion and D. J. Newman. UCI machine learning repository, 2007.
[19] Climate data collected by the Deutscher Wetterdienst. http://www.dwd.de/.
Fast Prediction on a Tree
Mark Herbster, Massimiliano Pontil, Sergio Rojas-Galeano
Department of Computer Science
University College London
Gower Street, London WC1E 6BT, England, UK
{m.herbster, m.pontil,s.rojas}@cs.ucl.ac.uk
Abstract
Given an n-vertex weighted tree with structural diameter S and a subset of m vertices, we present a technique to compute a corresponding m × m Gram matrix of
the pseudoinverse of the graph Laplacian in O(n + m² + mS) time. We discuss
the application of this technique to fast label prediction on a generic graph. We
approximate the graph with a spanning tree and then we predict with the kernel
perceptron. We address the approximation of the graph with either a minimum
spanning tree or a shortest path tree. The fast computation of the pseudoinverse
enables us to address prediction problems on large graphs. We present experiments on two web-spam classification tasks, one of which includes a graph with
400,000 vertices and more than 10,000,000 edges. The results indicate that the accuracy of our technique is competitive with previous methods using the full graph
information.
1 Introduction
Classification methods which rely upon the graph Laplacian (see [3, 20, 13] and references therein),
have proven to be useful for semi-supervised learning. A key insight of these methods is that unlabeled data can be used to improve the performance of supervised learners. These methods reduce
to the problem of labeling a graph whose vertices are associated to the data points and the edges to
the similarity between pairs of data points. The labeling of the graph can be achieved either in a
batch [3, 20] or in an online manner [13]. These methods can all be interpreted as different kernel
methods: ridge regression in the case of [3], minimal semi-norm interpolation in [20] or the perceptron algorithm in [13]. This computation scales in the worst case cubically with the quantity of
unlabeled data, which may prevent the use of these methods on large graphs.
In this paper, we propose a method to improve the computational complexity of Laplacian-based
learning algorithms. If an n-vertex tree is given, our method requires an O(n) initialization step and
after that any m × m block of the pseudoinverse of the Laplacian may be computed in O(m² + mS)
time, where S is the structural diameter of the tree. The pseudoinverse of the Laplacian may then
be used as a kernel for a variety of label prediction methods. If a generic graph is given, we first
approximate it with a tree and then run our method on the tree. The use of a minimum spanning tree
and shortest path tree is discussed.
It is important to note that prediction is also possible using directly the graph Laplacian, without
computing its pseudoinverse. For example, this may be achieved by solving a linear system of
equations [3, 20] involving the Laplacian, and a solution may be computed in O(|E| log^{O(1)} n)
time [18], where E is the edge set. However, computation via the graph kernel allows for multiple
prediction problems on the same graph to be computed more efficiently. The advantage is even more
striking if the data come sequentially and we need to predict in an online fashion.
To illustrate the advantage of our approach consider the case in which we are provided with a small
subset of ℓ labeled vertices of a large graph and we wish to predict the label of a different subset of
p vertices. Let m = ℓ + p and assume that m ≪ n (typically we will also have ℓ ≪ p). A practical
application is the problem of detecting ?spam? hosts in the internet. Although the number of hosts
in the internet is in the millions we may only need to detect spam hosts from some limited domain.
If the graph is a tree the total time required to predict with the kernel perceptron using our method
will be O(n + m² + mS). The promise of our technique is that, if m + S ≪ n and a tree is given,
it requires O(n) time versus O(n³) for standard methods.
To the best of our knowledge this is the first paper which addresses the problem of fast prediction
in semi-supervised learning using tree graphs. Previous work has focused on special prediction
methods and graphs. The work in [5] presents a non-Laplacian-based method for predicting the
labeling of a tree, based on computing the exact probabilities of a Markov random field. The issue
of computation time is not addressed there. In the case of unbalanced bipartite graphs [15] presents
a method which significantly improves the computation time of the pseudoinverse to Θ(k²(n − k)),
where k is the size of a minority partition. Thus, in the case of a binary tree the computation is still
Θ(n³) time.
The paper is organized as follows. In Section 2 we review the notions which are needed in order
to present our technique in Section 3, concerning the fast computation of a tree graph kernel. In
Section 4 we address the issue of tree selection, commenting in particular on a potential advantage
of shortest path trees. In Section 5 we present the experimental results and draw our conclusions in
Section 6.
2 Background
In this paper any graph G is assumed to be connected, to have n vertices, and to have edge weights.
The set of vertices of G is denoted V = {1, . . . , n}. Let A = (Aij), i, j = 1, . . . , n, be the n × n symmetric
weight matrix of the graph, where Aij ≥ 0, and define the edge set E(G) := {(i, j) : Aij > 0, i < j}.
We say that G is a tree if it is connected and has n − 1 edges. The graph Laplacian
is the n × n matrix defined as G = D − A, where D is the diagonal matrix with i-th diagonal
element Dii = ∑_{j=1}^n Aij, the weighted degree of vertex i. Where it is not ambiguous, we will use
the notation G to denote both the graph G and the graph Laplacian and the notation T to denote
both a Laplacian of a tree and the tree itself. The Laplacian is positive semi-definite and induces
the semi-norm ‖w‖²_G := wᵀGw = ∑_{(i,j)∈E(G)} Aij (wi − wj)². The kernel associated with the
above semi-norm is G⁺, the pseudoinverse of matrix G, see e.g. [14] for a discussion. As the graph
is connected, it follows from the definition of the semi-norm that the null space of G is spanned by
the constant vector 1 only.
The analogy between graphs and networks of resistors plays an important role in this paper. That
is, the weighted graph may be seen as a network of resistors where edge (i, j) is a resistor with
resistance ρij = 1/Aij. Then the effective resistance rG(i, j) may be defined as the resistance
measured between vertex i and j in this network and may be calculated using Kirchhoff's circuit laws
or directly from G⁺ using the formula [16]
rG(i, j) = G⁺_ii + G⁺_jj − 2G⁺_ij .    (2.1)
The effective resistance is a metric distance on the graph [16], as are the geodesic and structural
distances. The structural distance between vertices i, j ∈ V is defined as
sG(i, j) := min {|P(i, j)| : P(i, j) ∈ 𝒫}, where 𝒫 is the set of all paths in G and P(i, j) is the set of edges
in a particular path from i to j. The geodesic distance, in turn, is defined as
dG(i, j) := min{∑_{(p,q)∈P(i,j)} ρpq : P(i, j) ∈ 𝒫}. The diameter is the maximum distance between any
two points on the graph; hence the resistance, structural and geodesic diameters are defined as
RG = max_{i,j∈V} rG(i, j), SG = max_{i,j∈V} sG(i, j), and DG = max_{i,j∈V} dG(i, j), respectively.
Note that, by Kirchhoff's laws, rG(i, j) ≤ dG(i, j) and, so, RG ≤ DG.
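For a small graph, formula (2.1) is cheap to check against a direct pseudoinverse (a sketch; the unit-weight 4-cycle is an arbitrary example):

    import numpy as np

    # weight matrix of a 4-cycle with unit edge weights
    A = np.zeros((4, 4))
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
        A[i, j] = A[j, i] = 1.0

    G = np.diag(A.sum(axis=1)) - A     # graph Laplacian G = D - A
    Gp = np.linalg.pinv(G)             # the kernel G^+

    def resistance(i, j):
        return Gp[i, i] + Gp[j, j] - 2.0 * Gp[i, j]   # equation (2.1)

    # adjacent vertices are joined by two edge-disjoint paths of lengths 1 and 3,
    # so r = (1 * 3)/(1 + 3) = 0.75, strictly below the geodesic distance 1
    assert np.isclose(resistance(0, 1), 0.75)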
3 Computing the Pseudoinverse of a Tree Laplacian Quickly
In this section we describe our method to compute the pseudoinverse of a tree.
3.1 Inverse Connectivity
Let us begin by noting that the effective resistance is a better measure of connectivity than the
geodesic distance, as for example if there are k edge disjoint paths of geodesic distance d between
two vertices, then the effective resistance is no more than d/k. Thus, the more paths, the closer the
vertices.
In the following, we will introduce three more global measures of connectivity built on top of the
effective resistance, which are useful for our computation below. The first quantity is the total
resistance Rtot = ∑_{i>j} rG(i, j), which is a measure of the inverse connectivity of the graph: the
smaller Rtot the more connected the graph. The second quantity is R(i) = ∑_{j=1}^n rG(i, j), which is
used as a measure of inverse centrality of vertex i [6, Def. 3] (see also [17]). The third quantity is
G⁺_ii, which provides an alternate notion of inverse centrality.
Summing both sides of equation (2.1) over j gives
R(i) = n G⁺_ii + ∑_{j=1}^n G⁺_jj ,    (3.1)
where we used the fact that ∑_{j=1}^n G⁺_ij = 0, which is true because the null space of G is spanned by
the constant vector. Summing again over i yields
Rtot = n ∑_{i=1}^n G⁺_ii ,    (3.2)
where we have used ∑_{i=1}^n R(i) = 2Rtot. Combining the last two equations we obtain
G⁺_ii = R(i)/n − Rtot/n² .    (3.3)
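Identities (3.1)–(3.3) can be verified numerically on a random tree (a sketch):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50

    # random tree: attach each vertex i >= 1 to a uniformly chosen earlier vertex
    A = np.zeros((n, n))
    for i in range(1, n):
        j = rng.integers(0, i)
        A[i, j] = A[j, i] = rng.uniform(0.5, 2.0)

    G = np.diag(A.sum(axis=1)) - A
    Gp = np.linalg.pinv(G)

    r = Gp.diagonal()[:, None] + Gp.diagonal()[None, :] - 2.0 * Gp  # all r_G(i, j)
    R = r.sum(axis=1)                        # inverse centralities R(i)
    Rtot = r[np.triu_indices(n, 1)].sum()    # total resistance

    assert np.allclose(R, n * Gp.diagonal() + Gp.trace())       # equation (3.1)
    assert np.isclose(Rtot, n * Gp.trace())                     # equation (3.2)
    assert np.allclose(Gp.diagonal(), R / n - Rtot / n ** 2)    # equation (3.3)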
3.2 Method
Throughout this section we assume that G is a tree with corresponding Laplacian matrix T. The
principle of the method to compute T+ is that, on a tree there is a unique path between any two
vertices and, so, the effective resistance is simply the sum of resistances along that path, see e.g. [16,
13] (for the same reason, on a tree the geodesic distance is the same as the resistance distance).
We assume that the root vertex is indexed as 1. The parent and the children of vertex i are denoted
by φ(i) and χ(i), respectively. The descendants of vertex i are denoted by
χ*(i) := χ(i) ∪ ⋃_{j∈χ(i)} χ*(j)  if χ(i) ≠ ∅ ,   and   χ*(i) := ∅  if χ(i) = ∅ .
We also let c(i) be the number of descendants of vertex i and i itself, that is, c(i) = 1 + |χ*(i)|.
The method is outlined as follows. We initially compute R(1), . . . , R(n) in O(n) time. This in turn
gives us Rtot = (1/2) ∑_{i=1}^n R(i) and G⁺_11, . . . , G⁺_nn via equation (3.3), also in O(n) time. As we shall
see, with these precomputed values, we may obtain off-diagonal elements G⁺_ij from equation (2.1)
by computing individually rT(i, j) in O(S_T) or an m × m block in O(m² + mS_T) time.
Initialization
We split the computation of the inverse centrality R(i) into two terms, namely R(i) = T(i) + S(i),
where T(i) and S(i) are the sums of the resistances of vertex i to each descendant and non-descendant,
respectively. That is,
T(i) = ∑_{j∈χ*(i)} rT(i, j) ,   S(i) = ∑_{j∉χ*(i)} rT(i, j) .
We compute c(i) and T(i), i = 1, . . . , n, with the following leaves-to-root recursions
c(i) := 1 + ∑_{j∈χ(i)} c(j)  if χ(i) ≠ ∅ , else c(i) := 1 ;
T(i) := ∑_{j∈χ(i)} (T(j) + ρij c(j))  if χ(i) ≠ ∅ , else T(i) := 0 ,
by computing c(1) and then T(1), caching the intermediate values. We next descend the tree, caching
each calculated S(i) with the root-to-leaves recursion
S(i) := S(φ(i)) + T(φ(i)) − T(i) + (n − 2c(i)) ρ_{i,φ(i)}  if i ≠ 1 , and S(1) := 0 .
It is clear that the time complexity of the above recursions is O(n).
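A sketch of the two recursions in Python; the representation (parent pointers with the root at index 0 and each edge resistance stored at the child) is our own convention.

    from collections import deque

    def init_tree(parent, rho):
        # parent[i]: parent of vertex i (parent[0] = -1 marks the root)
        # rho[i]: resistance of the edge (i, parent[i]); rho[0] is unused
        n = len(parent)
        children = [[] for _ in range(n)]
        for i in range(1, n):
            children[parent[i]].append(i)

        # breadth-first order from the root; reversed, it is leaves-to-root
        order, queue = [], deque([0])
        while queue:
            v = queue.popleft()
            order.append(v)
            queue.extend(children[v])

        c = [1] * n      # number of descendants of i plus i itself
        T = [0.0] * n    # sum of resistances from i to its descendants
        S = [0.0] * n    # sum of resistances from i to its non-descendants
        for v in reversed(order):                 # leaves-to-root pass
            for j in children[v]:
                c[v] += c[j]
                T[v] += T[j] + rho[j] * c[j]
        for v in order[1:]:                       # root-to-leaves pass
            p = parent[v]
            S[v] = S[p] + T[p] - T[v] + (n - 2 * c[v]) * rho[v]
        R = [T[v] + S[v] for v in range(n)]       # inverse centralities R(i)
        return c, T, S, R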
1. Input: {v1, . . . , vm} ⊆ V
2. Initialization: visited(all) = ∅
3. for i = 1, . . . , m do
4.   p = −1; c = vi; rT(c, c) = 0
5.   Repeat
6.     for w ∈ visited(c) \ ({p} ∪ χ*(p)) do
7.       rT(vi, w) = rT(w, vi) = rT(vi, c) + rT(c, w)
8.     end
9.     visited(c) = visited(c) ∪ {vi}
10.    p = c; c = φ(c)
11.    rT(vi, c) = rT(c, vi) = rT(vi, p) + ρ_{p,c}
12.  until ("p is the root")
13. end
Figure 1: Computing an m × m block of a tree Laplacian pseudoinverse.
Computing an m × m block of the Laplacian pseudoinverse
Our algorithm (see Figure 1) computes the effective resistance matrix of an m × m block, which
effectively gives the kernel (via equation (2.1)). The motivating idea is that a single effective resistance rT(i, j) is simply the sum of resistances along the path from i to j. It may be computed
by separately ascending the paths from i to the root and from j to the root in O(S_T) time and summing the
resistances along each edge that is in either the i-to-root or the j-to-root path but not in both. However,
we may amortize the computation of an m × m block to O(m² + mS_T) time, saving a factor of
min(m, S_T). This is realized by additionally caching the cumulative sums of resistances along the
path to the root during each ascent from a vertex.
We outline the algorithm in further detail as follows: for each vertex vi in the set Vm = {v1, . . . , vm}
we perform an ascent to the root (see line 3 in Figure 1). As we ascend, we cache each cumulative resistance (from the starting vertex vi to the current vertex c) along the path on the way to the root (line
11). If, while ascending from vi, we enter a vertex c which has previously been visited during the ascent from another vertex w (line 6), then we compute rT(vi, w) as rT(vi, c) + rT(c, w). For example,
during the ascent from vertex vk ∈ Vm to the root we will compute {rT(v1, vk), . . . , rT(vk, vk)}.
The computational complexity is obtained by noting that every ascent to the root requires O(S_T)
steps and along each ascent we must compute up to max(m, S_T) resistances. Thus, the total complexity is O(m² + mS_T), assuming that each step of the algorithm is efficiently implemented. For
this purpose, we give two implementation notes. First, each of the effective resistances computed
by the algorithm should be stored on the tree, preventing creation of an n × n matrix. When the
computation is completed, the desired m × m Gram matrix may then be directly computed by gathering the cached values via an additional set of ascents. Second, it should be ensured that the "for
loop" (line 6) is executed in Θ(|visited(c) \ ({p} ∪ χ*(p))|) time by a careful but straightforward
implementation of the visited predicate. Finally, this algorithm may be generalized to compute
a p × ℓ block in O(pℓ + (p + ℓ)S_T) time or to operate fully "online".
Let us return to the practical scenario described in the introduction, in which we wish to predict p
vertices of the tree based on ℓ labeled vertices. Let m = ℓ + p. By the above discussion, computation
of an m × m block of the kernel matrix T⁺ requires O(n + m² + mS_T) time. In many practical
applications m ≪ n and S_G will typically be no more than logarithmic in n, which leads to an
appealing O(n) time complexity.
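The sketch below implements the simpler unamortized variant: each pairwise resistance is obtained by ascending from both endpoints to the root, which already runs in O(n + m²S_T) time; the caching in Figure 1 removes the extra min(m, S_T) factor. It reuses the parent-pointer convention of the earlier sketch, and the Gram matrix entries then follow from equations (2.1) and (3.3).

    import numpy as np

    def ascent(v, parent, rho):
        # cumulative resistance from v to every vertex on the v-to-root path
        dist, d = {}, 0.0
        while v != -1:
            dist[v] = d
            if parent[v] != -1:
                d += rho[v]
            v = parent[v]
        return dist

    def block_resistances(V, parent, rho):
        # m x m matrix of pairwise effective resistances between vertices in V
        paths = [ascent(v, parent, rho) for v in V]
        m = len(V)
        r = np.zeros((m, m))
        for a in range(m):
            for b in range(a + 1, m):
                # the two root paths meet at the lowest common ancestor w,
                # the shared vertex closest to v_a; r = dist_a(w) + dist_b(w)
                common = paths[a].keys() & paths[b].keys()
                w = min(common, key=paths[a].get)
                r[a, b] = r[b, a] = paths[a][w] + paths[b][w]
        return r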
4 Tree Construction
In the previous discussion, we have considered that a tree has already been given. In the following, we assume that a graph G or a similarity function is given and the aim is to construct an
approximating tree. We will consider both the minimum spanning tree (MST) as a "best" in-norm
approximation; and the shortest path tree (SPT) as an approximation which maintains a mistake
bound [13] guarantee.
Given a graph with a "cost" on each edge, an MST is a connected n-vertex subgraph with n − 1
edges such that the total cost is minimized. In our set-up the cost of edge (i, j) is the resistance
ρij = 1/Aij; therefore, a minimum spanning tree of G solves the problem
min { ∑_{(i,j)∈E(T)} ρij : T ∈ 𝒯(G) } ,    (4.1)
where 𝒯(G) denotes the set of spanning trees of G. An MST is also a tree whose Laplacian best
approximates the Laplacian of the given graph according to the trace norm, that is, it solves the
problem
min { tr(G − T) : T ∈ 𝒯(G) } .    (4.2)
Indeed, we have tr(G − T) = ∑_{i,j=1}^n Aij − 2 ∑_{(i,j)∈E(T)} ρij⁻¹. Then, our claim that the problems
(4.1) and (4.2) have the same solution follows by noting that the edges in a minimum spanning
tree are invariant with respect to any strictly increasing function of the "costs" on the edges in the
original graph [8] and the function −ρ⁻¹ is increasing in ρ.
The above observation suggests another approximation criterion which we may consider for finding
a spanning tree. We may use the trace norm between the pseudoinverses of the Laplacians, rather
than the Laplacians themselves as in (4.2). This seems a more natural criterion, since our goal is to
approximate well the kernel (it is the kernel which is directly involved in the prediction problem). It
is interesting to note that the quantity tr(T⁺ − G⁺) is related to the total resistance. Specifically, we
have by equation (3.2) that tr(T⁺ − G⁺) = Rtot(T)/n − Rtot(G)/n. As noted in [10], the total resistance
is a convex function of the graph Laplacian. However, we do not know how to minimize Rtot(T)
over the set of spanning trees of G. We thus take a different route, which leads us to the notion of
shortest path trees. We choose a vertex i and look for a spanning tree which minimizes the inverse
centrality R(i) of vertex i, that is we solve the problem
min { R(i) : T ∈ 𝒯(G) } .    (4.3)
Note that R(i) is the contribution of vertex i to the total resistance of T and that, by equations (3.1)
and (3.2), R(i) = nT⁺_ii + Rtot/n. The above problem can then be interpreted as minimizing a trade-off
between inverse centrality of vertex i and inverse connectivity of the tree. In other words, (4.3)
encourages trees which are centered at i and, at the same time, have a small diameter. It is interesting
to observe that the solution of problem (4.3) is a shortest path tree (SPT) centered at vertex i, namely
a spanning tree for which the geodesic distance in "costs" is minimized from i to every other vertex
in the graph. This is because the geodesic distance is equivalent to the resistance distance on a tree
and any SPT of G is formed from a set of shortest paths connecting the root to any other vertex in
G [8, Ch. 24.1].
Let us observe a fundamental difference between MST and SPT, which provides a justification for
approximating the given graph with an SPT. It relies upon the analysis in [13, Theorem 4.2], where
the cumulative number of mistakes of the kernel perceptron with the kernel K = G⁺ + 11ᵀ was
upper bounded by (‖u‖²_G + 1)(R_G + 1) for consistent labelings u ∈ {−1, 1}ⁿ [13]. To explain
our argument, first we note that when we approximate the graph with a tree T the term ‖u‖²_G is
always decreasing, while the term R_G is always increasing by Rayleigh's monotonicity law (see for
example [13, Corollary 3.1]). Now, note that the resistance diameter R_T of an SPT of a graph G is
bounded by twice the geodesic diameter of the original graph,
R_T ≤ 2D_G .    (4.4)
Indeed, as an SPT is formed from a set of shortest paths between the root and any other vertex in G,
for any pair of vertices p, q in the graph there is in the SPT a path from p to the root and then to q
which can be no longer than 2D_G.
To further discuss, consider the case that G consists of a few dense clusters each uniquely labeled
and with only a few cross-cluster edges. The above mistake bound and the bound (4.4) imply that a
tree built with an SPT would still have a non-vacuous mistake bound. No such bound as (4.4) holds
for an MST subgraph. For example, consider a bicycle wheel graph whose edge set is the union of
n spoke edges {(0, i) : i = 1, . . . , n} and n rim edges {(i, i + 1 mod n) : i = 1, . . . , n} with costs
on the spoke edges of 2 and on the rim edges of 1. The MST diameter is then n + 1 while any SPT
diameter is ≤ 8.
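Once an m × m Gram matrix K = T⁺ + 11ᵀ has been extracted for the vertices of interest, prediction itself is the standard kernel perceptron (a minimal sketch; the single-pass online protocol is the one analyzed in [13]):

    import numpy as np

    def kernel_perceptron(K, y):
        # K: m x m Gram matrix; y in {-1, +1}^m, labels revealed one at a time
        m = len(y)
        alpha = np.zeros(m)
        mistakes = 0
        for t in range(m):
            score = alpha @ K[:, t]
            yhat = 1.0 if score >= 0.0 else -1.0   # predict before seeing y[t]
            if yhat != y[t]:
                alpha[t] = y[t]                    # mistake-driven update
                mistakes += 1
        return alpha, mistakes  # predict a new vertex u via sign(alpha @ K[:, u])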
At last, let us comment on the time and space complexity of constructing such trees. The MST and
SPT trees may be constructed with Prim's and Dijkstra's algorithms [8], respectively, in O(n log n + |E|)
time. Prim's algorithm may be further sped up to O(n + |E|) time in the case of small integer
weights [12]. In the general case of a non-sparse graph or similarity function the time complexity is
Θ(n²); however, as both Prim's and Dijkstra's are "greedy" algorithms their space complexity is O(n),
which may be a dominant consideration in a large graph.
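Both constructions are available off the shelf (a sketch using scipy's sparse-graph routines; the sentinel value −9999 for the root's predecessor is a scipy convention):

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import minimum_spanning_tree, dijkstra

    def spanning_trees(A, root=0):
        # A: dense symmetric weight matrix; edge costs are resistances 1/A_ij
        cost = np.zeros_like(A)
        cost[A > 0] = 1.0 / A[A > 0]
        cost = csr_matrix(cost)

        mst = minimum_spanning_tree(cost)   # solves problem (4.1)

        # shortest path tree centered at `root` (problem (4.3)): Dijkstra
        # returns each vertex's predecessor on a shortest path from the root
        _, pred = dijkstra(cost, indices=root, return_predecessors=True)
        return mst, pred                    # pred[i] is the SPT parent of i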
5 Web-spam Detection Experiments
In this section, we present an experimental study of the feasibility of our method on large graphs
(400,000 vertices). The motivation for our methodology is that on graphs with already 10,000 vertices it is computationally challenging to use standard graph labeling methods such as [3, 20, 13], as
they require the computation of the full graph Laplacian kernel. This computational burden makes
the use of such methods prohibitive when the number of vertices is in the millions. On the other
hand, in the practical scenario described in the introduction the computational time of our method
scales linearly in the number of vertices in the graph and can be run comfortably on large graphs
(see Figure 2 below) and at worst quadratically if the full graph needs to be labeled.
The aims of the experiments are: (i) to see whether there is a significant performance loss when using
a tree sub-graph rather than the original graph, (ii) to compare tree construction methods, specifically
the MST and the SPT and (iii) to explore the possibility of improving performance through ensembles
of trees. The initial results are promising in that the performance of the predictor with a single SPT or
MST is competitive with that of the existing methods, some of which use the full graph information.
We shall also comment on the computational time of the method.
5.1 Datasets and previous methods
We applied the Fast Prediction on a Tree (FPT) method to the 2007 web-spam challenge developed
at the University of Paris VI.¹ Two graphs are provided. The first one is formed by 9,072 vertices
and 464,959 edges, which represent computer hosts – we call this the host-graph. In this graph,
one host is "connected" to another host if there is at least one link from a web-page in the first host
to a web-page in the other host. The second graph consists of 400,000 vertices (corresponding to
web-pages) and 10,455,545 edges – we call this the web-graph. Again, a web-page is "connected"
to another web-page if there is at least one hyperlink from the former to the latter. Note that both
graphs are directed. In our experiments we discarded directional information and assigned a weight
of 1 to unidirectional edges and of w ∈ {1, 2} to bidirectional edges. Each vertex is
either labeled as spam or as non-spam. In both graphs there are about 80% non-spam vertices and
20% spam ones. Additional tf-idf feature vectors (determined by the web-pages' HTML content)
are provided for each vertex in the graph, but we have discarded this information for simplicity.
Following the web-spam protocol, for both graphs we used 10% of labeled vertices for training and
90% for testing.
We briefly discuss some previous methods which participated in the web-spam challenge. Abernethy
et al. [1] used an SVM variant on the tf-idf features with an additional graph-based regularization
term, which penalizes predictions with links from non-spam to spam vertices. Tang et al. (see
[7]) used linear and Gaussian SVMs combined with Random Forests on the feature vectors, plus
new features obtained from link information. The method of Witschel and Biemann [4] consisted of
iteratively selecting vertices and classifying them with the predominant class in their neighborhood
(hence it is very similar to the label propagation method of [20]). Benczúr et al. (see [7]) used Naive
Bayes, C4.5 and SVMs with a combination of content and/or graph-based features. Finally, Filoche
et al. (see [7]) applied HTML preprocessing to obtain web-page fingerprints, which were used to
obtain clusters; these clusters, along with link and content-based features, were then fed to a modified
Naive Bayes classifier.
5.2 Results
Experimental results are shown in Table 1. We report the following performance measures: (i)
average accuracy when predicting with a single tree, (ii) average accuracy when each predictor
is optimized over a threshold in the range [-1, 1], (iii) area under the curve (AUC) and (iv)
¹ See http://webspam.lip6.fr/wiki/pmwiki.php for more information.
Method            Agg.   Agg.-Best  AUC    Single       Single-Best  AUC (single)
Host-graph
MST               0.907  0.907      0.950  0.857±0.022  0.865±0.017  0.841±0.045
SPT               0.889  0.890      0.952  0.850±0.026  0.857±0.018  0.804±0.063
MST (bidir)       0.912  0.915      0.944  0.878±0.033  0.887±0.027  0.851±0.100
SPT (bidir)       0.913  0.913      0.960  0.873±0.028  0.877±0.026  0.846±0.065
Abernethy et al.  0.896  0.906      0.952  ...          ...          ...
Tang et al.       0.906  0.907      0.951  ...          ...          ...
Filoche et al.    0.889  0.890      0.927  ...          ...          ...
Benczúr et al.    0.829  0.847      0.877  ...          ...          ...
Web-graph
MST (bidir)       0.991  0.992      1.000  0.976±0.011  0.980±0.009  0.993±0.005
SPT (bidir)       0.994  0.994      0.999  0.985±0.002  0.985±0.002  0.992±0.003
Witschel et al.   0.995  0.996      0.998  ...          ...          ...
Filoche et al.    0.973  0.974      0.991  ...          ...          ...
Benczúr et al.    0.942  0.942      0.973  ...          ...          ...
Tang et al.       0.296  0.965      0.989  ...          ...          ...

Table 1: Results of our FPT method and other competing methods.
[Figure 2, three panels: AUC vs. number of trees and Accuracy vs. number of trees on the
host-graph (curves: unweighted_MST, unweighted_SPT, biweighted_MST, biweighted_SPT;
x-axis: 5, 11, 21, 41, 81 trees), and Runtime in seconds vs. number of labeled nodes (20 to 400)
on the web-graph (curves: Initialization, Init+Prediction).]

Figure 2: AUC and Accuracy vs. number of trees (left and middle) and Runtime vs. number of
labeled vertices (right).
aggregate predictive value given by each tree. In the case of the host-graph, predictions for the
aggregate method were made using 81 trees. MST and SPT were obtained for the weighted graphs
with the Prim and Dijkstra algorithms, respectively. For the unweighted graphs, every tree is an MST,
so we simply used trees generated by a randomized unweighted depth-first traversal of the graph;
SPTs may be generated by using the breadth-first-search algorithm, all in O(|E|) time. In the table,
the tag "Agg." stands for aggregate and the "bidir" tag indicates that the original graph was modified
by setting w = 2 for bidirectional edges. In the case of the larger web-graph, we used 21 trees and
the modified graph with bidirectional weights. In all experiments we used a kernel perceptron which
was trained for three epochs (see, e.g., [13]).
It is interesting to note that some of the previous methods [1, 4] take the full graph information into
account. Thus, the above results indicate that our method is statistically competitive (in fact better
than most of the other methods) even though the full graph structure is discarded. Remarkably, in the
case of the large web-graph, using just a single tree gives a very good accuracy, particularly in the
case of SPT. On this graph SPT is also more stable in terms of variance than MST. In the case of the
smaller host-graph, just using one tree leads to a decrease in performance. However, by aggregating
a few trees our result improves over the state-of-the-art results.
In order to better understand the role of the number of trees on the aggregate prediction, we also ran
additional experiments on the host-graph with t = 5, 11, 21, 41, 81 randomly chosen MST or SPT
trees. We averaged the accuracy and AUC over 100 trials each. Results are shown in Figure 2. As it
can be seen, using as few as 11 trees already gives competitive performance. SPT works better than
MST in terms of AUC (left plot), whereas the result is less clear in the case of accuracy (middle plot).
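The role of aggregation can be pictured with the short sketch below; tree_predictors stands in for
the per-tree kernel-perceptron predictors (whose details are not reproduced here), and majority
voting over per-tree signs is one natural reading of the aggregate prediction, used purely for
illustration.

import numpy as np

def aggregate_predict(tree_predictors, vertices):
    """Majority vote over the sign predictions of t per-tree predictors."""
    votes = np.zeros(len(vertices))
    for tree_predict in tree_predictors:          # one predictor per spanning tree
        votes += np.sign([tree_predict(v) for v in vertices])
    return np.sign(votes)                         # +1 / -1; 0 marks an exact tie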
Finally, we report on an experiment evaluating the running time of our method. We chose the web-graph (n = 400,000). We then fixed p = 1000 predictive vertices and let the number of labeled
vertices ℓ vary in the set {20, 40, 60, 80, 100, 200, 400}. Initialization time (tree construction plus
computation of the diagonal elements of the kernel) and initialization plus prediction times were
measured in seconds on a dual core 1.8GHz machine with 8Gb memory. As expected, the solid
curve, corresponding to initialization time, is the dominant contribution to the computation time.
6 Conclusions
We have presented a fast method for labeling a tree. The method is simple to implement and, in
the practical regime of small labeled and testing sets and diameters, scales linearly in the number
of vertices in the tree. When we are presented with a generic undirected weighted graph, we first
extract a spanning tree from it and then run the method. We have studied minimum spanning trees
and shortest path trees, both of which can be computed efficiently with standard algorithms. We
have tested the method on a web-spam classification problem involving a graph of 400,000 vertices.
Our results indicate that the method is competitive with the state of the art. We have also shown
how performance may be improved by averaging the predictors obtained by a few spanning trees.
Further improvement may involve learning combinations of different trees. This may be obtained
following ideas in [2]. At the same time, it would be valuable to study connections between our work
and other approximation methods such as those developed in the context of kernel methods [9], Gaussian
processes [19] and Bayesian learning [11].
Acknowledgments. We wish to thank A. Argyriou and J.-L. Balcázar for valuable discussions, D.
Athanasakis and S. Shankar Raman for useful preliminary experimentation, D. Fernandez-Reyes
for both useful discussions and computing facility support, and the anonymous reviewers for useful
comments. This work was supported in part by the IST Programme of the European Community,
under the PASCAL Network of Excellence, IST-2002-506778, by EPSRC Grant EP/D071542/1 and
by the DHPA Research Councils UK Scheme.
References
[1] J. Abernethy, O. Chapelle and C. Castillo. Webspam Identification Through Content and Hyperlinks.
Proc. Adversarial Information Retrieval on the Web, 2008.
[2] A. Argyriou, M. Herbster, and M. Pontil. Combining graph Laplacians for semi-supervised learning.
Advances in Neural Information Processing Systems 17. MIT Press, Cambridge, MA, 2005.
[3] M. Belkin, I. Matveeva, P. Niyogi. Regularization and Semi-supervised Learning on Large Graphs.
Proceedings of the 17th Conference on Learning Theory (COLT '04), pages 624–638, 2004.
[4] C. Biemann. Chinese Whispers – an Efficient Graph Clustering Algorithm and its Application to Natural
Language Processing Problems. Proc. HLT-NAACL-06 Workshop on Textgraphs-06, 2006.
[5] A. Blum, J. Lafferty, M. R. Rwebangira, and R. Reddy. Semi-supervised learning using randomized
mincuts. Proc. 21st International Conference on Machine Learning, page 13, 2004.
[6] U. Brandes and D. Fleischer. Centrality measures based on current flow. Proc. 22nd Annual Symposium
on Theoretical Aspects of Computer Science, pages 533–544, 2005.
[7] C. Castillo, B. D. Davison, L. Denoyer and P. Gallinari. Proc. of the Graph Labelling Workshop and
Web-spam Challenge (ECML Workshop), 2007.
[8] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. MIT Press, 1990.
[9] P. Drineas and M. W. Mahoney. On the Nyström Method for Approximating a Gram Matrix for Improved
Kernel-Based Learning. J. Mach. Learn. Res., 6:2153–2175, 2005.
[10] A. Ghosh, S. Boyd and A. Saberi. Minimizing Effective Resistance of a Graph. SIAM Review, problems
and techniques section, 50(1):37–66, 2008.
[11] T. Jebara. Bayesian Out-Trees. Proc. Uncertainty in Artificial Intelligence, 2008.
[12] R. E. Haymond, J. Jarvis and D. R. Shier. Algorithm 613: Minimum Spanning Tree for Moderate Integer
Weights. ACM Trans. Math. Softw., 10(1):108–111, 1984.
[13] M. Herbster and M. Pontil. Prediction on a graph with a perceptron. Advances in Neural Information
Processing Systems 19, pages 577–584. MIT Press, 2007.
[14] M. Herbster, M. Pontil, and L. Wainer. Online learning over graphs. In ICML '05: Proceedings of the
22nd International Conference on Machine Learning, pages 305–312, 2005.
[15] N.-D. Ho and P. V. Dooren. On the pseudo-inverse of the Laplacian of a bipartite graph. Appl. Math.
Lett., 18(8):917–922, 2005.
[16] D. Klein and M. Randić. Resistance distance. J. of Mathematical Chemistry, 12(1):81–95, 1993.
[17] M. E. J. Newman. A measure of betweenness centrality based on random walks. Soc. Networks,
27:39–54, 2005.
[18] D. A. Spielman and S.-H. Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification,
and solving linear systems. Proc. 36th Annual ACM Symposium on Theory of Computing, 2004.
[19] C. K. I. Williams and M. Seeger. Using the Nyström Method to Speed Up Kernel Machines. Neural
Information Processing Systems 13, pages 682–688, MIT Press, 2001.
[20] X. Zhu, J. Lafferty, and Z. Ghahramani. Semi-Supervised Learning Using Gaussian Fields and Harmonic
Functions. Proc. of the 20th International Conference on Machine Learning, pages 912–919, 2003.
2,812 | 355 | A Neural Expert System with Automated Extraction
of Fuzzy If-Then Rules and Its Application to
Medical Diagnosis
Yoichi Hayashi*
Department of Computer and Information Sciences
Ibaraki University
Hitachi-shi,Ibaraki 316, Japan
ABSTRACT
This paper proposes a fuzzy neural expert system (FNES) with the
following two functions: (1) generalization of the information derived
from the training data and embodiment of knowledge in the form of a
fuzzy neural network; (2) extraction of fuzzy If-Then rules with
linguistic relative importance of each proposition in an antecedent
(If-part) from a trained neural network. This paper also gives a
method to extract fuzzy If-Then rules automatically from the trained
neural network. To prove the effectiveness and validity of the proposed
fuzzy neural expert system, a fuzzy neural expert system for medical
diagnosis has been developed.
1 INTRODUCTION
Expert systems that have neural networks for their knowledge bases are sometimes called
neural expert systems (Gallant & Hayashi, 1990; Hayashi et al., 1990; Yoshida et al.,
1990) or connectionist expert systems (Gallant, 1988; Yoshida et al., 1989). This paper
extends work reported in (Hayashi & Nakai, 1990; Hayashi et al., 1990) and shows a new
method to give confidence measurements for all inferences and explanations in neural
expert systems. In contrast with conventional expert systems, we propose a fuzzy neural
expert system (FNES) with automated extraction of fuzzy If-Then rules. This paper also
gives a method to extract automatically fuzzy If-Then rules with linguistic relative
importance of each proposition in an antecedent (If-part) from a trained neural network.
To prove the effectiveness and validity of the proposed fuzzy neural expert system, a fuzzy
neural expert system for diagnosing hepatobiliary disorders has been developed by using a
real medical database. This paper compares the diagnostic capability provided by the
neural network approach and that provided by the statistical approach. Furthermore, we
evaluate the performance of the fuzzy If-Then rules extracted from a neural network
knowledge base.
*A part of this work was performed when the author was with the University of Alabama at
Birmingham, Department of Computer and Information Sciences as a Visiting Associate
Professor.
2 FUZZY NEURAL EXPERT SYSTEM WITH AUTOMATED
EXTRACTION OF FUZZY IF-THEN RULES
2.1 Distributed Neural Network
Figure 1 illustrates a schematic diagram of a fuzzy neural expert system with automated
extraction of fuzzy If-Then rules. For backpropagation, the configuration consisting of p
input cells, q intermediate cells ("hidden units") and r output cells has been the most
widely used. Connections run from every input cell to every intermediate cell, and from
every intermediate cell to every output cell. In this paper, we employ a variant of the
conventional perceptron network, which is called a distributed (neural) network (Gallant,
1990). In the network, there are the same cells and connections as with backpropagation,
and in addition there are direct connections from input to output cells.
See Figure 2. Each connection has an integer weight Wij that roughly corresponds to the
influence of cell Uj on cell Ui. Although the weights of connections from the input layer
to the intermediate layer are generated by using a random number generator (in this
paper, integers between -10 and +10 were used) and fixed during the learning process, cell
activations are discrete, each taking on values +1, 0, or -1.
[Figure 1, schematic: end users interact via a user interface with an inference engine over a
knowledge base (the integer weight matrix of the neural network); training data are embodied,
generalized and synthesized by learning in the proposed fuzzy neural network; a knowledge
analysis engine supports automated extraction, editing and modification of fuzzy If-Then rules.]

Figure 1: A Schematic Diagram of a Fuzzy Neural System with
Automated Extraction of Fuzzy If-Then Rules
Activations of the input cells I_i (i = 1, 2, ..., p), the intermediate cells H_j (j = 1, 2, ..., q) and
the output cells O_k (k = 1, 2, ..., r) can be calculated using equations (1)-(4). The value of
the cell I_0 is always +1, and it is connected to every other cell except for input cells.

    SH_j = Σ_{i=0}^{p} w_ji I_i                                        (1)

    H_j = +1 (True)      if SH_j > 0
          0  (Unknown)   if SH_j = 0                                   (2)
          -1 (False)     if SH_j < 0

    SO_k = Σ_{i=0}^{p} u_ki I_i + Σ_{j=1}^{q} v_kj H_j                 (3)

    O_k = +1 (True)      if SO_k > 0
          0  (Unknown)   if SO_k = 0                                   (4)
          -1 (False)     if SO_k < 0
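To make Eqs. (1)-(4) concrete, here is a small illustrative Python sketch (ours) of the forward pass
with three-valued activations; the matrix shapes and names are our assumptions, with the bias cell
I_0 = +1 folded in as the first input coordinate.

import numpy as np

def sgn(s):
    # +1 (True), 0 (Unknown), -1 (False), as in Eqs. (2) and (4)
    return int(s > 0) - int(s < 0)

def forward(I, W, U, V):
    """I: inputs I_1..I_p; W: (q, p+1) weights w_ji; U: (r, p+1) u_ki; V: (r, q) v_kj."""
    x = np.concatenate(([1], I))                    # prepend bias cell I_0 = +1
    H = np.array([sgn(s) for s in W @ x])           # Eqs. (1)-(2)
    O = np.array([sgn(s) for s in U @ x + V @ H])   # Eqs. (3)-(4)
    return H, O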
[Figure 2, the distributed network: input, intermediate and output layers with a bias cell;
solid arrows denote trainable connections, dashed arrows randomly generated connections.]

Figure 2: A Distributed Neural Network
2.2 Fuzzy Neural Network
To handle various kinds of fuzziness in the input layer of the distributed neural network, it is
necessary to interpret subjective input data which has non-Boolean quantitative and/or
qualitative meaning. In general, fuzzy sets defined by monotone membership functions
can be "defuzzified" into a family of crisp sets by using the level set representation
(Negoita, 1985) or the "thermometer code" of B. Widrow. Therefore, the fuzziness can be
incorporated into the training data by using only Boolean inputs. Once the training data
is set up in this manner, it can be processed by the Pocket Algorithm (Gallant, 1990). In
this paper, we propose a fuzzy neural network to handle fuzzy data and crisp data
given in the input layer.

[Figure 3, the proposed network: an output layer and intermediate layer above an input layer
partitioned into fuzzy cell groups and crisp cell groups, each group consisting of several
input cells.]

Figure 3: A Neural Network with Fuzzy Cell Groups and Crisp Cell Groups

Figure 3 shows the structure of the proposed fuzzy neural network
whose input layer consists of fuzzy cell groups and crisp (non-fuzzy) cell groups. Here,
truthfulness of fuzzy information and of crisp information such as binary encoded data is
represented by fuzzy cell groups and crisp cell groups, respectively. A fuzzy cell group
consists of m input cells which have the level set representation using a binary m-dimensional vector, each cell taking on values in {+1, -1}; whereas a crisp cell group also
consists of m input cells which take on only the two values (+1, +1, ..., +1) and (-1, -1, ..., -1).
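A hedged sketch of this encoding: mapping a degree of abnormality u ∈ [0, 1] to the number of
leading +1 cells is our illustrative choice of monotone level sets (with m = 3 as in the running
example below), not a prescription from the paper.

def fuzzy_group(u, m=3):
    # u in [0, 1] -> one of the m allowed patterns (+1,-1,-1), ..., (+1,...,+1)
    k = 1 + round(u * (m - 1))
    return [+1] * k + [-1] * (m - k)

def crisp_group(truth, m=3):
    # crisp (non-fuzzy) information: all +1 or all -1
    return [+1] * m if truth else [-1] * m

# e.g. fuzzy_group(0.5) -> [+1, +1, -1]; crisp_group(False) -> [-1, -1, -1]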
3 AUTOMATED EXTRACTION OF FUZZY IF-THEN RULES FROM
TRAINED NEURAL NETWORKS
This paper also extends previous work described in (Hayashi & Nakai, 1990) and
proposes a method to extract automatically fuzzy If-Then rules with linguistic relative
importance of each proposition in an antecedent (Hayashi & Nakai, 1989) from a trained
fuzzy neural network. The method is implemented in the knowledge analysis engine in
Figure 1. The linguistic relative importance, such as Very Important and Moderately
Important, which is defined by a fuzzy set, represents the degree of effect of each
proposition on the consequence. By providing linguistic relative importance for each
proposition, each fuzzy If-Then rule has a more flexible expression than that of an ordinary
If-Then rule. Furthermore, the truthfulness of each fuzzy If-Then rule is given in the form of a
linguistic truth value such as Very True and Possibly True, which is defined by a fuzzy
set. Enhancement of the representation capability and flexibility by using fuzzy If-Then
rules with linguistic relative importance facilitates the automated extraction of fuzzy If-Then
rules from a trained neural network.
3.1 Automated If-Then Rule Extraction Algorithm
We have proposed several methods to extract fuzzy If-Then rules with linguistic relative
importance from a trained (fuzzy) neural network. In this section, we extend work
reported in (Hayashi & Nakai, 1990; Hayashi et al., 1990) and give an algorithm to
extract fuzzy If-Then rules from a trained fuzzy neural network in the following. Note
that exact algorithms for Step 2 and Step 3 can be derived from the algorithms shown in
(Hayashi & Nakai, 1990) in the same manner. Here, we give only a brief discussion of
them due to space limitations. We shall concentrate on Step 1.
Step 1. Extraction of the framework of fuzzy If-Then rules: We select
propositions in an antecedent (If-part) of a rule, that is, we extract the framework of fuzzy
If-Then rules. We will give a precise algorithm for this step in Section 3.2.
Step 2. Assignment of a linguistic truth value to each extracted rule: A
linguistic truth value such as Very Very True (V.V.T.) or Possibly True (P.T.) is given
to each fuzzy If-Then rule selected in Step 1. The linguistic truth value assigned to each rule
indicates the degree of certainty with which the conclusion is drawn. The linguistic truth value is
determined by the relative amount of the weighted sum of the output cells.
Step 3. Assignment of linguistic relative importance to each
proposition: Linguistic relative importance is assigned to each proposition of the
antecedent in fuzzy If-Then rules. Linguistic relative importance such as Very Important
(V.I.) and Moderately Important (M.I.) represents the degree of effect of each proposition
on the consequence.
3.2 Algorithm to extract framework of fuzzy If-Then rules
Extraction of dispensable propositions on cell groups in an antecedent (If-part) is
required for the extraction of the framework of fuzzy If-Then rules. For simplicity, it is
supposed in this section that each cell group consists of three input cells. Therefore, a fuzzy
cell group takes on three values in {(+1,-1,-1), (+1,+1,-1), (+1,+1,+1)}, whereas a crisp
cell group takes on two values in {(+1,+1,+1), (-1,-1,-1)}. In a distributed neural network,
we can determine activations (values) of cells using partial input information. For
example, the activations of an intermediate cell H_j are determined as
    H_j = +1 (True)      if |SH_j| > USH_j and SH_j > 0
          0  (Unknown)   if |SH_j| ≤ USH_j                              (5)
          -1 (False)     if |SH_j| > USH_j and SH_j < 0

where

    USH_j = Σ_{i : I_i is Unknown} |w_ji|.                              (6)

In the same manner, the activations of an output cell O_k are determined as

    O_k = +1 (True)      if |SO_k| > USO_k and SO_k > 0
          0  (Unknown)   if |SO_k| ≤ USO_k                              (7)
          -1 (False)     if |SO_k| > USO_k and SO_k < 0

where

    USO_k = Σ_{i : I_i is Unknown} |u_ki| + Σ_{j : H_j is Unknown} |v_kj|.   (8)
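The following sketch (ours) implements the partial-information rule of Eqs. (5)-(8): Unknown
cells contribute zero signal, but their absolute weights accumulate into the slack terms USH_j and
USO_k, and an activation is decided only when the known signal exceeds the slack.

import numpy as np

def sgn_with_unknown(s, slack):
    if abs(s) <= slack:
        return 0                        # Unknown
    return 1 if s > 0 else -1

def forward_partial(I, W, U, V):
    """I in {+1, 0, -1}^(p+1) with I[0] = +1 the bias; shapes as in Eqs. (1)-(4)."""
    I = np.asarray(I, dtype=float)
    unk_I = (I == 0)
    H = np.array([sgn_with_unknown(W[j] @ I, np.abs(W[j])[unk_I].sum())
                  for j in range(W.shape[0])])                     # Eqs. (5)-(6)
    unk_H = (H == 0)
    O = np.array([sgn_with_unknown(U[k] @ I + V[k] @ H,
                                   np.abs(U[k])[unk_I].sum()
                                   + np.abs(V[k])[unk_H].sum())
                  for k in range(U.shape[0])])                     # Eqs. (7)-(8)
    return H, O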
Our problem is to determine the values of the input cell groups so that each output cell O_k
takes on the value +1 or -1. Propositions (input items) corresponding to determined input
cell groups will be entrapped in the antecedent (If-part) of each rule. We give an
extraction algorithm for the framework of fuzzy If-Then rules as follows:
Step I: Select one output cell O_k.
Step II: Select one cell group. If the selected cell group is a fuzzy cell group, set the
values of the cell group to (+1,-1,-1), (+1,+1,-1) or (+1,+1,+1); whereas if the selected
cell group is a crisp cell group, set the values of the cell group to (+1,+1,+1) or (-1,-1,-1). Furthermore, set the values of the cell groups which were not selected to (0,0,0).
Step III (Forward search): Determine all the values of the intermediate cells H_j by
using the values of the cell groups given in Step II and equation (5). Furthermore,
determine the value of the output cell O_k using (7). If the value of O_k is +1 or -1, go to
Step V. Otherwise (the value of O_k is 0), go to Step IV. If, even though all the cell groups are
entrapped in the antecedent (If-part), the value of O_k is still 0, there is no framework of
fuzzy If-Then rules for the output cell O_k; go to Step VI.
Step IV (Backward search): Let v* be the maximum value of |v_kj| over the absolute
values of the weights of the connections between the output cell O_k and the
intermediate cells H_j whose activation value is 0. Furthermore, let u* be the maximum
value of |u_ki| over the absolute values of the weights of the connections between the
output cell O_k and the input cells I_i whose value is 0. If u* ≥ v*, or the values of all the
intermediate cells are determined, go to Step IV-1. Otherwise, go to Step IV-2.
Step IV-1: For the input cell I_i which is incident to u_ki (|u_ki| = u*): if the input cell I_i
is included in a fuzzy cell group, go to Step IV-1-F; whereas if it is in a crisp cell group, go
to Step IV-1-C.
Step IV-1-F: If SO_k ≥ 0, select the pattern of the fuzzy cell group which has the
maximum value of SO_k among (+1,-1,-1), (+1,+1,-1) and (+1,+1,+1). Conversely, if
SO_k < 0, select the pattern which has the minimum value of SO_k. Go to Step V.
Step IV-1-C: If SO_k ≥ 0, select the pattern of the crisp cell group which has the
maximum value of SO_k between (+1,+1,+1) and (-1,-1,-1). Conversely, if SO_k < 0, select the
pattern which has the minimum value of SO_k. Go to Step V.
Step IV-2: Let w* be the maximum value of |w_ji| over the absolute values of the
weights of the connections between the intermediate cell H_j which is incident to v_kj
(|v_kj| = v*) and the input cells I_i whose activation value is 0. Select the input cell I_i which is
incident to the connection w_ji (|w_ji| = w*). If the input cell I_i is included in a fuzzy
cell group, go to Step IV-2-F; whereas if it is in a crisp cell group, go to Step IV-2-C.
Step IV-2-F: If SH_j ≥ 0, select the pattern of the fuzzy cell group which has the
maximum value of SH_j among (+1,-1,-1), (+1,+1,-1) and (+1,+1,+1). Conversely, if SH_j
< 0, select the pattern which has the minimum value of SH_j. Go to Step V.
Step IV-2-C: If SH_j ≥ 0, select the pattern of the crisp cell group which has the
maximum value of SH_j between (+1,+1,+1) and (-1,-1,-1). Conversely, if SH_j < 0, select the
pattern which has the minimum value of SH_j. Go to Step V.
Step V (Extraction of the framework of If-Then rules): If the value of O_k is
determined, extract the input items corresponding to the determined cell groups as the
propositions in the antecedent (If-part). Here, if the value of O_k is +1, the consequence
is set to "O_k is True"; conversely, if the value of O_k is -1, the consequence is set to "O_k
is False". If multiple frameworks of If-Then rules with the same antecedent and consequence
are extracted, adopt one of them.
Step VI (Termination condition of the extraction algorithm for each output
cell): For output cell O_k, if there are any cell groups which have not been selected yet, or if for
the selected cell groups there are any patterns which have not been selected yet, go to Step II.
Otherwise, go to Step VII.
Step VII (Termination condition of the whole extraction algorithm): Repeat
Steps II through VI stated above until the termination condition of the extraction algorithm
for each output cell is satisfied. If there is any output cell O_k which has not been selected yet
in Step I, go to Step I. Otherwise, stop the whole extraction algorithm.
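A drastically simplified sketch of the search above, reusing forward_partial from the previous
sketch: it exercises only the single-group slice of Steps I-III (fix one group to one of its allowed
patterns, leave all others Unknown, and keep the pattern if it already decides O_k), and it omits the
backward search of Step IV; the group layout and all names are hypothetical.

import numpy as np

FUZZY_PATTERNS = [(1, -1, -1), (1, 1, -1), (1, 1, 1)]
CRISP_PATTERNS = [(1, 1, 1), (-1, -1, -1)]

def extract_frameworks(groups, W, U, V, out_cells):
    """groups: list of ('fuzzy'|'crisp', slice) into the input vector (index 0 is the bias)."""
    rules = []
    for k in out_cells:                                # Step I
        for g, (kind, sl) in enumerate(groups):        # Step II (one group at a time)
            pats = FUZZY_PATTERNS if kind == 'fuzzy' else CRISP_PATTERNS
            for pat in pats:
                I = np.zeros(W.shape[1]); I[0] = 1     # all other groups Unknown
                I[sl] = pat
                _, O = forward_partial(I, W, U, V)     # Step III
                if O[k] != 0:                          # consequence decided (Step V)
                    rules.append((k, g, pat, 'True' if O[k] > 0 else 'False'))
    return rules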
4 APPLICATION TO MEDICAL DIAGNOSIS
To prove the effectiveness and validity of the proposed neural expert system, we have
developed neural expert systems for diagnosing hepatobiliary disorders (Yoshida et al.,
1989 & 1990). We used a real medical database containing sex and the results of nine
biochemical tests (e.g. GOT, GGT) for four hepatobiliary disorders: Alcoholic liver
damage, Primary hepatoma, Liver cirrhosis and Cholelithiasis. The subjects consisted of
536 patients who were admitted to a university-affiliated hospital. The patients were
clinically and pathologically diagnosed by physicians. The subjects were randomly
assigned to 373 training data and 163 test (external) data. The degree of abnormality of each
biochemical item is represented by a fuzzy cell group which consists of three input cells.
There are four output cells. Each output cell corresponds to a hepatobiliary disorder. Fifty
thousand iterations of the learning process of the Pocket Algorithm were performed for each
output cell. The diagnosis criteria are the same as those employed in (Yoshida et al., 1989).
After learning by using training data from 345 patients, the fuzzy neural network
correctly diagnosed 75.5% of the test (external) data from 163 previously unseen patients and
correctly diagnosed 100% of the training data. Conversely, the diagnostic accuracy of
linear discriminant analysis was 65.0% on the test data and 68.4% on the training data.
The proposed fuzzy neural network showed significantly higher diagnostic accuracy on the
training data and also had substantially higher diagnostic accuracy on the test data than
linear discriminant analysis. We extracted 48 general fuzzy If-Then rules for diagnosing
hepatobiliary disorders by using the proposed algorithm given in Section 3.2. The number
of rules for confirming diseases is 12 and that for excluding diseases is 36.
Hayashi and Nakai (1989) have proposed three kinds of reasoning methods using fuzzy
If-Then rules with linguistic relative importance. In the present paper, we use reasoning
method I for the evaluation of the extracted fuzzy If-Then rules. The total diagnostic accuracy of
the twelve extracted rules (four confirming rules and eight excluding rules) is 87.7%. We
conclude that the present neural network knowledge base approach will be a promising
and useful technique for generating practical knowledge bases from various databases. It
should be noted that enhancement of the interpretation capability for real data, and
embodiment of implicit and/or subjective knowledge, will lead to a significant reduction of
the manpower needed for knowledge acquisition in expert system development.
Acknowledgements
The author wishes to thank Dr. Stephen I. Gallant, Dr. Katsumi Yoshida and Mr.
Atsushi Imura for their valuable comments and discussions.
References
Gallant, S.I. 1988 Connectionist Expert Systems, CACM, 31(2), 152-169
Gallant, S.I. & Hayashi, Y. 1990 A Neural Network Expert System with Confidence
Measurements, Proc. of the Third Int. Conf. on Infor. Proc. and Mgt. of Uncertainty in
Knowledge-based Systems, pp. 3-5, Paris, July 2-6; Springer Edited Volume (in press)
Gallant, S.I. 1990 Perceptron-Based Learning Algorithms, IEEE Transactions on
Neural Networks, 1(2), 179-191
Hayashi, Y. & Nakai, M. 1989 Reasoning Methods Using a Fuzzy Production Rule
with Linguistic Relative Importance in an Antecedent, The Transactions of The Institute
of Electrical Engineers of Japan (T. IEE Japan), 109-C(9), 661-668
Hayashi, Y. & Nakai, M. 1990 Automated Extraction of Fuzzy IF-THEN Rules Using
Neural Networks, T. IEE Japan, 110-C(3), 198-206
Hayashi, Y., Imura, A. & Yoshida, K. 1990 A Neural Expert System under Uncertain
Environments and Its Evaluation, Proc. of the 11th Knowledge and Intelligence System
Symposium, pp. 13-18, Tokyo
Negoita, C.V. 1985 Expert Systems and Fuzzy Systems: Benjamin Cummings Pub.
Yoshida, K., Hayashi, Y. & Imura, A. 1989 A Connectionist Expert System for
Diagnosing Hepatobiliary Disorders, in MEDINFO89 (Proc. of the Sixth Conf. on
Medical Informatics), B. Barber et al. eds.: North-Holland, 116-120
Yoshida, K., Hayashi, Y., Imura, A. & Shimada, N. 1990 Fuzzy Neural Expert
System for Diagnosing Hepatobiliary Disorders, Proc. of the Int. Conf. on Fuzzy Logic
& Neural Networks (IIZUKA '90), pp. 539-543, Iizuka, Japan, July 20-24
except:1 engineer:1 called:2 hospital:1 total:1 select:12 evaluate:1 trainable:1 |
2,813 | 3,550 | Domain Adaptation with Multiple Sources
Yishay Mansour
Google Research and
Tel Aviv Univ.
Mehryar Mohri
Courant Institute and
Google Research
Afshin Rostamizadeh
Courant Institute
New York University
[email protected]
[email protected]
[email protected]
Abstract
This paper presents a theoretical analysis of the problem of domain adaptation
with multiple sources. For each source domain, the distribution over the input
points as well as a hypothesis with error at most ε are given. The problem consists of combining these hypotheses to derive a hypothesis with small error with
respect to the target domain. We present several theoretical results relating to
this problem. In particular, we prove that standard convex combinations of the
source hypotheses may in fact perform very poorly and that, instead, combinations
weighted by the source distributions benefit from favorable theoretical guarantees.
Our main result shows that, remarkably, for any fixed target function, there exists
a distribution weighted combining rule that has a loss of at most ε with respect to
any target mixture of the source distributions. We further generalize the setting
from a single target function to multiple consistent target functions and show the
existence of a combining rule with error at most 3ε. Finally, we report empirical
results for a multiple source adaptation problem with a real-world dataset.
1 Introduction
A common assumption in theoretical models of learning such as the standard PAC model [16], as
well as in the design of learning algorithms, is that training instances are drawn according to the
same distribution as the unseen test examples. In practice, however, there are many cases where this
assumption does not hold. There can be no hope for generalization, of course, when the training and
test distributions vastly differ, but when they are less dissimilar, learning can be more successful.
A typical situation is that of domain adaptation where little or no labeled data is at one's disposal
for the target domain, but large amounts of labeled data from a source domain somewhat similar to
the target, or hypotheses derived from that source, are available instead. This problem arises in a
variety of applications in natural language processing [4, 7, 10], speech processing [8, 9, 11, 13–15],
computer vision [12], and many other areas.
This paper studies the problem of domain adaptation with multiple sources, which has also received
considerable attention in many areas such as natural language processing and speech processing.
An example is the problem of sentiment analysis, which consists of classifying a text sample such
as a movie review, restaurant rating, discussion board post, or other web page. Information about a
relatively small number of domains such as movies or books may be available, but little or none can
be found for more difficult domains such as travel.
We will consider the following problem of multiple source adaptation. For each source i ∈ [1, k],
the learner receives the distribution D_i of the input points corresponding to that source as well
as a hypothesis h_i with loss at most ε on that source. The learner's task consists of combining
the k hypotheses h_i, i ∈ [1, k], to derive a hypothesis h with small loss with respect to the target
distribution. The target distribution is assumed to be a mixture of the distributions Di . We will
discuss both the case where the mixture is known to the learner and the case where it is unknown.
Note that the distribution Di is defined over the input points and bears no information about the
labels. In practice, Di is estimated from large amounts of unlabeled points typically available from
source i.
An alternative set-up for domain adaptation with multiple sources is one where the learner is not
supplied with a good hypothesis hi for each source but where instead he has access to the labeled
training data for each source domain. A natural solution consists then of combining the raw labeled
data from each source domain to form a new sample more representative of the target distribution
and use that to train a learning algorithm. This set-up and the type of solutions just described
have been in fact explored extensively in applications [8, 9, 11, 13–15]. However, several empirical
observations motivated our study of hypothesis combination, in addition to the theoretical simplicity
and clarity of this framework.
First, in some applications such as very large-vocabulary speech recognition, often the original raw
data used to derive each domain-dependent model is no longer available [2, 9]. This is because such
models are typically obtained as a result of training based on many hours of speech with files occupying hundreds of gigabytes of disk space, while the models derived require orders of magnitude
less space. Thus, combining raw labeled data sets is not possible in such cases. Secondly, a combined data set can be substantially larger than each domain-specific data set, which can significantly
increase the computational cost of training and make it prohibitive for some algorithms. Thirdly,
combining labeled data sets requires the mixture parameters of the target distribution to be known,
but it is not clear how to produce a hypothesis with a low error rate with respect to any mixture
distribution.
Few theoretical studies have been devoted to the problem of adaptation with multiple sources. Ben-David et al. [1] gave bounds for single source adaptation, then Blitzer et al. [3] extended the work
to give a bound on the error rate of a hypothesis derived from a weighted combination of the source
data sets for the specific case of empirical risk minimization. Crammer et al. [5, 6] also addressed
a problem where multiple sources are present but the nature of the problem differs from adaptation
since the distribution of the input points is the same for all these sources, only the labels change
due to varying amounts of noise. We are not aware of a prior theoretical study of the problem of
adaptation with multiple sources analyzed here.
We present several theoretical results relating to this problem. We examine two types of hypothesis
combination. The first type is simply based on convex combinations of the k hypotheses hi . We
show that this natural and widely used hypothesis combination may in fact perform very poorly in
our setting. Namely, we give a simple example of two distributions and two matching hypotheses,
each with zero error for their respective distribution, but such that any convex combination has
expected absolute loss of 1/2 for the equal mixture of the distributions. This points out a potentially
significant weakness of a convex combination.
The second type of hypothesis combination, which is the main one we will study in this work,
takes into account the probabilities derived from the distributions. Namely, the weight of hypothesis
h_i on an input x is proportional to λ_i D_i(x), where λ is the vector of mixture weights. We will refer
to this method as the distribution weighted hypothesis combination. Our main result shows that,
remarkably, for any fixed target function, there exists a distribution weighted combining rule that
has a loss of at most ε with respect to any mixture of the k distributions. We also show that there
exists a distribution weighted combining rule that has loss at most 3ε with respect to any consistent
target function (one for which each h_i has loss ε on D_i) and any mixture of the k distributions. In
some sense, our results establish that the distribution weighted hypothesis combination is the "right"
combination rule, and that it also benefits from a well-founded theoretical guarantee.
The remainder of this paper is organized as follows. Section 2 introduces our theoretical model for
multiple source adaptation. In Section 3, we analyze the abstract case where the mixture parameters
of the target distribution are known and show that the distribution weighted hypothesis combination
that uses as weights these mixture coefficients achieves a loss of at most ε. In Section 4, we give
a simple method to produce an error of O(kε) that does not require prior knowledge of the
mixture parameters of the target distribution. Our main results showing the existence of a combined
hypothesis performing well regardless of the target mixture are given in Section 5 for the case of a
fixed target function, and in Section 6 for the case of multiple target functions. Section 7 reports
empirical results for a multiple source adaptation problem with a real-world dataset.
2 Problem Set-Up
Let X be the input space, f : X → R the target function to learn, and L : R × R → R a loss function
penalizing errors with respect to f. The loss of a hypothesis h with respect to a distribution D and
loss function L is denoted by L(D, h, f) and defined as L(D, h, f) = E_{x∼D}[L(h(x), f(x))] =
Σ_{x∈X} L(h(x), f(x)) D(x). We will denote by Δ the simplex Δ = {λ : λ_i ≥ 0 ∧ Σ_{i=1}^k λ_i = 1}
of R^k.
We consider an adaptation problem with k source domains and a single target domain. The input
to the problem is the set of k source distributions D_1, ..., D_k and k corresponding hypotheses
h_1, ..., h_k such that for all i ∈ [1, k], L(D_i, h_i, f) ≤ ε, for a fixed ε ≥ 0. The distribution
of the target domain, D_T, is assumed to be a mixture of the k source distributions D_i, that is
D_T(x) = Σ_{i=1}^k λ_i D_i(x), for some unknown mixture weight vector λ ∈ Δ. The adaptation problem
consists of combining the hypotheses h_i to derive a hypothesis with small loss on the target domain.
Since the target distribution D_T is assumed to be a mixture, we will refer to this problem as the
mixture adaptation problem.
A combining rule for the hypotheses takes as input the h_i and outputs a single hypothesis
h : X → R. We define two combining rules of particular interest for our purpose: the linear
combining rule, which is based on a parameter z ∈ Δ and which sets the hypothesis to
h(x) = Σ_{i=1}^k z_i h_i(x); and the distribution weighted combining rule, also based on a parameter
z ∈ Δ, which sets the hypothesis to h(x) = Σ_{i=1}^k [z_i D_i(x) / Σ_{j=1}^k z_j D_j(x)] h_i(x)
when Σ_{j=1}^k z_j D_j(x) > 0.
This last condition always holds if D_i(x) > 0 for all x ∈ X and some i ∈ [1, k]. We define H to
be the set of all distribution weighted combining rules. Given the input to the adaptation problem
we have implicit information about the target function f . We define the set of consistent target
functions, F , as follows,
F = {g : ∀i ∈ [1, k], L(D_i, h_i, g) ≤ ε}.
By definition, the target function f is an element of F .
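For concreteness, a minimal sketch of the two combining rules, assuming each D_i is available as a
callable density, each h_i as a callable hypothesis, and z lies on the simplex; all names are ours.

import numpy as np

def linear_rule(z, hyps):
    # linear combining rule: h(x) = sum_i z_i h_i(x)
    return lambda x: sum(z_i * h(x) for z_i, h in zip(z, hyps))

def dist_weighted_rule(z, dists, hyps):
    # distribution weighted rule: weight of h_i at x proportional to z_i D_i(x)
    def h(x):
        w = np.array([z_i * D(x) for z_i, D in zip(z, dists)])
        vals = np.array([h_i(x) for h_i in hyps])
        return float(w @ vals / w.sum())   # assumes sum_j z_j D_j(x) > 0
    return h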
We will assume that the following properties hold for the loss function L: (i) L is non-negative:
L(x, y) ≥ 0 for all x, y ∈ R; (ii) L is convex with respect to the first argument:
L(Σ_{i=1}^k λ_i x_i, y) ≤ Σ_{i=1}^k λ_i L(x_i, y) for all x_1, ..., x_k, y ∈ R and λ ∈ Δ;
(iii) L is bounded: there exists M ≥ 0 such that L(x, y) ≤ M for all x, y ∈ R; (iv) L(x, y) is
continuous in both x and y; and (v) L is symmetric: L(x, y) = L(y, x). The absolute loss defined
by L(x, y) = |x - y| will serve as our primary motivating example.
3 Known Target Mixture Distribution
In this section we assume that the parameters of the target mixture distribution are known. Thus, the
learning algorithm is given λ ∈ Δ such that D_T(x) = Σ_{i=1}^k λ_i D_i(x). A good starting point
would be to study the performance of a linear combining rule, namely the classifier
h(x) = Σ_{i=1}^k λ_i h_i(x). While this seems like a very natural classifier, the following example
highlights the problematic aspects of this approach.
Consider a discrete domain X = {a, b} and two distributions, D_a and D_b, such that D_a(a) = 1
and D_b(b) = 1. Namely, each distribution puts all the weight on a single element of X. Consider
the target function f, where f(a) = 1 and f(b) = 0, and let the loss be the absolute loss. Let
h_0 = 0 be the function that outputs 0 for all x ∈ X and similarly h_1 = 1. The hypotheses h_1
and h_0 have zero expected absolute loss on the distributions D_a and D_b, respectively, i.e., ε = 0.
Now consider the target distribution D_T with λ_a = λ_b = 1/2, thus D_T(a) = D_T(b) = 1/2. The
hypothesis h(x) = (1/2)h_1(x) + (1/2)h_0(x) always outputs 1/2, and has an absolute loss of 1/2.
Furthermore, for any other parameter z of the linear combining rule, the expected absolute loss of
h(x) = z h_1(x) + (1 - z) h_0(x) with respect to D_T is exactly 1/2. We have established the following
theorem.
Theorem 1. There is a mixture adaptation problem with ε = 0 for which any linear combination
rule has expected absolute loss of 1/2.
Next we show that the distribution weighted combining rule produces a hypothesis with a low
expected loss. Given a mixture D_T(x) = Σ_{i=1}^k λ_i D_i(x), we consider the distribution weighted
combining rule with parameter λ, which we denote by h_λ. Recall that

    h_λ(x) = Σ_{i=1}^k [λ_i D_i(x) / Σ_{j=1}^k λ_j D_j(x)] h_i(x) = Σ_{i=1}^k [λ_i D_i(x) / D_T(x)] h_i(x).
Using the convexity of L with respect to the first argument, the loss of h_λ with respect to D_T and a
target f ∈ F can be bounded as follows:

    L(D_T, h_λ, f) = Σ_{x∈X} L(h_λ(x), f(x)) D_T(x)
                   ≤ Σ_{x∈X} Σ_{i=1}^k λ_i D_i(x) L(h_i(x), f(x)) = Σ_{i=1}^k λ_i ε_i ≤ ε,

where ε_i := L(D_i, h_i, f) ≤ ε. Thus, we have derived the following theorem.
Theorem 2. For any mixture adaptation problem with target distribution D_λ(x) = Σ_{i=1}^k λ_i D_i(x),
the expected loss of the hypothesis h_λ is at most ε with respect to any target function f ∈ F:
L(D_λ, h_λ, f) ≤ ε.
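The example above and Theorem 2 can be checked numerically in a few lines (ours): on X = {a, b},
every linear weight z gives absolute loss 1/2 on the even mixture, while the distribution weighted
rule h_λ reproduces f exactly.

Da = {'a': 1.0, 'b': 0.0}; Db = {'a': 0.0, 'b': 1.0}   # the two point masses
f  = {'a': 1.0, 'b': 0.0}                              # target function
h1 = lambda x: 1.0                                     # zero loss on Da
h0 = lambda x: 0.0                                     # zero loss on Db

def loss_linear(z):     # absolute loss of z*h1 + (1-z)*h0 on the even mixture
    return sum(0.5 * abs(z * h1(x) + (1 - z) * h0(x) - f[x]) for x in 'ab')

def h_lam(x, lam=0.5):  # distribution weighted rule with lambda = (1/2, 1/2)
    num = lam * Da[x] * h1(x) + (1 - lam) * Db[x] * h0(x)
    den = lam * Da[x] + (1 - lam) * Db[x]
    return num / den

print([loss_linear(z) for z in (0.0, 0.25, 0.5, 1.0)])  # all equal 0.5
print([h_lam(x) for x in 'ab'])                         # [1.0, 0.0]: zero loss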
4 Simple Adaptation Algorithms
In this section we show how to construct a simple distribution weighted hypothesis that has an
expected loss guarantee with respect to any mixture. Our hypothesis h_u is simply based on equal
weights, i.e., u_i = 1/k for all i ∈ [1, k]. Thus,

    h_u(x) = Σ_{i=1}^k [(1/k) D_i(x) / Σ_{j=1}^k (1/k) D_j(x)] h_i(x)
           = Σ_{i=1}^k [D_i(x) / Σ_{j=1}^k D_j(x)] h_i(x).
We show for h_u an expected loss bound of kε, with respect to any mixture distribution D_T and target
function f ∈ F. (Proof omitted.)
Theorem 3. For any mixture adaptation problem the expected loss of h_u is at most kε, for any
mixture distribution D_T and target function f ∈ F, i.e., L(D_T, h_u, f) ≤ kε.
Unfortunately, the hypothesis h_u can have an expected absolute loss as large as Ω(kε). (Proof
omitted.)
Theorem 4. There is a mixture adaptation problem for which the expected absolute loss of h_u is
Ω(kε). Also, for k = 2 there is an input to the mixture adaptation problem for which the expected
absolute loss of h_u is 2ε - ε².
5 Existence of a Good Hypothesis
In this section, we will show that for any target function f ∈ F there is a distribution weighted
combining rule h_z that has a loss of at most ε with respect to any mixture D_T. We will construct
the proof in two parts. In the first part, we will show, using a simple reduction to a zero-sum game,
that one can obtain a mixture of h_z's that guarantees a loss bounded by ε. In the second part, which
is the more interesting scenario, we will show that for any target function f ∈ F there is a single
distribution weighted combining rule h_z that has loss of at most ε with respect to any mixture D_T.
This latter part will require the use of Brouwer's fixed point theorem to show the existence of such an
h_z.
5.1 Zero-sum game
The adaptation problem can be viewed as a zero-sum game between two players, NATURE and
LEARNER. Let the input to the mixture adaptation problem be D_1, ..., D_k, h_1, ..., h_k and ε, and
fix a target function f ∈ F. The player NATURE picks a distribution D_i while the player LEARNER
selects a distribution weighted combining rule h_z ∈ H. The loss when NATURE plays D_i and
LEARNER plays h_z is L(D_i, h_z, f). Let us emphasize that the target function f ∈ F is fixed
beforehand. The objective of NATURE is to maximize the loss and the objective of LEARNER is to
minimize the loss. We start with the following lemma.
Lemma 1. Given any mixed strategy of NATURE, i.e., a distribution λ over the Di's, the following
action of LEARNER, h_λ ∈ H, has expected loss at most ε, i.e., L(D_λ, h_λ, f) ≤ ε.
The proof is identical to that of Theorem 2. This almost establishes that the value of the game is at
most ε. The technical part that we need to take care of is the fact that the action space of LEARNER
is infinite. However, by an appropriate discretization of H we can derive the following theorem.
Theorem 5. For any target function f ∈ F and any δ > 0, there exists a function
h(x) = Σ_{j=1}^{m} α_j h_{z_j}(x), where each h_{z_j} ∈ H, such that L(DT, h, f) ≤ ε + δ for any mixture distribution
DT(x) = Σ_{i=1}^{k} λ_i D_i(x).
Since we can fix δ > 0 to be arbitrarily small, this implies that a linear mixture of distribution
weighted combining rules can guarantee a loss of almost ε with respect to any mixture distribution.
5.2 Single distribution weighted combining rule
In the previous subsection, we showed that a mixture of hypotheses in H would guarantee a loss of
at most ε. Here, we will considerably strengthen the result and show that there is a single hypothesis
in H for which this guarantee holds. Unfortunately our loss is not convex with respect to h ∈ H, so
we need to resort to a more powerful technique, namely the Brouwer fixed point theorem.
For the proof we will need that the distribution weighted combining rule h_z be continuous in
the parameter z. In general, this does not hold, due to the existence of points x ∈ X for which
Σ_{j=1}^{k} z_j D_j(x) = 0. To avoid this discontinuity, we will modify the definition of h_z to h_z^η, as
follows.
Claim 1. Let U denote the uniform distribution over X, then for any η > 0 and z ∈ Δ, let
h_z^η : X → R be the function defined by
$$h_z^{\eta}(x) \;=\; \sum_{i=1}^{k} \frac{z_i D_i(x) + \eta U(x)/k}{\sum_{j=1}^{k} z_j D_j(x) + \eta U(x)}\, h_i(x) .$$
Then, for any distribution D, L(D, h_z^η, f) is continuous in z.¹
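The η-perturbed rule admits the same kind of sketch; with η > 0 the denominator below is bounded away from zero, which is what gives the continuity in z claimed above. As before, this is our own illustrative code, not the authors'. u_x stands for the uniform density value U(x).

    import numpy as np

    def smoothed_weighted_rule(z, density_values, hypothesis_values, eta, u_x):
        # h_z^eta(x): numerator weights z_i D_i(x) + eta U(x)/k,
        # denominator sum_j z_j D_j(x) + eta U(x) (equal to w.sum() below)
        z = np.asarray(z, dtype=float)
        d = np.asarray(density_values, dtype=float)
        h = np.asarray(hypothesis_values, dtype=float)
        w = z * d + eta * u_x / len(z)
        return float(np.dot(w, h) / w.sum())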
Let us first state Brouwer's fixed point theorem.
Theorem 6 (Brouwer Fixed Point Theorem). For any compact and convex non-empty set A ⊂ R^n
and any continuous function f : A → A, there is a point x ∈ A such that f(x) = x.
We first show that there exists a distribution weighted combining rule h_z^η for which the losses
L(Di, h_z^η, f) are all nearly the same.
Lemma 2. For any target function f ∈ F and any η, η′ > 0, there exists z ∈ Δ, with z_i ≠ 0 for all
i ∈ [1, k], such that the following holds for the distribution weighted combining rule h_z^η ∈ H:
$$L(D_i, h_z^{\eta}, f) \;=\; \gamma + \eta' - \frac{\eta'}{z_i k}$$
for any 1 ≤ i ≤ k, where γ = Σ_{j=1}^{k} z_j L(D_j, h_z^η, f).
Proof. Fix η′ > 0 and let L_i^z = L(D_i, h_z^η, f) for all z ∈ Δ and i ∈ [1, k]. Consider the
mapping φ : Δ → Δ defined for all z ∈ Δ by [φ(z)]_i = (z_i L_i^z + η′/k) / (Σ_{j=1}^{k} z_j L_j^z + η′),
where [φ(z)]_i is the ith coordinate of φ(z), i ∈ [1, k]. By Claim 1, φ is continuous. Thus,
by Brouwer's Fixed Point Theorem, there exists z ∈ Δ such that φ(z) = z. This implies that
z_i = (z_i L_i^z + η′/k)/(Σ_{j=1}^{k} z_j L_j^z + η′). Since η′ > 0, we must have z_i ≠ 0 for any i ∈ [1, k]. Thus,
we can divide by z_i and write L_i^z + η′/(z_i k) = (Σ_{j=1}^{k} z_j L_j^z) + η′. Therefore, L_i^z = γ + η′ − η′/(z_i k)
with γ = Σ_{j=1}^{k} z_j L_j^z.
¹ In addition to continuity, the perturbation of h_z to h_z^η also helps us ensure that none of the mixture weights
z_i is zero in the proof of Lemma 2.
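For intuition, the map φ from the proof above can be explored numerically. Brouwer's theorem only guarantees that a fixed point exists; the plain iteration below is a heuristic for locating one, not part of the paper's argument. losses_at is assumed to return the vector (L(D_1, h_z^η, f), ..., L(D_k, h_z^η, f)) for a given z.

    import numpy as np

    def phi(z, losses_at, eta_prime):
        # [phi(z)]_i = (z_i L_i^z + eta'/k) / (sum_j z_j L_j^z + eta')
        z = np.asarray(z, dtype=float)
        L = np.asarray(losses_at(z), dtype=float)
        num = z * L + eta_prime / len(z)
        return num / num.sum()  # num.sum() equals sum_j z_j L_j^z + eta'

    def iterate_phi(losses_at, k, eta_prime=1e-3, iters=1000):
        z = np.full(k, 1.0 / k)  # start at the uniform weights
        for _ in range(iters):
            z = phi(z, losses_at, eta_prime)
        return z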
Note that the lemma just presented does not use the structure of the distribution weighted combining
rule, but only the fact that the loss is continuous in the parameter z ∈ Δ. The lemma applies as well
to the linear combination rule and provides the same guarantee. The real crux of the argument is, as
shown in the next lemma, that γ is small for a distribution weighted combining rule (while it can be
very large for a linear combination rule).
Lemma 3. For any target function f ∈ F and any η, η′ > 0, there exists z ∈ Δ such that
L(D_λ, h_z^η, f) ≤ ε + ηM + η′ for any λ ∈ Δ.
Proof. Let z be the parameter guaranteed in Lemma 2. Then L(Di, h_z^η, f) = γ + η′ − η′/(z_i k) ≤
γ + η′, for 1 ≤ i ≤ k. Consider the mixture D_z, i.e., set the mixture parameter to be z. Consider the
quantity L(D_z, h_z^η, f). On the one hand, by definition, L(D_z, h_z^η, f) = Σ_{i=1}^{k} z_i L(Di, h_z^η, f) and
thus L(D_z, h_z^η, f) = γ. On the other hand,
$$\begin{aligned}
L(D_z, h_z^{\eta}, f) &= \sum_{x \in X} D_z(x)\, L(h_z^{\eta}(x), f(x)) \\
&\le \sum_{x \in X} \sum_{i=1}^{k} \Big(z_i D_i(x) + \frac{\eta U(x)}{k}\Big)\, \frac{D_z(x)}{D_z(x) + \eta U(x)}\, L(h_i(x), f(x)) \\
&\le \sum_{x \in X} \Big( \sum_{i=1}^{k} z_i D_i(x)\, L(h_i(x), f(x)) + \eta M U(x) \Big) \\
&= \sum_{i=1}^{k} z_i L(D_i, h_i, f) + \eta M \;=\; \sum_{i=1}^{k} z_i \epsilon_i + \eta M \;\le\; \epsilon + \eta M .
\end{aligned}$$
Therefore γ ≤ ε + ηM. To complete the proof, note that the following inequality holds for any
mixture D_λ:
$$L(D_\lambda, h_z^{\eta}, f) \;=\; \sum_{i=1}^{k} \lambda_i L(D_i, h_z^{\eta}, f) \;\le\; \gamma + \eta',$$
which is at most ε + ηM + η′.
By setting η = δ/(2M) and η′ = δ/2, we can derive the following theorem.
Theorem 7. For any target function f ∈ F and any δ > 0, there exists η > 0 and z ∈ Δ, such that
L(D_λ, h_z^η, f) ≤ ε + δ for any mixture parameter λ.
6 Arbitrary target function
The results of the previous section show that for any fixed target function there is a good distribution
weighted combining rule. In this section, we wish to extend these results to the case where the target
function is not fixed in advance. Thus, we seek a single distribution weighted combining rule that
can perform well for any f ∈ F and any mixture D_λ. Unfortunately, we are not able to prove a
bound of ε + o(ε) but only a bound of 3ε. To show this bound we will show that for any f1, f2 ∈ F
and any hypothesis h the difference of loss is bounded by at most 2ε.
Lemma 4. Assume that the loss function L obeys the triangle inequality, i.e., L(f, h) ≤ L(f, g) +
L(g, h). Then for any f, f′ ∈ F and any mixture DT, the inequality L(DT, h, f′) ≤ L(DT, h, f) +
2ε holds for any hypothesis h.
Proof. Since our loss function obeys the triangle inequality, for any functions f, g, h, the following
holds: L(D, f, h) ≤ L(D, f, g) + L(D, g, h). In our case, we observe that replacing g with any
f′ ∈ F gives L(D_λ, f, h) ≤ L(D_λ, f′, h) + L(D_λ, f, f′). We can bound the term L(D_λ, f, f′)
with a similar inequality, L(D_λ, f, f′) ≤ L(D_λ, f, h_λ) + L(D_λ, f′, h_λ) ≤ 2ε, where h_λ is the
distribution weighted combining rule produced by choosing z = λ and using Theorem 2. Therefore,
for any f, f′ ∈ F we have L(D_λ, f, h) ≤ L(D_λ, f′, h) + 2ε, which completes the proof.
We derived the following corollary to Theorem 7.
Corollary 1. Assume that the loss function L obeys the triangle inequality. Then, for any δ > 0,
there exists η > 0 and z ∈ Δ, such that for any mixture parameter λ and any f ∈ F,
L(D_λ, h_z^η, f) ≤ 3ε + δ.
[Figure 1: three MSE plots. Left panel: uniform mixture over the four domains. Middle panel: mixture = α book + (1 − α) kitchen. Right panel: mixture = α dvd + (1 − α) electronics. Curves shown: weighted, linear, and the per-domain base hypotheses; x-axis: α (or domain index in the left panel); y-axis: MSE.]
Figure 1: (a) MSE performance for a target mixture of four domains (1: books, 2: dvd, 3: electronics,
4: kitchen, 5: linear, 6: weighted). (b) MSE performance under various mixtures of two source
domains, left plot: book and kitchen, right plot: dvd and electronics.
7 Empirical results
This section reports the results of our experiments with a distribution weighted combining rule using
real-world data. In our experiments, we fixed a mixture target distribution D_λ and considered the
distribution weighted combining rule h_z, with z = λ. Since we used real-world data, we did not have
access to the domain distributions. Instead, we modeled each distribution and used large amounts
of unlabeled data available for each source to estimate the model's parameters. One could have thus
expected potentially significantly worse empirical results than the theoretical ones, but this turned
out not to be an issue in our experiments.
We used the sentiment analysis dataset found in [4].² The data consists of review text and rating labels, taken from amazon.com product reviews within four different categories (domains).
These four domains consist of book, dvd, electronics and kitchen reviews, where each domain contains 2000 data points.³
In our first experiment, we considered mixtures of all four domains, where the test set was a uniform
mixture of 600 points, that is the union of 150 points taken uniformly at random from each domain.
The remaining 1,850 points from each domain were used to train the base hypotheses.⁴ We compared our proposed weighted combining rule to the linear combining rule. The results are shown
in Figure 1(a). They show that the base hypotheses perform poorly on the mixture test set, which
justifies the need for adaptation. Furthermore, the distribution weighted combining rule is shown to
perform at least as well as the worst in-domain performance of a base hypothesis, as expected from
our bounds. Finally, we observe that this real-world data experiment gives an example in which a
linear combining rule performs poorly compared to the distribution weighted combining rule.
In other experiments, we considered the mixture of two domains, where the mixture is varied according to the parameter α ∈ {0.1, 0.2, ..., 1.0}. For each plot in Figure 1(b), the test set consists
of 600α points from the first domain and 600(1 − α) points from the second domain, where the
first and second domains are made clear in the figure. The remaining points that were not used for
testing were used to train the base hypotheses. The results show the linear shift from one domain to
the other, as is evident from the performance of the two base hypotheses. The distribution weighted
combining rule outperforms the base hypotheses as well as the linear combining rule.
² http://www.seas.upenn.edu/~mdredze/datasets/sentiment/.
³ The rating label, an integer between 1 and 5, was used as a regression label, and the loss measured by the
mean squared error (MSE). All base hypotheses were generated using Support Vector Regression (SVR) [17]
with the trade-off parameters C = 8, ε = 0.1, and a Gaussian kernel with parameter g = 0.00078. The SVR
solutions were obtained using the libSVM software library (http://www.csie.ntu.edu.tw/~cjlin/libsvm/).
Our features were defined as the set of unigrams appearing five times or more in all domains. This defined
about 4000 unigrams. We used a binary feature vector encoding the presence or absence of these frequent
unigrams to define our instances. To model the domain distributions, we used a unigram statistical language
model trained on the same corpus as the one used to define the features. The language model was created using
the GRM library (http://www.research.att.com/~fsmtools/grm/).
⁴ Each experiment was repeated 20 times with random folds. The standard deviation found was far below
what could be legibly displayed in the figures.
Thus, our preliminary experiments suggest that the distribution weighted combining rule performs
well in practice and clearly outperforms a simple linear combining rule. Furthermore, using statistical language models as approximations to the distribution oracles seems to be sufficient in practice
and can help produce a good distribution weighted combining rule.
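For readers who want to approximate this setup, the sketch below is our own reconstruction: it substitutes scikit-learn's SVR for the libSVM binary and a crude add-one unigram model for the GRM language models, keeping the SVR parameters reported in footnote 3. It is illustrative, not the authors' code.

    import math
    from collections import Counter
    from sklearn.svm import SVR

    def train_domain(texts, features, ratings, vocab):
        # Base hypothesis: SVR with C=8, epsilon=0.1, Gaussian kernel
        # g=0.00078 (parameters from footnote 3); features are the binary
        # unigram indicator vectors described there.
        h = SVR(C=8.0, epsilon=0.1, kernel="rbf", gamma=0.00078).fit(features, ratings)
        counts = Counter(w for t in texts for w in t.split())
        total = sum(counts.values())
        def log_density(text):
            # add-one smoothed unigram log-probability, a stand-in for D_i
            return sum(math.log((counts[w] + 1.0) / (total + len(vocab)))
                       for w in text.split())
        return h, log_density

The per-domain pairs (h_i, D_i) produced this way can then be combined pointwise with the weighted_rule sketch given earlier, using z = λ and exponentiated, normalized log-densities.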
8 Conclusion
We presented a theoretical analysis of the problem of adaptation with multiple sources. Domain
adaptation is an important problem that arises in a variety of modern applications where limited or
no labeled data is available for a target application, and our analysis can be relevant in a variety of
situations. The theoretical guarantees proven for the distribution weighted combining rule provide it
with a strong foundation. Its empirical performance with a real-world data set further motivates
its use in applications. Much of the results presented were based on the assumption that the target
distribution is some mixture of the source distributions. A further analysis suggests however that
our main results can be extended to arbitrary target distributions.
Acknowledgments
We thank Jennifer Wortman for helpful comments on an earlier draft of this paper and Ryan McDonald for
discussions and pointers to data sets. The work of M. Mohri and A. Rostamizadeh was partly supported by the
New York State Office of Science Technology and Academic Research (NYSTAR).
References
[1] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In Proceedings of NIPS 2006. MIT Press, 2007.
[2] Jacob Benesty, M. Mohan Sondhi, and Yiteng Huang, editors. Springer Handbook of Speech Processing. Springer, 2008.
[3] John Blitzer, Koby Crammer, A. Kulesza, Fernando Pereira, and Jennifer Wortman. Learning bounds for domain adaptation. In Proceedings of NIPS 2007. MIT Press, 2008.
[4] John Blitzer, Mark Dredze, and Fernando Pereira. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In ACL 2007, Prague, Czech Republic, 2007.
[5] Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from Data of Variable Quality. In Proceedings of NIPS 2005, 2006.
[6] Koby Crammer, Michael Kearns, and Jennifer Wortman. Learning from multiple sources. In Proceedings of NIPS 2006, 2007.
[7] Mark Dredze, John Blitzer, Partha Pratim Talukdar, Kuzman Ganchev, Joao Graca, and Fernando Pereira. Frustratingly Hard Domain Adaptation for Parsing. In CoNLL 2007, Prague, Czech Republic, 2007.
[8] Jean-Luc Gauvain and Chin-Hui Lee. Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains. IEEE Transactions on Speech and Audio Processing, 2(2):291–298, 1994.
[9] Frederick Jelinek. Statistical Methods for Speech Recognition. The MIT Press, 1998.
[10] Jing Jiang and ChengXiang Zhai. Instance Weighting for Domain Adaptation in NLP. In Proceedings of ACL 2007, pages 264–271, Prague, Czech Republic, 2007. Association for Computational Linguistics.
[11] C. J. Leggetter and Phil C. Woodland. Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models. Computer Speech and Language, pages 171–185, 1995.
[12] Aleix M. Martinez. Recognizing imprecisely localized, partially occluded, and expression variant faces from a single sample per class. IEEE Trans. Pattern Anal. Mach. Intell., 24(6):748–763, 2002.
[13] S. Della Pietra, V. Della Pietra, R. L. Mercer, and S. Roukos. Adaptive language modeling using minimum discriminant estimation. In HLT '91: Proceedings of the Workshop on Speech and Natural Language, pages 103–106, Morristown, NJ, USA, 1992. Association for Computational Linguistics.
[14] Brian Roark and Michiel Bacchiani. Supervised and unsupervised PCFG adaptation to novel domains. In Proceedings of HLT-NAACL, 2003.
[15] Roni Rosenfeld. A Maximum Entropy Approach to Adaptive Statistical Language Modeling. Computer Speech and Language, 10:187–228, 1996.
[16] Leslie G. Valiant. A theory of the learnable. ACM Press, New York, NY, USA, 1984.
[17] Vladimir N. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998.
Unlabeled data: Now it helps, now it doesn't
Aarti Singh, Robert D. Nowak*
Department of Electrical and Computer Engineering
University of Wisconsin - Madison
Madison, WI 53706
{singh@cae,nowak@engr}.wisc.edu
Xiaojin Zhu†
Department of Computer Sciences
University of Wisconsin - Madison
Madison, WI 53706
[email protected]
Abstract
Empirical evidence shows that in favorable situations semi-supervised learning
(SSL) algorithms can capitalize on the abundance of unlabeled training data to
improve the performance of a learning task, in the sense that fewer labeled training data are needed to achieve a target error bound. However, in other situations
unlabeled data do not seem to help. Recent attempts at theoretically characterizing SSL gains only provide partial and sometimes apparently conflicting explanations of whether, and to what extent, unlabeled data can help. In this paper,
we attempt to bridge the gap between the practice and theory of semi-supervised
learning. We develop a finite sample analysis that characterizes the value of unlabeled data and quantifies the performance improvement of SSL compared to
supervised learning. We show that there are large classes of problems for which
SSL can significantly outperform supervised learning, in finite sample regimes
and sometimes also in terms of error convergence rates.
1 Introduction
Labeled data can be expensive, time-consuming and difficult to obtain in many applications. Semisupervised learning (SSL) aims to capitalize on the abundance of unlabeled data to improve learning
performance. Empirical evidence suggests that in certain favorable situations unlabeled data can
help, while in other situations it does not. As a result, there have been several recent attempts
[1, 2, 3, 4, 5, 6] at developing a theoretical understanding of semi-supervised learning. It is wellaccepted that unlabeled data can help only if there exists a link between the marginal data distribution
and the target function to be learnt. Two common types of links considered are the cluster assumption [7, 3, 4] which states that the target function is locally smooth over subsets of the feature space
delineated by some property of the marginal density (but may not be globally smooth), and the manifold assumption [4, 6] which assumes that the target function lies on a low-dimensional manifold.
Knowledge of these sets, which can be gleaned from unlabeled data, simplify the learning task.
However, recent attempts at characterizing the amount of improvement possible under these links
only provide a partial and sometimes apparently conflicting (for example, [4] vs. [6]) explanations
of whether or not, and to what extent semi-supervised learning helps. In this paper, we bridge the
gap between these seemingly conflicting views and develop a minimax framework based on finite
sample bounds to identify situations in which unlabeled data help to improve learning. Our results
quantify both the amount of improvement possible using SSL as well as the the relative value of
unlabeled data.
We focus on learning under a cluster assumption that is formalized in the next section, and establish that there exist nonparametric classes of distributions, denoted PXY , for which the decision
sets (over which the target function is smooth) are discernable from unlabeled data. Moreover,
we show that there exist clairvoyant supervised learners that, given perfect knowledge of the decision sets denoted by D, can significantly outperform any generic supervised learner fn in these
* Supported in part by the NSF grants CCF-0353079, CCF-0350213, and CNS-0519824.
† Supported in part by the Wisconsin Alumni Research Foundation.
[Figure 1: three panels (a), (b), (c) illustrating two high density sets and samples of increasing density.]
Figure 1: (a) Two separated high density sets with different labels that (b) cannot be discerned if the
sample size is too small, but (c) can be estimated if sample density is high enough.
classes. That is, if R denotes a risk of interest, n denotes the labeled data sample size, f̂_{D,n} denotes
the clairvoyant supervised learner, and E denotes expectation with respect to training data, then
sup_{P_XY} E[R(f̂_{D,n})] < inf_{f_n} sup_{P_XY} E[R(f_n)]. Based on this, we establish that there also exist
semi-supervised learners, denoted f̂_{m,n}, that use m unlabeled examples in addition to the n labeled
examples in order to estimate the decision sets, which perform as well as f̂_{D,n}, provided that m
grows appropriately relative to n. Specifically, if the error bound for f̂_{D,n} decays polynomially (exponentially) in n, then the number of unlabeled data m needs to grow polynomially (exponentially)
with the number of labeled data n. We provide general results for a broad range of learning problems
using finite sample error bounds. Then we examine a concrete instantiation of these general results
in the regression setting by deriving minimax lower bounds on the performance of any supervised
learner and compare that to upper bounds on the errors of f̂_{D,n} and f̂_{m,n}.
In their seminal papers, Castelli and Cover [8, 9] suggested that in the classification setting the
marginal distribution can be viewed as a mixture of class conditional distributions. If this mixture is
identifiable, then the classification problem may reduce to a simple hypothesis testing problem for
which the error converges exponentially fast in the number of labeled examples. The ideas in this
paper are similar, except that we do not require identifiability of the mixture component densities,
and show that it suffices to only approximately learn the decision sets over which the label is smooth.
More recent attempts at theoretically characterizing SSL have been relatively pessimistic. Rigollet
[3] establishes that for a fixed collection of distributions satisfying a cluster assumption, unlabeled
data do not provide an improvement in convergence rate. A similar argument was made by Lafferty
and Wasserman [4], based on the work of Bickel and Li [10], for the manifold case. However, in
a recent paper, Niyogi [6] gives a constructive example of a class of distributions supported on a
manifold whose complexity increases with the number of labeled examples, and he shows that the
error of any supervised learner is bounded from below by a constant, whereas there exists a semisupervised learner that can provide an error bound of O(n^{−1/2}), assuming infinite unlabeled data.
In this paper, we bridge the gap between these seemingly conflicting views. Our arguments can
be understood by the simple example shown in Fig. 1, where the distribution is supported on two
component sets separated by a margin γ and the target function is smooth over each component.
Given a finite sample of data, these decision sets may or may not be discernable depending on the
sampling density (see Fig. 1(b), (c)). If γ is fixed (this is similar to fixing the class of cluster-based
distributions in [3] or the manifold in [4, 10]), then given enough labeled data a supervised learner
can achieve optimal performance (since, eventually, it operates in regime (c) of Fig. 1). Thus, in this
example, there is no improvement due to unlabeled data in terms of the rate of error convergence for
a fixed collection of distributions. However, since the true separation between the component sets
is unknown, given a finite sample of data, there always exists a distribution for which these sets are
indiscernible (e.g., γ → 0). This perspective is similar in spirit to the argument in [6]. We claim
data require finite sample error bounds, and that rates of convergence and asymptotic analysis may
not capture the distinctions between SSL and supervised learning. Simply stated, if the component
density sets are discernable from a finite sample size m of unlabeled data but not from a finite sample
size n < m of labeled data, then SSL can provide better performance than supervised learning. We
also show that there are certain plausible situations in which SSL yields rates of convergence that
cannot be achieved by any supervised learner.
[Figure 2: two panels in the (x1, x2) plane showing component support sets bounded by functions g^{(1)}_k(x1) and g^{(2)}_k(x1); left: disjoint sets separated by γ (margin positive), right: overlapping sets (margin negative).]
Figure 2: Margin γ measures the minimum width of a decision set or separation between the support
sets of the component marginal mixture densities. The margin is positive if the component support
sets are disjoint, and negative otherwise.
2 Characterization of model distributions under the cluster assumption
Based on the cluster assumption [7, 3, 4], we define the following collection of joint distributions
P_XY(γ) = P_X × P_{Y|X} indexed by a margin parameter γ. Let X, Y be bounded random variables
with marginal distribution P_X ∈ P_X and conditional label distribution P_{Y|X} ∈ P_{Y|X}, supported
on the domain X = [0, 1]^d.
The marginal density p(x) = Σ_{k=1}^{K} a_k p_k(x) is the mixture of a finite, but unknown, number of
component densities {p_k}_{k=1}^{K}, where K < ∞. The unknown mixing proportions satisfy a_k ≥ a > 0 and
Σ_{k=1}^{K} a_k = 1. In addition, we place the following assumptions on the mixture component densities:
1. p_k is supported on a unique compact, connected set C_k ⊆ X with Lipschitz boundaries. Specifically, we assume the following form for the component support sets (see Fig. 2 for a d = 2 illustration):
$$C_k = \{x \equiv (x_1, \dots, x_d) \in \mathcal{X} : g_k^{(1)}(x_1, \dots, x_{d-1}) \le x_d \le g_k^{(2)}(x_1, \dots, x_{d-1})\},$$
where g_k^{(1)}(·), g_k^{(2)}(·) are (d − 1)-dimensional Lipschitz functions with Lipschitz constant L.¹
2. p_k is bounded from above and below, 0 < b ≤ p_k ≤ B.
3. p_k is Hölder-α smooth on C_k with Hölder constant K_1 [12, 13].
Let the conditional label density on C_k be denoted by p_k(Y|X = x). Thus, a labeled training
point (X, Y) is obtained as follows. With probability a_k, X is drawn from p_k and Y is drawn from
p_k(Y|X = x). In the supervised setting, we assume access to n labeled data L = {X_i, Y_i}_{i=1}^{n}
drawn i.i.d. according to P_XY ∈ P_XY(γ), and in the semi-supervised setting, we assume access to
m additional unlabeled data U = {X_i}_{i=1}^{m} drawn i.i.d. according to P_X ∈ P_X.
Let D denote the collection of all non-empty sets obtained as intersections of {C_k}_{k=1}^{K} or their
complements {C_k^c}_{k=1}^{K}, excluding the set ∩_{k=1}^{K} C_k^c that does not lie in the support of the marginal
density. Observe that |D| ≤ 2^K, and in practical situations the cardinality of D is much smaller
as only a few of the sets are non-empty. The cluster assumption is that the target function will be
smooth on each set D ∈ D, hence the sets in D are called decision sets. At this point, we do not
consider a specific target function.
The collection P_XY is indexed by a margin parameter γ, which denotes the minimum width of
a decision set or separation between the component support sets C_k. The margin γ is assigned a
positive sign if there is no overlap between components, otherwise it is assigned a negative sign as
illustrated in Figure 2. Formally, for j, k ∈ {1, ..., K}, let
$$d_{jk} := \min_{p,q \in \{1,2\}} \|g_j^{(p)} - g_k^{(q)}\|_\infty \quad \text{where } j \ne k, \qquad d_{kk} := \|g_k^{(1)} - g_k^{(2)}\|_\infty .$$
Then the margin is defined as
$$\gamma = \sigma \cdot \min_{j,k \in \{1,\dots,K\}} d_{jk}, \qquad \text{where } \sigma = \begin{cases} 1 & \text{if } C_j \cap C_k = \emptyset \;\; \forall\, j \ne k \\ -1 & \text{otherwise.} \end{cases}$$
¹ This form is a slight generalization of the boundary fragment class of sets which is used as a common
tool for analysis of learning problems [11]. Boundary fragment sets capture the salient characteristics of more
general decision sets since, locally, the boundaries of general sets are like fragments in a certain orientation.
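For given boundary functions, the margin defined above can be approximated numerically. The sketch below is our own, for the d = 2 case where each boundary is a function of x_1 alone; it evaluates the sup-norm distances on a grid, which is an approximation, not an exact computation.

    import itertools
    import numpy as np

    def margin(boundaries, npts=1000):
        # boundaries: list of (g1, g2) pairs of vectorized callables on [0, 1],
        # one pair per component support set C_k.
        x1 = np.linspace(0.0, 1.0, npts)
        K = len(boundaries)
        supdist = lambda f, g: np.max(np.abs(f(x1) - g(x1)))
        d = np.empty((K, K))
        for j in range(K):
            for k in range(K):
                if j == k:
                    d[j, k] = supdist(*boundaries[j])  # d_kk: width of C_k
                else:
                    d[j, k] = min(supdist(boundaries[j][p], boundaries[k][q])
                                  for p, q in itertools.product((0, 1), repeat=2))
        def overlap(j, k):
            # C_j and C_k intersect iff their vertical intervals overlap somewhere
            lo = np.maximum(boundaries[j][0](x1), boundaries[k][0](x1))
            hi = np.minimum(boundaries[j][1](x1), boundaries[k][1](x1))
            return bool(np.any(lo <= hi))
        sigma = -1.0 if any(overlap(j, k) for j in range(K)
                            for k in range(j + 1, K)) else 1.0
        return sigma * d.min()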
3 Learning Decision Sets
Ideally, we would like to break a given learning task into separate subproblems on each D ∈ D since
the target function is smooth on each decision set. Note that the marginal density p is also smooth
within each decision set, but exhibits jumps at the boundaries since the component densities are
bounded away from zero. Hence, the collection D can be learnt from unlabeled data as follows:
1) Marginal density estimation: The procedure is based on the sup-norm kernel density estimator
proposed in [14]. Consider a uniform square grid over the domain X = [0, 1]^d with spacing 2h_m,
where h_m = κ_0((log m)²/m)^{1/d} and κ_0 > 0 is a constant. For any point x ∈ X, let [x] denote the
closest point on the grid. Let G denote the kernel and H_m = h_m I, then the estimator of p(x) is
$$\hat{p}(x) = \frac{1}{m h_m^d} \sum_{i=1}^{m} G\big(H_m^{-1}(X_i - [x])\big).$$
2) Decision set estimation: Two points x_1, x_2 ∈ X are said to be connected, denoted by x_1 ↔ x_2,
if there exists a sequence of points x_1 = z_1, z_2, ..., z_{l−1}, z_l = x_2 such that z_2, ..., z_{l−1} ∈ U,
‖z_j − z_{j+1}‖ ≤ 2√d h_m, and for all points in the sequence that satisfy ‖z_i − z_j‖ ≤ h_m log m,
|p̂(z_i) − p̂(z_j)| ≤ δ_m := (log m)^{−1/3}. That is, there exists a sequence of 2√d h_m-dense unlabeled
data points between x_1 and x_2 such that the marginal density varies smoothly along the sequence.
All points that are pairwise connected specify an empirical decision set. This decision set estimation
procedure is similar in spirit to the semi-supervised learning algorithm proposed in [15]. In practice,
these sequences only need to be evaluated for the test and labeled training points.
The following lemma shows that if the margin is large relative to the average spacing m^{−1/d} between
unlabeled data points, then with high probability, two points are connected if and only if they lie in
the same decision set D ∈ D, provided the points are not too close to the decision boundaries. The
proof sketch of the lemma and all other results are deferred to Section 7.
Lemma 1. Let ∂D denote the boundary of D and define the set of boundary points as
$$B = \Big\{x : \inf_{z \,\in\, \cup_{D \in \mathcal{D}} \partial D} \|x - z\| \le 2\sqrt{d}\, h_m \Big\}.$$
If |γ| > C_o(m/(log m)²)^{−1/d}, where C_o = 6√d κ_0, then for all p ∈ P_X, all pairs of points
x_1, x_2 ∈ supp(p) \ B and all D ∈ D, with probability > 1 − 1/m,
$$x_1, x_2 \in D \quad \text{if and only if} \quad x_1 \leftrightarrow x_2$$
for all large enough m ≥ m_0, where m_0 depends only on the fixed parameters of the class P_XY(γ).
4 SSL Performance and the Value of Unlabeled Data
We now state our main result that characterizes the performance of SSL relative to supervised learning and follows as a corollary to the lemma stated above. Let R denote a risk of interest and
E(f̂) = R(f̂) − R*, where R* is the infimum risk over all possible learners.
Corollary 1. Assume that the excess risk E is bounded. Suppose there exists a clairvoyant supervised learner f̂_{D,n}, with perfect knowledge of the decision sets D, for which the following finite
sample upper bound holds:
$$\sup_{P_{XY}(\gamma)} E[\mathcal{E}(\hat{f}_{D,n})] \le \epsilon_2(n).$$
Then there exists a semi-supervised learner f̂_{m,n} such that if |γ| > C_o(m/(log m)²)^{−1/d},
$$\sup_{P_{XY}(\gamma)} E[\mathcal{E}(\hat{f}_{m,n})] \le \epsilon_2(n) + O\!\left(\frac{1}{m} + n\left(\frac{m}{(\log m)^2}\right)^{-1/d}\right).$$
This result captures the essence of the relative characterization of semi-supervised and supervised
learning for the margin based model distributions. It suggests that if the sets D are discernable
using unlabeled data (the margin is large enough compared to average spacing between unlabeled
data points), then there exists a semi-supervised learner that can perform as well as a supervised
learner with clairvoyant knowledge of the decision sets, provided m ≫ n so that (n/ε_2(n))^d =
O(m/(log m)²), implying that the additional term in the performance bound for f̂_{m,n} is negligible
compared to ε_2(n). This indicates that if ε_2(n) decays polynomially (exponentially) in n, then m
needs to grow polynomially (exponentially) in n.
Further, suppose that the following finite sample lower bound holds for any supervised learner:
$$\inf_{f_n} \sup_{P_{XY}(\gamma)} E[\mathcal{E}(f_n)] \ge \epsilon_1(n).$$
If ε_2(n) < ε_1(n), then there exists a clairvoyant supervised learner with perfect knowledge of the
decision sets that outperforms any supervised learner that does not have this knowledge. Hence,
Corollary 1 implies that SSL can provide better performance than any supervised learner provided
(i) m ≫ n so that (n/ε_2(n))^d = O(m/(log m)²), and (ii) knowledge of the decision sets simplifies
the supervised learning task, so that ε_2(n) < ε_1(n). In the next section, we provide a concrete
application of this result in the regression setting. As a simple example in the binary classification
setting, if p(x) is supported on two disjoint sets and if P(Y = 1|X = x) is strictly greater than
1/2 on one set and strictly less than 1/2 on the other, then perfect knowledge of the decision sets
reduces the problem to a hypothesis testing problem for which ε_2(n) = O(e^{−ζn}), for some constant
ζ > 0. However, if γ is small relative to the average spacing n^{−1/d} between labeled data points,
then ε_1(n) = c n^{−1/d} where c > 0 is a constant. This lower bound follows from the minimax lower
bound proofs for regression in the next section. Thus, an exponential improvement is possible using
semi-supervised learning provided m grows exponentially in n.
5 Density-adaptive Regression
Let Y denote a continuous and bounded random variable. Under squared error loss, the target
function is f(x) = E[Y|X = x], and E(f̂) = E[(f̂(X) − f(X))²]. Recall that p_k(Y|X = x)
is the conditional density on the k-th component and let E_k denote expectation with respect to the
corresponding conditional distribution. The regression function on each component is f_k(x) =
E_k[Y|X = x] and we assume that for k = 1, ..., K
1. f_k is uniformly bounded, |f_k| ≤ M.
2. f_k is Hölder-α smooth on C_k with Hölder constant K_2.
This implies that the overall regression function f(x) is piecewise Hölder-α smooth; i.e., it is
Hölder-α smooth on each D ∈ D, except possibly at the component boundaries.² Since a Hölder-α
that is, using training data that are connected as per the definition in Section 3. While a spatially
uniform estimator suffices when the decision sets are discernable, we use the following spatially
adaptive estimator proposed in Section 4.1 of [12]. This ensures that when the decision sets are
indiscernible using unlabeled data, the semi-supervised learner still achieves an error bound that is,
up to logarithmic factors, no worse than the minimax lower bound for supervised learners.
n
X
fbm,n,x (?) = arg min
(Yi ? f ? (Xi ))2 1x?Xi + pen(f ? )
and
fbm,n (x) ? fbm,n,x (x)
?
f ??
i=1
Here 1x?Xi is the indicator of x ? Xi and ? denotes a collection of piecewise polynomials
of degree [?] (the maximal integer < ?) defined over recursive dyadic partitions of the domain
X = [0, 1]d with cells of sidelength betweenP2??log(n/ log n)/(2?+d)? and 2??log(n/ log n)/d? . The
penalty term pen(f ? ) is proportional to log( ni=1 1x?Xi ) #f ? , where #f ? denotes the number
of cells in the recursive dyadic partition on which f ? is defined. It is shown in [12] that this
estimator yields a finite sample error bound of n?2?/(2?+d) for H?older-? smooth functions, and
max{n?2?/(2?+d) , n?1/d } for piecewise H?older-? functions, ignoring logarithmic factors.
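The essential step of f̂_{m,n}, fitting only on labeled points connected to the query point, can be sketched as follows. For simplicity this replaces the penalized recursive-dyadic-partition estimator of [12] with an ordinary least-squares polynomial fit of degree [α] on the connected points, so it illustrates the restriction to empirical decision sets rather than reproducing the exact estimator; the code is our own.

    import numpy as np

    def fit_at(x, X, Y, connected, degree):
        # connected[i] is True iff x <-> X[i] under the rule of Section 3
        Xc, Yc = X[connected], Y[connected]
        # coordinatewise polynomial features up to `degree` (no cross terms)
        feats = np.hstack([Xc ** p for p in range(degree + 1)])
        coef, *_ = np.linalg.lstsq(feats, Yc, rcond=None)
        xf = np.hstack([np.atleast_2d(x) ** p for p in range(degree + 1)])
        return float(xf @ coef)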
Using these results from [12] and Corollary 1, we now state finite sample upper bounds on the semisupervised learner (SSL) described above. Also, we derive finite sample minimax lower bounds on
the performance of any supervised learner (SL). Our main results are summarized in the following
table, for model distributions characterized by various values of the margin parameter γ. A sketch
² If the component marginal densities and regression functions have different smoothnesses, say α and β,
the same analysis holds except that f(x) is Hölder-min(α, β) smooth on each D ∈ D.
of the derivations of the results is provided in Section 7.3. Here we assume that the dimension d ≥
2α/(2α − 1). If d < 2α/(2α − 1), then the supervised learning error due to not resolving
the decision sets (which behaves like n^{−1/d}) is smaller than the error incurred in estimating the target
function itself (which behaves like n^{−2α/(2α+d)}). Thus, when d < 2α/(2α − 1), the supervised
regression error is dominated by the error in smooth regions and there appears to be no benefit to
using a semi-supervised learner. In the table, we suppress constants and log factors in the bounds,
and also assume that m ≫ n^{2d} so that (n/ε_2(n))^d = O(m/(log m)²). The constants c_o and C_o
only depend on the fixed parameters of the class P_XY(γ) and do not depend on γ.
Margin range γ | SSL upper bound ε_2(n) | SL lower bound ε_1(n) | SSL helps
γ ≥ γ_0 | n^{−2α/(2α+d)} | n^{−2α/(2α+d)} | No
γ_0 > γ ≥ c_o n^{−1/d} | n^{−2α/(2α+d)} | n^{−2α/(2α+d)} | No
c_o n^{−1/d} > γ ≥ C_o(m/(log m)²)^{−1/d} | n^{−2α/(2α+d)} | n^{−1/d} | Yes
C_o(m/(log m)²)^{−1/d} > γ ≥ −C_o(m/(log m)²)^{−1/d} | n^{−1/d} | n^{−1/d} | No
−C_o(m/(log m)²)^{−1/d} > γ ≥ −γ_0 | n^{−2α/(2α+d)} | n^{−1/d} | Yes
−γ_0 > γ | n^{−2α/(2α+d)} | n^{−1/d} | Yes
If γ is large relative to the average spacing between labeled data points, n^{−1/d}, then a supervised
learner can discern the decision sets accurately and SSL provides no gain. However, if γ > 0 is small
relative to n^{−1/d}, but large with respect to the spacing between unlabeled data points, m^{−1/d}, then
the proposed semi-supervised learner provides improved error bounds compared to any supervised
learner. If |γ| is smaller than m^{−1/d}, the decision sets are not discernable with unlabeled data and
SSL provides no gain. However, notice that the performance of the semi-supervised learner is no
worse than the minimax lower bound for supervised learners. In the γ < 0 case, if −γ is larger than
m^{−1/d}, then the semi-supervised learner can discern the decision sets and achieves smaller error
bounds, whereas these sets cannot be as accurately discerned by any supervised learner. For the
overlap case (γ < 0), supervised learners are always limited by the error incurred due to averaging
across decision sets (n^{−1/d}). In particular, for the collection of distributions with γ < −γ_0, a faster
rate of error convergence is attained by SSL compared to SL, provided m ≫ n^{2d}.
6 Conclusions
In this paper, we develop a framework for evaluating the performance gains possible with semisupervised learning under a cluster assumption using finite sample error bounds. The theoretical
characterization we present explains why in certain situations unlabeled data can help to improve
learning, while in other situations they may not. We demonstrate that there exist general situations
under which semi-supervised learning can be significantly superior to supervised learning in terms
of achieving smaller finite sample error bounds than any supervised learner, and sometimes in terms
of a better rate of error convergence. Moreover, our results also provide a quantification of the relative value of unlabeled to labeled data. While we focus on the cluster assumption in this paper, we
conjecture that similar techniques can be applied to quantify the performance of semi-supervised
learning under the manifold assumption as well. In particular, we believe that the use of minimax
lower bounding techniques is essential because many of the interesting distinctions between supervised and semi-supervised learning occur only in finite sample regimes, and rates of convergence
and asymptotic analyses may not capture the complete picture.
7 Proofs
We sketch the main ideas behind the proofs here; please refer to [13] for details. Since the component
densities are bounded from below and above, define p_min := b min_k a_k ≤ p(x) ≤ B =: p_max.
7.1 Proof of Lemma 1
First, we state two relatively straightforward results about the proposed kernel density estimator.
Theorem 1 (Sup-norm density estimation of non-boundary points). Consider the kernel density
estimator p̂(x) proposed in Section 3. If the kernel G satisfies supp(G) = [−1, 1]^d, 0 < G ≤
G_max < ∞, ∫_{[−1,1]^d} G(u) du = 1 and ∫_{[−1,1]^d} u^j G(u) du = 0 for 1 ≤ j ≤ [α], then for all
p ∈ P_X, with probability at least 1 − 1/m,
$$\sup_{x \in \mathrm{supp}(p) \setminus B} |p(x) - \hat{p}(x)| = O\!\left(h_m^{\min(1,\alpha)} + \sqrt{\frac{\log m}{m h_m^d}}\right) =: \epsilon_m .$$
Notice that ε_m decreases with increasing m. A detailed proof can be found in [13].
Corollary 2 (Empirical density of unlabeled data). Under the conditions of Theorem 1, for all
p ∈ P_X and all large enough m, with probability > 1 − 1/m, for all x ∈ supp(p) \ B, ∃ X_i ∈ U s.t.
‖X_i − x‖ ≤ √d h_m.
Proof. From Theorem 1, for all x ∈ supp(p) \ B, p̂(x) ≥ p(x) − ε_m ≥ p_min − ε_m > 0, for m
sufficiently large. This implies Σ_{i=1}^{m} G(H_m^{−1}(X_i − x)) > 0, and hence ∃ X_i ∈ U within √d h_m of x.
Using the density estimation results, we now show that if |γ| > 6√d h_m, then for all p ∈ P_X, all
pairs of points x_1, x_2 ∈ supp(p) \ B and all D ∈ D, for large enough m, with probability > 1 − 1/m,
we have x_1, x_2 ∈ D if and only if x_1 ↔ x_2. We establish this in two steps:
1. x_1 ∈ D, x_2 ∉ D implies x_1 is not connected to x_2: Since x_1 and x_2 belong to different decision
sets, all sequences connecting x_1 and x_2 through unlabeled data points pass through a region where
either (i) the density is zero and, since the region is at least |γ| > 6√d h_m wide, there cannot exist a
sequence as defined in Section 3 such that ‖z_j − z_{j+1}‖ ≤ 2√d h_m, or (ii) the density is positive.
In the latter case, the marginal density p(x) jumps by at least p_min one or more times along all
sequences connecting x_1 and x_2. Suppose the first jump occurs where decision set D ends
and another decision set D′ ≠ D begins (in the sequence). Then since D′ is at least |γ| > 6√d h_m
wide, by Corollary 2, for all sequences connecting x_1 and x_2 through unlabeled data points, there
exist points z_i, z_j in the sequence that lie in D \ B, D′ \ B, respectively, and ‖z_i − z_j‖ ≤ h_m log m.
Since the density on each decision set is Hölder-α smooth, we have |p(z_i) − p(z_j)| ≥ p_min −
O((h_m log m)^{min(1,α)}). Since z_i, z_j ∉ B, using Theorem 1, |p̂(z_i) − p̂(z_j)| ≥ |p(z_i) − p(z_j)| − 2ε_m > δ_m for large enough
m. Thus, x_1 is not connected to x_2.
2. x_1, x_2 ∈ D implies x_1 ↔ x_2: Since D has width at least |γ| > 6√d h_m, there exists a region of width
> 2√d h_m contained in D \ B, and Corollary 2 implies that, with probability > 1 − 1/m, there exist
sequence(s) contained in D \ B connecting x_1 and x_2 through 2√d h_m-dense unlabeled data points.
Since the sequence is contained in D and the density on D is Hölder-α smooth, we have for all points
z_i, z_j in the sequence that satisfy ‖z_i − z_j‖ ≤ h_m log m, |p(z_i) − p(z_j)| ≤ O((h_m log m)^{min(1,α)}).
Since z_i, z_j ∉ B, using Theorem 1, |p̂(z_i) − p̂(z_j)| ≤ |p(z_i) − p(z_j)| + 2ε_m ≤ δ_m for large enough
m. Thus, x_1 ↔ x_2.
7.2 Proof of Corollary 1
Let Ω_1 denote the event under which Lemma 1 holds. Then P(Ω_1^c) ≤ 1/m. Let Ω_2 denote the
event that the test point X and the training data X_1, ..., X_n ∈ L don't lie in B. Then P(Ω_2^c) ≤
(n + 1)P(B) ≤ (n + 1) p_max vol(B) = O(n h_m). The last step follows from the definition of the set
B and since the boundaries of the support sets are Lipschitz, K is finite, and hence vol(B) = O(h_m).
Now observe that f̂_{D,n} essentially uses the clairvoyant knowledge of the decision sets D to
discern which labeled points X_1, ..., X_n are in the same decision set as X. Conditioning on Ω_1, Ω_2, Lemma 1 implies that X, X_i ∈ D iff X ↔ X_i. Thus, we can define a
semi-supervised learner f̂_{m,n} to be the same as f̂_{D,n} except that instead of using clairvoyant
knowledge of whether X, X_i ∈ D, f̂_{m,n} is based on whether X ↔ X_i. It follows that
sup_{P_XY(γ)} E[E(f̂_{m,n}) | Ω_1, Ω_2] = sup_{P_XY(γ)} E[E(f̂_{D,n})], and since the excess risk is bounded:
$$\sup_{P_{XY}(\gamma)} E[\mathcal{E}(\hat{f}_{m,n})] \le \sup_{P_{XY}(\gamma)} E[\mathcal{E}(\hat{f}_{m,n}) \,|\, \Omega_1, \Omega_2] + O\!\left(\frac{1}{m} + n h_m\right).$$
7.3 Density-adaptive Regression results
1) Semi-Supervised Learning Upper Bound: The clairvoyant counterpart of f̂_{m,n}(x) is given as
f̂_{D,n}(x) ≡ f̂_{D,n,x}(x), where f̂_{D,n,x}(·) = arg min_{f′∈Γ} Σ_{i=1}^{n} (Y_i − f′(X_i))² 1_{x,X_i∈D} + pen(f′), and
is a standard supervised learner that performs a piecewise polynomial fit on each decision set, where
the regression function is Hölder-α smooth. Let n_D = Σ_{i=1}^{n} 1_{X_i∈D}. It follows [12] that
$$E[(f(X) - \hat{f}_{D,n}(X))^2\, 1_{X \in D} \,|\, n_D] \le C\, (n_D / \log n_D)^{-\frac{2\alpha}{d+2\alpha}} .$$
Since E[(f(X) − f̂_{D,n}(X))²] = Σ_{D∈D} E[(f(X) − f̂_{D,n}(X))² | X ∈ D] P(D), taking expectation over n_D ~ Binomial(n, P(D)) and summing over all decision sets, recalling that |D| is a
finite constant, the overall error of f̂_{D,n} scales as n^{−2α/(2α+d)}, ignoring logarithmic factors. If
|γ| > C_o(m/(log m)²)^{−1/d}, using Corollary 1, the same performance bound holds for f̂_{m,n} provided m ≫ n^{2d}. See [13] for further details. If |γ| < C_o(m/(log m)²)^{−1/d}, the decision sets are
not discernable using unlabeled data. Since the regression function is piecewise Hölder-α smooth
on each empirical decision set, using Theorem 9 in [12] and similar analysis, an upper bound of
max{n^{−2α/(2α+d)}, n^{−1/d}} follows, which scales as n^{−1/d} when d ≥ 2α/(2α − 1).
2) Supervised Learning Lower Bound: The formal minimax proof requires construction of a finite
subset of distributions in P_XY(γ) that are the hardest cases to distinguish based on a finite number
of labeled data n, and relies on a Hellinger version of Assouad's Lemma (Theorem 2.10 (iii) in [16]).
Complete details are given in [13]. Here we present the simple intuition behind the minimax lower
bound of n^{−1/d} when γ < c_o n^{−1/d}. In this case the decision boundaries can only be localized
to an accuracy of n^{−1/d}, the average spacing between labeled data points. Since the boundaries
are Lipschitz, the expected volume that is incorrectly assigned to any decision set is > c_1 n^{−1/d},
where c_1 > 0 is a constant. Thus, if the expected excess risk at a point that is incorrectly assigned
to a decision set can be greater than a constant c_2 > 0, then the overall expected excess risk is
> c_1 c_2 n^{−1/d}. This is the case for both regression and binary classification. If γ > c_o n^{−1/d}, the
decision sets can be accurately discerned from the labeled data alone. In this case, it follows that
the minimax lower bound is equal to the minimax lower bound for Hölder-α smooth regression
functions, which is c n^{−2α/(d+2α)}, where c > 0 is a constant [17].
References
[1] Balcan, M.F., Blum, A.: A PAC-style model for learning from labeled and unlabeled data. In: 18th Annual Conference on Learning Theory, COLT. (2005)
[2] Kääriäinen, M.: Generalization error bounds using unlabeled data. In: 18th Annual Conference on Learning Theory, COLT. (2005)
[3] Rigollet, P.: Generalization error bounds in semi-supervised classification under the cluster assumption. Journal of Machine Learning Research 8 (2007) 1369–1392
[4] Lafferty, J., Wasserman, L.: Statistical analysis of semi-supervised regression. In: Advances in Neural Information Processing Systems 21, NIPS. (2007) 801–808
[5] Ben-David, S., Lu, T., Pal, D.: Does unlabeled data provably help? Worst-case analysis of the sample complexity of semi-supervised learning. In: 21st Annual Conference on Learning Theory, COLT. (2008)
[6] Niyogi, P.: Manifold regularization and semi-supervised learning: Some theoretical analyses. Technical Report TR-2008-01, Computer Science Department, University of Chicago. URL http://people.cs.uchicago.edu/~niyogi/papersps/ssminimax2.pdf (2008)
[7] Seeger, M.: Learning with labeled and unlabeled data. Technical report, Institute for ANC, Edinburgh, UK. URL http://www.dai.ed.ac.uk/~seeger/papers.html (2000)
[8] Castelli, V., Cover, T.M.: On the exponential value of labeled samples. Pattern Recognition Letters 16(1) (1995) 105–111
[9] Castelli, V., Cover, T.M.: The relative value of labeled and unlabeled samples in pattern recognition. IEEE Transactions on Information Theory 42(6) (1996) 2102–2117
[10] Bickel, P.J., Li, B.: Local polynomial regression on unknown manifolds. In: IMS Lecture Notes-Monograph Series, Complex Datasets and Inverse Problems: Tomography, Networks and Beyond. Volume 54. (2007) 177–186
[11] Korostelev, A.P., Tsybakov, A.B.: Minimax Theory of Image Reconstruction. Springer, NY (1993)
[12] Castro, R., Willett, R., Nowak, R.: Faster rates in regression via active learning. Technical Report ECE-05-03, ECE Department, University of Wisconsin - Madison. URL http://www.ece.wisc.edu/~nowak/ECE-05-03.pdf (2005)
[13] Singh, A., Nowak, R., Zhu, X.: Finite sample analysis of semi-supervised learning. Technical Report ECE-08-03, ECE Department, University of Wisconsin - Madison. URL http://www.ece.wisc.edu/~nowak/SSL_TR.pdf (2008)
[14] Korostelev, A., Nussbaum, M.: The asymptotic minimax constant for sup-norm loss in nonparametric density estimation. Bernoulli 5(6) (1999) 1099–1118
[15] Chapelle, O., Zien, A.: Semi-supervised classification by low density separation. In: Tenth International Workshop on Artificial Intelligence and Statistics. (2005) 57–64
[16] Tsybakov, A.B.: Introduction à l'estimation non-paramétrique. Springer, Berlin Heidelberg (2004)
[17] Stone, C.J.: Optimal rates of convergence for nonparametric estimators. The Annals of Statistics 8(6) (1980) 1348–1360
2,815 | 3,552 | Modeling Short-term Noise Dependence
of Spike Counts in Macaque Prefrontal Cortex
Arno Onken
Technische Universität Berlin / BCCN Berlin
aonken@cs.tu-berlin.de

Steffen Grünewälder
Technische Universität Berlin
Franklinstr. 28/29, 10587 Berlin, Germany
steffen@cs.tu-berlin.de

Matthias Munk
MPI for Biological Cybernetics
Spemannstr. 38, 72076 Tübingen, Germany
matthias.munk@tuebingen.mpg.de

Klaus Obermayer
Technische Universität Berlin / BCCN Berlin
oby@cs.tu-berlin.de
Abstract
Correlations between spike counts are often used to analyze neural coding. The
noise is typically assumed to be Gaussian. Yet, this assumption is often inappropriate, especially for low spike counts. In this study, we present copulas as an
alternative approach. With copulas it is possible to use arbitrary marginal distributions such as Poisson or negative binomial that are better suited for modeling
noise distributions of spike counts. Furthermore, copulas provide a wide range of dependence structures and can be used to analyze higher order interactions. We develop a framework to analyze spike count data by means of copulas. Methods for parameter inference based on maximum likelihood estimates
and for computation of mutual information are provided. We apply the method
to our data recorded from macaque prefrontal cortex. The data analysis leads to
three findings: (1) copula-based distributions provide significantly better fits than
discretized multivariate normal distributions; (2) negative binomial margins fit the
data significantly better than Poisson margins; and (3) the dependence structure
carries 12% of the mutual information between stimuli and responses.
1
Introduction
Understanding neural coding is at the heart of theoretical neuroscience. Analyzing spike counts of
a population is one way to gain insight into neural coding properties. Even when the same stimulus
is presented repeatedly, responses from the neurons vary, i.e. from trial to trial responses of neurons are subject to noise. The noise variations of neighboring neurons are typically correlated (noise
correlations). Due to their relevance for neural coding, noise correlations have been the subject of a considerable number of studies (see [1] for a review). However, these studies always assumed Gaussian
noise. Thus, correlated spike rates were generally modeled by multivariate normal distributions with
a specific covariance matrix that describes all pairwise linear correlations.
For long time intervals or high firing rates, the average number of spikes is sufficiently large for the
central limit theorem to apply and thus the normal distribution is a good approximation for the spike
count distributions. However, several experimental findings suggest that noise correlations as well
as sensory information processing predominantly take place on a shorter time scale, on the order of
tens to hundreds of milliseconds [2, 3]. It is therefore questionable if the normal distribution is still
an appropriate approximation and if the results of studies based on Gaussian noise apply to short
time intervals and low firing rates.
[Figure 1 plots, panels (a)-(d); axes: N1 vs. N2 in spikes per bin.]
Figure 1: (a): Recording of correlated spike trains from two neurons and conversion to spike counts.
(b): The distributions of the spike counts of a neuron pair from the data described in Section 4 for
100 ms time bins. Dark squares represent a high number of occurrences of corresponding pairs of
spike counts. One can see that the spike counts are correlated since the ratios are high near the
diagonal. The distributions of the individual spike counts are plotted below and left of the axes.
(c): Density of a fit with a bivariate normal distribution. (d): Distribution of a fit with negative
binomial margins coupled with the Clayton copula.
This is due to several major drawbacks of the multivariate normal distribution: (1) Its margins
are continuous with a symmetric shape, whereas empirical distributions of real spike counts tend
to have a positive skew, i.e. the mass of the distribution is concentrated at the left of its mode.
Moreover, the normal distribution allows negative values which are not meaningful for spike counts.
Especially for low rates, this can become a major issue, since the probability of negative values
will be high. (2) The dependence structure of a multivariate normal distribution is always elliptical,
whereas spike counts of short time bins can have a bulb-shaped dependence structure (see Fig. 1b).
(3) The multivariate normal distribution does not allow higher order correlations of its elements.
Instead, only pairwise correlations can be modeled. It was shown that pairwise interactions are
sufficient for retinal ganglion cells and cortex cells in vitro [4]. However, there is evidence that
they are insufficient for subsequent cortex areas in vivo [5]. We will show that our data recorded in
prefrontal cortex suggest that higher order interactions (which involve more than two neurons) do
play an important role in the prefrontal cortex as well.
In this paper, we present a method that addresses the above shortcomings of the multivariate normal
distribution. We apply copulas [6] to form multivariate distributions with a rich set of dependence
structures and discrete marginal distributions, including the Poisson distribution. Copulas were
previously applied to model the distribution of continuous first-spike-latencies [7]. Here we apply
this concept to spike counts.
2
Copulas
We give an informal introduction to copulas and apply the concept to a pair of neurons from our data
which are described and fully analyzed in Section 4. Formal details of copulas follow in Section 3.2.
A copula is a cumulative distribution function that can couple arbitrary marginal distributions. There
are many families of copulas, each with a different dependence structure. Some families have an
elliptical dependence structure, similar to the multivariate normal distribution. However, it is also
possible to use completely different dependence structures which are more appropriate for the data
at hand.
As an example, consider the modeling of spike count dependencies of two neurons (Fig. 1). Spike
trains are recorded from the neurons and transformed to spike counts (Fig. 1a). Counting leads to a
bivariate empirical distribution (Fig. 1b). The distribution of the counts depends on the length of the
time bin that is used to count the spikes, here 100 ms. In the case considered, the correlation at low
counts is higher than at high counts. This is called lower tail dependence.
The density of a typical population model based on the multivariate normal (MVN) distribution
is shown in Fig. 1c. Here, we did not discretize the distribution since the standard approach to
investigate noise correlations also uses the continuous distribution [1]. The mean and covariance
matrix of the MVN distribution correspond to the sample mean and the sample covariances of the
empirical distribution. Yet, the dependence structure does not reflect the true dependence structure
of the counts. But the spike count probabilities for a copula-based distribution (Fig. 1d) correspond
well to the empirical distribution in Fig. 1b.
The modeling of spike count data with the help of a copula is done in three steps: (1) A marginal
distribution, e.g. a Poisson or a negative binomial distribution is chosen, based on the spike count
distribution of the individual neurons. (2) The counts are transformed to probabilities using the
cumulative distribution function of the marginal distribution. (3) The probabilities and thereby the
cumulative marginal distributions are coupled with the help of a so-called copula function. As an
example, consider the Clayton copula family [6]. For two variables the copula is given by
$$C(p_1, p_2, \theta) = \frac{1}{\sqrt[\theta]{\max\{p_1^{-\theta} + p_2^{-\theta} - 1,\, 0\}}}$$
where $p_i$ denotes the probability of the spike count $X_i$ of the $i$th neuron being lower than or equal to $r_i$ (i.e. $p_i = P(X_i \le r_i)$). Note that there are generalizations to more than two margins (see Section 3.2). The function $C(p_1, p_2, \theta)$ generates a joint cumulative distribution function by coupling the margins and thereby introduces correlations of second and higher order between the spike count variables. The ratio of the joint probability that corresponds to statistically independent spike counts $P(X_1 \le r_1, X_2 \le r_2) = p_1 p_2$ and the dependence introduced by the Clayton copula (for $p_1^{-\theta} + p_2^{-\theta} - 1 \ge 0$) is given by
$$\frac{p_1 p_2}{C(p_1, p_2, \theta)} = p_1 p_2 \sqrt[\theta]{\frac{1}{p_1^{\theta}} + \frac{1}{p_2^{\theta}} - 1} = \sqrt[\theta]{p_2^{\theta} + p_1^{\theta} - p_1^{\theta} p_2^{\theta}}.$$
Suppose that $\theta$ is positive. Since $p_i \in [0, 1]$ the deviation from the ratio 1 will be larger for small
probabilities. Thus, the copula generates correlations whose strengths depend on the magnitude of
the probabilities. The probability mass function (Fig. 1d) can then be calculated from the cumulative
probability using the difference scheme as described in Section 3.4. Care must be taken whenever
copulas are applied to form discrete distributions: while for continuous distributions typical measures of dependence are determined by the copula function C only, these measures are affected by
the shape of the marginal distributions in the discrete case [8].
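To make the construction concrete, the following sketch (our own illustration in Python with NumPy/SciPy; not code from the paper) couples two negative binomial margins with the bivariate Clayton copula and recovers the joint probability mass by the difference scheme of Section 3.4:

```python
import numpy as np
from scipy.stats import nbinom

def clayton_cdf(p1, p2, theta):
    """Bivariate Clayton copula C(p1, p2; theta), theta > 0."""
    if p1 <= 0.0 or p2 <= 0.0:
        return 0.0
    return (p1**(-theta) + p2**(-theta) - 1.0)**(-1.0 / theta)

def nb_cdf(r, lam, ups):
    """Negative binomial margin with mean lam and dispersion ups;
    scipy parametrization: n = ups, p = ups / (ups + lam)."""
    return nbinom.cdf(r, ups, ups / (ups + lam))

def joint_pmf(r1, r2, lam, ups, theta):
    """P(X1 = r1, X2 = r2) by differencing the copula-based joint cdf."""
    F = lambda a, b: clayton_cdf(nb_cdf(a, lam[0], ups[0]),
                                 nb_cdf(b, lam[1], ups[1]), theta)
    return F(r1, r2) - F(r1 - 1, r2) - F(r1, r2 - 1) + F(r1 - 1, r2 - 1)

# Hypothetical parameter values for illustration:
print(joint_pmf(3, 2, lam=(2.5, 3.1), ups=(1.8, 2.2), theta=0.7))
```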
3
Parametric spike count models and model selection procedure
We will now describe the formal aspects of the multivariate normal distribution on the one hand and
copula-based models as the proposed alternative on the other hand, both in terms of their application
to spike counts.
3.1
The discretized multivariate normal distribution
The MVN distribution is continuous and needs to be discretized (and rectified) before it can be applied to spike count data (which are discrete and non-negative). The cumulative distribution function
(cdf) of the spike count vector $\vec{X}$ is then given by
$$F_{\vec{X}}(r_1, \ldots, r_d) = \begin{cases} \Phi_{\mu,\Sigma}(\lfloor r_1 \rfloor, \ldots, \lfloor r_d \rfloor), & \text{if } \forall i \in \{1, \ldots, d\}\colon r_i \ge 0, \\ 0, & \text{otherwise,} \end{cases}$$
where $\lfloor \cdot \rfloor$ denotes the floor operation for the discretization, $\Phi_{\mu,\Sigma}$ denotes the cdf of the MVN distribution with mean $\mu$ and correlation matrix $\Sigma$, and $d$ denotes the dimension of the multivariate distribution and corresponds to the number of neurons that are modeled. Note that $\mu$ is no longer the mean of $\vec{X}$: the mean is shifted to greater values as $\Phi_{\mu,\Sigma}$ is rectified (negative values are cut off).
This deviation grows with the dimension d. According to the central limit theorem, the distribution
of spike counts approaches the MVN distribution only for large counts.
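A minimal sketch of this cdf (our own code; it assumes SciPy's multivariate_normal.cdf, which evaluates the MVN integral numerically):

```python
import numpy as np
from scipy.stats import multivariate_normal

def discretized_mvn_cdf(r, mu, sigma):
    """Cdf of the discretized, rectified MVN model at the count vector r."""
    r = np.floor(np.asarray(r, dtype=float))
    if np.any(r < 0):
        return 0.0  # rectification: no mass on negative counts
    return multivariate_normal.cdf(r, mean=mu, cov=sigma)
```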
3.2
Copula-based models
Formally, a copula C is a cdf with uniform margins. It can be used to couple marginal cdf?s
FX1 , . . . , FXd to form a joint cdf FX~ , such that
FX~ (r1 , . . . , rd ) = C(FX1 (r1 ), . . . , FXd (rd ))
holds [6]. There are many families of copulas with different dependence shapes and different numbers of parameters, e.g. the multivariate Clayton copula family with a scalar parameter ?:
(
)!?1/?
d
X
C? (~u) = max 1 ? d +
u??
.
i , 0
i=1
Thus, for a given realization ~r, which can represent the counts of two neurons, we can set ui =
FXi (ri ) and FX (~r) = C? (~u), where FXi can be arbitrary univariate cdf?s. Thereby, we can generate
a multivariate distribution with specific margins FXi and a dependence structure determined by C.
In the case of discrete marginal distributions, however, typical measures of dependence, such as the
linear correlation coefficient or Kendall?s ? are effected by the shape of these margins [8]. Note
that ? does not only control the strength of pairwise interactions but also the degree of higher order
interactions.
Another copula family is the Farlie-Gumbel-Morgenstern (FGM) copula [6]. It is special in that it has $2^d - d - 1$ parameters that individually determine the pairwise and higher order interactions. Its cdf takes the form
$$C_{\vec{\alpha}}(\vec{u}) = \left( 1 + \sum_{k=2}^{d} \sum_{1 \le j_1 < \cdots < j_k \le d} \alpha_{j_1 j_2 \ldots j_k} \prod_{i=1}^{k} (1 - u_{j_i}) \right) \prod_{i=1}^{d} u_i,$$
subject to the constraints
$$1 + \sum_{k=2}^{d} \sum_{1 \le j_1 < \cdots < j_k \le d} \alpha_{j_1 j_2 \ldots j_k} \prod_{i=1}^{k} \varepsilon_{j_i} \ge 0, \qquad \varepsilon_1, \varepsilon_2, \ldots, \varepsilon_d \in \{-1, 1\}.$$
We only have pairwise interactions if we set all but the first $\binom{d}{2}$ parameters to zero. Hence, we can
easily investigate the impact of higher order interactions on the model fit. Due to the constraints for
$\vec{\alpha}$, the correlations that the FGM copula can model are small in terms of their absolute value. Nevertheless, this is not an issue for modeling noise dependencies of spike counts of a small number of
neurons, since the noise correlations that are found experimentally are typically small (see e.g. [2]).
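The way the FGM parameters index interaction orders can be spelled out in code. The sketch below (our own, not from the paper) evaluates the FGM cdf given a dictionary that maps index subsets to their alpha parameters, so a pairwise-only model simply omits all subsets of size three and larger:

```python
import itertools
import numpy as np

def fgm_cdf(u, alpha):
    """Multivariate FGM copula cdf; alpha maps tuples (j1 < ... < jk),
    k >= 2, to parameters (missing subsets count as zero). The
    constraints on alpha stated in the text are assumed to hold."""
    u = np.asarray(u, dtype=float)
    d = len(u)
    total = 1.0
    for k in range(2, d + 1):
        for subset in itertools.combinations(range(d), k):
            a = alpha.get(subset, 0.0)
            if a != 0.0:
                total += a * np.prod(1.0 - u[list(subset)])
    return total * np.prod(u)

# 2nd order (pairwise-only) model for d = 3, hypothetical values:
alpha2 = {(0, 1): 0.2, (0, 2): -0.1, (1, 2): 0.15}
print(fgm_cdf([0.4, 0.7, 0.9], alpha2))
```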
3.3
Marginal distributions
Copulas allow us to have different marginal distributions. Typically, the Poisson distribution is a
good approximation to spike count variations of single neurons [9]. For this distribution the cdf's of the margins take the form
$$F_{X_i}(r; \lambda_i) = \sum_{k=0}^{\lfloor r \rfloor} \frac{\lambda_i^k}{k!} e^{-\lambda_i},$$
where $\lambda_i$ is the mean spike count of neuron $i$ for a given bin size. We will also use the negative binomial distribution as a generalization of the Poisson distribution:
$$F_{X_i}(r; \lambda_i, \upsilon_i) = \sum_{k=0}^{\lfloor r \rfloor} \frac{\lambda_i^k}{k! \left( 1 + \frac{\lambda_i}{\upsilon_i} \right)^{\upsilon_i}} \cdot \frac{\Gamma(\upsilon_i + k)}{\Gamma(\upsilon_i)(\upsilon_i + \lambda_i)^k},$$
where $\Gamma$ is the gamma function. The additional parameter $\upsilon_i$ controls the degree of overdispersion: the smaller the value of $\upsilon_i$, the greater the Fano factor. As $\upsilon_i$ approaches infinity, the negative
binomial distribution converges to the Poisson distribution.
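Both margins are available in standard libraries. The sketch below (our own) maps the $(\lambda, \upsilon)$ parametrization above onto scipy.stats.nbinom: with $n = \upsilon$ and $p = \upsilon/(\upsilon + \lambda)$ the mean is $\lambda$ and the variance is $\lambda(1 + \lambda/\upsilon)$, matching the overdispersion described in the text:

```python
from scipy.stats import poisson, nbinom

def poisson_margin_cdf(r, lam):
    # F(r; lambda) = sum_{k=0}^{floor(r)} lambda^k e^{-lambda} / k!
    return poisson.cdf(r, lam)

def negbin_margin_cdf(r, lam, ups):
    # Mean lam, Fano factor 1 + lam/ups.
    return nbinom.cdf(r, ups, ups / (ups + lam))
```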
3.4
Inference for copulas and discrete margins
Likelihoods of discrete vectors can be computed by applying the inclusion-exclusion principle of
Poincaré and Sylvester. For this purpose we define the sets $A = \{X_1 \le r_1, \ldots, X_d \le r_d\}$ and $A_i = \{X_1 \le r_1, \ldots, X_d \le r_d, X_i \le r_i - 1\}$, $i \in \{1, \ldots, d\}$. The probability of a realization $\vec{r}$ is given by
$$P_{\vec{X}}(\vec{r}) = P\left( A \setminus \bigcup_{i=1}^{d} A_i \right) = P(A) - \sum_{k=1}^{d} (-1)^{k-1} \sum_{\substack{I \subseteq \{1, \ldots, d\}, \\ |I| = k}} P\left( \bigcap_{i \in I} A_i \right) \tag{1}$$
$$= F_{\vec{X}}(\vec{r}) - \sum_{k=1}^{d} (-1)^{k-1} \sum_{\substack{\vec{m} \in \{0,1\}^d, \\ \sum m_i = k}} F_{\vec{X}}(r_1 - m_1, \ldots, r_d - m_d).$$
Thus, we can compute the probability mass of a realization $\vec{r}$ using only the cdf of $\vec{X}$. Since copulas separate the margins from the dependence structure, an efficient inference procedure is feasible. Let
$$l_i(\theta_i) = \sum_{t=1}^{T} \log P_{X_i}(r_{i,t}; \theta_i), \qquad i = 1, \ldots, d$$
denote the log likelihoods of the univariate margins. Note that we assume independent time bins. Further, let
$$l(\vec{\alpha}, \theta_1, \ldots, \theta_d) = \sum_{t=1}^{T} \log P_{\vec{X}}(\vec{r}_t; \vec{\alpha}, \theta_1, \ldots, \theta_d)$$
be the log likelihood of the joint distribution, where $\vec{\alpha}$ denotes the parameter of the copula. The so-called inference for margins (IFM) method proceeds in two steps [10]. First, the marginal likelihoods are maximized separately:
$$\hat{\theta}_i = \operatorname*{argmax}_{\theta_i} \{ l_i(\theta_i) \}.$$
Then, the full likelihood is maximized given the estimated margin parameters:
$$\hat{\vec{\alpha}} = \operatorname*{argmax}_{\vec{\alpha}} \{ l(\vec{\alpha}, \hat{\theta}_1, \ldots, \hat{\theta}_d) \}.$$
The estimator is asymptotically efficient and close to the maximum likelihood estimator [10].
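A direct transcription of Equation 1 looks as follows (our own sketch; joint_cdf can be any copula-based cdf, e.g. the Clayton construction over negative binomial margins sketched in Section 2, and the loop is exponential in d as written):

```python
import itertools
import numpy as np

def pmf_from_cdf(r, joint_cdf):
    """P(X = r) from a joint cdf via inclusion-exclusion (Eq. 1).
    joint_cdf takes a length-d integer vector and must return 0
    whenever any component is negative."""
    r = np.asarray(r, dtype=int)
    d = len(r)
    total = joint_cdf(r)
    for k in range(1, d + 1):
        sign = (-1.0)**(k - 1)
        for idx in itertools.combinations(range(d), k):
            m = np.zeros(d, dtype=int)
            m[list(idx)] = 1
            total -= sign * joint_cdf(r - m)
    return total
```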
3.5
Estimation of mutual information
The mutual information [11] of dependent spike counts $\vec{X}$ is a measure of the information that knowing the neural response $\vec{r}$ provides about the stimulus. It can be written as
$$I(\vec{X}; S) = \sum_{s \in M_S} P_S(s) \sum_{\vec{r} \in \mathbb{N}^d} P_{\vec{X}}(\vec{r}|s) \left( \log_2 P_{\vec{X}}(\vec{r}|s) - \log_2 \left( \sum_{s' \in M_S} P_S(s') P_{\vec{X}}(\vec{r}|s') \right) \right),$$
where $S$ is the stimulus random variable, $M_S$ is the set of stimuli, and $P_S$ is the probability mass function for the stimuli. The likelihood $P_{\vec{X}}(\vec{r}|s)$ of $\vec{r}$ given $s$ can be calculated using Equation 1. Thereby, $I(\vec{X}; S)$ can be estimated by the Monte Carlo method.
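One possible Monte Carlo scheme (a sketch under the assumption that responses can be sampled from each fitted model; sample_response and pmf are placeholders for such a model, not functions defined by the paper):

```python
import numpy as np

def mi_monte_carlo(n_stimuli, p_s, sample_response, pmf, n_samples=10000):
    """Estimate I(X; S) in bits as the sample average of
    log2 P(r|s) - log2 sum_s' P(s') P(r|s') with s ~ P_S, r ~ P(r|s)."""
    rng = np.random.default_rng(0)
    total = 0.0
    for _ in range(n_samples):
        s = rng.choice(n_stimuli, p=p_s)
        r = sample_response(s)
        p_r_s = pmf(r, s)
        p_r = sum(p_s[j] * pmf(r, j) for j in range(n_stimuli))
        total += np.log2(p_r_s) - np.log2(p_r)
    return total / n_samples
```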
4
Application to multi-electrode recordings
We now apply our parametric count models to the analysis of spike data, which we recorded from
the prefrontal cortex of an awake behaving macaque, using a 4 × 4 tetrode array.
Experimental setup. Activity was recorded while the monkey performed a visual match-to-sample task. The task involved matching of 20 visual stimuli (fruits and vegetables) that were presented for approximately 650 ms each. After an initial presentation ("sample") a test stimulus ("test") was presented with a delay of 3 seconds and the monkey had to decide by differential button
press whether both stimuli were the same or not. Correct responses were rewarded. Match and
non-match trials were randomly presented with an equal probability.
We recorded from the lateral prefrontal cortex in a 2 × 2 mm² area around the ventral bank of
the principal sulcus. Recordings were performed simultaneously from up to 16 adjacent sites with
an array of individually movable fiber micro-tetrodes (manufactured by Thomas Recording). Data
were sampled at 32 kHz and bandpass filtered between 0.5 kHz and 10 kHz. Recording positions of
individual electrodes were chosen to maximize the recorded activity and the signal quality.
The recorded data were processed by a PCA based spike sorting method. The method provides
automatic cluster cutting which was manually corrected by subsequent cluster merging if indicated
by quantitative criteria such as the ISI-histograms or amplitude stability.
Data set. To select neurons with stimulus specific responses, we calculated spike counts from their
spike trains. No neuron was accepted in the dependence analysis that shifted its mean firing rate
averaged over the time interval of the sample stimulus presentation by less than 6.5 Hz compared
to the pre-stimulus interval. A total of six neurons fulfilled this criterion (each recorded from a
different tetrode). With this criterion we can assume that the selected neurons are indeed related to
processing of the stimulus information.
Spike trains were separated into 80 groups, one for each of the 20 different stimuli and the four
trial intervals: pre-stimulus, sample stimulus presentation, delay, and test stimulus presentation.
Afterwards, the trains were binned into successive 100 ms intervals and converted to six-dimensional
spike counts for each bin. Due to the different interval lengths, total sample sizes of the groups were
between 224 and 1793 count vectors. A representative example of the empirical distribution of a
pair of these counts from the stimulus presentation interval is presented in Fig. 1b.
Model fitting. The discretized MVN distribution as well as several copula-based distributions
were fitted to the data. For each of the 80 groups we selected randomly 50 count vectors (test set)
for obtaining an unbiased estimate of the likelihoods. We trained the model on the remainder of
each group (training set).
A commonly applied criterion for model selection is maximum entropy [4]. This criterion selects
a certain model with minimal complexity subject to given constraints. It thereby performs regularization which is supposed to prevent overfitting. Copulas on the other hand typically increase the
complexity of the model and thus decrease the entropy. However, our evaluation takes place on a
separate test set and hence takes overfitting into account.
Parameter inference for the discretized MVN distribution (see Section 3.1) was performed by computing the sample mean and sample covariance matrix of the spike counts which is the standard
procedure for analyzing noise correlations [1]. Note that this estimator is biased, since it is not the
maximum likelihood solution for the discretized distribution.
The following copula families were used to construct noise distributions of the spike counts. The
Clayton (see Section 3.2), Gumbel-Hougaard, Frank and Ali-Mikhail-Haq copula families as examples of families with one parameter [6] and the FGM with a variable number of parameters (see
Section 3.2).
We applied the IFM method for copula inference (see Section 3.4). The sample mean is the maximum likelihood estimator for $\lambda_i$ for both the Poisson and the negative binomial margins. The maximum likelihood estimates for $\upsilon_i$ were computed iteratively by Newton's method. Depending
on whether the copula parameters were constrained, either the Nelder-Mead simplex method for unconstrained nonlinear optimization or the line-search algorithm for constrained nonlinear optimization was applied to estimate the copula parameters.
Figure 2: Evaluation of the IFM estimates on the test set and estimated mutual information. (a): Log likelihoods for the discrete multivariate normal distribution, the best fitting copula-based model with Poisson margins, and the best fitting copula-based model with negative binomial margins averaged over the 20 different stimuli. (b): Difference between the log likelihood of the model with independent counts and negative binomial margins ("ind. model") and the log likelihoods of different copula-based models with negative binomial margins averaged over the 20 different stimuli. (c): Mutual information between stimuli and responses for the Clayton-based model with negative binomial margins. (d): Normalized difference between the mutual information for the Clayton-based model with negative binomial margins and the corresponding "ind. model".
Results for different distributions. Fig. 2 shows the evaluation of the IFM estimates on the test
set. The likelihood for the copula-based models is significantly larger than for the discrete MVN
model ($p = 2 \cdot 10^{-14}$, paired-sample Student's t test over stimuli). Moreover, the likelihood for the
negative binomial margins is even larger than that for the Poisson margins (p = 0.0003).
We estimated the impact of neglecting higher order interactions on the fit by using different numbers of parameters for the FGM copula. For the 2nd order model we set all but the first $\binom{d}{2}$ parameters to zero, therefore leaving only parameters for pairwise interactions. In contrast, for the 3rd order model we set all but the first $\binom{d}{2} + \binom{d}{3}$ parameters to zero.
We computed the difference between the likelihood of the model with dependence and the corresponding model with independence between its counts. Fig. 2b shows this difference for several
copulas and negative binomial margins evaluated on the test set. The model based on the Clayton
copula family provides the best fit. The fit is significantly better than for the second best fitting
copula family (p = 0.0014). In spite of having more parameters, the FGM copulas perform worse.
However, the FGM model with third order interactions fits the data significantly better than the
model that includes only pairwise interactions (p = 0.0437).
Copula coding analysis. Fig. 2c shows the Monte Carlo estimate of the mutual information based
on the Clayton-based model with negative binomial margins and IFM parameters determined on the
training set for each of the intervals. For the test stimulus interval, the estimation was performed
twice: for the previously presented sample stimulus and for the test stimulus. The Monte Carlo
method was terminated when the standard error was below $5 \cdot 10^{-4}$. The mutual information is
higher during the stimulus presentation intervals than during the delay interval.
We estimated the information increase due to the dependence structure by computing the mutual information for the Clayton-based model with negative binomial margins and subtracting the (smaller)
mutual information for the corresponding distribution with independent elements. Fig. 2d shows
this information estimate ?Ishuf f led , normalized to the mutual information for the Clayton-based
model. The dependece structure carries up to 12% of the mutual information. During the test
stimulus interval it carries almost twice as much information about the test stimulus as about the
previously presented sample stimulus.
Another important measure related to stimulus decoding which is currently under debate is $\Delta I / I$
[12]. The measure provides an upper bound on the information loss for stimulus decoding based on
the distribution that assumes independence. We find that one loses at most 19.82% of the information
for the Clayton-based model.
5
Conclusion
We developed a framework for analyzing the noise dependence of spike counts. Applying this to
our data from the macaque prefrontal cortex we found that: (1) Gaussian noise is inadequate to
model spike count data for short time intervals; (2) negative binomial distributed margins describe
the individual spike counts better than Poisson distributed margins; and (3) higher order interactions
are present and play a substantial role in terms of model fit and information content.
The substantial role of higher order interactions poses a challenge for theoreticians as well as experimentalists. The complexity of taking all higher order interactions into account grows exponentially
with the number of neurons, known as the curse of dimensionality. Based on our findings, we conclude that one needs to deal with this problem to analyze short-term coding in higher cortical areas.
In summary, one can say that the copula-based approach provides a convenient way to study spike
count dependencies for small population sizes (< 20). At present, the approach is computationally
too demanding for higher numbers of neurons. Approximate inference methods might provide a
solution to the computational problem and seem worthwhile to investigate. Directions for future research are the exploration of other copula families and the validation of population coding principles
that were obtained under the assumption of Gaussian noise.
Acknowledgments. This work was supported by BMBF grant 01GQ0410.
References
[1] B. B. Averbeck, P. E. Latham, and A. P. Pouget, Neural correlations, population coding and computation. Nature Reviews Neuroscience, 7:358–366, 2006.
[2] W. Bair, E. Zohary, and W. T. Newsome, Correlated firing in macaque visual area MT: time scales and relationship to behavior. Journal of Neuroscience, 21(5):1676–1697, 2001.
[3] A. Kohn and M. A. Smith, Stimulus dependence of neuronal correlation in primary visual cortex of the macaque. Journal of Neuroscience, 25(14):3661–3673, 2005.
[4] E. Schneidman, M. J. Berry II, R. Segev, and W. Bialek, Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440:1007–1012, 2006.
[5] M. M. Michel and R. A. Jacobs, The costs of ignoring high-order correlations in populations of model neurons. Neural Computation, 18:660–682, 2006.
[6] R. B. Nelsen, An Introduction to Copulas. Springer, New York, second edition, 2006.
[7] R. L. Jenison and R. A. Reale, The shape of neural dependence. Neural Computation, 16:665–672, 2004.
[8] C. Genest and J. Neslehova, A primer on discrete copulas. ASTIN Bulletin, 37:475–515, 2007.
[9] D. J. Tolhurst, J. A. Movshon, and A. F. Dean, The statistical reliability of signals in single neurons in cat and monkey visual cortex. Vision Research, 23:775–785, 1982.
[10] H. Joe and J. J. Xu, The estimation method of inference functions for margins for multivariate models. Technical Report 166, Department of Statistics, University of British Columbia, 1996.
[11] C. E. Shannon and W. Weaver, The Mathematical Theory of Communication. Urbana: University of Illinois Press, 1949.
[12] P. E. Latham and S. Nirenberg, Synergy, redundancy, and independence in population codes, revisited. Journal of Neuroscience, 25(21):5195–5206, 2005.
2,816 | 3,553 | Sparse Convolved Gaussian Processes for
Multi-output Regression
Neil D. Lawrence
School of Computer Science
University of Manchester, U.K.
[email protected]
Mauricio Alvarez
School of Computer Science
University of Manchester, U.K.
[email protected]
Abstract
We present a sparse approximation approach for dependent output Gaussian processes (GP). Employing a latent function framework, we apply the convolution
process formalism to establish dependencies between output variables, where each
latent function is represented as a GP. Based on these latent functions, we establish
an approximation scheme using a conditional independence assumption between
the output processes, leading to an approximation of the full covariance which is
determined by the locations at which the latent functions are evaluated. We show
results of the proposed methodology for synthetic data and real world applications
on pollution prediction and a sensor network.
1
Introduction
We consider the problem of modeling correlated outputs from a single Gaussian process (GP). Applications of modeling multiple outputs include multi-task learning (see e.g. [1]) and jointly predicting
the concentration of different heavy metal pollutants [5]. Modelling multiple output variables is a
challenge as we are required to compute cross covariances between the different outputs. In geostatistics this is known as cokriging. Whilst cross covariances allow us to improve our predictions
of one output given the others because the correlations between outputs are modelled [6, 2, 15, 12]
they also come with a computational and storage overhead. The main aim of this paper is to address
these overheads in the context of convolution processes [6, 2].
One neat approach to account for non-trivial correlations between outputs employs convolution processes (CP). When using CPs each output can be expressed as the convolution between a smoothing
kernel and a latent function [6, 2]. Let's assume that the latent function is drawn from a GP. If
we also share the same latent function across several convolutions (each with a potentially different smoothing kernel) then, since a convolution is a linear operator on a function, the outputs of
the convolutions can be expressed as a jointly distributed GP. It is this GP that is used to model
the multi-output regression. This approach was proposed by [6, 2] who focussed on a white noise
process for the latent function.
Even though the CP framework is an elegant way for constructing dependent output processes, the
fact that the full covariance function of the joint GP must be considered results in significant storage
and computational demands. For Q output dimensions and N data points the covariance matrix
scales as QN leading to $O(Q^3 N^3)$ computational complexity and $O(N^2 Q^2)$ storage. Whilst other
approaches to modeling multiple output regression are typically more constraining in the types of
cross covariance that can be expressed [1, 15], these constraints also lead to structured covariances
functions for which inference and learning are typically more efficient (typically for N > Q these
methods have $O(N^3 Q)$ computation and $O(N^2 Q)$ storage). We are interested in exploiting the
richer class of covariance structures allowed by the CP framework, but without the additional computational overhead they imply.
We propose a sparse approximation for the full covariance matrix involved in the multiple output
convolution process, exploiting the fact that each of the outputs is conditionally independent of all others given the input process. This leads to an approximation for the covariance matrix which keeps
intact the covariances of each output and approximates the cross-covariances terms with a low rank
matrix. Inference and learning can then be undertaken with the same computational complexity as
a set of independent GPs. The approximation turns out to be strongly related to the partially independent training conditional (PITC) [10] approximation for a single output GP. This inspires us
to consider a further conditional independence function across data points that leads to an approximation which shares the form of the fully independent training conditional (FITC) approximation
[13, 10], reducing computational complexity to $O(NQM^2)$ and storage to $O(NQM)$, with M representing a user-specified value.
To introduce our sparse approximation some review of the CP framework is required (Section 2).
Then in Section 3, we present sparse approximations for the multi-output GP. We discuss relations
with other approaches in Section 4. Finally, in Section 5, we demonstrate the approach on both
synthetic and real datasets.
2
Convolution Processes
Consider a set of $Q$ functions $\{f_q(\mathbf{x})\}_{q=1}^{Q}$, where each function is expressed as the convolution between a smoothing kernel $\{k_q(\mathbf{x})\}_{q=1}^{Q}$ and a latent function $u(\mathbf{z})$,
$$f_q(\mathbf{x}) = \int_{-\infty}^{\infty} k_q(\mathbf{x} - \mathbf{z}) u(\mathbf{z}) \, d\mathbf{z}.$$
More generally, we can consider the influence of more than one latent function, $\{u_r(\mathbf{z})\}_{r=1}^{R}$, and corrupt each of the outputs of the convolutions with an independent process (which could also include a noise term), $w_q(\mathbf{x})$, to obtain
$$y_q(\mathbf{x}) = f_q(\mathbf{x}) + w_q(\mathbf{x}) = \sum_{r=1}^{R} \int_{-\infty}^{\infty} k_{qr}(\mathbf{x} - \mathbf{z}) u_r(\mathbf{z}) \, d\mathbf{z} + w_q(\mathbf{x}). \tag{1}$$
The covariance between two different functions $y_q(\mathbf{x})$ and $y_s(\mathbf{x}')$ is then recovered as
$$\operatorname{cov}[y_q(\mathbf{x}), y_s(\mathbf{x}')] = \operatorname{cov}[f_q(\mathbf{x}), f_s(\mathbf{x}')] + \operatorname{cov}[w_q(\mathbf{x}), w_s(\mathbf{x}')] \delta_{qs},$$
where
$$\operatorname{cov}[f_q(\mathbf{x}), f_s(\mathbf{x}')] = \sum_{r=1}^{R} \sum_{p=1}^{R} \int_{-\infty}^{\infty} k_{qr}(\mathbf{x} - \mathbf{z}) \int_{-\infty}^{\infty} k_{sp}(\mathbf{x}' - \mathbf{z}') \operatorname{cov}[u_r(\mathbf{z}), u_p(\mathbf{z}')] \, d\mathbf{z}' \, d\mathbf{z}. \tag{2}$$
This equation is a general result; in [6, 2] the latent functions $u_r(\mathbf{z})$ are assumed to be independent white Gaussian noise processes, i.e. $\operatorname{cov}[u_r(\mathbf{z}), u_p(\mathbf{z}')] = \sigma_{u_r}^2 \delta_{rp} \delta_{\mathbf{z},\mathbf{z}'}$, so the expression (2) is simplified as
$$\operatorname{cov}[f_q(\mathbf{x}), f_s(\mathbf{x}')] = \sum_{r=1}^{R} \sigma_{u_r}^2 \int_{-\infty}^{\infty} k_{qr}(\mathbf{x} - \mathbf{z}) k_{sr}(\mathbf{x}' - \mathbf{z}) \, d\mathbf{z}.$$
We are going to relax this constraint on the latent processes: we assume that each inducing function is an independent GP, i.e. $\operatorname{cov}[u_r(\mathbf{z}), u_p(\mathbf{z}')] = k_{u_r u_r}(\mathbf{z}, \mathbf{z}') \delta_{rp}$, where $k_{u_r u_r}(\mathbf{z}, \mathbf{z}')$ is the covariance function for $u_r(\mathbf{z})$. With this simplification, (2) can be written as
$$\operatorname{cov}[f_q(\mathbf{x}), f_s(\mathbf{x}')] = \sum_{r=1}^{R} \int_{-\infty}^{\infty} k_{qr}(\mathbf{x} - \mathbf{z}) \int_{-\infty}^{\infty} k_{sr}(\mathbf{x}' - \mathbf{z}') k_{u_r u_r}(\mathbf{z}, \mathbf{z}') \, d\mathbf{z}' \, d\mathbf{z}. \tag{3}$$
As well as this correlation across outputs, the correlation between the latent function, $u_r(\mathbf{z})$, and any given output, $f_q(\mathbf{x})$, can be computed,
$$\operatorname{cov}[f_q(\mathbf{x}), u_r(\mathbf{z})] = \int_{-\infty}^{\infty} k_{qr}(\mathbf{x} - \mathbf{z}') k_{u_r u_r}(\mathbf{z}', \mathbf{z}) \, d\mathbf{z}'. \tag{4}$$
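For one-dimensional inputs the double integral in (3) can be checked numerically. The sketch below (our own illustration with Gaussian kernel choices of the kind used in Section 5; the paper's closed-form expressions live in its supplementary material) evaluates cov[f_q(x), f_s(x')] for a single latent GP by quadrature:

```python
import numpy as np
from scipy.integrate import dblquad

def smoothing_kernel_1d(tau, S, L):
    # k(tau) = S * sqrt(L / (2 pi)) * exp(-0.5 * L * tau^2)
    return S * np.sqrt(L / (2.0 * np.pi)) * np.exp(-0.5 * L * tau**2)

def latent_cov_1d(z, zp, Lu):
    # k_{u u}(z, z') = exp(-0.5 * Lu * (z - z')^2)
    return np.exp(-0.5 * Lu * (z - zp)**2)

def cross_cov(x, xp, Sq, Lq, Ss, Ls, Lu, lim=10.0):
    """Numerical evaluation of Eq. (3) for R = 1 and scalar inputs."""
    integrand = lambda zp, z: (smoothing_kernel_1d(x - z, Sq, Lq)
                               * smoothing_kernel_1d(xp - zp, Ss, Ls)
                               * latent_cov_1d(z, zp, Lu))
    val, _ = dblquad(integrand, -lim, lim, lambda z: -lim, lambda z: lim)
    return val
```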
3
Sparse Approximation
Given the convolution formalism, we can construct a full GP over the set of outputs. The likelihood
of the model is given by
$$p(\mathbf{y}|\mathbf{X}, \boldsymbol{\theta}) = \mathcal{N}(\mathbf{0}, \mathbf{K}_{\mathbf{f},\mathbf{f}} + \boldsymbol{\Sigma}), \tag{5}$$
where $\mathbf{y} = [\mathbf{y}_1^\top, \ldots, \mathbf{y}_Q^\top]^\top$ is the set of output functions with $\mathbf{y}_q = [y_q(\mathbf{x}_1), \ldots, y_q(\mathbf{x}_N)]^\top$; $\mathbf{K}_{\mathbf{f},\mathbf{f}} \in \mathbb{R}^{QN \times QN}$ is the covariance matrix relating all data points at all outputs, with elements $\operatorname{cov}[f_q(\mathbf{x}), f_s(\mathbf{x}')]$ in (3); $\boldsymbol{\Sigma} = \boldsymbol{\sigma} \otimes \mathbf{I}_N$, where $\boldsymbol{\sigma}$ is a diagonal matrix with elements $\{\sigma_q^2\}_{q=1}^{Q}$; $\boldsymbol{\theta}$ is the set of parameters of the covariance matrix and $\mathbf{X} = \{\mathbf{x}_1, \ldots, \mathbf{x}_N\}$ is the set of training input vectors at which the covariance is evaluated.
The predictive distribution for a new set of input vectors X? is [11]
$$p(\mathbf{y}_*|\mathbf{y}, \mathbf{X}, \mathbf{X}_*, \boldsymbol{\theta}) = \mathcal{N}\left( \mathbf{K}_{\mathbf{f}_*,\mathbf{f}} (\mathbf{K}_{\mathbf{f},\mathbf{f}} + \boldsymbol{\Sigma})^{-1} \mathbf{y},\; \mathbf{K}_{\mathbf{f}_*,\mathbf{f}_*} - \mathbf{K}_{\mathbf{f}_*,\mathbf{f}} (\mathbf{K}_{\mathbf{f},\mathbf{f}} + \boldsymbol{\Sigma})^{-1} \mathbf{K}_{\mathbf{f},\mathbf{f}_*} + \boldsymbol{\Sigma} \right),$$
where we have used $\mathbf{K}_{\mathbf{f}_*,\mathbf{f}_*}$ as a compact notation to indicate when the covariance matrix is evaluated at the inputs $\mathbf{X}_*$, with a similar notation for $\mathbf{K}_{\mathbf{f}_*,\mathbf{f}}$. Learning from the log-likelihood involves the computation of the inverse of $\mathbf{K}_{\mathbf{f},\mathbf{f}} + \boldsymbol{\Sigma}$, which grows with complexity $O((NQ)^3)$. Once the parameters have been learned, prediction is $O(NQ)$ for the predictive mean and $O((NQ)^2)$ for the predictive variance.
Our strategy for approximate inference is to exploit the natural conditional dependencies in the
model. If we had observed the entire length of each latent function, $u_r(\mathbf{z})$, then from (1) we see that each $y_q(\mathbf{x})$ would be independent, i.e. we can write,
$$p(\{y_q(\mathbf{x})\}_{q=1}^{Q} \,|\, \{u_r(\mathbf{z})\}_{r=1}^{R}, \boldsymbol{\theta}) = \prod_{q=1}^{Q} p(y_q(\mathbf{x}) \,|\, \{u_r(\mathbf{z})\}_{r=1}^{R}, \boldsymbol{\theta}),$$
where $\boldsymbol{\theta}$ are the parameters of the kernels and covariance functions. Our key assumption is that this independence will hold even if we have only observed $M$ samples from $u_r(\mathbf{z})$ rather than the whole
function. The observed values of these M samples are then marginalized (as they are for the exact
case) to obtain the approximation to the likelihood. Our intuition is that the approximation should
be more accurate for larger M and smoother latent functions, as in this domain the latent function
could be very well characterized from only a few samples.
We define $\mathbf{u} = [\mathbf{u}_1^\top, \ldots, \mathbf{u}_R^\top]^\top$ as the samples from the latent functions with $\mathbf{u}_r = [u_r(\mathbf{z}_1), \ldots, u_r(\mathbf{z}_M)]^\top$; $\mathbf{K}_{\mathbf{u},\mathbf{u}}$ is then the covariance matrix between the samples from the latent functions $u_r(\mathbf{z})$, with elements given by $k_{u_r u_r}(\mathbf{z}, \mathbf{z}')$; $\mathbf{K}_{\mathbf{f},\mathbf{u}} = \mathbf{K}_{\mathbf{u},\mathbf{f}}^\top$ are the cross-covariance matrices between the latent functions $u_r(\mathbf{z})$ and the outputs $f_q(\mathbf{x})$, with elements $\operatorname{cov}[f_q(\mathbf{x}), u_r(\mathbf{z})]$ in (4); and $\mathbf{Z} = \{\mathbf{z}_1, \ldots, \mathbf{z}_M\}$ is the set of input vectors at which the covariance $\mathbf{K}_{\mathbf{u},\mathbf{u}}$ is evaluated.
We now make the conditional independence assumption given the samples from the latent functions,
$$p(\mathbf{y}|\mathbf{u}, \mathbf{Z}, \mathbf{X}, \boldsymbol{\theta}) = \prod_{q=1}^{Q} p(\mathbf{y}_q|\mathbf{u}, \mathbf{Z}, \mathbf{X}, \boldsymbol{\theta}) = \prod_{q=1}^{Q} \mathcal{N}\left( \mathbf{K}_{\mathbf{f}_q,\mathbf{u}} \mathbf{K}_{\mathbf{u},\mathbf{u}}^{-1} \mathbf{u},\; \mathbf{K}_{\mathbf{f}_q,\mathbf{f}_q} - \mathbf{K}_{\mathbf{f}_q,\mathbf{u}} \mathbf{K}_{\mathbf{u},\mathbf{u}}^{-1} \mathbf{K}_{\mathbf{u},\mathbf{f}_q} + \sigma_q^2 \mathbf{I} \right).$$
We rewrite this product as a single Gaussian with a block diagonal covariance matrix,
$$p(\mathbf{y}|\mathbf{u}, \mathbf{Z}, \mathbf{X}, \boldsymbol{\theta}) = \mathcal{N}\left( \mathbf{K}_{\mathbf{f},\mathbf{u}} \mathbf{K}_{\mathbf{u},\mathbf{u}}^{-1} \mathbf{u},\; \mathbf{D} + \boldsymbol{\Sigma} \right), \tag{6}$$
where $\mathbf{D} = \operatorname{blockdiag}\left[ \mathbf{K}_{\mathbf{f},\mathbf{f}} - \mathbf{K}_{\mathbf{f},\mathbf{u}} \mathbf{K}_{\mathbf{u},\mathbf{u}}^{-1} \mathbf{K}_{\mathbf{u},\mathbf{f}} \right]$, and we have used the notation $\operatorname{blockdiag}[\mathbf{G}]$ to indicate the block associated with each output of the matrix $\mathbf{G}$ should be retained, but all other elements should be set to zero. We can also write this as $\mathbf{D} = \left[ \mathbf{K}_{\mathbf{f},\mathbf{f}} - \mathbf{K}_{\mathbf{f},\mathbf{u}} \mathbf{K}_{\mathbf{u},\mathbf{u}}^{-1} \mathbf{K}_{\mathbf{u},\mathbf{f}} \right] \odot \mathbf{M}$, where $\odot$ is the Hadamard product and $\mathbf{M} = \mathbf{I}_Q \otimes \mathbf{1}_N$, $\mathbf{1}_N$ being the $N \times N$ matrix of ones and $\otimes$ being the
Kronecker product. We now marginalize the values of the samples from the latent functions by using
their process priors, i.e. $p(\mathbf{u}|\mathbf{Z}) = \mathcal{N}(\mathbf{0}, \mathbf{K}_{\mathbf{u},\mathbf{u}})$. This leads to the following marginal likelihood,
$$p(\mathbf{y}|\mathbf{Z}, \mathbf{X}, \boldsymbol{\theta}) = \int p(\mathbf{y}|\mathbf{u}, \mathbf{Z}, \mathbf{X}, \boldsymbol{\theta}) p(\mathbf{u}|\mathbf{Z}) \, d\mathbf{u} = \mathcal{N}\left( \mathbf{0},\; \mathbf{D} + \mathbf{K}_{\mathbf{f},\mathbf{u}} \mathbf{K}_{\mathbf{u},\mathbf{u}}^{-1} \mathbf{K}_{\mathbf{u},\mathbf{f}} + \boldsymbol{\Sigma} \right). \tag{7}$$
Notice that, compared to (5), the full covariance matrix $\mathbf{K}_{\mathbf{f},\mathbf{f}}$ has been replaced by the low rank covariance $\mathbf{K}_{\mathbf{f},\mathbf{u}} \mathbf{K}_{\mathbf{u},\mathbf{u}}^{-1} \mathbf{K}_{\mathbf{u},\mathbf{f}}$ in all entries except in the diagonal blocks corresponding to $\mathbf{K}_{\mathbf{f}_q,\mathbf{f}_q}$. When using the marginal likelihood for learning, the computational load is associated with the calculation of the inverse of $\mathbf{D}$. The complexity of this inversion is $O(N^3 Q) + O(NQM^2)$, and storage of the matrix is $O(N^2 Q) + O(NQM)$. Note that if we set $M = N$ these reduce to $O(N^3 Q)$ and $O(N^2 Q)$ respectively, which matches the computational complexity of applying $Q$ independent GPs to model the multiple outputs.
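A minimal sketch of the approximate marginal likelihood (our own code; it builds the full QN × QN matrix and uses a dense Cholesky, so it only verifies the functional form of (7) — a practical implementation exploits the block-diagonal structure of D and the matrix inversion lemma to reach the stated complexity):

```python
import numpy as np

def pitc_log_marginal(y, Kff_blocks, Kfu, Kuu, sigma2):
    """Log of Eq. (7); Kff_blocks is a list of Q (N, N) blocks,
    Kfu is (QN, M), Kuu is (M, M), sigma2 holds the Q noise variances."""
    Q, N = len(Kff_blocks), Kff_blocks[0].shape[0]
    low_rank = Kfu @ np.linalg.solve(Kuu, Kfu.T)  # Kfu Kuu^{-1} Kuf
    D = np.zeros_like(low_rank)                   # here D also absorbs Sigma
    for q in range(Q):
        b = slice(q * N, (q + 1) * N)
        D[b, b] = Kff_blocks[q] - low_rank[b, b] + sigma2[q] * np.eye(N)
    C = D + low_rank                              # D + Kfu Kuu^{-1} Kuf + Sigma
    L = np.linalg.cholesky(C)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2.0 * np.pi))
```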
Combining eq. (6) with p(u|Z) using Bayes theorem, the posterior distribution over u is obtained
as
$$p(\mathbf{u}|\mathbf{y}, \mathbf{X}, \mathbf{Z}, \boldsymbol{\theta}) = \mathcal{N}\left( \mathbf{K}_{\mathbf{u},\mathbf{u}} \mathbf{A}^{-1} \mathbf{K}_{\mathbf{u},\mathbf{f}} (\mathbf{D} + \boldsymbol{\Sigma})^{-1} \mathbf{y},\; \mathbf{K}_{\mathbf{u},\mathbf{u}} \mathbf{A}^{-1} \mathbf{K}_{\mathbf{u},\mathbf{u}} \right), \tag{8}$$
where $\mathbf{A} = \mathbf{K}_{\mathbf{u},\mathbf{u}} + \mathbf{K}_{\mathbf{u},\mathbf{f}} (\mathbf{D} + \boldsymbol{\Sigma})^{-1} \mathbf{K}_{\mathbf{f},\mathbf{u}}$. The predictive distribution is expressed through the integration of (6), evaluated at $\mathbf{X}_*$, with (8), giving
$$p(\mathbf{y}_*|\mathbf{y}, \mathbf{X}, \mathbf{X}_*, \mathbf{Z}, \boldsymbol{\theta}) = \int p(\mathbf{y}_*|\mathbf{u}, \mathbf{Z}, \mathbf{X}_*, \boldsymbol{\theta}) p(\mathbf{u}|\mathbf{y}, \mathbf{X}, \mathbf{Z}, \boldsymbol{\theta}) \, d\mathbf{u} = \mathcal{N}\left( \mathbf{K}_{\mathbf{f}_*,\mathbf{u}} \mathbf{A}^{-1} \mathbf{K}_{\mathbf{u},\mathbf{f}} (\mathbf{D} + \boldsymbol{\Sigma})^{-1} \mathbf{y},\; \mathbf{D}_* + \mathbf{K}_{\mathbf{f}_*,\mathbf{u}} \mathbf{A}^{-1} \mathbf{K}_{\mathbf{u},\mathbf{f}_*} + \boldsymbol{\Sigma} \right) \tag{9}$$
with $\mathbf{D}_* = \operatorname{blockdiag}\left[ \mathbf{K}_{\mathbf{f}_*,\mathbf{f}_*} - \mathbf{K}_{\mathbf{f}_*,\mathbf{u}} \mathbf{K}_{\mathbf{u},\mathbf{u}}^{-1} \mathbf{K}_{\mathbf{u},\mathbf{f}_*} \right]$.
The functional form of (7) is almost identical to that of the PITC approximation [10], with the
samples we retain from the latent function playing the same role as the inducing values in the
partially independent training conditional (PITC) approximation. This is perhaps not surprising
given that the nature of the conditional independence assumptions in PITC is similar to that we have
made. A key difference is that in PITC it is not obvious which variables should be grouped together
when making the conditional independence assumption, here it is clear from the structure of the
model that each of the outputs should be grouped separately. However, the similarities are such that
we find it convenient to follow the terminology of [10] and also refer to our approximation as a PITC
approximation.
We have already noted that our sparse approximation reduces the computational complexity of multi-output regression with GPs to that of applying independent GPs to each output. For larger data sets
the N 3 term in the computational complexity and the N 2 term in the storage is still likely to be
prohibitive. However, we can be inspired by the analogy of our approach to the PITC approximation
and consider a more radical factorization of the outputs. In the fully independent training conditional
(FITC) [13, 14] a factorization across the data points is assumed. For us that would lead to the
following expression for the conditional distribution of the output functions given the inducing variables, $p(\mathbf{y}|\mathbf{u}, \mathbf{Z}, \mathbf{X}, \boldsymbol{\theta}) = \prod_{q=1}^{Q} \prod_{n=1}^{N} p(y_{qn}|\mathbf{u}, \mathbf{Z}, \mathbf{X}, \boldsymbol{\theta})$, which can be briefly expressed through (6) with $\mathbf{D} = \operatorname{diag}\left[ \mathbf{K}_{\mathbf{f},\mathbf{f}} - \mathbf{K}_{\mathbf{f},\mathbf{u}} \mathbf{K}_{\mathbf{u},\mathbf{u}}^{-1} \mathbf{K}_{\mathbf{u},\mathbf{f}} \right] = \left[ \mathbf{K}_{\mathbf{f},\mathbf{f}} - \mathbf{K}_{\mathbf{f},\mathbf{u}} \mathbf{K}_{\mathbf{u},\mathbf{u}}^{-1} \mathbf{K}_{\mathbf{u},\mathbf{f}} \right] \odot \mathbf{M}$, with $\mathbf{M} = \mathbf{I}_Q \otimes \mathbf{I}_N$. Similar
equations are obtained for the posterior (8), predictive (9) and marginal likelihood distributions (7)
leading to the Fully Independent Training Conditional (FITC) approximation [13, 10]. Note that
the marginal likelihood might be optimized both with respect to the parameters associated with the
covariance matrices and with respect to Z. In supplementary material we include the derivatives of
the marginal likelihood with respect to the matrices $\mathbf{K}_{\mathbf{f},\mathbf{f}}$, $\mathbf{K}_{\mathbf{u},\mathbf{f}}$ and $\mathbf{K}_{\mathbf{u},\mathbf{u}}$.
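The predictive moments of (9) then reduce to a few linear solves; a minimal sketch (our own, with dense solves for clarity and the test-point noise term omitted from the returned covariance):

```python
import numpy as np

def pitc_predict(y, Kfu, Kuu, D_plus_Sigma, Ksu, Ds):
    """Predictive mean and covariance of Eq. (9).
    Kfu: (QN, M); Kuu: (M, M); D_plus_Sigma: (QN, QN) block diagonal;
    Ksu: cross covariance at test inputs; Ds: the D_* term."""
    Kuf = Kfu.T
    A = Kuu + Kuf @ np.linalg.solve(D_plus_Sigma, Kfu)   # Eq. (8)
    mean = Ksu @ np.linalg.solve(A, Kuf @ np.linalg.solve(D_plus_Sigma, y))
    cov = Ds + Ksu @ np.linalg.solve(A, Ksu.T)
    return mean, cov
```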
4
Related work
There have been several suggestions for constructing multiple output GPs [2, 15, 1]. Under the
convolution process framework, the semiparametric latent factor model (SLFM) proposed in [15]
corresponds to a specific choice for the smoothing kernel function in (1), namely $k_{qr}(\mathbf{x}) = \phi_{qr} \delta(\mathbf{x})$. The latent functions are assumed to be independent GPs and in such a case, $\operatorname{cov}[f_q(\mathbf{x}), f_s(\mathbf{x}')] = \sum_r \phi_{qr} \phi_{sr} k_{u_r u_r}(\mathbf{x}, \mathbf{x}')$. This can be written using matrix notation as $\mathbf{K}_{\mathbf{f},\mathbf{f}} = (\boldsymbol{\Phi} \otimes \mathbf{I}) \mathbf{K}_{\mathbf{u},\mathbf{u}} (\boldsymbol{\Phi}^\top \otimes \mathbf{I})$.
For computational speed up the informative vector machine (IVM) is employed [8].
In the multi-task learning model (MTLM) proposed in [1], the covariance matrix is expressed as
$\mathbf{K}_{\mathbf{f},\mathbf{f}} = \mathbf{K}^{f} \otimes k(\mathbf{x}, \mathbf{x}')$, with $\mathbf{K}^{f}$ being constrained positive semi-definite and $k(\mathbf{x}, \mathbf{x}')$ a covariance function over inputs. The Nyström approximation is applied to $k(\mathbf{x}, \mathbf{x}')$. As stated in [1] with respect to SLFM, the convolution process is related to MTLM when the smoothing kernel function is given again by $k_{qr}(\mathbf{x}) = \phi_{qr} \delta(\mathbf{x})$ and there is only one latent function with covariance $k_{uu}(\mathbf{x}, \mathbf{x}') = k(\mathbf{x}, \mathbf{x}')$. In this way, $\operatorname{cov}[f_q(\mathbf{x}), f_s(\mathbf{x}')] = \phi_q \phi_s k(\mathbf{x}, \mathbf{x}')$ and in matrix notation $\mathbf{K}_{\mathbf{f},\mathbf{f}} = \boldsymbol{\phi} \boldsymbol{\phi}^\top \otimes k(\mathbf{x}, \mathbf{x}')$. In [2], the latent processes correspond to white Gaussian noise and the covariance matrix
is given by eq. (3). In this work, the complexity of the computational load is not discussed. Finally,
[12] use a similar covariance function to the MTLM approach but use an IVM style approach to
sparsification.
Note that in each of the approaches detailed above a $\delta$ function is introduced into the integral. In the
dependent GP model of [2] it is introduced in the covariance function. Our approach considers the
more general case when neither kernel nor covariance function is given by the $\delta$ function.
5
Results
For all our experiments we considered squared exponential covariance functions for the latent process of the form
$$k_{u_r u_r}(\mathbf{x}, \mathbf{x}') = \exp\left[ -\frac{1}{2} (\mathbf{x} - \mathbf{x}')^\top \mathbf{L}_r (\mathbf{x} - \mathbf{x}') \right],$$
where $\mathbf{L}_r$ is a diagonal matrix which allows for different length-scales along each dimension. The smoothing kernel had the same form,
$$k_{qr}(\boldsymbol{\tau}) = \frac{S_{qr} |\mathbf{L}_{qr}|^{1/2}}{(2\pi)^{p/2}} \exp\left[ -\frac{1}{2} \boldsymbol{\tau}^\top \mathbf{L}_{qr} \boldsymbol{\tau} \right],$$
where $S_{qr} \in \mathbb{R}$ and $\mathbf{L}_{qr}$ is a symmetric positive definite matrix. For this kernel/covariance function combination the necessary integrals are tractable (see supplementary material).
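In code these two functions are straightforward (our own sketch; p is the input dimension, and L_r, L_qr are supplied as matrices):

```python
import numpy as np

def latent_se_cov(x, xp, Lr):
    # k_{u_r u_r}(x, x') = exp(-0.5 (x - x')^T L_r (x - x'))
    d = np.asarray(x, float) - np.asarray(xp, float)
    return np.exp(-0.5 * d @ Lr @ d)

def smoothing_kernel_nd(tau, Sqr, Lqr):
    # k_{qr}(tau) = S_qr |L_qr|^{1/2} / (2 pi)^{p/2} exp(-0.5 tau^T L_qr tau)
    tau = np.asarray(tau, float)
    p = tau.size
    norm = Sqr * np.sqrt(np.linalg.det(Lqr)) / (2.0 * np.pi)**(p / 2.0)
    return norm * np.exp(-0.5 * tau @ Lqr @ tau)
```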
We first set up a toy problem in which we evaluate the quality of the prediction and the speed of
the approximation. The toy problem consists of Q = 4 outputs, one latent function, R = 1, and
N = 200 observation points for each output. The training data was sampled from the full GP with
the following parameters, S11 = S21 = 1, S31 = S41 = 5, L11 = L21 = 50, L31 = 300, L41 = 200
for the outputs and L1 = 100 for the latent function. For the independent processes, wq (x), we
simply added white noise with variances $\sigma_1^2 = \sigma_2^2 = 0.0125$, $\sigma_3^2 = 1.2$ and $\sigma_4^2 = 1$. For the sparse
approximations we used M = 30 fixed inducing points equally spaced over the range of the
input and R = 1. We sought the kernel parameters through maximizing the marginal likelihood
using a scaled conjugate gradient algorithm. For test data we removed a portion of one output as
shown in Figure 1 (points in the interval [−0.8, 0] were removed). The predictions shown correspond
to the full GP (Figure 1(a)), an independent GP (Figure 1(b)), the FITC approximation (Figure 1(c))
and the PITC approximation (Figure 1(d)). Due to the strong dependencies between the signals, our
model is able to capture the correlations and predicts accurately the missing information.
Table 1 shows prediction results over an independent test set. We used 300 points to compute the
standardized mean square error (SMSE) [11] and ten repetitions of the experiment, so that we also
included one standard deviation for the ten repetitions. The training times for iteration of each model
are 1.45 ± 0.23 secs for the full GP, 0.29 ± 0.02 secs for the FITC and 0.48 ± 0.01 for the PITC.
Table 1 shows that the SMSE of the sparse approximations is similar to the one obtained with the
full GP with a considerable reduction of training times.
Method  | Output 1    | Output 2    | Output 3    | Output 4
--------|-------------|-------------|-------------|------------
Full GP | 1.07 ± 0.08 | 0.99 ± 0.03 | 1.12 ± 0.07 | 1.05 ± 0.07
FITC    | 1.08 ± 0.09 | 1.00 ± 0.03 | 1.13 ± 0.07 | 1.04 ± 0.07
PITC    | 1.07 ± 0.08 | 0.99 ± 0.03 | 1.12 ± 0.07 | 1.05 ± 0.07
Table 1: Standardized mean square error (SMSE) for the toy problem over an independent test set. All numbers are to be multiplied by 10⁻². The experiment was repeated ten times. The table includes the value of one standard deviation over the ten repetitions.
We now follow a similar analysis for a dataset consisting of weather data collected from a sensor network located on the south coast of England. The network includes four sensors (named Bramblemet,
Sotonmet, Cambermet and Chimet) each of which measures several environmental variables [12].
We selected one of the sensor signals, tide height, and applied the PITC approximation scheme
with an additional squared exponential independent kernel for each wq (x) [11]. Here Q = 4 and
we chose N = 1000 of the 4320 for the training set, leaving the remaining points for testing. For
comparison we also trained a set of independent GP models. We followed [12] in simulating sensor
failure by introducing some missing ranges for these signals. In particular, we have a missing range
[Figure 1 plots, panels (a)-(d): (a) Output 4 using the full GP; (b) Output 4 using an independent GP; (c) Output 4 using the FITC approximation; (d) Output 4 using the PITC approximation.]
Figure 1: Predictive mean and variance using the full multi-output GP, the sparse approximation and an independent GP for output 4. The solid line corresponds to the predictive mean, the shaded region corresponds to 2 standard deviations away from the mean and the dashed line is the actual value of the signal without noise. The dots are the noisy training points. There is a range of missing data in the interval [−0.8, 0.0]. The crosses in figures 1(c) and 1(d) correspond to the locations of the inducing inputs.
of [0.6, 1.2] for the Bramblemet tide height sensor and [1.5, 2.1] for the Cambermet. For the other
two sensors we used all 1000 training observations. For the sparse approximation we took M = 100
equally spaced inducing inputs. We see from Figure 2 that the PITC approximation captures the dependencies and predicts closely the behavior of the signal in the missing range. This contrasts with
the behavior of the independent model, which is not able to follow the original signal.
As another example we employ the Jura dataset, which consists of measurements of concentrations
of several heavy metals collected in the topsoil of a 14.5 km² region of the Swiss Jura. The data is divided into a prediction set (259 locations) and a validation set (100 locations)¹. In a typical situation, referred to as the undersampled or heterotopic case, a few expensive measurements of the attribute
of interest are supplemented by more abundant data on correlated attributes that are cheaper to sample. We follow the experiments described in [5, p. 248,249] in which a primary variable (cadmium
and copper) at prediction locations in conjunction with some secondary variables (nickel and zinc
for cadmium; lead, nickel and zinc for copper) at prediction and validation locations, are employed
to predict the concentration of the primary variable at validation locations. We compare results of
independent GP, the PITC approximation, the full GP and ordinary co-kriging. For the PITC experiments, a k-means procedure is employed first to find the initial locations of the inducing values
and then these locations are optimized in the same optimization procedure used for the parameters.
Each experiment is repeated ten times. The results for ordinary co-kriging were obtained from [5,
p. 248,249]. In this case, no values for standard deviation are reported. Figure 3 shows results of
prediction for cadmium (Cd) and copper (Cu). From figure 3(a), it can be noticed that using 50 inducing values, the approximation exhibits a similar performance to the co-kriging method. As more
1
This data is available at http://www.ai-geostats.org/
5
4.5
4
4
3.5
3.5
Tide Height (m)
Tide Height (m)
5
4.5
3
2.5
3
2.5
2
2
1.5
1.5
1
1
0.5
0
0.5
0
0.5
1
1.5
Time (days)
2
2.5
3
5
5
4.5
4.5
4
4
3.5
3.5
3
2.5
2
1.5
1
1
0.5
0
0.5
0
1.5
Time (days)
2
2.5
2
2.5
3
2.5
3
3
2
1
1.5
Time (days)
2.5
1.5
0.5
1
(b) Bramblemet using PITC
Tide Height (m)
Tide Height (m)
(a) Bramblemet using an independent GP
0.5
3
(c) Cambermet using an independent GP
0.5
1
1.5
Time (days)
2
(d) Cambermet using PITC
Figure 2: Predictive Mean and variance using independent GPs and the PITC approximation for the tide height
signal in the sensor dataset. The dots indicate the training observations while the dash indicates the testing
observations. We have emphasized the size of the training points to differentiate them from the testing points.
The solid line corresponds to the mean predictive. The crosses in figures 2(b) and 2(d) corresponds to the
locations of the inducing inputs.
inducing values are included, the approximation follows the performance of the full GP, as it would
be expected. From figure 3(b), it can be observed that, although the approximation is better that the
independent GP, it does not obtain similar results to the full GP. Summary statistics of the prediction
data ([5, p. 15]) shows higher variability for the copper dataset than for the cadmium dataset, which
explains in some extent the different behaviors.
16
MEAN ABSOLUTE ERROR Cu
MEAN ABSOLUTE ERROR Cd
0.58
0.56
0.54
0.52
0.5
0.48
0.46
0.44
0.42
IGP
P(50) P(100) P(200) P(500) FGP
(a) Cadmium (Cd)
CK
15
14
13
12
11
10
9
8
7
IGP
P(50) P(100) P(200) P(500) FGP
CK
(b) Copper (Cu)
Figure 3: Mean absolute error and standard deviation for ten repetitions of the experiment for the Jura dataset
In the bottom of each figure, IGP stands for independent GP, P(M ) stands for PITC with M inducing values,
FGP stands for full GP and CK stands for ordinary co-kriging (see [5] for detailed description).
6
Conclusions
We have presented a sparse approximation for multiple output GPs, capturing the correlated information among outputs and reducing the amount of computational load for prediction and optimization purposes. The reduction in computational complexity for the PITC approximation is from
O(N 3 Q3 ) to O(N 3 Q). This matches the computational complexity for modeling with independent
GPs. However, as we have seen, the predictive power of independent GPs is lower.
Linear dynamical systems responses can be expressed as a convolution between the impulse response of the system with some input function. This convolution approach is an equivalent way of
representing the behavior of the system through a linear differential equation. For systems involving
high amounts of coupled differential equations [4], the approach presented here is a reasonable way
of obtaining approximate solutions and incorporating prior domain knowledge to the model.
One could optimize with respect to positions of the values of the latent functions. As the input
dimension grows, it might be more difficult to obtain an acceptable response. Some solutions to this
problem have already been proposed [14].
Acknowledgments
We thank the authors of [12] who kindly made the sensor network database available.
References
[1] E. V. Bonilla, K. M. Chai, and C. K. I. Williams. Multi-task Gaussian process prediction. In J. C. Platt,
D. Koller, Y. Singer, and S. Roweis, editors, NIPS, volume 20, Cambridge, MA, 2008. MIT Press. In
press.
[2] P. Boyle and M. Frean. Dependent Gaussian processes. In L. Saul, Y. Weiss, and L. Bouttou, editors,
NIPS, volume 17, pages 217?224, Cambridge, MA, 2005. MIT Press.
[3] M. Brookes. The matrix reference manual. Available on-line., 2005. http://www.ee.ic.ac.uk/
hp/staff/dmb/matrix/intro.html.
[4] P. Gao, A. Honkela, M. Rattray, and N. D. Lawrence. Gaussian process modelling of latent chemical
species: Applications to inferring transcription factor activities. Bioinformatics, 24(16):i70?i75, 2008.
[5] P. Goovaerts. Geostatistics For Natural Resources Evaluation. Oxford University Press, 1997. ISBN
0-19-511538-4.
[6] D. M. Higdon. Space and space-time modelling using process convolutions. In C. Anderson, V. Barnett,
P. Chatwin, and A. El-Shaarawi, editors, Quantitative methods for current environmental issues, pages
37?56. Springer-Verlag, 2002.
[7] N. D. Lawrence. Learning for larger datasets with the Gaussian process latent variable model. In Meila
and Shen [9].
[8] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: The informative
vector machine. In S. Becker, S. Thrun, and K. Obermayer, editors, NIPS, volume 15, pages 625?632,
Cambridge, MA, 2003. MIT Press.
[9] M. Meila and X. Shen, editors. AISTATS, San Juan, Puerto Rico, 21-24 March 2007. Omnipress.
[10] J. Qui?nonero Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process
regression. JMLR, 6:1939?1959, 2005.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006. ISBN 0-262-18253-X.
[12] A. Rogers, M. A. Osborne, S. D. Ramchurn, S. J. Roberts, and N. R. Jennings. Towards real-time information processing of sensor network data using computationally efficient multi-output Gaussian processes.
In Proceedings of the International Conference on Information Processing in Sensor Networks (IPSN
2008), 2008. In press.
[13] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Y. Weiss,
B. Sch?olkopf, and J. C. Platt, editors, NIPS, volume 18, Cambridge, MA, 2006. MIT Press.
[14] E. Snelson and Z. Ghahramani. Local and global sparse Gaussian process approximations. In Meila and
Shen [9].
[15] Y. W. Teh, M. Seeger, and M. I. Jordan. Semiparametric latent factor models. In R. G. Cowell and
Z. Ghahramani, editors, AISTATS 10, pages 333?340, Barbados, 6-8 January 2005. Society for Artificial
Intelligence and Statistics.
| 3553 |@word cu:3 briefly:1 inversion:1 covariance:35 nystr:1 solid:2 igp:3 reduction:2 initial:1 lqr:2 recovered:1 current:1 surprising:1 must:1 written:2 multioutput:1 informative:2 s21:1 intelligence:1 prohibitive:1 selected:1 lr:2 location:10 herbrich:1 org:1 height:7 along:1 differential:2 consists:2 overhead:3 introduce:1 x0:19 expected:1 behavior:4 nor:1 multi:8 inspired:1 actual:1 notation:5 q2:2 whilst:2 sparsification:1 pseudo:1 quantitative:1 qm:4 scaled:1 uk:3 platt:2 mauricio:1 positive:2 local:1 oxford:1 might:2 chose:1 higdon:1 shaded:1 co:4 factorization:2 range:5 pitc:20 acknowledgment:1 testing:3 block:3 definite:2 swiss:1 procedure:2 goovaerts:1 weather:1 convenient:1 marginalize:1 operator:1 storage:7 context:1 influence:1 applying:2 www:2 equivalent:1 optimize:1 dz:5 maximizing:1 missing:5 williams:2 kqr:8 shen:3 boyle:1 q:1 qq:1 user:1 exact:1 gps:10 element:5 expensive:1 located:1 predicts:2 database:1 observed:4 role:1 bottom:1 capture:2 region:2 removed:2 kriging:4 intuition:1 complexity:11 trained:1 rewrite:1 predictive:10 joint:1 blockdiag:3 represented:1 fast:1 artificial:1 cokriging:1 richer:1 larger:3 supplementary:2 relax:1 cov:14 neil:1 statistic:2 gp:32 jointly:2 noisy:1 differentiate:1 isbn:2 took:1 propose:1 product:3 zm:2 km2:1 hadamard:1 combining:1 nonero:1 roweis:1 description:1 inducing:11 olkopf:1 qr:4 exploiting:2 manchester:2 chai:1 iq:2 radical:1 ac:3 frean:1 school:2 eq:2 strong:1 c:2 involves:1 come:1 indicate:3 closely:1 attribute:2 ipsn:1 material:2 rogers:1 explains:1 hold:1 considered:1 ic:1 exp:2 standarized:2 lawrence:4 predict:1 sought:1 purpose:1 grouped:2 repetition:4 puerto:1 mit:5 sensor:11 gaussian:16 aim:1 rather:1 ck:3 conjunction:1 q3:2 modelling:3 rank:2 fq:17 likelihood:9 indicates:1 contrast:1 seeger:2 inference:3 dependent:4 el:1 typically:3 entire:1 w:1 relation:1 koller:1 going:1 interested:1 ksp:1 among:1 html:1 issue:1 smoothing:6 integration:1 constrained:1 marginal:6 construct:1 once:1 barnett:1 identical:1 others:2 employ:2 few:2 sotonmet:1 cheaper:1 replaced:1 consisting:1 interest:1 evaluation:1 brooke:1 accurate:1 integral:2 necessary:1 jura:3 abundant:1 formalism:2 modeling:4 ordinary:3 introducing:1 deviation:5 entry:1 kq:2 inspires:1 reported:1 dependency:4 synthetic:2 international:1 retain:1 barbados:1 dmb:1 together:1 again:1 squared:2 juan:1 derivative:1 leading:3 style:1 toy:3 s41:1 account:1 sec:2 includes:1 bonilla:1 view:1 candela:1 portion:1 bayes:1 om:1 square:2 variance:4 who:2 sqr:1 correspond:2 spaced:2 modelled:1 accurately:1 qrp:1 l21:1 manual:1 failure:1 involved:1 obvious:1 associated:3 sampled:1 dataset:6 knowledge:1 rico:1 higher:1 day:4 follow:4 methodology:1 response:3 alvarez:1 wei:2 evaluated:5 though:1 strongly:1 anderson:1 l31:1 correlation:5 honkela:1 quality:1 perhaps:1 impulse:1 grows:2 chemical:1 symmetric:1 white:4 noted:1 zinc:2 demonstrate:1 cp:4 l1:1 omnipress:1 coast:1 snelson:2 functional:1 volume:4 discussed:1 approximates:1 relating:1 significant:1 refer:1 measurement:2 cambridge:5 ai:1 meila:3 hp:1 had:2 dot:2 similarity:1 posterior:2 verlag:1 s11:1 tide:7 seen:1 additional:2 staff:1 employed:3 signal:7 semi:1 smoother:1 full:16 multiple:7 ramchurn:1 reduces:1 match:2 england:1 calculation:1 cross:7 characterized:1 divided:1 equally:2 y:2 prediction:13 involving:1 regression:5 iteration:1 kernel:11 cps:1 semiparametric:2 separately:1 interval:2 leaving:1 sch:1 sr:1 south:1 elegant:1 jordan:1 ee:1 constraining:1 independence:6 reduce:1 expression:2 becker:1 f:7 kuu:1 
generally:1 jennings:1 clear:1 detailed:2 amount:2 ten:6 http:2 notice:1 fgp:3 rattray:1 write:2 key:2 four:1 terminology:1 drawn:1 chimet:1 neither:1 ce:1 undertaken:1 inverse:2 named:1 almost:1 reasonable:1 acceptable:1 qui:1 capturing:1 followed:1 simplification:1 dash:2 neill:1 activity:1 constraint:2 kronecker:1 speed:2 ksr:2 structured:1 combination:1 march:1 conjugate:1 across:4 s31:1 ur:26 making:1 computationally:1 equation:4 resource:1 slfm:2 turn:1 discus:1 wrt:1 singer:1 tractable:1 available:3 kur:7 multiplied:1 apply:1 away:1 simulating:1 convolved:1 rp:2 original:1 remaining:1 include:3 marginalized:1 unifying:1 exploit:1 giving:1 ghahramani:3 establish:2 society:1 pollution:1 already:2 added:1 noticed:1 intro:1 strategy:1 concentration:3 primary:2 diagonal:4 obermayer:1 exhibit:1 gradient:1 thank:1 thrun:1 considers:1 collected:2 trivial:1 extent:1 length:2 retained:1 providing:1 setup:1 difficult:1 robert:1 potentially:1 stated:1 teh:1 l11:1 convolution:17 observation:4 datasets:2 january:1 situation:1 variability:1 y1:1 introduced:2 copper:5 namely:1 required:2 specified:1 z1:2 optimized:2 learned:1 geostatistics:2 nip:4 address:1 able:2 cadmium:5 dynamical:1 smse:3 challenge:1 power:1 natural:2 predicting:1 undersampled:1 dz0:3 representing:2 scheme:2 improve:1 fitc:7 imply:1 yq:11 coupled:1 review:1 prior:2 kf:33 fully:3 suggestion:1 analogy:1 nickel:2 validation:3 metal:2 editor:7 corrupt:1 share:2 heavy:2 cd:3 summary:1 rasmussen:2 neat:1 allow:1 pollutant:1 saul:1 focussed:1 absolute:3 sparse:16 distributed:1 dimension:3 xn:2 world:1 stand:4 qn:4 author:1 made:2 san:1 simplified:1 employing:1 approximate:3 compact:1 transcription:1 keep:1 global:1 assumed:3 latent:33 table:4 ku:24 nature:1 obtaining:1 du:2 constructing:2 domain:2 diag:1 kindly:1 aistats:2 main:1 whole:1 noise:6 osborne:1 allowed:1 repeated:2 x1:2 referred:1 position:1 inferring:1 exponential:2 jmlr:1 z0:12 theorem:1 load:3 specific:1 emphasized:1 supplemented:1 incorporating:1 demand:1 simply:1 likely:1 gao:1 expressed:8 partially:2 u2:2 cowell:1 springer:1 corresponds:6 ivm:2 environmental:2 bramblemet:4 ma:5 conditional:13 cambermet:4 towards:1 man:2 considerable:1 included:3 determined:1 except:1 reducing:2 typical:1 specie:1 secondary:1 intact:1 wq:6 bioinformatics:1 evaluate:1 correlated:3 |
2,817 | 3,554 | Exact Convex Confidence-Weighted Learning
Koby Crammer Mark Dredze Fernando Pereira?
Department of Computer and Information Science , University of Pennsylvania
Philadelphia, PA 19104
{crammer,mdredze,pereira}@cis.upenn.edu
Abstract
Confidence-weighted (CW) learning [6], an online learning method for linear classifiers, maintains a Gaussian distributions over weight vectors, with a covariance
matrix that represents uncertainty about weights and correlations. Confidence
constraints ensure that a weight vector drawn from the hypothesis distribution
correctly classifies examples with a specified probability. Within this framework,
we derive a new convex form of the constraint and analyze it in the mistake bound
model. Empirical evaluation with both synthetic and text data shows our version of
CW learning achieves lower cumulative and out-of-sample errors than commonly
used first-order and second-order online methods.
1
Introduction
Online learning methods for linear classifiers, such as the perceptron and passive-aggressive (PA)
algorithms [4], have been thoroughly analyzed and are widely used. However, these methods do not
model the strength of evidence for different weights arising from differences in the use of features
in the data, which can be a serious issue in text classification, where weights of rare features should
be trusted less than weights of frequent features.
Confidence-weighted (CW) learning [6], motivated by PA learning, explicitly models classifier
weight uncertainty with a full multivariate Gaussian distribution over weight vectors. The PA geometrical margin constraint is replaced by the probabilistic constraint that a classifier drawn from
the distribution should, with high probability, classify correctly the next example. While Dredze
et al. [6] explained CW learning in terms of the standard deviation of the margin induced by the
hypothesis Gaussian, in practice they used the margin variance to make the problem convex. In this
work, we use their original constraint but maintain convexity, yielding experimental improvements.
Our primary contributions are a mistake-bound analysis [11] and comparison with related methods.
We emphasize that this work focuses on the question of uncertainty about feature weights, not on
confidence in predictions. In large-margin classification, the margin?s magnitude for an instance
is sometimes taken as a proxy for prediction confidence for that instance, but that quantity is not
calibrated nor is it connected precisely to a measure of weight uncertainty. Bayesian approaches to
linear classification, such as Bayesian logistic regression [9], use a simple mathematical relationship
between weight uncertainty and prediction uncertainty, which unfortunately cannot be computed
exactly. CW learning preserves the convenient computational properties of PA algorithms while
providing a precise connection between weight uncertainty and prediction confidence that has led to
weight updates that are more effective in practice [6, 5].
We begin with a review of the CW approach, then show that the constraint can be expressed in a
convex form, and solve it to obtain a new CW algorithm. We also examine a dual representation
that supports kernelization. Our analysis provides a mistake bound and indicates that the algorithm
is invariant to initialization. Simulations show that our algorithm improves over first-order methods
?
Current affiliation: Google, Mountain View, CA 94043, USA.
1
(perceptron and PA) as well as other second order methods (second-order perceptron). We conclude
with a review of related work.
2
Confidence-Weighted Linear Classification
The CW binary-classifier learner works in rounds. On round i, the algorithm applies its current
linear classification rule hw (x) = sign(w ? x) to an instance xi ? Rd to produce a prediction
y?i ?{? 1, +1}, receives a true label yi ?{? 1, +1} and suffers a loss !(yi , y?i ). The rule hw can be
identified with w up to a scaling, and we will do so in what follows since our algorithm will turn out
to be scale-invariant. As usual, we define the margin of an example on round i as mi = yi (wi ? xi ),
where positive sign corresponds to a correct prediction.
CW classification captures the notion of confidence in the weights of a linear classifier with a probability density on classifier weight vectors, specifically a Gaussian distribution with mean ? ? Rd
and covariance matrix ? ? Rd?d . The values ?p and ?p,p represent knowledge of and confidence
in the weight for feature p. The smaller ?p,p , the more confidence we have in the mean weight value
?p . Each covariance term ?p,q captures our knowledge of the interaction between features p and q.
In the CW model, the traditional signed margin is the mean of the induced univariate Gaussian
random variable
"
!
(1)
M ? N y(? ? x), x# ?x .
This probabilistic model can be used for prediction in different ways. Here, we use the average
weight vector E [w] = ?, analogous to Bayes point machines [8]. The information captured by the
covariance ? is then used just to adjust training updates.
3
Update Rule
The CW update rule of Dredze et al. [6] makes the smallest adjustment to the distribution that
ensures the probability of correct prediction on instance i is no smaller than the confidence hyperparameter ? ? [0, 1]: Pr [yi (w ? xi ) ? 0] ? ?. The magnitude of the update is measured by its KL
divergence to the previous distribution, yielding the following constrained optimization:
(?i+1 , ?i+1 ) = arg min DKL (N (?, ?) % N (?i , ?i )) s.t. Pr [yi (w ? xi ) ? 0] ? ? . (2)
?,?
They rewrite the above optimization in terms of the standard deviation as:
# $
%
&
'
! ?1 "
1
det? i
# ?1
min
log
+ Tr ?i ? + (?i ? ?) ?i (?i ? ?) s.t. yi (? ? xi ) ? ? x#
i ?xi .
2
det?
(3)
Unfortunately, while the constraint of this problem is linear in ?, it is not convex in ?.
Dredze et al. [6, eq. (7)] circumvented that lack of convexity by removing the square root from
the right-hand-size of the constraint, which yields the variance. However, we found that the original optimization can be preserved while maintaining convexity with a change of variable. Since ?
1/2
1/2
is positive semidefinite (PSD), it can be written as ? =? 2 with ? = Qdiag(?1 , . . . ,? d )Q#
where Q is orthonormal and ?1 , . . . ,? d are the eigenvalues of ?; ? is thus also PSD. This change
yields the following convex optimization with a convex constraint in ? and ? simultaneously:
$
%
" 1
det? 2i
1
1 !
#
2
(?i+1 , ?i+1 ) = arg min log
+ (?i ? ?) ??2
+ Tr ??2
i ?
i (?i ? ?)
2
det? 2
2
2
s.t. yi (? ? xi ) ? ?%?xi % , ? is PSD .
(4)
We call our algorithm CW-Stdev and the original algorithm of Dredze et al. CW-Var.
3.1
Closed-Form Update
While standard optimization techniques can solve the convex program (4), we favor a closed-form
solution. Omitting the PSD constraint for now, we obtain the Lagrangian for (4),
( $
)
%
! ?2 2 "
1
det? 2i
# ?2
L=
log
?
?
?)
?
(?
?
?)
+? (?yi (? ? xi ) + ?%?xi %)
+
Tr
?
+
(?
i
i
i
i
2
det? 2
(5)
2
Input parameters a > 0 ; ? ? [0.5, 1]
Initialize ?1 = 0 , ?1 = aI ,? = ??1 (?) , ? = 1 + ?2 /2 , ? = 1 + ?2 .
For i = 1, . . . , n
? Receive a training example xi ? Rd
`
`
??
? Compute Gaussian margin distribution Mi ? N (?i ? xi ) , x#
i ?i x i
? Receive true label yi and compute
?2
?
q
1
2 v 2 ?2 + 4v
vi = x #
?
x
,
m
=
y
(?
?
x
)
(11)
,
u
=
?
+
?
??v
i i
i
i
i
i
i
i
i
i
i
4
(
!)
r
1
?i ?
?4
?i = max 0,
(14) , ?i = ?
?mi ? + m2i
+ vi ?2 ?
vi ?
4
ui + vi ?i ?
? Update
(12)
(22)
?i+1 = ?i + ?i yi ?i xi
?i+1 = ?i ? ?i ?i xi x#
i ?i
?
??1
1
?
+ ?i ?ui 2 diag2 (xi )
?i+1 = ??1
i
`
(full)
(10)
(diag)
(15)
?
Output Gaussian distribution N ?n+1 , ?n+1 .
Figure 1: The CW-Stdev algorithm. The numbers in parentheses refer to equations in the text.
At the optimum, it must be that
?
L = ??2
i (? ? ?i ) ? ?yi xi = 0
??
?i+1 = ?i + ?yi ?2i xi ,
?
(6)
where we assumed that ?i is non-singular (PSD). At the optimum, we must also have,
?xi x#
?
1
1
xi x#
?2
i ?
i
L = ???1 + ??2
+ ?? *
=0,
i ? + ??i + ?? * # 2
2x
??
2
2
2 xi ? xi
2 x#
?
i
i
(7)
from which we obtain the implicit-form update
xi x#
?2
i
??2
.
i+1 = ?i + ?? '
2 x
x#
?
i
i+1 i
(8)
Conveniently, these updates can be expressed in terms of the covariance matrix 1 :
?i+1 = ?i + ?yi ?i xi
,
x i x#
?1
i
??1
.
i+1 = ?i + ?? * #
xi ?i+1 xi
(9)
?1
?1
We observe that (9) computes ??1
i+1 as the sum of a rank-one PSD matrix and ?i . Thus, if ?i has
?1
strictly positive eigenvalues, so do ?i+1 and ?i+1 . Thus, ?i and ?i are indeed PSD non-singular,
as assumed above.
3.2
Solving for the Lagrange Multiplier ?
We now determine the value of the Lagrange multiplier ? and make the covariance update explicit.
We start by computing the inverse of (9) using the Woodbury identity [14, Eq. 135] to get
,
+
,?1
+
xi x #
??
?1
i
?i+1 = ?i + ?? *
x#
= ?i ? ?i xi *
i ?i . (10)
#?
# ? x ??
x#
?
x
x
x
+
x
i+1 i
i+1 i
i i
i
i
i
Let
ui = x#
i ?i+1 xi
,
vi = x#
i ? i xi
1
,
mi = yi (?i ? xi ) .
(11)
Furthermore, writing the Lagrangian of (3) and solving it would yield the same solution as Eqns. (9). Thus
the optimal solution of both (3) and (4) are the same.
3
Multiplying (10) by x#
i (left) and xi (right) we get ui = vi ? vi
solved for ui to obtain
*
?
??vi ? + ?2 vi2 ?2 + 4vi
.
ui =
2
-
?
??
ui +vi ??
.
vi , which can be
(12)
The KKT conditions for the optimization imply that either ? = 0 and no update is needed, or the
constraint (4) is an equality after the
? update. Using the equality version of (4) and Eqs. (9,10,11,12)
??vi ?+
?2 v 2 ?2 +4vi
i
we obtain mi + ?vi = ?
be rearranged into a quadratic equation
- 2 2 . ! , which can
!
"
"
?
2 2
2
2
2
in ?: ? vi 1 + ? + 2?mi vi 1 + 2 + mi ? vi ? = 0 . The smaller root of this equation
is always negative and thus not a valid Lagrange multiplier. We use the following abbreviations for
writing the larger root ?i : ? = 1 + ?2 /2 ; ? = 1 + ?2 . The larger root is then
*
?mi vi ? + m2i vi2 ? 2 ? vi2 ? (m2i ? vi ?2 )
?i =
.
(13)
vi2 ?
?
?
The constraint (4) is satisfied before the update if mi ? ? vi ? 0. If mi ? 0, then mi ? ? vi and
from (13) we have that ?i > 0. If, instead, mi ? 0, then, again by (13), we have
'
?i > 0 ? mi vi ? < m2i vi2 ? 2 ? vi2 ? (m2i ? vi ?2 ) ? mi <?v i .
From the KKT conditions, either ?i = 0 or (3) is satisfied as an equality and ?i = ?i > 0. We
summarize the discussion in the following lemma:
Lemma 1 The solution of (13) satisfies the KKT conditions, that is either ?i ? 0 or the constraint
of (3) is satisfied before the update with the parameters ?i and ?i .
We obtain the final form of ?i by simplifying (13) together with Lemma 1,
'
?
?
? 1 ?mi ? + m2i ?4 + vi ?2 ? ?
4
.
max 0,
?
? vi
?
(14)
To summarize, after receiving the correct label yi the algorithm checks whether the probability of a
correct prediction under the current parameters is greater than a confidence threshold ? =?( ?). If
so, it does nothing. Otherwise it performs an update as described above. We initialize ?1 = 0 and
?1 = aI for some a > 0. The algorithm is summarized in Fig. 1.
Two comments are in order. First, if ? = 0.5, then from Eq. (9) we see that only ? will be updated,
not ?, because ? = 0 ? ? = 0.5. In this case the covariance ? parameter does not influence the
decision, only the mean ?. Furthermore, for length-one input vectors, at the first round we have
2
?1 = aI, so the first-round constraint is yi (wi ? xi ) ? a %xi % = a, which is equivalent to the
original PA update.
Second, the update described above yields full covariance matrices. However, sometimes we may
prefer diagonal covariance matrices, which can be achieved by projecting the matrix ?i+1 that
results from the update onto the set of diagonal matrices. In practice it requires setting all the
off-diagonal elements to zero, leaving only the diagonal elements. In fact, if ?i is diagonal then we
only need to project xi x#
i to a diagonal matrix. We thus replace (9) with the following update,
?i
2
?1
??1
i+1 = ?i + ? ? diag (xi ) ,
ui
(15)
where diag2 (xi ) is a diagonal matrix made from the squares of the elements of xi on the diagonal.
Note that for diagonal matrices there is no need to use the Woodbury equation to compute the inverse,
as it can be computed directly element-wise. We use CW-Stdev (or CW-Stdev-full) to refer to the
full-covariance algorithm, and CW-Stdev-diag to refer to the diagonal-covariance algorithm.
Finally, the following property of our algorithm shows that it can be used with Mercer kernels:
4
Theorem 2 (Representer Theorem) The mean ?i and covariance ?i parameters computed by the
algorithm in Fig. 1 can be written as linear combinations of the input vectors with coefficients that
depend only on inner products of input vectors:
?i =
i?1
5
(i)
?p,q
xp x#
q + aI
,
?i =
i?1
5
?p(i) xp .
(16)
p
p,q=1
The proof, given in the appendix, is a simple induction.
4
Analysis
We analyze CW-Stdev in two steps. First, we show that performance does not depend on initialization and then we compute a bound on the number of mistakes that the algorithm makes.
4.1
Invariance to Initialization
The algorithm in Fig. 1 uses a predefined parameter a to initialize the covariance matrix. Since the
decision to update depends on the covariance matrix, which implicitly depends on a through ?i and
vi , one may assume that a effects performance. In fact the number of mistakes is independent of
a, i.e. the constraint of (3) is invariant to scaling. Specifically, if it holds for mean and covariance
parameters ? and ?, it holds also for the scaled parameters c? and c2 ? for any c > 0. The following
lemma states that the scaling is controlled by a. Thus, we can always initialize the algorithm with a
value of a = 1. If, in addition to predictions, we also need the distribution over weight vectors, the
scale parameter a should be calibrated.
Lemma 3 Fix a sequence of examples (x1 , y 1 ) . . . (xn , y n ). Let ?i , ?i , mi , vi , ?i , ui be the quantities obtained throughout the execution of the algorithm described in Fig. 1 initialized with (0, I)
? i, ?
? i, m
? i , v?i , ?
?i, u
?i be the corresponding quantities obtained throughout the exe(a = 1). Let also ?
cution of the algorithm, with an alternative initialization of (0, aI) (for some a > 0). The following
relations between the two set of quantities hold:
m
?i =
?
?
1
? i = a?i .
? i = a?i , u
ami , v?i = avi , ?
? i = ? ?i , ?
?i = aui , ?
a
(17)
Proof sketch: The proof proceeds by induction. The initial values of these quantities clearly satisfy
the required equalities. For the induction step we assume that (17) holds for some i and show that
these identities also hold for i + 1 using Eqs. (9,14,11,12)
. ?
?
From the lemma we see that the quantity m
? i / v?i = mi / vi is invariant to a. Therefore, the
behavior of the algorithm in general, and its updates and mistakes in particular, are independent to
the choice of a. Therefore, we assume a = 1 in what follows.
4.2
Analysis in the Mistake Bound Model
The main theorem of the paper bounds the number of mistakes made by CW-Stdev.
Theorem 4 Let (x1 , y 1 ) . . . (xn , y n ) be an input sequence for the algorithm of Fig. 1, initialized
with (0, I), with xi ? Rd and y i ?{? 1, +1} . Assume there exist ?? and ?? such that for all i for
which the algorithm made an update (?i > 0),
?? # xi yi ? ?#
i+1 xi yi
and
?
#
x#
i ? xi ? xi ?i+1 xi
.
(18)
Then the following holds:
no. mistakes ?
5
i
?i2 vi ?
.
1 + ?2 ?
? log det? ? + Tr (?? ) + ?? # ??1
n+1 ? ? d
2
?
5
(19)
160
Cumulative Loss
140
120
1.00
9
Perceptron
PA
2nd Ord
Std?diag
Std?full
Var?diag
Var?full
8
0.95
7
Stdev Accuracy
180
6
Test Error
200
100
80
5
4
60
3
40
2
0.90
0.85
Reuters
Sentiment
20 Newsgroups
0.80
20
1
100
200
300
400
500 600
Round
700
800
900 1000
(a)
0.80
0
Perceptron
PA
0.85
(b)
0.90
Variance Accuracy
2nd OrderStd?diag Std?full Var?diag Var?full
0.95
1.00
(c)
Figure 2: (a) The average and standard deviation of the cumulative number of mistakes for seven
algorithms. (b) The average and standard deviation of test error (%) over unseen data for the seven
algorithms. (c) Comparison between CW-Stdev-diag and CW-Var-diag on text classification.
The proof is given in the appendix.
The above bound depends on an output of the algorithm, ?n+1 , similar to the bound for the secondorder perceptron [3]. The two conditions (18) imply linear separability of the input sequence by
?? :
'
(18)
(4)
(18)
# ?
# ?
#
?? # xi yi ? ?#
i+1 xi yi ? ? xi ?i+1 xi ? xi ? xi ? min xi ? xi > 0 ,
i
where the superscripts in parentheses refer to the inequalities used. From (10), we observe that
?i+1 * ?i for all i, so ?n+1 * ?i+1 * ?1 = I for all i. Therefore, the conditions on ?? in (18)
are satisfied by ?? = ?n+1 . Furthermore, if ?? satisfies the stronger conditions yi (?? ? xi ) ? %xi %,
from ?i+1 * I above it follows that
'
'
#
#
(??? )# xi yi ? ?%xi % = ? x#
i Ixi ? ? xi ?i+1 xi = ?i+1 xi yi ,
where the last equality holds since we assumed that an update was made for the ith example. In this
situation, the bound becomes
.
?2 + 1
2
? # ?1
?
(?
log
det?
+
Tr
(?
)
?
d)
+
(?
+
1)
?
?
?
.
n+1
n+1
n+1
?2
2
?
2
?
The quantity ?? # ??1
n+1 ? in this bound is analogous to the quantity R %? % in the perceptron
bound [13], except that the norm of the examples does not come in explicitly as the radius R of the
enclosing ball, but implicitly through the fact that ??1
n+1 is a sum of example outer products (9). In
addition, in this version of the bound we impose a margin of 1 under the condition that examples
have unit norm, whereas in the perceptron bound, the margin of 1 is for examples with arbitrary
norm. This follows from the fact that (4) is invariant to the norm of xi .
5
Empirical Evaluation
We illustrate the benefits of CW-Stdev with synthetic data experiments. We generated 1, 000 points
in R20 where the first two coordinates were drawn from a 45? rotated Gaussian distribution with
standard deviation 1. The remaining 18 coordinates were drawn from independent Gaussian distributions N (0, 2). Each point?s label depended on the first two coordinates using a separator parallel
to the long axis of the ellipsoid, yielding a linearly separable set (Fig. 3(top)). We evaluated five online learning algorithms: the perceptron [16] , the passive-aggressive (PA) algorithm [4], the secondorder perceptron (SOP) [3], CW-Var-diag, CW-Var-full [6], CW-Stdev-diag and CW-Stdev-full. All
algorithm parameters were tuned over 1, 000 runs.
Fig. 2(a) shows the average cumulative mistakes for each algorithm; error bars indicate one unit of
standard deviation. Clearly, second-order algorithms, which all made fewer than 80 mistakes, outperform the first-order ones, which made at least 129 mistakes. Additionally, CW-Var makes more
mistakes than CW-Stdev: 8% more in the diagonal case and 17% more in the full. The diagonal
methods performed better than the first order methods, indicating that while they do not use any
6
second-order information, they capture additional information for single features. For each repetition, we evaluated the resulting classifiers on 10, 000 unseen test examples (Fig. 2(b)). Averaging
improved the first-order methods. The second-order methods outperform the first-order methods,
and CW-Stdev outperforms all the other methods. Also, the full case is less sensitive across runs.
The Gaussian distribution over weight vectors after 50 rounds is represented in Fig. 3(bot). The 20
dimensions of the version space are grouped into 10 pairs, the first containing the two meaningful
features. The dotted segment represents the first two coordinates of possible representations of
the true hyperplane in the positive quadrant. Clearly, the corresponding vectors are orthogonal to
the hyperplane shown in Fig. 3(top). The solid black ellipsoid represents the first two significant
feature weights; it does not yet lie of the dotted segment because the algorithm has not converged.
Nevertheless, the long axis is already parallel to the true set of possible weight vectors. The axis
perpendicular to the weight-vector set is very small, showing that there is little freedom in that
direction. The remaining nine ellipsoids represent the covariance of pairs of noise features. Those
ellipsoids are close to circular and have centers close to the origin, indicating that the corresponding
feature weights should be near zero but without much confidence.
NLP Evaluation: We compared CW-Stdev-diag with CW-Var-diag, which beat many state of the
art algorithms on 12 NLP datasets [6]. We followed the same evaluation setting using 10-fold cross
validation and the same splits for both algorithms. Fig. 2(c) compares the accuracy on test data of
each algorithm; points above the line represent improvements of CW-Stdev over CW-Var. Stdev
improved on eight of the twelve datasets and, while the improvements are not significant, they show
the effectiveness of our algorithm on real world data.
6
Related Work
Online additive algorithms have a long history, from with the
perceptron [16] to more recent methods [10, 4]. Our update
has a more general form, in which the input vector xi is linearly transformed using the covariance matrix, both rotating
the input and assigning weight specific learning rates. Weightspecific learning rates appear in neural-network learning [18],
although they do not model confidence based on feature variance.
30
20
10
0
?10
?20
?30
?25
?20
?15
?10
?5
0
5
0
0.5
1
1.5
10
15
20
25
2.5
2
1.5
1
0.5
0
?0.5
?1
?0.5
2
2.5
3
The second order perceptron (SOP) [3] demonstrated that
second-order information can improve on first-order methods.
Both SOP and CW maintain second-order information. SOP
is mistake driven while CW is passive-aggressive. SOP uses
the current instance in the correlation matrix for prediction
while CW updates after prediction. A variant of CW-Stdev
similar to SOP follows from our derivation if we fix the Lagrange multiplier in (5) to a predefined value ?i = ?, omit
the square root, and use a gradient-descent optimization step.
Fundamentally, CW algorithms have a probabilistic motivation, while the SOP is geometric: replace the ball around an
example with a refined ellipsoid. Shivaswamy and Jebara [17]
used a similar motivation in batch learning.
Figure 3: Top : Plot of the two informative features of the synthetic
data. Bottom: Feature weight distributions of CW-Stdev-full after 50
examples.
Ensemble learning shares the idea of combining multiple classifiers. Gaussian process classification (GPC) maintains a
Gaussian distribution over weight vectors (primal) or over regressor values (dual). Our algorithm uses a different update
criterion than the standard GPC Bayesian updates [15, Ch.3],
avoiding the challenge of approximating posteriors. Bayes
point machines [8] maintain a collection of weight vectors
consistent with the training data, and use the single linear classifier which best represents the collection. Conceptually, the collection is a non-parametric distribution over the weight vectors. Its online
version [7] maintains a finite number of weight-vectors which are updated simultaneously. The rele7
vance vector machine [19] incorporates probabilistic models into the dual formulation of SVMs. As
in our work, the dual parameters are random variables distributed according to a diagonal Gaussian
with example specific variance. The weighted-majority [12] algorithm and later improvements [2]
combine the output of multiple arbitrary classifiers, maintaining a multinomial distribution over the
experts. We assume linear classifiers as experts and maintain a Gaussian distribution over their
weight vectors.
7
Conclusion
We presented a new confidence-weighted learning method for linear classifier based on the standard
deviation. We have shown that the algorithm is invariant to scaling and we provided a mistake-bound
analysis. Based on both synthetic and NLP experiments, we have shown that our method improves
upon recent first and second order methods. Our method also improves on previous CW algorithms.
We are now investigating special cases of CW-Stdev for problems with very large numbers of features, multi-class classification, and batch training.
References
[1] Y. Censor and S.A. Zenios. Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press, New York, NY, USA, 1997.
[2] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to
use expert advice. Journal of the Association for Computing Machinery, 44(3):427?485, May 1997.
[3] Nicol?o Cesa-Bianchi, Alex Conconi, and Claudio Gentile. A second-order perceptron algorithm. Siam
Journal of Commutation, 34(3):640?668, 2005.
[4] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms.
Journal of Machine Learning Research, 7:551?585, 2006.
[5] Mark Dredze and Koby Crammer. Active learning with confidence. In ACL, 2008.
[6] Mark Dredze, Koby Crammer, and Fernando Pereira. Confidence-weighted linear classification. In International Conference on Machine Learning, 2008.
[7] E. Harrington, R. Herbrich, J. Kivinen, J. Platt, and R.C. Williamson. Online bayes point machines. In
7th Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD), 2003.
[8] R. Herbrich, T. Graepel, and C. Campbell. Bayes point machines. JMLR, 1:245?279, 2001.
[9] T. Jaakkola and M. Jordan. A variational approach to bayesian logistic regression models and their
extensions. In Workshop on Artificial Intelligence and Statistics, 1997.
[10] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors.
Information and Computation, 132(1):1?64, January 1997.
[11] N. Littlestone. Learning when irrelevant attributes abound: A new linear-threshold algorithm. Machine
Learning, 2:285?318, 1988.
[12] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation,
108:212?261, 1994.
[13] A. B. J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on the
Mathematical Theory of Automata, volume XII, pages 615?622, 1962.
[14] K. B. Petersen and M. S. Pedersen. The matrix cookbook, 2007.
[15] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[16] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the
brain. Psychological Review, 65:386?407, 1958. (Reprinted in Neurocomputing (MIT Press, 1988).).
[17] P. Shivaswamy and T. Jebara. Ellipsoidal kernel machines. In AISTATS, 2007.
[18] Richard S. Sutton. Adapting bias by gradient descent: an incremental version of delta-bar-delta. In
Proceedings of the Tenth National Conference on Artificial Intelligence, pages 171?176. MIT Press, 1992.
[19] M. E. Tipping. Sparse bayesian learning and the relevance vector machine. Journal of Machine Learning
Research, 1:211?244, 2001.
[20] L. Xu, K. Crammer, and D. Schuurmans. Robust support vector machine training via convex outlier
ablation. In AAAI-2006, 2006.
8
| 3554 |@word version:6 stronger:1 norm:4 nd:2 dekel:1 simulation:1 covariance:17 simplifying:1 tr:5 solid:1 initial:1 tuned:1 outperforms:1 current:4 yet:1 assigning:1 written:2 must:2 additive:1 informative:1 plot:1 update:27 intelligence:2 fewer:1 warmuth:3 ith:1 provides:1 herbrich:2 five:1 mathematical:2 c2:1 symposium:1 combine:1 upenn:1 indeed:1 behavior:1 nor:1 examine:1 multi:1 brain:1 little:1 becomes:1 begin:1 classifies:1 project:1 provided:1 abound:1 what:2 mountain:1 exactly:1 classifier:13 scaled:1 platt:1 unit:2 omit:1 appear:1 positive:4 before:2 mistake:16 depended:1 sutton:1 oxford:1 signed:1 black:1 acl:1 initialization:4 perpendicular:1 woodbury:2 practice:3 empirical:2 adapting:1 convenient:1 confidence:18 quadrant:1 petersen:1 get:2 onto:1 close:2 cannot:1 storage:1 influence:1 writing:2 equivalent:1 lagrangian:2 center:1 demonstrated:1 williams:1 convex:9 commutation:1 automaton:1 helmbold:1 rule:4 haussler:1 orthonormal:1 notion:1 coordinate:4 analogous:2 updated:2 exact:1 us:3 ixi:1 hypothesis:2 secondorder:2 origin:1 pa:10 element:4 std:3 bottom:1 solved:1 capture:3 ensures:1 connected:1 convexity:3 ui:9 depend:2 rewrite:1 solving:2 segment:2 upon:1 learner:1 represented:1 stdev:20 derivation:1 effective:1 artificial:2 avi:1 shalev:1 refined:1 widely:1 solve:2 larger:2 otherwise:1 favor:1 statistic:1 unseen:2 final:1 online:8 superscript:1 sequence:3 eigenvalue:2 interaction:1 product:2 frequent:1 combining:1 ablation:1 convergence:1 optimum:2 produce:1 incremental:1 rotated:1 derive:1 illustrate:1 measured:1 eq:5 come:1 indicate:1 direction:1 radius:1 correct:4 attribute:1 fix:2 strictly:1 extension:1 hold:7 around:1 achieves:1 smallest:1 label:4 sensitive:1 grouped:1 repetition:1 weighted:8 trusted:1 mit:3 clearly:3 gaussian:15 always:2 claudio:1 jaakkola:1 focus:1 improvement:4 rank:1 indicates:1 check:1 censor:1 shivaswamy:2 relation:1 transformed:1 harrington:1 issue:1 classification:10 dual:4 arg:2 constrained:1 art:1 initialize:4 special:1 represents:4 koby:3 cookbook:1 representer:1 fundamentally:1 serious:1 novikoff:1 richard:1 simultaneously:2 preserve:1 divergence:1 neurocomputing:1 national:1 replaced:1 maintain:4 psd:7 freedom:1 organization:1 circular:1 mining:1 evaluation:4 adjust:1 analyzed:1 yielding:3 semidefinite:1 primal:1 predefined:2 orthogonal:1 machinery:1 initialized:2 rotating:1 littlestone:2 psychological:1 instance:5 classify:1 deviation:7 rare:1 predictor:1 exe:1 synthetic:4 calibrated:2 thoroughly:1 density:1 twelve:1 siam:1 international:1 probabilistic:5 off:1 receiving:1 regressor:1 together:1 again:1 aaai:1 cesa:2 satisfied:4 containing:1 expert:3 aggressive:4 summarized:1 coefficient:1 satisfy:1 explicitly:2 vi:29 depends:3 performed:1 root:5 view:1 closed:2 later:1 analyze:2 start:1 bayes:4 maintains:3 parallel:3 contribution:1 square:3 accuracy:3 variance:5 ensemble:1 yield:4 conceptually:1 bayesian:5 pedersen:1 multiplying:1 converged:1 history:1 suffers:1 proof:5 mi:17 knowledge:3 improves:3 graepel:1 campbell:1 tipping:1 asia:1 improved:2 formulation:1 evaluated:2 furthermore:3 just:1 implicit:1 correlation:2 hand:1 receives:1 sketch:1 lack:1 google:1 logistic:2 dredze:7 usa:2 omitting:1 effect:1 true:4 multiplier:4 equality:5 i2:1 round:7 eqns:1 criterion:1 performs:1 passive:4 geometrical:1 wise:1 variational:1 multinomial:1 volume:1 association:1 refer:4 significant:2 ai:5 rd:5 multivariate:1 posterior:1 recent:2 irrelevant:1 driven:1 inequality:1 affiliation:1 binary:1 yi:23 captured:1 greater:1 additional:1 impose:1 
gentile:1 determine:1 fernando:2 full:14 multiple:2 cross:1 long:3 dkl:1 parenthesis:2 controlled:1 prediction:12 variant:1 regression:2 m2i:6 sometimes:2 represent:3 kernel:2 achieved:1 preserved:1 receive:2 addition:2 whereas:1 singular:2 leaving:1 comment:1 induced:2 incorporates:1 effectiveness:1 jordan:1 call:1 near:1 split:1 newsgroups:1 pennsylvania:1 identified:1 zenios:1 inner:1 idea:1 reprinted:1 det:8 whether:1 motivated:1 sentiment:1 york:1 nine:1 gpc:2 ellipsoidal:1 svms:1 rearranged:1 schapire:1 outperform:2 exist:1 dotted:2 bot:1 sign:2 delta:2 arising:1 correctly:2 rosenblatt:1 xii:1 hyperparameter:1 threshold:2 nevertheless:1 drawn:4 tenth:1 sum:2 run:2 inverse:2 uncertainty:7 throughout:2 decision:2 prefer:1 scaling:4 appendix:2 bound:14 followed:1 fold:1 quadratic:1 strength:1 aui:1 constraint:15 precisely:1 alex:1 pakdd:1 min:4 separable:1 circumvented:1 department:1 pacific:1 according:1 combination:1 ball:2 smaller:3 across:1 separability:1 wi:2 explained:1 invariant:6 pr:2 projecting:1 outlier:1 taken:1 equation:4 turn:1 needed:1 singer:1 eight:1 observe:2 alternative:1 batch:2 original:4 top:3 remaining:2 ensure:1 nlp:3 maintaining:2 approximating:1 question:1 quantity:8 already:1 parametric:1 primary:1 usual:1 traditional:1 diagonal:13 gradient:4 cw:41 majority:2 outer:1 seven:2 induction:3 length:1 relationship:1 ellipsoid:5 providing:1 unfortunately:2 sop:7 negative:1 enclosing:1 bianchi:2 ord:1 datasets:2 finite:1 descent:3 beat:1 january:1 situation:1 mdredze:1 precise:1 arbitrary:2 jebara:2 pair:2 required:1 specified:1 kl:1 connection:1 bar:2 proceeds:1 summarize:2 challenge:1 program:1 max:2 vi2:6 kivinen:2 improve:1 imply:2 axis:3 philadelphia:1 text:4 review:3 geometric:1 discovery:1 nicol:1 freund:1 loss:2 var:11 versus:1 validation:1 proxy:1 xp:2 mercer:1 consistent:1 share:1 last:1 rasmussen:1 bias:1 exponentiated:1 perceptron:14 sparse:1 benefit:1 distributed:1 dimension:1 xn:2 valid:1 cumulative:4 world:1 computes:1 commonly:1 made:6 collection:3 emphasize:1 implicitly:2 r20:1 kkt:3 investigating:1 active:1 conclude:1 assumed:3 xi:60 shwartz:1 additionally:1 robust:1 ca:1 schuurmans:1 williamson:1 separator:1 diag:13 aistats:1 main:1 linearly:2 reuters:1 noise:1 motivation:2 nothing:1 x1:2 xu:1 fig:11 advice:1 ny:1 pereira:3 explicit:1 lie:1 jmlr:1 hw:2 removing:1 theorem:4 specific:2 showing:1 evidence:1 workshop:1 ci:1 keshet:1 magnitude:2 execution:1 margin:10 led:1 univariate:1 conveniently:1 lagrange:4 expressed:2 adjustment:1 conconi:1 applies:1 ch:1 corresponds:1 satisfies:2 abbreviation:1 identity:2 replace:2 change:2 specifically:2 except:1 ami:1 averaging:1 hyperplane:2 lemma:6 invariance:1 experimental:1 meaningful:1 perceptrons:1 indicating:2 mark:3 support:2 crammer:6 relevance:1 kernelization:1 avoiding:1 |
2,818 | 3,555 | Artificial Olfactory Brain for Mixture Identification
Mehmet K. Muezzinoglu1
Nikolai F. Rulkov1
Alexander Vergara1
Heny D. I. Abarbanel1
1
Institute for Nonlinear Science
University of California San Diego
9500 Gilman Dr., La Jolla, CA, 92093-0402
Ramon Huerta1
Allen Selverston1
2
Thomas Nowotny2
Mikhail I. Rabinovich1
Centre for Computational Neuroscience and Robotics
Department of Informatics, University of Sussex
Falmer, Brighton, BN1 9QJ, UK
Abstract
The odor transduction process has a large time constant and is susceptible to various types of noise. Therefore, the olfactory code at the sensor/receptor level is in
general a slow and highly variable indicator of the input odor in both natural and
artificial situations. Insects overcome this problem by using a neuronal device in
their Antennal Lobe (AL), which transforms the identity code of olfactory receptors to a spatio-temporal code. This transformation improves the decision of the
Mushroom Bodies (MBs), the subsequent classifier, in both speed and accuracy.
Here we propose a rate model based on two intrinsic mechanisms in the insect AL,
namely integration and inhibition. Then we present a MB classifier model that resembles the sparse and random structure of insect MB. A local Hebbian learning
procedure governs the plasticity in the model. These formulations not only help to
understand the signal conditioning and classification methods of insect olfactory
systems, but also can be leveraged in synthetic problems. Among them, we consider here the discrimination of odor mixtures from pure odors. We show on a set
of records from metal-oxide gas sensors that the cascade of these two new models facilitates fast and accurate discrimination of even highly imbalanced mixtures
from pure odors.
1
Introduction
Odor sensors are diverse in terms of their sensitivity to odor identity and concentrations. When
arranged in parallel arrays, they may provide a rich representation of the odor space. Biological
olfactory systems owe the bulk of their success to employing a large number of olfactory receptor
neurons (ORNs) of various phenotypes. However, chemo-diversity comes at the expense of two
pressing factors, namely response time and reproducibility, while fast and accurate processing of
chemo-sensory information is vital for survival not only in natural, but also in many artificial situations, including security applications.
Identifying and quantifying an odor accurately in a short time is an impressive characteristic of
insect olfaction. Given that there are approximately tens of thousands of ORNs sending slow and
noisy messages in parallel to downstream olfactory layers, in order to account for the observed
recognition performance, a computationally non-trivial process must be taking place along the insect
olfactory pathway following the transduction. The two stations responsible for this processing are
the Antennal Lobe (AL) and the Mushroom Bodies (MBs). The former acts as a signal conditioning
/ feature extraction device and the latter as an algebraic classifier.
Our motivation in this study is the potential for skillful feature extraction and classification methods
by insect olfactory systems in synthetic applications, which also deal with slow and noisy sensory
data. The particular problem we address is the discrimination of two-component odor mixtures from
1
1
2
Odor
Mushroom
Body
Classifier
Model
3
Odor
Identity
...
16
Sensor
Array
Dynamical
Antennal Lobe Model
Snapshot
Figure 1: The considered biomimetic framework to identify whether an applied gas is a pure odor or
a mixture. The input is transduced by 16 parallel metal-oxide gas sensors of different type generating
slow and noisy resistance time series. The signal conditioning in the antennal lobe is achieved by
the interaction of an excitatory Projection Neuron (PN) population (white nodes) with an inhibitory
Local Neurons (LNs, black nodes). The outcomes of AL processing is read from the PNs and
classified in the Mushroom Body, which is trained by a local Hebbian rule.
pure odors in a three-class classification setting. The problem is nontrivial when concentrations of
mixture components are imbalanced. It becomes particularly challenging when the overall mixture
concentration is small. We treat the problem on two mixture datasets recorded from metal-oxide gas
sensors (included in the supplementary material).
We propose in the next section a dynamical rate model mimicking the AL?s signal conditioning
function. By testing the model first with a generic Support Vector Machine (SVM) classifier, we
validate the substantial improvement that AL adds on the classificatory value of raw sensory signal
(Section 2). Then, we introduce a MB-like classifier to substitute for the SVM and complete the
biomimetic framework, as outlined in Fig. 1. The model MB exploits the structural organization of
the insect MB. Its plasticity is adjusted by a local Hebbian learning procedure, which is gated by a
binary learning signal (Section 3). Some concluding remarks are given in Section 4.
2
2.1
The Antennal Lobe
Insect Antennal Lobe Outline
The Antennal Lobe is a spatio-temporal encoder for ORN signals that include time in coding space.
Some of its qualitative properties are apparent from the input-output perspective, without requiring
much insight into its physiology. A direct analysis of spiking rates in raw ORN responses and in
the AL output [1] shows that in fruit fly AL maps ORN output to a low dimensional feature space
while providing lower variability in responses to the same odor type (reducing within-class scatter)
and longer average distance between responses for different odors (boosting between-class scatter).
These observations constitute sufficient evidence that a realistic AL model should be sought within
the class of nonlinear filters.
Another remarkable achievement of the AL shows itself in terms of recognition time. When subjected to a constant odor concentration, the settling time of ORN activity is on the order of hundreds
of milliseconds to seconds [3], whereas recognition is known to occur earlier [7]. This is a clear
indicator that the AL makes extensive use of the ORN transient, since instantaneous activity is less
odor-specific in transient than it is in during the steady state. To provide high accuracy under such a
temporal constraint, the classificatory information during this period must be somehow accumulated,
which means that AL has to be a dynamical system, utilizing memory.
It is the cooperation of these filtering and memory mechanisms in the AL that expedites and consolidates the decision made in the subsequent classifier.
Strong experimental evidence suggests that the insect AL representation of odors is a transient,
yet reproducible, spatio-temporal encoding [8]. The AL is a dynamical network that is formed by
the coupling of an excitatory neuron population (projection neurons, PNs) with an inhibitory one
2
(local neurons, LNs). It receives input from glomeruli, junctions of synapses that group the ORNs
according to the receptor gene they express. The fruit fly has about 50 glomeruli as chemotopic
clusters of synapses from nearly 50, 000 ORNs. There is no consensus on the functional role of this
convergence beyond serving as an input terminal to AL, which is certainly an active processing layer.
In the analogy we are building here (c.f. Fig. 1), the 16 artificial gas sensors actually correspond to
glomeruli (rather than individual ORNs) so that the AL has direct access to sensor resistances.
We suggest that the two key principles underlying the AL?s information processing are decorrelation
(filtering) and integration (memory), which can be unified on a dynamical system. The filter property
provides selectivity, while the integrator accumulates the refined information on trajectories. This
setting is capable of evaluating the transient portion of the sensory signal effectively.
An instantaneous value read from a receptor early in the transduction process is considered as immature, failing to convey a consistently high classificatory value by its own. Nevertheless, the ORN
transient as an interval indeed offers unique features to expedite the classification. In particular,
the novelty gained due to observing consecutive samples during the transient is on average greater
than the informational gain obtained during the steady-state. Hence, newly observed samples of the
receptor transients are likely to contribute to the cumulative classificatory information base formed
so far, whereas the informational entropy vanishes as the signal reaches the steady-state. As a device
that extracts and integrates odor-specific information in ORN signals, the AL provides an enriched
transient to the subsequent MB so that it can achieve accurate classification early in the odor period.
We also note that there have been efforts, e.g., [9, 10] to illustrate the sharpening effect of inhibition
in the olfactory system. However, to the best of our knowledge, the approach we present here is the
first to formulate the temporal gain due to AL processing.
2.2 The Model
The model AL is comprised of a population of PNs that project from the AL to downstream processing. The neural activity corresponding to the rate of action potential generation of the biological
neurons is given by xi (t), i = 1, 2, ..., NE , for the NE neurons in the PN population. There are also
NI interneurons or LNs whose activity is yi (t); i = 1, 2, ..., NI .
The rate of change in these activities is stimulated by a weighted sum over both populations and a
set of input signals SiE (t) and SiI (t) indicating the activity in the glomeruli stimulating the PNs and
the LNs, respectively. In addition, each population receives noise from the AL environment. Our
formulation of these ideas is through a Wilson-Cowan-like population model [11]
?
?
NI
X
dx
(t)
i
EI
E
?iE
= KiE ? ? ??
wij
yj (t) + ginp
SiE (t)? ? xi (t) + ?E
i (t), i ? 1, . . . , NE ,
dt
j=1
?
?
NE
X
dy
(t)
i
IE
I
?iI
= KiI ? ? ?
wij
xj (t) + ginp
SiI (t)? ? yi (t) + ?Ii (t), i ? 1, . . . , NI .
dt
j=1
XY
The superscripts E and I stand for excitatory and inhibitory populations. The matrix elements wij
,
X, Y ? {E, I} are time-independent weights quantifying the effect from units of type Y to units
of type X. SiX (t) is the external input to i-th unit from a glomerulus (odor sensor) weighted by
X
coupling strength ginp
. ?Yi is an additive noise process and ?(?), the unit-ramp activation function:
?(u) = 0 for u < 0, and ?(u) = u, otherwise. The gains KiE , KiI and time constants ?iE , ?iI are
fixed for an individual unit but vary across PN and LN populations.
The network topology is formed through a random process of Bernoulli type:
1 , with probability pXY
XY
wij
= gY ?
0 , with probability 1 ? pXY
where gY is a fixed coupling strength. pXY is a design parameter to be chosen by us.
Each unit, regardless of its type, accepts external input from exactly one sensor in the form of raw
resistance time series. This sensor is assigned randomly among all 16 available sensors, ensuring
that all sensors are covered1 .
1
It is assumed that NE + NI > 16.
3
4
5
x 10
TGS 2602
TGS 2600
TGS 2010
TGS 2620
x 10
10
i=1,...,75
10
12
5
xi(t)
Sensor response (?)
15
8
6
4
2
0
0
0
20
40
60
Time (s)
80
100
0
(a)
0.5
1
Time (s)
1.5
2
(b)
Figure 2: (a) A record from Dataset 1, where 100 ppm acetaldehyde was applied to the sensor array for $0 \le t \le 100$ s. Offsets are removed from the time series. Curve labels indicate the sensor types. (b) Activity of $N_E = 75$ excitatory PN units of the sample AL model in response to the (time-scaled version of the) record shown in panel (a). The conductances are selected as $(g_E, g_I) = (10^{-6}, 9 \times 10^{-6})$ and other parameters as given in the text. The bar indicates the odor period.
For the mixture identification problem of this study, we consider a network with $N_E = N_I = 75$ and $g_{inp}^E = g_{inp}^I = 10^{-2}$. The probabilities used in the generative Bernoulli process are fixed at $p^{IE} = p^{EI} = 0.5$. The synaptic conductances $g_E$ and $g_I$ are optimized for the particular classification instance through the brute-force search described below. The gains $K_i^E$, $K_j^I$ and the time scales $\tau_i^E$, $\tau_j^I$, $i = 1, \dots, N_E$, $j = 1, \dots, N_I$, are drawn independently from exponential distributions with parameters 7.5 and 0.5, respectively. Following construction, the initial condition of each unit is taken as zero and $\eta$ is taken as a white noise process with variance $10^{-4}$, independently for each unit. We perform the simulation of the 150-dimensional Wilson-Cowan dynamics by 5/6 Runge-Kutta integration with variable step size, where the error tolerance is set to $10^{-15}$.
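To make the simulation above concrete, the following is a minimal Python sketch of the AL dynamics, not the authors' code: it assumes plain Euler integration (the paper uses adaptive 5/6 Runge-Kutta with tolerance $10^{-15}$), interprets the exponential parameters 7.5 and 0.5 as the means of the gain and time-constant distributions, adds the noise term in the simplest possible way, and all function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Network sizes and couplings from the text (g_E, g_I as optimized for Dataset 1).
NE, NI = 75, 75
gE, gI = 1e-6, 9e-6
g_inp = 1e-2
pIE = pEI = 0.5

# Bernoulli topology: w^{EI} carries LN -> PN inhibition, w^{IE} PN -> LN excitation.
wEI = gI * rng.binomial(1, pEI, size=(NE, NI))
wIE = gE * rng.binomial(1, pIE, size=(NI, NE))

# Per-unit gains and time constants, exponentially distributed (means 7.5 and 0.5).
KE, KI = rng.exponential(7.5, NE), rng.exponential(7.5, NI)
tauE, tauI = rng.exponential(0.5, NE), rng.exponential(0.5, NI)

def ramp(u):
    """Unit-ramp activation Phi: zero for negative input, identity otherwise."""
    return np.maximum(u, 0.0)

def simulate(SE, SI, dt=1e-3):
    """Euler integration of the Wilson-Cowan-like AL model.

    SE: (T, NE) and SI: (T, NI) arrays of glomerular input signals."""
    x, y = np.zeros(NE), np.zeros(NI)
    X = np.empty((SE.shape[0], NE))
    for t in range(SE.shape[0]):
        etaE = 1e-2 * rng.standard_normal(NE)   # white noise, variance 1e-4
        etaI = 1e-2 * rng.standard_normal(NI)
        dx = (KE * ramp(-wEI @ y + g_inp * SE[t]) - x + etaE) / tauE
        dy = (KI * ramp( wIE @ x + g_inp * SI[t]) - y + etaI) / tauI
        x, y = x + dt * dx, y + dt * dy
        X[t] = x                                 # PN activity snapshot
    return X
```

The PN snapshots fed to the classifiers below would be rows of the returned array X.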
Although the considered network structure can accommodate limit cycles and strange attractors, the
selected range of parameters yields fixed-point behavior. We confirm this in all simulations with the
selected parameter values, both during and after the sensory input (odor) period (see Fig. 2(b)).
2.3 Validation
We consider the activity in PN population as the only piece of information regarding the input odor
that is passed on to higher-order layers of the olfactory system. Access to this activity by those
layers can be modeled as instantaneous sampling of a selected brief window of temporal behavior
of PNs [7]. Therefore, the recognition system in our model utilizes such snapshots from the spiking
activity in the excitatory population $x_i(t)$. A snapshot is passed as the feature vector to the classifier; it is an $N_E$-dimensional vector taken as a sample of the states $x_1, \dots, x_{N_E}$ at a particular time $t_s$.
2.3.1 Dataset
The model is driven by responses recorded from 16 metal-oxide gas sensors in parallel. We have
made 80 recordings and grouped them into two sets based on vapor concentration: records for
100ppm vapor in Dataset 1 and 50ppm in Dataset 2. Each dataset contains 40 records from three
classes: 10 pure acetaldehyde, 10 pure toluene, and 20 mixture records. The mixture class contains records from imbalanced acetaldehyde-toluene mixtures with 96%-4%, 98%-2%, 2%-98%,
and 4%-96% partial concentrations, five from each. Hence, we have two instances of the mixture
identification problem in the form of three-class classification. See the supplementary material for
details on measurement process.
We removed the offset from each sensor record and scaled the odor period to 1s. This was done
by mapping the odor period, which has a fixed length of 100 s in the original records, to 1 s by re-indexing the time series. These one-second-long raw time series, included in the supplementary
material, constitute the pool of raw inputs to be applied to the AL network during the time interval
0.5 ? t ? 1.5s. The input is set to zero outside of this odor period. See Fig. 2 for a sample record
and the AL network's response to it. Note that, although we apply the network to pre-recorded data
in simulations, the general scheme is causal, thus can be applied in real-time.
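A small sketch of this preprocessing step; the assumption that the offset equals each sensor's initial baseline value, and the function name, are ours:

```python
import numpy as np

def rescale_record(record, odor_samples):
    """Offset removal and odor-period rescaling, as described above.

    record: (T, 16) array of raw resistance time series whose first
    odor_samples rows span the 100 s odor period.
    Returns the re-indexed time axis (odor period mapped onto [0, 1] s)
    and the offset-free signals."""
    centered = record - record[0]                 # offsets: initial baselines
    t = np.arange(len(record)) / odor_samples     # re-indexing of time
    return t, centered
```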
[Figure 3, panels (a)-(d): success rate (%) versus snapshot time $t_s$ (s) with and without the AL network (using SVM and MB classifiers), and success rate at $t_s = 1.5$ s over the $(g_E, g_I)$ sweep, with maxima 99.7% at $(10^{-6}, 9 \times 10^{-6})$ for Dataset 1 and 95.5% at $(6 \times 10^{-6}, 9 \times 10^{-6})$ for Dataset 2; "No connectivity" marks $g_E = g_I = 0$.]
Figure 3: (a) Estimated correct classification profile versus snapshot time $t_s$ during the normalized odor period for Dataset 1. The red curve is the classification profile due to the proposed AL network, which has the fixed sample topology with $(g_E, g_I) = (10^{-6}, 9 \times 10^{-6})$. The black baseline profile is obtained by discarding the AL and directly classifying snapshots from raw sensor responses by SVM. (b) Correct classification rates extracted by a sweep through $g_E$, $g_I$ using Dataset 1. Panels (c) & (d) show the results for Dataset 2, where the best pair is determined as $(g_E, g_I) = (6 \times 10^{-6}, 9 \times 10^{-6})$.
2.3.2 Adjustment of AL Network and Performance Evaluation
To reveal the signal conditioning performance of the stand-alone AL model, we first interface it
with an established classifier. We use a Support Vector Machine (SVM) classifier with linear kernel
to map the snapshots from PN activity to odor identity. This choice is due to the parameter-free
design that rules out the possibility of over-fitting. The classifier is realized by the publicly available
software LibSVM [2].
Due to the wide diversity of PNs and LNs in terms of their time scales $\tau$ and gains $K$, the performance of the network is highly sensitive to the agreement between the outcome of the generative process and the choice of parameters $g_E$ and $g_I$. Therefore, it is not possible to give a one-size-fits-all value for these. Instead, we have generated one sample network topology via the Bernoulli
process described above and customized gE and gI for it on each problem. For reproducibility, this
topology is provided in the supplementary material. Comparable results can be obtained with other
topologies but possibly with different gE , gI values than the ones reported below.
The validation is carried out in the following way: First we set the classification problem (i.e.,
select Dataset 1 or 2) and fix $g_E = g_I = 0$ (suppressing the connectivity). We present each record
in the dataset to the network and then log the network response from excitatory population in the
form of $N_E$ simultaneous time series (see Fig. 2). Then, at each percentile of the odor period $t_s \in \{0.5 + k/100\}_{k=0}^{100}$, we take a snapshot from each $N_E$-dimensional time series and label it
by the odor identity (pure acetaldehyde, pure toluene, or mixture). We use randomly selected 80%
of the resulting data in training the SVM classifier and keep the remaining 20% for testing it. We
record the rate of correct classification on the test data. The train-test stage is repeated 1000 times
with different random splits of labelled data. The average correct classification rate is assigned as
the performance of the AL model at that $t_s$. The classification profile versus time is extracted when the $t_s$ sweep through the odor period is complete.
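The evaluation loop just described can be sketched as follows, using scikit-learn's SVC (which wraps the LibSVM library used in the paper); array layouts and names are our assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def success_rate(snapshots, labels, n_repeats=1000, seed=0):
    """Mean correct-classification rate at one snapshot time t_s.

    snapshots: (n_records, NE) array of PN-activity snapshots;
    labels: odor identities (acetaldehyde / toluene / mixture)."""
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_repeats):                      # 1000 random 80/20 splits
        Xtr, Xte, ytr, yte = train_test_split(
            snapshots, labels, test_size=0.2,
            random_state=int(rng.integers(2**31)))
        clf = SVC(kernel="linear")                  # linear kernel, as in the text
        scores.append(clf.fit(Xtr, ytr).score(Xte, yte))
    return float(np.mean(scores))
```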
To maximize the performance over the conductances $g_E$ and $g_I$, we further perform a sweep through a range of these parameters by repeating the above procedure for each combination of $g_E$, $g_I$. Figure 3(a) shows the classification profile for the best pair encountered along the parameter sweep $g_E, g_I \in \{k/100\}_{k=0}^{100}$. This pair is determined as the one maximizing the classification success rate when samples from the end of the odor period are used ($t_s = 1.5$). Note that these optimum values are
problem-specific. For the two instances considered in this work, we mark them by the peaks of the
surfaces in Fig. 3 (b) and (d).
Dataset 1 induces an easier instance of the identification problem toward the end of odor period,
which can be resolved reasonably well using raw sensor data at the steady state. Therefore, the gain
over baseline due to AL processing is not so significant in later portions of the odor period for Dataset
1. Also observe from panels (b) and (d) that, when dealing with Dataset 1, the conductance values are
less decisive than they are for Dataset 2. Again, this is because the former is an easier problem when
the sensors reach the steady state at $t_s = 1.5$ s, where almost every conductance pair within the swept range ensures > 95% performance. The relative difficulty of the problem in Dataset 2 manifests itself as the fluctuations in the baseline performance. We see in Fig. 3(c) that there are actually intervals early in the odor period where the raw sensor data can be fairly indicative of the class information; however, it
is not possible to predict these intervals in advance. It should also be noted that some of these peaks
in baseline performance, at least the very first one near $t_s = 0.55$ s, are artifacts (due to classification
of pure noise) since we know that there is hardly any vapor in the measurement chamber during that
period (see Fig. 2(a) and other records in supplementary material). In any case, in both problems,
the suggested AL dynamics (with adjusted parameters) contributes substantially to the classification
performance during the transient of the sensory signal. This makes early decisions of the classifier
substantially and consistently more accurate with respect to the baseline classification.
Having established the contribution of the AL network to classification, our goal in the remainder
of the paper is to replace the unbiased SVM classifier by a biologically plausible MB model, while
preserving the performance gain seen in Fig. 3.
3 Mushroom Body Classifier
The MBs of insects employ a large number of identical small intrinsic cells, the so-called Kenyon
cells, and fewer output neurons in the MB lobes. It has been observed that, unlike in the AL, the
activity in the KCs is very sparse, both across the population and for individual cells over time. Theoretical work suggests that a large number of cells with sparse activity enables efficient classification
with random connectivity [4]. The power of this architecture lies in its versatility: The connectivity
is not optimized for any specific task and can, therefore, accommodate a variety of input types.
3.1 The Model
The insect MB consists of four crucial elements (see Fig. 4): i) a nonlinear expansion from the
AL representation at the final stage, x, that resembles the connectivity from the Antennal Lobe to
the MBs, ii) a gain control in the MB to achieve a uniform level of sparse activity in the KCs, y, iii)
a classification phase, where the connections from the KCs to the output neurons, z, are modified
according to a Hebbian learning rule, and iv) a learning signal that determines when and which
output neuron's synapses are reinforced.
It has been shown in locusts that the activity patterns in the AL are practically discretized by a
periodic feedforward inhibition onto the MB calyces and that the activity levels in KCs are very
low [7]. Based on the observed discrete and sparse activity pattern in insect MB, we choose to
represent KC units as simple algebraic McCulloch-Pitts "neurons." The neural activity values taken by this neural model are binary (0 = no spike and 1 = spike): $\phi_j = \Phi\big(\sum_{i=1}^{N_E} c_{ji}\, x_i - \theta_{KC}\big)$, $j = 1, 2, \dots, N_{KC}$. The vector x is the representation of the odor that is received as a snapshot from the
excitatory PN units of the AL model. The components of the vector $x = (x_1, x_2, \dots, x_{N_E})$ are the direct values obtained by integration of the ODE of the AL model described above. The KC layer vector $\phi$ is $N_{KC}$-dimensional. $c_{ij} \in \{0, 1\}$ are the components of the connectivity matrix, which is $N_E \times N_{KC}$ in size. The firing threshold $\theta_{KC}$ is an integer and $\Phi(\cdot)$ is the Heaviside function.
The connectivity matrix $[c_{ji}]$ is determined randomly by an independent Bernoulli process. Since
the degree of connectivity from the input neurons to the KC neurons did not appear to be critical for
the performance of the system, we made it uniform by setting the connection probability as $p_c = 0.1$.
It, nevertheless, seems advisable to ensure in the construction that the input-to-KC layer mapping is
bijective to avoid loss of information. We performed this check during network construction. All
other parameters of the KC layer are then assigned admissible values uniformly randomly and fixed.
[Figure 4 diagram: Antennal Lobe PNs $x$ project through the calyx connectivity $[c_{ji}]$ to the KC layer $\phi$, and through $[w_{lj}]$ to the output nodes $z$, labelled Pure Toluene, Pure Acetaldehyde, and Mixture (Acetaldehyde + Toluene).]
Figure 4: Suggested MB model for
classifying the AL output. The first layer
of connections from AL to calyx is set
randomly and fixed. The plasticity of the
output layer is due to a binary learning
signal that rewards the weights of output
units responding to the correct stimulus.
Although the basic system described so far implements the divergent (and static) input layer observed in insect calyx, it is very unstable against fluctuations in the total number of active input
neurons due to the divergence of connectivity. This is an obstacle for inducing sparse activity at KC
level. One mechanism suggested to remove this instability is gain control by feedforward inhibition.
For our purposes, we impose a number $n_{KC}$ of simultaneously active KCs, and admit the firing of only the top $n_{KC} = N_{KC}/5$ neurons that receive the most excitation in the first layer.
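A minimal sketch of the KC expansion with this gain control; implementing the feedforward inhibition as an explicit top-$n_{KC}$ selection is our reading of the text:

```python
import numpy as np

rng = np.random.default_rng(1)
NE, NKC = 75, 10000
pc = 0.1
C = rng.binomial(1, pc, size=(NKC, NE))      # fixed random connectivity [c_ji]

def kc_response(x, n_active=NKC // 5):
    """Binary KC activity for one AL snapshot x: only the n_active most
    excited cells fire, a top-k stand-in for feedforward-inhibition gain
    control that fixes the number of simultaneously active KCs."""
    drive = C @ x                              # summed PN input per KC
    phi = np.zeros(NKC, dtype=int)
    phi[np.argsort(drive)[-n_active:]] = 1
    return phi
```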
The fan-in stage of projections from the KCs to the extrinsic MB cells in the MB lobes is the
hypothesized locus of learning.
In our model, the output units in the MB lobes are again McCulloch-Pitts neurons: $z_l = \Phi\big(\sum_{j=1}^{N_{KC}} w_{lj}\, \phi_j - \theta_{LB}\big)$, $l = 1, 2, \dots, N_{LB}$. Here, the index LB denotes the MB lobes. The output vector $z$ of the MB lobes has dimension $N_{LB}$ (equal to 3 in our problem) and $\theta_{LB}$ is the threshold for the decision neurons in the MB lobes. The $N_{LB} \times N_{KC}$ connectivity matrix $[w_{lj}]$ has integer entries. Similar to the above-mentioned gain control, we allow only the decision
neuron that receives the highest synaptic input to fire. These synaptic strengths $w_{lj}$ are subject to
changes during learning according to a Hebbian type plasticity rule described next.
3.2 Training
The hypothesis of locating reinforcement learning in mushroom bodies goes back to Montague and
collaborators [6]. Every odor class is associated with an output neuron of the MB, so there are three
output nodes firing for either pure toluene, pure acetaldehyde, or mixture type of input. The plasticity rule is applied on the connectivity matrix W , whose entries are randomly and independently
initialized within [0, 10]. The exact initial distribution of weights have no significant impact on the
resulting performance nor on the learning speed.
During learning, the inputs are presented to the system in an arbitrary order. The entries of the
connectivity matrix at the time of the $n$-th input are denoted by $w_{lj}(n)$. When the next training input with label $\kappa$ is applied, the weight $w_{\kappa j}$ is updated by the rule $w_{\kappa j}(n+1) = H(z_\kappa, \phi_j, w_{\kappa j}(n))$, where $H(z, \phi, w) = w + 1$ when $z = 1$ and $\phi = 1$, and $H(z, \phi, w) = w$ otherwise. This learning rule strengthens a synaptic connection with probability $p_+$ if presynaptic activity is accompanied by postsynaptic activity. To facilitate learning during the training phase, the "correct" output neuron $\kappa$ is forced to fire for an input with label $\kappa$, while the rest are kept silent. This is provided by pulling down the threshold $\theta_{LB}$, unless neuron $\kappa$ is already firing for such input. Learning is terminated when the
performance (correct classification rate) converges.
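A sketch of the output-layer training under our reconstruction of the rule H above (the "otherwise" branch leaves the weight unchanged); the probabilistic strengthening generalizes the $p_+ = 1$ case used in the experiments, and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(2)
NLB, NKC = 3, 10000
W = rng.uniform(0, 10, size=(NLB, NKC))      # random initial weights in [0, 10]

def predict(phi):
    """Winner-take-all decision: only the most excited MB-lobe output fires."""
    return int(np.argmax(W @ phi))

def train_step(phi, label, p_plus=1.0):
    """One Hebbian update: the 'correct' output neuron (row `label`) is forced
    to fire, and each synapse from an active KC is strengthened by +1 with
    probability p_plus; all other weights are left unchanged."""
    active = phi == 1
    W[label, active] += rng.random(int(active.sum())) < p_plus
```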
3.3 Validation
Using Dataset 2, we applied the proposed MB model with $N_{KC} = 10{,}000$ KCs at the output of the sample AL topology having the same parameters reported in Section 2. For $p_+ = p_- = 1$, we trained the output layer of the MB using the labelled AL outputs sampled at 10 points in the odor period. The mean correct classification rate over 20 splits of the labelled snapshots (five-fold cross-validation) is shown in Fig. 3(c) as blue dots. With respect to the red curve on the same panel, which was obtained by the (maximum-margin) SVM classifier, a slight reduction in the generalization
capability is visible. Nevertheless, the MB classifier in its current form still exploits the superior job of the AL over baseline classification during the transient, while mimicking two essential features of the biological MB, namely sparsity in the KC layer and incremental local learning in the MB lobes. The
implementation details and parameters of the MB model are provided in the supplementary material.
4 Conclusions
We have presented a complete odor identification scheme based on the key principles of insect
olfaction, and demonstrated its validity in discriminating mixtures of odors from pure odors using
actual records from metal-oxide gas sensors.
The bulk of the observed performance is due to the AL, which is a dynamical feature extractor for
slow and noisy chemo-sensory time series. The cooperation of the integration (accumulation) mechanism and the sharpening filter enabled by inhibition leaves an almost linearly separable problem for the subsequent classifier. The proposed signal conditioning scheme can be considered as a mathematical image of reservoir computing [5]. For this simplified classification task, we have also suggested a
bio-inspired MB classifier with local Hebbian plasticity. By exploiting the dynamical nature of the
AL stage and the sparsity in MB representation, the overall model provides an explanation for the
high speed and accuracy of odor identification in insect olfactory processing.
For future study, we envision an improvement of the MB classification performance, which was found here to be slightly worse than that of the linear SVM. We think that this can be done without compromising biological plausibility, by imposing mild constraints on the KC-level generative process.
The mixture identification problem investigated here is in general more difficult than the traditional
problem of discriminating pure odors, since the mixture class can be made arbitrarily close to the
pure odor classes. The classification performance attained here is promising for other mixture-related problems that are among the hardest in the field of artificial olfaction.
Acknowledgments
This work was supported by the MURI grant ONR N00014-07-1-0741.
References
[1] V. Bhandawat, S. R. Olsen, N. W. Gouwens, M. L. Schlief, and R. I. Wilson. Sensory processing in the Drosophila antennal lobe increases reliability and separability of ensemble odor representations. Nature Neuroscience, 10:1474–1482, 2007.
[2] C.-C. Chang and C.-J. Lin. LibSVM: a library for support vector machines, v2.85, 2007.
[3] M. de Bruyne, P. J. Clyne, and J. R. Carlson. Odor coding in a model olfactory organ: the Drosophila maxillary palp. Journal of Neuroscience, 19(11):4520–4532, 1999.
[4] R. Huerta, T. Nowotny, M. Garcia-Sanchez, H. D. I. Abarbanel, and M. I. Rabinovich. Learning classification in the olfactory system of insects. Neural Computation, 16:1601–1640, 2004.
[5] W. Maass, T. Natschlaeger, and H. Markram. Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation, 14:2531–2560, 2002.
[6] P. R. Montague, P. Dayan, C. Person, and T. J. Sejnowski. Bee foraging in uncertain environments using predictive Hebbian learning. Nature, 377:725–728, 1995.
[7] J. Perez-Orive, O. Mazor, G. C. Turner, S. Cassenaer, R. I. Wilson, and G. Laurent. Oscillations and sparsening of odor representations in the mushroom body. Science, 297:359–365, 2002.
[8] M. I. Rabinovich, R. Huerta, and G. Laurent. Transient dynamics for neural processing. Science, 321:48–50, 2008.
[9] B. Raman and R. Gutierrez-Osuna. Chemosensory processing in a spiking model of the olfactory bulb: chemotopic convergence and center surround inhibition. In L. K. Saul, Y. Weiss, and L. Bottou, editors, NIPS 17, pages 1105–1112. MIT Press, Cambridge, MA, 2005.
[10] M. Schmuker and G. Schneider. Processing and classification of chemical data inspired by insect olfaction. Proc. Natl. Acad. Sci., 104:20285–20289, 2007.
[11] H. R. Wilson and J. D. Cowan. A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13:55–80, 1973.
Za??d Harchaoui
LTCI, TELECOM ParisTech and CNRS
46, rue Barrault, 75634 Paris cedex 13, France
[email protected]
Francis Bach
Willow Project, INRIA-ENS
45, rue d?Ulm, 75230 Paris, France
[email protected]
?
Eric
Moulines
LTCI, TELECOM ParisTech and CNRS
46, rue Barrault, 75634 Paris cedex 13, France
[email protected]
Abstract
We introduce a kernel-based method for change-point analysis within a sequence
of temporal observations. Change-point analysis of an unlabelled sample of observations consists in, first, testing whether a change in the distribution occurs within
the sample, and second, if a change occurs, estimating the change-point instant
after which the distribution of the observations switches from one distribution to
another different distribution. We propose a test statistic based upon the maximum
kernel Fisher discriminant ratio as a measure of homogeneity between segments.
We derive its limiting distribution under the null hypothesis (no change occurs),
and establish the consistency under the alternative hypothesis (a change occurs).
This allows us to build a statistical hypothesis testing procedure for testing the presence of a change-point, with a prescribed false-alarm probability and detection
probability tending to one in the large-sample setting. If a change actually occurs,
the test statistic also yields an estimator of the change-point location. Promising
experimental results in temporal segmentation of mental tasks from BCI data and
pop song indexation are presented.
1 Introduction
The need to partition a sequence of observations into several homogeneous segments arises in many
applications, ranging from speaker segmentation to pop song indexation. So far, such tasks were
most often dealt with using probabilistic sequence models, such as hidden Markov models [1], or
their discriminative counterparts such as conditional random fields [2]. These probabilistic models
require a sound knowledge of the transition structure between the segments and demand careful
training beforehand to yield competitive performance; when data are acquired online, inference in
such models is also not straightforward (see, e.g., [3, Chap. 8]). Such models essentially perform
multiple change-point estimation, while one is often also interested in meaningful quantitative measures for the detection of a change-point within a sample.
When a parametric model is available to model the distributions before and after the change, a comprehensive literature for change-point analysis has been developed, which provides optimal criteria
from the maximum likelihood framework, as described in [4]. Nonparametric procedures were also
proposed, as reviewed in [5], but were limited to univariate data and simple settings. Online counterparts have also been proposed and mostly built upon the cumulative sum scheme (see [6] for
extensive references). However, so far, even extensions to the case where the distribution before the
change is known and the distribution after the change is not known remain an open problem [7].
This brings to light the need to develop statistically grounded change-point analysis algorithms,
working on multivariate, high-dimensional, and also structured data.
We propose here a regularized kernel-based test statistic, which allows us to provide quantitative answers to both questions simultaneously: 1) is there a change-point within the sample? 2) if there is one, then where is it? We prove that our test statistic for change-point analysis has a false-alarm probability tending to $\alpha$ and a detection probability tending to one as the number of observations tends
to infinity. Moreover, the test statistic directly provides an accurate estimate of the change-point
instant. Our method readily extends to multiple change-point settings, by performing a sequence of
change-point analysis in sliding windows running along the signal. Usually, physical considerations
allow to set the window-length to a sufficiently small length for being guaranteed that at most one
change-point occurs within each window, and sufficiently large length for our decision rule to be
statistically significant (typically n > 50).
In Section 2, we set up the framework of change-point analysis, and in Section 3, we describe how
to devise a regularized kernel-based approach to the change-point problem. Then, in Section 4
and in Section 5, we respectively derive the limiting distribution of our test statistic under the null
hypothesis $H_0$: "no change occurs", and establish the consistency in power under the alternative $H_A$: "a change occurs". These theoretical results allow us to build a test statistic which provably has a false-alarm probability tending to a prescribed level $\alpha$, and a detection probability tending to one, as
the number of observations tends to infinity. Finally, in Section 7, we display the performance of our
algorithm for respectively, segmentation into mental tasks from BCI data and temporal segmentation
of pop songs.
2 Change-point analysis
In this section, we outline the change-point problem, and describe formally a strategy for building
change-point analysis test statistics.
Change-point problem
Let $X_1, \dots, X_n$ be a time series of independent random variables. The change-point analysis of the sample $\{X_1, \dots, X_n\}$ consists in the following two steps.
1) Decide between
$$H_0: \ P_{X_1} = \cdots = P_{X_k} = \cdots = P_{X_n}$$
$$H_A: \ \text{there exists } 1 < k^\star < n \text{ such that } P_{X_1} = \cdots = P_{X_{k^\star}} \neq P_{X_{k^\star+1}} = \cdots = P_{X_n}\,. \qquad (1)$$
2) Estimate $k^\star$ from the sample $\{X_1, \dots, X_n\}$ if $H_A$ is true.
While sharing many similarities with usual machine learning problems, the change-point problem is
different.
Statistical hypothesis testing   An important aspect of the above formulation of the change-point problem is its natural embedding in a statistical hypothesis testing framework. Let us briefly recall the main concepts of statistical hypothesis testing, in order to rephrase them within
the change-point problem framework (see, e.g., [8]). The goal is to build a decision rule to
answer question 1) in the change-point problem stated above. Set a false-alarm probability $\alpha$ with $0 < \alpha < 1$ (also called level or Type I error), whose purpose is to theoretically guarantee that $P(\text{decide } H_A \text{ when } H_0 \text{ is true})$ is close to $\alpha$. Now, if there actually is a change-point within the sample, one would like not to miss it; that is, the detection probability $\pi = P(\text{decide } H_A \text{ when } H_A \text{ is true})$, also called power and equal to one minus the Type II error, should be close to one. The purpose of Sections 4-5 is to give theoretical guarantees to those
practical requirements in the large-sample setting, that is when the number of observations n tends
to infinity.
Running maximum partition strategy An efficient strategy for building change-point analysis
procedures is to select the partition of the sample which yields a maximum heterogeneity between
the two segments: given a sample $\{X_1, \dots, X_n\}$ and a candidate change-point k with $1 < k < n$, assume we may compute a measure of heterogeneity $\Delta_{n,k}$ between the segments $\{X_1, \dots, X_k\}$ on the one hand and $\{X_{k+1}, \dots, X_n\}$ on the other hand. Then, the "running maximum partition strategy" consists in using $\max_{1<k<n} \Delta_{n,k}$ as a building block for change-point analysis (cf. Figure 1). Not only may $\max_{1<k<n} \Delta_{n,k}$ be used to test for the presence of a change-point and assess/discard
[Figure 1 schematic: the sample indices $1, \dots, n$ with the true change-point $k^\star$ and a running candidate k; $P^{(\ell)}$ denotes the distribution on the left segment and $P^{(r)}$ the distribution on the right segment.]
Figure 1: The running maximum strategy for change-point analysis. The test statistic for change-point analysis runs a candidate change-point k with $1 < k < n$ along the sequence of observations, hoping to catch the true change-point $k^\star$.
the overall homogeneity of the sample; besides, $\hat{k} = \operatorname{argmax}_{1<k<n} \Delta_{n,k}$ provides a sensible estimator of the true change-point instant $k^\star$ [5].
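As an illustration, the running maximum partition strategy reduces to a few lines once a heterogeneity measure is available; the function below is a generic sketch (names are ours), with any $\Delta_{n,k}$, such as the studentized KFDR of Section 3, plugged in as heterogeneity:

```python
import numpy as np

def change_point_scan(X, heterogeneity, a, b):
    """Running maximum partition strategy: scan candidate split points k in
    (a, b), score each split with a heterogeneity measure Delta_{n,k}, and
    return both the maximum score (for testing) and its argmax (the
    change-point estimate)."""
    scores = {k: heterogeneity(X[:k], X[k:]) for k in range(a + 1, b)}
    k_hat = max(scores, key=scores.get)
    return scores[k_hat], k_hat
```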
3 Kernel Change-point Analysis
In this section, we describe how the kernel Fisher discriminant ratio, which has proven relevant for
measuring the homogeneity of two samples in [9], may be embedded into the running maximum partition strategy to provide a powerful test statistic, coined KCpA for Kernel Change-point Analysis,
for addressing the change-point problem. This is described in the operator-theoretic framework,
developed for the statistical analysis of kernel-based learning and testing algorithms in [10, 11].
Reproducing kernel Hilbert space
Let $(\mathcal{X}, d)$ be a separable measurable metric space. Let X be an $\mathcal{X}$-valued random variable, with probability measure P; the expectation with respect to P is denoted by $E[\cdot]$ and the covariance by $\mathrm{Cov}(\cdot, \cdot)$. Consider a reproducing kernel Hilbert space (RKHS) $(\mathcal{H}, \langle \cdot, \cdot \rangle_{\mathcal{H}})$ of functions from $\mathcal{X}$ to $\mathbb{R}$. To each point $x \in \mathcal{X}$, there corresponds an element $\Phi(x) \in \mathcal{H}$ such that $\langle \Phi(x), f \rangle_{\mathcal{H}} = f(x)$ for all $f \in \mathcal{H}$, and $\langle \Phi(x), \Phi(y) \rangle_{\mathcal{H}} = k(x, y)$, where $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is a positive definite kernel [12]. In the following, we exclusively work with the Aronszajn map, that is, we take $\Phi(x) = k(x, \cdot)$ for all $x \in \mathcal{X}$. It is assumed from now on that $\mathcal{H}$ is a separable Hilbert space. Note that this is always the case if $\mathcal{X}$ is a separable metric space and if the kernel is continuous [13]. We make the following two assumptions on the kernel (which are satisfied in particular for the Gaussian kernel; see [14]): (A1) the kernel k is bounded, that is, $\sup_{(x,y) \in \mathcal{X} \times \mathcal{X}} k(x, y) < \infty$; (A2) for all probability distributions P on $\mathcal{X}$, the RKHS associated with $k(\cdot, \cdot)$ is dense in $L_2(P)$.
Kernel Fisher Discriminant Ratio
Consider a sequence of independent observations $X_1, \dots, X_n \in \mathcal{X}$. For any $[i, j] \subset \{2, \dots, n-1\}$, define the corresponding empirical mean elements and covariance operators as follows:
$$\hat{\mu}_{i:j} := \frac{1}{j - i + 1} \sum_{\ell=i}^{j} k(X_\ell, \cdot)\,,$$
$$\hat{\Sigma}_{i:j} := \frac{1}{j - i + 1} \sum_{\ell=i}^{j} \{k(X_\ell, \cdot) - \hat{\mu}_{i:j}\} \otimes \{k(X_\ell, \cdot) - \hat{\mu}_{i:j}\}\,.$$
These quantities have obvious population counterparts, the population mean element and the population covariance operator, defined for any probability measure P as $\langle \mu_P, f \rangle_{\mathcal{H}} := E[f(X)]$ for all $f \in \mathcal{H}$, and $\langle f, \Sigma_P g \rangle_{\mathcal{H}} := \mathrm{Cov}_P[f(X), g(X)]$ for $f, g \in \mathcal{H}$. For all $k \in \{2, \dots, n-1\}$, the (maximum) kernel Fisher discriminant ratio, which we abbreviate as KFDR, is defined as
$$\mathrm{KFDR}_{n,k;\gamma}(X_1, \dots, X_n) := \frac{k(n-k)}{n} \left\| \left( \frac{k}{n} \hat{\Sigma}_{1:k} + \frac{n-k}{n} \hat{\Sigma}_{k+1:n} + \gamma I \right)^{-1/2} (\hat{\mu}_{k+1:n} - \hat{\mu}_{1:k}) \right\|_{\mathcal{H}}^2 .$$
Note that, if we merge two labelled samples $\{X_1, \dots, X_{n_1}\}$ and $\{X'_1, \dots, X'_{n_2}\}$ into a single sample $\{X_1, \dots, X_{n_1}, X'_1, \dots, X'_{n_2}\}$, then with $\mathrm{KFDR}_{n_1+n_2,\, n_1+1;\gamma}(X_1, \dots, X_{n_1}, X'_1, \dots, X'_{n_2})$ we recover the test statistic considered in [9] for testing the homogeneity of the two samples $\{X_1, \dots, X_{n_1}\}$ and $\{X'_1, \dots, X'_{n_2}\}$.
Following [9], we make the following assumptions on all the covariance operators $\Sigma$ considered in this paper: (B1) the eigenvalues $\{\lambda_p(\Sigma)\}_{p \geq 1}$ satisfy $\sum_{p=1}^{\infty} \lambda_p^{1/2}(\Sigma) < \infty$; (B2) there are infinitely many strictly positive eigenvalues $\{\lambda_p(\Sigma)\}_{p \geq 1}$ of $\Sigma$.
Kernel change-point analysis
Now, we may apply the strategy described before (cf. Figure 1) to obtain the main building block of our test statistic for change-point analysis. Indeed, we define our test statistic $T_{n;\gamma}(k)$ as
$$T_{n;\gamma}(k) := \frac{\mathrm{KFDR}_{n,k;\gamma} - d_{1,n,k;\gamma}(\hat{\Sigma}^W_{n,k})}{\sqrt{2\, d_{2,n,k;\gamma}(\hat{\Sigma}^W_{n,k})}}\,, \qquad a_n < k < b_n\,,$$
where $n \hat{\Sigma}^W_{n,k} := k \hat{\Sigma}_{1:k} + (n-k) \hat{\Sigma}_{k+1:n}$. The quantities $d_{1,n,k;\gamma}(\hat{\Sigma}^W_{n,k})$ and $d_{2,n,k;\gamma}(\hat{\Sigma}^W_{n,k})$, defined respectively as
$$d_{1,n,k;\gamma}(\hat{\Sigma}^W_{n,k}) := \mathrm{Tr}\{(\hat{\Sigma}^W_{n,k} + \gamma I)^{-1} \hat{\Sigma}^W_{n,k}\}\,, \qquad d_{2,n,k;\gamma}(\hat{\Sigma}^W_{n,k}) := \mathrm{Tr}\{(\hat{\Sigma}^W_{n,k} + \gamma I)^{-2} (\hat{\Sigma}^W_{n,k})^2\}\,,$$
act as normalizing constants for $T_{n;\gamma}(k)$ to have zero mean and unit variance as n tends to infinity, a standard statistical transformation known as studentization. The maximum of $T_{n;\gamma}(k)$ is searched within the interval $[a_n, b_n]$ with $a_n > 1$ and $b_n < n$, which is a restriction of $]1, n[$, in order to prevent the test statistic from uncontrolled behaviour in the neighborhood of the interval boundaries, which is standard practice in this setting [15].
Remark
Note that, if the input space is Euclidean, for instance $\mathcal{X} = \mathbb{R}^d$, and if the kernel is linear, $k(x, y) = x^\top y$, then $T_{n;\gamma}(k)$ may be interpreted as a regularized version of the classical maximum-likelihood multivariate test statistic used to test a change in mean with unequal covariances, under the assumption of normal observations, described in [4, Chap. 3]. Yet, as the next section shall show, our test statistic is truly nonparametric, and its large-sample properties do not require any "Gaussian in the feature space"-type assumption. Moreover, in practice it may be computed thanks to the
kernel trick, adapted to the kernel Fisher discriminant analysis and outlined in [16, Chapter 6].
False-alarm and detection probability
In order to build a principled testing procedure, a proper
theoretical analysis from a statistical point of view is necessary. First, as the next section shows, for a
prescribed ?, we may build a procedure which has, as n tends to infinity, the false-alarm probability
? under the null hypothesis H0 , that is when the sample is completely homogeneous and contains
no-change-point. Besides, when the sample actually contains at most one change-point, we prove
that our test statistic is able to catch it with probability one as n tends to infinity.
Large-sample setting
For the sake of generality, we describe here the large-sample setting for
the change-point problem under the alternative hypothesis HA . Essentially, it corresponds to normalizing the signal sampling interval to [0, 1] and letting the resolution increase as we observe more
data points [4].
Assume there is $0 < k^\star < n$ such that $P_{X_1} = \cdots = P_{X_{k^\star}} \neq P_{X_{k^\star+1}} = \cdots = P_{X_n}$. Define $\tau^\star := k^\star / n$, such that $\tau^\star \in\, ]0, 1[$, and define $P^{(\ell)}$ as the probability distribution prevailing within the left segment of length $\tau^\star$, and $P^{(r)}$ as the probability distribution prevailing within the right segment of length $1 - \tau^\star$. Then, we want to study what happens if we have $\lfloor n \tau^\star \rfloor$ observations from $P^{(\ell)}$ (before the change) and $\lfloor n(1 - \tau^\star) \rfloor$ observations from $P^{(r)}$ (after the change), where n is large and $\tau^\star$ is kept fixed.
4 Limiting distribution under the null hypothesis
Throughout this section, we work under the null hypothesis $H_0$, that is, $P_{X_1} = \cdots = P_{X_k} = \cdots = P_{X_n}$ for all $2 \leq k \leq n$. The first result gives the limiting distribution of $T_{n;\gamma}(k)$ as the number of observations n tends to infinity.
Before stating the theoretical results, let us describe informally the crux of our approach. We may prove, under $H_0$, using operator-theoretic perturbation results similar to [9], that it is sufficient to study the large-sample behaviour of $\tilde{T}_{n;\gamma}(k) := \max_{a_n<k<b_n} \big(\sqrt{2\, d_{2;\gamma}(\Sigma)}\big)^{-1} Q_{n,\infty;\gamma}(k)$, where
$$Q_{n,\infty;\gamma}(k) := \frac{k(n-k)}{n} \left\| (\Sigma + \gamma I)^{-1/2} (\hat{\mu}_{k+1:n} - \hat{\mu}_{1:k}) \right\|_{\mathcal{H}}^2 - d_{1;\gamma}(\Sigma)\,, \quad 1 < k < n\,, \qquad (2)$$
and $d_{1;\gamma}(\Sigma)$ and $d_{2;\gamma}(\Sigma)$ are respectively the population recentering and rescaling quantities, with $\Sigma = \Sigma_{1:n} = \Sigma^W_{1:n}$ the within-class covariance operator. Note that the only remaining stochastic term in (2) is $\hat{\mu}_{k+1:n} - \hat{\mu}_{1:k}$. Let us expand (2) onto the eigenbasis $\{\lambda_p, e_p\}_{p \geq 1}$ of the covariance operator $\Sigma$, as follows:
$$Q_{n,\infty;\gamma}(k) = \sum_{p=1}^{\infty} (\lambda_p + \gamma)^{-1} \left\{ \frac{k(n-k)}{n} \langle \hat{\mu}_{k+1:n} - \hat{\mu}_{1:k},\, e_p \rangle^2 - \lambda_p \right\}\,, \quad 1 < k < n\,. \qquad (3)$$
Then, defining $S_{1:k,p} := n^{-1/2} \sum_{i=1}^{k} \lambda_p^{-1/2} \big(e_p(X_i) - E[e_p(X_1)]\big)$, we may rewrite $Q_{n,\infty;\gamma}(k)$ as an infinite-dimensional quadratic form in the tied-down partial sums $S_{1:k,p} - \frac{k}{n} S_{1:n,p}$, which yields
$$Q_{n,\infty;\gamma}(k) = \sum_{p=1}^{\infty} (\lambda_p + \gamma)^{-1} \lambda_p \left\{ \frac{n^2}{k(n-k)} \left( S_{1:k,p} - \frac{k}{n} S_{1:n,p} \right)^2 - 1 \right\}\,, \quad 1 < k < n\,. \qquad (4)$$
The idea is to view $\{Q_{n,\infty;\gamma}(k)\}_{1<k<n}$ as a stochastic process, that is, a random function $[k \mapsto Q_{n,\infty;\gamma}(k, \omega)]$ for any $\omega \in \Omega$, where $(\Omega, \mathcal{F}, P)$ is a probability space. Then, invoking the so-called invariance principle in distribution [17], we realize that the random sum $S_{1:\lfloor nt \rfloor, p}(\omega)$, which for all $\omega$ linearly interpolates between the values $S_{1:i,p}(\omega)$ at points $i/n$ for $i = 1, \dots, n$, behaves, asymptotically as n tends to infinity, like a Brownian motion (also called a Wiener process) $\{W_p(t)\}_{0<t<1}$. Hence, along each component $e_p$, we may define a Brownian bridge $\{B_p(t)\}_{0<t<1}$, that is, a tied-down Brownian motion $B_p(t) := W_p(t) - t W_p(1)$, which yields a continuous approximation in distribution of the corresponding $\{S_{1:k,p} - \frac{k}{n} S_{1:n,p}\}_{1<k<n}$. The proof (omitted due to space limitations) consists in deriving a functional (noncentral) limit theorem for $\mathrm{KFDR}_{n,k;\gamma}$, and
Proposition 1 Assume (A1) and (B1), and that H0 holds, that is PXi = P for all 1 ? i ? n.
Assume in addition that the regularization parameter ? is held fixed as n tends to infinity, and that
an /n ? u > 0 and bn /n ? v < 1 as n tends to infinity. Then,
!
?
X
B2p (t)
?p (?)
1
D
Tn;? (k) ?? sup Q?;? (t) := ?
?1 ,
u<t<v
2d2;? (?) p=1 ?p (?) + ? t(1 ? t)
where {?p (?)}p?1 is the sequence of eigenvalues of the overall covariance operator ?, while
{Bp (t)}p?1 is a sequence of independent brownian bridges.
Define t1?? as the (1 ? ?)-quantile of supu<t<v Q?;? (t). We may compute t1?? either by MonteCarlo simulations, as described in [18], or by bootstrap resampling under the null hypothesis (see).
The next result proves that, using the limiting distribution under the null stated above, we may build
a test statistic with prescribed false-alarm probability ? for large n.
Corollary 2 The test maxan <k<bn Tn,? (k) ? t1?? (?, ?) has false-alarm probability ?, as n tends
to infinity.
Besides, when the sequence of regularization parameters {?n }n?1 decreases to zero slowly enough
(in particular slower than n?1/2 ), the test statistic maxan <k<bn Tn,?n (k) turns out to be asymptotically kernel-independent as n tends to infinity. While the proof hinges upon martingale functional
limit theorems [17], still, we may point out that if we replace ? by ?n in the limiting null distribution,
then Q?;? (?) is correctly normalized for all n ? 1 to have zero-mean and variance one.
Proposition 3 Assume (A1) and (B1-B2) and that H0 holds, that is PXi = P for all 1 ? i ? n.
Assume in addition that the regularization parameters {?n }n?1 is such that
d1,n;?n (?) ?1 ?1/2
? n
?0,
d2,n;?n (?) n
and that an /n ? u > 0 and bn /n ? v < 1 as n tends to infinity. Then,
?n +
D
max Tn;?n (k) ?? sup p
an <k<bn
u<t<v
5
B(t)
t(1 ? t)
.
Remark
A closer look at Proposition 1 brings to light that the reweighting by $t(1-t)$ of the squared Brownian bridges on each component is crucial for our test statistic to be immune against imbalance between segment lengths under the alternative $H_A$, that is, when $\tau^\star$ is far from 1/2. Indeed, swapping out the reweighting by $t(1-t)$ to simply consider the corresponding unweighted test statistic is hazardous, and yields a loss of power against alternatives for which $\tau^\star$ is far from 1/2.
This section allowed us to obtain an $\alpha$-level test statistic for the change-point problem, by looking at the large-sample behaviour of the test statistic under the null hypothesis $H_0$. The next step is to prove that the test statistic is consistent in power, that is, that the detection probability tends to one as n tends
to infinity under the alternative hypothesis HA .
5 Consistency in power
This section shows that, when the alternative hypothesis HA holds, our test statistic is able to detect
presence of a change with probability one in the large-sample setting. The next proposition is proved
within the same framework as the one considered in the previous section, except that now, along each
component ep , one has to split the random sum into three parts [1, k], [k + 1, k ? ], [k ? + 1, n], and
then the large-sample behaviour of each projected random sum is the one of a two-sided Brownian
motion with drifts.
Proposition 4  Assume (A1-A2) and (B1-B2), and that $H_A$ holds, that is, there exists $u < \tau^\star < v$ with $u > 0$ and $v < 1$ such that $P_{X_{\lfloor n\tau^\star \rfloor}} \neq P_{X_{\lfloor n\tau^\star \rfloor + 1}}$. Assume in addition that the regularization parameter $\gamma$ is held fixed as n tends to infinity, and that $\lim_{n\to\infty} a_n/n > u$ and $\lim_{n\to\infty} b_n/n < v$. Then, for any $0 < \alpha < 1$, we have
$$P_{H_A}\left( \max_{a_n<k<b_n} T_{n;\gamma}(k) > t_{1-\alpha} \right) \to 1\,, \quad \text{as } n \to \infty\,. \qquad (5)$$
6 Extensions and related works
Extensions
It is worthwhile to note that we could also have built similar procedures from the
maximum mean discrepancy (MMD) test statistic proposed by [19]. Note also that, instead of the
Tikhonov-type regularization of the covariance operator, other regularization schemes may also be
applied, such as the spectral truncation regularization of the covariance operator, equivalent to preprocessing by a centered kernel principal component analysis [20, 21], as used in [22] for instance.
Related works
A related problem is the abrupt change detection problem, explored in [23],
which is naturally also encompassed by our framework. Here, one is interested in the early detection of a change from a nominal distribution to an erratic distribution. The procedure KCD of
[23] consists in running a window-limited detection algorithm, using two one-class support vector
machines trained respectively on the left and the right part of the window, and comparing the sets
of obtained weights; their approach differs from ours in two points. First, we have the limiting null distribution of KCpA, which allows us to compute decision thresholds in a principled way. Second, our test statistic incorporates a reweighting to keep power against alternatives with unbalanced
segments.
7 Experiments
Computational considerations
In all experiments, we set $\gamma = 10^{-5}$ and took the Gaussian kernel with isotropic bandwidth set by the plug-in rule used in density estimation. Second, since from k to $k+1$ the test statistic changes from $\mathrm{KFDR}_{n,k;\gamma}$ to $\mathrm{KFDR}_{n,k+1;\gamma}$, this amounts to taking into account the change from $\{(X_1, Y_1 = -1), \dots, (X_k, Y_k = -1), (X_{k+1}, Y_{k+1} = +1), \dots, (X_n, Y_n = +1)\}$ to $\{(X_1, Y_1 = -1), \dots, (X_k, Y_k = -1), (X_{k+1}, Y_{k+1} = -1), (X_{k+2}, Y_{k+2} = +1), \dots, (X_n, Y_n = +1)\}$ in the labelling in KFDR [9, 16]. This motivates an efficient strategy for the computation of the test statistic. We compute the matrix inversion of the regularized kernel Gram matrix once and for all, at a cost of $O(n^3)$, and then compute all values of the test statistic for all partitions in one matrix multiplication, in $O(n^2)$. As for computing the decision threshold $t_{1-\alpha}$, we used bootstrap resampling calibration with 10,000 runs. Other Monte Carlo based calibration procedures are possible, but are left for future research.
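Rather than reproducing the exact Gram-matrix implementation (one $O(n^3)$ inversion followed by an $O(n^2)$ matrix multiplication), the following transparent sketch applies the definitions of Section 3 verbatim in an explicit finite-dimensional feature space built from random Fourier features, which approximate the Gaussian kernel; this substitution and all names are ours:

```python
import numpy as np

def kfdr_scan(X, n_features=200, gamma=1e-5, bandwidth=1.0, seed=0):
    """Studentized KFDR statistic T_{n;gamma}(k) for all candidate k.

    Works in an explicit random-Fourier-feature space approximating the
    Gaussian-kernel RKHS, so the definitions of Section 3 apply verbatim
    (cost is O(n * n_features^3): a sketch, not the Gram-matrix scheme)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Wf = rng.standard_normal((d, n_features)) / bandwidth
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    Phi = np.sqrt(2.0 / n_features) * np.cos(X @ Wf + b)   # feature map

    T = np.full(n, -np.inf)
    for k in range(2, n - 1):
        L, R = Phi[:k], Phi[k:]
        delta = R.mean(axis=0) - L.mean(axis=0)            # mean-element gap
        SigW = (k * np.cov(L.T, bias=True)
                + (n - k) * np.cov(R.T, bias=True)) / n    # within covariance
        Minv = np.linalg.inv(SigW + gamma * np.eye(n_features))
        kfdr = k * (n - k) / n * delta @ Minv @ delta
        d1 = np.trace(Minv @ SigW)
        d2 = np.trace(Minv @ Minv @ SigW @ SigW)
        T[k] = (kfdr - d1) / np.sqrt(2.0 * d2)
    return T
```

The change-point decision then compares $\max_{a_n<k<b_n} T[k]$ with the calibrated threshold $t_{1-\alpha}$.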
          Subject 1   Subject 2   Subject 3
KCpA      79%         74%         61%
SVM       76%         69%         60%

Table 1: Average classification accuracy for each subject.
Brain-computer interface data
Signals acquired during Brain-Computer Interface (BCI) trial
experiments naturally exhibit temporal structure. We considered a dataset proposed in BCI competition III¹, acquired during 4 non-feedback sessions on 3 normal subjects, where each subject was
asked to perform different tasks, the time where the subject switches from one task to another being
random (see also [24]). Mental tasks segmentation is usually tackled with supervised classification
algorithms, which require labelled data to be acquired beforehand. Besides, standard supervised
classification algorithms are context-sensitive, and sometimes yield poor performance on BCI data.
We performed a sequence of change-point analysis on sliding windows overlapping by 20% along
the signals. We provide here two ways of measuring the performance of our method. First, in Figure 2 (left), we give the empirical ROC curve of our test statistic, averaged over all the signals at hand. This shows that our test statistic yields competitive performance for testing the presence of a
change-point, when compared with a standard parametric multivariate procedure (param) [4]. Second, in Table 1, we give experimental results in terms of classification accuracy, which proves that
we can reach comparable/better performance as supervised multi-class (one-versus-one) classification algorithms (SVM) with our completely unsupervised kernel change-point analysis algorithm.
If each segment is considered as a sample of a given class, then the classification accuracy corresponds here to the proportion of correctly assigned points at the end of the segmentation process.
This also clearly shows that KCpA algorithm give accurate estimates of the change-points, since the
change-point estimation error is directly measured by the classification accuracy.
[Figure 2 panels: ROC curves (power versus level) comparing KCpA with a parametric procedure (param) on the left, and KCpA with KCD on the right.]
Figure 2: Comparison of ROC curves for task segmentation from BCI data (left), and pop songs
segmentation (right).
Pop song segmentation
Indexation of music signals aims to provide a temporal segmentation
into several sections with different dynamic or tonal or timbral characteristics. We investigated
the performance of KCpA on a database of 100 full-length "pop music" signals, whose manual
segmentation is available. In Figure 2 (right), we provide the respective ROC-curves of KCD of [23]
and KCpA. Our approach is indeed competitive in this context.
8 Conclusion
We proposed a principled approach for the change-point analysis of a time-series of independent
observations. It provides a powerful procedure for testing the presence of a change in distribution in a sample. Moreover, we saw in experiments that it also allows us to accurately estimate the change-point when a change occurs. We are currently exploring several extensions of KCpA. Since
experimental results are promising on real data, in which the assumption of independence is rather
unrealistic, it is worthwhile to analyze the effect of dependence on the large-sample behaviour of our
test statistic, and explain why the test statistic remains powerful even for (weakly) dependent data.
¹ See http://ida.first.fraunhofer.de/projects/bci/competition_iii/
We are also investigating adaptive versions of the change-point analysis, in which the regularization
parameter $\gamma$ and the reproducing kernel k are learned from the data.
Acknowledgments
This work has been supported by Agence Nationale de la Recherche under contract ANR-06-BLAN0078 KERNSIG.
References
[1] F. De la Torre Frade, J. Campoy, and J. F. Cohn. Temporal segmentation of facial behavior. In ICCV, 2007.
[2] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proc. ICML, 2001.
[3] O. Cappé, E. Moulines, and T. Rydén. Inference in Hidden Markov Models. Springer, 2005.
[4] J. Chen and A. K. Gupta. Parametric Statistical Change-point Analysis. Birkhäuser, 2000.
[5] M. Csörgő and L. Horváth. Limit Theorems in Change-Point Analysis. Wiley and Sons, 1998.
[6] M. Basseville and N. Nikiforov. Detection of Abrupt Changes. Prentice-Hall, 1993.
[7] T. L. Lai. Sequential analysis: some classical problems and new challenges. Statistica Sinica, 11, 2001.
[8] E. Lehmann and J. Romano. Testing Statistical Hypotheses (3rd ed.). Springer, 2005.
[9] Z. Harchaoui, F. Bach, and E. Moulines. Testing for homogeneity with kernel Fisher discriminant analysis. In Adv. NIPS, 2007.
[10] G. Blanchard, O. Bousquet, and L. Zwald. Statistical properties of kernel principal component analysis. Machine Learning, 66, 2007.
[11] K. Fukumizu, F. Bach, and A. Gretton. Statistical convergence of kernel canonical correlation analysis. JMLR, 8, 2007.
[12] C. Gu. Smoothing Spline ANOVA Models. Springer, 2002.
[13] I. Steinwart, D. Hush, and C. Scovel. An explicit description of the RKHS of Gaussian RBF kernels. IEEE Trans. on Inform. Th., 2006.
[14] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, G. R. G. Lanckriet, and B. Schölkopf. Injective Hilbert space embeddings of probability measures. In COLT, 2008.
[15] B. James, K. L. James, and D. Siegmund. Tests for a change-point. Biometrika, 74, 1987.
[16] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge UP, 2004.
[17] P. Billingsley. Convergence of Probability Measures (2nd ed.). Wiley Interscience, 1999.
[18] P. Glasserman. Monte Carlo Methods in Financial Engineering (1st ed.). Springer, 2003.
[19] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample problem. In Adv. NIPS, 2006.
[20] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, 2002.
[21] G. Blanchard and L. Zwald. Finite-dimensional projection for classification and statistical learning. IEEE Transactions on Information Theory, 54(9):4169–4182, 2008.
[22] Z. Harchaoui, F. Vallet, A. Lung-Yut-Fong, and O. Cappé. A regularized kernel-based approach to unsupervised audio segmentation. In ICASSP, 2009.
[23] F. Désobry, M. Davy, and C. Doncarli. An online kernel change detection algorithm. IEEE Trans. on Signal Processing, 53(8):2961–2974, August 2005.
[24] Z. Harchaoui and O. Cappé. Retrospective multiple change-point estimation with kernels. In IEEE Workshop on Statistical Signal Processing (SSP), 2007.
Multi-resolution Exploration in Continuous Spaces
Ali Nouri
Department of Computer Science
Rutgers University
Piscataway, NJ 08854
[email protected]
Michael L. Littman
Department of Computer Science
Rutgers University
Piscataway, NJ 08854
[email protected]
Abstract
The essence of exploration is acting to try to decrease uncertainty. We propose
a new methodology for representing uncertainty in continuous-state control problems. Our approach, multi-resolution exploration (MRE), uses a hierarchical mapping to identify regions of the state space that would benefit from additional samples. We demonstrate MRE's broad utility by using it to speed up learning in a prototypical model-based and value-based reinforcement-learning method. Empirical
results show that MRE improves upon state-of-the-art exploration approaches.
1 Introduction
Exploration, in reinforcement learning, refers to the strategy an agent uses to discover new information about the environment. A rich set of exploration techniques, some ad hoc and some not, have
been developed in the RL literature for finite MDPs (Kaelbling et al., 1996). Using optimism in the
face of uncertainty in combination with explicit model representation, some of these methods have
led to the derivation of polynomial sample bounds on convergence to near-optimal policies (Kearns
& Singh, 2002; Brafman & Tennenholtz, 2002). But, because they treat each state independently,
these techniques are not directly applicable to continuous-space problems, where some form of generalization must be used.
Some attempts have been made to improve the exploration effectiveness of algorithms in continuous-state spaces. Kakade et al. (2003) extended previous work of Kearns and Singh (2002) to metric
spaces and provided a conceptual approach for creating general provably convergent model-based
learning methods. Jong and Stone (2007) proposed a method that can be interpreted as a practical
implementation of this work, and Strehl and Littman (2007) improved its complexity in the case that
the model can be captured by a linear function.
The performance metric used in these works demands near-optimal behavior after a polynomial
number of timesteps with high probability, but does not insist on performance improvements before or after convergence. Such "anytime" behavior is encouraged by algorithms with regret
bounds (Auer & Ortner, 2006), although regret-type algorithms have not yet been explored in
continuous-state space problems to our knowledge.
As a motivating example for the work we present here, consider how a discrete state-space algorithm
might be adapted to work for a continuous state-space problem. The practitioner must decide how
to discretize the state space. While finer discretizations allow the learning algorithm to learn more
accurate policies, they require much more experience to learn well. The dilemma of picking fine
or coarse resolution has to be resolved in advance using estimates of the available resources, the
dynamics and reward structure of the environment, and a desired level of optimality. Performance
depends critically on these a priori choices instead of responding dynamically to the available resources.
We propose using multi-resolution exploration (MRE) to create algorithms that explore continuous
state spaces in an anytime manner without the need for a priori discretization. The key to this ideal
is to be able to dynamically adjust the level of generalization the agent uses during the learning
process. MRE sports a knownness criterion for states that allows the agent to reliably apply function
approximation with different degrees of generalization to different regions of the state space.
One of the main contributions of this work is to provide a general exploration framework that can be
used in both model-based and value-based algorithms. While model-based techniques are known for
their small sample complexities, thanks to their smart exploration, they haven't been as successful
as value-based methods in continuous spaces because of their expensive planning part. Value-based
methods, on the other hand, have been less fortunate in terms of intelligent exploration, and some
of the very powerful RL techniques in continuous spaces, such as LSPI (Lagoudakis & Parr, 2003)
and fitted Q-iteration (Ernst et al., 2005) are in the form of offline batch learning and completely
ignore the problem of exploration. In practice, an exploration strategy is usually incorporated with
these algorithms to create online versions. Here, we examine fitted Q-iteration and show how MRE
can be used to improve its performance over conventional exploration schemes by systematically
collecting better samples.
2 Background
We consider environments that are modeled as Markov decision processes (MDPs) with continuous state spaces (Puterman, 1994). An MDP $M$ in our setting can be described as a tuple $\langle S, A, T, R, \gamma \rangle$, where $S$ is a bounded measurable subspace of $\mathbb{R}^k$; we say the problem is $k$-dimensional, as one can represent a state by a vector of size $k$, and we use $s(i)$ to denote the $i$-th component of this vector. $A = \{a_1, \ldots, a_m\}$ is the discrete set of actions. $T$ is the transition function that determines the next state given the current state and action. It can be written in the form $s_{t+1} = T(x_t, a_t) + \omega_t$, where $x_t$ and $a_t$ are the state and action at time $t$ and $\omega_t$ is white noise drawn i.i.d. from a known distribution. $R : S \to \mathbb{R}$ is the bounded reward function, whose maximum we denote by $R_{\max}$, and $\gamma$ is the discount factor.
Other concepts are similar to those of a general finite MDP (Puterman, 1994). In particular, a policy $\pi$ is a mapping from states to actions that prescribes what action to take from each state. Given a policy $\pi$ and a starting state $s$, the value of $s$ under $\pi$, denoted by $V^\pi(s)$, is the expected discounted sum of rewards the agent will collect by starting from $s$ and following policy $\pi$. Under mild conditions (Puterman, 1994), at least one policy exists that maximizes this value function over all states, which we refer to as the optimal policy, $\pi^*$. The value of states under this policy is called the optimal value function $V^*(\cdot) = V^{\pi^*}(\cdot)$.
The learning agent has prior knowledge of $S$, $\gamma$, $\omega$ and $R_{\max}$, but not $T$ and $R$, and has to find a near-optimal policy solely through direct interaction with the environment.
3 Multi-resolution Exploration
We'd like to build upon the previous work of Kakade et al. (2003). One of the key concepts to this
method and many other similar algorithms is the notion of known state. Conceptually, it refers to the
portion of the state space in which the agent can reliably predict the behavior of the environment.
Imagine how the agent would decide whether a state is known or unknown as described in (Kakade
et al., 2003). Based on the prior information about the smoothness of the environment and the level
of desired optimality, we can form a hyper sphere around each query point and check if enough data
points exist inside it to support the prediction.
In this method, we use the same hyper-sphere size across the entire space, no matter how the sample
points are distributed, and we keep this size fixed during the entire learning process. In another
words, the degree of generalization is fixed both in time and space.
To support "anytime" behavior, we need to make the degree of generalization variable both in time
and space. MRE partitions the state space into a variable resolution discretization that dynamically
forms smaller cells for regions with denser sample sets. Generalization happens inside the cells (similar to the hyper sphere example), therefore it allows for wider but less accurate generalization in
parts of the state space that have fewer sample points, and narrow but more accurate ones for denser
parts.
To effectively use this mechanism, we need to change the notion of known states, as its common
definition is no longer applicable. Let's define a new knownness criterion that maps $S$ into $[0, 1]$ and
quantifies how much we should trust the function approximation. The two extreme values, 0 and
1, are the two degenerate cases equal to unknown and known conditions in the previous definitions.
In the remainder of this section, we first show how to form the variable resolution structure and
compute the knownness, and then we demonstrate how to use this structure in a prototypical modelbased and value-based algorithm.
3.1 Regression Trees and Knownness
Regression trees are function approximators that partition the input space into non-overlapping regions and use the training samples of each region for prediction of query points inside it. Their ability
to maintain a non-uniform discretization of high-dimensional spaces with relatively fast query time
has proven to be very useful in various RL algorithms (Ernst et al., 2005; Munos & Moore, 2002).
For the purpose of our discussion, we use a variation of the kd-tree structure (Preparata & Shamos,
1985) to maintain our variable-resolution partitioning and produce knownness values. We call this
structure the knownness-tree. As this structure is not used in a conventional supervised-learning
setting, we next describe some of the details.
A knownness-tree $\tau$ with dimension $k$ accepts points $s \in \mathbb{R}^k$ satisfying $\|s\|_\infty \le 1$,¹ and answers queries of the form $0 \le \mathrm{knownness}(s) \le 1$. Each node $\eta$ of the tree covers a bounded region and keeps track of the points inside that region, with the root covering the whole space. Let $R_\eta$ be the region of $\eta$.
Each internal node splits its region into two half-regions along one of the dimensions to create two child nodes. Parameter $\nu$ determines the maximum allowed number of points in each leaf. For a node $l$, $l.\mathrm{size}$ is the inf-norm of the size of the region it covers and $l.\mathrm{count}$ is the number of points inside it. Given $n$ points, the normalizing size of the resulting tree, denoted by $\mu$, is the region size of a hypothetical uniform discretization of the space that puts $\nu/k$ points inside each cell, if the points were uniformly distributed in the space; that is, $\mu = 1 \big/ \big\lfloor \sqrt[k]{nk/\nu} \big\rfloor$.
Upon receiving a new point, the traversal algorithm starts at the root and travels down the tree, guided by the splitting dimension and value of each internal node. Once inside a leaf $l$, it adds the point to its list of points; if $l.\mathrm{count}$ is more than $\nu$, the node splits and creates two new half-regions.² Splitting is performed by selecting a dimension $j \in [1..k]$ and splitting the region into two equal half-regions along the $j$-th dimension.
The points inside the list are added to each of the children according to which half-region they fall into. Similar to regular regression trees, several different criteria could be used to select $j$. Here, we assume a round-robin method, just like the kd-tree.
To answer a query $\mathrm{knownness}(s)$, the lookup algorithm first finds the corresponding leaf that contains $s$, denoted $l(s)$, then computes knownness based on $l(s).\mathrm{size}$, $l(s).\mathrm{count}$ and $\mu$:
$$\mathrm{knownness}(s) = \min\!\left(1,\ \frac{l(s).\mathrm{count}}{\nu} \cdot \frac{\mu}{l(s).\mathrm{size}}\right) \qquad (1)$$
The normalizing size of the tree is bigger when the total number of data points is small. This creates higher knownness values for a fixed cell at the beginning of learning. As more experience is collected, $\mu$ becomes smaller and encourages finer discretization. This process creates a variable degree of generalization over time.
¹ In practice, scaling can be used to satisfy this property.
² For the sake of practicality, we can assign a maximum depth to avoid indefinite growth of the tree.
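To make the data structure concrete, the following is a minimal Python sketch of a knownness-tree with round-robin kd-splits, assuming the space has been rescaled to $[0,1]^k$ (cf. footnote 1) and using a depth cap (cf. footnote 2); the class and method names are ours for illustration, not the paper's implementation.

import math

class KnownnessTree:
    # Minimal sketch of a knownness-tree (round-robin kd-splits) over [0, 1]^k.
    # `nu` is the maximum allowed number of points per leaf.
    MAX_DEPTH = 30  # guards against indefinite growth (cf. footnote 2)

    def __init__(self, k, nu, lo=None, hi=None, depth=0):
        self.k, self.nu, self.depth = k, nu, depth
        self.lo = lo if lo is not None else [0.0] * k
        self.hi = hi if hi is not None else [1.0] * k
        self.points, self.children, self.n_total = [], None, 0

    def _child(self, s):
        d = self.depth % self.k
        mid = 0.5 * (self.lo[d] + self.hi[d])
        return self.children[0] if s[d] < mid else self.children[1]

    def add(self, s):
        self.n_total += 1            # total count, tracked at the root
        node = self
        while node.children is not None:
            node = node._child(s)
        node.points.append(s)
        if len(node.points) > node.nu and node.depth < self.MAX_DEPTH:
            node._split()

    def _split(self):
        d = self.depth % self.k      # round-robin split dimension
        mid = 0.5 * (self.lo[d] + self.hi[d])
        hi_l, lo_r = list(self.hi), list(self.lo)
        hi_l[d], lo_r[d] = mid, mid
        self.children = (
            KnownnessTree(self.k, self.nu, list(self.lo), hi_l, self.depth + 1),
            KnownnessTree(self.k, self.nu, lo_r, list(self.hi), self.depth + 1),
        )
        for s in self.points:        # redistribute points to the half-regions
            self._child(s).points.append(s)
        self.points = []

    def knownness(self, s):
        n = max(self.n_total, 1)
        # normalizing size: mu = 1 / floor((n k / nu)^(1/k)), as derived above
        mu = 1.0 / max(1, math.floor((n * self.k / self.nu) ** (1.0 / self.k)))
        node = self
        while node.children is not None:
            node = node._child(s)
        size = max(node.hi[d] - node.lo[d] for d in range(self.k))  # inf-norm
        return min(1.0, (len(node.points) / self.nu) * (mu / size))

A tree for the 2-dimensional Mountain Car state space of Section 4 would be built as tree = KnownnessTree(k=2, nu=10), filled with tree.add(s) after each observed state, and queried with tree.knownness(s).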
3.2 Application to Model-based RL
The model-based algorithm we describe here uses function approximation to estimate $T$ and $R$, which are the two unknown parameters of the environment. Let $\Phi$ be the set of function approximators for estimating the transition function, with each $\varphi_{ij} \in \Phi : \mathbb{R}^k \to \mathbb{R}$ predicting the $i$-th component of $T(\cdot, a_j)$. Accordingly, let $\tau_{ij}$ be a knownness-tree for $\varphi_{ij}$. Let $\varrho : \mathbb{R}^k \to \mathbb{R}$ be the function approximator for the reward function. The estimated transition function, $\hat{T}(s, a)$, is therefore formed by concatenating all the $\varphi_{ia}(s)$. Let $\mathrm{knownness}(s, a) = \min_i \{\tau_{ia}.\mathrm{knownness}(s)\}$.
Construct the augmented MDP $M' = \langle S \cup \{s_f\}, A, \hat{T}', \varrho, \gamma \rangle$ by adding a new state, $s_f$, with a reward of $R_{\max}$ and only self-loop transitions. The augmented transition function $\hat{T}'$ is a stochastic function defined as:
$$\hat{T}'(s, a) = \begin{cases} s_f & \text{with probability } 1 - \mathrm{knownness}(s, a) \\ \hat{T}(s, a) + \omega & \text{otherwise} \end{cases} \qquad (2)$$
Algorithm 1 constructs and solves $M'$ and always acts greedily with respect to this internal model. DPlan is a continuous MDP planner that supports two operations: solveModel, which solves a given MDP, and getBestAction, which returns the greedy action for a given state.
Algorithm 1 A model-based algorithm using MRE for exploration
1: Variables: DPlan, $\Phi$, $\varrho$ and solving period planFreq
2: Observe a transition of the form $(s_t, a_t, r_t, s_{t+1})$
3: Add $(s_t, r_t)$ as a training sample to $\varrho$.
4: Add $(s_t, s_{t+1}(i))$ as a training sample to $\varphi_{i a_t}$.
5: Add $s_t$ to $\tau_{i a_t}$.
6: if $t \bmod \mathrm{planFreq} = 0$ then
7:    Construct the augmented MDP $M'$ as defined earlier.
8:    DPlan.solveModel($M'$)
9: end if
10: Execute action DPlan.getBestAction($s_{t+1}$)
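For illustration, the optimistic transition of Eq. (2) inside this loop can be sampled as in the hedged sketch below; the phi[(i, a)].predict interface and the trees dictionary are assumptions standing in for whatever regressors and knownness-trees an implementation keeps.

import numpy as np

def sample_augmented_transition(s, a, phi, trees, omega_std, rng):
    # Sample a next state from the optimistic model T-hat' of Eq. (2).
    # phi[(i, a)]:   regressor predicting the i-th next-state component
    # trees[(i, a)]: knownness-tree for that regressor (assumed interfaces)
    k = len(s)
    kappa = min(trees[(i, a)].knownness(s) for i in range(k))
    if rng.random() > kappa:
        # jump to the absorbing optimistic state s_f (reward R_max there)
        return True, None
    mean = np.array([phi[(i, a)].predict(s) for i in range(k)])
    return False, mean + rng.normal(0.0, omega_std, size=k)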
While we leave a rigorous theoretical analysis of Algorithm 1 to another paper, we'd like to discuss some of its properties. The core of the algorithm is the way knownness is computed and how it's used to make the estimated transition function optimistic. In particular, if we use a uniform fixed grid instead of the knownness-tree, the algorithm starts to act similarly to MBIE (Strehl & Littman, 2005). That is, like MBIE, the value of a state becomes gradually less optimistic as more data is available. Because of their similarity, we hypothesize that similar PAC bounds could be proved for MRE in this configuration.
If we further change $\mathrm{knownness}(s, a)$ to be $\lfloor \mathrm{knownness}(s, a) \rfloor$, the algorithm reduces to an instance of metric E$^3$ (Kakade et al., 2003), which can also be used to derive finite sample bounds. But Algorithm 1 also has "anytime" behavior. Let's assume the transition and reward functions are Lipschitz smooth with Lipschitz constants $C_T$ and $C_R$ respectively. Let $\mu_t$ be the maximum size of the cells and $\ell_t$ be the minimum knownness of all of the trees $\tau_{ij}$ at time $t$. The following establishes a performance guarantee of the algorithm at time $t$.
Theorem 1 If learning is frozen at time $t$, Algorithm 1 achieves $\epsilon$-optimal behavior, with $\epsilon$ being:
$$\epsilon = O\!\left(\frac{\mu_t\,(C_R + C_T \sqrt{k}) + 2(1 - \ell_t)}{(1 - \gamma)^2}\right)$$
Proof 1 (sketch) This follows as an application of the simulation lemma (Kearns & Singh, 2002). We can use the smoothness assumptions to compute the closeness of $\hat{T}'$ to the original transition function based on the shape of the trees and the knownness they output.
Of course, this theorem doesn't provide a bound for $\mu_t$ and $\ell_t$ based on $t$, as used in common "anytime" analyses, but it gives us some insight into how the algorithm behaves. For example, the incremental refinement of model estimation assures a certain global accuracy before forcing the algorithm to collect denser samples locally. As a result, MRE encourages more versatile sampling at the early stages of learning. As time goes by and the size of the cells gets smaller, the algorithm gets closer to the optimal policy. In fact, we hypothesize that with some caveats concerning the computation of $\mu$, it can be proved that Algorithm 1 converges to the optimal policy in the limit, given that an oracle planner is available.
The bound in Theorem 1 is loose because it involves only the biggest cell size, as opposed to individual cell sizes. Alternatively, one might be able to achieve better bounds, similar to those in the work of Munos and Moore (2000), by taking the variable resolution of the tree into account.
3.3 Application to Value-based RL
Here, we show how to use MRE in fitted Q-iteration, which is a value-based batch learner for continuous spaces. A similar approach can be used to apply MRE to other types of value-based methods, such as LSPI, as an alternative to random sampling or $\epsilon$-greedy exploration, which are widely used in practice.
The fitted Q-iteration algorithm accepts a set of four-tuple samples $S = \{(s_l, a_l, r_l, s'_l),\ l = 1 \ldots n\}$ and uses regression trees to iteratively compute more accurate Q-functions. In particular, let $\hat{Q}^j_i$ be the regression tree used to approximate $Q(\cdot, j)$ in the $i$-th iteration. Let $S^j \subseteq S$ be the set of samples with action equal to $j$. The training samples for $\hat{Q}^j_0$ are $S^j_0 = \{(s_l, r_l) \mid (s_l, a_l, r_l, s'_l) \in S^j\}$. $\hat{Q}^j_{i+1}$ is constructed based on $\hat{Q}_i$ in the following way:
$$x^l = \{s_l \mid (s_l, a_l, r_l, s'_l) \in S^j\} \qquad (3)$$
$$y^l = \{r_l + \gamma \max_{a \in A} \hat{Q}^a_i(s'_l) \mid (s_l, a_l, r_l, s'_l) \in S^j\} \qquad (4)$$
$$S^j_{i+1} = \{(x^l, y^l)\}. \qquad (5)$$
Random sampling is usually used to collect $S$ for fitted Q-iteration when used as an offline algorithm. In online settings, $\epsilon$-greedy can be used as the exploration scheme to collect samples. The batch portion of the algorithm is applied periodically to incorporate the newly collected samples.
Combining MRE with fitted Q-iteration is very simple. Let $\tau^j$ correspond to $\hat{Q}^j_i$ for all $i$'s, and be trained on the same samples. The only change in the algorithm is the computation of Equation 4. In order to use optimistic values, we elevate the $\hat{Q}$-targets according to their knownness:
$$y^l = \tau^j.\mathrm{knownness}(s_l)\left(r_l + \gamma \max_{a \in A} \hat{Q}^a_i(s'_l)\right) + \big(1 - \tau^j.\mathrm{knownness}(s_l)\big)\,\frac{R_{\max}}{1 - \gamma}.$$
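A sketch of this target computation, under the assumption that each regressor exposes a predict method; it mirrors the formula above but is not the authors' code.

import numpy as np

def mre_targets(batch_j, tau_j, q_prev, gamma, r_max):
    # Optimistic regression targets for action j in fitted Q-iteration.
    # batch_j: list of (s, r, s_next) tuples whose action is j
    # q_prev:  dict action -> previous-iteration regressor (assumed API)
    v_max = r_max / (1.0 - gamma)   # optimistic value for unknown regions
    xs, ys = [], []
    for (s, r, s_next) in batch_j:
        kappa = tau_j.knownness(s)
        backup = r + gamma * max(q.predict(s_next) for q in q_prev.values())
        xs.append(s)
        ys.append(kappa * backup + (1.0 - kappa) * v_max)
    return xs, ys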
4 Experimental Results
To empirically evaluate the performance of MRE, we consider a well-studied environment called "Mountain Car" (Sutton & Barto, 1998). In this domain, an underpowered car tries to climb up to the right of a valley, but has to increase its velocity via several back-and-forth trips across the valley. The state space is 2-dimensional and consists of the horizontal position of the car, in the range $[-1.2, 0.6]$, and its velocity, in $[-0.07, 0.07]$. The action set is forward, backward, and neutral, which correspond to accelerating in the intended direction. The agent receives a $-1$ penalty in each timestep, except when it escapes the valley, upon which it receives a reward of 0 that ends the episode. Each episode has a cap of 300 steps, and $\gamma = 0.95$ is used for all the experiments. A small amount of Gaussian noise $\omega \sim N(0, 0.01)$ is added to the position component of the deterministic transition function used in the original definition, and the starting position of the car is chosen very close to the bottom of the hill, with a random velocity very close to 0 (achieved by drawing samples from a normal distribution with mean at the bottom of the hill and variance of 1/15 of the state-space range). This set of parameters makes this environment especially interesting for the purpose of comparing exploration strategies, because it is unlikely for random exploration to guide the car to the top of the hill. Similar scenarios occur in almost all complex real-life domains, where a long trajectory is needed to reach the goal.
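For reference, a sketch of this environment follows. The dynamics constants are the standard Sutton & Barto ones (an assumption, since the paper does not restate them), and we read N(0, 0.01) as specifying the noise standard deviation and "1/15 of the state space" as a standard deviation relative to the range, both of which the text leaves ambiguous.

import numpy as np

class NoisyMountainCar:
    # Sketch of the noisy Mountain Car variant described above.
    def __init__(self, rng):
        self.rng = rng
        self.reset()

    def reset(self):
        # start near the valley bottom with near-zero velocity (assumed spread)
        self.p = float(np.clip(self.rng.normal(-0.52, 1.8 / 15.0), -1.2, 0.6))
        self.v = float(np.clip(self.rng.normal(0.0, 0.14 / 15.0), -0.07, 0.07))
        return self.p, self.v

    def step(self, action):  # action in {-1, 0, +1}
        self.v = float(np.clip(self.v + 0.001 * action
                               - 0.0025 * np.cos(3.0 * self.p), -0.07, 0.07))
        self.p += self.v + self.rng.normal(0.0, 0.01)  # noisy position update
        self.p = float(np.clip(self.p, -1.2, 0.6))
        if self.p >= 0.6:
            return (self.p, self.v), 0.0, True   # escaped: reward 0, episode ends
        return (self.p, self.v), -1.0, False     # otherwise -1 per step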
Three versions of Algorithm 1 are compared in Figure 1(a): the first two implementations use fixed discretizations instead of the knownness-tree, with different normalized resolutions of 0.05 and 0.3. The third one uses variable discretization via the knownness-tree as defined in Section 3.1. All the instances use the same $\Phi$ and $\varrho$, which are regular kd-tree structures (Ernst et al., 2005) with a maximum of 10 points allowed in each cell. All of the algorithms use fitted value-iteration (Gordon, 1999) as their DPlan, and their planFreq is set to 100. Furthermore, the known-threshold parameter of the first two instances was hand-tuned to 4 and 30, respectively.
[Figure 1(a): learning curves, steps to goal versus episode, for the variable, fixed-0.05, and fixed-0.3 strategies.]
The learning curve in Figure 1(a) is averaged over 20 runs with different random seeds and smoothed
over a window of size 5 to avoid a cluttered graph. The finer fixed-discretization converges to a
very good policy, but takes a long time to do so, because it trusts only very accurate estimations
throughout the learning. The coarse discretization, on the other hand, converges very fast, but not to a very good policy; it constructs rough estimations and doesn't compensate as more samples are
gathered. MRE refines the notion of knownness to make use of rough estimations at the beginning
and accurate ones later, and therefore converges to a good policy fast.
[Figure 1(b): average steps to goal per episode for the three strategies, evaluated over episodes 0-100, 100-200, and 200-300.]
Figure 1: (a) The learning curve of Algorithm 1 in Mountain Car with three different exploration
strategies. (b) Average performance of Algorithm 1 in Mountain Car with three exploration strategies. Performance is evaluated at three different stages of learning.
A more detailed comparison of this result is shown in Figure 1(b), where the average time-per-episode is provided for three different phases: at the early stages of learning (episode 1-100), at
the middle of learning (episode 100-200), and during the late stages (episode 200-300). Standard
deviation is used as the error bar.
To have a better look at why MRE provides better results than the fixed 0.05 at the early stages of
learning (note that both of them achieve the same performance level at the end), value functions of
the two algorithms at timestep = 1500 are shown in Figure 2. Most of the samples at this stage
have very small knownness in the fixed version, due to the very fine discretization, and therefore have
very little effect on the estimation of the transition function. This situation results in an overly optimistic value function (the flat part of the function). The variable discretization, however, achieves a more
realistic and smooth value function by allowing coarser generalizations in parts of the state space
with fewer samples.
The same type of learning curve is shown for the fitted Q-iteration algorithm in Figure 3. Here,
we compare $\epsilon$-greedy to two versions of variable-resolution MRE. In the first version, although a knownness-tree is chosen for partitioning the state space, knownness is computed as a Boolean value using the $\lfloor \cdot \rfloor$ operator. The second version uses continuous knownness. For $\epsilon$-greedy, $\epsilon$ is set to 0.3 at the beginning, decayed linearly to 0.03 by $t = 10000$, and kept constant afterward. This parameter setting is the result of a rough optimization through a few trials and errors. As expected, $\epsilon$-greedy performs poorly, because it cannot collect good samples to feed the batch learner. Both versions of MRE converge to the same policy, although the one that uses continuous knownness does so faster.
[Figure 2: two 3-D surface plots of the value function over the (position, velocity) state space, panels (a) and (b).]
Figure 2: Snapshot of the value function at timestep 1500 in Algorithm 1 with two configurations: (a) fixed discretization with resolution 0.05, and (b) variable resolution.
[Figure 3: learning curves, steps to goal versus episode, for continuous-knownness MRE, Boolean-knownness MRE, and ε-greedy.]
Figure 3: The learning curve for fitted Q-iteration in Mountain Car. $\epsilon$-greedy is compared to two versions of MRE: one that uses Boolean knownness, and one that uses continuous knownness.
To have a better understanding of why continuous knownness helps fitted Q-iteration during the early stages of learning, snapshots of knownness from the two versions are depicted in Figure 4,
along with the set of visited states at timestep 1500. Black indicates a completely unknown region,
while white means completely known; gray is used for intermediate values. The continuous notion
of knownness helps fitted Q-iteration in this case to collect better-covering samples at the beginning
of learning.
5 Conclusion
In this paper, we introduced multi-resolution exploration for reinforcement learning in continuous
spaces and demonstrated how to use it in two algorithms from the model-based and value-based
paradigms. The combination of two key features distinguishes MRE from previous smart exploration
schemes in continuous spaces: The first is that MRE uses a variable-resolution structure to identify
known vs. unknown regions, and the second is that it successively refines the notion of knownness
during learning, which allows it to assign continuous, instead of Boolean, knownness. The applicability of MRE to value-based methods allows us to benefit from smart exploration ideas from the
model-based setting in powerful value-based batch learners that usually use naive approaches like
random sampling to collect data. Experimental results confirm that MRE holds a significant advantage over some other exploration techniques widely used in practice.
[Figure 4: two grayscale knownness maps over the state space, panels (a) Boolean and (b) continuous, with the visited states overlaid.]
Figure 4: Knownness computed in two versions of MRE for fitted Q-iteration: one that has Boolean values, and one that uses continuous ones. Black indicates completely unknown and white means completely known. Collected samples are also shown for the same two versions at timestep 1500.
References
Auer, P., & Ortner, R. (2006). Logarithmic online regret bounds for undiscounted reinforcement learning. Advances in Neural Information Processing Systems 20 (NIPS-06).
Brafman, R. I., & Tennenholtz, M. (2002). R-max, a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3, 213-231.
Ernst, D., Geurts, P., & Wehenkel, L. (2005). Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6, 503-556.
Gordon, G. J. (1999). Approximate solutions to Markov decision processes. Doctoral dissertation, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.
Jong, N. K., & Stone, P. (2007). Model-based function approximation for reinforcement learning. The Sixth International Joint Conference on Autonomous Agents and Multiagent Systems.
Kaelbling, L. P., Littman, M. L., & Moore, A. P. (1996). Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4, 237-285.
Kakade, S., Kearns, M., & Langford, J. (2003). Exploration in metric state spaces. In Proc. of the 20th International Conference on Machine Learning, 2003.
Kearns, M. J., & Singh, S. P. (2002). Near-optimal reinforcement learning in polynomial time. Machine Learning, 49, 209-232.
Lagoudakis, M. G., & Parr, R. (2003). Least-squares policy iteration. Journal of Machine Learning Research, 4, 1107-1149.
Munos, R., & Moore, A. (2002). Variable resolution discretization in optimal control. Machine Learning, 49, 291-323.
Munos, R., & Moore, A. W. (2000). Rates of convergence for variable resolution schemes in optimal control. Proceedings of the Seventeenth International Conference on Machine Learning (ICML-00) (pp. 647-654).
Preparata, F. P., & Shamos, M. I. (1985). Computational geometry - an introduction. Springer.
Puterman, M. L. (1994). Markov decision processes: Discrete stochastic dynamic programming. New York: Wiley.
Strehl, A., & Littman, M. (2007). Online linear regression and its application to model-based reinforcement learning. Advances in Neural Information Processing Systems 21 (NIPS-07).
Strehl, A. L., & Littman, M. L. (2005). A theoretical analysis of model-based interval estimation. ICML-05 (pp. 857-864).
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.
Bayesian Experimental Design of Magnetic Resonance Imaging Sequences
Matthias W. Seeger, Hannes Nickisch, Rolf Pohmann and Bernhard Schölkopf
Max Planck Institute for Biological Cybernetics
Spemannstraße 38
72012 Tübingen, Germany
{seeger,hn,rolf.pohmann,bs}@tuebingen.mpg.de
Abstract
We show how improved sequences for magnetic resonance imaging can be
found through optimization of Bayesian design scores. Combining approximate
Bayesian inference and natural image statistics with high-performance numerical computation, we propose the first Bayesian experimental design framework
for this problem of high relevance to clinical and brain research. Our solution
requires large-scale approximate inference for dense, non-Gaussian models. We
propose a novel scalable variational inference algorithm, and show how powerful
methods of numerical mathematics can be modified to compute primitives in our
framework. Our approach is evaluated on raw data from a 3T MR scanner.
1 Introduction
Magnetic resonance imaging (MRI) [7, 2] is a key diagnostic technique in healthcare nowadays, and
of central importance for experimental research of the brain. Without applying any harmful ionizing radiation, this technique stands out by its amazing versatility: by combining different types of
radiofrequency irradiation and rapidly switched spatially varying magnetic fields (called gradients)
superimposing the homogeneous main field, a large variety of different parameters can be recorded,
ranging from basic anatomy to imaging blood flow, brain function or metabolite distribution. For
this large spectrum of applications, a huge number of sequences has been developed that describe
the temporal flow of the measurement, ranging from a relatively low number of multi-purpose techniques like FLASH [5], RARE [6], or EPI [9], to specialized methods for visualizing bones or
perfusion. To select the optimum sequence for a given problem, and to tune its parameters, is a difficult task even for experts, and even more challenging is the design of new, customized sequences
to address a particular question, making sequence development an entire field of research [1]. The
main drawbacks of MRI are its high initial and running costs, since a very strong homogeneous magnetic field has to be maintained, and its long scanning times, due to weak signals and limits on gradient amplitude. With this in mind, by far the majority of scientific work on improving MRI
is motivated by obtaining diagnostically useful images in less time. Beyond reduced costs, faster
imaging also leads to higher temporal resolution in dynamic sequences for functional MRI (fMRI),
less annoyance to patients, and fewer artifacts due to patient motion.
In this paper, we employ Bayesian experimental design to optimize MRI sequences. Image reconstruction from MRI raw data is viewed as a problem of inference from incomplete observations. In
contrast, current reconstruction techniques are non-iterative. For most sequences used in hospitals
today, reconstruction is done by a single fast Fourier transform (FFT). However, natural and MR
images show stable low-level statistical properties,1 which allows them to be reconstructed from
¹ These come from the presence of edges and smooth areas, which on a low level define image structure, and which are not present in Gaussian data (noise).
fewer observations. In our work, a non-Gaussian prior distribution represents low-level spectral and
local natural image statistics. A similar idea is known as compressed sensing (CS), which has been
applied to MRI [8].
A different and more difficult problem is to improve the sequence itself. In our Bayesian method,
a posterior distribution over images is maintained, which is essential for judging the quality of the
sequence: the latter can be modified so as to decrease uncertainty in regions or along directions of
interest, where uncertainty is quantified by the posterior. Importantly, this is done without the need
to run many MRI experiments in random a priori data collections. It has been proposed to design
sequences by blindly randomizing aspects thereof [8], based on CS theoretical results. Beyond being
hard to achieve on a scanner, our results indicate that random measurements do not work well for
real MR images. Similar negative findings for a variety of natural images are given in [12].
Our proposal requires efficient Bayesian inference for MR images of realistic resolution. We present
a novel scalable variational approximate inference algorithm inspired by [16]. The problem is reduced to numerical mathematics primitives, and further to matrix-vector multiplications (MVM)
with large, structured matrices, which are computed by efficient signal processing code. Most previous algorithms [3, 14, 11] iterate over single non-Gaussian potentials, which renders them of no
use for our problem here.2 Our solutions for primitives required here should be useful for other machine learning applications as well. Finally, we are not aware of Bayesian or classical experimental
design methods for dense non-Gaussian models, scaling comparably to ours. The framework of
[11] is similar, but could not be applied to the scale of interest here. Our model and experimental
design framework are described in Section 2, a novel scalable approximate inference algorithm is
developed in Section 3, and our framework is evaluated on a large-scale realistic setup with scanner
raw data in Section 4.
2 Sparse Linear Model. Experimental Design
Denote the desired MR image by $u \in \mathbb{R}^n$, where $n$ is the number of pixels. Under ideal conditions, the raw data $y \in \mathbb{R}^m$ from the scanner is a linear map³ of $u$, motivating the likelihood
$$y = Xu + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I).$$
Here, each row of $X$ is a single Fourier filter, determined by the sequence. In the context of this paper, the problem of experimental design is how to choose $X$ within a space of technically feasible sequences, so that $u$ can be best recovered given $y$. As motivated in Section 1, we need to specify a prior $P(u)$ which represents low-level statistics of (MR) images, distinctly super-Gaussian distributions: a Gaussian prior would not be a sensible choice. We use the one proposed in [12]. The posterior has the form
$$P(u\,|\,y) \propto N(y\,|\,Xu, \sigma^2 I) \prod_{j=1}^{q} e^{-\tilde{\tau}_j |s_j|}, \qquad s = Bu,\quad \tilde{\tau}_j = \tau_j/\sigma, \qquad (1)$$
the prior being a product of Laplacians on linear projections $s_j$ of $u$, among them the image gradient and wavelet coefficients. The Laplace distribution encourages sparsity of $s$. Further details are given in [12]. MVMs with $B$ cost $O(q)$, with $q \approx 3n$. MAP estimation for the same model was used in [8].
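For concreteness, the (unnormalized) negative log of (1) can be evaluated as in the sketch below; dense X and B are used purely for illustration, whereas in the application both are only ever applied as fast operators (FFT/gridding and wavelet/gradient MVMs). MAP estimation [8] amounts to minimizing this function over u.

import numpy as np

def neg_log_posterior(u, X, y, B, tau, sigma):
    # ||y - X u||^2 / (2 sigma^2) + sum_j (tau_j / sigma) |s_j|,  s = B u
    r = y - X @ u
    s = B @ u
    return float(r @ r / (2.0 * sigma ** 2) + (tau / sigma) @ np.abs(s))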
Bayesian inference for (1) is analytically not tractable, and an efficient deterministic approximation is discussed in Section 3. In the variant of Bayesian sequential experimental design used here, an extension of $X$ by $X_* \in \mathbb{R}^{d,n}$ is scored by the entropy difference
$$\Delta(X_*) := \mathrm{H}[P(u\,|\,y)] - \mathrm{E}_{P(y_*|y)}\big[\mathrm{H}[P(u\,|\,y, y_*)]\big], \qquad (2)$$
where $P(u\,|\,y, y_*)$ is the posterior after including $(X_*, y_*)$. This criterion measures the decrease in uncertainty about $u$, averaged over the posterior $P(y_*\,|\,y)$. Our approach is sequential: a sequence is combined from parts, each extension being chosen by maximizing the entropy difference over a candidate set $\{X_*\}$. After each extension, a new scanner measurement is obtained for the single extended sequence only. Our Bayesian predictive approach allows us to score many candidates $(X_*, y_*)$ without performing costly MR measurements for them. The sequential restriction makes sense for several reasons. First, MR sequences naturally decompose in a sequential fashion: they describe a discontinuous path of several smooth trajectories (see Section 4). Also, a non-sequential approach would never make use of any real measurements, relying much more on the correctness of the model. Finally, the computational complexity of optimizing over complete sequences is staggering. Our sequential approach also seems better suited for dynamic MRI applications.
² The model we use has $q = 196096$ potentials and $n = 65536$ latent variables. Any algorithm that iterates over single potentials has to solve at least $q$ linear systems of size $n$, while our method often converges after solving less than 50 of these.
³ Phase contributions in $u$ are discussed in Section 4.
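Under the Gaussian posterior approximation developed in Section 3, the score (2) reduces to a log-determinant (the exact form is derived in Section 3.1). The following sketch computes it with dense matrices for illustration only; in practice the posterior covariance is never formed explicitly.

import numpy as np

def entropy_difference_score(X_star, Sigma):
    # Design score for a candidate extension X_star, in the Gaussian
    # approximation: log |I + X_star Sigma X_star^T| (cf. Section 3.1).
    d = X_star.shape[0]
    M = np.eye(d) + X_star @ Sigma @ X_star.T
    L = np.linalg.cholesky(M)                 # stable log-determinant
    return 2.0 * float(np.sum(np.log(np.diag(L))))

In the sequential loop, each candidate in {X_*} is scored this way, and the maximizer is appended to the sequence before the next scanner measurement is taken.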
3 Scalable Approximate Inference
In this section, we propose a novel scalable algorithm for the variational inference approximation proposed in [3]. We make use of ideas presented in [16]. First,
$$e^{-\tilde{\tau}_j |s_j|} = \max_{\gamma_j > 0}\, e^{-\gamma_j s_j^2/(2\sigma^2)}\, e^{-(\tau_j^2/2)\,\gamma_j^{-1}},$$
using Legendre duality (the Laplace site is log-convex in $s_j^2$) [3]. Let $\gamma = (\gamma_j)$ and $\Gamma = \mathrm{diag}\,\gamma$. To simplify the derivation, assume that $B^T \Gamma B$ is invertible,⁴ and let $Q(u) \propto \exp(-u^T B^T \Gamma B u/(2\sigma^2))$, $Q(y, u) := P(y\,|\,u)\, Q(u)$. The joint distribution is Gaussian, and
$$Q(u\,|\,y) = N(u\,|\,h, \sigma^2 \Sigma), \qquad \Sigma^{-1} = A := X^T X + B^T \Gamma B, \quad h = \Sigma X^T y. \qquad (3)$$
We have that $P(y) \ge e^{-\frac{1}{2}(\tau^2)^T(\gamma^{-1})}\, |B^T \Gamma B/(2\pi\sigma^2)|^{-1/2} \int P(y\,|\,u)\, Q(u)\, du$, and
$$\int P(y\,|\,u)\, Q(u)\, du = |2\pi\sigma^2 \Sigma|^{1/2} \max_u Q(u\,|\,y)\, Q(y) = |2\pi\sigma^2 \Sigma|^{1/2} \max_u P(y\,|\,u)\, Q(u),$$
where the maximum is attained at $u = h$. Therefore, $P(y) \ge C_1(\sigma^2)\, e^{-\phi(\gamma)/2}$ with
$$\phi(\gamma) := \log|A| + (\tau^2)^T(\gamma^{-1}) + \min_u\left[\sigma^{-2}\|y - Xu\|^2 + \sigma^{-2} s^T \Gamma s\right], \qquad s = Bu,$$
and the bound is tightened by minimizing $\phi(\gamma)$. Now, $g(\gamma) := \log|A|$ is concave, so we can use another Legendre duality, $g(\gamma) = \min_{z \succeq 0} z^T \gamma - g^*(z)$, to obtain an upper bound $\phi_z(\gamma) = \min_u \phi_z(u, \gamma) \ge \phi(\gamma)$. In the outer loop steps of our algorithm, we need to find the minimizer $z \in \mathbb{R}_+^q$; the inner loop consists of minimizing the upper bound w.r.t. $\gamma$ for fixed $z$. Introducing $\pi := \gamma^{-1}$, we find that $(u, \pi) \mapsto \phi_z(u, \pi^{-1})$ is jointly convex, which follows just as in [16], and because $z^T(\pi^{-1})$ is convex (all $z_j \ge 0$).
Minimizing over $\gamma$ gives the convex problem
$$\min_u\ \sigma^{-2}\|y - Xu\|^2 + 2\sum_{j=1}^{q} \tau_j \sqrt{p_j}, \qquad p_j := z_j + \sigma^{-2} s_j^2,\quad s = Bu, \qquad (4)$$
which is of standard form and can be solved very efficiently by the iteratively reweighted least squares (IRLS) algorithm, a special case of Newton-Raphson. In every iteration, we have to solve $(X^T X + B^T (\mathrm{diag}\, e) B)\, d = r$, where $r$, $e$ are simple functions of $u$. We use the linear conjugate gradients (LCG) algorithm [4], requiring an MVM with $X$, $X^T$, $B$, and $B^T$ per iteration. The line search along the Newton direction $d$ can be done in $O(q)$; no further MVMs are required. In our experiments, IRLS converged rapidly. At convergence, $\gamma'_j = \tau_j (p'_j)^{-1/2}$, $p' = p(u')$.
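A sketch of the IRLS inner loop follows, with dense X and B standing in for the fast operators and the line search omitted; the weights e are derived from the gradient of (4), and each Newton system is solved by conjugate gradients as described.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def irls_inner(X, B, y, tau, z, sigma, u0, newton_steps=15):
    # IRLS solver for the inner problem (4), for fixed z (illustrative only).
    n = X.shape[1]
    u = u0.copy()
    for _ in range(newton_steps):
        s = B @ u
        p = z + (s / sigma) ** 2             # p_j = z_j + s_j^2 / sigma^2
        e = tau / (np.sqrt(p) * sigma ** 2)  # reweighting from the gradient of (4)
        grad = X.T @ (X @ u - y) / sigma ** 2 + B.T @ (e * s)
        H = LinearOperator((n, n), matvec=lambda v: X.T @ (X @ v) / sigma ** 2
                                                    + B.T @ (e * (B @ v)))
        d, _ = cg(H, -grad, maxiter=50)      # LCG solve of the Newton system
        u = u + d                            # full step equals the classical IRLS update
    return u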
For updating $z \to z'$ given $\gamma$, note that $\gamma^T z' - g(\gamma) = g^*(z') = \min_{\tilde{\gamma}} \tilde{\gamma}^T z' - g(\tilde{\gamma})$, so that $0 = \nabla_\gamma\big[\gamma^T z' - g(\gamma)\big] = z' - \nabla_\gamma g(\gamma)$, and
$$z' = \mathrm{diag}^{-1}\big(B A^{-1} B^T\big) = \sigma^{-2}\big(\mathrm{Var}_Q[s_j \,|\, y]\big)_j. \qquad (5)$$
$z'$ cannot be computed by a few LCG runs. Since $A$ has no sparse graphical structure, we cannot use belief propagation either. However, the Lanczos algorithm can be used to estimate $z'$ [10]. This algorithm is also essential for scoring many candidates in each design step of our method (see Section 3.1).
Our algorithm iterates between updates of $z$ (outer loop steps) and inner loop convex optimization of $(u, \gamma)$. We show in [13] that $\min_\gamma \phi(\gamma)$ is a convex problem whenever all model sites are log-concave (as is the case for Laplacians), a finding which is novel to the best of our knowledge.
⁴ The end result is valid for singular $B^T \Gamma B$, by a continuity argument.
Once converged to the global optimum of $\phi(\gamma)$, the posterior is approximated by $Q(\cdot\,|\,y)$ of (3), whose mean is given by $u$. The main idea is to decouple $\phi(\gamma)$ by upper bounding the critical term $\log|A|$. If the $z$ updates are done exactly, the algorithm is globally convergent [16]. Our algorithm is inspired by [16], where a different problem is addressed. Their method produces very sparse solutions of $Xu \approx y$, while our focus is on close approximate inference, especially w.r.t. the posterior covariance matrix. It was found in [12] that aggressive sparsification, notwithstanding being computationally convenient, hurts experimental design (and even reconstruction) for natural images. Their update of $z$ requires (5) as well, but can be done more cheaply, since most $\gamma_j = +\infty$, and $A$ can be replaced by a much smaller matrix. Finally, note that MAP estimation [8] is solving (4) once for $z = 0$, so it can be seen as a special case of our method.
3.1 Lanczos Algorithm. Efficient Design
The Lanczos algorithm [4] is typically used to find extremal eigenvectors of large, positive definite matrices $A$. Requiring an MVM with $A$ in each iteration, it produces $Q^T A Q = T \in \mathbb{R}^{k,k}$ after $k$ iterations, where $Q^T Q = I$ and $T$ is tridiagonal. Lanczos estimates of expressions linear in $\Sigma = A^{-1}$ are obtained by plugging in the low-rank approximation $Q T^{-1} Q^T \approx \Sigma$ [10]. In our case, $z^{(k)} := \mathrm{diag}^{-1}(B Q T^{-1} Q^T B^T) \approx z'$, $L^{(k)} := \log|T| \approx g(\gamma)$. We also use Lanczos to compute entropy difference scores, approximating (2) by using $Q(u\,|\,y)$ instead of $P(u\,|\,y)$, and $Q'(u\,|\,y) \propto Q(u\,|\,y) P(y_*\,|\,u)$ instead of $P(u\,|\,y, y_*)$, with $\gamma' = \gamma$. The expectation over $P(y_*\,|\,y)$ need not be done then, and
$$\Delta(X_*) \approx -\log|A| + \log\big|A + X_*^T X_*\big| = \log\big|I + X_* \Sigma X_*^T\big|.$$
For $n_c$ candidates of $d$ rows each, computing scores would need $d \cdot n_c$ LCG runs, which is not feasible. Using the Lanczos approximation of $\Sigma$, we need $k$ MVMs with $X_*$ for each candidate, then $n_c$ Cholesky decompositions of $\min\{k, d\} \times \min\{k, d\}$ matrices. Both computations can readily be parallelized, as is done in our implementation. Note that we can compute $\partial \Delta(X_*)/\partial \theta$ for $X_* = X_*(\theta)$, if $\partial X_*/\partial \theta$ is known, so that gradient-based score optimization can be used.
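A sketch of the Lanczos estimate z^(k) of (5) follows, using full reorthogonalization for clarity (production codes are considerably more careful, as discussed below); dense B is again for illustration only.

import numpy as np

def lanczos_variances(Amv, B, n, k, rng):
    # Lanczos sketch of z' = diag(B A^{-1} B^T), cf. Eq. (5).
    # Amv: function computing A @ v for the implicit matrix A.
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    for j in range(k):
        Q[:, j] = q
        w = Amv(q)
        alpha[j] = q @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalization
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:                       # invariant subspace found early
            Q, alpha, beta = Q[:, :j + 1], alpha[:j + 1], beta[:j + 1]
            break
        q = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    V = B @ Q
    # z^(k) = diag(V T^{-1} V^T), a monotone underestimate of z'
    return np.einsum('ij,ij->i', V @ np.linalg.inv(T), V)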
The basic recurrence of the Lanczos method is treacherously simple. The loss of orthogonality in $Q$ has to be countered, thus typical Lanczos codes are intricate. $Q$ has to be maintained in memory. The matrices $A$ we encounter here have an almost linearly decaying spectrum, so standard Lanczos codes, designed for geometrically decaying spectra, have to be modified. Our $A$ have no close low-rank approximations, and eigenvalues from both ends of the spectrum converge rapidly in Lanczos. Therefore, our estimate $z^{(k)}$ is not very close to the true $z'$ even for quite large $k$. However, $z^{(k)} \preceq z'$, since $z_{k-1,j} \le z_{k,j}$ for all $j$. Since the sparsity penalty on $s_j$ in (4) is stronger for smaller $z_j$, underestimates from the Lanczos algorithm entail more sparsity (although still $z_{k,j} > 0$). In practice, a smaller $k$ often leads to somewhat better results, besides running much faster. While the global convergence proof for our algorithm hinges on exact updates of $z$, which cannot be done to the best of our knowledge, the empirical success of Section 4 may be due to this observation, noting that natural image statistics are typically more super-Gaussian than the Laplacian. In conclusion, approximate inference requires the computation of marginal variances, which for general models cannot be approximated closely with generic techniques. In the context of sparse linear models, it seems to be sufficient to estimate the dominating covariance eigendirections, for which the Lanczos algorithm with a moderate number of steps can be used. More generally, the Lanczos method is a powerful tool for approximate inference in Gaussian models, an insight which does not seem to be widely known in machine learning.
4 Experiments
We start with some MRI terminology. An MR scanner acquires Fourier coefficients Y (k) at spatial
frequencies5 k (the 2d Fourier domain is called k-space), along smooth trajectories k(t) determined
by magnetic field gradients g(t). The control flow is called sequence. Its cost is determined by how
long it takes to obtain a complete image, depending on the number of trajectories and their shapes.
Gradient amplitude and slew rate constraints enforce smooth trajectories. In Cartesian sampling,
trajectories are parallel equispaced lines in k-space, so the FFT can be used for image reconstruction. Spiral sampling offers a better coverage of k-space for given gradient power, leading to faster
5 Both k and spatial locations r are seen as ∈ R² or ∈ C.
acquisition. It is often used for dynamic studies, such as cardiac imaging and fMRI. A trajectory
k(t) leads to data y = X_k u, where X_k = [e^{−i2π r_j^T k(t_ℓ)}]_{ℓj}. We use gridding interpolation6 with a
Kaiser-Bessel kernel [1, ch. 13.2] to approximate the multiplication with X_k, which would be too
expensive otherwise. As for other reconstruction methods, most of our running time is spent in the
gridding (MVMs with X, X^T, and X∗).
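For intuition, the action of X_k can be written out densely on a tiny grid (a toy stand-in of ours; at realistic sizes one must use the gridding/NFFT approximation instead, and the spiral below is an arbitrary illustrative trajectory):

import numpy as np

def forward_operator(r, k_traj):
    """Dense X_k with entries exp(-i 2 pi r_j^T k(t_l)): rows index sample times
    t_l, columns index pixel locations r_j. Feasible only for tiny images."""
    return np.exp(-2j * np.pi * (k_traj @ r.T))

m = 16                                                      # tiny m x m image
xs = (np.arange(m) - m / 2) / m
r = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)   # n = m^2 pixel locations
t = np.linspace(0.0, 1.0, 300)
k_traj = 4.0 * np.stack([t * np.cos(14 * t), t * np.sin(14 * t)], axis=1)  # toy spiral
X = forward_operator(r, k_traj)               # 300 x 256 measurement matrix
u = np.zeros(m * m)
u[(m // 2) * m + m // 2] = 1.0                # point object
y = X @ u                                     # simulated k-space data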
For our experiments, we acquired data on an equispaced grid.7 In theory, the image u is real-valued;
in reality, due to resonance frequency offsets, magnetic field inhomogeneities, and eddy currents [1,
ch. 13.4], the reconstruction contains a phase φ(r). It is common practice to discard φ after reconstruction.
Short of modelling a complex-valued u, we correct for low-frequency phase contributions by
a cheap pre-measurement.8 Note that |utrue|, against which reconstructions are judged below, is not
altered by this correction. From the
corrected raw data, we simulate all further measurements under different sequences using gridding interpolation. While no noise is added to these measurements, there remain significant high-frequency erroneous phase contributions in utrue.

Figure 1: MR signal acquisition: r-space and k-space representation of the signal on a rectangular grid as well as the trajectory obtained by means of magnetic field gradients. (Panels: r-space U(r); k-space Y(k); gradients g(t), with gx and gy in [mT/m] against t in [ms].)
Interleaved outgoing Archimedean spirals employ trajectories k(t) ∝ φ(t) e^{i2π[φ(t)+φ0]}, φ(0) = 0,
where the gradient g(t) ∝ dk/dt grows to maximum strength at the slew rate, then stays there [1,
ch. 17.6]. Sampling along an interleave respects the Nyquist limit. The number of revolutions Nr
and interleaves Nshot determine the radial spacing. The scan time is proportional to Nshot. In our
setup, Nr = 8, resulting in 3216 complex samples per interleave. For equispaced offset angles φ0,
the Nyquist spiral (respecting the limit radially) has Nshot = 16. Our goal is to design spiral sequences with smaller Nshot, reducing scan time by a factor 16/Nshot. We use the sequential method
described in Section 2, where {X∗ ∈ R^{d×n}} is a set of potential interleaves, d = 6432. The image
resolution is 256 × 256, so n = 65536. Since utrue is approximately real-valued, measurements at
k and −k are quite redundant, which is why we restrict9 ourselves to offset angles φ0 ∈ [0, π). We
score candidates (π/256)[0 : 255] in each round, comparing to equispaced placements jπ/Nshot,
and to drawing φ0 uniformly at random. For the former, favoured by MRI practitioners right now,
the maximum k-space distance between samples is minimized, while the latter is aligned with compressed sensing recommendations [8].
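A toy sketch of how such a candidate set can be generated (our own simplified angle schedule; the slew-rate-limited startup of real gradient waveforms is ignored, and phi0 is applied exactly as in the formula above):

import numpy as np

def spiral_interleave(phi0, Nr=8, n_samp=3216, k_max=0.5):
    """Archimedean interleave k(t) ~ phi(t) exp(i 2 pi (phi(t) + phi0)), phi(0) = 0,
    with Nr revolutions out to radius k_max; a crude linear angle schedule."""
    phi = Nr * np.linspace(0.0, 1.0, n_samp)          # angle in revolutions
    k = (k_max * phi / Nr) * np.exp(2j * np.pi * (phi + phi0))
    return np.stack([k.real, k.imag], axis=1)

candidates = [spiral_interleave(j * np.pi / 256) for j in range(256)]   # scored set
equispaced = [spiral_interleave(j * np.pi / 8) for j in range(8)]       # Nshot = 8 baseline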
For a given sequence, we consider different image reconstructions: the posterior mode (convex MAP
estimation) [8], linear least squares (LS; linear conjugate gradients), and zero filling with density
compensation (ZFDC; based on a Voronoi diagram) [1, ch. 13.2.4]. The latter requires a single MVM
with X^T only, and is most commonly used in practice. We selected the prior's scale parameters (there
are two of them, as in [12]) optimally for the Nyquist spiral Xnyq, and set σ² to the variance of
Xnyq(utrue − |utrue|). We worked on two slices (8, 12) and used 750 Lanczos iterations in our
method.10 We report L2 distances between reconstruction and true image |utrue|. Results are given
in Table 3, and some reconstructions (slice 8) are shown in Figure 2.
6 NFFT: http://www-user.tu-chemnitz.de/~potts/nfft/
7 Field of view (FOV) 260mm (256 × 256 voxels, 1mm²), 16 brain slices with a turbo-spin sequence, 23
echoes per excitation. Train of 120° refocusing pulses, each phase encoded differently. Slices are 4mm thick.
8 We sample the center of k-space on a p × p Cartesian grid, obtaining a low-resolution reconstruction
by FFT, whose phase φ̂ we use to correct the raw data. We tried p ∈ {16, 32, 64} (larger p means better
correction); results below are for p = 32 only. While reconstruction errors generally decrease somewhat with
larger p, the relative differences between all settings below are insensitive to p.
9 Dropping this restriction disfavours equispaced {φ0} setups with even Nshot.
10 This seems small, given that n = 65536. We also tried 1250 iterations, which needed more memory, ran
almost twice as long, and gave slightly worse results (see end of Section 3.1).
Figure 2: Reconstruction results. Differences to true image (a; scale [0, 1]) in (b-f), scale [−0.1, 0.1].
Panels: (a) slice 8; (b) MAP-op, Nshot = 7, E = 3.95; (c) MAP-eq, Nshot = 7, E = 4.40;
(d) MAP-rd, Nshot = 7, E = 12.08; (e) MAP-eq, Nshot = 8, E = 2.84; (f) ZFDC-eq, Nshot = 8, E = 6.20.
Table left (L2 errors, slices 8 and 12):
Nshot  img |  MAPop  MAPrd          MAPeq |  LSop   LSrd           LSeq  |  ZFDCop  ZFDCrd         ZFDCeq
  5     8  |  12.99  16.01 ± 2.49   14.18 |  17.23  19.97 ± 1.33   16.80 |  25.13   38.04 ± 6.14   23.51
  6     8  |   8.31  12.46 ± 2.46   10.06 |  12.67  16.24 ± 1.13   13.19 |  18.79   33.29 ± 4.71   18.16
  7     8  |   3.95  11.81 ± 2.71    4.40 |   7.80  13.71 ± 2.25    7.80 |  14.55   33.67 ± 5.90   12.73
  8     8  |   2.94   6.86 ± 2.00    2.84 |   3.77   7.43 ± 2.48    3.31 |  13.08   26.96 ± 4.47    6.20
  5    12  |   8.01  10.17 ± 1.63    9.32 |  12.77  14.95 ± 1.08   12.01 |  20.58   28.88 ± 4.25   19.74
  6    12  |   4.94   7.74 ± 1.75    5.21 |   9.77  11.89 ± 0.95    9.77 |  16.33   25.47 ± 3.15   15.36
  7    12  |   2.84   7.46 ± 1.80    3.18 |   6.40   9.95 ± 1.73    6.18 |  12.34   26.02 ± 3.44   10.62
  8    12  |   2.20   4.60 ± 1.26    2.09 |   3.32   5.33 ± 1.73    2.27 |  10.07   21.47 ± 3.67    4.28

Table upper right (slices 2,4,6,10,12,14 from design of slice 8):
Nshot |  MAPop        MAPeq        |  LSop         LSeq
  5   |  9.01 ± 1.3   10.67 ± 2.1  |  14.70 ± 1.6  14.57 ± 2.1
  6   |  5.43 ± 1.1    6.51 ± 2.1  |  10.80 ± 1.5  10.95 ± 1.8
  7   |  3.00 ± 0.5    3.27 ± 0.8  |   7.08 ± 1.1   6.45 ± 1.4
  8   |  2.42 ± 0.3    2.34 ± 0.3  |   3.16 ± 0.6   2.70 ± 0.6

Table lower right (Nyquist spiral, Nshot = 16):
img |  MAPeq  LSeq
 8  |  2.75   3.31
12  |  1.96   2.27

Figure 3: Results for spiral interleaves on slices 8, 12 (table left). Reconstruction: MAP (posterior mode [8]),
LS (least squares), ZFDC (zero filling, density compensation). Offset angles φ0 ∈ [0, π): op (optimized; our
method), rd (uniformly random; avg. 10 runs), eq (equispaced). Nshot: number of interleaves.
Table upper right: avg. errors for slices 2,4,6,10,14, measured with sequences optimized on slice 8.
Table lower right: results for Nyquist spiral eq [Nshot = 16].
The standard reconstruction method ZFDC is improved upon strongly by LS (both are linear, but LS
is iterative), which in turn is improved upon significantly by MAP. This is true even for the Nyquist
spiral (Nshot = 16). While the strongest errors of ZFDC lie outside the "effective field of view"
(roughly circular for spirals), panel f of Figure 2 shows that ZFDC errors contain important structures
all over the image. Modern implementations of LS and MAP are more expensive than ZFDC by
moderate constant factors. Results such as ours, together with the availability of affordable high-performance digital computation, strongly motivate the transition away from direct signal processing
reconstruction algorithms to modern iterative statistical estimators. Note that ZFDC (and, to a lesser
extent, LS) copes best with equispaced designs, while MAP works best with optimized angles. This
is because the optimized designs leave larger gaps in k-space (see Figure 4). Nonlinear estimators
can interpolate across such gaps to some extent, using image sparsity priors. Methods like ZFDC
merely interpolate locally in k-space, uninformed about image statistics, so that violations of the
Nyquist limit anywhere necessarily translate into errors.
It is clearly evident that drawing the spiral offset angles at random does not work well, even if
MAP reconstruction is used as in [8]. The ratio MAPrd /MAPop in L2 error is 1.23, 1.45, 2.99,
2.33 in Table 3, upper left. While both MAPop and MAPeq essentially attain Nyquist performance
with Nshot = 8, MAPrd does not decrease to that level even with Nshot = 16 (not shown). Our
results strongly suggest that randomizing MR sequences is not a useful design principle.11 Similar
shortcomings of randomly drawn designs were reported in [12], in a more idealized setup. Reasons
why CS theory as yet fails to guide measurement design for real images, are reviewed there, see
also [15]. Beyond the rather bad average performance of random designs, the large variance across
trials in Table 3 means that in practice, a randomized sequence scan is much like a gamble. The
outcome of our Bayesian optimized design is stable, in that sequences found in several repetitions
gave almost identical reconstruction performance.
Figure 4: Spirals found by our algorithm (two panels: slice 8, Nshot = 8, and slice 12, Nshot = 8). The ordering is color-coded: dark spirals selected first.

The closest competitors in Table 3 are MAPop and MAPeq. Since utrue is close to real, both
attain close to Nyquist performance up from Nshot = 8. In the true undersampling regime
Nshot ∈ {5, 6, 7}, MAPop improves significantly12 upon MAPeq. Comparing panels b, c
of Figure 2, the artifact across the lower right leads to distortions in the mouth area. Undersampling
artifacts are generally amplified by regular sampling, which is avoided in the optimized designs.
Breaking up such regular designs seems to be the major role of randomization in CS theory, but our results show that
much is lost in the process. We see that approximate Bayesian experimental design is useful to
optimize measurement architectures for subsequent MAP reconstruction. To our knowledge, no
similar design optimization method based purely on MAP estimation has been proposed (ours needs
approximate inference), rendering the beneficial interplay between our framework and subsequent
MAP estimation all the more interesting. The computational primitives required for MAP estimation and our method are the same. Our implementation requires about 5 hours on a single standard
desktop machine to optimize 11 angles sequentially, 256 candidates per extension, with n and d as
above. The score computations dominate the running time, but can readily be parallelized.
It is neither feasible nor desirable on most current MR scanners to optimize the sequence during the
measurement, so an important question is whether sequences optimized on some slices work better
in general as well (for the same contrast and similar objects). We tested transferability by measuring
five other slices not seen by the optimization method. The results (Table 3, upper right) indicate
that the main improvements are not specific to the object the sequence was optimized for.13 Two
spirals found by our method are shown in Figure 4 (2 of 8 interleaves, Nshot = 8). The spacing
is not equidistant, and as noted above, only nonlinear MAP estimation can successfully interpolate
across resulting larger k-space gaps. On the other hand, the spacing is more regular than is typically
achieved by random sampling.
5 Discussion
We have presented the first scalable Bayesian experimental design framework for automatically
optimizing MRI sequences, a problem of high impact on clinical diagnostics and brain research. The
high demands on image resolution and processing time which come with this application are met in
principle by our novel variational inference algorithm, reducing computations to signal processing
11 Images exhibit a decay in power as a function of spatial frequency (distance to the k-space origin), and the most
evident failure of uniform random sampling is the ignorance of this fact [15]. While this point is noted in [8],
the variable-density weighting suggested there is built in to all designs compared here. Any spiral interleave
samples more closely around the origin. In fact, the sampling density as a function of spatial frequency |k(t)|
does not depend on the offset angles φ0.
12 In another set of experiments (not shown), we compared optimization, randomization, and equispacing of
φ0 ∈ [0, 2π), in disregard of the approximate real-valuedness of utrue. In this setting, equispacing performs
poorly (worse than randomization).
13 However, it is important that the object exhibits realistic natural image statistics. Artificial phantoms of
extremely simple structure, often used in MR sequence design, are not suitable in that respect. Real MR images
are much more complicated than simple phantoms, even in low-level statistics, and results obtained on phantoms
only should not be given overly high attention.
primitives such as FFT and gridding. We demonstrated the power of our approach in a study with
spiral sequences, using raw data from a 3T MR scanner. The sequences found by our method lead to
reconstructions of high quality, even though they are faster than traditionally used Nyquist setups by
a factor up to two. They improve strongly on sequences obtained by blind randomization. Moreover,
across all designs, nonlinear Bayesian MAP estimation was found to be essential for reconstructions
from undersamplings, and our design optimization framework is especially useful for subsequent
MAP reconstruction.
Our results strongly suggest that modifications to standard sequences can be found which produce
similar images at lower cost. Namely, with so many handles to turn in sequence design nowadays,
this is a high-dimensional optimization problem dealing with signals (images) of high complexity,
and human experts can greatly benefit from goal-directed machine exploration. Randomizing parameters of a sequence, as suggested by compressed sensing theory, helps to break wasteful symmetries
in regular standard sequences. As our results show, many of the advantages of regular sequences
are lost by randomization though. The optimization of Bayesian information leads to irregular sequences as well, improving on regular, and especially on randomized designs. Our insights should
be especially valuable in MR applications where a high temporal resolution is essential (such as
fMRI studies), so that dense spatial sampling is not even an option. An extension to 3d volume
reconstruction, making use of non-Gaussian hidden Markov models, is work in progress. Finally,
our framework seems also promising for real-time imaging [1, ch. 11.4], where the scanner allows
for on-line adaptations of the sequence depending on measurement feedback. It could be used to
help an operator homing in on regions of interest, or could even run without human intervention.
We intend to test our proposal directly on an MR scanner, using the sequential setup described in
Section 2. This will come with new problems not addressed in Section 4, such as phase or image
errors that depend on the sequence employed14 (which could be accounted for by a more elaborate
noise model). In our experiments in Section 4, the choice of different offset angles is cost-neutral,
but when a larger set of candidates is used, respective costs have to be quantified in terms of real
scan time, error-proneness, heating due to rapid gradient switching, and other factors.
Acknowledgments
We thank Stefan Kunis for help and support with NFFT.
References
[1] M. A. Bernstein, K. F. King, and X. J. Zhou. Handbook of MRI Pulse Sequences. Elsevier Academic Press,
1st edition, 2004.
[2] A. Garroway, P. Grannell, and P. Mansfield. Image formation in NMR by a selective irradiative pulse. J.
Phys. C: Solid State Phys., 7:L457–L462, 1974.
[3] M. Girolami. A variational method for learning sparse and overcomplete representations. N. Comp.,
13:2517–2532, 2001.
[4] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins University Press, 3rd edition, 1996.
[5] A. Haase, J. Frahm, D. Matthaei, W. Hänicke, and K. Merboldt. FLASH imaging: Rapid NMR imaging
using low flip-angle pulses. J. Magn. Reson., 67:258–266, 1986.
[6] J. Hennig, A. Nauerth, and H. Friedburg. RARE imaging: A fast imaging method for clinical MR. Magn.
Reson. Med., 3(6):823–833, 1986.
[7] P. Lauterbur. Image formation by induced local interactions: Examples employing nuclear magnetic
resonance. Nature, 242:190–191, 1973.
[8] M. Lustig, D. Donoho, and J. Pauly. Sparse MRI: The application of compressed sensing for rapid MR
imaging. Magn. Reson. Med., 58(6):1182–1195, 2007.
[9] P. Mansfield. Multi-planar image formation using NMR spin-echoes. J. Phys. C, 10:L50–L58, 1977.
[10] M. Schneider and A. Willsky. Krylov subspace estimation. SIAM J. Comp., 22(5):1840–1864, 2001.
[11] M. Seeger. Bayesian inference and optimal design for the sparse linear model. JMLR, 9:759–813, 2008.
[12] M. Seeger and H. Nickisch. Compressed sensing and Bayesian experimental design. In ICML 25, 2008.
[13] M. Seeger and H. Nickisch. Large scale variational inference and experimental design for sparse generalized linear models. Technical Report TR-175, Max Planck Institute for Biological Cybernetics, Tübingen,
Germany, September 2008.
[14] M. Tipping and A. Faul. Fast marginal likelihood maximisation for sparse Bayesian models. In AI and
Statistics 9, 2003.
[15] Y. Weiss, H. Chang, and W. Freeman. Learning compressed sensing. Snowbird Learning Workshop,
Allerton, CA, 2007.
[16] D. Wipf and S. Nagarajan. A new view of automatic relevance determination. In NIPS 20, 2008.

14 Some common problems with spirals are discussed in [1, ch. 17.6.3], together with remedies.
2,822 | 3,559 | Logistic Normal Priors for Unsupervised
Probabilistic Grammar Induction
Shay B. Cohen Kevin Gimpel Noah A. Smith
Language Technologies Institute
School of Computer Science
Carnegie Mellon University
{scohen,kgimpel,nasmith}@cs.cmu.edu
Abstract
We explore a new Bayesian model for probabilistic grammars, a family of
distributions over discrete structures that includes hidden Markov models
and probabilistic context-free grammars. Our model extends the correlated
topic model framework to probabilistic grammars, exploiting the logistic
normal distribution as a prior over the grammar parameters. We derive
a variational EM algorithm for that model, and then experiment with the
task of unsupervised grammar induction for natural language dependency
parsing. We show that our model achieves superior results over previous
models that use different priors.
1 Introduction
Unsupervised learning of structured variables in data is a difficult problem that has received
considerable recent attention. In this paper, we consider learning probabilistic grammars,
a class of structure models that includes Markov models, hidden Markov models (HMMs)
and probabilistic context-free grammars (PCFGs). Central to natural language processing (NLP), probabilistic grammars are recursive generative models over discrete graphical
structures, built out of conditional multinomial distributions, that make independence assumptions to permit efficient exact probabilistic inference.
There has been an increased interest in the use of Bayesian methods as applied to probabilistic grammars for NLP, including part-of-speech tagging [10, 20], phrase-structure parsing
[7, 11, 16], and combinations of models [8]. In Bayesian-minded work with probabilistic
grammars, a common thread is the use of a Dirichlet prior for the underlying multinomials,
because as the conjugate prior for the multinomial, it bestows computational feasibility.
The Dirichlet prior can also be used to encourage the desired property of sparsity in the
learned grammar [11].
A related widely known example is the latent Dirichlet allocation (LDA) model for topic
modeling in document collections [5], in which each document's topic distribution is treated
as a hidden variable, as is the topic distribution from which each word is drawn.1 Blei and
Lafferty [4] showed empirical improvements over LDA using a logistic normal distribution
that permits different topics to correlate with each other, resulting in a correlated topic
model (CTM). Here we aim to learn analogous correlations such as: a word that is likely
to take one kind of argument (e.g., singular nouns) may be likely to take others as well
(e.g., plural or proper nouns). By permitting such correlations via the distribution over the
1 A certain variant of LDA can be seen as a Bayesian version of a zero-order HMM, where the
unigram state (topic) distribution is sampled first for each sequence (document).
Figure 1: A graphical model for the logistic normal probabilistic grammar. y is the derivation, x is the observed string. (Plate diagram: per-distribution parameters μ_k, Σ_k, η_k, θ_k inside a plate of size K; the pair (x, y) inside a plate of size N.)
parameters, we hope to break independence assumptions typically made about the behavior
of different part-of-speech tags.
In this paper, we present a model, in the Bayesian setting, which extends CTM for probabilistic grammars. We also derive an inference algorithm for that model, which is ultimately
used to provide a point estimate for the grammar, permitting us to perform fast and exact
inference. This is required if the learned grammar is to be used as a component in an
application.
The rest of the paper is organized as follows. §2 gives a general form for probabilistic
grammars built out of multinomial distributions. §3 describes our model and an efficient
variational inference algorithm. §4 presents a probabilistic context-free dependency grammar often used in unsupervised natural language learning. Experimental results showing
the competitiveness of our method for estimating that grammar are presented in §5.
2 Probabilistic Grammars
A probabilistic grammar defines a probability distribution over a certain kind of structured
object (a derivation of the underlying symbolic grammar) explained through a step-by-step
stochastic process. HMMs, for example, can be understood as a random walk through
a probabilistic finite-state network, with an output symbol sampled at each state. PCFGs
generate phrase-structure trees by recursively rewriting nonterminal symbols as sequences of
?child? symbols (each itself either a nonterminal symbol or a terminal symbol analogous to
the emissions of an HMM). Each step or emission of an HMM and each rewriting operation
of a PCFG is conditionally independent of the others given a single structural element (one
HMM or PCFG state); this Markov property permits efficient inference.
In general, a probabilistic grammar defines the joint probability of a string x and a grammatical derivation y:
p(x, y | θ) = ∏_{k=1}^{K} ∏_{i=1}^{N_k} θ_{k,i}^{f_{k,i}(x,y)} = exp( Σ_{k=1}^{K} Σ_{i=1}^{N_k} f_{k,i}(x, y) log θ_{k,i} )    (1)
where f_{k,i} is a function that "counts" the number of times the kth distribution's ith event
occurs in the derivation. The parameters θ are a collection of K multinomials ⟨θ_1, ..., θ_K⟩,
the kth of which includes N_k events. Note that there may be many derivations y for a given
string x, perhaps even infinitely many in some kinds of grammars. HMMs and vanilla
PCFGs are the best known probabilistic grammars, but there are others. For example, in
§5 we experiment with the "dependency model with valence," a probabilistic grammar for
dependency parsing first proposed in [14].
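A direct transcription of Eq. 1 in code, with toy parameters and counts of our own choosing:

import math

def log_prob(theta, counts):
    """log p(x, y | theta) = sum_k sum_i f_{k,i}(x, y) * log theta_{k,i}, as in Eq. 1.
    theta: list of K multinomials; counts: matching list of event counts."""
    return sum(f * math.log(p)
               for theta_k, f_k in zip(theta, counts)
               for p, f in zip(theta_k, f_k))

theta = [[0.7, 0.3], [0.2, 0.5, 0.3]]   # K = 2 toy multinomials
counts = [[2, 1], [0, 3, 1]]            # toy values of f_{k,i}(x, y) for one derivation
print(math.exp(log_prob(theta, counts)))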
3 Logistic Normal Prior on Probabilistic Grammars
A natural choice for a prior over the parameters of a probabilistic grammar is a Dirichlet
prior. The Dirichlet family is conjugate to the multinomial family, which makes the inference
more elegant and less computationally intensive. In addition, a Dirichlet prior can encourage
sparse solutions, a property which is important with probabilistic grammars [11].
However, in [4], Blei and Lafferty noticed that the Dirichlet distribution is limited in its
expressive power when modeling a corpus of documents, since it is less flexible about capturing relationships between possible topics. To solve this modeling issue, they extended the
LDA model to use a logistic normal distribution [2] yielding correlated topic models. The
logistic normal distribution maps a d-dimensional multivariate Gaussian to a distribution
on the d-dimensional probability simplex, S_d = {⟨z_1, ..., z_d⟩ ∈ R^d : z_i ≥ 0, Σ_{i=1}^{d} z_i = 1}, by
exponentiating the normally-distributed variables and normalizing.
Here we take a step analogous to Blei and Lafferty, aiming to capture correlations between
the grammar's parameters. Our hierarchical generative model, which we call a logistic-normal probabilistic grammar, generates a sentence and derivation tree ⟨x, y⟩ as follows (see
also Fig. 1):

1. Generate η_k ~ N(μ_k, Σ_k) for k = 1, ..., K.
2. Set θ_{k,i} = exp(η_{k,i}) / Σ_{i′=1}^{N_k} exp(η_{k,i′}) for k = 1, ..., K and i = 1, ..., N_k.
3. Generate x and y from p(x, y | θ) (i.e., sample from the probabilistic grammar).
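Steps 1-2 amount to pushing a Gaussian draw through the softmax; a minimal sketch with arbitrary toy dimensions:

import numpy as np

rng = np.random.default_rng(0)
K, N_k = 3, 4                                    # toy sizes
mu = [rng.standard_normal(N_k) for _ in range(K)]
Sigma = [np.eye(N_k) for _ in range(K)]          # correlations would sit off-diagonal

theta = []
for k in range(K):
    eta_k = rng.multivariate_normal(mu[k], Sigma[k])   # step 1
    e = np.exp(eta_k - eta_k.max())                    # numerically stable softmax
    theta.append(e / e.sum())                          # step 2: a point on the simplex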
We now turn to derive a variational inference algorithm for the model.2 Variational Bayesian
inference seeks an approximate posterior function q(η, y) which maximizes a lower bound
(the negated variational free energy) on the log-likelihood [12], a bound which is achieved
using Jensen's inequality:

log p(x | μ, Σ) ≥ Σ_{k=1}^{K} E_q[log p(η_k | μ_k, Σ_k)] + E_q[log p(x, y | θ)] + H(q)    (2)

We make a mean-field assumption, and assume that the posterior has the following form:

q(η, y) = ( ∏_{k=1}^{K} ∏_{i=1}^{N_k} q(η_{k,i} | μ̃_{k,i}, σ̃²_{k,i}) ) × q(y)    (3)

where q(η_{k,i} | μ̃_{k,i}, σ̃²_{k,i}) is a Gaussian N(μ̃_{k,i}, σ̃²_{k,i}).
Unfolding the expectation with respect to q(y) in the second term in Eq. 2, while recalling
that θ is a deterministic function of η, we have that:

E_q[log p(x, y | θ)] = E_{q(η)}[ Σ_{k=1}^{K} Σ_{i=1}^{N_k} ( Σ_y q(y) f_{k,i}(x, y) ) log θ_{k,i} ]
                    = E_{q(η)}[ Σ_{k=1}^{K} Σ_{i=1}^{N_k} f̃_{k,i} ( η_{k,i} − log Σ_{i′=1}^{N_k} exp η_{k,i′} ) ]    (4)

where f̃_{k,i} = Σ_y q(y) f_{k,i}(x, y) is the expected number of occurrences of the ith event in
distribution k, under q(y).3 The logarithm term in Eq. 4 is problematic, so we follow [4] in
approximating it with a first-order Taylor expansion, introducing K more variational parameters ζ̃_1, ..., ζ̃_K:

log Σ_{i′=1}^{N_k} exp η_{k,i′} ≤ log ζ̃_k − 1 + (1/ζ̃_k) Σ_{i′=1}^{N_k} exp η_{k,i′}    (5)

We now have

E_q[log p(x, y | θ)] ≥ E_{q(η)}[ Σ_{k=1}^{K} Σ_{i=1}^{N_k} f̃_{k,i} ( η_{k,i} − log ζ̃_k + 1 − (1/ζ̃_k) Σ_{i′=1}^{N_k} exp η_{k,i′} ) ]
  = Σ_{k=1}^{K} Σ_{i=1}^{N_k} f̃_{k,i} ( μ̃_{k,i} − log ζ̃_k + 1 − (1/ζ̃_k) Σ_{i′=1}^{N_k} exp(μ̃_{k,i′} + σ̃²_{k,i′}/2) )    (6)
  = Σ_{k=1}^{K} Σ_{i=1}^{N_k} f̃_{k,i} ψ̃_{k,i}    (7)
2 We note that variational inference algorithms have been successfully applied to grammar learning tasks, for example, in [16] and [15].
3 With probabilistic grammars, this quantity can be computed using a summing dynamic programming algorithm like the forward-backward or inside-outside algorithm.
Note the shorthand ψ̃_{k,i} to denote an expression involving μ̃, σ̃, and ζ̃.
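As a sketch, the optimal ζ̃_k and the weights ψ̃_{k,i} can be computed from the variational Gaussian parameters as follows (the array layout is our own choice):

import numpy as np

def zeta_and_psi(mu_t, s2_t):
    """For one example: mu_t[k], s2_t[k] are arrays of variational means/variances
    for distribution k. Returns the closed-form zeta_k and the weights
    psi[k][i] = mu - log zeta + 1 - (1/zeta) * sum_i' exp(mu + s2/2)."""
    zeta, psi = [], []
    for m_k, v_k in zip(mu_t, s2_t):
        ex = np.exp(m_k + v_k / 2.0)
        z_k = ex.sum()                # maximizing zeta_k, at which Eq. 5 is tight
        zeta.append(z_k)
        psi.append(m_k - np.log(z_k) + 1.0 - ex.sum() / z_k)
    return zeta, psi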
The final form of our bound is:4

log p(x | μ, Σ) ≥ Σ_{k=1}^{K} E_q[log p(η_k | μ_k, Σ_k)] + Σ_{k=1}^{K} Σ_{i=1}^{N_k} f̃_{k,i} ψ̃_{k,i} + H(q)    (8)
Since we are interested in an EM-style algorithm, we will alternate between finding the maximizing q(η) and the maximizing q(y). Maximization with respect to q(η) is not hard,
because q(η) is parametrized (see Appendix A). The following lemma shows that, fortunately, finding the maximizing q(y), which we did not parametrize originally, is not hard
either:
Lemma 1. Let r(y | x, e^{ψ̃}) denote the conditional distribution over y given x defined as:

r(y | x, e^{ψ̃}) = (1/Z(ψ̃)) ∏_{k=1}^{K} ∏_{i=1}^{N_k} exp( ψ̃_{k,i} f_{k,i}(x, y) )    (9)

where Z(ψ̃) is a normalization constant. Then q(y) = r(y | x, e^{ψ̃}) maximizes the bound in
Eq. 8.
Proof. First note that H(q) = H(q(η | μ̃, σ̃)) + H(q(y)). This means that the terms we are
interested in maximizing from Eq. 8 are the following, after writing down f̃_{k,i} explicitly:

L = argmax_{q(y)} Σ_y q(y) Σ_{k=1}^{K} Σ_{i=1}^{N_k} f_{k,i}(x, y) ψ̃_{k,i} + H(q(y))    (10)

However, note that:

L = argmin_{q(y)} D_KL( q(y) || r(y | x, e^{ψ̃}) )    (11)

where D_KL denotes the KL divergence. To see that, combine the definition of KL divergence
with the fact that Σ_{k=1}^{K} Σ_{i=1}^{N_k} f_{k,i}(x, y) ψ̃_{k,i} − log Z(ψ̃) = log r(y | x, e^{ψ̃}), where log Z(ψ̃)
does not depend on q(y). Eq. 11 is minimized when q = r.
Interestingly, from the above lemma, the minimizing q(y) has the same form as the probabilistic grammar in discussion, only without having sum-to-one constraints on θ (leading
to the required normalization constant). As in classic EM with probabilistic grammars, we
never need to represent q(y) explicitly; we need only f̃, which can be calculated as expected
feature values under r(y | x, e^{ψ̃}) using dynamic programming.
As noted, we are interested in a point estimate of θ. To achieve this, we will use the above
variational method within an EM algorithm that estimates μ and Σ in empirical Bayes
fashion, then estimates θ from μ, the mean of the learned prior. In the E-step, we maximize
the bound with respect to the variational parameters (μ̃, σ̃, ζ̃, f̃) using coordinate
ascent. We optimize each of these separately in turn, cycling through, using appropriate
optimization algorithms for each (conjugate gradient for μ̃, Newton's method for σ̃, a closed
form for ζ̃, and dynamic programming to solve for f̃). In the M-step, we apply maximum
likelihood estimation with respect to μ and Σ given sufficient statistics gathered from the
variational parameters in the E-step. The full algorithm is given in Appendix A.
4 Probabilistic Dependency Grammar Model
Dependency grammar [19] refers to linguistic theories that posit graphical representations
of sentences in which words are vertices and the syntax is a tree. Such grammars can
be context-free or context-sensitive in power, and they can be made probabilistic [9]. Dependency syntax is widely used in information extraction, machine translation, question
4 A tighter bound was proposed in [1], but we follow [4] for simplicity.
Figure 2: An example of a dependency tree (derivation y) for x = ⟨NNP VBD JJ NNP⟩, the tag
sequence of "Patrick spoke little French." NNP denotes a proper noun, VBD a past-tense verb, and
JJ an adjective, following the Penn Treebank conventions.
answering, and other natural language processing applications. Here, we are interested in
unsupervised dependency parsing using the ?dependency model with valence? [14]. The
model is a probabilistic head automaton grammar [3] with a ?split? form that renders inference cubic in the length of the sentence [6].
Let x = ⟨x_1, x_2, ..., x_n⟩ be a sentence (here, as in prior work, represented as a sequence
of part-of-speech tags). x_0 is a special "wall" symbol, $, on the left of every sentence. A
tree y is defined by a pair of functions y_left and y_right (both {0, 1, 2, ..., n} → 2^{{1,2,...,n}})
that map each word to its sets of left and right dependents, respectively. Here, the graph
is constrained to be a projective tree rooted at x_0 = $: each word except $ has a single
parent, and there are no cycles or crossing dependencies. y_left(0) is taken to be empty, and
y_right(0) contains the sentence's single head. Let y(i) denote the subtree rooted at position
i. The probability P(y(i) | x_i, θ) of generating this subtree, given its head word x_i, is defined
recursively:

P(y(i) | x_i, θ) = ∏_{D∈{left,right}} [ θ_s(stop | x_i, D, [y_D(i) = ∅])
    × ∏_{j∈y_D(i)} θ_s(¬stop | x_i, D, first_y(j)) · θ_c(x_j | x_i, D) · P(y(j) | x_j, θ) ]    (12)

where first_y(j) is a predicate defined to be true iff x_j is the closest child (on either side)
to its parent x_i. The probability of the entire tree is given by p(x, y | θ) = P(y(0) | $, θ).
The parameters θ are the multinomial distributions θ_s(· | ·, ·, ·) and θ_c(· | ·, ·). To follow
the general setting of Eq. 1, we index these distributions as θ_1, ..., θ_K. Figure 2 shows a
dependency tree and its probability under this model.
5 Experiments
Data Following the setting in [13], we experimented using part-of-speech sequences from
the Wall Street Journal Penn Treebank [17], stripped of words and punctuation. We follow
standard parsing conventions and train on sections 2–21,5 tune on section 22, and report
final results on section 23.
Evaluation After learning a point estimate θ, we predict y for unseen test data (by parsing
with the probabilistic grammar) and report the fraction of words whose predicted parent
matches the gold standard corpus, known as attachment accuracy. Two parsing methods
were considered: the most probable "Viterbi" parse (argmax_y p(y | x, θ)) and the minimum
Bayes risk (MBR) parse (argmin_y E_{p(y′|x,θ)}[ℓ(y; x, y′)]) with dependency attachment error
as the loss function.
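Attachment accuracy itself is straightforward (gold and predicted parent indices as equal-length sequences; the example values are ours):

def attachment_accuracy(pred_parents, gold_parents):
    """Fraction of words whose predicted parent matches the gold standard."""
    assert len(pred_parents) == len(gold_parents) and gold_parents
    hits = sum(p == g for p, g in zip(pred_parents, gold_parents))
    return hits / len(gold_parents)

print(attachment_accuracy([0, 3, 2], [0, 3, 1]))   # toy: 2 of 3 correct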
Settings Our experiment compares four methods for estimating the probabilistic grammar?s parameters:
EM Maximum likelihood estimate of θ using the EM algorithm to optimize p(x | θ) [14].
EM-MAP Maximum a posteriori estimate of θ using the EM algorithm and a fixed symmetric Dirichlet prior with α > 1 to optimize p(x, θ | α). Tune α to maximize the
likelihood of an unannotated development dataset, using grid search over [1.1, 30].

5 Training in the unsupervised setting for this data set can be expensive, and requires running a
cubic-time dynamic programming algorithm iteratively, so we follow common practice in restricting
the training set (but not development or test sets) to sentences of length ten or fewer words. Short
sentences are also less structurally ambiguous and may therefore be easier to learn from.

VB-Dirichlet Use variational Bayes inference to estimate the posterior distribution p(θ |
x, α), which is a Dirichlet. Tune the symmetric Dirichlet prior's parameter α to
maximize the likelihood of an unannotated development dataset, using grid search
over [0.0001, 30]. Use the mean of the posterior Dirichlet as a point estimate for θ.
VB-EM-Dirichlet Use variational Bayes EM to optimize p(x | α) with respect to α. Use
the mean of the learned Dirichlet as a point estimate for θ (similar to [5]).
VB-EM-Log-Normal Use variational Bayes EM to optimize p(x | μ, Σ) with respect to
μ and Σ. Use the (exponentiated) mean of this Gaussian as a point estimate for θ.
Initialization is known to be important for EM as well as for the other algorithms we
experiment with, since it involves non-convex optimization. We used the successful initializer
from [14], which estimates θ using soft counts on the training data where, in an n-length
sentence, (a) each word is counted as the sentence's head 1/n times, and (b) each word x_i
attaches to x_j proportionally to 1/|i − j|, normalized to a single attachment per word. This
initializer is used with EM, EM-MAP, VB-Dirichlet, and VB-EM-Dirichlet. In the case of
VB-EM-Log-Normal, it is used as an initializer both for μ and inside the E-step. In all
experiments reported here, we run the iterative estimation algorithm until the likelihood of
a held-out, unannotated dataset stops increasing.
For learning with the logistic normal prior, we consider two initializations of the covariance
matrices Σ_k. The first is the N_k × N_k identity matrix. We then tried to bias the solution
by injecting prior knowledge about the part-of-speech tags. Injecting a bias into parameter
estimation of the DMV model has proved to be useful [18]. To do that, we mapped the tag
set (34 tags) to twelve disjoint tag families.6 The covariance matrices for all dependency
distributions were initialized with 1 on the diagonal, 0.5 between tags which belong to
the same family, and 0 otherwise. These results are given in Table 1 with the annotation
"families."
Results Table 1 shows experimental results. We report attachment accuracy on three
subsets of the corpus: sentences of length ≤ 10 (typically reported in prior work and most
similar to the training dataset), length ≤ 20, and the full corpus. The Bayesian methods all
outperform the common baseline (in which we attach each word to the word on its right),
but the logistic normal prior performs considerably better than the other two methods as
well.
The learned covariance matrices were very sparse when using the identity matrix to initialize. The diagonal values showed considerable variation, suggesting the importance of
variance alone. When using the ?tag families? initialization for the covariance, there were
151 elements across the covariance matrices which were not identically 0 (out of more than
1,000), pointing to a learned relationship between parameters. In this case, most covariance
matrices for ?c dependencies were diagonal, while many of the covariance matrices for the
stopping probabilities (?s ) had significant correlations.
6 Conclusion
We have considered a Bayesian model for probabilistic grammars, which is based on the
logistic normal prior. Experimentally, several different approaches for grammar induction
were compared based on different priors. We found that a logistic normal prior outperforms
earlier approaches, presumably because it can capitalize on similarity between part-of-speech
tags, as different tags tend to appear as arguments in similar syntactic contexts. We achieved
state-of-the-art unsupervised dependency parsing results.
6 These are simply coarser tags: adjective, adverb, conjunction, foreign, interjection, noun, number, particle, preposition, pronoun, proper, verb. The coarse tags were chosen manually to fit seven
treebanks in different languages.
                                    attachment accuracy (%)
                                      Viterbi decoding            MBR decoding
                                   |x| ≤ 10  |x| ≤ 20   all    |x| ≤ 10  |x| ≤ 20   all
Attach-Right                         38.4      33.4    31.7      38.4      33.4    31.7
EM                                   45.8      39.1    34.2      46.1      39.9    35.9
EM-MAP, α = 1.1                      45.9      39.5    34.9      46.2      40.6    36.7
VB-Dirichlet, α = 0.25               46.9      40.0    35.7      47.1      41.1    37.6
VB-EM-Dirichlet                      45.9      39.4    34.9      46.1      40.6    36.9
VB-EM-Log-Normal, Σ_k^(0) = I        56.6      43.3    37.4      59.1      45.9    39.9
VB-EM-Log-Normal, families           59.3      45.1    39.0      59.4      45.9    40.5
Table 1: Attachment accuracy of different learning methods on unseen test data from the
Penn Treebank of varying levels of difficulty imposed through a length filter. Attach-Right
attaches each word to the word on its right and the last word to $. EM and EM-MAP with
a Dirichlet prior (α > 1) are reproductions of earlier results [14, 18].
Acknowledgments
The authors would like to thank the anonymous reviewers, John Lafferty, and Matthew
Harrison for their useful feedback and comments. This work was made possible by an IBM
faculty award, NSF grants IIS-0713265 and IIS-0836431 to the third author and computational resources provided by Yahoo.
References
[1] A. Ahmed and E. Xing. On tight approximate inference of the logistic normal topic
admixture model. In Proc. of AISTATS, 2007.
[2] J. Aitchison and S. M. Shen. Logistic-normal distributions: some properties and uses.
Biometrika, 67:261–272, 1980.
[3] H. Alshawi and A. L. Buchsbaum. Head automata and bilingual tiling: Translation
with minimal representations. In Proc. of ACL, 1996.
[4] D. Blei and J. D. Lafferty. Correlated topic models. In Proc. of NIPS, 2006.
[5] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning
Research, 3:993–1022, 2003.
[6] J. Eisner. Bilexical grammars and a cubic-time probabilistic parser. In Proc. of IWPT,
1997.
[7] J. Eisner. Transformational priors over grammars. In Proc. of EMNLP, 2002.
[8] J. R. Finkel, C. D. Manning, and A. Y. Ng. Solving the problem of cascading errors:
Approximate Bayesian inference for linguistic annotation pipelines. In Proc. of EMNLP,
2006.
[9] H. Gaifman. Dependency systems and phrase-structure systems. Information and
Control, 8, 1965.
[10] S. Goldwater and T. L. Griffiths. A fully Bayesian approach to unsupervised part-ofspeech tagging. In Proc. of ACL, 2007.
[11] M. Johnson, T. L. Griffiths, and S. Goldwater. Bayesian inference for PCFGs via
Markov chain Monte Carlo. In Proc. of NAACL, 2007.
[12] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to
variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[13] D. Klein and C. D. Manning. A generative constituent-context model for improved
grammar induction. In Proc. of ACL, 2002.
[14] D. Klein and C. D. Manning. Corpus-based induction of syntactic structure: Models
of dependency and constituency. In Proc. of ACL, 2004.
[15] K. Kurihara and T. Sato. Variational Bayesian grammar induction for natural language.
In Proc. of ICGI, 2006.
[16] P. Liang, S. Petrov, M. Jordan, and D. Klein. The infinite PCFG using hierarchical
Dirichlet processes. In Proc. of EMNLP, 2007.
[17] M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. Building a large annotated
corpus of English: The Penn treebank. Computational Linguistics, 19:313–330, 1993.
[18] N. A. Smith and J. Eisner. Annealing structural bias in multilingual weighted grammar
induction. In Proc. of COLING-ACL, 2006.
[19] L. Tesnière. Éléments de Syntaxe Structurale. Klincksieck, 1959.
[20] K. Toutanova and M. Johnson. A Bayesian LDA-based model for semi-supervised
part-of-speech tagging. In Proc. of NIPS, 2007.
A VB-EM for Logistic-Normal Probabilistic Grammars
The algorithm for variational inference with probabilistic grammars using the logistic normal
prior follows.7 Since the updates for ζ̃_k^{ℓ,(t)} are fast, we perform them after each optimization
routine in the E-step (suppressed for clarity). There are variational parameters for each
training example, indexed by ℓ. We denote by B the variational bound in Eq. 8. Our
stopping criterion relies on the likelihood of a held-out set (§5) using a point estimate of the
model.
Input: initial parameters μ^(0), Σ^(0), training data x, and development data x′
Output: learned parameters μ, Σ
t ← 1;
repeat
    E-step (for each training example ℓ = 1, ..., M): repeat
        optimize for μ̃_k^{ℓ,(t)}, k = 1, ..., K: use conjugate gradient descent with
            ∂B/∂μ̃_{k,i}^ℓ = −[(Σ_k^{(t−1)})^{−1}(μ̃_k^ℓ − μ_k^{(t−1)})]_i + f̃_{k,i}^ℓ − (Σ_{i′=1}^{N_k} f̃_{k,i′}^ℓ / ζ̃_k^ℓ) exp(μ̃_{k,i}^ℓ + (σ̃_{k,i}^ℓ)²/2);
        optimize for (σ̃_k^{ℓ,(t)})², k = 1, ..., K: use Newton's method for each coordinate (with (σ̃_{k,i}^ℓ)² > 0) and
            ∂B/∂(σ̃_{k,i}^ℓ)² = −(Σ_k^{(t−1)})^{−1}_{ii}/2 − (Σ_{i′=1}^{N_k} f̃_{k,i′}^ℓ / ζ̃_k^ℓ) exp(μ̃_{k,i}^ℓ + (σ̃_{k,i}^ℓ)²/2)/2 + 1/(2(σ̃_{k,i}^ℓ)²);
        update ζ̃_k^{ℓ,(t)}, ∀k: ζ̃_k^{ℓ,(t)} ← Σ_{i=1}^{N_k} exp(μ̃_{k,i}^{ℓ,(t)} + (σ̃_{k,i}^{ℓ,(t)})²/2);
        update ψ̃^{ℓ,(t)}, ∀k: ψ̃_{k,i}^{ℓ,(t)} ← μ̃_{k,i}^{ℓ,(t)} − log ζ̃_k^{ℓ,(t)} + 1 − (1/ζ̃_k^{ℓ,(t)}) Σ_{i′=1}^{N_k} exp(μ̃_{k,i′}^{ℓ,(t)} + (σ̃_{k,i′}^{ℓ,(t)})²/2);
        compute expected counts f̃_k^{ℓ,(t)}, ∀k: use an inside-outside algorithm to re-estimate
        the expected counts f̃_{k,i}^{ℓ,(t)} in the weighted grammar q(y) with weights e^{ψ̃^ℓ};
    until B does not change;
    M-step: estimate μ^(t) and Σ^(t) using the following maximum likelihood closed-form solution:
        μ_{k,i}^{(t)} ← (1/M) Σ_{ℓ=1}^{M} μ̃_{k,i}^{ℓ,(t)}
        [Σ_k^{(t)}]_{i,j} ← (1/M) Σ_{ℓ=1}^{M} [ μ̃_{k,i}^{ℓ,(t)} μ̃_{k,j}^{ℓ,(t)} + (σ̃_{k,i}^{ℓ,(t)})² δ_{i,j} ] − μ_{k,i}^{(t)} μ_{k,j}^{(t)}
        where δ_{i,j} = 1 if i = j and 0 otherwise;
    t ← t + 1;
until likelihood of held-out data, p(x′ | E[θ^(t)]), decreases;
return μ^(t), Σ^(t)
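The M-step above has a simple vectorized form; a sketch for a single multinomial k, with per-example variational parameters stacked as arrays (array names are ours):

import numpy as np

def m_step(mu_tilde, s2_tilde):
    """mu_tilde, s2_tilde: (M, N_k) arrays of per-example variational means and
    variances for one multinomial k. Returns the closed-form ML update of
    (mu_k, Sigma_k); the diagonal term carries the variational variances."""
    M = mu_tilde.shape[0]
    mu = mu_tilde.mean(axis=0)
    centered = mu_tilde - mu
    Sigma = centered.T @ centered / M + np.diag(s2_tilde.mean(axis=0))
    return mu, Sigma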
7 An implementation of the algorithm is available at http://www.ark.cs.cmu.edu/DAGEEM.
2,823 | 356 | A Theory for Neural Networks with Time Delays
Jose C. Principe
Department of Electrical Engineering
University of Florida, CSE 444
Gainesville, FL 32611
Bert de Vries
Department of Electrical Engineering
University of Florida, CSE 447
Gainesville, FL 32611
Abstract
We present a new neural network model for processing of temporal
patterns. This model, the gamma neural model, is as general as a
convolution delay model with arbitrary weight kernels w(t). We
show that the gamma model can be formulated as a (partially
prewired) additive model. A temporal hebbian learning rule is
derived and we establish links to related existing models for
temporal processing.
1 INTRODUCTION
In this paper, we are concerned with developing neural nets with short-term memory for
processing of temporal patterns. In the literature, basically two ways have been
reported to incorporate short-term memory in the neural system equations. The first
approach utilizes reverberating (self-recurrent) units of type $\dot{x} = -a\,a(x) + e$, that
hold a trace of the past neural net states x(t) or the input e(t). Elman (1988) and
Jordan (1986) have successfully used this approach. The disadvantage of this method
is the lack of weighting flexibility in the temporal domain, since the system equations
are described by first order dynamics, implementing a recency gradient (exponential
for linear units).
The second approach involves explicit inclusion of delays in the neural system
equations. A general formulation for this type requires a time-dependent weight
matrix W(t). In such a system, multiplicative interactions are substituted by temporal
convolution operations, leading to the following system equations for an additive
convolution model,
$$\dot{x} = \int_0^t W(t-s)\, a(x(s))\, ds + e. \qquad (1)$$
Due to the complexity of general convolution models, only strong simplifications of
the weight kernel have been proposed. Lang et al. (1990) use a delta function kernel,
$$W(t) = \sum_{k=0}^{K} W_k\, \delta(t - t_k),$$
which is the core of the Time-Delay-Neural-Network (TDNN). Tank and Hopfield (1987) prewire W(t) as a weighted sum of dispersive delay kernels,
$$W(t) = \sum_{k=0}^{K} W_k \left(\frac{t}{t_k}\right)^k e^{k(1 - t/t_k)} = \sum_{k=0}^{K} W_k\, h_k(t, t_k).$$
The kernels $h_k(t, t_k)$ are the integrands of the gamma function. Tank and Hopfield described a
one-layer system for classification of isolated words. We will refer to their model as
a Concentration-In-Time-Network (CITN). The system parameters were nonadaptive, although a Hebbian rule equivalent in functional differential equation form
was suggested.
In this paper, we will develop a theory for neural convolution models that are
expressed through a sum of gamma kernels. We will show that such a gamma neural
network can be reformulated as a (Grossberg) additive model. As a consequence, the
substantial learning and stability theory for additive models is directly applicable to
gamma models.
2 THE GAMMA NEURAL MODEL - FORMAL DERIVATION
Consider the N-dimensional convolution model
$$\dot{x} = -ax + W_0 y + \int_0^t ds\, W(t-s)\, y(s) + e, \qquad (2)$$
where x(t), y(t) = a(x) and e(t) are N-dimensional signals; $W_0$ is $N \times N$ and W(t) is
$N \times N \times [0, \infty]$. The weight matrix $W_0$ communicates the direct neural interactions,
whereas W(t) holds the weights for the delayed neural interactions. We will now
assume that W(t) can be written as a linear combination of normalized gamma
kernels, that is,
$$W(t) = \sum_{k=1}^{K} W_k\, g_k(t), \qquad (3)$$
where
$$g_k(t) = \frac{\mu^k}{(k-1)!}\, t^{k-1} e^{-\mu t}, \qquad (4)$$
where $\mu$ is a decay parameter and k a (lag) order parameter. If W(t) decays
exponentially to zero for $t \to \infty$, then it follows from the completeness of Laguerre
polynomials that this approximation can be made arbitrarily close (Cohen et al.,
1979). In other words, for all physically plausible weight kernels there is a K such that
W(t) can be expressed as (3), (4). The following properties hold for the gamma
kernels $g_k(t)$:
- [1] The gamma kernels are related by a set of linear homogeneous ODEs,
$$\frac{dg_1}{dt} = -\mu g_1, \qquad \frac{dg_k}{dt} = -\mu g_k + \mu g_{k-1} \quad (k \ge 2). \qquad (5)$$
- [2] The peak value ($\frac{dg_k}{dt} = 0$) occurs at $t_p = \frac{k-1}{\mu}$.
- [3] The area of the gamma kernels is normalized, that is, $\int_0^\infty g_k(s)\, ds = 1$.
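As a quick numerical illustration of properties [2] and [3], the following Python sketch (our own; the grid, mu = 0.5, and K = 5 are illustrative choices, not values from the paper) evaluates the kernels of (4) and checks the unit area and the peak location (k-1)/mu.

```python
import numpy as np
from math import factorial

def gamma_kernel(t, k, mu):
    """Normalized gamma kernel of eq. (4): g_k(t) = mu^k t^(k-1) e^(-mu t) / (k-1)!."""
    return mu**k * t**(k - 1) * np.exp(-mu * t) / factorial(k - 1)

t = np.linspace(0.0, 60.0, 60001)
mu = 0.5
for k in range(1, 6):
    g = gamma_kernel(t, k, mu)
    area = np.trapz(g, t)        # property [3]: should be close to 1
    t_peak = t[np.argmax(g)]     # property [2]: should equal (k - 1) / mu
    print(k, round(area, 3), t_peak, (k - 1) / mu)
```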
Substitution of (3) into (2) yields
$$\frac{dx}{dt} = -ax + \sum_{k=0}^{K} W_k y_k + e, \qquad (6)$$
where we defined $y_0(t) = y(t)$ and the gamma state variables
$$y_k(t) = \int_0^t ds\, g_k(t-s)\, y_0(s), \qquad k = 1, \ldots, K. \qquad (7)$$
The gamma state variables hold memory traces of the neural states $y_0(t)$. The
important question is how to compute $y_k(t)$. Differentiating (7) using Leibniz' rule
yields
$$\frac{dy_k}{dt} = \int_0^t \dot{g}_k(t-s)\, y(s)\, ds + g_k(0)\, y(t). \qquad (8)$$
We now utilize gamma kernel property [1] (eq. (5)) to obtain (with the convention $g_0 \equiv 0$)
$$\frac{dy_k}{dt} = -\mu y_k + \mu \int_0^t g_{k-1}(t-s)\, y(s)\, ds + g_k(0)\, y(t). \qquad (9)$$
Note that since $g_k(0) = 0$ for $k \ge 2$ and $g_1(0) = \mu$, (9) evaluates to
$$\frac{dy_k}{dt} = -\mu y_k + \mu y_{k-1}, \qquad k = 1, \ldots, K. \qquad (10)$$
The gamma model is described by (6) and (10). This extended set of ordinary
differential equations (ODEs) is equivalent to the convolution model, described by
the set of functional differential equations (2), (3) and (4).
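To make the dynamics of (6) and (10) concrete, here is a minimal forward-Euler sketch (ours, not from the paper) of how a pulse propagates through the gamma state cascade; the step size and parameter values are illustrative.

```python
import numpy as np

def gamma_states(y0, mu, K, dt):
    """Integrate dy_k/dt = -mu*y_k + mu*y_{k-1} (eq. 10) with forward Euler.

    y0 is the sampled input signal y_0(t); returns y_1..y_K over time."""
    Y = np.zeros((len(y0), K))
    for n in range(1, len(y0)):
        prev = np.concatenate(([y0[n - 1]], Y[n - 1, :-1]))  # the y_{k-1} terms
        Y[n] = Y[n - 1] + dt * mu * (prev - Y[n - 1])
    return Y

y0 = np.zeros(4000)
y0[:50] = 1.0                                # a short input pulse
Y = gamma_states(y0, mu=2.0, K=4, dt=0.01)
# each successive y_k holds a progressively delayed, dispersed trace of y_0
```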
It is a valid question to ask whether the system of ODEs that describes the gamma
model can still be expressed as a neural network model. The answer is affirmative,
since the gamma model can be formulated as a regular (Grossberg) additive model.
To see this, define the N(K+1)-dimensional augmented state vector $X = (x, y_1, \ldots, y_K)^T$, the
neural output signal $Y = (a(x), y_1, \ldots, y_K)^T$, an external input $E = (e, 0, \ldots, 0)^T$, a diagonal
matrix of decay parameters $M = \mathrm{diag}(a, \mu, \ldots, \mu)$, and the weight (super)matrix
$$\Omega = \begin{pmatrix} W_0 & W_1 & \cdots & W_{K-1} & W_K \\ \mu I & 0 & \cdots & 0 & 0 \\ 0 & \mu I & \ddots & & \vdots \\ \vdots & & \ddots & 0 & 0 \\ 0 & \cdots & 0 & \mu I & 0 \end{pmatrix}.$$
Then the gamma model can be rewritten in the following form,
$$\frac{dX}{dt} = -MX + \Omega Y + E, \qquad (11)$$
the familiar Grossberg additive model.
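A small sketch (our own NumPy illustration; the helper name and layout are ours) of how M and Omega in (11) can be assembled from the blocks $W_0, \ldots, W_K$.

```python
import numpy as np

def additive_form(W, a, mu):
    """Build M and Omega of eq. (11) from a list W = [W_0, ..., W_K] of N x N blocks."""
    K, N = len(W) - 1, W[0].shape[0]
    M = np.kron(np.diag([a] + [mu] * K), np.eye(N))   # diag(a, mu, ..., mu) blockwise
    Omega = np.zeros((N * (K + 1), N * (K + 1)))
    Omega[:N, :] = np.hstack(W)                        # first block row: W_0 ... W_K
    for k in range(1, K + 1):                          # mu*I couples y_k to y_{k-1}
        Omega[k * N:(k + 1) * N, (k - 1) * N:k * N] = mu * np.eye(N)
    return M, Omega

M, Omega = additive_form([np.eye(2) * 0.1 for _ in range(4)], a=1.0, mu=0.5)
```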
3 HEBBIAN LEARNING IN THE GAMMA MODEL
The additive model formulation of the gamma model allows a direct generalization
of learning techniques to the gamma model. Note however that the augmented state
vector X contains the gamma state variables $y_1, \ldots, y_K$, basically (dispersively)
delayed neural states. As a result, although associative learning rules for
conventional additive models only encode the simultaneous correlation of neural
states, the gamma learning rules are able to encode temporal associations as well.
Here we present Hebbian learning for the gamma model.
The Hebbian postulate is often mathematically translated to a learning rule of the
form
dd~ =
11x (1) yT (t) , where 11 is a learning rate constant, x the neural activation
vector and yT the neuron output signal vector. This procedure is not likely to encode
temporal order, since information about past states is not incorporated in the learning
equations.
Tank and Hopfield (1987) proposed a generalized Hebbian learning rule with delays
that can be written as
$$\frac{dW}{dt} = \eta\, x(t) \int_0^t g(s)\, y^T(t-s)\, ds, \qquad (12)$$
where g(s) is a normalized delay kernel. Notice that (12) is a functional differential
equation, for which explicit solutions and convergence criteria are not known (for
most implementations of g(s)). In the gamma model, the signals $\int_0^t ds\, g_k(s)\, y(t-s)$
are computed by the system and locally available as $y_k(t)$ at the synaptic junctions
$W_k$. Thus, in the gamma model, (12) reduces to
$$\frac{dW_k}{dt} = \eta\, x(t)\, y_k^T(t). \qquad (13)$$
This learning rule encodes simultaneous correlations (for k = 0) as well as temporal
associations (for $k \ge 1$). Since the gamma Hebb rule is structurally similar to the
conventional Hebb rule, it is also local both in time and space.
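The locality of (13) is easy to see in a sketch; the following Euler-step update (ours, with hypothetical shapes) touches only the pre-synaptic gamma state y_k and the post-synaptic activation x.

```python
import numpy as np

def gamma_hebb_step(W, x, y, eta, dt):
    """One Euler step of the gamma Hebb rule (13): dW_k/dt = eta * x(t) * y_k(t)^T.

    W : list of K+1 weight matrices W_0..W_K, each of shape (N, N).
    x : activation vector x(t), shape (N,).
    y : list of gamma state vectors y_0(t)..y_K(t), each of shape (N,)."""
    for k in range(len(W)):
        W[k] = W[k] + dt * eta * np.outer(x, y[k])
    return W
```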
4 RELATION TO OTHER MODELS
The gamma model is related to Tank and Hopfield's CITN model in that both models
decompose W(t) into a linear combination of gamma kernels. The weights in the
CITN system are preset and fixed. The gamma model, expressed as a regular additive
system, allows conventional adaptation procedures to train the system parameters; $\mu$
and K adapt the depth and shape of the memory, while $W_0, \ldots, W_K$ encode
spatiotemporal correlations between neural states.
Time-Delay-Neural-Nets (TDNN) are characterized by a tapped delay line memory
structure. The relation is best illustrated by an example. Consider a linear one-layer
feedforward convolution model, described by
$$x(t) = e(t), \qquad y(t) = \int_0^t W(t-s)\, x(s)\, ds, \qquad (14)$$
where x(t), e(t) and y(t) are N-dimensional signals and W(t) a $N \times N \times [0, \infty]$
dimensional weight matrix. This system can be approximated in discrete time by
$$x(n) = e(n), \qquad y(n) = \sum_{m=0}^{n} W(n-m)\, x(m), \qquad (15)$$
which is the TDNN formulation. An alternative approximation of the convolution
model by means of a (discrete-time) gamma model is described by (figure 1)
$$x_0(n) = e(n), \qquad x_k(n) = (1 - \mu)\, x_k(n-1) + \mu\, x_{k-1}(n-1), \quad k = 1, \ldots, K,$$
$$y(n) = \sum_{k=0}^{K} W_k\, x_k(n). \qquad (16)$$
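The discrete recursion (16) is only a few lines of code; below is our own sketch. Setting mu = 1 recovers the tapped delay line (TDNN) discussed next, while 0 < mu < 1 gives the dispersive, CITN-like memory.

```python
import numpy as np

def gamma_memory_step(x_prev, e_n, mu):
    """One step of eq. (16) for a scalar signal with K = len(x_prev) - 1 taps."""
    x = np.empty_like(x_prev)
    x[0] = e_n                                          # x_0(n) = e(n)
    x[1:] = (1.0 - mu) * x_prev[1:] + mu * x_prev[:-1]  # x_k(n), k = 1..K
    return x

x = np.zeros(5)                                         # K = 4
for e_n in [1.0, 0.0, 0.0, 0.0]:
    x = gamma_memory_step(x, e_n, mu=1.0)               # mu = 1: pure delay line
# the output is then y(n) = sum_k W_k x_k(n), cf. eq. (16)
```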
The recursive memory structure in the gamma model is stable for $0 \le \mu \le 2$, but an
interesting memory structure is obtained only for $0 < \mu \le 1$. For $\mu = 0$, this system
collapses to a static additive net. In this case, no information from past signal values
is stored in the net. For $0 < \mu < 1$, the system works as a discrete-time CITN. The
gamma memory structure consists of a cascade of first-order leaky integrators. Since
the total memory structure is of order K, the shape of the memory is not restricted to
a recency gradient. The effective memory depth approximates $K/\mu$ for small $\mu$. For
$\mu = 1$, the gamma model becomes a TDNN. In this case, memory is implemented by
a tapped delay line. The strength of the gamma model is that the parameters $\mu$ and K
can be adapted by conventional additive learning procedures. Thus, the optimal
temporal structure of the neural system, whether of CITN or TDNN type, is part of
the training phase in a gamma neural net. Finally, the application of the gamma
memory structure is of course not limited to one-layer feedforward systems. The
topologies suggested by Jordan (1986) and Elman (1988) can easily be extended to
include gamma memory.
[Figure 1: a one-layer feedforward (ffw) gamma net with input e(n).]
5 CONCLUSIONS
We have introduced the gamma neural model, a neural net model for temporal
processing, that generalizes most existing approaches, such as the CITN and TDNN
models. The model can be described as a conventional dynamic additive model,
enabling direct application of existing learning procedures for additive models. In the
gamma model, dynamic objects are encoded by the same learning equations as static
objects.
Acknowledgments
This work has been partially supported by NSF grants ECS-8915218 and DDM-8914084.
References
Cohen et al., 1979. Stable oscillations in single species growth models with hereditary effects. Mathematical Biosciences 44:255-268, 1979.
DeVries and Principe, 1990. The gamma neural net - A new model for temporal processing. Submitted to Neural Networks, Nov. 1990.
Elman, 1988. Finding structure in time. CRL Technical Report 8801, 1988.
Jordan, 1986. Attractor dynamics and parallelism in a connectionist sequential machine. Proc. Cognitive Science, 1986.
Lang et al., 1990. A time-delay neural network architecture for isolated word recognition. Neural Networks, vol. 3 (1), 1990.
Tank and Hopfield, 1987. Concentrating information in time: analog neural networks with applications to speech recognition problems. 1st Int. Conf. on Neural Networks, IEEE, 1987.
2,824 | 3,560 | Online Models for Content Optimization
Deepak Agarwal, Bee-Chung Chen, Pradheep Elango, Nitin Motgi, Seung-Taek Park,
Raghu Ramakrishnan, Scott Roy, Joe Zachariah
Yahoo! Inc.
701 First Avenue
Sunnyvale, CA 94089
Abstract
We describe a new content publishing system that selects articles to serve to a user,
choosing from an editorially programmed pool that is frequently refreshed. It is
now deployed on a major Yahoo! portal, and selects articles to serve to hundreds
of millions of user visits per day, significantly increasing the number of user clicks
over the original manual approach, in which editors periodically selected articles
to display. Some of the challenges we face include a dynamic content pool, short
article lifetimes, non-stationary click-through rates, and extremely high traffic volumes. The fundamental problem we must solve is to quickly identify which items
are popular (perhaps within different user segments), and to exploit them while
they remain current. We must also explore the underlying pool constantly to identify promising alternatives, quickly discarding poor performers. Our approach is
based on tracking per-article performance in near real time through online models.
We describe the characteristics and constraints of our application setting, discuss
our design choices, and show the importance and effectiveness of coupling online
models with a randomization procedure. We discuss the challenges encountered
in a production online content-publishing environment and highlight issues that
deserve careful attention. Our analysis of this application also suggests a number
of future research avenues.
1 Introduction
The web has become the central distribution channel for information from traditional sources such
as news outlets as well as rapidly growing user-generated content. Developing effective algorithmic
approaches to delivering such content when users visit web portals is a fundamental problem that
has not received much attention. Search engines use automated ranking algorithms to return the
most relevant links in response to a user's keyword query; likewise, online ads are targeted using
automated algorithms. In contrast, portals that cater to users who browse a site are typically programmed manually. This is because content is harder to assess for relevance, topicality, freshness,
and personal preference; there is a wide range in the quality; and there are no reliable quality or trust
metrics (such as, say, PageRank or Hub/Authority weights for URLs).
Manual programming of content ensures high quality and maintains the editorial ?voice? (the typical
mix of content) that users associate with the site. On the other hand, it is expensive to scale as the
number of articles and the number of site pages we wish to program grow. A data-driven machine
learning approach can help with the scale issue, and we seek to blend the strengths of the editorial
and algorithmic approaches by algorithmically optimizing content programming within high-level
constraints set by editors. The system we describe is currently deployed on a major Yahoo! portal,
and serves several hundred million user visits per day.
The usual machine-learning approach to ranking articles shown to users uses feature-based models,
trained using "offline data" (data collected in the past). After making a significant effort of feature
engineering by looking at user demographics, past activities on the site, various article categories,
keywords and entities in articles, etc., we concluded that it is difficult to build good models based
solely on offline data in our scenario. Our content pool is small but changing rapidly; article lifetimes are short; and there is wide variability in article performance sharing a common set of feature
values. Thus, we take the approach of tracking per-article performance by online models, which
are initialized using offline data and updated continuously using real time data. This online aspect
opens up new modeling challenges in addition to classical feature-based prediction, as we discuss in
this paper.
2 Problem Description
We consider the problem of optimizing content displayed in a module that is the focal point on a
major Yahoo! portal; the page also provides several other services (e.g., Mail, Weather) and content
links. The module is a panel with four slots labelled F1, F2, F3, F4. Slot F1, which accounts for a
large fraction of clicks, is prominent, and an article displayed on F1 receives many more clicks than
when it is displayed at F2, F3 or F4.
The pool of available articles is created by trained human editors, and refreshed continually. At any
point in time, there are 16 live articles in the pool. A few new articles programmed by editors get
pushed into the system periodically (every few hours) and replace some old articles. The editors
keep up with important new stories (e.g., breaking news) and eliminate irrelevant and fading stories,
and ensure that the pool of articles is consistent with the ?voice? of the site (i.e., the desired nature
and mix of content). There is no personalization in the editorially programmed system; at a given
time, the same articles are seen by all users visiting the page.
We consider how to choose the best set of four articles to display on the module to a given user.
Since the mix of content in the available pool already incorporates constraints like voice, topicality,
etc., we focus on choosing articles to maximize overall click-through rate (CTR), which is the total
number of clicks divided by total number of views for a time interval. To simplify our presentation,
we only describe learning from click feedback obtained from the most important F1 position; our
framework (and system) can use information from other positions as well.
2.1 System Challenges
Our setting poses many challenges, the most important of which are the following:
• Highly dynamic system characteristics: Articles have short lifetimes (6-8 hours), the
pool of available articles is constantly changing, the user population is dynamic, and each
article has different CTRs at different times of day or when shown in different slots in
our module. We found that fast reaction to user feedback through dynamic models based
on clicks and page views is crucial for good performance. We discuss an alternate and
commonly pursued approach of ranking articles based on offline feature-driven models in
Section 2.2.
• Scalability: The portal receives thousands of page views per second and serves hundreds
of millions of user visits per day. Data collection, model training and article scoring (using the model) are subject to tight latency requirements. For instance, we only get a few
milliseconds to decide on the appropriate content to show to a user visiting the portal.
A significant effort was required to build a scalable infrastructure that supports near real-time data
collection.1 Events (users? clicks and page views) are collected from a large number of front-end
web servers and continuously transferred to data collection clusters, which support event buffering
to handle the time lag between the user viewing a page and then clicking articles on the page. The
event stream is then fed to the modeler cluster which runs learning algorithms to update the models.
Periodically, the front-end web servers pull the updated models and serve content based on the new
models. A complete cycle of data collection, model update, and model delivery takes a few minutes.
1
The data collected is anonymized, making it impossible to relate the activity to individual users.
[Figure 1: CTR curves of a typical article in two buckets (y-axis: scaled CTR; x-axis: time of day). (a) CTR decay: the article's CTR when shown continuously in a bucket at position F1, plotted both as regular CTR and as CTR after removing clickers' repeated views. (b) CTR time-of-day variation: the article's CTR in the random bucket.]
2.2 Machine Learning Challenges
A serving scheme is an automated or manual algorithm that decides which article to show at different
positions of our module for a given user. Prior to our system, articles were chosen by human editors;
we refer to this as the editorial serving scheme. A random sample of the user population is referred
to as a bucket.
We now discuss the issues that make it tricky to build predictive models in this setting. We tried
the usual approach of building offline models based on retrospective data collected while using the
editorial serving scheme. User features included Age, Gender, Geo-location and Inferred interests
based on user visit patterns. For articles, we used features based on URL, article category (e.g.,
Sports, Entertainment) and title keywords. However, this approach performed poorly. The reasons
include a wide variability in CTR for articles having a same set of feature values, dramatical changes
of article CTR over time, and the fact that retrospective data collected from non-randomized serving
schemes are confounded with factors that are hard to adjust for (see Section 5). Also, our initial
studies revealed high variability in CTRs for articles sharing some common features (e.g., Sports articles, Entertainment articles). We achieved much better performance by seeking quick convergence
(using online models) to the best article for a given user (or user segment); a lost opportunity (failure
to detect the best article quickly) can be costly and the cost increases with the margin (difference
between the best and selected articles). We now discuss some of the challenges we had to address.
Non-stationary CTRs: The CTR of an article is strongly dependent on the serving scheme used
(especially, how much F1 exposure it receives) and it may change dramatically over time. Hence,
learning techniques that assume process stationarity are inapplicable. In order to ensure webpage
stability, we consider serving schemes that don't alter the choice of what to show a user in a given
slot until a better choice is identified. Figure 1 (a) shows the CTR curve of a typical article subject
to such a serving scheme. The decay is due to users getting exposed to an article more than once.
Exposure to an article happens in different ways and to different degrees. A user may get exposed
to an article when he/she sees a descriptive link, or clicks on it and reads the article. A user may
also click multiple "see also" links associated with each article, which may perhaps be a stronger
form of exposure. In our analysis, we consider such related clicks to be a single click event. View
exposure is more noisy since our module is only one of many content pieces shown on the portal.
A user may be looking at the Weather module when visiting the portal or he may have looked at
the article title in the link and not liked it. Hence, explaining the decay precisely in terms of repeat
exposure is difficult. For instance, not showing an article to a user after one page view containing
the link may be suboptimal since he may have overlooked the link and may click on it later. In
fact, a large number of clicks on articles occur after the first page view and depends on how a user
navigates the portal. Instead of solving the problem by imposing serving constraints per user, we
build a component in our dynamic model that tracks article CTR decay over time. We still impose
reasonable serving constraints to provide good user experience: we do not show the same article to
a user x minutes (x = 60 worked well) after he/she first saw the article.
In addition to decay, the CTR of an article also changes by time of day and day of week. Figure 1
(b) shows the CTR of a typical article when served using a randomized serving scheme (articles
served in a round-robin fashion to a randomly chosen user population). The randomization removes
any serving bias and provides an unbiased estimate of CTR seasonality. It is evident that CTRs
of articles vary dramatically over time; this clearly shows the need to adjust for time effects (e.g.,
diurnal patterns, decay) to obtain an adjusted article score when deciding to rank articles. In our
current study, we fitted a global time-of-day curve at 5-minute resolution to data obtained from the
randomized serving scheme through a periodic (weekly) adaptive regression spline. However, there
are still interactions that occur at an article level which were difficult to estimate offline through
article features. Per-article online models that put more weight on recent observations provide an
effective self adaptive mechanism to automatically account for deviations from the global trend
when an article is pushed into the system.
Strong Serving Bias: A model built using data generated from a serving scheme is biased by that
scheme. For example, if a serving scheme decides not to show article A to any user, any model
built using this data would not learn the popularity of A from users? feedback. In general, a serving scheme may heavily exploit some regions in the feature space and generate many data points
for those regions, but few data points for other regions. Models built using such data learn very
little about the infrequently sampled regions. Moreover, every non-randomized serving scheme introduces confounding factors in the data; adjusting such factors to obtain unbiased article scores is
often difficult. In fact, early experiments that updated models using data from the editorial bucket to
serve in our experimental buckets performed poorly. This bias also affects empirical evaluations or
comparisons of learning algorithms based on retrospective data, as we discuss later in Section 5.
Interaction with the editorial team: The project involved considerable interaction with human editors who have been manually and successfully placing articles on the portal for many years. Understanding how that experience can be leveraged in conjunction with automated serving schemes was
a major challenge, both technically and culturally (in that editors had to learn what ML algorithms
could do, and we had to learn all the subtle considerations in what to show). The result is our framework, wherein editors control the pool and set policies via constraints on what can be served, and
the serving algorithm chooses what to show on a given user visit.
3 Experimental Setup and Framework
Experimental Setup: We created several mutually exclusive buckets of roughly equal sizes from a
fraction of live traffic, and served traffic in each bucket using one of our candidate serving schemes.
All usual precautions were taken in the bucket creation process to ensure statistical validity of results.
We also created a control bucket that ran the editorial serving scheme. In addition, we created a
separate bucket called the random bucket, which serves articles per visit in a round-robin fashion.
Framework: Our framework consists of several components that are described below.
• Batch Learning: Due to the time lag between a view and subsequent clicks (approximately
2-10 minutes) and engineering constraints imposed by high data volumes, updates to our
models occur every 5 minutes.
• Business Logic, Editorial overrides: Despite moving towards an algorithmic approach,
there are instances where the editorial team has to override the recommendations produced
by our machine learning algorithms. For instance, a breaking news story is shown immediately at the F1 position, a user visiting the portal after 60 minutes should not see the same
article he saw during his earlier visits, etc. Such constraints can be added by editors, and
the serving algorithm must satisfy them.
• Online models to track CTR: We build online models to track article CTR for various
user segments separately in each bucket. Articles that are currently the best are shown in
the serving bucket; others are explored in the random bucket until their scores are better
than the current best articles; at this point they get promoted to the serving bucket.
In our serving bucket, we serve the same article at the F1 position in a given 5-minute
window (except for overrides by business rules). Separate online models are tracked for
articles at the F1 position in each bucket. Articles promoted to the F1 position in the
serving bucket are subsequently scored by their model in the serving bucket; articles not
playing at the F1 position in the serving bucket are of course scored by their model in the
random bucket.
• Explore/Exploit Strategy: The random bucket is used for two purposes: (a) It provides
a simple explore-exploit strategy that does random exploration with a small probability P,
and serves the best articles ranked by their estimated scores from online models with a large
probability 1 - P (a minimal sketch of this rule is given after this list). In addition, it helps us estimate systematic effects (e.g., diurnal CTR
pattern) without the need to build elaborate statistical models that adjust for serving bias in
other non-random buckets. Thus far, we have collected about 8 months of data from this
continuously running random bucket; this has proved extremely useful in studying various
offline models, running offline evaluations and conducting simulation studies.
The randomization procedure adopted is simple but proved effective in our setting. Our
setting is different from the ones studied in classical explore-exploit literature; developing
better strategies is a focus of our ongoing work. (See Section 6.)
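A minimal sketch of the randomized rule just described; the function name, the value of P, and the article identifiers and scores below are hypothetical.

```python
import random

def choose_f1_article(scores, p_explore=0.1):
    """With probability p_explore, pick an article uniformly at random;
    otherwise serve the article with the highest online-model score."""
    if random.random() < p_explore:
        return random.choice(list(scores))
    return max(scores, key=scores.get)

article = choose_f1_article({"a102": 0.031, "a117": 0.027, "a093": 0.035})
```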
4 Online Models
Tracking article CTR in an online fashion is a well-studied area in time series, with several methods
available in the literature [3][7], but the application to content optimization has not been carefully
studied. We provide a description of three dynamic models that are currently used in our system.
4.1 Estimated Most Popular: EMP
This model tracks the log-odds of CTR per article at the F1 position over time but does not use
any user features. The subscript t in our notation refers to the tth interval after the article is first
displayed in the bucket. Let $c_t$, $n_t$ denote the number of clicks and views at time t; we work with
the empirical logistic transform defined as $y_t = \log(c_t + 0.5) - \log(n_t - c_t + 0.5)$, approximately
Gaussian for large $n_t$ with variance $w_t = (c_t + 0.5)^{-1} + (n_t - c_t + 0.5)^{-1}$ [6]. In our scenario,
we get roughly 300-400 observations at the F1 position per article in a 5-minute interval in the
random bucket, hence the above transformation is appropriate for EMP and SS with few tens of
user segments. Given that there may be a decay pattern in log-odds of CTR at the F1 position with
increasing article lifetime, we fit a dynamic linear growth curve model which is given by
$$y_t = o_t + \mu_t + \epsilon_t, \qquad \epsilon_t \sim N(0, V w_t)$$
$$\mu_t = \mu_{t-1} + \beta_t + \nu_{\mu t}, \qquad \nu_{\mu t} \sim N(0, \sigma^2_{\mu t}) \qquad (1)$$
$$\beta_t = \beta_{t-1} + \nu_{\beta t}, \qquad \nu_{\beta t} \sim N(0, \sigma^2_{\beta t})$$
In Equation 1, $o_t$ is a constant offset term obtained from an offline model (e.g. hour-of-day correction), $\mu_t$ is the mean of $y_t$ at time t and $\beta_t$ has the interpretation of incremental decay in the level of
the series over the time interval from t-1 to t, evolving during that interval according to the addition of the stochastic element $\nu_{\beta t}$. The evolution errors $\nu_{\mu t}$ and $\nu_{\beta t}$ are assumed to be uncorrelated.
Model parameters are initialized by observing values at t = 1 for an article in the random bucket; actual
tracking begins at t = 2. In general, the initialization takes place through a feature-based offline
model built using retrospective data.
To provide more intuition on how the state parameters $\theta_t = (\mu_t, \beta_t)$ evolve, assume the evolutions
$\nu_{\mu t}$ and $\nu_{\beta t}$ are zero at each time point. Then, $\mu_t = \mu_0 + t \beta_0$, a linear trend fitted to the $y_t$ values
through weighted least squares. The addition of non-zero evolution makes this straight line dynamic
and helps in tracking decay over time. In fact, the values of the state evolution variance components $\sigma^2_{\mu t}$
and $\sigma^2_{\beta t}$ relative to the noise variance $V w_t$ determine the amount of temporal smoothing that occurs for
the model; large relative values smooth more by using a larger history to predict the future. Model
fitting is conducted through a Kalman filter based on a discounting concept as explained in [7].
Details are discussed in [1].
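For concreteness, here is a compact sketch (ours; the discount value and variable names are illustrative, and this is not the exact recursion of [1] or [7]) of one discount-factor Kalman update for the state (mu_t, beta_t) of Equation 1, applied to the offset-corrected observation y_t - o_t.

```python
import numpy as np

def emp_update(m, C, y, w, V=1.0, delta=0.95):
    """One Kalman update of the linear growth model (1) for a single article.

    m, C  : posterior mean (2,) and covariance (2, 2) of (mu_t, beta_t).
    y, w  : offset-corrected observation y_t - o_t and its variance scale w_t.
    delta : discount factor; dividing the prior covariance by delta replaces
            explicit evolution variances (the discounting idea in [7])."""
    G = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition of eq. (1)
    F = np.array([1.0, 0.0])                  # y_t observes mu_t
    a = G @ m                                 # prior mean
    R = G @ C @ G.T / delta                   # discounted prior covariance
    f = F @ a                                 # one-step forecast
    Q = F @ R @ F + V * w                     # forecast variance
    A = R @ F / Q                             # Kalman gain
    return a + A * (y - f), R - np.outer(A, A) * Q
```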
4.2 Saturated Segmented Model: SS
This model generalizes EMP to incorporate user features. In particular, user covariates are used to
create disjoint subsets (segments), and a local EMP model is built to track item performance in each user
segment. For a small number of user segments, we fit a separate EMP model per user segment for a
given item at the F1 position. As the number of user segments grows, data sparseness may lead to
high variance estimates in small segments, especially during the early part of an article's lifetime. To address
this, we smooth article scores in segments at each time point through a Bayesian hierarchical model.
In particular, if $(a_{it}, Q_{it})$, $i = 1, \ldots, k$, are the predicted means and variances of an item's score at F1 in k
different user segments at time t, we derive a new score as follows:
$$\hat{a}_{it} = \frac{\tau}{\tau + Q_{it}}\, a_{it} + \frac{Q_{it}}{\tau + Q_{it}}\, \bar{a}_t \qquad (2)$$
where $\bar{a}_t$ is the EMP model score for the item. The constant $\tau$ that controls the amount of "shrinkage" towards the most popular is obtained by the DerSimonian and Laird estimator [10], widely
used in meta-analysis.
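In code, the shrinkage of (2) is a one-liner per segment; the sketch below (ours) takes tau as given rather than estimating it by the DerSimonian-Laird procedure, and all values are hypothetical.

```python
def shrink_scores(a_seg, q_seg, a_emp, tau):
    """Apply eq. (2): shrink each segment score a_it toward the EMP score.

    a_seg : dict mapping segment -> a_it (segment mean).
    q_seg : dict mapping segment -> Q_it (segment variance).
    a_emp : overall EMP score for the item at time t.
    tau   : between-segment variance (estimation omitted here)."""
    return {
        s: (tau * a_seg[s] + q_seg[s] * a_emp) / (tau + q_seg[s])
        for s in a_seg
    }

scores = shrink_scores({"m": 0.9, "f": 0.4}, {"m": 0.2, "f": 1.5}, a_emp=0.6, tau=0.3)
```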
4.3 Online Logistic Regression: OLR
The SS does not provide the flexibility to incorporate lower order interactions when working with
a large number of features. For instance, given age, gender and geo-location for users, the SS
model considers all possible combinations of the three variables. It is possible that an additive
model based on two-factor interations (age-gender, age-geo, and gender-geo) may provide better
performance. We use an efficient online logistic regression approach to build such models. The OLR
updates parameter for every labelled event, positive or negative. Instead of achieving additivity by
empirically transforming the data as in EMP and SS, it posits a Bernoulli likelihood for each event
and achieves linearity by parametrizing the log-odds as a linear function of features. However,
this makes the problem non-linear; we perform a quadratic approximation through a Taylor series
expansion to achieve linearity. Modeling and fitting details are discussed in [1].
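As a rough illustration of the idea (not the exact algorithm of [1]), the sketch below performs one online update with a diagonal Gaussian posterior over weights and a second-order (Taylor/Laplace-style) approximation of the logistic likelihood; all names and shapes are ours.

```python
import numpy as np

def olr_update(m, p_diag, x, click):
    """One online logistic-regression step for a single labelled event.

    m, p_diag : mean and diagonal variance of the Gaussian weight posterior.
    x         : feature vector; click is 0 or 1."""
    s = 1.0 / (1.0 + np.exp(-(x @ m)))        # predicted click probability
    grad = (click - s) * x                     # gradient of the log-likelihood
    hess = s * (1.0 - s) * x * x               # diagonal of the negative Hessian
    p_new = 1.0 / (1.0 / p_diag + hess)        # precision (inverse variance) update
    m_new = m + p_new * grad                   # approximate posterior-mode step
    return m_new, p_new
```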
5 Experiments
In this section, we present the results of experiments that compare serving schemes based on our
three online models (EMP, SS, and OLR) with the current editorial programming approach (which
we refer to as ED). We show our online models significantly outperform ED based on bucket testing of
the four alternatives concurrently on live traffic on the portal over a month. Then, by offline analysis,
we identified the reason personalization (based on user features in OLR or segmentation in SS)
did not provide improvement: it is mainly because we did not have sufficiently diverse articles to
exploit, although SS and OLR are more predictive models than EMP. Finally, by extensive bucket
tests on live traffic (which is an expensive and unusual opportunity for evaluating algorithms), we
cast serious doubts on the usefulness of the common practice of comparing serving algorithms based
on retrospective data (collected while using another serving scheme), and suggest that, without a
random bucket or an effective correction procedure, it is essential to conduct tests on live traffic for
statistical validity.
Bucket Testing Methodology: After conducting extensive offline analysis and several small pilots
with different variations of models (including feature selection), we narrowed the candidates for
live-traffic evaluation to the following: (1) EMP, (2) SS with Age × Gender segments, (3) OLR
with feature terms: Article + Position + Age×ContentCategory + Gender×ContentCategory (geo-location and user behavioral features were also bucket-tested in some other periods of time and
provided no statistically significant improvement), and (4) ED. We used each of these schemes to
serve traffic in a bucket (a random set of users) for one month; the four buckets ran concurrently.
We measure the performance of a scheme by the lifts in terms of CTR relative to the baseline ED
scheme. We also obtained significant improvements relative to the round-robin serving scheme in the
random bucket but do not report them to avoid clutter.
Online Model Comparison: Our online models significantly increased CTR over the original manual editorial scheme. Moreover, the increase in CTR was achieved mainly due to increased reach,
i.e., we induced more users to click on articles. This provides evidence in favor of a strategy where
various constraints in content programming are incorporated by human editors and algorithms are
used to place them intelligently to optimize an easily measurable metric like CTR. Figure 2 (a) shows
the CTR lifts of different algorithms during one month. All three online models (EMP, SS and OLR)
are significantly better than ED, with CTR lifts in the range of 30%-60%. This clearly demonstrates
the ability of our online models to accurately track CTRs in real-time. Shockingly, the models that
are based on user features, SS and OLR, are not statistically different from EMP, indicating that personalization to our selected user segments does not provide additional lift relative to EMP, although
both SS and OLR have better predictive likelihoods relative to EMP on retrospective data analysis.
[Figure 2: Experimental results. (a) Daily CTR lift (%) over one month for the ED, EMP, SS, and OLR serving schemes. (b) Lift (%) in fraction of clickers per serving scheme. (c) Polar vs. non-polar articles (male and female segments): on the x-axis is the CTR of a polar article in a segment, on the y-axis is the CTR of the global best article (during the polar article's lifetime) in the same segment. Refer to text for definition of polar.]
Figure 2 (b) shows the lift relative to ED in terms of the fraction of clicking users. It shows that the
lift achieved by the online models is not confined to a small cohort of users, but reflects conversion
of more users to clickers.
Analysis of Personalization: We did not expect to find that personalization to user segments provided no additional CTR lift relative to EMP despite the fact that user features were predictive of
article CTR. A closer look at the data revealed the main cause to be the current editorial content
generation process, which is geared to create candidate articles that are expected to be popular for
all users (not for different user segments). In fact, there were articles that have more affinity to some
user segments than others; we define these to be articles whose CTR in a given segment was at least
twice the article?s overall CTR, and refer to them as polar articles. However, whenever polar articles
were in the pool of candidate articles, there was usually a non-polar one in the pool that was more
popular than the polar ones across all segments. As a result, we should choose the same non-polar one
for all segments. Figure 2 (c) shows, for the gender segmentation, that polar articles almost always
co-exist with an article whose overall CTR is greater than even the CTR in the segment of the polar
article. For the Age×Gender segmentation, the global-best article was the same as the segment-best
article about 65% of the intervals; the maximum expected CTR lift over global ranking was only
about 1%. We observe similar patterns for segmentations based on other user features.
Retrospective Evaluation Metrics vs. Bucket Tests: It is common practice in existing literature
to evaluate a new serving algorithm using predictive metrics obtained by running the algorithm on
retrospective data (collected while using another serving scheme). For instance, such an approach
has been used extensively in studying ad matching problems [11]. In our setting, this is equivalent
to comparing a new serving scheme (e.g., EMP, SS, or OLR) to ED by computing some predictive
metric on retrospective data obtained from ED. We found the performance differences obtained
using retrospective data do not correlate well with those obtained by running on live traffic [1]. This
finding underscores the need for random bucket data, effective techniques to correct the bias, or a
rapid bucket testing infrastructure to compare serving schemes.
6 Related Work
Google News personalization [13], which uses collaborative filtering to provide near real-time recommendation of news articles to users, is the most closely related prior work. However, while they
select from a vast pool of unedited content aggregated from news sites across the globe, we recommend a subset from a small list of items chosen by editors. On the one hand, this allows us to build
per-article models in near real-time; on the other, the editorially controlled mix of items means all
articles are of high quality (making it hard to achieve lift by simply eliminating bad articles). Recent
work on matching ads to queries [11] and ads to webpages [2] is related. However, their primary
emphasis is on constructing accurate feature-based offline models that are updated at longer time intervals (e.g., daily); such models provide good initialization to our online models but perform poorly
for reasons discussed in section 2.2. In [9], the authors consider an active exploration strategy to
improve search rankings, which is similar in spirit to our randomization procedure. Our problem is
also related to the rich literature on multi-armed bandit problems [5][8][14][12]. However, we note
that many of the assumptions made in the classical multi-armed bandit and reinforcement learning
literature are not satisfied in our setting (dynamic set of articles, short article lifetime, batch-learning,
non-stationary CTR, lagged response). In fact, short article lifetimes, dynamism of the content pool
and the importance of learning article behaviour very quickly are the major challenges in our scenario. Preliminary experiments with obvious and natural modifications to the widely used
UCB1 scheme [8] performed poorly. In a recent study [4] of a content aggregation site, digg.com,
Wu et al. built a model for story popularity. However, their analysis is based on biased retrospective
data, whereas we deployed our models and present results from tests conducted on live traffic.
7 Discussion
In this paper, we described content optimization, the problem of selecting articles to present to a
user who is intent on browsing for information. There are many variants of the problem, depending
on the setting. One variant is selecting from a very large and diverse pool of articles. Examples
include recommending RSS feeds or articles from one or more RSS feeds, such as Google's news
aggregation, and segmentation and personalization are likely to be effective. The variant that we
addressed involves selecting from a small, homogeneous set of articles; segmentation may not be
effective unless the pool of articles is chosen to be diverse, and there is a high premium in quickly
estimating and tracking popularity per-article.
Our work suggests offline feature based models are not good enough to rank articles in a highly
dynamic content publishing system where article pools are small, dynamic and of high quality; lifetimes are short; and the utility metric being measured (e.g., CTR) has a strong dynamic component.
In fact, the biased nature of data obtained from a non-randomized serving scheme also underscores
the need to obtain some percentage of data from a randomized experimental design. The delicate
tradeoffs involved in maximizing utility (e.g., total number of clicks) by quickly converging to the
best article for a given user (or user segment) through online models that are effectively initialized
through offline feature based models (after adjusting for confounding factors), and performing unbiased exploration through small randomized experiments are the key machine learning challenges in
this setting. While we have addressed them sufficiently well to handle small content pools, dealing
with larger pools will require significant advances, and is the focus of our current research.
References
[1] D. Agarwal, B.-C. Chen, P. Elango, et al. Online models for content optimization, Yahoo! Technical Report TR-2008-004, 2008.
[2] D. Agarwal, A. Broder, D. Chakrabarti, D. Diklic, V. Josifovski, and M. Sayyadian. Estimating rates of rare events at multiple resolutions. In KDD, pages 16-25, New York, NY, USA, 2007. ACM.
[3] B. D. Anderson and J. B. Moore. Optimal Filtering. Dover, 1974.
[4] F. Wu and B. A. Huberman. Novelty and collective attention. Proceedings of the National Academy of Sciences, 104:17599-17601, 2007.
[5] J. C. Gittins. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society, Series B, 41:148-177, 1979.
[6] P. McCullagh and J. A. Nelder. Generalized Linear Models. Chapman & Hall/CRC, 1989.
[7] M. West and J. Harrison. Bayesian Forecasting and Dynamic Models. Springer-Verlag, 1997.
[8] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235-256, 2002.
[9] F. Radlinski and T. Joachims. Active exploration for learning rankings from clickthrough data. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2007.
[10] R. DerSimonian and N. M. Laird. Meta-analysis in clinical trials. Controlled Clinical Trials, 7, 1986.
[11] M. Richardson, E. Dominowska, and R. Ragno. Predicting clicks: estimating the click-through rate for new ads. In WWW, pages 521-530, 2007.
[12] S. Pandey, D. Agarwal, D. Chakrabarti, and V. Josifovski. Bandits for taxonomies: A model-based approach. In Proc. of the SIAM Intl. Conf. on Data Mining, 2007.
[13] A. Das, M. Datar, and A. Garg. Google news personalization: scalable online collaborative filtering. In WWW, Banff, Alberta, Canada, 2007.
[14] T. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4-22, 1985.
Non-parametric Regression Between Manifolds
Florian Steinke^1, Matthias Hein^2
^1 Max Planck Institute for Biological Cybernetics, 72076 Tübingen, Germany
^2 Saarland University, 66041 Saarbrücken, Germany
[email protected], [email protected]
Abstract
This paper discusses non-parametric regression between Riemannian manifolds.
This learning problem arises frequently in many application areas ranging from
signal processing, computer vision, over robotics to computer graphics. We
present a new algorithmic scheme for the solution of this general learning problem
based on regularized empirical risk minimization. The regularization functional
takes into account the geometry of input and output manifold, and we show that it
implements a prior which is particularly natural. Moreover, we demonstrate that
our algorithm performs well in a difficult surface registration problem.
1 Introduction
In machine learning, manifold structure has so far been mainly used in manifold learning [1], to
enhance learning methods especially in semi-supervised learning. The setting we want to discuss
in this paper is rather different, and has not been addressed yet in the machine learning community.
Namely, we want to predict a mapping between known Riemannian manifolds based on input/output
example pairs. In the statistics literature [2], this problem is treated for certain special output manifolds in directional statistics, where the main applications are to predict angles (circle), directions
(sphere) or orientations (set of orthogonal matrices). More complex manifolds appear naturally in
signal processing [3, 4], computer graphics [5, 6], and robotics [7]. Impressive results in shape processing have recently been obtained [8, 9] by imposing a Riemannian metric on the set of shapes, so
that shape interpolation is reduced to the estimation of a smooth curve in the manifold of all shapes.
Moreover, note that almost any regression problem with differentiable equality constraints can also
be seen as an instance of manifold-valued learning.
The problem of learning, when input and output domain are Riemannian manifolds, is quite distinct
from standard multivariate regression or manifold learning. One fundamental problem of using
traditional regression methods for manifold-valued regression is that standard techniques use the
linear structure of the output space. It thus makes sense to linearly combine simple basis functions,
since the addition of function values is still an element of the target space. While this approach still
works for manifold-valued input, it is no longer feasible if the output space is a manifold, as general
Riemannian manifolds do not allow an addition operation, see Figure 1 for an illustration.
One way how one can learn manifold-valued mappings using standard regression techniques is to
learn mappings directly into charts of the manifold. However, this approach leads to problems
even for the simple sphere, since no single chart covers the sphere without a coordinate singularity.
Another approach is to use an embedding of the manifold in Euclidean space where one can use
standard multivariate regression and then project the learned mapping onto the manifold. But, as is
obvious from Figure 1, the projection can lead to huge distortions. Even if the original mapping in
Euclidean space is smooth, its projection onto the manifold might be discontinuous.
In this paper we generalize our previous work [6] which is based on regularized empirical risk minimization. The main ingredient is a smoothness functional which depends only on the geometric
Figure 1: The black line is a 1D-manifold in R2 . The average
of the red points in R2 does not lie on the manifold. Averaging
of the green points which are close with respect to the geodesic
distance is still reasonable. However, the blue points which are
close with respect to the Euclidean distance are not necessarily
close in geodesic distance and therefore averaging can fail.
properties of input and output manifold, and thus avoids the problems encountered in the naive generalization of standard regression methods discussed above. Here, we provide a theoretical analysis
of the preferred mappings of the employed regularization functional, and we show that these can
be seen as natural generalizations of linear mappings in Euclidean space to the manifold-valued
case. It will become evident that this property makes the regularizer particularly suited as a prior
for learning mappings between manifolds. Moreover, we present a new algorithm for solving the
resulting optimization problem, which compared to the our previously proposed one is more robust
and, most importantly, can deal with arbitrary manifold-valued input. In our implementation, the
manifolds can be either given analytically or as point clouds in Euclidean space, rendering our approach applicable for almost any manifold-valued regression problem. In the experimental section
we demonstrate good performance in a surface registration task, where both input manifold and
output manifold are non-Euclidean, a task which could not be solved previously in [6].
Since the problem is new to the machine learning community, we give a brief summary of the
learning problem in Section 2 and discuss the regularizer and its properties in Section 3. Finally,
in Section 4, we describe the new algorithm for learning mappings between Riemannian manifolds,
and provide performance results for a toy problem and a surface registration task in Section 5.
2 Regularized empirical risk minimization for manifold-valued regression
Suppose we are given two Riemannian manifolds, the input manifold M of dimension m and the target manifold N of dimension n. We assume that M is isometrically embedded in $\mathbb{R}^s$, and N in $\mathbb{R}^t$ respectively. Since most Riemannian manifolds are given in this form anyway (think of the sphere or the set of orthogonal matrices), this is only a minor restriction. Given a set of k training pairs $(X_i, Y_i)$ with $X_i \in M$ and $Y_i \in N$ we would like to learn a mapping $\phi : M \subseteq \mathbb{R}^s \to N \subseteq \mathbb{R}^t$. This learning problem reduces to standard multivariate regression if M and N are both Euclidean spaces $\mathbb{R}^m$ and $\mathbb{R}^n$, and to regression on a manifold if at least N is Euclidean. We use regularized
empirical risk minimization, which can be formulated in our setting as
$$\operatorname*{arg\,min}_{\phi \in C^\infty(M,N)} \; \frac{1}{k}\sum_{i=1}^{k} L\big(Y_i, \phi(X_i)\big) + \lambda\, S(\phi), \qquad (1)$$

where $C^\infty(M, N)$ denotes the set of smooth mappings $\phi$ between $M \subseteq \mathbb{R}^s$ and $N \subseteq \mathbb{R}^t$, $L : N \times N \to \mathbb{R}_+$ is the loss function, $\lambda \in \mathbb{R}_+$ the regularization parameter, and $S : C^\infty(M, N) \to \mathbb{R}_+$ the regularization functional.
Loss function: In multivariate regression, $f : \mathbb{R}^m \to \mathbb{R}^n$, a common loss function is the squared Euclidean distance of $f(X_i)$ and $Y_i$, $L(Y_i, f(X_i)) = \|Y_i - f(X_i)\|^2_{\mathbb{R}^n}$. A quite direct generalization to a loss function on a Riemannian manifold N is to use the squared geodesic distance in N, $L(Y_i, \phi(X_i)) = d_N^2(Y_i, \phi(X_i))$. The correspondence to the multivariate case can be seen from the fact that $d_N(Y_i, \phi(X_i))$ is the length of the shortest path between $Y_i$ and $\phi(X_i)$ in N, as $\|f(X_i) - Y_i\|$ is the length of the shortest path, namely the length of the straight line, between $f(X_i)$ and $Y_i$ in $\mathbb{R}^n$.
Regularizer: The regularization functional should measure the smoothness of the mapping $\phi$. We use the so-called Eells energy introduced in [6] as our smoothness functional which, as we will show
in the next section, implements a particularly well-suited prior over mappings for many applications.
The derivation of the regularization functional is quite technical. In order that the reader can get the
main intuition without having to bother with the rather heavy machinery from differential geometry,
we will discuss the regularization functional in a simplified setting, namely we assume that the input
manifold M is Euclidean, that is, M is an open subset of $\mathbb{R}^m$. The general definition is given in the next section. Let $x^\alpha$ be Cartesian coordinates in M and let $\phi(x)$ be given in Cartesian coordinates
in $\mathbb{R}^t$; then the Eells energy can be written as

$$S_{\mathrm{Eells}}(\phi) = \int_{M \subseteq \mathbb{R}^m} \sum_{\gamma=1}^{t} \sum_{\alpha,\beta=1}^{m} \left( \Big( \frac{\partial^2 \phi^\gamma}{\partial x^\alpha \, \partial x^\beta} \Big)^{\!>} \right)^{2} dx, \qquad (2)$$
where $>$ denotes the projection onto the tangent space $T_{\phi(x)}N$ of the target manifold at $\phi(x)$. Note that the Eells energy reduces to the well-known thin-plate spline energy if also the target manifold N is Euclidean, that is, $N = \mathbb{R}^n$. Let $\phi : \mathbb{R}^m \to \mathbb{R}^n$, then

$$S_{\mathrm{ThinPlate}}(\phi) = \int_{M \subseteq \mathbb{R}^m} \sum_{\gamma=1}^{n} \sum_{\alpha,\beta=1}^{m} \left( \frac{\partial^2 \phi^\gamma}{\partial x^\alpha \, \partial x^\beta} \right)^{2} dx. \qquad (3)$$
The apparently small step of the projection onto the tangent space leads to huge qualitative
differences in the behavior of both energies. In particular, the Eells energy penalizes only the
second derivative along the manifold, whereas changes in the normal direction are discarded. In the
case of m = 1, that is, we are learning a curve on N , the difference is most obvious. In this case
the Eells energy penalizes only the acceleration along the curve (the change of the curve in tangent
direction) whereas the thin-plate spline energy penalizes also the normal part which just measures
the curvature of the curve in the ambient space. This is illustrated in the following figure.
The input manifold is $\mathbb{R}$ and the output manifold N is a one-dimensional curve embedded in $\mathbb{R}^2$, i.e. $\phi : \mathbb{R} \to N$. If the images $\phi(x_i)$ of equidistant points $x_i$ in the input manifold $M = \mathbb{R}$ are also equidistant on the output manifold, then $\phi$ has no acceleration in terms of N, i.e. its second derivative in N should be zero. However, the second derivative of $\phi$ in the ambient space, which is marked red in the left figure, is not vanishing in this case. Since the manifold is curved, the graph of $\phi$ also has to bend to stay on N. The Eells energy only penalizes the intrinsic acceleration, that is, only the component parallel to the tangent space at $\phi(x_i)$, the green arrow.
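This distinction can be checked numerically. The following sketch (our own illustration, not code from the paper) samples a constant-speed curve on the unit circle and verifies that the ambient second finite difference is large while its tangential, Eells-type component vanishes:

```python
import numpy as np

# A constant-speed curve on the unit circle S^1 in R^2: equidistant parameters
# map to equidistant points, so the intrinsic (Eells) acceleration is zero.
t = np.linspace(0.0, 1.0, 101)
phi = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)

dt = t[1] - t[0]
acc = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dt**2   # ambient second derivative

tangent = np.stack([-phi[1:-1, 1], phi[1:-1, 0]], axis=1)  # tangent to S^1
acc_tan = (acc * tangent).sum(axis=1)                # tangential (Eells) part

print(np.abs(acc).max())      # ~ (2 pi)^2: the ambient thin-plate term is large
print(np.abs(acc_tan).max())  # ~ 0: the Eells term vanishes
```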
3 Advantages and properties of the Eells energy
In the last section we motivated that the Eells energy penalizes only changes along the manifold.
This property and the fact that the Eells energy is independent of the parametrization of M and N ,
can be directly seen from the covariant formulation in the following section. We briefly review the
derivation of the Eells energy derivation in [10], which we need in order to discuss properties of
the Eells energy and the extension to manifold-valued input. Our main emphasis lies on an intuitive
explanation, for the exact technical details we refer to [10].
3.1 The general Eells energy
Let $x^\alpha$ and $y^\beta$ be coordinates on M and N. The differential of $\phi : M \to N$ at $x \in M$ is

$$d\phi \;=\; \frac{\partial \phi^\beta}{\partial x^\alpha}\Big|_x \; dx^\alpha \otimes \frac{\partial}{\partial y^\beta}\Big|_{\phi(x)},$$
where it is summed over double-occurring indices. This is basically just the usual Jacobian matrix for a multivariate map. In order to get a second covariant derivative of $\phi$, we apply the covariant derivative ${}^M\nabla$ of M. The problem is that the derivative ${}^M\nabla_{\partial/\partial x^\alpha}\, \frac{\partial}{\partial y^\beta}$ is not defined, since $\frac{\partial}{\partial y^\beta}$ is not an element of $T_x M$ but of $T_{\phi(x)}N$. For this derivative, we use the concept of the pull-back connection $\nabla'$ [11], which is given as $\nabla'_{\partial/\partial x^\alpha}\, \frac{\partial}{\partial y^\beta} = {}^N\nabla_{d\phi(\partial/\partial x^\alpha)}\, \frac{\partial}{\partial y^\beta}$, i.e., the direction of differentiation $\frac{\partial}{\partial x^\alpha} \in T_x M$ is first mapped to $T_{\phi(x)}N$ using the differential $d\phi$, and then the covariant derivative ${}^N\nabla$ of N is used. Putting things together, the second derivative, the "Hessian", of $\phi$ is given in coordinates as

$$\nabla' d\phi \;=\; \Big[ \frac{\partial^2 \phi^\gamma}{\partial x^\alpha \partial x^\beta} \;-\; {}^M\Gamma^{\delta}_{\alpha\beta}\, \frac{\partial \phi^\gamma}{\partial x^\delta} \;+\; {}^N\Gamma^{\gamma}_{\mu\nu}\, \frac{\partial \phi^\mu}{\partial x^\alpha} \frac{\partial \phi^\nu}{\partial x^\beta} \Big]\; dx^\alpha \otimes dx^\beta \otimes \frac{\partial}{\partial y^\gamma}, \qquad (4)$$
where ${}^M\Gamma$, ${}^N\Gamma$ are the Christoffel symbols of M and N. Note that if M and N are Euclidean, the Christoffel symbols are zero and the second derivative reduces to the standard Hessian in Euclidean space. The Eells energy penalizes the squared norm of this second derivative tensor, corresponding to the Frobenius norm of the Hessian in Euclidean space,

$$S_{\mathrm{Eells}}(\phi) = \int_{M} \big\| \nabla' d\phi \big\|^2_{T_x^*M \,\otimes\, T_x^*M \,\otimes\, T_{\phi(x)}N} \; dV(x).$$
In this tensorial form the energy is parametrization independent and, since it depends only on intrinsic properties, it measures smoothness of $\phi$ only with respect to the geometric properties of M and N. Equation (4) can be simplified significantly when N is isometrically embedded in $\mathbb{R}^t$. Let $i : N \to \mathbb{R}^t$ be the isometric embedding and denote by $\Psi : M \to \mathbb{R}^t$ the composition $\Psi = i \circ \phi$. Then we show in [10] that $\nabla' d\phi$ simplifies to

$$\nabla' d\phi \;=\; \Big[ \frac{\partial^2 \Psi^\gamma}{\partial x^\alpha \partial x^\beta} \;-\; {}^M\Gamma^{\delta}_{\alpha\beta}\, \frac{\partial \Psi^\gamma}{\partial x^\delta} \Big]^{>} dx^\alpha \otimes dx^\beta \otimes \frac{\partial}{\partial z^\gamma}, \qquad (5)$$

where $>$ is the orthogonal projection onto the tangent space of N and $z^\gamma$ are Cartesian coordinates in $\mathbb{R}^t$. Note that if M is Euclidean, the Christoffel symbols ${}^M\Gamma$ are zero and the Eells energy reduces to Equation (2) discussed in the previous section. This form of the Eells energy was also used in our previous implementation in [6], which could therefore not deal with non-Euclidean input manifolds. In this paper we generalize our setting to non-trivial input manifolds, which requires that we take into account the slightly more complicated form of $\nabla' d\phi$ in Equation (5). In Section 3.3 we discuss how to compute $\nabla' d\phi$ and thus the Eells energy for this general case.
3.2 The null space of the Eells energy and the generalization of linear mappings
The null space of a regularization functional $S(\phi)$ is the set $\{\phi \,|\, S(\phi) = 0\}$. This set is an important characteristic of a regularizer, since it contains all mappings which are not penalized. Thus, the null space is the set of mappings which we are free to fit the data with; only deviations from the null space are penalized. In standard regression, depending on the order of the differential used for
null space are penalized. In standard regression, depending on the order of the differential used for
regularization, the null space often consists out of linear maps or polynomials of small degree.
We have shown in the last section, that the Eells energy reduces to the classical thin-plate spline
energy, if input and output manifold are Euclidean. For the thin-plate spline energy it is wellknown that the null space consists out of linear maps between input and output space. However,
the concept of linearity breaks down if the input and output spaces are Riemannian manifolds, since
manifolds have no linear structure. A key observation towards a natural generalization of the concept
of linearity to the manifold setting is that linear maps map straight lines to straight lines. Now, a
straight line between two points in Euclidean space corresponds to a curve with no acceleration in a
Riemannian manifold, that is, a geodesic between the two points. In analogy to the Euclidean case
we therefore consider mappings which map geodesics to geodesics as the proper generalization of
linear maps for Riemannian manifolds.
The following proposition taken from [11] defines this concept and shows that the set of generalized
linear maps is exactly the null space of the Eells energy.
Proposition 1 [11] A map $\phi : M \to N$ is totally geodesic if $\phi$ maps geodesics of M linearly to geodesics of N, i.e. the image of any geodesic in M is also a geodesic in N, though potentially with a different constant speed. We have: $\phi$ is totally geodesic if and only if $\nabla' d\phi = 0$.
Linear maps encode a very simple relation in the data: the relative changes between input and output
are the same everywhere. This is the simplest relation a non-trivial mapping can encode between
input and output, and totally geodesic mappings encode the same ?linear? relationship even though
the input and output manifold are nonlinear. However, note that like linear maps, totally geodesic
maps are not necessarily distortion-free, but every distortion-free (isometric) mapping is totally
geodesic. Furthermore, given "isometric" training points,

$$d_M(X_i, X_j) = d_N(Y_i, Y_j), \qquad i, j = 1, \ldots, k,$$
then among all minimizers of (1), there will be an isometry fitting the data points, given that such
an isometry exists. With this restriction in mind, one can see the Eells energy also as a measure of
distortion of the mapping ?. This makes the Eells energy an interesting candidate for a variety of
geometric fitting problems, e.g., for surface registration as demonstrated in the experimental section.
3.3 Computation of the Eells energy for general input manifolds
In order to compute the Eells energy for general input manifolds, we need to be able to evaluate
the second derivative in Equation (5), in particular, the Christoffel symbols of the input manifold
M . While the Christoffel symbols could be evaluated directly for analytically given manifolds,
we propose a much simpler scheme here, that also works for point clouds. It is based on local
second order approximations of M , assuming that M is given as a submanifold of Rs (where the
Riemannian metric of M is induced from the Euclidean ambient space). For simplicity, we restrict
ourselves here to the intuitive case of hypersurfaces in Rs . The case of general submanifolds in Rs
and all proofs are provided in the supplementary material.
Proposition 2 Let $x^1, \ldots, x^m$ be the coordinates associated with an orthonormal basis of the tangent space $T_pM$, so that p has coordinates $x = 0$. Then in Cartesian coordinates z of $\mathbb{R}^s$ centered at p and aligned with the tangent space $T_pM$, the manifold can be approximated up to second order as $z(x) = (x^1, \ldots, x^{s-1}, f^s(x))$, where, given that the orthonormal basis in $T_pM$ is aligned with the principal directions, we have

$$f^s(x) = \frac{1}{2}\sum_{\alpha=1}^{s-1} \kappa_\alpha\, (x^\alpha)^2,$$

where $\kappa_\alpha$ are the principal curvatures of M at p.
For an example of a second-order approximation, see the approximation of
a sphere at the south pole on the left. Note, that the principal curvature, also
called extrinsic curvature, quantifies how much the input manifold bends
with respect to the ambient space. The principal curvatures can be computed directly for manifolds in analytic form and approximated for point
cloud data using standard techniques, see Section 4.
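As a concrete illustration of this construction (a sketch under our own assumptions, not the authors' code), one can estimate the principal curvatures of a spherical point cloud at a base point by a weighted least-squares quadratic fit over tangent-plane coordinates, in the spirit of the procedure described in Section 4:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 2.0                                    # sphere of radius R: curvatures 1/R
pts = rng.normal(size=(2000, 3))
pts = R * pts / np.linalg.norm(pts, axis=1, keepdims=True)

p = np.array([0.0, 0.0, -R])               # base point (south pole)
e1 = np.array([1.0, 0.0, 0.0])             # tangent basis at p
e2 = np.array([0.0, 1.0, 0.0])
inward = np.array([0.0, 0.0, 1.0])         # heights are measured along this axis

nb = pts[np.argsort(np.linalg.norm(pts - p, axis=1))[:50]] - p
x, y, h = nb @ e1, nb @ e2, nb @ inward    # tangent coordinates and heights

# Weighted least-squares fit h ~ a x^2 + b xy + c y^2 (Gaussian weights).
w = np.exp(-(x**2 + y**2) / (2 * 0.3**2))
A = np.stack([x**2, x * y, y**2], axis=1)
coef, *_ = np.linalg.lstsq(A * w[:, None], h * w, rcond=None)

# With h = (kappa/2) r^2 near p, the principal curvatures are 2a and 2c.
print("estimated:", 2 * coef[0], 2 * coef[2], " true:", 1 / R)
```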
Proposition 3 Given a second-order approximation of M at p as in Proposition 2, then for the coordinates x one has ${}^M\Gamma^{\delta}_{\alpha\beta}(0) = 0$. If $x^1, \ldots, x^{s-1}$ are aligned with the principal directions at p, then the coordinate expressions for the manifold-adapted second derivative of $\Psi$ (5) are at p

$$\frac{\partial^2 \Psi^\gamma}{\partial x^\alpha \partial x^\beta} - {}^M\Gamma^{\delta}_{\alpha\beta}\, \frac{\partial \Psi^\gamma}{\partial x^\delta} \;=\; \frac{\partial^2 \Psi^\gamma}{\partial z^\alpha \partial z^\beta} + \frac{\partial \Psi^\gamma}{\partial z^s}\, \kappa_\alpha\, \delta_{\alpha\beta}. \qquad (6)$$
a parametrisation of M at p with an exponential map differs from the second order approximation
at most in third order. Expression (6) allows us to compute the Eells energy in the case of manifoldvalued input. We just have to replace the second-partial derivative in the Eells energy in (2) by this
manifold input-adapted formulation, which can be computed easily.
4 Implementation
We present a new algorithm for solving the optimization problem of (1). In comparison to [6], the
method is more robust, since it avoids the hard constraints of optimizing along the surface, and most
importantly it allows manifold-valued input through a collocation-like discretization. The basic
idea is to use a linearly parameterized set of functions and to express the objective in terms of the
parameters. The resulting non-linear optimization problem is then solved using Newton's method.
Problem Setup: A flexible set of functions are the local polynomials. Let M be an open subset or
submanifold of $\mathbb{R}^s$; then we parameterize the $\gamma$-th component of the mapping $\Psi : \mathbb{R}^s \to \mathbb{R}^t$ as

$$\Psi^\gamma(x) = \frac{\sum_{i=1}^{K} k_{\sigma_i}(\|\delta x_i\|)\, g(\delta x_i, w_i^\gamma)}{\sum_{j=1}^{K} k_{\sigma_j}(\|\delta x_j\|)}.$$

Here, $g(\delta x_i, w_i^\gamma)$ is a first or second order polynomial in $\delta x_i$ with parameters $w_i^\gamma$, $\delta x_i = (x - c_i)$ is the difference of x to the local polynomial center $c_i$, and $k_{\sigma_i}(r) = k(r/\sigma_i)$ is a compactly supported smoothing kernel. We choose the K local polynomial centers $c_i$ approximately uniformly distributed over M, thereby adapting the function class to the shape of the input manifold M. If we stack all parameters $w_i^\gamma$ into a single vector w, then $\Psi$ and its partial derivatives are just linear functions of w, which allows us to compute these values in parallel for many points using simple matrix multiplication.
We compute the energy integral (2) as a function of w by summing up the energy density over an approximately uniform discretisation of M. The projection onto the tangent space, used in (2) and (5), and the second order approximation for computing intrinsic second derivatives, used in (5) and (6), are manifold specific and are explained below. We also express the squared geodesic distance used as loss function in terms of w, see below, and thus end up with a finite dimensional optimisation problem in w which we solve using Newton's method with line search. The Newton step can be done efficiently because the smoothing kernel has compact support and thus the Hessian is sparse. Moreover, since we have discretised the optimisation problem directly, and not its Euler-Lagrange equations, we do not need to explicitly formulate boundary conditions.
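A minimal sketch of this linear-in-parameters representation, with hypothetical names and a simple compactly supported kernel (the paper does not spell out its implementation at this level of detail):

```python
import numpy as np

def eval_psi(x, centers, W, b, sigma):
    """Evaluate one component of Psi at the rows of x (shape (n, s)).

    centers: (K, s) polynomial centers c_i; W: (K, s) and b: (K,) parameterize
    first-order polynomials g(dx, w_i) = b_i + <W_i, dx>; sigma: (K,) widths.
    """
    dx = x[:, None, :] - centers[None, :, :]          # (n, K, s) offsets
    r = np.linalg.norm(dx, axis=2) / sigma[None, :]   # scaled distances
    k = np.clip(1.0 - r, 0.0, None) ** 2              # compactly supported kernel
    g = b[None, :] + np.einsum('nks,ks->nk', dx, W)   # local polynomial values
    return (k * g).sum(axis=1) / np.maximum(k.sum(axis=1), 1e-12)

# Tiny example: 5 centers on [0, 1]; note the output is linear in (W, b).
rng = np.random.default_rng(1)
C = np.linspace(0, 1, 5)[:, None]
psi = eval_psi(rng.uniform(size=(8, 1)), C, rng.normal(size=(5, 1)),
               rng.normal(size=5), 0.5 * np.ones(5))
print(psi.shape)  # (8,)
```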
The remaining problem is the constraint $\Psi(x) \in N$ for $x \in M$. We transform it into a soft constraint and add it to the objective function as $\rho \int_M d(\Psi(x), N)^2\, dx$, where $d(\Psi(x), N)$ denotes the distance of $\Psi(x)$ to N in $\mathbb{R}^t$ and $\rho \in \mathbb{R}_+$. During the optimization, we iteratively minimize till convergence and then increase the weight $\rho$ by a fixed amount, repeating this until the maximum distance of $\Psi$ to N is smaller than a given threshold. As initial solution, we compute the free solution, i.e. where N is assumed to be $\mathbb{R}^t$, in which case the problem becomes convex. In contrast to a simple projection of the initial solution onto N, as done in [6], which can lead to large distortions potentially causing the optimization to stop in a local minimum, the increasing penalization of the distance to the manifold leads to a slow settling of the solution towards the target manifold, which turned out to be much more robust. The projection of the second derivative of $\Psi$ onto the tangent space for $\Psi(x) \notin N$, as required in (2) or (5), is computed using the iso-distance manifolds $N_{\Psi(x)} = \{y \in \mathbb{R}^t \,|\, d(y, N) = d(\Psi(x), N)\}$ of N. For the loss, we use $d_N(\arg\min_{y \in N} \|\Psi(x) - y\|,\, Y_i)$. These two constructions are sensible, since as $\Psi$ approaches the manifold N for increasing $\rho$, both approximations converge to the desired operations on the manifold N.
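Schematically, the continuation over the constraint weight looks as follows (a sketch; `objective` and `dist_to_N` are hypothetical stand-ins for the inner Newton solver and the distance evaluation):

```python
def fit_with_penalty(w0, objective, dist_to_N, max_dist=1e-3,
                     rho=0.0, rho_step=1.0, max_rounds=50):
    """Continuation over the constraint weight rho.

    objective(w, rho) is assumed to run the inner Newton solver to convergence
    on the regularized risk plus rho * integral d(Psi(x), N)^2 and return w.
    """
    w = w0                                  # the free (rho = 0) solution
    for _ in range(max_rounds):
        w = objective(w, rho)
        if dist_to_N(w) < max_dist:         # max distance of Psi to N
            return w
        rho += rho_step                     # slowly settle Psi onto N
    return w
```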
Manifold Operations: For each output manifold N, we need to compute projections onto the tangent spaces of N and its iso-distance manifolds, the closest point to $p \in \mathbb{R}^t$ on N, and geodesic distances on N. Using a signed distance function $\eta$, projections $P^{>}$ onto the tangent spaces of N or its iso-distance manifolds at $p \in \mathbb{R}^t$ are given as $P^{>} = \mathbb{1} - \|\nabla\eta(p)\|^{-2}\, \nabla\eta(p)\nabla\eta(p)^T$. For spheres $S^{t-1}$ the signed distance function is simply $\eta(x) = 1 - \|x\|$. Finding the closest point to $p \in \mathbb{R}^t$ in $S^{t-1}$ is trivial, and the geodesic distance is $d_{S^{t-1}}(x, y) = \arccos \langle x, y\rangle$ for $x, y \in S^{t-1}$.
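For the sphere, these operations take only a few lines (a sketch consistent with the formulas above):

```python
import numpy as np

def closest_point_sphere(p):
    return p / np.linalg.norm(p)            # radial projection onto S^{t-1}

def tangent_projection_sphere(p):
    # grad eta(p) = -p/|p| for eta(x) = 1 - |x|, so P^> = I - g g^T.
    g = -p / np.linalg.norm(p)
    return np.eye(p.size) - np.outer(g, g)

def geodesic_dist_sphere(x, y):
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

x = closest_point_sphere(np.array([1.0, 2.0, 2.0]))
print(geodesic_dist_sphere(x, np.array([0.0, 0.0, 1.0])))
```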
For the surface registration task, the manifold is given as a densely sampled point cloud with surface normals. Here, we proceed as follows. Given a point $p \in \mathbb{R}^t$, we first search for the closest point $p_0$ in the point cloud and compute there a local second order approximation of N. That is, we fit the distances of the 10 nearest neighbors of $p_0$ to the tangent plane (defined by the normal vector) with a quadratic polynomial in the points' tangent plane coordinates using weighted least squares, see Proposition 2. We then use the distance to the second order approximation as the desired signed distance function $\eta$, and also use this approximation to find the closest point to $p \in \mathbb{R}^t$ in N. Since in the surface registration problem we used rather large weights for the loss, $\Psi(X_i)$ and $Y_i$ were always close on the surface. In this case the geodesic distance can be well approximated by the Euclidean one, so that for performance reasons we used the Euclidean distance. An exact, but more expensive method to compute geodesics is to minimize the harmonic energy of curves, see [6].
For non-Euclidean input manifolds M, we similarly compute local second order approximations of M in $\mathbb{R}^s$ to estimate the principal curvatures needed for the second derivative of $\Psi$ in (6).
5 Experiments
In a simple toy experiment, we show that our framework can handle noisy training data and all
parameters can be adjusted using cross-validation. In the second experiment, we prove that the new
implementation can deal with manifold-valued input and apply it to the task of surface registration.
Line on Sphere: Consider regression from [0, 1] to the sphere $S^2 \subset \mathbb{R}^3$. As ground-truth, we choose a curve given in spherical coordinates as $\theta(t) = 40t^2$, $\varphi(t) = 1.3\pi t + \pi \sin(\pi t)$. The k training inputs were sampled uniformly from [0, 1]; the outputs were perturbed by "additive" noise from the von Mises distribution with concentration parameter $\kappa$. The von Mises distribution is the maximum entropy distribution on the sphere for fixed mean and variance [2], and thus is the analog of the Gaussian distribution. In the experiments the optimal regularization parameter $\lambda$ was determined by
[Figure 2: four panels of error curves on log scale; legends compare Eells, TPS, and the local method; axes show CV and test error against $1/\lambda$, the sample size k, and the noise concentration $\kappa$. See the caption below.]
Figure 2: Regression from [0, 1] to the sphere. a) Noisy data samples (black crosses) of the black ground-truth curve. The blue dots show the estimated curve for our Eells-regularized approach, the green dots depict thin-plate splines (TPS) in $\mathbb{R}^3$ radially projected onto the sphere, and the red dots show results for the local approach of [8]. b) Cross-validation errors for given sample size k and noise concentration $\kappa$. Von Mises distributed noise in this case corresponds roughly to Gaussian noise with standard deviation 0.01. c) Test errors for different k, but fixed $\kappa$. In all experiments the regularization parameter $\lambda$ is found using cross-validation. d) Test errors for different $\kappa$, but fixed k.
performing 10-fold cross-validation, and the experiment was repeated 10 times for each size of the training sample k and noise parameter $\kappa$. The run-time was dominated by the number of parameters chosen for $\Psi$, and mostly independent of k. For training one regression it was about 10s in this case.
We compare our framework for nonparametric regression between manifolds with standard cubic smoothing splines in $\mathbb{R}^3$ (the equivalent of thin-plate splines (TPS) for one input dimension) projected radially on the sphere, and with the local manifold-valued Nadaraya-Watson estimator of [8].
[8]. As can be seen in Figure 2, our globally regularized approach performs significantly better
than [8] for this task. Note that even in places where the estimated curve of [8] follows the ground
truth relatively closely, the spacing between points varies greatly. These sampling dependent speed
changes, that are not seen in the ground truth curve, cannot be avoided without a global smoothness
prior such as for example the Eells energy. The Eells approach also outperforms the projected TPS,
in particular for small sample sizes and reasonable noise levels. For a fixed noise level of ? = 10000
a paired t-test shows that the difference in test error is statistically significant at level ? = 5% for
the sample sizes k = 70, 200, 300, 500. Clearly, as the curve is very densely sampled for high
k, both approaches perform similarly, since the problem becomes essentially local, and locally all manifolds are Euclidean. In contrast, for small sample sizes, a plausible prior is more important.
The necessary projections for TPS can introduce arbitrary distortions, especially for parts of the
curve where consecutive training points are far apart, and where TPS thus deviate significantly from
the circle, see Figure 2a). Using our manifold-adapted approach we avoid distorting projections and
use the true manifold distances in the loss and the regularizer. The next example shows that the
difference between TPS and our approach is even more striking for more complicated manifolds.
Surface / Head Correspondence: Computing correspondence between the surfaces of different, but
similar objects, such as for example human heads, is a central problem in shape processing. A dense
correspondence map, that is, an assignment of all points of one head to the anatomically equivalent
points on the other head, allows one to perform morphing [12] or to build linear object models [13]
which are flexible tools for computer graphics as well as computer vision. While the problem is wellstudied, it remains a difficult problem which is still actively investigated. Most approaches minimize
a functional consisting of a local similarity measure and a smoothness functional or regularizer for
the overall mapping. Motivated by the fact that the Eells energy favors simple "linear" mappings, we propose to use it as regularizer for correspondence maps. For testing and highlighting the role of this "prior" independently of the choice of local similarity measure, we formulate the dense correspondence problem as a non-parametric regression problem between manifolds where 55 point correspondences on characteristic local texture or shape features are given (only on the forehead do we fix some less well-defined markers, to determine a relevant length-scale).
It is in general difficult to evaluate correspondences numerically, since for different heads anatomical equivalence is not easily specified. Here, we have used a subset of the head database of [13] and
considered their correspondence as ground-truth. These correspondences are known to be perceptually highly plausible. We took the average head of one part of the database and registered it to the
other 10 faces, using the mean distance to the correspondence of [13] as error score. Apart from the
average deviation over the whole head, we also show results for an interior region, see Fig. 3 g), for
which the correspondence given by [13] is known to be more exact compared to other regions such
as, for example, around the ear or below the chin.
[Figure 3 panels: a) Original, b) 50% morph, c) Target, d) TPS, e) Eells, f) [12], g) Mask; error in mm for this example (interior-region average in brackets): 2.97 (1.17), 2.31 (0.93), 2.49 (1.46) for d)-f) respectively.]
Figure 3: Correspondence computation from the original head in a) to the target head in c) with 55
markers (yellow crosses). A resulting 50% morph using our method is shown in c). Distance of
the computed correspondence to the correspondence of [13] is color-coded in d) - f) for different
methods. The numbers below give the average distance over the whole head, in brackets the average
over an interior region (red area in g).
We compared our approach against [12] and a thin-plate spline (TPS) like approach. The TPS
method represents the initial solution of our approach, that is, a mapping into R3 minimizing the
TPS energy (3), which is then projected onto the target manifold. [12] use a volume-deformation
based approach that directly finds smooth mappings from surface to surface, without the need of
projection, but their regularizer does not take into account the true distances along the surface. We
did not compare against [8], since their approach requires computing a large number of geodesics in
each iteration, which is computationally prohibitive on point clouds. In order to obtain a sufficiently
flexible, yet not too high-dimensional function set for our implementation, we place polynomial
centers ci on all markers points and also use a coarse, approximately uniform sampling of the other
parts of the manifold. Free parameters, that is, the regularisation parameter $\lambda$ and the density of additional polynomial centers, were chosen by 10-fold cross-validation for our and the TPS method, and by manual inspection for the approach of [12]. One computed correspondence example is shown in Fig. 3; the average over all 10 test heads is summarized in the table below.
                                       TPS     Eells   [12]
Mean error for the full head in mm     2.90    2.16    2.15
Mean error for the interior in mm      1.49    1.17    1.36
The proposed manifold-adapted Eells approach outperforms the TPS method, especially in regions
of high curvature such as around the nose as the error heatmaps in Fig. 3 show. Compared to [12],
our method finds a smoother, more plausible solution, also on large texture-less areas such as the
forehead or the cheeks.
References
[1] M. Belkin and P. Niyogi. Semi-supervised learning on manifolds. Machine Learning, 56:209-239, 2004.
[2] K. V. Mardia and P. E. Jupp. Directional statistics. Wiley, New York, 2000.
[3] A. Srivastava. A Bayesian approach to geometric subspace estimation. IEEE Trans. Sig. Proc., 48(5):1390-1400, 2000.
[4] I. U. Rahman, I. Drori, V. C. Stodden, D. L. Donoho, and P. Schröder. Multiscale representations for manifold-valued data. Multiscale Mod. and Sim., 4(4):1201-1232, 2005.
[5] F. Mémoli, G. Sapiro, and S. Osher. Solving variational problems and partial differential equations mapping into general target manifolds. J. Comp. Phys., 195(1):263-292, 2004.
[6] F. Steinke, M. Hein, J. Peters, and B. Schölkopf. Manifold-valued Thin-Plate Splines with Applications in Computer Graphics. Computer Graphics Forum, 27(2):437-448, 2008.
[7] M. Hofer and H. Pottmann. Energy-minimizing splines in manifolds. ACM ToG, 23:284-293, 2004.
[8] B. C. Davis, P. T. Fletcher, E. Bullitt, and S. Joshi. Population shape regression from random design data. Proc. of IEEE Int. Conf. Computer Vision (ICCV), pages 1-7, 2007.
[9] M. Kilian, N. J. Mitra, and H. Pottmann. Geometric modeling in shape space. ACM ToG, 26(3), 2007.
[10] M. Hein, F. Steinke, and B. Schölkopf. Energy functionals for manifold-valued mappings and their properties. Technical Report 167, MPI for Biological Cybernetics, 2008.
[11] J. Eells and L. Lemaire. Selected topics in harmonic maps. AMS, Providence, RI, 1983.
[12] B. Schölkopf, F. Steinke, and V. Blanz. Object correspondence as a machine learning problem. In Proc. of the Int. Conf. on Machine Learning (ICML), pages 777-784, 2005.
[13] V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In SIGGRAPH'99 Conference Proceedings, pages 187-194, Los Angeles, 1999. ACM Press.
Regularized Co-Clustering with Dual Supervision
Vikas Sindhwani
Jianying Hu
Aleksandra Mojsilovic
IBM Research, Yorktown Heights, NY 10598
{vsindhw, jyhu, aleksand}@us.ibm.com
Abstract
By attempting to simultaneously partition both the rows (examples) and columns
(features) of a data matrix, co-clustering algorithms often demonstrate surprisingly impressive performance improvements over traditional one-sided row clustering techniques. A good clustering of
transformation of the data matrix, effectively enforcing a form of regularization
that may lead to a better clustering of examples (and vice-versa). In many applications, partial supervision in the form of a few row labels as well as column labels
may be available to potentially assist co-clustering. In this paper, we develop two
novel semi-supervised multi-class classification algorithms motivated respectively
by spectral bipartite graph partitioning and matrix approximation formulations for
co-clustering. These algorithms (i) support dual supervision in the form of labels
for both examples and/or features, (ii) provide principled predictive capability on
out-of-sample test data, and (iii) arise naturally from the classical Representer
theorem applied to regularization problems posed on a collection of Reproducing
Kernel Hilbert Spaces. Empirical results demonstrate the effectiveness and utility
of our algorithms.
1
Introduction
Consider the setting where we are given large amounts of unlabeled data together with dual supervision in the form of a few labeled examples as well as a few labeled features, and the goal
is to estimate an unknown classification function. This setting arises naturally in numerous applications. Imagine, for example, the problem of inferring sentiment (?positive? versus ?negative?)
associated with presidential candidates from online political blog posts represented as word vectors,
given the following: (a) a vast collection of blog posts easily downloadable from the web (unlabeled
examples), (b) a few blog posts whose sentiment for a candidate is manually identified (labeled examples), and (c) prior knowledge of words that reflect positive (e.g., ?superb?) and negative (e.g,
?awful?) sentiment (labeled features). Most existing semi-supervised algorithms do not explicitly
incorporate feature supervision. They typically implement the cluster assumption [3] by learning
decision boundaries such that unlabeled points belonging to the same cluster are given the same
label, and empirical loss over labeled examples is concurrently minimized. In situations where the
classes are predominantly supported on unknown subsets of similar features, it is clear that feature
supervision can potentially illuminate the true cluster structure inherent in the unlabeled examples
over which the cluster assumption ought to be enforced.
Even when feature supervision is not available, there is ample empirical evidence in numerous recent
papers in the co-clustering literature (see e.g., [5, 1] and references therein), suggesting that the
clustering of columns (features) of a data matrix can lead to massive improvements in the quality
of row (examples) clustering. An intuitive explanation is that column clustering enforces a form of
dimensional reduction or implicit regularization that is responsible for performance enhancements
observed in many applications such as text clustering, microarray data analysis and video content
mining [1]. In this paper, we utilize data-dependent co-clustering regularizers for semi-supervised
learning in the presence of partial dual supervision.
Our starting point is the spectral bipartite graph partitioning approach of [5] which we briefly review
in Section 2.1. This approach effectively applies spectral clustering on a graph representation of
the data matrix and is also intimately related to Singular Value Decomposition. In Section 2.2 we
review an equivalence between this approach and a matrix approximation objective function that
is minimized under orthogonality constraints [6]. By dropping the orthogonality constraints but
imposing non-negativity constraints, one is led to a large family of co-clustering algorithms that
arise from the non-negative matrix factorization literature.
Based on the algorithmic intuitions embodied in the algorithms above, we develop two semisupervised classification algorithms that extend the spectral bipartite graph partitioning approach
and the matrix approximation approach respectively. We start with Reproducing Kernel Hilbert
Spaces (RKHSs) defined over both row and column spaces. These RKHSs are then coupled through
co-clustering regularizers. In the first algorithm, we directly adopt graph Laplacian regularizers
constructed from the bipartite graph of [5] and include it as a row and column smoothing term in
the standard regularization objective function. The solution is obtained by solving a convex optimization problem. This approach may be viewed as a modification of the Manifold Regularization
framework [2] where we now jointly learn row and column classification functions. In the second
algorithm proposed in this paper, we instead add a (non-convex) matrix approximation term to the
objective function, which is then minimized using a block-coordinate descent procedure.
Unlike, their unsupervised counterparts, our methods support dual supervison and naturally possess
out-of-sample extension. In Section 4, we provide experimental results where we compare against
various baseline approaches, and highlight the performance benefits of feature supervision.
2 Co-Clustering Algorithms
Let X denote the data matrix with n data points and d features. The methods that we discuss in this section output a row partition function $\rho_r : \{i\}_{i=1}^{n} \mapsto \{j\}_{j=1}^{m_r}$ and a column partition function $\rho_c : \{i\}_{i=1}^{d} \mapsto \{j\}_{j=1}^{m_c}$ that give cluster assignments to row and column indices respectively. Here, $m_r$ is the desired number of row clusters and $m_c$ is the desired number of column clusters. Below, by $x_i$ we mean the i-th example (row) and by $f_j$ we mean the j-th column (feature) in the data matrix.
2.1 Bipartite Graph Partitioning
In the co-clustering technique introduced by [5], the data matrix is modeled as a bipartite graph
with examples (rows) as one set of nodes and features (columns) as another. An edge (i, j) exists
if feature fj assumes a non-zero value in example xi , in which case the edge is given a weight of
Xij . This bi-partite graph is undirected and there are no inter-example or inter-feature edges. The
adjacency matrix, W, and the normalized Laplacian [4], M, of this graph are given by

$$W = \begin{pmatrix} 0 & X \\ X^T & 0 \end{pmatrix}, \qquad M = I - D^{-\frac{1}{2}} W D^{-\frac{1}{2}}, \qquad (1)$$

where D is the diagonal degree matrix defined by $D_{ii} = \sum_j W_{ij}$ and I is the $(n + d) \times (n + d)$
identity matrix. Guided by the premise that column clustering induces row clustering while row
clustering induces column clustering, [5] propose to find an optimal partitioning of the nodes of the
bipartite graph. This method is restricted to obtaining co-clusterings where $m_r = m_c = m$. The m-partitioning is obtained by minimizing the relaxation of the normalized cut objective function using
standard spectral clustering techniques. This reduces to first constructing a spectral representation of
rows and columns given by the smallest eigenvectors of M, and then performing standard k-means
clustering on this representation, to finally obtain the partition functions $\rho_r$, $\rho_c$. Due to the special
structure of Eqn. 1, it can be shown that the spectral representation used in this algorithm is related
to the singular vectors of a normalized version of X.
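A compact sketch of this pipeline (our own numpy/scipy rendering; [5] selects roughly $\log_2 m$ singular vectors, which we simplify here to $m - 1$):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def bipartite_cocluster(X, m, seed=0):
    """Spectral co-clustering of the rows and columns of a non-negative X."""
    d1 = np.maximum(X.sum(axis=1), 1e-12)          # row degrees
    d2 = np.maximum(X.sum(axis=0), 1e-12)          # column degrees
    Xn = X / np.sqrt(d1)[:, None] / np.sqrt(d2)[None, :]
    U, s, Vt = np.linalg.svd(Xn, full_matrices=False)
    # Smallest eigenvectors of M correspond to top singular vectors of Xn;
    # drop the trivial leading pair and keep the next m - 1.
    Zr = U[:, 1:m] / np.sqrt(d1)[:, None]
    Zc = Vt[1:m].T / np.sqrt(d2)[:, None]
    Z = np.vstack([Zr, Zc])                        # joint spectral embedding
    _, labels = kmeans2(Z, m, minit='++', seed=seed)
    return labels[:X.shape[0]], labels[X.shape[0]:]

rows, cols = bipartite_cocluster(np.random.rand(40, 30), m=3)
```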
2.2 Matrix Approximation Formulation
In [6] it is shown that the bipartite spectral graph partitioning is closely related to solving the following matrix approximation problem: $(F_r^*, F_c^*) = \arg\min_{F_r^T F_r = I,\, F_c^T F_c = I} \|X - F_r F_c^T\|_{fro}$, where $F_r$ is an $n \times m$ matrix and $F_c$ is a $d \times m$ matrix. Once the minimization is performed, $\rho_r(i) = \arg\max_j F^*_{r,ij}$ and $\rho_c(i) = \arg\max_j F^*_{c,ij}$. In a non-negative matrix factorization approach,
the orthogonality constraints are dropped to make the optimization easier, while non-negativity constraints $F_r, F_c \geq 0$ are introduced with the goal of lending better interpretability to the solutions. There are numerous multiplicative update algorithms for NMF which essentially have the flavor of alternating non-convex optimization. In our empirical comparisons in Section 4, we use the Alternating Constrained Least Squares (ACLS) approach of [12]. In Section 3.2 we consider a 3-factor non-negative matrix approximation to incorporate unequal values of $m_r$ and $m_c$, and to improve the quality of the approximation. See [7, 13] for more details on matrix tri-factorization based formulations for co-clustering.
3 Objective Functions for Regularized Co-clustering with Dual Supervision
Let us consider examples x to be elements of $R \subseteq \mathbb{R}^d$. We consider column values f for each feature to be a data point in $C \subseteq \mathbb{R}^n$. Our goal is to learn partition functions defined over the entire row and column spaces (as opposed to matrix indices), i.e., $\rho_r : R \mapsto \{i\}_{i=1}^{m_r}$ and $\rho_c : C \mapsto \{i\}_{i=1}^{m_c}$. For this purpose, let us introduce $k_r : R \times R \to \mathbb{R}$ to be the row kernel that defines an associated RKHS $\mathcal{H}_r$. Similarly, $k_c : C \times C \to \mathbb{R}$ denotes the column kernel whose associated RKHS is $\mathcal{H}_c$. Below, we define $\rho_r$, $\rho_c$ using these real-valued function spaces.
Consider a simultaneous assignment of rows into $m_r$ classes and columns into $m_c$ classes. For any data point x, denote $F_r(x) = [f_r^1(x) \cdots f_r^{m_r}(x)]^T \in \mathbb{R}^{m_r}$ to be a vector whose elements are soft class assignments, where $f_r^j \in \mathcal{H}_r$ for all j. For the given n data points, denote $F_r$ to be the $n \times m_r$ class assignment matrix. Correspondingly, $F_c(f)$ is defined for any feature $f \in C$, and $F_c$ denotes the associated column class assignment matrix. Additionally, we are given dual supervision in the form of label matrices $Y_r \in \mathbb{R}^{n \times m_r}$ and $Y_c \in \mathbb{R}^{d \times m_c}$, where $Y_{r,ij} = 1$ specifies that the i-th example is labeled with class j (similarly for the feature label matrix $Y_c$). The associated row sum for a labeled point is 1. Unlabeled points have all-zero rows, and the row sums are therefore 0. Let $J_r$ ($J_c$) denote a diagonal matrix of size $n \times n$ ($d \times d$) whose diagonal entry is 1 for labeled examples (features) and 0 otherwise. By $I_s$ we will denote an identity matrix of size $s \times s$. We use the notation $\mathrm{tr}(A)$ to mean the trace of the matrix A.
3.1 Manifold Regularization with Bipartite Graph Laplacian (MR)
In this approach, we set up the following optimization problem:

$$\operatorname*{arg\,min}_{F_r \in \mathcal{H}_r^{m_r},\, F_c \in \mathcal{H}_c^{m_c}} \; \frac{\gamma_r}{2}\sum_{i=1}^{m_r} \|f_r^i\|^2_{\mathcal{H}_r} + \frac{\gamma_c}{2}\sum_{i=1}^{m_c} \|f_c^i\|^2_{\mathcal{H}_c} + \frac{1}{2}\,\mathrm{tr}\big[(F_r - Y_r)^T J_r (F_r - Y_r)\big] \\ + \frac{1}{2}\,\mathrm{tr}\big[(F_c - Y_c)^T J_c (F_c - Y_c)\big] + \frac{\mu}{2}\,\mathrm{tr}\left( \begin{pmatrix} F_r \\ F_c \end{pmatrix}^{\!T} M \begin{pmatrix} F_r \\ F_c \end{pmatrix} \right) \qquad (2)$$
The first two terms impose the usual RKHS norm on the class indicator functions for rows and columns. The middle two terms measure squared loss on labeled data. The final term measures smoothness of the row and column indicator functions with respect to the bipartite graph introduced in Section 2.1. This term also incorporates unlabeled examples and features. $\gamma_r, \gamma_c, \mu$ are real-valued parameters that trade off the various regularization terms.
Clearly, by the Representer Theorem the solution has the form

$$f_r^j(x) = \sum_{i=1}^{n} \alpha_{ij}\, k_r(x, x_i), \;\; 1 \leq j \leq m_r, \qquad f_c^j(f) = \sum_{i=1}^{d} \beta_{ij}\, k_c(f, f_i), \;\; 1 \leq j \leq m_c. \qquad (3)$$
Let $\alpha, \beta$ denote the corresponding optimal expansion coefficient matrices. Then, plugging in Eqn. 3 and solving the optimization problem, the solution is easily seen to be given by

$$\left( \begin{pmatrix} \gamma_r I_n & 0 \\ 0 & \gamma_c I_d \end{pmatrix} + \mu M \begin{pmatrix} K_r & 0 \\ 0 & K_c \end{pmatrix} + \begin{pmatrix} J_r K_r & 0 \\ 0 & J_c K_c \end{pmatrix} \right) \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} Y_r \\ Y_c \end{pmatrix} \qquad (4)$$
are then defined by
?r (x) = argmax
n
X
?ij kr (x, xi ),
?c (f ) = argmax
1?j?m i=1
d
X
?ij kc (f , fi )
(5)
1?j?m i=1
As in Section 2.1, we assume $m_r = m_c = m$. If the linear system above is solved by explicitly computing the matrix inverse, the computational cost is $O((n + d)^3 + (n + d)^2 m)$. This approach is closely related to the Manifold Regularization framework of [2], and may be viewed as a modification of the Laplacian Regularized Least Squares (LAPRLS) algorithm, which uses a Euclidean nearest neighbor row similarity graph to capture the manifold structure in the data. Instead of using the squared loss, one can develop variants using the SVM hinge loss or the logistic loss function. One can also use a large family of graph regularizers derived from the graph Laplacian [3]. In particular, we use the iterated Laplacian of the form $M^p$ where p is an integer.
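A direct implementation of the closed-form solve in Eqn. 4 is straightforward (our own sketch, with Gaussian Gram matrices standing in for $k_r$, $k_c$; it assumes a non-negative data matrix X so that the bipartite graph of Eqn. 1 is well defined):

```python
import numpy as np

def gram(A, B, sigma):
    """Gaussian Gram matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mr_cocluster(X, Yr, Yc, gr=1e-2, gc=1e-2, mu=1e-1, sigma=1.0):
    """Solve Eqn. (4); Yr (n, m) and Yc (d, m) have all-zero rows when unlabeled."""
    n, d = X.shape
    W = np.block([[np.zeros((n, n)), X], [X.T, np.zeros((d, d))]])
    dh = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    M = np.eye(n + d) - dh[:, None] * W * dh[None, :]      # normalized Laplacian
    Kr, Kc = gram(X, X, sigma), gram(X.T, X.T, sigma)      # row / column kernels
    K = np.block([[Kr, np.zeros((n, d))], [np.zeros((d, n)), Kc]])
    j = np.concatenate([np.abs(Yr).sum(1) > 0, np.abs(Yc).sum(1) > 0]).astype(float)
    G = np.diag(np.concatenate([gr * np.ones(n), gc * np.ones(d)]))
    lhs = G + mu * M @ K + j[:, None] * K                  # j[:, None] * K = diag(J) @ K
    ab = np.linalg.solve(lhs, np.vstack([Yr, Yc]))         # stacked [alpha; beta]
    alpha, beta = ab[:n], ab[n:]
    return alpha, beta, Kr, Kc  # row labels: (Kr @ alpha).argmax(1), cf. Eqn. (5)
```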
3.2 Matrix Approximation under Dual Supervision (MA)
We now consider an alternative objective function where, instead of the graph Laplacian regularizer, we add a penalty term that measures how well the data matrix is approximated by a tri-factorization $F_r Q F_c^T$:

$$\operatorname*{arg\,min}_{\substack{F_r \in \mathcal{H}_r^{m_r},\; F_c \in \mathcal{H}_c^{m_c} \\ Q \in \mathbb{R}^{m_r \times m_c}}} \; \frac{\gamma_r}{2}\sum_{i=1}^{m_r} \|f_r^i\|^2_{\mathcal{H}_r} + \frac{\gamma_c}{2}\sum_{i=1}^{m_c} \|f_c^i\|^2_{\mathcal{H}_c} + \frac{1}{2}\,\mathrm{tr}\big[(F_r - Y_r)^T J_r (F_r - Y_r)\big] \\ + \frac{1}{2}\,\mathrm{tr}\big[(F_c - Y_c)^T J_c (F_c - Y_c)\big] + \frac{\mu}{2}\,\|X - F_r Q F_c^T\|^2_{fro} \qquad (6)$$
As before, the first two terms above enforce smoothness, the third and fourth terms measure squared
loss over labels and the final term enforces co-clustering. The classical Representer Theorem
(Eqn. 3) can again be applied since the above objective function only depends on point evaluations and RKHS norms of functions in $\mathcal{H}_r$, $\mathcal{H}_c$. The optimal expansion coefficient matrices, $\alpha, \beta$, in this case are obtained by solving

$$\operatorname*{arg\,min}_{\alpha, \beta, Q}\; J(\alpha, \beta, Q) = \frac{\gamma_r}{2}\,\mathrm{tr}\big(\alpha^T K_r \alpha\big) + \frac{\gamma_c}{2}\,\mathrm{tr}\big(\beta^T K_c \beta\big) + \frac{1}{2}\,\mathrm{tr}\big[(K_r\alpha - Y_r)^T J_r (K_r\alpha - Y_r)\big] \\ + \frac{1}{2}\,\mathrm{tr}\big[(K_c\beta - Y_c)^T J_c (K_c\beta - Y_c)\big] + \frac{\mu}{2}\,\|X - K_r \alpha Q \beta^T K_c\|^2_{fro} \qquad (7)$$
This problem is not convex in $\alpha, \beta, Q$. We propose a block coordinate descent algorithm for the problem above. Keeping two variables fixed, the optimization over the other is a convex problem
with a unique solution. This guarantees monotonic decrease of the objective function and convergence to a stationary point. We get the simple update equations given below:

$$\frac{\partial J}{\partial Q} = 0 \;\Longrightarrow\; Q = (\alpha^T K_r^2 \alpha)^{-1} (\alpha^T K_r X K_c \beta)(\beta^T K_c^2 \beta)^{-1} \qquad (8)$$
$$\frac{\partial J}{\partial \alpha} = 0 \;\Longrightarrow\; [\gamma_r I_n + J_r K_r]\,\alpha + \mu K_r \alpha Z_c = J_r Y_r + \mu X K_c \beta Q^T \qquad (9)$$
$$\frac{\partial J}{\partial \beta} = 0 \;\Longrightarrow\; [\gamma_c I_d + J_c K_c]\,\beta + \mu K_c \beta Z_r = J_c Y_c + \mu X^T K_r \alpha Q \qquad (10)$$
$$\text{where } Z_c = Q \beta^T K_c^2 \beta Q^T, \qquad Z_r = Q^T \alpha^T K_r^2 \alpha Q \qquad (11)$$
In Eqn. 8, we assume that the appropriate matrix inverses exist. Eqns. 9 and 10 are generalized Sylvester matrix equations of the form $AXB^\top + CXD^\top = E$, whose unique solution X under certain regularity conditions can be exactly obtained by an extended version of the classical Bartels-Stewart method [9], whose complexity is $O((p+q)^3)$ for a $p \times q$-sized matrix variable X. Alternatively, one can solve the linear system [10]: $(B^\top \otimes A + D^\top \otimes C)\,\mathrm{vec}(X) = \mathrm{vec}(E)$, where $\otimes$ denotes the Kronecker product and $\mathrm{vec}(X)$ vectorizes X in a column-oriented way (it behaves as the matlab operator X(:)). Thus, the solutions to Eqns. (9, 10) are as follows:

$$[I_{m_r} \otimes (\gamma_r I_n + J_r K_r) + \mu Z_c \otimes K_r]\,\mathrm{vec}(\alpha) = \mathrm{vec}(J_r Y_r + \mu X K_c \beta Q^T) \qquad (12)$$
$$[I_{m_c} \otimes (\gamma_c I_d + J_c K_c) + \mu Z_r \otimes K_c]\,\mathrm{vec}(\beta) = \mathrm{vec}(J_c Y_c + \mu X^T K_r \alpha Q) \qquad (13)$$
These linear systems are of size nm_r × nm_r and dm_c × dm_c respectively. It is computationally prohibitive to solve these systems by direct matrix inversion. We instead use an iterative conjugate gradient (CG) technique, which can exploit hot-starts from the previous solution and the fact that the matrix-vector products can be computed relatively efficiently as follows:

[I_{m_r} ⊗ (λ_r I_n + J_r K_r) + μ Z_c ⊗ K_r] vec(α) = vec(μ K_r α Z_c) + λ_r vec(α) + vec(J_r K_r α)
To optimize α (β) given fixed Q and β (α), we run CG with a stringent tolerance of 10^{-10} and a maximum of 200 iterations, starting from the α (β) of the previous iteration. In an outer loop, we monitor the relative decrease in the objective function and terminate when the relative improvement falls below 0.0001. We use a maximum of 40 outer iterations, where each iteration performs one round of α, β, Q optimization. Empirically, we find that the block coordinate descent approach often converges surprisingly quickly (see Section 4.2). The final classification is given by Eqn. 5.
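To illustrate one round of these updates, here is a minimal NumPy/SciPy sketch of the closed-form Q update (Eqn. 8) and the Kronecker-structured CG solve for α (Eqns. 9 and 12); all names are ours, dense arrays are assumed, and since J_r K_r need not be symmetric one might substitute GMRES for CG in practice:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def update_Q(alpha, beta, K_r, K_c, X):
    # Eqn. 8, assuming the required inverses exist.
    A, B = K_r @ alpha, K_c @ beta
    return np.linalg.solve(A.T @ A, A.T @ X @ B) @ np.linalg.inv(B.T @ B)

def solve_alpha(K_r, J_r, Y_r, K_c, beta, Q, X, lam_r, mu, alpha0):
    # Iterative solve of Eqn. 9 via Eqn. 12, never forming the nm_r x nm_r matrix.
    n, m_r = alpha0.shape
    Z_c = Q @ beta.T @ K_c @ K_c @ beta @ Q.T          # Eqn. 11
    def matvec(v):
        A = v.reshape((n, m_r), order="F")             # column-major, like vec()
        out = lam_r * A + J_r @ (K_r @ A) + mu * (K_r @ A) @ Z_c
        return out.reshape(-1, order="F")
    op = LinearOperator((n * m_r, n * m_r), matvec=matvec)
    rhs = (J_r @ Y_r + mu * X @ K_c @ beta @ Q.T).reshape(-1, order="F")
    # The paper uses CG; scipy.sparse.linalg.gmres is an alternative here.
    sol, _ = cg(op, rhs, x0=alpha0.reshape(-1, order="F"), maxiter=200)
    return sol.reshape((n, m_r), order="F")
```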
4 Empirical Study
In this section, we present an empirical study aimed at comparing the proposed algorithms with several baselines: (i) unsupervised co-clustering with spectral bipartite graph partitioning (BIPARTITE) and non-negative matrix factorization (NMF), (ii) supervised performance of standard regularized least squares classification (RLS), which ignores unlabeled data, and (iii) one-sided semi-supervised performance obtained with Laplacian RLS (LAPRLS), which uses a Euclidean nearest-neighbor row similarity graph. The goal is to observe whether dual supervision, particularly along features, can help improve classification performance, and whether joint RKHS regularization along both rows and columns, as formulated in our algorithms (abbreviated MR for the manifold regularization based method of Section 3.1 and MA for the matrix approximation method of Section 3.2), leads to good-quality out-of-sample prediction. In the experiments below, the performance of RLS and LAPRLS is
optimized for best performance on the unlabeled set over a grid of hyperparameters. We use Gaussian kernels with width σ_r for rows and σ_c for columns. These were set to 2^k σ_r^0 and 2^k σ_c^0 respectively, where σ_r^0, σ_c^0 are the (1/m)-quantiles of pairwise Euclidean distances among rows and columns respectively for an m-class problem, and k is tuned over {−2, −1, 0, 1, 2} to optimize 3-fold cross-validation performance of fully supervised RLS. The values λ_r, λ_c, μ are loosely tuned for MA and MR with respect to a single random split of the data into training and validation sets; more careful hyperparameter tuning may further improve the results presented below.
We focus on performance in predicting row labels. To enable comparison with the unsupervised co-clustering methods, we use the popular F-measure defined on pairs of examples, as follows:
Precision = (Number of Pairs Correctly Predicted) / (Number of Pairs Predicted to be in the Same Cluster or Class)
Recall = (Number of Pairs Correctly Predicted) / (Number of Pairs in the Same Cluster or Class)
F-measure = (2 × Precision × Recall) / (Precision + Recall)        (14)
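A minimal NumPy sketch of this pairwise F-measure (the function and variable names are ours):

```python
import numpy as np

def pairwise_f_measure(pred, true):
    # Evaluate Eqn. 14 over all pairs (i, j), i < j.
    pred, true = np.asarray(pred), np.asarray(true)
    same_pred = pred[:, None] == pred[None, :]
    same_true = true[:, None] == true[None, :]
    iu = np.triu_indices(len(pred), k=1)
    tp = np.sum(same_pred[iu] & same_true[iu])       # pairs correctly predicted
    precision = tp / max(np.sum(same_pred[iu]), 1)   # predicted same cluster/class
    recall = tp / max(np.sum(same_true[iu]), 1)      # actually same cluster/class
    return 2 * precision * recall / max(precision + recall, 1e-12)
```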
4.1 A Toy Dataset
We generated a toy 2-class dataset with 200 examples per class and 100 features to demonstrate the
main observations. The feature vector for a positive example is of the form [2u − 0.1, 2u + 0.1], and for a negative example of the form [2u + 0.1, 2u − 0.1], where u is a 50-dimensional random vector whose entries are uniformly distributed over the unit interval. It is clear that there is substantial overlap between
the two classes. Given a column partitioning π_c, consider the transformation

T(x) = ( Σ_{i: π_c(i)=1} x_i / |{i : π_c(i) = 1}| ,  Σ_{i: π_c(i)=−1} x_i / |{i : π_c(i) = −1}| )

that maps examples in R^100 to the plane R^2 by composing a single feature whose value equals the mean of all features in the same partition. For the correct column partitioning, π_c(i) = 1 for 1 ≤ i ≤ 50 and π_c(i) = −1 for 50 < i ≤ 100, the examples under the
action of T are shown in Figure 1 (left). It is clear that T renders the data almost separable. It is therefore natural to attempt to (effectively) learn T in a semi-supervised manner. In Figure 1 (right), we plot the learning curves of various algorithms with respect to increasing numbers of row and column labels. On this dataset, the co-clustering techniques (BIPARTITE, NMF) perform fairly well, and even significantly better than RLS, which has an optimized F-measure of 67% with 25 row labels. With increasing amounts of column labels, the learning curves of MR and MA steadily lift, eventually outperforming the unsupervised techniques. The hyperparameters used in this experiment are: σ_r = 2.1, σ_c = 4.1, λ_r = λ_c = 0.001, and μ = 10 for MR and 0.001 for MA.
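As an illustration, a minimal NumPy sketch of the transformation T (our names; pi_c holds the ±1 column partition):

```python
import numpy as np

def toy_transform(X, pi_c):
    # X: (n_examples, 100) data matrix; pi_c: length-100 array of +/-1 column labels.
    pos, neg = pi_c == 1, pi_c == -1
    # Each example maps to the mean of its features within each column partition.
    return np.column_stack([X[:, pos].mean(axis=1), X[:, neg].mean(axis=1)])
```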
Figure 1: left: Examples in the toy dataset under the transformation defined by the correct column partitioning. right: Performance comparison (pairwise F-measure versus number of row labels, 5-25); the MR and MA curves are marked with the number of column labels used, and BIPARTITE and NMF appear as unsupervised baselines.
4.2 Text Categorization
We performed experiments on document-word matrices drawn from the 20-newsgroups dataset, preprocessed as in [15]. The preprocessed data has been made publicly available by the authors of [15]^1.
For each word w and class c, we computed a score as follows:

score(w, c) = −P(Y = c) log P(Y = c) + P(W = w) P(Y = c | W = w) log P(Y = c | W = w) + P(W ≠ w) P(Y = c | W ≠ w) log P(Y = c | W ≠ w),

where P(Y = c) is the fraction of documents whose category is c, P(W = w) is the fraction of times word w is encountered, and P(Y = c | W = w) (P(Y = c | W ≠ w)) is the fraction of documents with class c when w is present (absent). It is easy to see that the mutual information between the indicator random variable for w and the class variable is Σ_c score(w, c). We simulated manual labeling of words by associating w with the class argmax_c score(w, c). Finally, we restricted attention to 631 words
with highest overall mutual information and 2000 documents that belong to the following 5 classes:
comp.graphics, rec.motorcycles, rec.sport.baseball, sci.space, talk.politics.mideast. Since words of
talk.politics.mideast accounted for more than half the vocabulary, we used the class normalization
prescribed in [11] to handle the imbalance in the labeled data.
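For concreteness, a minimal NumPy sketch of this score computation, assuming a binary document-word presence matrix (all names are ours):

```python
import numpy as np

def word_class_scores(X, y, n_classes, eps=1e-12):
    # X: (n_docs, n_words) matrix; y: array of document class ids.
    X = (X > 0).astype(float)              # word presence indicators
    n_docs = X.shape[0]
    p_w = X.mean(axis=0)                   # P(W = w)
    n_w = X.sum(axis=0)                    # number of docs containing w
    scores = np.zeros((X.shape[1], n_classes))
    for c in range(n_classes):
        mask = (y == c)
        p_c = mask.mean()                                       # P(Y = c)
        p_c_w = X[mask].sum(axis=0) / np.maximum(n_w, eps)      # P(Y=c | W=w)
        p_c_nw = (mask.sum() - X[mask].sum(axis=0)) / np.maximum(n_docs - n_w, eps)
        scores[:, c] = (-p_c * np.log(p_c + eps)
                        + p_w * p_c_w * np.log(p_c_w + eps)
                        + (1 - p_w) * p_c_nw * np.log(p_c_nw + eps))
    return scores   # simulated word label: scores.argmax(axis=1)
```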
Results presented in Table 1 are averaged over 10 runs. In each run, we randomly split the documents
into training and test sets, in the ratio 1 : 3. The training set is then further split into labeled
and unlabeled sets by randomly selecting 75 labeled documents. We experimented with increasing numbers of randomly chosen word labels. The hyperparameters are as follows: σ_r = 0.43, σ_c = 0.69, λ_r = λ_c = μ = 1 for MR, and λ_r = λ_c = 0.0001, μ = 0.01 for MA.
We observe that even without any word supervision, MR outperforms all the baseline approaches: unsupervised co-clustering with BIPARTITE and NMF, standard RLS that only uses labeled documents, and also LAPRLS, which uses a graph Laplacian based on document similarity for semi-supervised learning. This validates the effectiveness of the bipartite document-word graph regularizer. As the amount of word supervision increases, the performance of both MR and MA improves gracefully. The out-of-sample extension to test data is of good quality, considering that our test sets are much larger than our training sets. We also observed that the mean number of (outer) iterations required for convergence of MA decreases as word labels are increased from 0 to 500: 28.7 (0), 12.2 (100), 12.7 (200), 9.3 (350), 7.8 (500). In Figure 2, we show the top unlabeled words

^1 At http://www.princeton.edu/~nslonim/data/20NG_data_74000.mat.gz
Table 1: Performance on 5-Newsgroups Dataset with 75 row labels

(a) F-measure on Unlabeled Set              (b) F-measure on Test Set
BIPARTITE   NMF         RLS         LAPRLS      RLS         LAPRLS
54.8 (7.8)  54.4 (6.2)  62.2 (3.1)  62.5 (3.0)  61.2 (1.7)  61.9 (1.4)

(c) F-measure on Unlabeled Set
# col labs   MR           MA
0            64.7 (1.3)   60.4 (5.6)
100          72.3 (2.2)   59.6 (5.7)
200          77.0 (2.5)   69.2 (7.1)
350          78.6 (2.1)   75.1 (4.1)
500          79.3 (1.6)   77.1 (5.8)

(d) F-measure on Test Set
# col labs   MR           MA
0            57.1 (2.1)   60.3 (7.0)
100          60.9 (2.4)   60.9 (5.0)
200          66.2 (2.8)   66.2 (6.2)
350          68.1 (1.9)   70.3 (4.4)
500          69.1 (2.4)   71.0 (6.0)
for each class, sorted by the real-valued prediction score assigned by MR (in one run trained with 100 labeled words). Intuitively, the main words associated with each class are retrieved.
Figure 2: Top unlabeled words categorized by MR
COMP.GRAPHICS: polygon, gifs, conversion, shareware, graphics, rgb, vesa, viewers, gif, format, viewer, amiga, raster, ftp, jpeg, manipulation
REC.MOTORCYCLES: biker, archive, dogs, yamaha, plo, wheel, riders, motorcycle, probes, ama, rockies, neighbors, saudi, kilometers
REC.SPORT.BASEBALL: clemens, morris, pitched, hr, batters, dodgers, offense, reds, rbi, wins, mets, innings, ted, defensive, sox, inning
SCI.SPACE: oo, servicing, solar, scispace, scheduled, atmosphere, missions, telescope, bursts, orbiting, energy, observatory, island, hst, dark
TALK.POLITICS.MIDEAST: turkish, greek, turkey, hezbollah, armenia, territory, ohanus, appressian, sahak, melkonian, civilians, greeks

4.3 Project Categorization
We also considered a problem that arises in a real business-intelligence setting. The dataset is composed of 1169 projects tracked by the Integrated Technology Services division of IBM. These projects need to be categorized into 8 predefined product categories within IBM's Server Services product line, with the eventual goal of performing various follow-up business analyses at the granularity of categories. Each project is represented as a 112-dimensional vector specifying the distribution of skills required for its delivery; each feature is therefore associated with a particular job role/skill set (JR/SS) combination, e.g., "data-specialist (oracle database)". Domain experts validated project (row) labels and additionally provided category labels for 25 features deemed to be important skills for delivering projects in the corresponding category. By demonstrating our algorithms on this dataset, we are able to validate a general methodology with which to approach project categorization across all service product lines (SPLs) on a regular basis. The amount of dual supervision available in other SPLs is indeed severely limited, as both the project categories and skill definitions are constantly evolving due to the highly dynamic business environment.
Results presented in Table 2 are averaged over 10 runs. In each run, we randomly split the projects into training and test sets, in the ratio 3 : 1. The training set is then further split into labeled and unlabeled sets by randomly selecting 30 labeled projects. We experimented with increasing numbers of randomly chosen column labels, from none to all 25 available labels. The hyperparameters are as follows: λ_r = λ_c = 0.0001, σ_r = 0.69, σ_c = 0.27, chosen as described earlier. Results in Tables 2(c), 2(d) are obtained with μ = 10 for MR and μ = 0.001 for MA.
We observe that BIPARTITE performs significantly better than NMF on this dataset, and is competitive with the supervised RLS performance that relies only on labeled data. By using LAPRLS, performance can be slightly boosted. We find that MR outperforms all approaches significantly, even with very few column labels. We conjecture that the comparatively lower mean and higher variance in the performance of MA on this dataset are due to suboptimal local minima, which may be alleviated using annealing techniques or multiple random starts, as commonly used for Transductive SVMs [3]. From Tables 2(c), 2(d) we also observe that both methods give a high-quality out-of-sample extension on this problem.
Table 2: Performance on IBM Project Categorization Dataset with 30 row labels

(a) F-measure on Unlabeled Set              (b) F-measure on Test Set
BIPARTITE   NMF         RLS         LAPRLS      RLS         LAPRLS
89.1 (2.7)  56.5 (1.1)  88.1 (7.3)  90.2 (5.8)  87.8 (8.4)  90.2 (6.0)

(c) F-measure on Unlabeled Set
# col labs   MR           MA
0            92.7 (4.6)   90.7 (4.8)
5            94.9 (1.8)   87.8 (6.4)
10           93.0 (4.2)   89.0 (8.0)
15           92.3 (7.0)   89.1 (7.4)
25           98.0 (0.5)   92.2 (6.0)

(d) F-measure on Test Set
# col labs   MR           MA
0            89.2 (5.5)   90.0 (5.5)
5            93.3 (1.7)   87.4 (6.6)
10           91.9 (4.2)   89.1 (8.3)
15           92.2 (5.2)   89.2 (8.8)
25           96.4 (1.6)   92.1 (6.8)

5 Conclusion
We have developed semi-supervised kernel methods that support partial supervision along both dimensions of the data. Empirical studies show promising results and highlight the previously untapped benefits of feature supervision in semi-supervised settings. For an application of closely
related algorithms to blog sentiment classification, we point the reader to [14]. For recent work on
text categorization with labeled features instead of labeled examples, see [8].
References
[1] A. Banerjee, I. Dhillon, J. Ghosh, S. Merugu, and D. S. Modha. A generalized maximum entropy approach to Bregman co-clustering and matrix approximation. JMLR, 8:1919-1986, 2007.
[2] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. JMLR, 7:2399-2434, 2006.
[3] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, 2006.
[4] F. Chung, editor. Spectral Graph Theory. AMS, 1997.
[5] I. Dhillon. Co-clustering documents and words using bipartite spectral graph partitioning. In KDD, 2001.
[6] C. Ding, X. He, and H. D. Simon. On the equivalence of nonnegative matrix factorization and spectral clustering. In SDM, 2005.
[7] C. Ding, T. Li, W. Peng, and H. Park. Orthogonal nonnegative matrix tri-factorizations for clustering. In KDD, 2006.
[8] G. Druck, G. Mann, and A. McCallum. Learning from labeled features using generalized expectation criteria. In SIGIR, 2008.
[9] J. Gardiner, A. J. Laub, J. J. Amato, and C. B. Moler. Solution of the Sylvester matrix equation AXB^T + CXD^T = E. ACM Transactions on Mathematical Software, 18(2):223-231, 1992.
[10] D. Harville. Matrix Algebra From a Statistician's Perspective. Springer, New York, 1997.
[11] T. M. Huang and V. Kecman. Semi-supervised learning from unbalanced labeled data: an improvement. Lecture Notes in Computer Science, 3215:765-771, 2004.
[12] A. Langville, C. Meyer, and R. Albright. Initializations for the non-negative matrix factorization. In KDD, 2006.
[13] T. Li and C. Ding. The relationships among various nonnegative matrix factorization methods for clustering. In ICDM, 2006.
[14] V. Sindhwani and P. Melville. Document-word co-regularization for semi-supervised sentiment analysis. In ICDM, 2008.
[15] N. Slonim and N. Tishby. Document clustering using word clusters via the information bottleneck method. In SIGIR, 2000.
normative decision-making models
Quentin JM Huys1,2,? Joshua T Vogelstein3,? and Peter Dayan2,?
Center for Theoretical Neuroscience, Columbia University, New York, NY 10032, USA
2
Gatsby Computational Neuroscience Unit, University College London, London, WC1N 3AR, UK
3
Johns Hopkins School of Medicine, Baltimore MD 21231, USA
1
Abstract
Decision making lies at the very heart of many psychiatric diseases. It is also a central theoretical concern in a wide variety of fields and has undergone detailed, in-depth analyses. We take as an example Major Depressive Disorder (MDD), applying insights from a Bayesian reinforcement learning framework. We focus on anhedonia and helplessness. Helplessness, a core element in conceptualizations of MDD that has led to major advances in its treatment and in its pharmacological and neurobiological understanding, is formalized as a simple prior over the outcome entropy of actions in uncertain environments. Anhedonia, which is an equally fundamental aspect of the disease, is related to the effective reward size. These formulations allow for the design of specific tasks to measure anhedonia and helplessness behaviorally. We show that these behavioral measures capture explicit, questionnaire-based cognitions. We also provide evidence that these tasks may allow classification of subjects into healthy and MDD groups based purely on behavioural measures, avoiding any verbal reports.
There are strong ties between decision making and psychiatry, with maladaptive decisions and behaviors being very prominent in people with psychiatric disorders. Depression is classically seen as following life events such as divorces and job losses. Longitudinal studies, however, have revealed that a significant fraction of the stressors associated with depression do in fact follow MDD onset, and that they are likely due to maladaptive behaviors prominent in MDD (Kendler et al., 1999). Clinically effective "talking" therapies for MDD, such as cognitive and dialectical behavior therapies (DeRubeis et al., 1999; Bortolotti et al., 2008; Gotlib and Hammen, 2002; Power, 2005), explicitly concentrate on altering patients' maladaptive behaviors and decision-making processes.
Decision making is a promising avenue into psychiatry for at least two more reasons. First, it
offers powerful analytical tools. Control problems related to decision making are prevalent in a
huge diversity of fields, ranging from ecology to economics, computer science and engineering.
These fields have produced well-founded and thoroughly characterized frameworks within which
many issues in decision making can be framed. Here, we will focus on framing issues identified in
psychiatric settings within a normative decision making framework.
Its second major strength comes from its relationship to neurobiology, and particularly those neuromodulatory systems which are powerfully affected by all major clinically effective pharmacotherapies in psychiatry. The understanding of these systems has benefited significantly from theoretical
accounts of optimal control such as reinforcement learning (Montague et al., 1996; Kapur and Remington, 1996; Smith et al., 1999; Yu and Dayan, 2005; Dayan and Yu, 2006). Such accounts may be
useful to identify in more specific terms the roles of the neuromodulators in psychiatry (Smith et al.,
2004; Williams and Dayan, 2005; Moutoussis et al., 2008; Dayan and Huys, 2008).
* [email protected], [email protected], [email protected]; www.gatsby.ucl.ac.uk/~qhuys/pub.html
[Figure 1 panels, left to right: Master, Yoked, Control.]
Figure 1: The learned helplessness (LH) paradigm. Three sets of rats are used in a sequence of
two tasks. In the first task, rats are exposed to escapable or inescapable shocks. Shocks come on at
random times. The master rat is given escapable shocks: it can switch off the shock by performing
an action, usually turning a wheel mounted in front of it. The yoked rat is exposed to precisely the
same shocks as the master rat, i.e its shocks are terminated when the master rat terminates the shock.
Thus its shocks are inescapable, there is nothing it can do itself to terminate them. A third set of rats
is not exposed to shocks. Then, all three sets of rats are exposed to a shuttlebox escape task. Shocks
again come on at random times, and rats have to shuttle to the other side of the box to terminate
the shock. Only yoked rats fail to acquire the escape response. Yoked rats generally fail to acquire
a wide variety of instrumental behaviours, either determined by reward or, as here, by punishment
contingencies.
This paper represents an initial attempt at validating this approach experimentally. We will frame
core notions of MDD in a reinforcement learning framework and use it to design behavioral decision
making experiments. More specifically, we will concentrate on two concepts central to current
thinking about MDD: anhedonia and learned helplessness (LH, Maier and Seligman 1976; Maier
and Watkins 2005). We formulate helplessness parametrically as prior beliefs on aspects of decision
trees, and anhedonia as the effective reward size. This allows us to use choice behavior to infer the
degree to which subjects? behavioral choices are characterized by either of these. For validation,
we correlate the parameters inferred from subjects? behavior with standard, questionnaire-based
measures of hopelessness and anhedonia, and finally use the inferred parameters alone to attempt to
recover the diagnostic classification.
1 Core concepts: helplessness and anhedonia
The basic LH paradigm is explained in figure 1. Its importance is manifold: the effect of inescapable shock on subsequent learning is sensitive to most classes of clinically effective antidepressants; it has arguably been a motivating framework for the development of the main talking therapies for depression (cognitive behavioural therapy; Williams, 1992); it has motivated the development of further, yet more specific, animal models (Willner, 1997); and it has been the basis of very specific research into the cognitive basis of depression (Peterson et al., 1993).
Behavioral control is the central concept in LH: the yoked and master rats do not differ in terms of the amount of shock (stress) they have experienced, only in terms of the behavioural control over it. Control is not a standard notion in reinforcement learning, and there are several ways one could translate the concept into RL terms.
the concept into RL terms. At a simple level, there is intuitively more behavioural control if, when
repeating one action, the same outcome occurs again and again, than if this were not true. Thus, at a
very first level, control might be related to the outcome entropy of actions (see Maier and Seligman
1976 for an early formulation). Of course, this is too simple. If all available actions deterministically
led to the same outcome, the agent has very little control. Finally, if one were able to achieve all
outcomes except for the one one cares about (in the rats? case switching off or avoiding the shock),
we would again not say that there is much control (see Huys (2007); Huys and Dayan (2007) for a
more detailed discussion). Despite its obvious limitations, we will here concentrate on the simplest
notion for reasons of mathematical expediency.
[Figure 2 plots. Left panel, "Predictive Distributions": P(reward | a known) versus reward (1-5), for high and low control. Right panel, "Exploration vs Exploitation": Q(a_known) − Q(a_unknown) versus tree depth (1-5), for high and low control, with regions "Choose blue slot machine" and "Choose orange slot machine".]
Figure 2: Effect of χ on predictions, Q-values and exploration behaviour. Assume a slot machine (blue) has been chosen five times, with possible rewards 1-5, and that reward 2 has been obtained twice, and reward 4 three times (inset in left panel). Left: Predictive distribution for a prior with negative χ (low control) in light gray, and large χ (extensive control) in dark gray. We see that, if the agent believes he has much control (and outcome distributions have low entropy), the predictive distribution puts all mass on the observations. Right: Assume now the agent gets up to 5 more pulls (tree depth 1-5) between the blue slot machine and a new, orange slot machine. The orange slot machine's predictive distribution is flat, as it has never been tried, and its expected value is therefore 3. The plot shows the difference between the values for the two slot machines. First consider the case in which the agent has only one more pull to take. In this case, independently of the priors about control, the agent will choose the blue machine, because it is just slightly better than average. Note though that the difference is more pronounced if the agent has a high-control prior. But things change if the agent has two or more choices. Now, it is worth trying out the new machine if the agent has a high-control prior. For in that case, if the new machine turns out to yield a large reward on the first try, it is likely to do so again for the second and subsequent times. Thus, the prior about control determines the exploration bonus.
The second central concept in current conceptions of MDD is that of reward sensitivity. Anhedonia,
an inability to enjoy previously enjoyable things, is one of two symptoms necessary for the diagnosis
of depression (American Psychiatric Association, 1994). A number of tasks in the literature have
attempted to measure reward sensitivity behaviourally. While these generally concur in finding
decreased reward sensitivity in subjects with MDD, these results need further clarification. Some
studies show interactions between reward and punishment sensitivities with respect to MDD, but
important aspects of the tasks are not clearly understood. For instance, Henriques et al. (1994);
Henriques and Davidson (2000) show decreased responsiveness of MDD subjects to rewards, but equally show decreased responsiveness of healthy subjects to punishments. Pizzagalli et al. (2005) introduced an asymmetrically rewarded perceptual discrimination task and showed that the rate of change of the response bias is anticorrelated with subjects' anhedonic symptoms. Exactly how decreased reward responsivity can account for this is at present not clear.
Great care has to be taken to disentangle these two concepts. Anhedonia and helplessness both
provide good reasons for not taking an action: either because the reinforcements associated with the
action are insufficient (anhedonia), or because the outcome is not judged a likely result of taking
some particular action (if actions are thought to have large outcome entropy).
2
A Bayesian formulation of control
We consider a scenario where subjects have no knowledge of the outcome distributions of actions,
but rather learn about them. This means that their prior beliefs about the outcome distributions are
not overwhelmed by the likelihood of observations, and may thus have measurable effects on their
action choices. In terms of RL, this means that agents do not know the decision tree of the problem
they face. Control is formulated as a prior distribution on the outcome distributions, and thereby as
a prior distribution on the decision trees.
The concentration parameter α of a Dirichlet process can very simply parametrize entropy and, if used as a prior, allows for very efficient updates of the predictive distributions of actions. Let
us assume we have actions A which have as outcomes rewards R, and keep a count N_t(r, a) = Σ_{k: k<t, a_k=a} δ_{r, r_k} of the number of times a particular reward r ∈ R was observed for each action a ∈ A, where t is the number of times that action has been chosen, r_t is the reward on the t-th trial, and δ is the Kronecker delta. The predictive distribution for action a is then

P(r | N_t, a, α) = (α B(r) + N_t(r, a)) / (α + N_t(a))        (1)

Here, B(r) is the base distribution, which we assume is flat, and N_t(a) = Σ_r N_t(r, a) is the
number of times action a was chosen up to trial t. Thus, the first time an action is chosen, we draw a sample from B(r). For α = 0, we then always draw that very same sample again. For α = ∞, we keep drawing from the same flat outcome distribution. Thus, α very simply determines the entropy of the actions' outcome distributions. To match parameter values onto control more intuitively, let χ = −log(α) be the control parameter.
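A minimal sketch of the predictive distribution in Eqn. 1, with the worked example of Figure 2 (names are ours):

```python
import numpy as np

def predictive(counts, alpha, base):
    # counts[r] = N_t(r, a) for one action; base = B(r), here flat.
    return (alpha * base + counts) / (alpha + counts.sum())

# Example from Figure 2: rewards 1..5; reward 2 seen twice, reward 4 three times.
counts = np.array([0., 2., 0., 3., 0.])
print(predictive(counts, alpha=0.01, base=np.ones(5) / 5))  # high control: peaked
print(predictive(counts, alpha=100., base=np.ones(5) / 5))  # low control: near flat
```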
The action choice problem is now to choose action a = argmax_a Q(a | N), where the Q values are defined by the Bellman equation of our problem:

Q_t(a | N_t, α) = Σ_r p(r | N_t, a, α) [ r + max_{a'} Q_t(a' | N_{t+1}(r), α) ]        (2)
values is illustrated in Figure 2. One can now infer the maximum likelihood (ML) parameters of the
prior by writing the probability of the subject?s observed actions as a standard softmaxed version of
the Q values:
{χ, β}_{ML} = argmax_{χ, β} Π_t p(a_t | N_t, χ, β)        (3)

where

p(a_t | N_t, χ, β) = exp(β Q_t(a_t | N_t, χ)) / Σ_{a'} exp(β Q_t(a' | N_t, χ))        (4)
where we have introduced a second parameter β, which is either the softmax inverse temperature or, alternatively and equivalently, the size of the rewards (the maximum of R and β are not both inferable from action observations alone).
Simulations of the inference revealed that the parameters β and χ, our inferred reward sensitivity and prior on control, were correlated. To alleviate this problem, subjects were additionally given a reward sensitivity task, which was interleaved with the control task (see below for the task descriptions).
The structure of the reward sensitivity task is such that Q values are correctly defined by a Rescorla-Wagner (RW) learning rule:

Q_t^{RW}(a) = (1 − ε) Q_{t−1}^{RW}(a) + ε r_t        (5)

where Q_t^{RW}(a) is the Q value of action a at choice t, ε is the learning rate, and action probabilities are again defined via a softmax with parameter β, as in equation 4. Note, importantly, i) that this rule does not depend on χ, the prior belief about control, and ii) that, unlike equation 2 above, this is a "model-free" algorithm that does not look ahead and thus does not take anticipated rewards into account. Combining inference in the two tasks (sharing β between them) allows us to use the reward sensitivity task as a prior on β for the control task and to eliminate the correlation.
2.1 Task and subjects
Control task: The effects illustrated in figure 2 are easily elicited in a simple behavioral task.
Subjects are told to imagine that they are in a large casino, and will be dropped randomly in each of
100 rooms. In each room, they will get to choose between slot machines. At first, they see only one
slot machine, which they have to choose. Next, they get to choose between two slot machines. A new
machine is presented whenever all machines on the screen have been tried. Thus, the exploratory
drive is always maintained with one unexplored slot machine. Subjects get 8 choices per room, and
thus get to try a maximum of 8 machines once in each room. Subjects are informed that outcomes
for each slot machine are between 0 and 9 points. Overall, subjects are thus always kept in the
dark about the true outcome distribution of any one slot machine. Thus, their prior beliefs become
relevant. For healthy control subjects, one room was chosen randomly and the total number of points
[Figure 3 plots: P(repeat choice) versus the outcome just obtained (0-8), after observing it once (left) and twice (right); MDD × outcome interaction p = 0.0029 (left) and p = 7.6 × 10^{-5} (right).]
Figure 3: Repeat modulation. Bottom plots: Probability of choosing a slot machine again, given that it has just yielded a particular outcome. Control subjects are in gray and MDD subjects in red (all individuals as dots, means ± 1 std. err. as bars; red dots to the right of bars, gray dots to the left of bars). Top plots: uncorrected p-values comparing the two groups for every individual outcome. Left panel: after observing a particular outcome once. Right panel: after observing the same outcome on a particular machine twice in a row. The p-values at the top indicate the ANOVA interaction of outcome size with group. Thus, we see here that subjects with MDD are more likely to stick with a bad machine, and more likely to move away from a good machine. The same result is observed when fitting sigmoids to each subject and comparing the inferred parameters (data not shown).
earned in that room determined the payment (1 point = 1 US$, minimum 10US$, maximum 50US$).
MDD subjects were given the same instructions, but, for ethical reasons, could not be paid.
Reward sensitivity task: Subjects chose repeatedly (300 times) between two stacks of cards with probabilistic binary outcomes. The underlying probabilities of a reward changed as a (squashed) Ornstein-Uhlenbeck process. This task is thus accurately described by a standard Rescorla-Wagner (RW) rule (Daw et al., 2006).
Questionnaire measures: Finally, each subject filled out two questionnaires: the Beck Hopelessness Scale (BHS) and the Beck Depression Inventory (BDI), which are standard questionnaire measures of hopelessness and anhedonia respectively. We extracted the anhedonic subcomponent, BDIa, as the sum of responses on questions 4, 12, 15 and 21 of the BDI.
Subjects: We recruited 17 healthy control subjects from the community. 15 subjects with MDD were recruited as part of an ongoing treatment study, and asked to take the behavioural test while waiting to see their psychiatrists. All subjects were given a full Structured Clinical Interview for DSM-IV (First et al., 2002a,b). All MDD subjects met criteria for a current major depressive episode. Three subjects additionally had a diagnosis of either Panic Disorder (2) or Bipolar Disorder II (1). All the healthy control subjects had neither a present psychiatric disorder nor a history thereof. All procedures were approved by the New York State Institute of Psychiatry Institutional Review Board. The subjects were matched for sex and educational level, but not for age. We thus included age in our model formulations to exclude its effects as a nuisance variable. The depressed sample was older, but throughout, the effects of age correlate negatively with those of depression.
3 Results

3.1 Reward sensitivity
Preliminary analysis: Repeat modulation, a very simple proxy measure of choices, provides a first glimpse of the effects of depression on the first task. Figure 3 shows the probability with which subjects chose a slot machine again after having received outcomes 0-9. As groups, MDD subjects both avoid bad machines and exploit good machines less. Nearly half the subjects with MDD show very little modulation with rewards. As a group, MDD subjects thus appear less sensitive to the reward structure in the first task.
[Figures 4 and 5: bar plots of the ML inferred constant and first-order terms for the reward sensitivity and control models.]
Figure 4: ML inferred values for the constant and relevant first-order factors. The green lines are the standard deviations around the ML value, and the red lines represent three times that. Thus, while BDI is related to the effective reward size β, it is not related to the learning rate ε. Note that here the effect of age has already been accounted for.

Figure 5: ML inferred values for the constant and relevant first-order factors, as in the previous figure. The green lines are the standard deviations around the ML values and the red lines represent three times that. Thus, the effect of the BHS on control is captured by χ, and that of BDIa on reward sensitivity is captured by β, as predicted.
Reward sensitivity: The main hypothesis with respect to reward sensitivity is that subjects' empirically observed reward sensitivity β in equation 5 is inversely related to their expressed anhedonia, BDIa, in the questionnaires. We can build this into the action choice model by parametrizing β in the Q^{RW} values (equation 5) explicitly as a function of the questionnaire anhedonia score BDIa:

β(BDIa, AGE) = m_β BDIa + c_β AGE + β_0
If the hypothesis is true and subjects with higher BDIa scores do indeed care less about rewards, we should observe m_β < 0. Here, we included a regressor for AGE, as that was a confounding variable in our subject sample. Furthermore, if it is true that anhedonia, as expressed by the questionnaire, relates to reward sensitivity specifically, we should be able to write a similar regression for the learning rate ε (from equation 5),
ε(BDIa, AGE) = m_ε BDIa + c_ε AGE + ε_0
but find that m_ε is not different from zero. Figure 4 shows the ML values for the parameters of interest and confirms that people who express higher levels of anhedonia do indeed show less reward sensitivity, but do not differ in terms of learning rate. If it were the case that subjects with higher BDIa scores were simply less attentive to the task, one might also expect an effect of BDIa on the learning rate.
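A minimal SciPy sketch of this group-level ML fit for the reward sensitivity task (Eqns. 4-5 with β and ε parametrized by BDIa and AGE); the coefficient names and the sigmoid link keeping ε in (0, 1) are our assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def rw_loglik(beta, eps, choices, rewards, n_arms=2):
    # Log-likelihood of one subject's choices under Eqns. 4-5.
    q, ll = np.zeros(n_arms), 0.0
    for a, r in zip(choices, rewards):
        v = beta * q - (beta * q).max()              # stable softmax
        ll += np.log(np.exp(v)[a] / np.exp(v).sum() + 1e-12)
        q[a] = (1 - eps) * q[a] + eps * r            # Rescorla-Wagner update (Eqn. 5)
    return ll

def group_nll(theta, subjects):
    # theta = (m_beta, c_beta, beta0, m_eps, c_eps, eps0); each subject is a
    # dict with keys "choices", "rewards", "bdia", "age".
    m_b, c_b, b0, m_e, c_e, e0 = theta
    nll = 0.0
    for s in subjects:
        beta = m_b * s["bdia"] + c_b * s["age"] + b0
        eps = 1.0 / (1.0 + np.exp(-(m_e * s["bdia"] + c_e * s["age"] + e0)))
        nll -= rw_loglik(beta, eps, s["choices"], s["rewards"])
    return nll

# theta_ml = minimize(group_nll, np.zeros(6), args=(subjects,), method="Nelder-Mead").x
```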
3.2 Control
Validation: The control task is new, and we first needed to ascertain that subjects were indeed sensitive to the main features of the task. We thus fit both an RW learning rule (as in the previous section, but adjusted for the varying number of available actions) and the full control model. Importantly, both of these models have two parameters, but only the full control model has a notion of outcome entropy and evaluates a tree. The chance probability of subjects' actions was 0.37, meaning that, on average, there were just under three machines on the screen. The probability of the actions under the RW learning rule was better, at 0.48, and that under the full control model was 0.54. These differences are highly significant, as the total number of choices is 29600. Thus, we conclude that subjects were indeed sensitive to the manipulation of outcome entropy, and that they did look ahead in a tree.
Prior belief about control: Applying the procedure from the previous task to the main task, we write the main parameters of equations 2 and 4 as functions of the questionnaire measures and infer the linear parameters:

χ_1(BDIa, BHS, AGE) = k_{χ1} BHS + m_{χ1} BDIa + c_{χ1} AGE + χ_1^0
χ_2(BDIa, BHS, AGE) = k_{χ2} BHS + m_{χ2} BDIa + c_{χ2} AGE + χ_2^0
β(BDIa, BHS, AGE) = k_β BHS + m_β BDIa + c_β AGE + β_0

Importantly, because the BDIa scores and the BHS scores are correlated in our sample (they tend to be large for the subjects with MDD), we include the cross-terms (m_{χ1}, m_{χ2}, k_β), as we are interested in the specific effects of BDIa on β, as before, and of BHS on χ.
Figure 6: Classification. Controls are shown as black dots, and depressed subjects as red crosses (axes: reward sensitivity β on the x-axis, control χ on the y-axis). The blue line is a linear classifier, achieving 83% correct classification, 69% sensitivity and 94% specificity. Thus, the patients and controls can be approximately classified purely on the basis of behaviour.
We here infer and display two separate values, χ_1 and χ_2. These correspond to the level of control in the first and the second half of the experiment. In fact, to parallel the LH experiments better, the slot machines in the first 50 rooms were actually very noisy (low true control), which means that subjects were here exposed to low levels of control, just like the yoked rats in the original experiment. In the second half of the experiment, on the other hand, slot machines tended to be quite reliable (high true control).
Figure 5 shows the ML values for the parameters of interest. Again, we find that our parameter estimates are very significantly different from zero (more than three standard deviations). The effect of the BHS score on the prior belief about control χ is much stronger in the second half of the experiment than in the first half, i.e. the effect of BHS on the prior belief about control is particularly prominent when subjects are in a high-control environment and have previously been exposed to a low-control environment. This is an interesting parallel to the learned helplessness experiments in animals.
3.3 Classification
Finally, we combine the two tasks. We integrate out the learning rate ε, which we had found not to be related to the questionnaire measures (cf. Figure 4), and use the distribution over β from the first task as a prior distribution on β for the second task. We also put weak priors on χ and infer both β and χ for the second task on a subject-by-subject basis. Figure 6 shows the posterior values of β and χ for MDD and healthy subjects, and the ability of a linear classifier to separate them.
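The paper does not name the linear classifier; a minimal sketch with logistic regression as one natural choice (names are ours):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def classify_subjects(params, labels):
    # params: (n_subjects, 2) array of inferred (beta, chi); labels: 1 = MDD, 0 = control.
    params, labels = np.asarray(params), np.asarray(labels)
    clf = LogisticRegression().fit(params, labels)
    pred = clf.predict(params)
    return {"accuracy": (pred == labels).mean(),             # cf. 83% correct
            "sensitivity": (pred[labels == 1] == 1).mean(),  # cf. 69%
            "specificity": (pred[labels == 0] == 0).mean()}  # cf. 94%
```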
4 Discussion
In this paper, we have attempted to provide a specific formulation of core psychiatric concepts in reinforcement learning terms, i.e. hopelessness as a prior belief about controllability, and anhedonia as reward sensitivity. We have briefly explained how we expect these formulations to affect behaviour, presented a behavioral task explicitly designed to be sensitive to our formulations, and shown that people's verbal expressions of hopelessness and anhedonia do have specific behavioral impacts. Subjects who express anhedonia display insensitivity to rewards, and those expressing hopelessness behave as if they had prior beliefs that the outcome distributions of actions (slot machines) are very broad. Finally, we have shown that these purely behavioural measures are also predictive of subjects' psychiatric status, in that we were able to classify patients and healthy controls purely on the basis of performance.
Several aspects of this work are novel. There have been previous attempts to map aspects of psychiatric dysfunction onto specific parametrizations (Cohen et al., 1996; Smith et al., 2004; Williams and Dayan, 2005; Moutoussis et al., 2008), but we believe that our work represents the first attempt to a) apply this to MDD; b) make formal predictions about subject behavior; c) present strong evidence linking anhedonia specifically to reward insensitivity across two tasks; d) combine tasks to tease helplessness and anhedonia apart; and e) use the behavioral inferences for classification. The latter point is particularly important, as it will determine any potential clinical significance (Veiel, 1997). In the future, rather than cross-validating with respect to, say, DSM-IV criteria, it may also be important to validate measures such as ours in their own right in longitudinal studies.
Several important caveats do remain. First, the populations are not matched for age. We included age as an additional regressor and found all results to be robust. Second, only the healthy subjects were remunerated. However, repeating the analyses presented using only the MDD subjects yields the same results (data not shown). Third, we have not yet fully mirrored the LH experiments: we have so far only tested the transfer from a low-control environment to a high-control environment. To make statements like those in animal learned helplessness experiments, the transfer from high-control to low-control environments will need to be examined too. Fourth, the notion of control we have used is very simple, and more complex notions should certainly be tested (see Dayan and Huys 2008). Fifth, and maybe most importantly, we have so far only attempted to classify MDD and healthy subjects, and can thus not yet make any statements about the specificity of these effects with respect to MDD. Finally, it will be important to replicate these results independently, and possibly in a different modality. Nevertheless, we believe these results to be very encouraging.
Acknowledgments: This work would not have been possible without the help of Sarah Hollingsworth Lisanby, Kenneth Miller and Ramin V. Parsey. We would also like to thank Nathaniel Daw, Hanneke E. M. den Ouden and René Hen for invaluable discussions. Support for this work was provided by the Gatsby Charitable Foundation (PD), a UCL Bogue Fellowship and the Swartz Foundation (QH), and a Columbia University startup grant to Kenneth Miller.
References

American Psychiatric Association (1994). Diagnostic and Statistical Manual of Mental Disorders. American Psychiatric Association Press.
Bortolotti, B., Menchetti, M., Bellini, F., Montaguti, M. B., and Berardi, D. (2008). Psychological interventions for major depression in primary care: a meta-analytic review of randomized controlled trials. Gen Hosp Psychiatry, 30(4):293-302.
Cohen, J. D., Braver, T. S., and O'Reilly, R. C. (1996). A computational approach to prefrontal cortex, cognitive control and schizophrenia: recent developments and current challenges. Philos Trans R Soc Lond B Biol Sci, 351(1346):1515-1527.
Daw, N. D., O'Doherty, J. P., Dayan, P., Seymour, B., and Dolan, R. J. (2006). Cortical substrates for exploratory decisions in humans. Nature, 441(7095):876-879.
Dayan, P. and Huys, Q. J. M. (2008). Serotonin, inhibition, and negative mood. PLoS Comput Biol, 4(2):e4.
Dayan, P. and Yu, A. J. (2006). Phasic norepinephrine: a neural interrupt signal for unexpected events. Network, 17(4):335-350.
DeRubeis, R. J., Gelfand, L. A., Tang, T. Z., and Simons, A. D. (1999). Medications versus cognitive behavior therapy for severely depressed outpatients: mega-analysis of four randomized comparisons. Am J Psychiatry, 156(7):1007-1013.
First, M. B., Spitzer, R. L., Gibbon, M., and Williams, J. B. (2002a). Structured Clinical Interview for DSM-IV-TR Axis I Disorders, Research Version, Non-Patient Edition (SCID-I/NP). Biometrics Research, New York State Psychiatric Institute.
First, M. B., Spitzer, R. L., Gibbon, M., and Williams, J. B. (2002b). Structured Clinical Interview for DSM-IV-TR Axis I Disorders, Research Version, Patient Edition (SCID-I/P). Biometrics Research, New York State Psychiatric Institute.
Gotlib, I. H. and Hammen, C. L., editors (2002). Handbook of Depression. The Guilford Press.
Henriques, J. B. and Davidson, R. J. (2000). Decreased responsiveness to reward in depression. Cognition and Emotion, 14(5):711-24.
Henriques, J. B., Glowacki, J. M., and Davidson, R. J. (1994). Reward fails to alter response bias in depression. J Abnorm Psychol, 103(3):460-6.
Huys, Q. J. M. (2007). Reinforcers and control. Towards a computational aetiology of depression. PhD thesis, Gatsby Computational Neuroscience Unit, UCL, University of London.
Huys, Q. J. M. and Dayan, P. (2007). A Bayesian formulation of behavioral control. Under review.
Kapur, S. and Remington, G. (1996). Serotonin-dopamine interaction and its relevance to schizophrenia. Am J Psychiatry, 153(4):466-76.
Kendler, K. S., Karkowski, L. M., and Prescott, C. A. (1999). Causal relationship between stressful life events and the onset of major depression. Am. J. Psychiatry, 156:837-41.
Maier, S. and Seligman, M. (1976). Learned helplessness: Theory and evidence. Journal of Experimental Psychology: General, 105(1):3-46.
Maier, S. F. and Watkins, L. R. (2005). Stressor controllability and learned helplessness: the roles of the dorsal raphe nucleus, serotonin, and corticotropin-releasing factor. Neurosci. Biobehav. Rev., 29(4-5):829-41.
Montague, P. R., Dayan, P., and Sejnowski, T. J. (1996). A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J. Neurosci., 16(5):1936-47.
Moutoussis, M., Bentall, R. P., Williams, J., and Dayan, P. (2008). A temporal difference account of avoidance learning. Network, 19(2):137-160.
Peterson, C., Maier, S. F., and Seligman, M. E. P. (1993). Learned Helplessness: A Theory for the Age of Personal Control. OUP, Oxford, UK.
Pizzagalli, D. A., Jahn, A. L., and O'Shea, J. P. (2005). Toward an objective characterization of an anhedonic phenotype: a signal-detection approach. Biol Psychiatry, 57(4):319-327.
Power, M., editor (2005). Mood Disorders: A Handbook of Science and Practice. John Wiley and Sons, paperback edition.
Smith, A., Li, M., Becker, S., and Kapur, S. (2004). A model of antipsychotic action in conditioned avoidance: a computational approach. Neuropsychopharmacology, 29(6):1040-9.
Smith, K. A., Morris, J. S., Friston, K. J., Cowen, P. J., and Dolan, R. J. (1999). Brain mechanisms associated with depressive relapse and associated cognitive impairment following acute tryptophan depletion. Br. J. Psychiatry, 174:525-9.
Veiel, H. O. F. (1997). A preliminary profile of neuropsychological deficits associated with major depression. J. Clin. Exp. Neuropsychol., 19:587-603.
Williams, J. and Dayan, P. (2005). Dopamine, learning, and impulsivity: a biological account of attention-deficit/hyperactivity disorder. J Child Adolesc Psychopharmacol, 15(2):160-79; discussion 157-9.
Williams, J. M. G. (1992). The Psychological Treatment of Depression. Routledge.
Willner, P. (1997). Validity, reliability and utility of the chronic mild stress model of depression: a 10-year review and evaluation. Psychopharmacology, 134:319-29.
Yu, A. J. and Dayan, P. (2005). Uncertainty, neuromodulation, and attention. Neuron, 46(4):681-692.
8
Correlated Bigram LSA for Unsupervised Language
Model Adaptation
Yik-Cheung Tam*
InterACT, Language Technologies Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Tanja Schultz
InterACT, Language Technologies Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
We present a correlated bigram LSA approach for unsupervised LM adaptation for
automatic speech recognition. The model is trained using efficient variational EM
and smoothed using the proposed fractional Kneser-Ney smoothing which handles
fractional counts. We address the scalability issue to large training corpora via
bootstrapping of bigram LSA from unigram LSA. For LM adaptation, unigram
and bigram LSA are integrated into the background N-gram LM via marginal
adaptation and linear interpolation, respectively. Experimental results on the Mandarin RT04 test set show that applying unigram and bigram LSA together yields 6%–8% relative perplexity reduction and 2.5% relative character error rate reduction, which is statistically significant compared to applying only unigram LSA. On the large-scale evaluation on Arabic, a 3% relative word error rate reduction is achieved, which is also statistically significant.
1 Introduction
Language model (LM) adaptation is crucial to automatic speech recognition (ASR) as it enables
higher-level contextual information to be effectively incorporated into a background LM, improving recognition performance. Exploiting topical context for LM adaptation has been shown to be effective
for ASR using latent semantic analysis (LSA) such as LSA using singular value decomposition [1],
Latent Dirichlet Allocation (LDA) [2, 3, 4] and HMM-LDA [5, 6]. One issue in LSA is the bag-of-word assumption which ignores word ordering. For document classification, word ordering may not be important. But from the LM perspective, word ordering is crucial, since a trigram LM normally performs significantly better than a unigram LM for word prediction. In this paper, we investigate whether relaxing the bag-of-word assumption in LSA helps improve the ASR performance via
LM adaptation.
We employ bigram LSA [7] which is a natural extension of LDA to relax the bag-of-word assumption by connecting the adjacent words in a document together to form a Markov chain. There are
two main challenges in bigram LSA which are not addressed properly in [7], especially for large-scale applications. Firstly, the model can be very sparse since it covers topical bigrams in $O(V^2 \cdot K)$
where V and K denote the vocabulary size and the number of topics. Therefore, model smoothing
becomes critical. Secondly, model initialization is important for EM training, especially for bigram
LSA due to the model sparsity. To tackle the first challenge, we represent bigram LSA as a set
of K topic-dependent backoff LMs. We propose fractional Kneser-Ney smoothing^1 which supports
* This work is partly supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR0011-06-2-0001. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
^1 This method was briefly mentioned in [8] without detail. To the best of our knowledge, our formulation in this paper is considered new to the research community.
Figure 1: Graphical representation of bigram LSA. Adjacent words in a document are linked together to form a Markov chain from left to right.
fractional counts to smooth each backoff LM. We show that our formulation recovers the original
Kneser-Ney smoothing [9] which supports only integral counts. To address the second challenge,
we propose a bootstrapping approach for bigram LSA training using a well-trained unigram LSA as
an initial model.
During unsupervised LM adaptation, word hypotheses from the first-pass decoding are used to estimate the topic mixture weight of each test audio to adapt both unigram and bigram LSA. The
adapted unigram and bigram LSA are combined with the background LM in two stages. Firstly,
marginal adaptation [10] is applied to integrate unigram LSA into the background LM. Then the intermediately adapted LM from the first stage is combined with bigram LSA via linear interpolation
with the interpolation weights estimated by minimizing the word perplexity on the word hypotheses.
The final adapted LM is employed for re-decoding.
Related work includes topic mixtures [11] which perform document clustering and train a trigram
LM for each document cluster as an initial model. Sentence-level topic mixtures are modeled so that
the topic label is fixed within a sentence. The topical N-gram model [12] focuses on phrase discovery and information retrieval. We do not apply this model because the phrase-based LM does not seem to outperform the word-based LM.
The paper is organized as follows: In Section 2, we describe the bigram LSA training and the
fractional Kneser-Ney smoothing algorithm. In Section 3, we present the LM adaptation approach
based on marginal adaptation and linear interpolation. In Section 4, we report LM adaptation results
on Mandarin and Arabic ASR, followed by conclusions and future work in Section 5.
2 Correlated bigram LSA
Latent semantic analysis such as LDA makes a bag-of-word assumption that each word in a document is generated irrespective of its position in a document. To relax this assumption, bigram LSA
has been proposed [7] to modify the graphical structure of LDA by connecting adjacent words in a
document together to form a Markov chain. Figure 1 shows the graphical representation of bigram
LSA where the top node represents the prior distribution over the topic mixture weights and the
middle layer represents the latent topic label associated with each observed word at the bottom layer.
The document generation procedure of bigram LSA is similar to LDA except that the previous word
is taken into consideration for generating the current word:
1. Sample $\theta$ from a prior distribution $p(\theta)$
2. For each word $w_i$ at the $i$-th position of a document:
   (a) Sample a topic label: $z_i \sim \text{Multinomial}(\theta)$
   (b) Sample $w_i$ given the previous word $w_{i-1}$ and the topic label $z_i$: $w_i \sim p(\cdot|w_{i-1}, z_i)$
Our incremental contributions for bigram LSA are threefold: Firstly, we present a technique for topic correlation modeling using a Dirichlet-Tree prior in Section 2.1. Secondly, we propose an efficient algorithm for bigram LSA training via a variational Bayes approach and model bootstrapping, which are scalable to large settings in Section 2.2. Thirdly, we formulate the fractional Kneser-Ney
smoothing to generalize the original Kneser-Ney smoothing which supports only integral counts in
Section 2.3.
Figure 2: Left: Dirichlet-Tree prior of depth two. Right: Variational E-step as bottom-up propagation and summation of fractional topic counts.
2.1 Topic correlation
Modeling topic correlations is motivated by an observation that documents such as newspaper articles are usually organized into main-topic and sub-topic hierarchy for document browsing. From this
perspective, a Dirichlet prior is not appropriate since it assumes topic independence. A Dirichlet-Tree prior [13, 14] is employed to capture topic correlations. Figure 2 (Left) illustrates a depth-two Dirichlet-Tree. A depth-one Dirichlet-tree is equivalent to a Dirichlet prior in LDA. The sampling procedure for the topic mixture weight $\theta \sim p(\theta)$ can be described as follows:
1. Sample a vector of branch probabilities $b_j \sim \text{Dirichlet}(\cdot; \{\alpha_{jc}\})$ for each node $j = 1...J$, where $\{\alpha_{jc}\}$ denotes the parameter of the Dirichlet distribution at node $j$, i.e. the pseudo-counts of the outgoing branch $c$ at node $j$.
2. Compute the topic mixture weight as $\theta_k = \prod_{jc} b_{jc}^{\delta_{jc}(k)}$, where $\delta_{jc}(k)$ is an indicator function which sets to unity when the $c$-th branch of the $j$-th node leads to the leaf node of topic $k$ and zero otherwise. The $k$-th topic weight $\theta_k$ is computed as the product of sampled branch probabilities from the root node to the leaf node corresponding to topic $k$.
The structure and the number of outgoing branches of each Dirichlet node can be arbitrary. In this
paper, we employ a balanced binary Dirichlet-tree.
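As a concrete illustration, the sketch below samples $\theta$ from a small depth-two binary Dirichlet-tree. The tree layout, the pseudo-count values, and all function names are hypothetical, chosen only for this example; they are not taken from the paper.

import numpy as np

# Hypothetical depth-two binary Dirichlet-tree over K = 4 topics: each
# internal node j stores pseudo-counts alpha[j] (one per outgoing branch)
# and children[j] (either another node id or a topic leaf).
alpha = {0: [1.0, 1.0], 1: [1.0, 1.0], 2: [1.0, 1.0]}
children = {0: [1, 2], 1: ["topic-0", "topic-1"], 2: ["topic-2", "topic-3"]}

def sample_theta(node=0, prob=1.0, theta=None):
    """Draw b_j ~ Dirichlet(alpha_j) at every node and multiply the sampled
    branch probabilities along each root-to-leaf path to get theta_k."""
    if theta is None:
        theta = {}
    b = np.random.dirichlet(alpha[node])
    for c, child in enumerate(children[node]):
        if isinstance(child, str):              # leaf: a topic
            theta[child] = prob * b[c]
        else:                                   # internal node: recurse
            sample_theta(child, prob * b[c], theta)
    return theta

theta = sample_theta()                          # sums to 1 over the 4 topics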
2.2 Model training
Gibbs sampling was employed for bigram LSA training [7]. Despite the simplicity, it can be slow
and inefficient since it usually requires many sampling iterations for convergence. We present a
variational Bayes approach for model training. The joint likelihood of a document $w_1^N$, the latent topic sequence $z_1^N$ and $\theta$ using the bigram LSA can be written as follows:

$$p(w_1^N, z_1^N, \theta) = p(\theta) \cdot \prod_{i=1}^{N} p(z_i|\theta) \cdot p(w_i|w_{i-1}, z_i) \quad (1)$$

By introducing a factorizable variational posterior distribution $q(z_1^N, \theta; \Gamma) = q(\theta) \cdot \prod_{i=1}^{N} q(z_i)$ over the latent variables and applying Jensen's inequality, the lower bound of the marginalized document likelihood can be derived as follows:

$$\log p(w_1^N; \Gamma, \Lambda) = \log \int_{\theta} \sum_{z_1...z_N} q(z_1^N, \theta; \Gamma) \cdot \frac{p(w_1^N, z_1^N, \theta; \Lambda)}{q(z_1^N, \theta; \Gamma)} \quad (2)$$

$$\geq \int_{\theta} \sum_{z_1...z_N} q(z_1^N, \theta; \Gamma) \cdot \log \frac{p(w_1^N, z_1^N, \theta; \Lambda)}{q(z_1^N, \theta; \Gamma)} \quad \text{(by Jensen's inequality)} \quad (3)$$

$$= E_q\!\left[\log \frac{p(\theta)}{q(\theta)}\right] + \sum_{i=1}^{N} E_q\!\left[\log \frac{p(z_i|\theta)}{q(z_i)}\right] + \sum_{i=1}^{N} E_q[\log p(w_i|w_{i-1}, z_i)] \quad (4)$$

$$= Q(w_1^N; \Gamma, \Lambda) \quad (5)$$
where the expectation is taken using the variational posterior $q(z_1^N, \theta)$. For the E-step, we compute the partial derivative of the auxiliary function $Q(\cdot)$ with respect to $q(z_i)$ and the parameter $\gamma_{jc}$ in the Dirichlet-Tree posterior $q(\theta)$. Setting the derivatives to zero yields:

E-steps:

$$q(z_i = k) \propto p(w_i|w_{i-1}, k) \cdot e^{E_q[\log \theta_k; \{\gamma_{jc}\}]} \quad \text{for } k = 1..K \quad (6)$$

$$\gamma_{jc} = \alpha_{jc} + \sum_{i=1}^{N} E_q[\delta_{jc}(z_i)] = \alpha_{jc} + \sum_{i=1}^{N} \sum_{k=1}^{K} q(z_i = k) \cdot \delta_{jc}(k) \quad (7)$$

$$\text{where } E_q[\log \theta_k] = \sum_{jc} \delta_{jc}(k) \cdot E_q[\log b_{jc}] = \sum_{jc} \delta_{jc}(k) \cdot \left(\Psi(\gamma_{jc}) - \Psi\!\Big(\sum_{c} \gamma_{jc}\Big)\right) \quad (8)$$

where $\Psi(\cdot)$ denotes the digamma function.
where Eqn 7 is motivated from the conjugate property that the Dirichlet-Tree posterior given the topic sequence $z_1^N$ has the same form as the Dirichlet-Tree prior:

$$p(b_1^J|z_1^N) \propto p(z_1^N|b_1^J) \cdot p(b_1^J; \{\alpha_{jc}\}) \propto \left(\prod_{i=1}^{N} \prod_{jc} b_{jc}^{\delta_{jc}(z_i)}\right) \cdot \left(\prod_{jc} b_{jc}^{\alpha_{jc}-1}\right) \quad (9)$$

$$= \prod_{jc} b_{jc}^{\left(\alpha_{jc} + \sum_{i=1}^{N} \delta_{jc}(z_i)\right) - 1} = \prod_{jc} b_{jc}^{\gamma'_{jc} - 1} = \prod_{j=1}^{J} \text{Dirichlet}(b_j; \{\gamma'_{jc}\}) \quad (10)$$
Figure 2 (Right) illustrates that Eqn 7 can be implemented as propagation of fractional topic counts in a bottom-up fashion, with each branch as an accumulator for $\gamma_{jc}$. Eqn 6 and Eqn 7 are applied iteratively until convergence is reached.
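The following sketch runs these E-step updates for one document in the depth-one (flat Dirichlet) special case, where the Dirichlet-tree has a single node and the count propagation of Figure 2 collapses to one accumulator per topic. The array names and the fixed iteration count are illustrative assumptions, and computing $p(w_i|w_{i-1},k)$ from the K topic bigram LMs is assumed to be done elsewhere.

import numpy as np
from scipy.special import digamma

def e_step(p_w_prev_k, alpha, n_iter=20):
    """Eqs. 6-7 for a single document in the flat-Dirichlet case.
    p_w_prev_k[i, k] = p(w_i | w_{i-1}, k); alpha = prior pseudo-counts."""
    gamma = alpha.copy()
    for _ in range(n_iter):
        e_log_theta = digamma(gamma) - digamma(gamma.sum())
        q = p_w_prev_k * np.exp(e_log_theta)        # Eq. 6 (unnormalized)
        q /= q.sum(axis=1, keepdims=True)           # normalize over topics
        gamma = alpha + q.sum(axis=0)               # Eq. 7
    return q, gamma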
For the M-step, we compute the partial derivative of the auxiliary function $Q(\cdot)$ over all training documents $d$ with respect to the topic bigram probability $p(v|u,k)$ and set it to zero:
M-step (unsmoothed):

$$p(v|u,k) \propto \sum_{d} \sum_{i=1}^{N_d} q(z_i = k|d) \cdot \delta(w_{i-1}, u)\,\delta(w_i, v) \quad (11)$$

$$= \frac{\sum_{d} C_d(u,v|k)}{\sum_{d} \sum_{v'=1}^{V} C_d(u,v'|k)} = \frac{C(u,v|k)}{\sum_{v'=1}^{V} C(u,v'|k)} \quad (12)$$
where $N_d$ denotes the number of words in document $d$ and $\delta(w_i, v)$ is a 0-1 Kronecker delta function to test if the $i$-th word in document $d$ is vocabulary item $v$. $C_d(u,v|k)$ denotes the fractional count of a bigram $(u,v)$ belonging to topic $k$ in document $d$. Intuitively, Eqn 12 simply computes the relative frequency of the bigram $(u,v)$. However, this solution is not practical since bigram LSA assigns zero probability to unseen bigrams. Therefore, bigram LSA should be smoothed properly. One simple approach is to use Laplace smoothing by adding a small count $\epsilon$ to all bigrams. However,
this approach can lead to worse performance since it will bias the bigram probability towards a
uniform distribution when the vocabulary size V gets large. Our approach is to represent p(v|u, k)
as a standard backoff LM smoothed by fractional Kneser-Ney smoothing as described in Section 2.3.
Model initialization is crucial for variational EM training. We employ a bootstrapping approach
using a well-trained unigram LSA as an initial model for bigram LSA, so that $p(w_i|w_{i-1}, k)$ is approximated by $p(w_i|k)$ in Eqn 6. It saves computation and avoids keeping the full initial bigram
LSA in memory during the EM training. To make the training procedure more practical, we apply
bigram pruning during statistics accumulation in the M-step when the bigram count in a document
is less than 0.1. This heuristic is reasonable since only a small number of topics are "active" for a bigram. With this sparsity, there is no need to store $K$ copies of accumulators for each bigram, which reduces the memory requirement significantly. The pruned bigram counts are re-assigned
to the most likely topic of the current document so that the counts are conserved. For practical
implementation, accumulators are saved into the disk in batches for count merging. In the final step,
each topic-dependent LM is smoothed individually using the merged count file.
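A dictionary-based sketch of this M-step accumulation, including the 0.1 pruning heuristic with count re-assignment described above, is shown below. In practice the accumulators would be flushed to disk in batches as the text describes; all names here are illustrative.

from collections import defaultdict

def accumulate_counts(docs, q_per_doc, prune=0.1):
    """Accumulate fractional bigram counts C(u, v | k) (Eq. 11). Per-document
    bigram-topic counts below `prune` are re-assigned to the document's most
    likely topic so that the total count mass is conserved."""
    C = defaultdict(float)                       # keys: (u, v, k)
    for words, q in zip(docs, q_per_doc):        # q[i, k] = q(z_i = k | d)
        local = defaultdict(float)
        for i in range(1, len(words)):
            for k in range(q.shape[1]):
                local[(words[i - 1], words[i], k)] += q[i, k]
        top = int(q.sum(axis=0).argmax())        # most likely topic of d
        for (u, v, k), c in local.items():
            if c >= prune:
                C[(u, v, k)] += c
            else:
                C[(u, v, top)] += c              # conserve pruned mass
    return C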
2.3 Fractional Kneser-Ney smoothing
Standard backoff N-gram LM is widely used in the ASR community. The state-of-the-art smoothing
for the backoff LM is based on Kneser-Ney smoothing [9]. Its success is believed to be due to the preservation of marginal distributions. However, the original formulation only works for integral
counts which is not suitable for bigram LSA using fractional counts. Therefore, we propose the
fractional Kneser-Ney smoothing as a generalization of the original formulation. The interpolated
form using absolute discounting can be expressed as follows:
$$p_{KN}(v|u) = \frac{\max\{C(u,v) - D, 0\}}{C(u)} + \lambda(u) \cdot p_{KN}(v) \quad (13)$$
where D is a discounting factor. In the original formulation, D lies between 0 and 1. But in our
formulation, D can be any positive number. Intuitively, D controls the degree of smoothing. If D is
set to zero, the model is unsmoothed; if $D$ is too big, bigrams with counts smaller than $D$ are pruned from the LM. $\lambda(u)$ ensures the bigram probability sums to unity. After summing over all possible $v$ on both sides of Eqn 13 and re-arranging terms, $\lambda(u)$ becomes:
$$1 = \sum_{v} \frac{\max\{C(u,v) - D, 0\}}{C(u)} + \lambda(u) \quad (14)$$

$$\Rightarrow \lambda(u) = 1 - \sum_{v} \frac{\max\{C(u,v) - D, 0\}}{C(u)} = 1 - \sum_{v: C(u,v) > D} \frac{C(u,v) - D}{C(u)} \quad (15)$$

$$= \frac{C(u) - \sum_{v: C(u,v) > D} C(u,v) + D \sum_{v: C(u,v) > D} 1}{C(u)} \quad (16)$$

$$= \frac{\sum_{v: C(u,v) \le D} C(u,v) + D \sum_{v: C(u,v) > D} 1}{C(u)} \quad (17)$$

$$= \frac{C_{\le D}(u, \bullet) + D \cdot N_{>D}(u, \bullet)}{C(u)} \quad (18)$$
where $C_{\le D}(u, \bullet)$ denotes the sum of bigram counts following $u$ that are no larger than $D$, and $N_{>D}(u, \bullet)$ denotes the number of word types following $u$ with bigram counts bigger than $D$.
In Kneser-Ney smoothing, the lower-order distribution $p_{KN}(v)$ is treated as an unknown parameter which can be estimated using the preservation of marginal distributions:

$$\tilde{p}(v) = \sum_{u} p_{KN}(v|u) \cdot \tilde{p}(u) \quad (19)$$

where $\tilde{p}(v)$ is the marginal distribution estimated from the background training data, so that $\tilde{p}(v) = \frac{C(v)}{\sum_{v'} C(v')}$. Therefore, we substitute Eqn 13 into Eqn 19:
$$C(v) = \sum_{u} \left(\frac{\max\{C(u,v) - D, 0\}}{C(u)} + \lambda(u) \cdot p_{KN}(v)\right) \cdot C(u) \quad (20)$$

$$= \sum_{u} \max\{C(u,v) - D, 0\} + p_{KN}(v) \cdot \left(\sum_{u} C(u) \cdot \lambda(u)\right) \quad (21)$$

$$\Rightarrow p_{KN}(v) = \frac{C(v) - \sum_{u} \max\{C(u,v) - D, 0\}}{\sum_{u} C(u) \cdot \lambda(u)} \quad (22)$$

$$= \frac{C(v) - C_{>D}(\bullet, v) + D \cdot N_{>D}(\bullet, v)}{\sum_{u} C(u) \cdot \lambda(u)} \quad (23)$$

$$= \frac{C_{\le D}(\bullet, v) + D \cdot N_{>D}(\bullet, v)}{\sum_{u} C_{\le D}(u, \bullet) + D \cdot N_{>D}(u, \bullet)} \quad \text{(using Eqn 18)} \quad (24)$$

$$= \frac{C_{\le D}(\bullet, v) + D \cdot N_{>D}(\bullet, v)}{\sum_{v} \left(C_{\le D}(\bullet, v) + D \cdot N_{>D}(\bullet, v)\right)} \quad (25)$$
Eqn 25 generalizes Kneser-Ney smoothing to integral and fractional counts. For the original formulation, $C_{\le D}(u, \bullet)$ equals zero since each observed bigram count must be at least one by definition, with $D$ less than one. As a result, the $D$ term cancels out, so that Eqn 25 simply counts the number of word types preceding $v$, recovering the original formulation. Intuitively, the numerator in Eqn 25 measures the total discounts of observed bigrams ending at $v$. In other words, fractional Kneser-Ney smoothing estimates the lower-order probability distribution using the relative frequency over discounts instead of word counts. With this approach, each topic-dependent LM in bigram LSA can be smoothed using our formulation.
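A minimal sketch of fractional Kneser-Ney over a dictionary of fractional bigram counts is given below, directly following Eqs. 13, 18 and 25. The data layout and function names are illustrative, and backoff-weight renormalization for a full ARPA-style LM is omitted.

from collections import defaultdict

def fractional_kneser_ney(counts, D):
    """counts: {(u, v): fractional count}. Returns p(v | u) per Eq. 13 with
    lambda(u) from Eq. 18 and the lower-order p_KN(v) from Eq. 25."""
    C_u = defaultdict(float)       # C(u)
    C_le = defaultdict(float)      # C_{<=D}(u, .)
    N_gt = defaultdict(float)      # N_{>D}(u, .)
    num = defaultdict(float)       # C_{<=D}(., v) + D * N_{>D}(., v)
    for (u, v), c in counts.items():
        C_u[u] += c
        if c > D:
            N_gt[u] += 1.0
            num[v] += D
        else:
            C_le[u] += c
            num[v] += c
    Z = sum(num.values())
    p_low = {v: n / Z for v, n in num.items()}          # Eq. 25
    def p(v, u):
        if C_u[u] == 0.0:                               # unseen history
            return p_low.get(v, 0.0)
        lam = (C_le[u] + D * N_gt[u]) / C_u[u]          # Eq. 18
        disc = max(counts.get((u, v), 0.0) - D, 0.0) / C_u[u]
        return disc + lam * p_low.get(v, 0.0)           # Eq. 13
    return p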
3 Unsupervised LM adaptation
Unsupervised LM adaptation is performed by first inferring the topic distribution of each test audio
using the word hypotheses from the first-pass decoding via variational inference in Eqns 6–7. Relative frequency over the branch posterior counts $\gamma_{jc}$ is applied on each Dirichlet node $j$. The MAP topic mixture weight $\hat{\theta}$ and the adapted unigram and bigram LSA are computed as follows:

$$\hat{\theta}_k \propto \prod_{jc} \left(\frac{\gamma_{jc}}{\sum_{c'} \gamma_{jc'}}\right)^{\delta_{jc}(k)} \quad \text{for } k = 1...K \quad (26)$$

$$p_a(v) = \sum_{k=1}^{K} p(v|k) \cdot \hat{\theta}_k \quad \text{and} \quad p_a(v|u) = \sum_{k=1}^{K} p(v|u,k) \cdot \hat{\theta}_k \quad (27)$$
The unigram LSA marginals are integrated into the background N-gram LM $p_{bg}(v|h)$ via marginal
adaptation [10] as follows:
$$p_a^{(1)}(v|h) \propto \left(\frac{p_a(v)}{p_{bg}(v)}\right)^{\beta} \cdot p_{bg}(v|h) \quad (28)$$
Marginal adaptation has a close connection to maximum entropy modeling since the marginal constraints can be encoded as unigram features. Intuitively, bigram LSA would be integrated in the same
fashion by introducing bigram marginal constraints. However, we found that integrating bigram
features via marginal adaptation did not offer further improvement compared to only integrating unigram features. Since marginal adaptation integrates a unigram feature as a likelihood ratio between
the adapted marginal $p_a(v)$ and the background marginal $p_{bg}(v)$ in Eqn 28, perhaps the unigram and bigram likelihood ratios are very similar and thus the latter does not give extra information. Another explanation is that marginal adaptation corresponds to only one iteration of generalized iterative scaling (GIS). Due to the large number of bigram features (in the millions), one GIS iteration
may not be sufficient for convergence. On the other hand, simple linear LM interpolation is found
to be effective in our experiment. The final LM adaptation formula is provided using results from
Eqn 27 and Eqn 28 as a two-stage process:
$$p_a^{(2)}(v|h) = \lambda \cdot p_a^{(1)}(v|h) + (1 - \lambda) \cdot p_a(v|u) \quad (29)$$

where $\lambda$ is tuned to optimize perplexity on word hypotheses from the first-pass decoding on a per-audio basis.
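The two-stage combination can be sketched as follows for one history $h$ ending in word $u$, with all inputs as vocabulary-sized vectors. The values of beta and lam here are placeholders (in the paper $\lambda$ is tuned per audio show on first-pass hypotheses), and all names are illustrative.

import numpy as np

def adapt_lm(p_bg_vh, p_bg_v, p_v_k, p_v_uk, theta_hat, beta=0.7, lam=0.5):
    """Eqs. 27-29: p_bg_vh = background p(v|h); p_bg_v = background p(v);
    p_v_k[v, k] = p(v|k); p_v_uk[v, k] = p(v|u, k); theta_hat = MAP topic
    weights from Eq. 26."""
    p_a_v = p_v_k @ theta_hat                 # adapted unigram (Eq. 27)
    p_a_vu = p_v_uk @ theta_hat               # adapted bigram (Eq. 27)
    p1 = (p_a_v / p_bg_v) ** beta * p_bg_vh   # marginal adaptation (Eq. 28)
    p1 /= p1.sum()                            # renormalize over v
    return lam * p1 + (1.0 - lam) * p_a_vu    # interpolation (Eq. 29)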
4 Experimental setup
Our LM adaptation approach was evaluated using the RT04 Mandarin Broadcast News evaluation
system. The system employed context-dependent Initial-Final acoustic models trained using 100-hour broadcast news audio from the Mandarin HUB4 1997 training set and a subset of TDT4. 42-dimension features were extracted after linear discriminant analysis projected from a window of MFCC and energy features. The system employed a two-pass decoding strategy using speaker-independent and speaker-adaptive acoustic models. For the second-pass decoding, we applied standard acoustic model adaptation such as vocal tract length normalization and maximum likelihood
linear regression on the feature and model spaces. The training corpora include Xinhua News 2002
(January–September) containing 13M words and 64k documents. A background 4-gram LM was
trained using modified Kneser-Ney smoothing using the SRILM toolkit [15]. The same training
corpora were used for unigram and bigram LSA training with 200 topics. The vocabulary size is
108k words. The discounting factor $D$ for fractional Kneser-Ney smoothing was set to 0.4.
First-pass decoding was first performed to obtain an automatic transcript for each audio show. Then
unsupervised LM adaptation was applied using the automatic transcript to obtain an adapted LM
for second-pass decoding using the approach described in Section 3. Word perplexity and character
error rates (CER) were measured on the Mandarin RT04 test set. A matched pairs sentence-segment word error test was performed for significance testing using the NIST scoring tool.
Table 1: Correlated bigram topics extracted from bigram LSA (English glosses of the Chinese bigrams).

Topic index | Top bigrams sorted by $p(u,v|k)$
"topic-61"  | ('s student), ('s education), (education 's), (school 's), (youth class), (quality of education), (expert cultivation), (university chancellor)
"topic-62"  | (famous), (high-school), ('s student)
"topic-63"  | (and social security), ('s employment), (unemployed officer), (employment position)
"topic-64"  | ('s research), (expert people), (etc area), (biological technology), (research result)
"topic-65"  | (Human DNA sequence), ('s DNA), (biological technology), (embryo stem cell)
Table 2: Character Error Rates (Word perplexity) on the RT04 test set. Bigram LSA was applied in addition to unigram LSA.

LM (13M)                            | CCTV        | NTDTV       | RFA         | OVERALL
background LM                       | 15.3% (748) | 21.8 (1718) | 39.5 (3655) | 24.9
+unigram LSA                        | 14.4 (629)  | 21.5 (1547) | 38.9 (3015) | 24.3
+bigram LSA (Kneser-Ney, 30 topics) | 14.5 (604)  | 20.7 (1502) | 39.0 (2736) | 24.1
+bigram LSA (Witten-Bell)           | 14.1 (594)  | 20.9 (1452) | 38.3 (2628) | 23.8
+bigram LSA (Kneser-Ney)            | 14.0 (587)  | 20.8 (1448) | 38.2 (2586) | 23.7
4.1 LM adaptation results
Table 1 shows the correlated bigram topics sorted by the joint bigram probability $p(v|u,k) \cdot p(u|k)$. Most of the top bigrams appear either as phrases or as words attached to a stopword such as the possessive particle ('s in English). Table 2 shows the LM adaptation results in CER and perplexity. Applying both unigram and bigram LSA yields consistent improvement over unigram LSA, in the range of 6.4%–8.5% relative reduction in perplexity and 2.5% relative reduction in the overall CER. The CER reduction is statistically significant at the 0.1% significance level. We compared our proposed fractional Kneser-Ney
smoothing with Witten-Bell smoothing which also supports fractional counts. The results showed
that Kneser-Ney smoothing performs slightly better than Witten-Bell smoothing. Increasing the
number of topics in bigram LSA helps despite model sparsity. We applied extra EM iterations on
top of the bootstrapped bigram LSA but no further performance improvement was observed.
4.2 Large-scale evaluation
We evaluated our approach using the CMU-InterACT vowelized Arabic transcription system discriminatively trained on 1500-hour transcribed audio using MMIE for the GALE Phase-3 evaluation.
A large background 4-gram LM was trained using 962M-word text corpora with 737k vocabulary.
Unigram and bigram LSA were trained on the same corpora and were applied to lattice rescoring on
Dev07 and unseen Dev08 test sets with 2.6-hour and 3-hour audio shows containing broadcast news
(BN) and broadcast conversation (BC) genres. Table 3 shows that bigram LSA rescoring reduces the
overall word error rate by more than 3.0% relative compared to the unadapted baseline on both sets
which is statistically significant at the 0.1% significance level. However, degradation is observed using trigram LSA compared to bigram LSA, which may be due to data sparseness.
Table 3: Lattice rescoring results in word error rate on Dev07 (unseen Dev08) using the CMU-InterACT Arabic transcription system for the GALE Phase-3 evaluation.

GALE LM (962M)            | BN    | BC   | OVERALL
background LM             | 11.6% | 19.4 | 14.3 (16.4)
+unigram LSA              | 11.5  | 19.2 | 14.2 (16.3)
+bigram LSA (Witten-Bell) | 11.0  | 19.0 | 13.9 (15.9)
+bigram LSA (Kneser-Ney)  | 11.0  | 18.9 | 13.8 (15.9)
+trigram LSA (Kneser-Ney) | 11.3  | 18.8 | 14.0 (-)
5 Conclusion
We present a correlated bigram LSA approach for unsupervised LM adaptation for ASR. Our contributions include an efficient variational EM procedure for model training and a fractional Kneser-Ney approach for LM smoothing with fractional counts. Bigram LSA yields additional improvement in both perplexity and recognition performance on top of unigram LSA. Increasing the number of topics
for bigram LSA helps despite the model sparsity. Bootstrapping bigram LSA from unigram LSA
saves computation and memory requirement during EM training. Our approach is scalable to large
training corpora and works well on different languages. The improvement from bigram LSA is
statistically significant compared to the unadapted baseline. Future work includes applying the proposed approach to statistical machine translation.
Acknowledgement
We would like to thank Mark Fuhs for his help in parallelizing the bigram LSA training via Condor.
References
[1] J. R. Bellegarda, "Large Vocabulary Speech Recognition with Multispan Statistical Language Models," IEEE Transactions on Speech and Audio Processing, vol. 8, no. 1, pp. 76–84, Jan 2000.
[2] D. Blei, A. Ng, and M. Jordan, "Latent Dirichlet Allocation," in Journal of Machine Learning Research, 2003, pp. 1107–1135.
[3] Y. C. Tam and T. Schultz, "Language model adaptation using variational Bayes inference," in Proceedings of Interspeech, 2005.
[4] D. Mrva and P. C. Woodland, "Unsupervised language model adaptation for Mandarin broadcast conversation transcription," in Proceedings of Interspeech, 2006.
[5] T. Griffiths, M. Steyvers, D. Blei, and J. Tenenbaum, "Integrating topics and syntax," in Advances in Neural Information Processing Systems, 2004.
[6] B. J. Hsu and J. Glass, "Style and topic language model adaptation using HMM-LDA," in Proceedings of Empirical Methods on Natural Language Processing (EMNLP), 2006.
[7] Hanna M. Wallach, "Topic Modeling: Beyond Bag-of-Words," in International Conference on Machine Learning, 2006.
[8] P. Xu, A. Emami, and F. Jelinek, "Training connectionist models for the structured language model," in Proceedings of Empirical Methods on Natural Language Processing (EMNLP), 2003.
[9] R. Kneser and H. Ney, "Improved backing-off for M-gram language modeling," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 1995, vol. 1, pp. 181–184.
[10] R. Kneser, J. Peters, and D. Klakow, "Language model adaptation using dynamic marginals," in Proceedings of European Conference on Speech Communication and Technology (EUROSPEECH), 1997, pp. 1971–1974.
[11] R. Iyer and M. Ostendorf, "Modeling long distance dependence in language: Topic mixtures versus dynamic cache models," IEEE Transactions on Speech and Audio Processing, vol. 7, no. 1, pp. 30–39, Jan 1999.
[12] X. Wang, A. McCallum, and X. Wei, "Topical N-grams: Phrase and topic discovery, with an application to information retrieval," in IEEE International Conference on Data Mining, 2007.
[13] T. Minka, "The Dirichlet-tree distribution," 1999.
[14] Y. C. Tam and T. Schultz, "Correlated latent semantic model for unsupervised language model adaptation," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2007.
[15] A. Stolcke, "SRILM – an extensible language modeling toolkit," in Proceedings of International Conference on Spoken Language Processing (ICSLP), 2002.
Learning a Discriminative Hidden Part Model for
Human Action Recognition
Yang Wang
School of Computing Science
Simon Fraser University
Burnaby, BC, Canada, V5A 1S6
[email protected]
Greg Mori
School of Computing Science
Simon Fraser University
Burnaby, BC, Canada, V5A 1S6
[email protected]
Abstract
We present a discriminative part-based approach for human action recognition
from video sequences using motion features. Our model is based on the recently
proposed hidden conditional random field (hCRF) for object recognition. Similar
to hCRF for object recognition, we model a human action by a flexible constellation of parts conditioned on image observations. Different from object recognition, our model combines both large-scale global features and local patch features
to distinguish various actions. Our experimental results show that our model is
comparable to other state-of-the-art approaches in action recognition. In particular, our experimental results demonstrate that combining large-scale global features and local patch features performs significantly better than directly applying
hCRF on local patches alone.
1 Introduction
Recognizing human actions from videos is a task of obvious scientific and practical importance.
In this paper, we consider the problem of recognizing human actions from video sequences on a
frame-by-frame basis. We develop a discriminatively trained hidden part model to represent human
actions. Our model is inspired by the hidden conditional random field (hCRF) model [16] in object
recognition.
In object recognition, there are three major representations: global template (rigid, e.g. [3], or deformable, e.g. [1]), bag-of-words [18], and part-based [7, 6]. All three representations have been
shown to be effective on certain object recognition tasks. In particular, recent work [6] has shown
that part-based models outperform global templates and bag-of-words on challenging object recognition tasks.
A lot of the ideas used in object recognition can also be found in action recognition. For example,
there is work [2] that treats actions as space-time shapes and reduces the problem of action recognition to 3D object recognition. In action recognition, both global template [5] and bag-of-words
models [14, 4, 15] have been shown to be effective on certain tasks. Although conceptually appealing and promising, the merit of part-based models has not yet been widely recognized in action
recognition. The goal of this work is to address this gap.
Our work is partly inspired by a recent work in part-based event detection [10]. In that work,
template matching is combined with a pictorial structure model to detect and localize actions in
crowded videos. One limitation of that work is that one has to manually specify the parts. Unlike
Ke et al. [10], the parts in our model are initialized automatically.
Figure 1: Construction of the motion descriptor. (a) original image; (b) optical flow; (c) x and y components of optical flow vectors $F_x$, $F_y$; (d) half-wave rectification of x and y components to obtain 4 separate channels $F_x^+$, $F_x^-$, $F_y^+$, $F_y^-$; (e) final blurry motion descriptors $Fb_x^+$, $Fb_x^-$, $Fb_y^+$, $Fb_y^-$.
The major contribution of this work is that we combine the flexibility of part-based approaches with
the global perspectives of large-scale template features in a discriminative model. We show that the
combination of part-based and large-scale template features improves the final results.
2 Our Model
The hidden conditional random field model [16] was originally proposed for object recognition
and has also been applied in sequence labeling [19]. Objects are modeled as flexible constellations of parts conditioned on the appearances of local patches found by interest point operators.
The probability of the assignment of parts to local features is modeled by a conditional random
field (CRF) [11]. The advantage of the hCRF is that it relaxes the conditional independence assumption commonly used in the bag-of-words approaches of object recognition.
Similarly, local patches can also be used to distinguish actions. Figure 4(a) shows some examples
of human motion and the local patches that can be used to distinguish them. A bag-of-words representation can be used to model these local patches for action recognition. However, it suffers from
the same restrictive conditional independence assumption that ignores the spatial structures of
the parts. In this work, we use a variant of hCRF to model the constellation of these local patches in
order to alleviate this restriction.
There are also some important differences between objects and actions. For objects, local patches
could carry enough information for recognition. But for actions, we believe local patches are not
sufficiently informative. In our approach, we modify the hCRF model to combine local patches and
large-scale global features. The large-scale global features are represented by a root model that takes
the frame as a whole. Another important difference with [16] is that we use the learned root model
to find discriminative local patches, rather than using a generic interest-point operator.
2.1 Motion features
Our model is built upon the optical flow features in [5]. This motion descriptor has been shown to
perform reliably with noisy image sequences, and has been applied in various tasks, such as action
classification, motion synthesis, etc.
To calculate the motion descriptor, we first need to track and stabilize the persons in a video sequence. Any reasonable tracking or human detection algorithm can be used, since the motion descriptor we use is very robust to jitters introduced by the tracking. Given a stabilized video sequence
in which the person of interest appears in the center of the field of view, we compute the optical flow
at each frame using the Lucas-Kanade [12] algorithm. The optical flow vector field F is then split
into two scalar fields $F_x$ and $F_y$, corresponding to the x and y components of $F$. $F_x$ and $F_y$ are further half-wave rectified into four non-negative channels $F_x^+$, $F_x^-$, $F_y^+$, $F_y^-$, so that $F_x = F_x^+ - F_x^-$ and $F_y = F_y^+ - F_y^-$. These four non-negative channels are then blurred with a Gaussian kernel and normalized to obtain the final four channels $Fb_x^+$, $Fb_x^-$, $Fb_y^+$, $Fb_y^-$ (see Fig. 1).
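A sketch of this descriptor computation is given below. OpenCV's Farneback dense flow is used here as a stand-in for the Lucas-Kanade flow of [12], and the blur width and the normalization detail are illustrative assumptions.

import cv2
import numpy as np
from scipy.ndimage import gaussian_filter

def motion_descriptor(prev_gray, gray, sigma=3.0):
    """Four half-wave rectified, blurred flow channels Fb_x+/-, Fb_y+/-."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    fx, fy = flow[..., 0], flow[..., 1]
    channels = [np.maximum(fx, 0), np.maximum(-fx, 0),   # Fx+, Fx-
                np.maximum(fy, 0), np.maximum(-fy, 0)]   # Fy+, Fy-
    blurred = [gaussian_filter(c, sigma) for c in channels]
    total = sum(c.sum() for c in blurred) + 1e-8         # joint normalization
    return [c / total for c in blurred]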
2.2 Hidden conditional random field (hCRF)
Now we describe how we model a frame I in a video sequence. Let x be the motion feature of
this frame, and y be the corresponding class label of this frame, ranging over a finite label alphabet
Y. Our task is to learn a mapping from x to y. We assume each image I contains a set of salient
patches $\{I_1, I_2, ..., I_m\}$. We will describe how to find these salient patches in Sec. 3. Our training set consists of labeled images $(x^t, y^t)$ (as a notation convention, we use superscripts to index training images and subscripts to index patches) for $t = 1, 2, ..., n$, where $y^t \in \mathcal{Y}$ and $x^t = (x^t_1, x^t_2, ..., x^t_m)$. $x^t_i = x^t(I^t_i)$ is the feature vector extracted from the global motion feature $x^t$ at the location of the patch $I^t_i$. For each image $I = \{I_1, I_2, ..., I_m\}$, we assume there exists a vector of hidden "part" variables $h = \{h_1, h_2, ..., h_m\}$, where each $h_i$ takes values from a finite set $\mathcal{H}$ of possible parts.
Intuitively, each $h_i$ assigns a part label to the patch $I_i$, where $i = 1, 2, ..., m$. For example, for the action "waving-two-hands", these parts may be used to characterize the movement patterns of the
left and right arms. The values of h are not observed in the training set, and will become the hidden
variables of the model.
We assume there are certain constraints between some pairs of $(h_j, h_k)$. For example, in the case of "waving-two-hands", two patches $h_j$ and $h_k$ at the left hand might have the constraint that they tend to have the same part label, since both of them are characterized by the movement of the left hand. If we consider $h_i$ ($i = 1, 2, ..., m$) to be vertices in a graph $G = (E, V)$, the constraint between $h_j$ and $h_k$ is denoted by an edge $(j, k) \in E$. See Fig. 2 for an illustration of our model. Note that the graph
structure can be different for different images. We will describe how to find the graph structure E in
Sec. 3.
Figure 2: Illustration of the model. Each circle corresponds to a variable, and each square corresponds to a factor in the model.
Given the motion feature $x$ of an image $I$, its corresponding class label $y$, and part labels $h$, a hidden conditional random field is defined as

$$p(y, h|x; \theta) = \frac{\exp(\Psi(y, h, x; \theta))}{\sum_{\hat{y} \in \mathcal{Y}} \sum_{h \in \mathcal{H}^m} \exp(\Psi(\hat{y}, h, x; \theta))}$$

where $\theta$ is the model parameter, and $\Psi(y, h, x; \theta) \in \mathbb{R}$ is a potential function parameterized by $\theta$. It follows that

$$p(y|x; \theta) = \sum_{h \in \mathcal{H}^m} p(y, h|x; \theta) = \frac{\sum_{h \in \mathcal{H}^m} \exp(\Psi(y, h, x; \theta))}{\sum_{\hat{y} \in \mathcal{Y}} \sum_{h \in \mathcal{H}^m} \exp(\Psi(\hat{y}, h, x; \theta))} \quad (1)$$
We assume $\Psi(y, h, x)$ is linear in the parameters $\theta = \{\alpha, \beta, \gamma, \eta\}$:

$$\Psi(y, h, x; \theta) = \sum_{j \in V} \alpha \cdot \phi(x_j, h_j) + \sum_{j \in V} \beta \cdot \varphi(y, h_j) + \sum_{(j,k) \in E} \gamma \cdot \psi(y, h_j, h_k) + \eta \cdot \omega(y, x) \quad (2)$$
where $\phi(\cdot)$ and $\varphi(\cdot)$ are feature vectors depending on unary $h_j$'s, $\psi(\cdot)$ is a feature vector depending on pairs of $(h_j, h_k)$, and $\omega(\cdot)$ is a feature vector that does not depend on the values of hidden variables.
The details of these feature vectors are described in the following.
Unary potential $\alpha \cdot \phi(x_j, h_j)$: This potential function models the compatibility between $x_j$ and the part label $h_j$, i.e., how likely the patch $x_j$ is labeled as part $h_j$. It is parameterized as

$$\alpha \cdot \phi(x_j, h_j) = \sum_{c \in \mathcal{H}} \alpha_c^{\top} \cdot 1_{\{h_j = c\}} \cdot [f^a(x_j)\ f^s(x_j)] \quad (3)$$

where we use $[f^a(x_j)\ f^s(x_j)]$ to denote the concatenation of two vectors $f^a(x_j)$ and $f^s(x_j)$. $f^a(x_j)$ is a feature vector describing the appearance of the patch $x_j$. In our case, $f^a(x_j)$ is simply the concatenation of the four channels of the motion features at patch $x_j$, i.e., $f^a(x_j) = [Fb_x^+(x_j)\ Fb_x^-(x_j)\ Fb_y^+(x_j)\ Fb_y^-(x_j)]$. $f^s(x_j)$ is a feature vector describing the spatial location of the patch $x_j$. We discretize the whole image locations into $l$ bins, and $f^s(x_j)$ is a length-$l$ vector of all zeros with a single one for the bin occupied by $x_j$. The parameter $\alpha_c$ can be interpreted as the measurement of compatibility between the feature vector $[f^a(x_j)\ f^s(x_j)]$ and the part label $h_j = c$. The parameter $\alpha$ is simply the concatenation of $\alpha_c$ for all $c \in \mathcal{H}$.
Unary potential $\beta \cdot \varphi(y, h_j)$: This potential function models the compatibility between class label $y$ and part label $h_j$, i.e., how likely an image with class label $y$ contains a patch with part label $h_j$. It is parameterized as

$$\beta \cdot \varphi(y, h_j) = \sum_{a \in \mathcal{Y}} \sum_{b \in \mathcal{H}} \beta_{a,b} \cdot 1_{\{y = a\}} \cdot 1_{\{h_j = b\}} \quad (4)$$

where $\beta_{a,b}$ indicates the compatibility between $y = a$ and $h_j = b$.
Pairwise potential $\gamma \cdot \psi(y, h_j, h_k)$: This pairwise potential function models the compatibility between class label $y$ and a pair of part labels $(h_j, h_k)$, i.e., how likely an image with class label $y$ contains a pair of patches with part labels $h_j$ and $h_k$, where $(j, k) \in E$ corresponds to an edge in the graph. It is parameterized as

$$\gamma \cdot \psi(y, h_j, h_k) = \sum_{a \in \mathcal{Y}} \sum_{b \in \mathcal{H}} \sum_{c \in \mathcal{H}} \gamma_{a,b,c} \cdot 1_{\{y = a\}} \cdot 1_{\{h_j = b\}} \cdot 1_{\{h_k = c\}} \quad (5)$$

where $\gamma_{a,b,c}$ indicates the compatibility of $y = a$, $h_j = b$ and $h_k = c$ for the edge $(j, k) \in E$.
Root model $\eta \cdot \omega(y, x)$: The root model is a potential function that models the compatibility of class label $y$ and the large-scale global feature of the whole image. It is parameterized as

$$\eta \cdot \omega(y, x) = \sum_{a \in \mathcal{Y}} \eta_a^{\top} \cdot 1_{\{y = a\}} \cdot g(x) \quad (6)$$

where $g(x)$ is a feature vector describing the appearance of the whole image. In our case, $g(x)$ is the concatenation of all four channels of the motion features in the image, i.e., $g(x) = [Fb_x^+\ Fb_x^-\ Fb_y^+\ Fb_y^-]$. $\eta_a$ can be interpreted as a root filter that measures the compatibility between the appearance of an image $g(x)$ and a class label $y = a$. And $\eta$ is simply the concatenation of $\eta_a$ for all $a \in \mathcal{Y}$.
The parameterization of $\Psi(y, h, x)$ is similar to that used in object recognition [16]. But there are two important differences. First of all, our definition of the unary potential function $\phi(\cdot)$ encodes both appearance and spatial information of the patches. Secondly, we have a potential function $\omega(\cdot)$ describing the large-scale appearance of the whole image. The representation in Quattoni et al. [16]
only models local patches extracted from the image. This may be appropriate for object recognition.
But for human action recognition, it is not clear that local patches can be sufficiently informative.
We will demonstrate this experimentally in Sec. 4.
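For concreteness, the potential function of Eq. 2 can be scored for one frame as in the sketch below; the parameter shapes are illustrative assumptions about how $\alpha$, $\beta$, $\gamma$ and $\eta$ might be stored.

import numpy as np

def potential(y, h, patch_feats, edges, g_x, alpha, beta, gamma, eta):
    """Psi(y, h, x; theta) of Eq. 2. patch_feats[j] = [f_a(x_j) f_s(x_j)];
    edges = list of tree edges (j, k); g_x = whole-frame feature g(x).
    alpha[c], eta[a]: filter vectors; beta[a, b], gamma[a, b, c]: tables."""
    score = eta[y] @ g_x                                 # root model
    for j, f in enumerate(patch_feats):
        score += alpha[h[j]] @ f                         # patch vs. part
        score += beta[y, h[j]]                           # class vs. part
    for j, k in edges:
        score += gamma[y, h[j], h[k]]                    # pairwise term
    return score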
3 Learning and Inference
The model parameters $\theta$ are learned by maximizing the conditional log-likelihood on the training images:

$$\theta^* = \arg\max_{\theta} L(\theta) = \arg\max_{\theta} \sum_{t} \log p(y^t|x^t; \theta) = \arg\max_{\theta} \sum_{t} \log \left(\sum_{h} p(y^t, h|x^t; \theta)\right) \quad (7)$$

The objective function $L(\theta)$ in Quattoni et al. [16] also has a regularization term $-\frac{||\theta||^2}{2\sigma^2}$. In our experiments, we find that the regularization does not seem to have much effect on the final results, so we will use the un-regularized version. Different from a conditional random field (CRF) [11], the objective function $L(\theta)$ of hCRF is not concave, due to the hidden variables $h$. But we can still use gradient ascent to find $\theta$ that is locally optimal. The gradient of the log-likelihood with respect to the $t$-th training image $(x^t, y^t)$ can be calculated as:
$$\frac{\partial L^t(\theta)}{\partial \alpha} = \sum_{j \in V} E_{p(h_j|y^t,x^t;\theta)}\,\phi(x^t_j, h_j) - E_{p(h_j,y|x^t;\theta)}\,\phi(x^t_j, h_j)$$

$$\frac{\partial L^t(\theta)}{\partial \beta} = \sum_{j \in V} E_{p(h_j|y^t,x^t;\theta)}\,\varphi(h_j, y^t) - E_{p(h_j,y|x^t;\theta)}\,\varphi(h_j, y)$$

$$\frac{\partial L^t(\theta)}{\partial \gamma} = \sum_{(j,k) \in E} E_{p(h_j,h_k|y^t,x^t;\theta)}\,\psi(y^t, h_j, h_k) - E_{p(h_j,h_k,y|x^t;\theta)}\,\psi(y, h_j, h_k)$$

$$\frac{\partial L^t(\theta)}{\partial \eta} = \omega(y^t, x^t) - E_{p(y|x^t;\theta)}\,\omega(y, x^t) \quad (8)$$
Assuming the edges $E$ form a tree, the expectations in Eq. 8 can be calculated in $O(|\mathcal{Y}||E||\mathcal{H}|^2)$
time using belief propagation.
Now we describe several details about how the above ideas are implemented.
Learning the root filter $\eta$: Given a set of training images $(x^t, y^t)$, we firstly learn the root filter $\eta$ by solving the following optimization problem:

$$\eta^* = \arg\max_{\eta} \sum_{t} \log L(y^t|x^t; \eta) = \arg\max_{\eta} \sum_{t} \log \frac{\exp\left(\eta \cdot \omega(y^t, x^t)\right)}{\sum_{\hat{y}} \exp\left(\eta \cdot \omega(\hat{y}, x^t)\right)} \quad (9)$$
In other words, $\eta^*$ is learned by only considering the feature vector $\omega(\cdot)$. We then use $\eta^*$ as the starting point for $\eta$ in the gradient ascent (Eq. 8). The other parameters $\alpha$, $\beta$, $\gamma$ are initialized randomly.
Patch initialization: We use a simple heuristic similar to that used in [6] to initialize ten salient
patches on every training image from the root filter $\eta^*$ trained above. For each training image $I$ with class label $a$, we apply the root filter $\eta_a$ on $I$, then select a rectangular region of size $5 \times 5$ in the
image that has the most positive energy. We zero out the weights in this region and repeat until ten
patches are selected. Figure 4(a) shows examples of the patches found in some images. The tree
G = (V, E) is formed by running a minimum spanning tree algorithm over the ten patches.
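A direct (unoptimized) sketch of this greedy selection from the root-filter response map is shown below; an integral image would make the window sums constant-time, and all names are illustrative.

import numpy as np

def init_patches(response, n_patches=10, size=5):
    """Greedily pick the size x size window with the largest positive energy,
    zero it out, and repeat until n_patches regions are selected."""
    r = response.copy()
    H, W = r.shape
    picks = []
    for _ in range(n_patches):
        best, best_ij = -np.inf, (0, 0)
        for i in range(H - size + 1):
            for j in range(W - size + 1):
                e = r[i:i + size, j:j + size].sum()
                if e > best:
                    best, best_ij = e, (i, j)
        picks.append(best_ij)
        i, j = best_ij
        r[i:i + size, j:j + size] = 0.0
    return picks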
Inference: During testing, we do not know the class label of a given test image, so we cannot use the
patch initialization described above to initialize the patches, since we do not know which root filter
to use. Instead, we run root filters from all the classes on a test image, then calculate the probabilities
of all possible instantiations of patches under our learned model, and classify the image by picking
the class label that gives the maximum of these probabilities. In other words, for a testing image with motion descriptor $x$, we first obtain $|\mathcal{Y}|$ instances $\{x^{(1)}, x^{(2)}, ..., x^{(|\mathcal{Y}|)}\}$, where each $x^{(k)}$ is obtained by initializing the patches on $x$ using the root filter $\eta_k$. The final class label $y^*$ of $x$ is obtained as

$$y^* = \arg\max_{y} \max\left\{p(y|x^{(1)}; \theta),\ p(y|x^{(2)}; \theta),\ ...,\ p(y|x^{(|\mathcal{Y}|)}; \theta)\right\}.$$
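This inference rule can be sketched as below, where posterior_fn is assumed to marginalize the hidden parts $h$ by belief propagation over the tree; all function names are illustrative.

import numpy as np

def classify(x, classes, root_filters, init_patches_fn, posterior_fn):
    """For each class k, build the instantiation x^(k) by initializing patches
    with root filter eta_k, then return argmax_y over all p(y | x^(k))."""
    best_y, best_p = None, -np.inf
    for k in classes:
        patches = init_patches_fn(x, root_filters[k])    # x^(k)
        for y in classes:
            p = posterior_fn(y, x, patches)              # p(y | x^(k); theta)
            if p > best_p:
                best_y, best_p = y, p
    return best_y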
4 Experiments
We test our algorithm on two publicly available datasets that have been widely used in action recognition: the Weizmann human action dataset [2] and the KTH human motion dataset [17]. Performance on these benchmarks is saturating: state-of-the-art approaches achieve near-perfect results. We show
our method achieves results comparable to the state-of-the-art, and more importantly that our extended hCRF model significantly outperforms a direct application of the original hCRF model [16].
Weizmann dataset: The Weizmann human action dataset contains 83 video sequences showing nine different people, each performing nine different actions: running, walking, jumping-jack, jumping-forward-on-two-legs, jumping-in-place-on-two-legs, galloping-sideways, waving-two-hands, waving-one-hand, bending. We track and stabilize the figures using the background
subtraction masks that come with this dataset.
We randomly choose videos of five subjects as the training set, and the videos of the remaining four subjects as the test set. We learn three hCRF models with different sizes of possible part labels, $|\mathcal{H}|$ =
6, 10, 20. Our model classifies every frame in a video sequence (i.e., per-frame classification), but
Figure 3: Confusion matrices of classification results on the Weizmann dataset (left: frame-by-frame classification; right: video classification). Horizontal rows are ground truths, and vertical columns are predictions.
Table 1: Comparison of two baseline systems with our approach on the Weizmann dataset.

method    | root model | local hCRF           | our approach
          |            | |H|=6  |H|=10 |H|=20 | |H|=6  |H|=10 |H|=20
per-frame | 0.7470     | 0.5722 0.6656 0.6383 | 0.8682 0.9029 0.8557
per-video | 0.8889     | 0.5556 0.6944 0.6111 | 0.9167 0.9722 0.9444
we can also obtain the class label for the whole video sequence by the majority voting of the labels
of its frames (i.e., per-video classification). We show the confusion matrix with |H| = 10 for both
per-frame and per-video classification in Fig. 3.
We compare our system to two baseline methods. The first baseline (root model) only uses the root
filter term, which is simply a discriminative version of Efros et al. [5]. The second baseline
(local hCRF) is a direct application of the original hCRF model [16]. It is similar to our model, but
without the root filter term, i.e., local hCRF only uses the root filter to initialize the salient
patches, but does not use it in the final model. The comparative results are shown in Table 1. Our
approach significantly outperforms the two baseline methods. We also compare our results (with
|H| = 10) with previous work in Table 2. Note that [2] classifies space-time cubes, so it is not clear how it
can be compared with methods that classify frames or videos. Our result is significantly better
than [13], and comparable to [8], although we acknowledge that the comparison is not completely
fair, since [13] does not use any tracking or background subtraction.
We visualize the learned parts in Fig. 4(a). Each patch is represented by a color that corresponds to
the most likely part label of that patch. We also visualize the root filters applied on these images in
Fig. 4(b).
KTH dataset: The KTH human motion dataset contains six types of human actions (walking,
jogging, running, boxing, hand waving and hand clapping) performed several times by 25 subjects
in four different scenarios: outdoors, outdoors with scale variation, outdoors with different clothes
and indoors. We first run an automatic preprocessing step to track and stabilize the video sequences,
so that all the figures appear in the center of the field of view.
We split the videos roughly equally into training/test sets and randomly sample 10 frames from each
video. The confusion matrices (with |H| = 10) for both per-frame and per-video classification are
Table 2: Comparison of classification accuracy with previous work on the Weizmann dataset.

method                   per-frame(%)   per-video(%)   per-cube(%)
Our method               90.3           97.2           N/A
Jhuang et al. [8]        N/A            98.8           N/A
Niebles & Fei-Fei [13]   55             72.8           N/A
Blank et al. [2]         N/A            N/A            99.64
Figure 4: (a) Visualization of the learned parts. Patches are colored according to their most likely
part labels. Each color corresponds to a part label. Some interesting observations can be made.
For example, the part label represented by red seems to correspond to the "moving down" patterns
mostly observed in the "bending" action. The part label represented by green seems to correspond
to the motion patterns distinctive of "hand-waving" actions; (b) Visualization of root filters applied
on these images. For each image with class label c, we apply the root filter ?c . The results show the
filter responses aggregated over four motion descriptor channels. Bright areas correspond to positive
energies, i.e., areas that are discriminative for this class.
[Figure 5: Confusion matrices of classification results on the KTH dataset (|H| = 10), with one panel for frame-by-frame classification and one for per-video classification. Horizontal rows are ground truths; vertical columns are predictions.]
shown in Fig. 5. The comparison with the two baseline algorithms is summarized in Table 3. Again,
our approach outperforms the two baseline systems.
The comparison with other approaches is summarized in Table 4. We should emphasize that we do
not attempt a direct comparison, since different methods listed in Table 4 have all sorts of variations
in their experiments (e.g., different split of training/test data, whether temporal smoothing is used,
whether per-frame classification can be performed, whether tracking/background subtraction is used,
whether the whole dataset is used etc.), which make it impossible to directly compare them. We
provide the results only to show that our approach is comparable to the state-of-the-art.
Table 3: Comparison of two baseline systems with our approach on the KTH dataset.

method      root model   local hCRF                    our approach
                         |H|=6    |H|=10   |H|=20      |H|=6    |H|=10   |H|=20
per-frame   0.5377       0.4749   0.4452   0.4282      0.6633   0.6698   0.6444
per-video   0.7339       0.5607   0.5814   0.5504      0.7855   0.8760   0.7512
Table 4: Comparison of per-video classification accuracy with previous approaches on the KTH dataset.

method                accuracy(%)
Our method            87.60
Jhuang et al. [8]     91.70
Nowozin et al. [15]   87.04
Niebles et al. [14]   81.50
Dollár et al. [4]     81.17
Schuldt et al. [17]   71.72
Ke et al. [9]         62.96
5 Conclusion
We have presented a discriminatively learned part model for human action recognition. Unlike
previous work [10], our model does not require manual specification of the parts. Instead, the parts
are initialized by a learned root filter. Our model combines both large-scale features used in global
templates and local patch features used in bag-of-words models. Our experimental results show that
our model is quite effective in recognizing actions. The results are comparable to the state-of-the-art approaches. In particular, we show that the combination of large-scale features and local patch
features performs significantly better than using either of them alone.
References
[1] A. C. Berg, T. L. Berg, and J. Malik. Shape matching and object recognition using low distortion correspondence. In IEEE CVPR, 2005.
[2] M. Blank, L. Gorelick, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. In IEEE
ICCV, 2005.
[3] N. Dalal and B. Triggs. Histogram of oriented gradients for human detection. In IEEE CVPR, 2005.
[4] P. Dollár, V. Rabaud, G. Cottrell, and S. Belongie. Behavior recognition via sparse spatio-temporal
features. In VS-PETS Workshop, 2005.
[5] A. A. Efros, A. C. Berg, G. Mori, and J. Malik. Recognizing action at a distance. In IEEE ICCV, 2003.
[6] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part
model. In IEEE CVPR, 2008.
[7] P. F. Felzenszwalb and D. P. Huttenlocher. Pictorial structures for object recognition. IJCV, 61(1):55-79,
January 2003.
[8] H. Jhuang, T. Serre, L. Wolf, and T. Poggio. A biologically inspired system for action recognition. In
IEEE ICCV, 2007.
[9] Y. Ke, R. Sukthankar, and M. Hebert. Efficient visual event detection using volumetric features. In IEEE
ICCV, 2005.
[10] Y. Ke, R. Sukthankar, and M. Hebert. Event detection in crowded videos. In IEEE ICCV, 2007.
[11] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data. In ICML, 2001.
[12] B. D. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision.
In Proc. DARPA Image Understanding Workshop, 1981.
[13] J. C. Niebles and L. Fei-Fei. A hierarchical model of shape and appearance for human action classification.
In IEEE CVPR, 2007.
[14] J. C. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learning of human action categories using spatialtemporal words. In BMVC, 2006.
[15] S. Nowozin, G. Bakir, and K. Tsuda. Discriminative subsequence mining for action classification. In
IEEE ICCV, 2007.
[16] A. Quattoni, M. Collins, and T. Darrell. Conditional random fields for object recognition. In NIPS 17,
2005.
[17] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: a local SVM approach. In IEEE
ICPR, 2004.
[18] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman. Discovering objects and their
location in images. In IEEE ICCV, 2005.
[19] S. B. Wang, A. Quattoni, L.-P. Morency, D. Demirdjian, and T. Darrell. Hidden conditional random fields
for gesture recognition. In IEEE CVPR, 2006.
Robust Kernel Principal Component Analysis
Minh Hoai Nguyen & Fernando De la Torre
Carnegie Mellon University, Pittsburgh, PA 15213, USA.
Abstract
Kernel Principal Component Analysis (KPCA) is a popular generalization of linear PCA that allows non-linear feature extraction. In KPCA, data in the input
space is mapped to higher (usually) dimensional feature space where the data can
be linearly modeled. The feature space is typically induced implicitly by a kernel
function, and linear PCA in the feature space is performed via the kernel trick.
However, due to the implicitness of the feature space, some extensions of PCA
such as robust PCA cannot be directly generalized to KPCA. This paper presents
a technique to overcome this problem, and extends it to a unified framework for
treating noise, missing data, and outliers in KPCA. Our method is based on a novel
cost function to perform inference in KPCA. Extensive experiments, in both synthetic and real data, show that our algorithm outperforms existing methods.
1 Introduction
Principal Component Analysis (PCA) [9] is one of the primary statistical techniques for feature
extraction and data modeling. One drawback of PCA is its limited ability to model non-linear
structures that exist in many computing applications. Kernel methods [18] enable us to extend PCA
to model non-linearities while retaining its computational efficiency. In particular, Kernel PCA
(KPCA) [19] has repeatedly outperformed PCA in many image modeling tasks [19, 14].
Unfortunately, realistic visual data is often corrupted by undesirable artifacts due to occlusion (e.g.
a hand in front of a face, Fig. 1.d), illumination (e.g. specular reflection, Fig. 1.e), noise (e.g. from
the capturing device, Fig. 1.b), or from the underlying data generation method (e.g. missing data due
to transmission, Fig. 1.c). Therefore, robustness to noise, missing data, and outliers is a desired
property to have for algorithms in computer vision.
[Figure 1: Several types of data corruption and results of our method. a) original image, b) corruption by additive Gaussian noise, c) missing data, d) hand occlusion, e) specular reflection. f) to i) are the results of our method for recovering uncorrupted data from b) to e) respectively.]
[Figure 2: Using the KPCA principal subspace to find z, a clean version of a corrupted sample x.]
Throughout the years, several extensions of PCA have been proposed to address the problems of
outliers and missing data, see [6] for a review. However, it still remains unclear how to generalize
those extensions to KPCA, since directly migrating robust PCA techniques to KPCA is not possible
due to the implicitness of the feature space.
Robust KPCA (RKPCA), a unified framework for denoising images, recovering missing data, and
handling intra-sample outliers. Robust computation in RKPCA does not suffer from the implicitness of the feature space because of a novel cost function for reconstructing ?clean? images from
corrupted data. The proposed cost function is composed of two terms, requiring the reconstructed
image to be close to the KPCA principal subspace as well as to the input sample. We show that
robustness can be naturally achieved by using robust functions to measure the closeness between the
reconstructed and the input data.
2 Previous work
2.1 KPCA and pre-image
KPCA [19, 18, 20] is a non-linear extension of principal component analysis (PCA) using kernel
methods. The kernel represents an implicit mapping of the data to a (usually) higher dimensional
space where linear PCA is performed.
Let X denote the input space and H the feature space. The mapping function φ : X → H is
implicitly induced by a kernel function k : X × X → ℝ that defines the similarity between data in
the input space. One can show that if k(·,·) is a kernel then the function φ(·) and the feature space
H exist; furthermore k(x, y) = ⟨φ(x), φ(y)⟩ [18].
However, directly performing linear PCA in the feature space might not be feasible because the
feature space typically has very high dimensionality (including infinity). Thus KPCA is often done
via the kernel trick. Let D = [d_1 d_2 ... d_n], see notation¹, be a training data matrix, such that
d_i ∈ X ∀i = 1, n. Let k(·,·) denote a kernel function, and K denote the kernel matrix (element
ij of K is k_ij = k(d_i, d_j)). KPCA is computed in closed form by finding the first m eigenvectors
(a_i's) corresponding to the largest eigenvalues (λ_i's) of the kernel matrix K (i.e. KA = AΛ). The
eigenvectors in the feature space V can be computed as V = ΦA, where Φ = [φ(d_1) ... φ(d_n)]. To
ensure orthonormality of {v_i}_{i=1}^m, KPCA imposes that λ_i⟨a_i, a_i⟩ = 1. It can be shown that {v_i}_{i=1}^m
form an orthonormal basis of size m that best preserves the variance of the data in the feature space [19].
Assume x is a data point in the input space, and let Pφ(x) denote the projection of φ(x) onto
the principal subspace {v_i}_1^m. Because {v_i}_1^m is a set of orthonormal vectors, we have
Pφ(x) = Σ_{i=1}^m ⟨φ(x), v_i⟩ v_i. The reconstruction error (in feature space) is given by:

    E_proj(x) = ||φ(x) − Pφ(x)||_2^2 = ⟨φ(x), φ(x)⟩ − Σ_{i=1}^m ⟨φ(x), v_i⟩² = k(x, x) − r(x)^T M r(x),   (1)

where r(x) = Φ^T φ(x), and M = Σ_{i=1}^m a_i a_i^T.
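These quantities are straightforward to compute with a standard eigensolver. The following is a minimal numpy sketch (our illustration, not the authors' code); it assumes a Gaussian kernel and, like the formulas above, omits kernel centering.

import numpy as np

def gaussian_kernel(x, y, gamma):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def fit_kpca(D, gamma, m):
    """D: (d, n) training matrix. Returns M = sum_i a_i a_i^T."""
    n = D.shape[1]
    K = np.array([[gaussian_kernel(D[:, i], D[:, j], gamma)
                   for j in range(n)] for i in range(n)])
    lam, A = np.linalg.eigh(K)            # ascending eigenvalues
    lam, A = lam[::-1][:m], A[:, ::-1][:, :m]
    A = A / np.sqrt(lam)                  # enforce lam_i <a_i, a_i> = 1
    return A @ A.T                        # M

def e_proj(x, D, M, gamma):
    """Feature-space reconstruction error of Eq. 1."""
    r = np.array([gaussian_kernel(x, D[:, j], gamma) for j in range(D.shape[1])])
    return gaussian_kernel(x, x, gamma) - r @ M @ r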
The pre-image of the projection is the z ∈ X that satisfies φ(z) = Pφ(x); z is also referred to as
the KPCA reconstruction of x. However, the pre-image of Pφ(x) usually does not exist, so finding
the KPCA reconstruction of x means finding z such that φ(z) is as close to Pφ(x) as possible.
It should be noted that the closeness between φ(z) and Pφ(x) can be defined in many ways, and
different cost functions lead to different optimization problems. Schölkopf et al. [17] and Mika et
al. [13] propose to approximate the reconstruction of x by arg min_z ||φ(z) − Pφ(x)||_2^2. Two other
objective functions have been proposed by Kwok & Tsang [10] and Bakir et al. [2].
2.2 KPCA-based algorithms for dealing with noise, outliers and missing data
Over the years, several methods extending KPCA algorithms to deal with noise, outliers, or missing
data have been proposed. Mika et al [13], Kwok & Tsang [10], and Bakir et al [2] show how
denoising can be achieved by using the pre-image. While these papers present promising denoising
results for handwritten digits, there are at least two problems with these approaches. Firstly, because
the input image x is noisy, the similarity measurement between x and other data point di (i.e.
k(x, di ) the kernel) might be adversely affected, biasing the KPCA reconstruction of x. Secondly,
¹Bold uppercase letters denote matrices (e.g. D), bold lowercase letters denote column vectors (e.g. d). d_j
represents the j-th column of the matrix D. d_ij denotes the scalar in the i-th row and j-th column of the matrix
D and the i-th element of the column vector d_j. Non-bold letters represent scalar variables. 1_k ∈ ℝ^{k×1} is a
column vector of ones. I_k ∈ ℝ^{k×k} is the identity matrix.
Figure 3: Key difference between previous work (a) and ours (b). In (a), one seeks z such that φ(z)
is close to Pφ(x). In (b), we seek z such that φ(z) is close to both φ(x) and the principal subspace.
current KPCA reconstruction methods equally weigh all the features (i.e. pixels); it is impossible to
weigh the importance of some features over the others.
Other existing methods also have limitations. Some [7, 22, 1] only consider robustness of the principal subspace; they do not address robust fitting. Lu et al [12] present an iterative approach to handle
outliers in training data. At each iteration, the KPCA model is built, and the data points that have the
highest reconstruction errors are regarded as outliers and discarded from the training set. However,
this approach does not handle intra-sample outliers (outliers that occur at a pixel level [6]).
Berar et al. [3] propose to use KPCA with polynomial kernels to
handle missing data. However, it is not clear how to extend this approach to other kernels. Furthermore, with polynomial kernels of high degree, the objective function is hard to optimize. Sanguinetti
& Lawrence [16] propose an elegant framework to handle missing data. The framework is based on
the probabilistic interpretation inherited from Probabilistic PCA [15, 21, 11]. However, Sanguinetti
& Lawrence [16] do not address the problem of outliers.
This paper presents a novel cost function that unifies the treatment of noise, missing data and outliers
in KPCA. Experiments show that our algorithm outperforms existing approaches [6, 10, 13, 16].
3 Robust KPCA
3.1 KPCA reconstruction revisited
Given an image x ∈ X, Fig. 2 describes the task of finding the KPCA-reconstructed image of x
(an uncorrupted version of x, to which we will refer as the KPCA reconstruction). Mathematically, the task
is to find a point z ∈ X such that φ(z) is in the principal subspace (denoted PS) and φ(z) is as close
to φ(x) as possible. In other words, finding the KPCA reconstruction of x is to optimize:

    arg min_z ||φ(z) − φ(x)||²  s.t.  φ(z) ∈ PS.   (2)
However, since there might not exist z ∈ X such that φ(z) ∈ PS, the above optimization problem
needs to be relaxed. There is a common relaxation approach used by existing methods for computing
the KPCA reconstruction of x. This approach conceptually involves two steps: (i) finding Pφ(x),
which is the closest point to φ(x) among all the points in the principal subspace; (ii) finding z
such that φ(z) is as close to Pφ(x) as possible. This relaxation is depicted in Fig. 3a. If the L2 norm
is used to measure the closeness between φ(z) and Pφ(x), the resulting KPCA reconstruction is
arg min_z ||φ(z) − Pφ(x)||_2^2.
This approach for KPCA reconstruction is not robust. For example, if x is corrupted with intra-sample outliers (e.g. occlusion), φ(x) and Pφ(x) will also be adversely affected. As a consequence, finding z that minimizes ||φ(z) − Pφ(x)||_2^2 does not always produce a "clean" version of
x. Furthermore, it is unclear how to incorporate robustness into the above formulation.
Here, we propose a novel relaxation of (2) that enables the incorporation of robustness. The KPCA
reconstruction of x is taken as:
    arg min_z ||φ(x) − φ(z)||_2^2 + C ||φ(z) − Pφ(z)||_2^2,   (3)

where the second term is E_proj(z) of Eq. 1.
Algorithm 1 RKPCA for missing attribute values in training data
Input: training data D, number of iterations m, number of partitions k.
Initialize: missing values by the means of known values.
for iter = 1 to m do
Randomly divide D into k equal partitions D1 , ..., Dk
for i = 1 to k do
Train RKPCA using data D \ Di
Run RKPCA fitting for Di with known missing attributes.
end for
Update missing values of D
end for
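A minimal runnable sketch of this loop is given below. The helpers train_rkpca and rkpca_fit are assumed (hypothetical) stand-ins for RKPCA training and for the reconstruction of Sec. 3.5, respectively.

import numpy as np

def fill_missing(D, mask, m=25, k=10):
    """D: (d, n) data, arbitrary values at missing entries.
    mask: boolean (d, n), True where the value is observed."""
    # initialize missing values by per-attribute means of known values
    for a in range(D.shape[0]):
        D[a, ~mask[a]] = D[a, mask[a]].mean()
    for _ in range(m):
        idx = np.random.permutation(D.shape[1])
        for part in np.array_split(idx, k):
            rest = np.setdiff1d(idx, part)
            model = train_rkpca(D[:, rest])                 # assumed helper
            for j in part:
                z = rkpca_fit(model, D[:, j], mask[:, j])   # assumed helper
                D[~mask[:, j], j] = z[~mask[:, j]]          # update missing entries
    return D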
Intuitively, the above cost function requires that the KPCA reconstruction of x be a point z such that φ(z)
is close to both φ(x) and the principal subspace. C is a user-defined parameter that controls the
relative importance of these two terms. This approach is depicted in Fig. 3b.
It is possible to generalize the above cost function further. The first term of Eq. 3 need not be
||φ(x) − φ(z)||_2^2. In fact, for the sake of robustness, it is preferable that ||φ(x) − φ(z)||_2^2 be replaced
by a robust function E_0 : X × X → ℝ for measuring the similarity between x and z. Furthermore,
there is no reason why E_0 should be restricted to the metric of the feature space. In short, the KPCA
reconstruction of x can be taken as:

    arg min_z E_0(x, z) + C E_proj(z).   (4)

By choosing appropriate forms for E_0, one can use KPCA to handle noise, missing data, and intra-sample outliers. We show this in the following sections.
3.2 Dealing with missing data in testing samples
Assume the KPCA has been learned from complete and noise-free data. Given a new image x
with missing values, a logical function E_0 that does not depend on the missing values could be:
E_0(x, z) = −exp(−γ₂||W(x − z)||_2^2), where W is a diagonal matrix; the elements of its diagonal
are 0 or 1 depending on whether the corresponding attributes of x have missing values or not.
3.3 Dealing with intra-sample outliers in testing samples
To handle intra-sample outliers, we could use a robust function for E_0. For instance:
E_0(x, z) = −exp(−γ₂ Σ_{i=1}^d ρ(x_i − z_i, σ)), where ρ(·,·) is the Geman–McClure function,
ρ(y, σ) = y²/(y² + σ²), and σ is a parameter of the function. This function is also used in [6] for Robust PCA.
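Both robust similarity terms (the masked form of Sec. 3.2 and the Geman–McClure form above) are easy to write down; a small sketch, with γ₂ and σ the parameters named in the text:

import numpy as np

def e0_missing(x, z, w, gamma2):
    """-exp(-gamma2 * ||W(x - z)||^2); w is the 0/1 diagonal of W."""
    return -np.exp(-gamma2 * np.sum((w * (x - z)) ** 2))

def geman_mcclure(y, sigma):
    return y ** 2 / (y ** 2 + sigma ** 2)

def e0_robust(x, z, gamma2, sigma):
    """-exp(-gamma2 * sum_i rho(x_i - z_i, sigma))."""
    return -np.exp(-gamma2 * np.sum(geman_mcclure(x - z, sigma)))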
3.4 Dealing with missing data and intra-sample outliers in training data
Previous sections have shown how to deal with outliers and missing data in the testing set (assuming
KPCA has been learned from a clean training set). If we have missing data in the training samples
[6], a simple approach is to iteratively alternate between estimating the missing values and updating the KPCA principal subspace until convergence. Algorithm 1 outlines the main steps of this
approach. An algorithm for handling intra-sample outliers in training data could be constructed
similarly.
Alternatively, a kernel matrix could be computed ignoring the missing values, that is, each
k_ij = exp(−γ₂||W_i W_j (x_i − x_j)||_2^2), where γ₂ = 1/trace(W_i W_j). However, the positive definiteness of the
resulting kernel matrix cannot be guaranteed.
3.5 Optimization
In general, the objective function in Eq. 4 is not concave, hence non-convex optimization techniques are required. In this section, we restrict our attention to the Gaussian kernel
(k(x, y) = exp(−γ||x − y||²)), which is the most widely used kernel. If E_0 takes the form of Sec. 3.2, we need to
maximize

    E(z) = exp(−γ₂||W(x − z)||²) + C · r(z)^T M r(z),   (5)
where r(·) and M are defined in Eq. 1, and we denote the first term E_1(z) and E_2(z) = r(z)^T M r(z). Note that optimizing this function is not harder than optimizing the objective function used by Mika et al. [13]. Here, we also derive a fixed-point optimization algorithm. A necessary condition for a maximum is

    ∇_z E(z) = ∇_z E_1(z) + C ∇_z E_2(z) = 0.

The gradients are given by:

    ∇_z E_1(z) = −2γ₂ W_2 (z − x),  with W_2 = exp(−γ₂||W(x − z)||²) W²,
    ∇_z E_2(z) = −4γ [(1_n^T Q 1_n) z − D Q 1_n],

where Q is a matrix such that q_ij = m_ij exp(−γ||z − d_i||² − γ||z − d_j||²). A fixed-point update is:

    z = [ γ₂/(2Cγ) W_2 + (1_n^T Q 1_n) I_n ]^{−1} ( γ₂/(2Cγ) W_2 x + D Q 1_n ) = W_3^{−1} u,   (6)

where W_3 denotes the bracketed matrix and u the vector in the second parentheses.
The above equation is the update rule for z at every iteration of the algorithm. The algorithm stops
when the difference between two successive z's is smaller than a threshold.
Note that W_3 is a diagonal matrix with non-negative entries since Q is a positive semi-definite
matrix. Therefore, W_3 fails to be invertible only if there are some zero elements on the diagonal. This
happens only if some elements of the diagonal of W are 0 and 1_n^T Q 1_n = 0. It can be shown that
1_n^T Q 1_n = Σ_{i=1}^m (v_i^T φ(z))², so 1_n^T Q 1_n = 0 only when φ(z) ⊥ v_i, ∀i. However, this rarely occurs in
practice; moreover, if this happens we can restart the algorithm from a different initial point.
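The fixed-point iteration of Eq. 6 (for the E_0 of Sec. 3.2) can be sketched as follows; this is our own illustration, where M is the matrix from Eq. 1 and w is the 0/1 diagonal of the mask W:

import numpy as np

def rkpca_reconstruct(x, D, M, w, gamma, gamma2, C, n_iter=100, tol=1e-6):
    n = D.shape[1]
    z = x.copy()
    for _ in range(n_iter):
        # r_i = k(z, d_i); Q has q_ij = m_ij * r_i * r_j
        r = np.exp(-gamma * np.sum((D - z[:, None]) ** 2, axis=0))
        Q = M * np.outer(r, r)
        # diagonal of W_2 = exp(-gamma2 ||W(x - z)||^2) W^2
        w2 = np.exp(-gamma2 * np.sum((w * (x - z)) ** 2)) * w ** 2
        a = gamma2 / (2 * C * gamma)
        q1 = Q @ np.ones(n)
        W3 = a * w2 + q1.sum()          # diagonal of W_3
        u = a * w2 * x + D @ q1
        z_new = u / W3                  # z = W_3^{-1} u (W_3 is diagonal)
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z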
Consider the update rule given in Eq. 6: z = W_3^{−1} u. The diagonal matrix W_3^{−1} acts as a normalization factor of u. The vector u is a weighted combination of two terms, the training data D and x.
Furthermore, each element of x is weighted differently by W_2, which is proportional to W. In the
case of missing data (some entries in the diagonal of W, and therefore of W_2, will be zero), missing
components of x do not affect the computation of u and z. Entries corresponding to the missing
components of the resulting z will be pixel-weighted combinations of the training data. The contribution of x also depends on the ratio γ₂/γ, on C, and on the distance from the current z to x. Similar to
the observation of Mika et al. [13], the second term of the vector u pulls z towards a single Gaussian
cluster. The attraction force generated by a training data point d_i reflects the correlation between
φ(z) and φ(d_i), the correlation between φ(z) and the eigenvectors v_j, and the contributions of φ(d_i)
to the eigenvectors. The forces from the training data, together with the attraction force from x, draw
z towards a Gaussian cluster that is close to x.
One can derive a similar update rule for z if E_0 takes the form in Sec. 3.3:
z = [ γ₂/(2Cγ) W_4 + (1_n^T Q 1_n) I_n ]^{−1} ( γ₂/(2Cγ) W_4 x + D Q 1_n ), with
W_4 = exp(−γ₂ Σ_{i=1}^d ρ(x_i − z_i, σ)) W_5, where W_5 is a diagonal matrix whose i-th diagonal
entry is σ/((z_i − x_i)² + σ²). The parameter σ is updated at every iteration as follows:
σ = 1.4826 · median({|z_i − x_i|}_{i=1}^d) [5].
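The σ re-estimation is one line; a sketch:

import numpy as np

def update_sigma(z, x):
    """Robust scale estimate: 1.4826 * median absolute residual [5]."""
    return 1.4826 * np.median(np.abs(z - x))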
4 Experiments
4.1 RKPCA for intra-sample outliers
In this section, we compare RKPCA with three approaches for handling intra-sample outliers: (i)
Robust Linear PCA [6], (ii) Mika et al.'s KPCA reconstruction [13], and (iii) Kwok & Tsang's KPCA
reconstruction [10]. The experiments are done on the CMU Multi-PIE database [8].
The Multi-PIE database consists of facial images of 337 subjects taken under different illuminations,
expressions and poses, at four different sessions. We only make use of the directly-illuminated
frontal face images under five different expressions (smile, disgust, squint, surprise and scream),
see Fig. 4. Our dataset contains 1100 images, 700 are randomly selected for training, 100 are used
for validation, and the rest is reserved for testing. Note that no subject in the testing set appears
in the training set. Each face is manually labeled with 68 landmarks, as shown in Fig. 4a. A
shape-normalized face is generated for every face by warping it towards the mean shape using affine
transformation. Fig. 4b shows an example of such a shape-normalized face. The mean shape is used
as the face mask and the values inside the mask are vectorized.
To quantitatively compare different methods, we introduce synthetic occlusions of different sizes
(20, 30, and 40 pixels) into the test images. For each occlusion size and test image pair, we generate
[Figure 4: a) 68 landmarks, b) a shape-normalized face, c) synthetic occlusion.]

Figure 5: Results of several methods on the Multi-PIE database. This shows the means and standard deviations of the absolute differences between reconstructed images and the ground truths. The statistics are given for three types of face regions (whole face, occluded region, and non-occluded region), different occlusion sizes, and different energy settings. Our method consistently outperforms the other methods for all occlusion sizes and energy levels.

Energy 80%
Occ.Sz  Region Type    Base Line   Mika et al   Kwok&Tsang   Robust PCA   Ours
20      Whole face     14.0±5.5    13.5±3.3     14.1±3.4     10.8±2.4     8.1±2.3
20      Occ. Reg.      71.5±5.5    22.6±7.9     17.3±6.6     13.3±5.5     16.1±6.1
20      Non-occ Reg.   0.0±0.0     11.3±2.3     13.2±2.9     10.1±2.2     6.0±1.7
30      Whole face     27.7±10.2   17.5±4.8     16.6±4.6     12.2±3.2     10.9±4.2
30      Occ. Reg.      70.4±3.9    24.2±7.1     19.3±6.6     15.4±5.1     18.4±5.8
30      Non-occ Reg.   0.0±0.0     13.3±3.0     14.7±3.8     9.6±2.3      5.7±4.3
40      Whole face     40.2±12.7   20.9±5.9     18.8±5.8     16.4±7.1     14.3±6.3
40      Occ. Reg.      70.6±3.6    25.7±7.2     21.1±7.1     20.1±8.0     19.8±6.3
40      Non-occ Reg.   0.0±0.0     15.2±4.2     16.1±5.3     9.4±2.3      8.8±8.1

Energy 95%
Occ.Sz  Region Type    Base Line   Mika et al   Kwok&Tsang   Robust PCA   Ours
20      Whole face     14.2±5.3    12.6±3.1     13.8±3.2     9.1±2.3      7.0±2.1
20      Occ. Reg.      71.2±5.4    29.2±8.4     17.3±6.4     18.6±7.1     18.1±6.1
20      Non-occ Reg.   0.0±0.0     8.6±1.6      12.9±2.9     6.5±1.4      4.1±1.6
30      Whole face     26.8±9.5    17.4±4.4     16.2±4.1     13.4±5.0     10.2±3.7
30      Occ. Reg.      70.9±4.4    30.0±7.6     19.5±6.5     23.8±7.8     21.0±6.3
30      Non-occ Reg.   0.0±0.0     10.1±1.9     14.1±3.2     6.3±1.4      3.1±1.7
40      Whole face     40.0±11.9   22.0±5.9     18.9±6.0     22.7±11.7    14.3±5.8
40      Occ. Reg.      70.7±3.6    30.1±7.2     21.4±7.4     32.4±11.9    22.4±7.0
40      Non-occ Reg.   0.0±0.0     12.1±3.3     15.9±5.2     7.0±2.5      5.0±6.7
a square occlusion window of that size, drawing the pixel values randomly from 0 to 255. A synthetic testing image is then created by pasting the occlusion window at a random position. Fig. 4c
displays such an image with occlusion size of 20. For every synthetic testing image and each of
the four algorithms, we compute the mean (at pixel level) of the absolute differences between the
reconstructed image and the original test image without occlusion. We record these statistics for
occluded region, non-occluded region and the whole face. The average statistics together with standard deviations are then calculated over the set of all testing images. These results are displayed
in Fig. 5. We also experiment with several settings for the energy levels for PCA and KPCA. The
energy level essentially means the number of components of PCA/KPCA subspace. In the interest
of space, we only display results for two settings 80% and 95%. Base Line is the method that does
nothing; the reconstructed images are exactly the same as the input testing images. As can be seen
from Fig.5, our method consistently outperforms others for all energy levels and occlusion sizes
(using the whole-face statistics). Notably, the performance of our method with the best parameter
settings is also better than the performances of other methods with their best parameter settings.
The experimental results for Mika et al., Kwok & Tsang, and Robust PCA [6] are generated using our
own implementations. For Mika et al.'s and Kwok & Tsang's methods, we use Gaussian kernels with
γ = 10⁻⁷. For our method, we use E_0 defined in Sec. 3.3. The kernel is Gaussian with γ = 10⁻⁷,
γ₂ = 10⁻⁶, and C = 0.1. The parameters are tuned using validation data.
4.2 RKPCA for incomplete training data
To compare the ability to handle missing attributes in training data of our algorithm with other
methods, we perform some experiments on the well known Oil Flow dataset [4]. This dataset is
also used by Sanguinetti & Lawrence [16]. This dataset contains 3000 12-dimensional synthetically
generated data points modeling the flow of a mixture of oil, water and gas in a transporting pipe-line.
We test our algorithm with different amounts of missing data (from 5% to 50%) and repeat each
experiment 50 times. For each experiment, we randomly choose 100 data points and randomly
remove attribute values at a certain rate. We run Algorithm 1 to recover the values of the
missing attributes, with m = 25, k = 10, γ = 0.0375 (same as [16]), γ₂ = 0.0375, C = 10⁷. The
squared difference between the reconstructed data and the original ground-truth data is measured,
and the mean and standard deviation for 50 runs are calculated. Note that this experiment setting is
exactly the same as the setting by [16].
Table 1: Reconstruction errors for 5 different methods and 10 probabilities of missing values for the
Oil Flow dataset. Our method outperforms the other methods for all levels of missing data.

p(del)   0.05      0.10    0.15     0.20     0.25     0.30     0.35      0.40      0.45      0.50
mean     13±4      28±4    43±7     53±8     70±9     81±9     97±9      109±8     124±7     139±7
1-NN     5±3       14±5    30±10    60±20    90±20    NA       NA        NA        NA        NA
PPCA     3.7±.6    9±2     17±5     25±9     50±10    90±30    110±30    110±20    120±30    140±30
PKPCA    5±1       12±3    19±5     24±6     32±6     40±7     45±4      61±8      70±10     100±20
Ours     3.2±1.9   8±4     12±4     19±6     27±8     34±10    44±9      53±12     69±13     83±15
Experimental results are summarized in Tab. 1. The results of our method are shown in the last
column. The results of other methods are copied verbatim from [16]. The mean method is a widely
used heuristic where the missing value of an attribute of a data point is filled by the mean of known
values of the same attribute of other data points. The 1-NN method is another widely used heuristic
in which the missing values are replaced by the values of the nearest point, where the pairwise
distance is calculated using only the attributes with known values. PPCA is the probabilistic PCA
method [11], and PKPCA is the method proposed by [16]. As can be seen from Tab. 1, our method
outperforms other methods for all levels of missing data.
4.3 RKPCA for denoising
This section describes denoising experiments on the Multi-PIE database with Gaussian additive
noise. For a fair evaluation, we only compare our algorithm with Mika et al.'s, Kwok & Tsang's,
and Linear PCA. These are the methods that perform denoising based on subspaces and do not rely
explicitly on the statistics of natural images. Quantitative evaluations show that the denoising ability
of our algorithm is comparable with those of other methods.
Figure 6: Example of denoised images. a) original image, b) corrupted by Gaussian noise, c) denoised using PCA, d) using Mika et al., e) using the Kwok & Tsang method, f) result of our method.
The set of images used in these experiments is exactly the same as those in the occlusion experiments
described in Sec. 4.1. For every testing image, we synthetically corrupt it with Gaussian additive
noise with standard deviation 0.04. An example of a pair of clean and corrupted images is
shown in Fig. 6a and 6b. For every synthetic testing image, we compute the mean (at pixel level)
of the absolute difference between the denoised image and the ground-truth. The results of different
methods with different energy settings are summarized in Tab. 2. For these experiments, we use E_0
defined in Sec. 3.2 with W being the identity matrix. We use a Gaussian kernel with γ = γ₂ = 10⁻⁷
and C = 1. These parameters are tuned using validation data.
Table 2: Results of image denoising on the Multi-PIE database. Base Line is the method that does
nothing. The best energy setting for all methods is 100%. Our method is better than the others.

Energy   Base Line    Mika         Kwok&Tsang    PCA          Ours
80%      8.14±0.16    9.07±1.86    11.79±2.56    10.04±1.99   7.01±1.27
95%      8.14±0.16    6.37±1.30    11.55±2.52    6.70±1.20    5.70±0.96
100%     8.14±0.16    5.55±0.97    11.52±2.52    6.44±0.39    5.43±0.78
Tab. 2 and Fig. 6 show that the performance of our method is comparable with the others. In fact, the quantitative results show that our method is marginally better than Mika et al.'s method and substantially
better than the other two. In terms of visual appearance (Fig. 6c-f), the reconstructed image of our
method preserves much more fine detail than the others.
5 Conclusion
In this paper, we have proposed Robust Kernel PCA, a unified framework for handling noise, occlusion and missing data. Our method is based on a novel cost function for Kernel PCA reconstruction.
The cost function requires the reconstructed data point to be close to the original data point as well
as to the principal subspace. Notably, the distance function between the reconstructed data point
and the original data point can take various forms. This distance function needs not to depend on
the kernel function and can be evaluated easily. Therefore, the implicitness of the feature space is
avoided and optimization is possible. Extensive experiments, in two well known data sets, show that
our algorithm outperforms existing methods.
References
[1] Alzate, C. & Suykens, J.A. (2005) "Robust Kernel Principal Component Analysis using Huber's Loss Function." 24th Benelux Meeting on Systems and Control.
[2] Bakir, G.H., Weston, J. & Schölkopf, B. (2004) "Learning to Find Pre-Images." In Thrun, S., Saul, L. & Schölkopf, B. (Eds.) Advances in Neural Information Processing Systems.
[3] Berar, M., Desvignes, M., Bailly, G., Payan, Y. & Romaniuk, B. (2005) "Missing Data Estimation using Polynomial Kernels." Proceedings of the International Conference on Advances in Pattern Recognition.
[4] Bishop, C.M., Svensén, M. & Williams, C.K.I. (1998) "GTM: The Generative Topographic Mapping." Neural Computation, 10(1), 215-234.
[5] Black, M.J. & Anandan, P. (1996) "The Robust Estimation of Multiple Motions: Parametric and Piecewise-smooth Flow Fields." Computer Vision and Image Understanding, 63(1), 75-104.
[6] de la Torre, F. & Black, M.J. (2003) "A Framework for Robust Subspace Learning." International Journal of Computer Vision, 54(1-3), 117-142.
[7] Deng, X., Yuan, M. & Sudjianto, A. (2007) "A Note on Robust Principal Component Analysis." Contemporary Mathematics, 443, 21-33.
[8] Gross, R., Matthews, I., Cohn, J., Kanade, T. & Baker, S. (2007) "The CMU Multi-pose, Illumination, and Expression (Multi-PIE) Face Database." Technical report, Carnegie Mellon University, TR-07-08.
[9] Jolliffe, I. (2002) Principal Component Analysis. 2nd edn. Springer-Verlag, New York.
[10] Kwok, J.T.Y. & Tsang, I.W.H. (2004) "The Pre-Image Problem in Kernel Methods." IEEE Transactions on Neural Networks, 15(6), 1517-1525.
[11] Lawrence, N.D. (2004) "Gaussian Process Latent Variable Models for Visualization of High Dimensional Data." In Thrun, S., Saul, L. & Schölkopf, B. (Eds.) Advances in Neural Information Processing Systems.
[12] Lu, C., Zhang, T., Zhang, R. & Zhang, C. (2003) "Adaptive Robust Kernel PCA Algorithm." Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing.
[13] Mika, S., Schölkopf, B., Smola, A., Müller, K.R., Scholz, M. & Rätsch, G. (1999) "Kernel PCA and De-Noising in Feature Spaces." Advances in Neural Information Processing Systems.
[14] Romdhani, S., Gong, S. & Psarrou, A. (1999) "Multi-view Nonlinear Active Shape Model Using Kernel PCA." British Machine Vision Conference, 483-492.
[15] Roweis, S. (1998) "EM Algorithms for PCA and SPCA." In Jordan, M., Kearns, M. & Solla, S. (Eds.) Advances in Neural Information Processing Systems 10.
[16] Sanguinetti, G. & Lawrence, N.D. (2006) "Missing Data in Kernel PCA." Proceedings of the European Conference on Machine Learning.
[17] Schölkopf, B., Mika, S., Smola, A., Rätsch, G. & Müller, K.R. (1998) "Kernel PCA Pattern Reconstruction via Approximate Pre-Images." International Conference on Artificial Neural Networks.
[18] Schölkopf, B. & Smola, A. (2002) Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA.
[19] Schölkopf, B., Smola, A. & Müller, K. (1998) "Nonlinear Component Analysis as a Kernel Eigenvalue Problem." Neural Computation, 10, 1299-1319.
[20] Shawe-Taylor, J. & Cristianini, N. (2004) Kernel Methods for Pattern Analysis. Cambridge University Press.
[21] Tipping, M. & Bishop, C.M. (1999) "Probabilistic Principal Component Analysis." Journal of the Royal Statistical Society B, 61, 611-622.
[22] Wang, L., Pang, Y.W., Shen, D.Y. & Yu, N.H. (2007) "An Iterative Algorithm for Robust Kernel Principal Component Analysis." Conference on Machine Learning and Cybernetics.
The Recurrent Temporal Restricted Boltzmann
Machine
Ilya Sutskever, Geoffrey Hinton, and Graham Taylor
University of Toronto
{ilya, hinton, gwtaylor}@cs.utoronto.ca
Abstract
The Temporal Restricted Boltzmann Machine (TRBM) is a probabilistic model for
sequences that is able to successfully model (i.e., generate nice-looking samples
of) several very high dimensional sequences, such as motion capture data and the
pixels of low resolution videos of balls bouncing in a box. The major disadvantage of the TRBM is that exact inference is extremely hard, since even computing
a Gibbs update for a single variable of the posterior is exponentially expensive.
This difficulty has necessitated the use of a heuristic inference procedure, that
nonetheless was accurate enough for successful learning. In this paper we introduce the Recurrent TRBM, which is a very slight modification of the TRBM for
which exact inference is very easy and exact gradient learning is almost tractable.
We demonstrate that the RTRBM is better than an analogous TRBM at generating
motion capture and videos of bouncing balls.
1 Introduction
Modeling sequences is an important problem since there is a vast amount of natural data, such as
speech and videos, that is inherently sequential. A good model for these data sources could be useful
for finding an abstract representation that is helpful for solving ?natural? discrimination tasks (see
[4] for an example of this approach for the non-sequential case). In addition, it could be also used
for predicting the future of a sequence from its past, be used as a prior for denoising tasks, and be
used for other applications such as tracking objects in video. The Temporal Restricted Boltzmann
Machine [14, 13] is a recently introduced probabilistic model that has the ability to accurately model
complex probability distributions over high-dimensional sequences. It was shown to be able to
generate realistic motion capture data [14], and low resolution videos of 2 balls bouncing in a box
[13], as well as complete and denoise such sequences.
As a probabilistic model, the TRBM is a directed graphical model consisting of a sequence of Restricted Boltzmann Machines (RBMs) [3], where the state of one or more previous RBMs determines
the biases of the RBM in the next timestep. This probabilistic formulation straightforwardly implies a
learning procedure where approximate inference is followed by learning. The learning consists of
learning a conditional RBM at each timestep, which is easily done with Contrastive Divergence
(CD) [3]. Exact inference in TRBMs, on the other hand, is highly non-trivial, since computing even
a single Gibbs update requires computing the ratio of two RBM partition functions. The approximate inference procedure used in [13] was heuristic and was not even derived from a variational
principle.
In this paper we introduce the Recurrent TRBM (RTRBM), which is a model that is very similar
to the TRBM, and just as expressive. Despite the similarity, exact inference is very easy in the
RTRBM and computing the gradient of the log likelihood is feasible (up to the error introduced
by the use of Contrastive Divergence). We demonstrate that the RTRBM is able to generate more
realistic samples than an equivalent TRBM for the motion capture data and for the pixels of videos
of bouncing balls. The RTRBM?s performance is better than the TRBM mainly because it learns to
convey more information through its hidden-to-hidden connections.
2 Restricted Boltzmann Machines
The building block of the TRBM and the RTRBM is the Restricted Boltzmann Machine [3]. An
RBM defines a probability distribution over pairs of vectors, V ∈ {0, 1}^{N_V} and H ∈ {0, 1}^{N_H} (a
shorthand for visible and hidden) by the equation

    P(v, h) = P(V = v, H = h) = exp(v^⊤ b_V + h^⊤ b_H + v^⊤ W h)/Z   (1)

where b_V is a vector of biases for the visible vectors, b_H is a vector of biases for the hidden vectors,
and W is the matrix of connection weights. The quantity Z = Z(b_V, b_H, W) is the value of the
partition function that ensures that Eq. 1 is a valid probability distribution. The RBM's definition
implies that the conditional distributions P(H|v) and P(V|h) are factorial (i.e., all the components of H in P(H|v) are independent) and are given by P(H^(j) = 1|v) = s(b_H + W^⊤ v)^(j) and
P(V^(i) = 1|h) = s(b_V + W h)^(i), where s(x)^(j) = (1 + exp(−x^(j)))^{−1} is the logistic function
and x^(j) is the j-th component of the vector x. In general, we use i to index visible vectors V and j
to index hidden vectors H.¹ The RBM can be slightly modified to allow the vector V to take real
values; one way of achieving this is by the definition

    P(v, h) = exp(−||v||²/2 + v^⊤ b_V + h^⊤ b_H + v^⊤ W h)/Z.   (2)

Using this equation does not change the form of the gradients and the conditional distribution
P(H|v). The only change it introduces is in the conditional distribution P(V|h), which is equal
to a multivariate Gaussian with parameters N(b_V + W h, I). See [18, 14] for more details and
generalizations.
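These factorial conditionals translate directly into code; a minimal numpy sketch (our illustration, not the authors' implementation):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h_given_v(v, W, b_H, rng):
    p = sigmoid(b_H + W.T @ v)            # P(H^(j) = 1 | v)
    return (rng.random(p.shape) < p).astype(float), p

def sample_v_given_h(h, W, b_V, rng):
    p = sigmoid(b_V + W @ h)              # P(V^(i) = 1 | h), binary case
    return (rng.random(p.shape) < p).astype(float), p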
The gradient of the average log probability given a dataset S, L = 1/|S| Σ_{v∈S} log P(v), has the
following simple form:

    ∂L/∂W = ⟨V·H^⊤⟩_{P(H|V) P̃(V)} − ⟨V·H^⊤⟩_{P(H,V)}   (3)

where P̃(V) = 1/|S| Σ_{v∈S} δ_v(V) (here δ_x(X) is a distribution over real-valued vectors that is
concentrated at x), and ⟨f(X)⟩_{P(X)} is the expectation of f(X) under the distribution P. Computing
the exact values of the expectations ⟨·⟩_{P(H,V)} is computationally intractable, and much work has
been done on methods for computing approximate values for the expectations that are good enough
for practical learning and inference tasks (e.g., [16, 12, 19], including [15], which works well for
the RBM).
We will approximate the gradients with respect to the RBM?s parameters using the Contrastive
Divergence [3] learning procedure, CDn , whose updates are computed by the following algorithm.
Algorithm 1 (CD_n)
1. Sample (v, h) ∼ P(H|V) P̃(V)
2. Set ΔW to v·h^⊤
3. Repeat n times: sample v ∼ P(V|h), then sample h ∼ P(H|v)
4. Decrease ΔW by v·h^⊤
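A runnable sketch of the CD_n weight update, built on the samplers above (the learning rate is our own illustrative addition):

import numpy as np

def cd_n_update(v0, W, b_V, b_H, n, rng, lr=1e-3):
    h0, _ = sample_h_given_v(v0, W, b_H, rng)
    dW = np.outer(v0, h0)                 # positive phase
    v, h = v0, h0
    for _ in range(n):                    # n Gibbs steps
        v, _ = sample_v_given_h(h, W, b_V, rng)
        h, _ = sample_h_given_v(v, W, b_H, rng)
    dW -= np.outer(v, h)                  # negative phase
    return W + lr * dW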
Models learned by CD_1 are often reasonable generative models of the data [3], but if learning is
continued with CD_25, the resulting generative models are much better [11]. The RBM also plays a
critical role in deep belief networks [4], [5], but we do not use this connection in this paper.
3 The TRBM
It is easy to construct the TRBM with RBMs. The TRBM, as described in the introduction, is
a sequence of RBMs arranged in such a way that in any given timestep, the RBM?s biases depend only on the state of the RBM in the previous timestep. In its simplest form, the TRBM can
¹We use uppercase variables (as in P(H|v)) to denote distributions and lowercase variables (as in P(h|v))
to denote the (real-valued) probability P(H = h|v).
Figure 1: The graphical structure of a TRBM: a directed sequence of RBMs.
be viewed as a Hidden Markov Model (HMM) [9] with an exponentially large state space that
has an extremely compact parameterization of the transition and the emission probabilities. Let
X_{t_A}^{t_B} = (X_{t_A}, ..., X_{t_B}) denote a sequence of variables. The TRBM defines a probability distribution P(V_1^T = v_1^T, H_1^T = h_1^T) by the equation

    P(v_1^T, h_1^T) = ∏_{t=2}^T P(v_t, h_t | h_{t−1}) · P_0(v_1, h_1)   (4)
which is identical to the defining equation of the HMM. The conditional distribution P(V_t, H_t | h_{t−1}) is that of an RBM, whose biases for H_t are a function of h_{t−1}. Specifically,

P(v_t, h_t | h_{t−1}) = exp(v_t^T b_V + v_t^T W h_t + h_t^T (b_H + W′ h_{t−1})) / Z(h_{t−1})    (5)

where b_V, b_H and W are as in Eq. 1, while W′ is the weight matrix of the connections from H_{t−1} to H_t, making b_H + W′ h_{t−1} the bias of the RBM at time t. In this equation, V ∈ {0, 1}^{N_V} and H ∈ {0, 1}^{N_H}; it is easy to modify this definition to allow V to take real values as was done in Eq. 2. The RBM's partition function depends on h_{t−1}, because the parameters (i.e., the biases) of the RBM at time t depend on the value of the random variable H_{t−1}. Finally, the distribution P_0 is defined by an equation very similar to Eq. 5, except that the (undefined) term W′ h_0 is replaced by the term b_init, so the hidden units receive a special initial bias at P_0; we will often write P(V_1, H_1 | h_0) for P_0(V_1, H_1) and W′ h_0 for b_init. It follows from these equations that the TRBM is a directed graphical model that has an (undirected) RBM at each timestep (a related directed sequence of Boltzmann Machines has been considered in [7]).
As in most probabilistic models, the weight update is computed by solving the inference problem and computing the weight update as if the inferred variables were observed. Consider first the fully-visible case: if the hidden variables are observed, equation 4 implies that the gradient of the log likelihood with respect to the TRBM's parameters is ∑_{t=1}^T ∇ log P(v_t, h_t | h_{t−1}), and each term, being the gradient of the log likelihood of an RBM, can be approximated using CD_n. Thus the main computational difficulty of learning TRBMs is in obtaining samples from a distribution approximating the posterior P(H_1^T | v_1^T).
Inference in a TRBM

Unfortunately, the TRBM's inference problem is harder than that of a typical undirected graphical model, because even computing the probability P(H_t^{(j)} = 1 | everything else) involves evaluating the exact ratio of two RBM partition functions, which can be seen from Eq. 5. This difficulty necessitated the use of a heuristic inference procedure [13], which is based on the observation that the distribution P(H_t | h_1^{t−1}, v_1^t) = P(H_t | h_{t−1}, v_t) is factorial by definition. This inference procedure does not do any kind of smoothing from the future and only does approximate filtering from the past by sampling from the distribution ∏_{t=1}^T P(H_t | H_1^{t−1}, v_1^t) instead of the true posterior distribution ∏_{t=1}^T P(H_t | H_1^{t−1}, v_1^T), which is easy because P(H_t | h_1^{t−1}, v_1^t) is factorial.^2
4 Recurrent TRBMs

Let us start with notation. Consider an arbitrary factorial distribution P′(H). The statement h ∼ P′(H) means that h is sampled from the factorial distribution P′(H), so each h^{(j)} is set to 1 with probability P′(H^{(j)} = 1), and to 0 otherwise. In contrast, the statement h ← P′(H) means that each h^{(j)} is set to the real value P′(H^{(j)} = 1), so this is a "mean-field" update [8, 17]. The symbol P stands for the distribution of some TRBM, while the symbol Q stands for the distribution defined by an RTRBM. Note that the outcome of the operation · ← P(H_t | v_t, h_{t−1}) is s(W^T v_t + W′ h_{t−1} + b_H).

^2 This is a slightly simplified description of the inference procedure in [13].

Figure 2: The graphical structure of the RTRBM, Q. The variables H_t are real valued while the variables H′_t are binary. The conditional distribution Q(V_t, H′_t | h_{t−1}) is given by the equation Q(v_t, h′_t | h_{t−1}) = exp(v_t^T W h′_t + v_t^T b_V + h′_t^T (b_H + W′ h_{t−1})) / Z(h_{t−1}), which is essentially the same as the TRBM's conditional distribution P from equation 5. We will always integrate out H′_t and will work directly with the distribution Q(V_t | h_{t−1}). Notice that when V_1 is observed, H′_1 cannot affect H_1.
An RTRBM, Q(V_1^T, H_1^T), is defined by the equation

Q(v_1^T, h_1^T) = [∏_{t=2}^T Q(v_t | h_{t−1}) Q(h_t | v_t, h_{t−1})] · Q_0(v_1) Q_0(h_1 | v_1)    (6)

The terms appearing in this equation will be defined shortly.

Let us contrast the generative process of the two models. To sample from a TRBM P, we need to perform a directed pass, sampling from each RBM on every timestep. One way of doing this is described by the following algorithm.
Algorithm 2 (for sampling from the TRBM):
for 1 ≤ t ≤ T:
1. sample v_t ∼ P(V_t | h_{t−1})
2. sample h_t ∼ P(H_t | v_t, h_{t−1})^3

where step 1 requires sampling from the marginals of a Boltzmann Machine (by integrating out H_t), which involves running a Markov chain.
By definition, RTRBMs and TRBMs are parameterized in the same way, so from now on we will assume that P and Q have identical parameters, which are W, W′, b_V, b_H, and b_init. The following algorithm samples from the RTRBM Q under this assumption.

Algorithm 3 (for sampling from the RTRBM)
for 1 ≤ t ≤ T:
1. sample v_t ∼ P(V_t | h_{t−1})
2. set h_t ← P(H_t | v_t, h_{t−1})

We can infer that Q(V_t | h_{t−1}) = P(V_t | h_{t−1}) because of step 1 in Algorithm 3, which is also consistent with the equation given in Figure 2 where H′_t is integrated out. The only difference between Algorithm 2 and Algorithm 3 is in step 2. The difference may seem small, since the operations h_t ∼ P(H_t | v_t, h_{t−1}) and h_t ← P(H_t | v_t, h_{t−1}) appear similar. However, this difference significantly alters the inference and learning procedures of the RTRBM; in particular, it can already be seen that the h_t are real-valued for the RTRBM.
^3 When t = 1, P(H_t | v_t, h_{t−1}) stands for P_0(H_1 | v_1), and similarly for other conditional distributions. The same convention is used in all algorithms.
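The contrast between Algorithms 2 and 3 is a one-line change in code. The sketch below assumes a hypothetical helper sample_rbm_visible(W, bV, bH_t, rng) that implements step 1 (running the Markov chain needed to sample from the RBM's visible marginal, which both models share); the rest follows the equations above, with W taken as N_V × N_H.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate(T, W, Wp, bV, bH, b_init, sample_rbm_visible, rng, mean_field):
    """Sample v_1..v_T from a TRBM (mean_field=False, Algorithm 2)
    or from an RTRBM (mean_field=True, Algorithm 3). Wp plays the role of W'."""
    vs, prev = [], b_init            # W' h_0 is replaced by b_init at t = 1
    for t in range(T):
        bH_t = bH + prev             # dynamic hidden bias b_H + W' h_{t-1}
        v = sample_rbm_visible(W, bV, bH_t, rng)   # step 1 (shared)
        p = sigmoid(W.T @ v + bH_t)  # P(H_t = 1 | v_t, h_{t-1})
        # Step 2: the only difference between the two algorithms.
        h = p if mean_field else (rng.random(p.shape) < p).astype(float)
        prev = Wp @ h                # becomes W' h_{t-1} on the next step
        vs.append(v)
    return vs
```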
4.1 Inference in RTRBMs
Inference in RTRBMs given v_1^T is very easy, which might be surprising in light of its similarity to the TRBM. The reason inference is easy is similar to the reason inference in square ICAs is easy [1]: there is a unique and easily computable value of the hidden variables that has a nonzero posterior probability. Suppose, for example, that the value of V_1 is v_1, which means that v_1 was produced at the end of step 1 in Algorithm 3. Since step 2, the deterministic operation h_1 ← P_0(H_1 | v_1), has been executed, the only value h_1 can take is the value assigned by the operation · ← P_0(H_1 | v_1). Any other value for h_1 is never produced by a generative process that outputs v_1 and thus has posterior probability 0. In addition, by executing this operation, we can recover h_1. Thus, Q_0(H_1 | v_1) = δ_{s(W^T v_1 + b_H + b_init)}(H_1). Note that H_1's value is completely independent of v_2^T.
Once h_1 is known, we can consider the generative process that produced v_2. As before, since v_2 was produced at the end of step 1, the fact that step 2 has been executed implies that h_2 can be computed by h_2 ← P(H_2 | v_2, h_1) (recall that at this point h_1 is known with absolute certainty). If the same reasoning is repeated t times, then all of h_1^t is uniquely determined and is easily computed when v_1^t is known. There is no need for smoothing because V_t and H_{t−1} influence H_t with such strength that the knowledge of V_{t+1}^T cannot alter the model's belief about H_t. This is because Q(H_t | v_t, h_{t−1}) = δ_{s(W^T v_t + b_H + W′ h_{t−1})}(H_t).
The resulting inference algorithm is simple:

Algorithm 4 (inference in RTRBMs)
for 1 ≤ t ≤ T:
1. h_t ← P(H_t | v_t, h_{t−1})

Let h(v)_1^T denote the output of the inference algorithm on input v_1^T, in which case the posterior is described by

Q(H_1^T | v_1^T) = δ_{h(v)_1^T}(H_1^T).    (7)
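In code, Algorithm 4 is a deterministic forward recursion; a minimal sketch in our notation (with W taken as N_V × N_H, so W^T v is the hidden-side input):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rtrbm_infer(vs, W, Wp, bH, b_init):
    """Return h(v)_1..h(v)_T for an observed sequence vs = [v_1, ..., v_T]."""
    hs, prev = [], b_init
    for v in vs:
        h = sigmoid(W.T @ v + bH + prev)  # h_t <- P(H_t | v_t, h_{t-1})
        prev = Wp @ h                     # W' h_{t-1} for the next step
        hs.append(h)
    return hs
```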
4.2 Learning in RTRBMs
Learning in RTRBMs may seem easy once inference is solved, since the main difficulty in learning TRBMs is the inference problem. However, the RTRBM does not allow EM-like learning because the equation ∇ log Q(v_1^T) = ⟨∇ log Q(v_1^T, h_1^T)⟩_{h_1^T ∼ Q(H_1^T | v_1^T)} is not meaningful. To be precise, the gradient ∇ log Q(v_1^T, h_1^T) is undefined because δ_{s(W′ h_{t−1} + b_H + W^T v_t)}(h_t) is not, in general, a continuous function of W. Thus, the gradient has to be computed differently.

Notice that the RTRBM's log probability satisfies log Q(v_1^T) = ∑_{t=1}^T log Q(v_t | v_1^{t−1}), so we could try computing the sum ∇ ∑_{t=1}^T log Q(v_t | v_1^{t−1}). The key observation that makes the computation feasible is the equation

Q(V_t | v_1^{t−1}) = Q(V_t | h(v)_{t−1})    (8)
where h(v)_{t−1} is the value computed by the RTRBM inference algorithm on inputs v_1^{t−1}. This equation holds because Q(v_t | v_1^{t−1}) = ∫ Q(v_t | h′_{t−1}) Q(h′_{t−1} | v_1^{t−1}) dh′_{t−1} = Q(v_t | h(v)_{t−1}), as the posterior distribution Q(H_{t−1} | v_1^{t−1}) = δ_{h(v)_{t−1}}(H_{t−1}) is a point-mass at h(v)_{t−1}, which follows from Eq. 7.
The equality Q(V_t | v_1^{t−1}) = Q(V_t | h(v)_{t−1}) allows us to define a recurrent neural network (RNN) [10] whose parameters are identical to those of the RTRBM, and whose cost function is equal to the log likelihood of the RTRBM. This is useful because it is easy to compute gradients with respect to the RNN's parameters using the backpropagation through time algorithm [10]. The RNN has a pair of variables at each timestep, {(v_t, r_t)}_{t=1}^T, where v_t are the input variables and r_t are the RNN's hidden variables (all of which are deterministic). The hidden states r_1^T are computed by the equation

r_t = s(W^T v_t + b_H + W′ r_{t−1})    (9)

where W′ r_{t−1} is replaced with b_init when t = 1. This definition was chosen so that the equality r_1^T = h(v)_1^T would hold. The RNN attempts to probabilistically predict the next timestep from its history using the marginal distribution of the RBM Q(V_{t+1} | r_t), so its objective function at time t is defined to be log Q(v_{t+1} | r_t), where Q depends on the RNN's parameters in the same way it depends on the RTRBM's parameters (the two sets of parameters being identical). This is a valid definition of an RNN whose cumulative objective for the sequence v_1^T is
O = ∑_{t=1}^T log Q(v_t | r_{t−1})    (10)
where Q(v_1 | r_0) = Q_0(v_1). But since r_t as computed in equation 9 on input v_1^T is identical to h(v)_t, the equality log Q(v_t | r_{t−1}) = log Q(v_t | v_1^{t−1}) holds. Substituting this identity into Eq. 10 yields

O = ∑_{t=1}^T log Q(v_t | r_{t−1}) = ∑_{t=1}^T log Q(v_t | v_1^{t−1}) = log Q(v_1^T)    (11)
which is the log probability of the corresponding RTRBM. This means that ∇O = ∇ log Q(v_1^T) can be computed with the backpropagation through time algorithm [10], where the contribution of the gradient from each timestep is computed with Contrastive Divergence.
4.3 Details of the backpropagation through time algorithm
The backpropagation through time algorithm is identical to the usual backpropagation algorithm where the feedforward neural network is turned "on its side". Specifically, the algorithm maintains a term ∂O/∂r_t which is computed from ∂O/∂r_{t+1} and ∂ log Q(v_{t+1} | r_t)/∂r_t using the chain rule, by the equation

∂O/∂r_t = W′^T (r_{t+1}.(1 − r_{t+1}).∂O/∂r_{t+1}) + W′^T ∂ log Q(v_{t+1} | r_t)/∂b_H    (12)
where a.b denotes component-wise multiplication, the term r_t.(1 − r_t) arises from the derivative of the logistic function, s′(x) = s(x).(1 − s(x)), and ∂ log Q(v_{t+1} | r_t)/∂b_H is computed by CD. Once ∂O/∂r_t is computed for all t, the gradients of the parameters can be computed using the following equations:

∂O/∂W′ = ∑_{t=2}^{T} r_{t−1} (r_t.(1 − r_t).∂O/∂r_t)^T    (13)

∂O/∂W = ∑_{t=1}^{T−1} v_t (W′^T (r_{t+1}.(1 − r_{t+1}).∂O/∂r_{t+1}))^T + ∑_{t=1}^{T} ∂ log Q(v_t | r_{t−1})/∂W    (14)
The first summation in Eq. 14 arises from the use of W as weights for inference in computing r_t, and the second summation arises from the use of W as RBM parameters for computing log Q(v_t | r_{t−1}). Each term of the form ∂ log Q(v_{t+1} | r_t)/∂W is also computed with CD. Computing ∂O/∂r_t is done most conveniently with a single backward pass through the sequence. As before, Q(v_1 | r_0) stands for Q_0(v_1). It is also seen that the gradient would be computed exactly if CD were to return the exact gradient of the RBM's log probability.
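To make the recursion concrete, the following sketch implements the backward pass of Eqs. 12–14 as we have reconstructed them. It assumes hypothetical helpers cd_grad_bH(v_next, r) and cd_grad_W(v, r_prev) that return the CD estimates of ∂ log Q(v_{t+1}|r_t)/∂b_H and ∂ log Q(v_t|r_{t−1})/∂W (with r_prev = None standing for the Q_0 case); all names are ours.

```python
import numpy as np

def rtrbm_backward(vs, rs, W, Wp, cd_grad_bH, cd_grad_W):
    """BPTT for the RTRBM (Eqs. 12-14). vs, rs: lists of v_1..v_T, r_1..r_T."""
    T = len(vs)
    dW, dWp = np.zeros_like(W), np.zeros_like(Wp)
    dO_dr = np.zeros_like(rs[-1])        # dO/dr_T = 0: r_T is never used
    for t in range(T - 2, -1, -1):       # 0-based index; timesteps T-1 .. 1
        back = rs[t + 1] * (1 - rs[t + 1]) * dO_dr   # r.(1-r).dO/dr, elementwise
        dWp += np.outer(rs[t], back)                 # term of Eq. 13
        dW += np.outer(vs[t], Wp.T @ back)           # first sum of Eq. 14
        # Eq. 12: recurrent path plus the direct RBM term, both through W'.
        dO_dr = Wp.T @ back + Wp.T @ cd_grad_bH(vs[t + 1], rs[t])
    for t in range(T):                   # second sum of Eq. 14
        dW += cd_grad_W(vs[t], rs[t - 1] if t > 0 else None)
    return dW, dWp
```

The gradients of b_V, b_H, and b_init follow the same pattern and are omitted for brevity.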
5 Experiments
We report the results of experiments comparing an RTRBM to a TRBM. The results in [14, 13] were obtained using TRBMs that had several delay-taps, which means that each hidden unit could directly observe several previous timesteps. To demonstrate that the RTRBM learns to use the hidden units to store information, we did not use delay-taps for the RTRBM nor the TRBM, which causes the results to be worse (but not much) than in [14, 13]. If delay-taps are allowed, then the results of [14, 13] show that there is little benefit from the hidden-to-hidden connections (which are W′), making the comparison between the RTRBM and the TRBM uninteresting.

In all experiments, the RTRBM and the TRBM had the same number of hidden units, their parameters were initialized in the same manner, and they were trained for the same number of weight updates. When sampling from the TRBM, we would use the sampling procedure of the RTRBM with the TRBM's parameters to eliminate the additional noise from its hidden units. If this is not done, the samples produced by the TRBM are significantly worse. Unfortunately, the evaluation metric is entirely qualitative since computing the log probability on a test set is infeasible for both the TRBM and the RTRBM. We provide the code for our experiments in [URL].
Figure 3: This figure shows the receptive fields of the first 36 hidden units of the RTRBM on the
left, and the corresponding hidden-to-hidden weights between these units on the right: the ith row on
the right corresponds to the ith receptive field on the left, when counted left-to-right. Hidden units
18 and 19 exhibit unusually strong hidden-to-hidden connections; they are also the ones with the
weakest visible-hidden connections, which effectively makes them belong to another hidden layer.
5.1 Videos of bouncing balls
We used a dataset consisting of videos of 3 balls bouncing in a box. The videos are of length 100 and of resolution 30×30. Each training example is synthetically generated, so no training sequence is seen twice by the model, which means that overfitting is highly unlikely. The task is to learn to generate videos at the pixel level. This problem is high-dimensional, having 900 dimensions per frame, and the RTRBM and the TRBM are given no prior knowledge about the nature of the task (e.g., by convolutional weight matrices).
Both the RTRBM and the TRBM had 400 hidden units. Samples from these models are provided as videos 1, 2 (RTRBM) and videos 3, 4 (TRBM). A sample training sequence is given in video 5. All the samples can be found in [URL]. The real values in the videos are the conditional probabilities of the pixels [13]. The RTRBM's samples are noticeably better than the TRBM's samples; a key difference between these samples is that the balls produced by the TRBM moved in a random walk, while those produced by the RTRBM moved in a more persistent direction. An examination of the visible-to-hidden connection weights of the RTRBM reveals a number of hidden units that are not connected to visible units. These units have the most active hidden-to-hidden connections, which must be used to propagate information through time. In particular, these units are the only units that do not have a strong self connection (i.e., W′_{i,i} is not large; see Figure 3). No such separation of units is found in the TRBM, and all its hidden units have large visible-to-hidden connections.
5.2 Motion capture data
We used a dataset that represents human motion capture data by sequences of joint angles, translations, and rotations of the base of the spine [14]. The total number of frames in the dataset was 3000, from which the model learned on subsequences of length 50. Each frame has 49 dimensions, and both models have 200 hidden units. The data is real-valued, so the TRBM and the RTRBM were adapted to have Gaussian visible variables using equation 2. The samples produced by the RTRBM exhibit less sticking and foot-skate than those produced by the TRBM; samples from these models are provided as videos 6, 7 (RTRBM) and videos 8, 9 (TRBM); video 10 is a sample training sequence. Part of the Gaussian noise was removed in a manner described in [14] in both models.
5.3 Details of the learning procedures
Each problem was trained for 100,000 weight updates, with a momentum of 0.9, where the gradient was normalized by the length of the sequence for each gradient computation. The weights are updated after computing the gradient on a single sequence. The learning starts with CD_10 for the first 1000 weight updates, which is then switched to CD_25. The visible-to-hidden weights, W, were initialized with static CD_5 (without using the (R)TRBM learning rules) on 30 sequences (which resulted in 30 weight updates) with a learning rate of 0.01 and momentum 0.9. These weights were then given to the (R)TRBM learning procedure, where the learning rate was linearly reduced towards 0. The weights W′ and the biases were initialized with a sample from a spherical Gaussian of standard deviation 0.005. For the bouncing balls problem the initial learning rate was 0.01, and for the motion capture data it was 0.005.
6 Conclusions
In this paper we introduced the RTRBM, which is a probabilistic model as powerful as the intractable TRBM but that has an exact inference and an almost exact learning procedure. The main disadvantage of the RTRBM is that it is a recurrent neural network, a type of model known to have difficulties learning to use its hidden units to their full potential [2]. However, this disadvantage is common to many other probabilistic models, and it can be partially alleviated using techniques such as the long short-term memory RNN [6].
Acknowledgments
This research was partially supported by the Ontario Graduate Scholarship and by the Natural Sciences and Engineering Research Council of Canada. The mocap data used in this project was obtained from http://people.csail.mit.edu/ehsu/work/sig05stf/. For Matlab playback of motion and generation of videos, we have adapted portions of Neil Lawrence's motion capture toolbox (http://www.dcs.shef.ac.uk/~neil/mocap/).
References
[1] A.J. Bell and T.J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7(6):1129–1159, 1995.
[2] Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
[3] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
[4] G.E. Hinton, S. Osindero, and Y.W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[5] G.E. Hinton and R.R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[6] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[7] S. Osindero and G. Hinton. Modeling image patches with a directed hierarchy of Markov random fields. Advances in Neural Information Processing Systems, 2008.
[8] C. Peterson and J.R. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1(5):995–1019, 1987.
[9] L.R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989.
[10] D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
[11] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In Proceedings of the International Conference on Machine Learning, volume 25, 2008.
[12] D. Sontag and T. Jaakkola. New outer bounds on the marginal polytope. Advances in Neural Information Processing Systems, 2008.
[13] I. Sutskever and G.E. Hinton. Learning multilevel distributed representations for high-dimensional sequences. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, pages 544–551, 2007.
[14] G.W. Taylor, G.E. Hinton, and S. Roweis. Modeling human motion using binary latent variables. Advances in Neural Information Processing Systems, 19:1345–1352, 2007.
[15] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In Proceedings of the International Conference on Machine Learning, volume 25, 2008.
[16] M.J. Wainwright, T.S. Jaakkola, and A.S. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313–2335, 2005.
[17] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference. UC Berkeley, Dept. of Statistics, Technical Report 649, 2003.
[18] M. Welling, M. Rosen-Zvi, and G. Hinton. Exponential family harmoniums with an application to information retrieval. Advances in Neural Information Processing Systems, 17:1481–1488, 2005.
[19] J.S. Yedidia, W.T. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. Exploring Artificial Intelligence in the New Millennium, pages 239–269, 2003.
2,832 | 3,568 | Learning Bounded Treewidth Bayesian Networks
Gal Elidan
Department of Statistics
Hebrew University
Jerusalem, 91905, Israel
[email protected]
Stephen Gould
Department of Electrical Engineering
Stanford University
Stanford, CA 94305, USA
[email protected]
Abstract
With the increased availability of data for complex domains, it is desirable to
learn Bayesian network structures that are sufficiently expressive for generalization while also allowing for tractable inference. While the method of thin junction
trees can, in principle, be used for this purpose, its fully greedy nature makes it
prone to overfitting, particularly when data is scarce. In this work we present a
novel method for learning Bayesian networks of bounded treewidth that employs
global structure modifications and that is polynomial in the size of the graph and
the treewidth bound. At the heart of our method is a triangulated graph that we
dynamically update in a way that facilitates the addition of chain structures that
increase the bound on the model's treewidth by at most one. We demonstrate the effectiveness of our "treewidth-friendly" method on several real-life datasets. Importantly, we also show that by using global operators, we are able to achieve better generalization even when learning Bayesian networks of unbounded treewidth.
1 Introduction
Recent years have seen a surge of readily available data for complex and varied domains. Accordingly, increased attention has been directed towards the automatic learning of complex probabilistic
graphical models [22], and in particular learning the structure of a Bayesian network. With the goal
of making predictions or providing probabilistic explanations, it is desirable to learn models that
generalize well and at the same time have low inference complexity or a small treewidth [23].
While learning optimal tree-structured models is easy [5], learning the optimal structure of general
and even quite simple (e.g., poly-trees, chains) Bayesian networks is computationally difficult [8,
10, 19]. Several works attempt to generalize the tree-structure result of Chow and Liu [5], either
by making assumptions about the true distribution (e.g., [1, 21]), by searching for a local maxima
over tree mixtures [20], or by approximate methods that are polynomial in the size of the graph but
exponential in the treewidth bound (e.g., [3, 15]). In the context of general Bayesian networks, the
thin junction tree approach of Bach and Jordan [2] is a local greedy search procedure that relies
at each step on tree-decomposition heuristic techniques for computing an upper bound the true
treewidth of the model. Like any local search approach, this method does not provide performance
guarantees but is appealing in its ability to efficiently learn models with an arbitrary treewidth bound.
The thin junction tree method, however, suffers from two important limitations. First, while useful
on average, even the best of the tree-decomposition heuristics exhibit some variance in the treewidth
estimate [16]. As a result, a single edge addition can lead to a jump in the treewidth estimate despite
the fact that it can increase the true treewidth by at most one. More importantly, structure learning
scores (e.g., BIC, BDe) tend to learn spurious edges that result in overfitting when the number of
samples is relatively small, a phenomenon that is made worse by a fully greedy approach. Intuitively, to generalize well, we want to learn bounded treewidth Bayesian networks where structure
modifications are globally beneficial (i.e., contribute to the score in many regions of the network).
In this work we propose a novel method for efficiently learning Bayesian networks of bounded
treewidth that addresses these concerns. At the heart of our method is a dynamic update of the
triangulation of the model in a way that is treewidth-friendly: the treewidth of the triangulated graph (an upper bound on the model's true treewidth) is guaranteed to increase by at most one when an
edge is added to the network. Building on the single edge triangulation, we characterize sets of edges
that are jointly treewidth-friendly. We use this characterization in a dynamic programming approach
for learning the optimal treewidth-friendly chain with respect to a node ordering. Finally, we learn
a bounded treewidth Bayesian network by iteratively augmenting the model with such chains.
Instead of using local edge modifications, our method progresses by incrementally adding chain
structures that are globally beneficial, improving our ability to generalize. We are also able to
guarantee that the bound on the model?s treewidth grows by at most one at each iteration. Thus, our
method resembles the global nature of Chow and Liu [5] more closely than the thin junction tree
approach of Bach and Jordan [2], while being applicable in practice to any desired treewidth.
We evaluate our method on several challenging real-life datasets and show that our method is able
to learn richer models that generalize better than the thin junction tree approach as well as an unbounded aggressive search strategy. Furthermore, we show that even when learning models with
unbounded treewidth, by using global structure modification operators, we are better able to cope
with the problem of local maxima and learn better models.
2 Background: Bayesian networks and tree decompositions
A Bayesian network [22] is a pair (G, θ) that encodes a joint probability distribution over a finite set X = {X_1, ..., X_n} of random variables. G is a directed acyclic graph whose nodes correspond to the variables in X. The parameters θ_{X_i|Pa_i} encode local conditional probability distributions (CPDs) for each node X_i given its parents in G. Together, these define a unique joint probability distribution over X given by

P(X_1, ..., X_n) = ∏_{i=1}^n P(X_i | Pa_i).
Given a structure G and a complete training set D, estimating the (regularized) maximum likelihood
(ML) parameters is easy for many choices of CPDs (see [14] for details). Learning the structure of
a network, however, is generally NP-hard [4, 10, 19] as the number of possible structures is superexponential in the number of variables. In practice, structure learning relies on a greedy search
procedure that examines easy to evaluate local structure changes (add, delete or reverse an edge).
This search is usually guided by a decomposable score that balances the likelihood of the data and
the complexity of the model (e.g., BIC [24], Bayesian score [14]). Chow and Liu [5] showed that
the ML tree can be learned efficiently. Their result is easily generalized to any decomposable score.
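To make this concrete, the Chow–Liu construction reduces to a maximum-weight spanning tree over empirical pairwise mutual information. A minimal sketch using networkx and scikit-learn follows; the function name and data layout are our own choices.

```python
import networkx as nx
from sklearn.metrics import mutual_info_score

def chow_liu_tree(data):
    """Maximum-weight spanning tree over empirical pairwise mutual information.

    data: dict mapping variable name -> 1-D array of discrete observations.
    Rooting the returned tree and directing edges away from the root gives
    the optimal tree-structured Bayesian network.
    """
    names = list(data)
    G = nx.Graph()
    for i, u in enumerate(names):
        for v in names[i + 1:]:
            G.add_edge(u, v, weight=mutual_info_score(data[u], data[v]))
    return nx.maximum_spanning_tree(G)
```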
Given a model, we are interested in the task of inference, or evaluating queries of the form P(Y | Z) where Y and Z are arbitrary subsets of X. This task is, in general, NP-hard [7], except when G is tree-structured. The actual complexity of inference in a Bayesian network is proportional to its treewidth [23] which, roughly speaking, measures how closely the network resembles a tree. The notions of tree-decompositions and treewidth were introduced by Robertson and Seymour [23]:^1
Definition 2.1: A tree-decomposition of an undirected graph H = (V, E) is a pair ({C_i}_{i∈T}, T), where T is a tree and {C_i} is a collection of subsets of V such that ∪_{i∈T} C_i = V and where

• for all edges (v, w) ∈ E there exists an i ∈ T with v ∈ C_i and w ∈ C_i;
• for all i, j, k ∈ T: if j is on the (unique) path from i to k in T, then C_i ∩ C_k ⊆ C_j.

The treewidth of a tree-decomposition is defined to be max_{i∈T} |C_i| − 1. The treewidth TW(H) of
an undirected graph H is the minimum treewidth over all possible tree-decompositions of H. An
equivalent notion of treewidth can be phrased in terms of a graph that is a triangulation of H.
Definition 2.2: An induced path P in an undirected graph H is a path such that for every pair of nonadjacent vertices p_i, p_j ∈ P there is no edge (p_i–p_j) ∈ H. A triangulated (chordal) graph is an undirected graph with no induced cycles. Equivalently, it is an undirected graph in which every cycle of length four or more contains a chord.
It can be easily shown that the treewidth of a triangulated graph is the size of the maximal clique of
the graph minus one [23]. The treewidth of an undirected graph H is then the minimum treewidth
of all triangulations of H. For the underlying directed acyclic graph of a Bayesian network, the
treewidth can be characterized via a triangulation of the moralized graph.
Definition 2.3: A moralized graph M of a directed acyclic graph G is an undirected graph that has an edge (i–j) for every (i → j) ∈ G and an edge (p–q) for every pair (p → i), (q → i) ∈ G.
^1 The tree-decomposition properties are equivalent to the corresponding family-preserving and running-intersection properties of clique trees introduced by Lauritzen and Spiegelhalter [17] at around the same time.
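Both constructions above are easy to operationalize. The sketch below moralizes a DAG and, for a graph that is already triangulated, recovers the treewidth as the maximal clique size minus one; this uses networkx and is our illustration, not the authors' code.

```python
import networkx as nx

def moralize(dag):
    """Moral graph of a DAG: undirected skeleton plus 'marrying' co-parents."""
    M = nx.Graph(dag.to_undirected())
    for node in dag:
        parents = list(dag.predecessors(node))
        for i, p in enumerate(parents):
            for q in parents[i + 1:]:
                M.add_edge(p, q)
    return M

def triangulated_treewidth(M):
    """Treewidth of an already-triangulated graph: max clique size minus one.
    networkx raises an error if M is not chordal."""
    return nx.chordal_graph_treewidth(M)
```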
Input: dataset D, treewidth bound K
Output: a network with treewidth ≤ K
G ← best scoring tree
M+ ← undirected skeleton of G
k ← 1
While k < K
    O ← node ordering given G and M+
    C ← best chain with respect to O
    G ← G ∪ C
    Foreach (i → j) ∈ C do
        M+ ← EdgeUpdate(M+, (i → j))
    k ← maximal clique size of M+
Greedily add edges while treewidth ≤ K
Return G
Figure 1: (left) Outline of our algorithm for learning Bayesian networks of bounded treewidth. (right) An example of the different steps of our triangulation procedure (b)–(e) when (s → t) is added to the graph in (a). The blocks are {s, v_1}, {v_1, c_M}, and {c_M, v_2, v_3, p_1, p_2, t} with corresponding cut-vertices v_1 and c_M. The augmented graph (e) has a treewidth of three (maximal clique of size four). An alternative triangulation (f), connecting c_M to t, would result in a maximal clique of size five.
The treewidth of a Bayesian network graph G is defined as the treewidth of its moralized graph M.
It follows that the maximal clique of any moralized triangulation of G is an upper bound on the
treewidth of the model, and thus its inference complexity.
3 Learning Bounded Treewidth Bayesian Networks
In this section we outline our approach for learning Bayesian networks given an arbitrary treewidth
bound that is polynomial in both the number of variables and the desired treewidth. We rely on
global structure modifications that are optimal with respect to a node ordering.
At the heart of our method is the idea of using a dynamically maintained triangulated graph to upper
bound the treewidth of the current model. When an edge is added to the Bayesian network we update
this triangulated graph in a way that is not only guaranteed to produce a valid triangulation, but that
is also treewidth-friendly. That is, our update is guaranteed to increase the size of the maximal clique
of the triangulated graph, and hence the treewidth bound, by at most one. An important property of
our edge update is that we can characterize the parts of the network that are ?contaminated? by the
new edge. This allows us to define sets of edges that are jointly treewidth-friendly. Building on the
characterization of these sets, we propose a dynamic programming approach for efficiently learning
the optimal treewidth-friendly chain with respect to a node ordering.
Figure 1 shows pseudo-code for our method. Briefly, we learn a Bayesian network with bounded
treewidth K by starting from a Chow-Liu tree and iteratively augmenting the current structure with
an optimal treewidth-friendly chain. During each iteration (below the treewidth bound) we apply
our treewidth-friendly edge update procedure that maintains a moralized and triangulated graph for
the model at hand. Appealingly, as each global modification can increase the treewidth by at most
one, at least K such chains will be added before we face the problem of local maxima. In practice,
as some chains do not increase the treewidth, many more such chains are added for a given K.
Theorem 3.1: Given a treewidth bound K and dataset over N variables, the algorithm outlined in
Figure 1 runs in time polynomial in N and K.
This result relies on the efficiency of each step of the algorithm and on the fact that there can be at most N · K iterations (≤ |edges|) before exceeding the treewidth bound. In the next sections we develop the edge update and best scoring chain procedures and show that both are polynomial in N and K.
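The loop of Figure 1 can be written down directly. In the sketch below every subroutine is passed in via ops and corresponds to a procedure developed in Sections 4–5; the attribute names are ours and stand in for those procedures, so this is a skeleton, not a complete implementation.

```python
def learn_bounded_treewidth_bn(data, K, ops):
    """Skeleton of the algorithm in Figure 1 (left); `ops` bundles the
    subroutines of Sections 4-5 as callables."""
    G = ops.best_scoring_tree(data)      # Chow-Liu style initialization
    M = ops.undirected_skeleton(G)       # a tree is trivially triangulated
    k = 1
    while k < K:
        order = ops.node_ordering(G, M)  # topologically consistent ordering
        chain = ops.best_chain(data, G, M, order)
        if not chain:                    # no beneficial chain remains
            break
        for s, t in chain:
            G.add_edge(s, t)
            M = ops.edge_update(M, G, (s, t))   # treewidth-friendly update
        k = ops.max_clique_size(M)       # treewidth bound is k - 1
    ops.greedy_fill(data, G, M, K)       # single-edge additions up to the bound
    return G
```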
4 Treewidth-Friendly Edge Update
The basic building block of our method is a procedure for maintaining a valid triangulation of the
Bayesian network. An appealing feature of this procedure is that the treewidth bound is guaranteed
to grow by at most one after the update. We first consider single edge (s ? t) addition to the model.
For clarity of exposition, we start with a simple variant of our procedure, and later refine this to
allow for multiple edge additions while maintaining our guarantee on the treewidth bound.
To gain intuition into how the dynamic nature of our update is useful, we use the notion of induced
paths or paths with no shortcuts (see Section 2), and make explicit the following obvious fact:
Observation 4.1: Let G be a Bayesian network structure and let M+ be a moralized triangulation of G. Let M_{(s→t)} be M+ augmented with the edge (s–t) and with the edges (s–p) for every parent p of t in G. Then, every non-chordal cycle in M_{(s→t)} involves s and either t or a parent of t and an induced path between the two vertices.
Stated simply, if the graph was triangulated before the addition of (s → t) to the Bayesian network, then we only need to triangulate cycles created by the addition of the new edge or those forced by moralization. This observation immediately suggests a straightforward single-source triangulation whereby we simply add an edge (s–v) for every node v on an induced path between s and t or its parents before the edge update. Clearly, this naive method results in a valid moralized triangulation of G ∪ (s → t). Surprisingly, we can also show that it is treewidth-friendly.
Theorem 4.2: The treewidth of the graph produced by the single-source triangulation procedure is
greater than the treewidth of the input graph M+ by at most one.
Proof: (outline) For the treewidth to increase by more than one, some maximal clique C in M+ needs to connect to two new nodes. Since all edges are being added from s, this can only happen in one of two ways: (i) either t, a parent p of t, or a node v on an induced path between s and t is also connected to C, but not part of C, or (ii) two such (non-adjacent) nodes exist and s is in C. In either case one edge is missing after the update procedure, preventing the formation of a larger clique.
One problem with the proposed single-source triangulation, despite it being treewidth-friendly, is
that many vertices are connected to the source node, making the triangulations shallow. This can
have an undesirable effect on future edge additions and increases the chances of the formation of
large cliques. We can alleviate this problem with a refinement of the single-source triangulation
procedure that makes use of the concepts of cut-vertices, blocks, and block trees.
Definition 4.3: A block of an undirected graph H is a set of connected nodes that cannot be disconnected by the removal of a single vertex. By convention, if the edge (u–v) is in H then u and v are in the same block. Vertices that separate (are in the intersection of) blocks are called cut-vertices.
It is easy to see that between every two nodes in a block of size greater than two there are at least
two distinct paths, i.e. a cycle. There are also no simple cycles involving nodes in different blocks.
Definition 4.4: The (unique) block tree B of an undirected graph H is a graph whose nodes correspond both to cut-vertices and to blocks of H. The edges in the block tree connect a block node B_i with a cut-vertex node v_j if and only if v_j ∈ B_i in H.
It can be easily shown that any path in H between two nodes in different blocks passes through all
the cut-vertices along the path between the blocks in B. An important consequence that follows
from Dirac [11] is that an undirected graph whose blocks are triangulated is overall triangulated.
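Both notions are standard and available off the shelf; for example, with networkx (a sketch under our own naming):

```python
import networkx as nx

def block_tree(H):
    """Block tree of an undirected graph H (Definition 4.4): one node per
    block (biconnected component) and one per cut-vertex (articulation point)."""
    cuts = set(nx.articulation_points(H))
    B = nx.Graph()
    for i, blk in enumerate(nx.biconnected_components(H)):
        B.add_node(("block", i), members=set(blk))
        for v in set(blk) & cuts:
            B.add_edge(("block", i), ("cut", v))
    return B
```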
Our refined treewidth-friendly triangulation procedure (illustrated via an example in Figure 1) makes use of this fact as follows. First, the triangulated graph is augmented with the edge (s–t) and any edges needed for moralization (Figure 1(b) and (c)). Second, a block-level triangulation is carried out by zig-zagging across cut-vertices along the unique path between the blocks containing s and t and its parents (Figure 1(d)). Next, within each block (not containing s or t) along the path, a single-source triangulation is performed with respect to the "entry" and "exit" cut-vertices. This short-circuits any other node path through (and within) the block. For the block containing s, the single-source triangulation is performed between s and the "exit" cut-vertex. The block containing t and its parents is treated differently: we add chords directly from s to any node v within the block that is on an induced path between s and t (or parents of t) (Figure 1(e)). This is required to prevent moralization and triangulation edges from interacting in a way that would increase the treewidth by more than one (e.g., Figure 1(f)). If s and t happen to be in the same block, then we only triangulate the induced paths between s and t, i.e., the last step outlined above. Finally, in the special case that s and t are in disconnected components of G, the only edges added are those required for moralization.
Theorem 4.5: Our revised edge update procedure results in a triangulated graph with a treewidth at
most one greater than that of the input graph. Furthermore, it runs in polynomial time.
Proof: (outline) First, observe that the final step of adding chords emanating from s is a single-source triangulation once the other steps have been performed. Since each block along the block path between s and t is triangulated separately, we only need to consider the zig-zag triangulation between blocks. As this creates 3-cycles, the graph must also be triangulated. To see that the treewidth increases by at most one, we use similar arguments to those used in the proof of Theorem 4.2, and observe that the zig-zag triangulation only touches cut-vertices, and any three of these vertices could not have been in the same clique. The fact that the update procedure runs in polynomial time follows from the fact that an adaptation (not shown for lack of space) of maximum cardinality search (see, for example, [16]) can be used to efficiently identify all induced nodes between s and t.
Multiple Edge Updates. We now consider the addition of multiple edges to the graph G. To ensure that multiple edges do not interact in ways that will increase the treewidth bound by more than one, we need to characterize the nodes contaminated by each edge addition: a node v is contaminated by adding (s → t) to G if it is incident to a new edge added during our treewidth-friendly triangulation. Below are several examples of contaminated sets (solid nodes) incident to edges added (dashed) by our edge update procedure for different candidate edge additions (s → t) to the Bayesian network on the left. In all examples except the last, the treewidth is increased by one.
Using the notion of contamination, we can characterize sets of edges that are jointly treewidth-friendly. We will use this characterization in Section 5 to learn the optimal treewidth-friendly chain given a node ordering.
moralized triangulation. If {(si ? ti )} is a set of candidate edges satisfying the following:
? the contaminated sets of any (si ? ti ) and (sj ? tj ) are disjoint, or,
? the contaminated sets overlap at a single cut-vertex, but the endpoints of each edge are not
in the same block and the block paths between the endpoints do not overlap;
then adding all edges to G can increase the treewidth bound by at most one.
Proof: (outline) The theorem holds trivially under the first condition. Under the second condition, the only common vertex is a cut-vertex. However, since all other contaminated nodes are in different blocks, they cannot interact to form a large clique.
5 Learning Optimal Treewidth-Friendly Chains
In the previous section we described our edge update procedure and characterized edge chains that jointly increase the treewidth bound by at most one. We now use this to search for optimal chain structures that satisfy Theorem 4.6, and are thus treewidth-friendly, given a topological node ordering. On the surface, one might question the need for a specific node ordering altogether if global chain operators are to be used: given the result of Chow and Liu [5], one might expect that learning the optimal chain with respect to any ordering can be carried out efficiently. However, Meek [19] showed that learning an optimal chain over a set of random variables is computationally difficult, and the result can be generalized to learning a chain conditioned on the current model. Thus, during any iteration of our algorithm, we cannot expect to find the overall optimal chain.
time, we use a straightforward dynamic programming approach: the best treewidth-friendly chain
that contains (Os ? Ot ) is the concatenation of:
? the best chain from the first node O1 to OF , the first node contaminated by (Os ? Ot )
? the edge (Os ? Ot )
optimal chain
optimal chain
? the best chain starting from
the last node contaminated OL
OF
Os
Ot OL
ON
to the last node in the order ON . O1
We note that when the end nodes are not separating cut-vertices, we maintain a gap so that the
contamination sets are disjoint and the conditions of Theorem 4.6 are met.
Figure 2: Gene expression results: (left) 5-fold mean test log-loss per instance vs. treewidth bound. Our method (solid blue squares) is compared to the thin junction tree method (dashed red circles) and an unbounded aggressive search (dotted black). (middle) The treewidth estimate and the number of edges in the chain during the iterations of a typical run with the bound set to 10. (right) Running time as a function of the bound.
Formally, we define C[i, j] as the optimal chain whose contamination is limited to the range [O_i, O_j], and our goal is to compute C[1, N]. Using F to denote the first node ordered in the contamination set of (s → t) (and L for the last), we can compute C[1, N] via the following recursive update principle:

C[i, j] = max of:
    max_{s,t : F=i, L=j} (s → t)          (no split)
    max_{k=i+1..j−1} C[i, k] ∪ C[k, j]    (split)
    ∅                                     (leave a gap)

where the maximization is with respect to the structure score (e.g., BIC). That is, the best chain in a subsequence [i, j] of the ordering is the maximum of three alternatives: edges whose contamination boundaries are exactly i and j (no split); two chains that are joined at some node i < k < j (split); and a gap between i and j when there is no positive edge whose contamination is in [i, j].
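A direct memoized implementation of this recursion is short. The enumeration of candidate edges by their contamination span and the per-edge score gain are abstracted into inputs; all names here are ours, for illustration.

```python
from functools import lru_cache

def best_chain(N, candidates, score):
    """Optimal treewidth-friendly chain for a fixed node ordering O_1..O_N.

    candidates[(i, j)]: edges (s, t) whose contamination spans exactly [i, j].
    score(e): structure-score gain of adding edge e. Returns (gain, edges);
    an empty edge list encodes 'leave a gap'.
    """
    @lru_cache(maxsize=None)
    def C(i, j):
        best = (0.0, ())                       # leave a gap
        for e in candidates.get((i, j), ()):   # no split
            if score(e) > best[0]:
                best = (score(e), (e,))
        for k in range(i + 1, j):              # split at node k
            s1, c1 = C(i, k)
            s2, c2 = C(k, j)
            if s1 + s2 > best[0]:
                best = (s1 + s2, c1 + c2)
        return best

    gain, chain = C(1, N)
    return gain, list(chain)
```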
Finally, for lack of space we only provide a brief description of our topological node ordering. Intuitively, since edges contaminate nodes along the block path between the edge's endpoints (see Section 4), we want to adopt a DFS ordering over the blocks so as to facilitate as many edges as possible between different branches of the block tree. We order nodes within a block by their distance from the "entry" vertex, as motivated by the following result on the distance d^M_min(u, v) between nodes u, v in the triangulated graph M+ (proof not shown for lack of space):
Theorem 5.1: Let r, s, t be nodes in a block B of the triangulated graph M+ with d^M_min(r, s) ≤ d^M_min(r, t). Then for any v on an induced path between s and t we have d^M_min(r, v) ≤ d^M_min(r, t).
The efficiency of our method outlined in Figure 1 in the number of variables and the treewidth bound
(Theorem 3.1) now follows from the efficiency of the ordering and chain learning procedures.
6 Experimental Evaluation
We compare our approach on four real-life datasets to several methods. The first is an improved variant of the thin junction tree method [2]. We start (as in our method) with a Chow–Liu forest and iteratively add the single best scoring edge as long as the treewidth bound is not exceeded. To make the comparison independent of the choice of triangulation method, at each iteration we replace the heuristic triangulation (best of maximum cardinality search or minimum fill-in [16], which in practice had negligible differences) with our triangulation if it results in a lower treewidth. The second baseline is an aggressive structure learning approach that combines greedy edge modifications with a TABU list (e.g., [13]) and random moves, and that is not constrained by a treewidth bound. Where relevant we also compare our results to the results of Chechetka and Guestrin [3].
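For reference, the treewidth check this baseline performs for every candidate edge is itself a one-liner on top of the moralization helper sketched in Section 2, using networkx's built-in min-fill heuristic (our sketch, not the authors' code):

```python
from networkx.algorithms.approximation import treewidth_min_fill_in

def treewidth_upper_bound(dag):
    """Heuristic bound: moralize, then triangulate with min-fill (cf. [16]).
    `moralize` is the helper sketched in Section 2."""
    tw, _tree_decomposition = treewidth_min_fill_in(moralize(dag))
    return tw
```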
Gene Expression. We first consider a continuous dataset of the expression of yeast genes (variables) in 173 experiments (instances) [12]. We learn sigmoid Bayesian networks with the BIC structure score [24] on the fully observed set of 89 genes that participate in general metabolic processes. Here a learned model indicates possible regulatory or functional connections between genes.
Figure 2(a) shows test log-loss as a function of treewidth bound. The first obvious phenomenon
is that both our method and the thin junction tree approach are superior to the aggressive baseline.
As one might expect, the aggressive baseline achieves a higher BIC score on training data (not
shown), but overfits due to the scarcity of the data. The consistent superiority of our method over
thin junction trees demonstrates that a better choice of edges, i.e., ones chosen by a global operator,
can lead to increased robustness and better generalization. Indeed, even when the treewidth bound
Figure 3: 5-fold mean test log-loss/instance for a treewidth
bound of two vs. training set size for the temperature (left) and
traffic (right) datasets. Compared are our approach (solid blue
squares), the thin junction tree method (dashed red circles), an
aggressive unbounded search (dotted black), and the method of
Chechetka and Guestrin [3] (dash-dot magenta diamonds).
Figure 4: Average log-loss vs. treewidth bound for the HapMap data. Compared are an unbounded aggressive search (dotted), and unconstrained (thin lines) and DNA-order-constrained (thick lines) variants of our method and the thin junction tree method.
is increased past the saturation point, our method surpasses both baselines. In this case, we are learning unbounded networks and all the benefit comes from the global nature of our updates.

To qualitatively illustrate the progression of our algorithm, in Figure 2(b) we plot the number of edges in the chain and the treewidth estimate at the end of each iteration for a typical run. Our algorithm aggressively adds multi-edge chains until the treewidth bound is reached, at which point (iteration 24) it becomes fully greedy. To appreciate the non-triviality of some of the chains learned with 4–7 edges, we recall that the chains are added after a Chow–Liu model was initially learned. It is also worth noting that despite their complexity, some chains do not increase the treewidth estimate, and we typically have more than K iterations where chains with more than one edge are added. The number of such iterations is still polynomially bounded, as for a Bayesian network with N variables adding more than K · N edges will necessarily result in a treewidth that is greater than K.
To evaluate the efficiency of our method we measured its running time as a function of the treewidth bound. Figure 2(c) shows results for the gene expression dataset. Observe that our method and the greedy thin junction tree approach are both approximately linear in the treewidth bound. Appealingly, the additional computation our method requires is not significant (roughly 25%). This should not come as a surprise, as the bulk of the time is spent on the collection of the data sufficient statistics.
It is also worth discussing the range of treewidths we considered in the above experiment as well as
the Haplotype experiment below. While treewidths greater than 25 seem excessive for exact inference, state-of-the-art techniques (e.g., [9, 18]) can reasonably handle inference in networks of this
complexity. Furthermore, as our results show, it is beneficial in practice to learn such models. Thus,
combining our method with state-of-the-art inference techniques can allow practitioners to push the
envelope of the complexity of models learned for real applications that rely on exact inference.
The Traffic and Temperature Datasets. We now compare our method to the mutual-information
based LPACJT approach of Chechetka and Guestrin [3] (we compare to the better variant). As their
method is exponential in the treewidth and cannot be used in the gene expression setting, we compare
to it on the two discrete real-life datasets Chechetka and Guestrin [3] considered: the temperature
data is from a deployment of 54 sensor nodes; the traffic dataset contains traffic flow information
measured every 5 minutes in 32 locations in California. To make the comparison fair, we used the
same discretization and train/test splits. Furthermore, as their method can only be applied to a small
treewidth bound, we also limited our model to a treewidth of two. Figure 3 compares the different
methods. Both our method and the thin junction tree approach significantly outperform the LPACJT
on small sample sizes. This result is consistent with the results reported in Chechetka and Guestrin
[3] and is due to the fact that the LPACJT method does not facilitate the use of regularization which
is crucial in the sparse-data regime. The performance of our method is comparable to the greedy
thin junction tree approach with no obvious superiority of either method. This should not come as a
surprise since the fact that the unbounded aggressive search is not significantly better suggests that
the strong signal in the data can be captured rather easily. In fact, Chechetka and Guestrin [3] show
that even a Chow-Liu tree does rather well on these datasets (compare this to the gene expression
dataset where the aggressive variant was superior even at a treewidth of five).
Haplotype Sequences. Finally we consider a more difficult discrete dataset of a sequence of single
nucleotide polymorphism (SNP) alleles from the Human HapMap project [6]. Our model is defined
over 200 SNPs (binary variables) from chromosome 22 of a European population consisting of 60
individuals (we considered several different sequences along the chromosome with similar results).
In this case, there is a natural ordering of variables that corresponds to the position of the SNPs in
the DNA sequence. Figure 4 shows test log-loss results when this ordering is enforced (thicker)
and when it is not (thinner). The superiority of our method when the ordering is used is obvious
while the performance of the thin junction tree method degrades. This can be expected as the greedy
method does not make use of a node ordering, while our method provides optimality guarantees with
respect to a variable ordering at each iteration. Whether constrained to the natural variable ordering
or not, our method ultimately also surpasses the unbounded aggressive search.
7 Discussion and Future Work
In this work we presented a novel method for learning Bayesian networks of bounded treewidth in
time that is polynomial in both the number of variables and the treewidth bound. Our method builds
on an edge update algorithm that dynamically maintains a valid moralized triangulation in a way
that facilitates the addition of chains that are guaranteed to increase the treewidth bound by at most
one. We demonstrated the effectiveness of our treewidth-friendly method on real-life datasets, and
showed that by utilizing global structure modification operators, we are able to learn better models
than competing methods, even when the treewidth of the models learned is not constrained.
Our method can be viewed as a generalization of the work of Chow and Liu [5] that is constrained to
a chain structure but that provides an optimality guarantee (with respect to a node ordering) at every
treewidth. In addition, unlike the thin junction trees approach of Bach and Jordan [2], we provide
a guarantee that our estimate of the treewidth bound will not increase by more than one at each
iteration. Furthermore, we add multiple edges at each iteration, which in turn allows us to better
cope with the problem of local maxima in the search. To our knowledge, ours is the first method for
efficiently learning Bayesian networks with an arbitrary treewidth bound that is not fully greedy.
Our method motivates several exciting future directions. It would be interesting to see to what
extent we could overcome the need to commit to a specific node ordering at each iteration. While
we provably cannot consider every ordering, it may be possible to polynomially provide a reasonable
approximation. Second, it may be possible to refine our characterization of the contamination that
results from an edge update, which in turn may facilitate the addition of more complex treewidthfriendly structures at each iteration. Finally, we are most interested in exploring whether tools
similar to the ones employed in this work could be used to dynamically update the bounded treewidth
structure that is the approximating distribution in a variational approximate inference setting.
References
[1] P. Abbeel, D. Koller, and A. Ng. Learning factor graphs in poly. time & sample complexity. JMLR, 2006.
[2] F. Bach and M. I. Jordan. Thin junction trees. In NIPS, 2001.
[3] A. Chechetka and C. Guestrin. Efficient principled learning of thin junction trees. In NIPS, 2008.
[4] D. Chickering. Learning Bayesian networks is NP-complete. In Learning from Data: AI & Stats V, 1996.
[5] C. Chow and C. Liu. Approx. discrete distrib. with dependence trees. IEEE Trans. on Info. Theory, 1968.
[6] The International HapMap Consortium. The international HapMap project. Nature, 2003.
[7] G. F. Cooper. The computational complexity of probabilistic inference using belief networks. AI, 1990.
[8] P. Dagum and M. Luby. An optimal approximation algorithm for Bayesian inference. AI, 1993.
[9] A. Darwiche. Recursive conditioning. Artificial Intelligence, 2001.
[10] S. Dasgupta. Learning polytrees. In UAI, 1999.
[11] G. A. Dirac. On rigid circuit graphs. Abhandlungen aus dem Math. Seminar der Univ. Hamburg 25, 1961.
[12] A. Gasch et al. Genomic expression program in the response of yeast cells to environmental changes. Molecular Biology of the Cell, 2000.
[13] F. Glover and M. Laguna. Tabu search. In Modern Heuristic Tech. for Comb. Problems, 1993.
[14] D. Heckerman. A tutorial on learning Bayesian networks. Technical report, Microsoft Research, 1995.
[15] D. Karger and N. Srebro. Learning Markov networks: maximum bounded tree-width graphs. In Symposium on Discrete Algorithms, 2001.
[16] A. Koster, H. Bodlaender, and S. Van Hoesel. Treewidth: Computational experiments. Technical report, Universiteit Utrecht, 2001.
[17] S. Lauritzen and D. Spiegelhalter. Local computations with probabilities on graphical structures. Journal of the Royal Statistical Society, 1988.
[18] R. Marinescu and R. Dechter. And/or branch-and-bound for graphical models. IJCAI, 2005.
[19] C. Meek. Finding a path is harder than finding a tree. Journal of Artificial Intelligence Research, 2001.
[20] M. Meila and M. I. Jordan. Learning with mixtures of trees. JMLR, 2000.
[21] M. Narasimhan and J. Bilmes. PAC-learning bounded tree-width graphical models. In UAI, 2004.
[22] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[23] N. Robertson and P. Seymour. Graph minors II: Algorithmic aspects of tree-width. J. of Algorithms, 1987.
[24] G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6:461–464, 1978.
| 3568 |@word middle:1 briefly:2 polynomial:9 decomposition:8 minus:1 solid:3 harder:1 liu:10 contains:3 score:8 karger:1 ours:7 past:1 current:3 discretization:1 chordal:2 si:2 must:1 readily:1 dechter:1 happen:2 cpds:2 plot:1 update:22 v:3 greedy:10 intelligence:2 accordingly:1 short:1 characterization:3 provides:2 contribute:1 node:45 location:1 math:1 chechetka:9 five:2 unbounded:9 glover:1 along:6 symposium:1 combine:1 comb:1 darwiche:1 expected:1 indeed:1 roughly:1 p1:7 surge:1 multi:1 ol:2 globally:2 actual:1 cardinality:2 becomes:1 project:2 estimating:2 bounded:13 underlying:1 circuit:2 israel:1 appealingly:2 cm:10 what:1 narasimhan:1 finding:2 gal:1 guarantee:6 pseudo:1 contaminate:1 every:11 ti:2 friendly:21 thicker:1 runtime:1 exactly:1 demonstrates:1 superiority:3 before:4 positive:1 engineering:1 local:10 negligible:1 seymour:2 thinner:1 consequence:1 laguna:1 despite:3 path:21 approximately:1 might:3 black:2 au:1 resembles:2 dynamically:4 suggests:2 challenging:1 polytrees:1 deployment:1 limited:2 bi:2 range:2 directed:4 unique:4 practice:5 block:35 recursive:2 procedure:17 significantly:2 consortium:1 cannot:5 undesirable:1 operator:5 context:1 equivalent:2 demonstrated:1 missing:1 jerusalem:1 attention:1 starting:2 straightforward:1 decomposable:2 immediately:1 stats:1 examines:1 utilizing:1 importantly:2 fill:1 tabu:2 population:1 searching:1 notion:4 handle:1 annals:1 exact:2 programming:3 robertson:2 satisfying:1 particularly:1 cut:13 observed:1 electrical:1 region:1 cycle:7 connected:3 ordering:23 contamination:7 chord:3 zig:3 principled:1 intuition:1 complexity:9 skeleton:1 nonadjacent:1 dynamic:5 ultimately:1 creates:1 efficiency:4 exit:2 easily:4 joint:2 differently:1 train:1 univ:1 forced:1 distinct:1 query:1 emanating:1 artificial:2 formation:2 refined:1 quite:1 heuristic:4 stanford:3 richer:1 whose:5 larger:1 ability:2 statistic:3 commit:2 jointly:4 final:1 sequence:4 propose:2 maximal:7 adaptation:1 relevant:1 combining:1 achieve:1 description:1 dirac:2 parent:9 ijcai:1 produce:1 leave:1 spent:1 illustrate:1 develop:1 ac:1 augmenting:2 measured:2 minor:1 lauritzen:2 progress:1 strong:1 p2:7 involves:1 treewidth:114 come:3 triangulated:17 convention:1 met:1 guided:1 thick:1 closely:2 dfs:1 direction:1 allele:1 human:1 hapmap:4 polymorphism:1 generalization:4 abbeel:1 alleviate:1 exploring:1 hold:1 sufficiently:1 around:1 considered:3 algorithmic:1 achieves:1 adopt:1 purpose:1 applicable:1 dagum:1 schwarz:1 tool:1 clearly:1 sensor:1 genomic:1 ck:1 rather:2 encode:1 likelihood:2 indicates:1 tech:1 greedily:1 baseline:4 inference:12 rigid:1 marinescu:1 typically:1 chow:10 initially:1 spurious:1 koller:1 interested:2 provably:1 overall:2 constrained:5 special:1 art:2 mutual:1 once:1 ng:1 biology:1 pai:2 excessive:1 thin:24 triangulate:2 future:3 np:3 contaminated:9 report:2 intelligent:1 employ:1 modern:1 individual:1 consisting:1 maintain:1 microsoft:1 attempt:1 evaluation:1 mixture:2 tj:1 chain:42 edge:69 nucleotide:1 tree:54 desired:2 circle:2 delete:1 increased:5 instance:9 moralization:4 maximization:1 vertex:21 subset:2 entry:2 surpasses:2 characterize:4 reported:1 connect:2 international:2 huji:1 probabilistic:4 connecting:1 containing:4 worse:1 return:1 aggressive:13 unordered:1 availability:1 dem:1 satisfy:1 later:1 performed:3 overfits:1 traffic:4 red:2 start:2 reached:1 maintains:2 universiteit:1 il:1 square:2 oi:1 variance:1 kaufmann:1 efficiently:7 correspond:2 identify:1 generalize:5 bayesian:31 produced:1 utrecht:1 worth:2 bilmes:1 straight:1 suffers:1 
definition:5 obvious:4 dm:3 proof:5 gain:1 dataset:7 recall:1 knowledge:1 cj:1 appears:1 exceeded:1 higher:1 response:1 improved:1 furthermore:5 until:1 hand:1 expressive:1 touch:1 o:4 lack:3 incrementally:1 yeast:2 grows:1 building:3 usa:1 effect:1 concept:1 true:4 facilitate:3 hence:1 regularization:1 aggressively:1 iteratively:3 illustrated:1 galel:1 adjacent:1 during:4 width:4 maintained:1 whereby:1 generalized:2 outline:5 complete:2 demonstrate:1 temperature:3 snp:3 reasoning:1 variational:1 novel:3 common:1 sigmoid:1 superior:2 functional:1 haplotype:2 endpoint:3 foreach:1 conditioning:1 significant:1 ai:3 automatic:1 unconstrained:1 outlined:3 trivially:1 approx:1 meila:1 had:1 dot:1 surface:1 add:7 recent:1 triangulation:31 showed:3 reverse:1 hamburg:1 binary:1 discussing:1 life:4 der:1 scoring:3 seen:1 minimum:3 preserving:1 greater:5 guestrin:9 additional:1 captured:1 employed:1 morgan:1 v3:2 elidan:1 dashed:3 stephen:1 ii:2 multiple:5 desirable:2 branch:2 signal:1 technical:2 characterized:2 bach:4 long:1 molecular:1 prediction:1 variant:5 basic:1 involving:1 iteration:16 cell:2 addition:13 want:2 background:1 separately:1 grow:1 source:7 crucial:1 ot:4 envelope:1 unlike:1 pass:1 induced:10 tend:1 facilitates:2 undirected:11 flow:1 effectiveness:2 jordan:5 seem:1 practitioner:1 noting:1 split:5 easy:4 bic:5 competing:1 idea:1 whether:2 expression:7 motivated:1 triviality:1 speaking:1 useful:2 generally:1 dna:2 outperform:1 exist:1 tutorial:1 dotted:3 disjoint:2 per:1 bulk:1 blue:2 discrete:4 dasgupta:1 four:3 clarity:1 prevent:1 pj:2 v1:4 graph:47 year:1 enforced:1 run:5 koster:1 topologically:1 family:1 reasonable:1 comparable:1 bound:43 guaranteed:5 meek:2 dash:1 fold:2 topological:2 refine:2 encodes:1 phrased:1 aspect:1 argument:1 min:5 optimality:2 relatively:1 gould:1 department:2 structured:2 disconnected:2 beneficial:3 across:1 heckerman:1 appealing:2 shallow:1 modification:8 making:3 intuitively:2 heart:3 computationally:2 discus:1 turn:2 needed:1 tractable:1 end:2 junction:23 available:1 apply:1 observe:3 progression:1 v2:2 luby:1 alternative:2 robustness:1 altogether:1 bodlaender:1 running:3 ensure:1 graphical:4 maintaining:2 build:1 approximating:1 society:1 appreciate:1 move:1 added:11 question:1 strategy:1 degrades:1 dependence:1 exhibit:1 distance:2 separate:1 separating:1 concatenation:1 participate:1 extent:1 gasch:1 length:2 code:1 o1:2 providing:1 balance:1 hebrew:1 equivalently:1 difficult:3 info:1 stated:1 bde:1 motivates:1 diamond:1 allowing:1 upper:4 observation:2 revised:1 datasets:8 markov:1 finite:1 zagging:1 lpacjt:3 maxk:1 superexponential:1 interacting:1 varied:1 treewidths:2 arbitrary:4 introduced:2 pair:3 required:2 connection:1 sgould:1 baysian:1 california:1 learned:6 pearl:1 nip:2 trans:1 address:1 able:5 usually:1 below:4 regime:1 saturation:1 program:1 oj:1 max:1 explanation:1 belief:1 royal:1 overlap:2 treated:1 rely:2 regularized:1 natural:2 scarce:1 spiegelhalter:2 brief:1 created:1 carried:2 hoesel:1 naive:1 removal:1 fully:5 expect:3 loss:9 interesting:1 limitation:1 proportional:1 acyclic:3 srebro:1 incident:2 sufficient:1 consistent:3 principle:2 metabolic:1 exciting:1 pi:2 prone:1 surprisingly:1 last:5 allow:2 face:1 sparse:1 benefit:1 van:1 boundary:1 overcome:1 xn:2 evaluating:1 valid:4 world:1 dimension:1 preventing:1 forward:1 made:1 jump:1 refinement:1 qualitatively:1 collection:1 polynomially:2 cope:2 sj:1 approximate:2 gene:8 clique:11 ml:2 global:10 overfitting:2 uai:2 xi:3 subsequence:1 search:15 continuous:1 regulatory:1 
learn:14 chromosome:2 nature:5 ca:1 reasonably:1 improving:1 forest:1 interact:2 complex:4 poly:2 necessarily:1 domain:2 vj:2 european:1 fair:1 x1:2 augmented:3 cooper:1 seminar:1 position:1 exceeding:1 explicit:1 exponential:2 candidate:2 chickering:1 jmlr:2 theorem:10 moralized:9 minute:2 magenta:1 specific:2 pac:1 maxi:1 list:1 concern:1 exists:1 adding:5 ci:7 conditioned:1 push:1 gap:3 surprise:2 intersection:2 simply:2 ordered:1 joined:1 corresponds:1 chance:1 relies:3 environmental:1 conditional:1 goal:2 viewed:1 exposition:1 towards:1 replace:1 shortcut:1 hard:2 change:2 typical:2 except:2 called:1 experimental:1 zag:2 formally:1 distrib:1 scarcity:1 evaluate:3 phenomenon:2 |
2,833 | 3,569 | An Online Algorithm for Maximizing
Submodular Functions
Daniel Golovin
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Matthew Streeter
Google, Inc.
Pittsburgh, PA 15213
[email protected]
Abstract
We present an algorithm for solving a broad class of online resource allocation
problems. Our online algorithm can be applied in environments where abstract
jobs arrive one at a time, and one can complete the jobs by investing time in a
number of abstract activities, according to some schedule. We assume that the
fraction of jobs completed by a schedule is a monotone, submodular function of
a set of pairs (v, τ), where τ is the time invested in activity v. Under this assumption, our online algorithm performs near-optimally according to two natural
metrics: (i) the fraction of jobs completed within time T , for some fixed deadline T > 0, and (ii) the average time required to complete each job. We evaluate
our algorithm experimentally by using it to learn, online, a schedule for allocating
CPU time among solvers entered in the 2007 SAT solver competition.
1 Introduction
This paper presents an algorithm for solving the following class of online resource allocation problems. We are given as input a finite set V of activities. A pair (v, τ) ∈ V × R_{>0} is called an action, and represents spending time τ performing activity v. A schedule is a sequence of actions. We use S to denote the set of all schedules. A job is a function f : S → [0, 1], where for any schedule S ∈ S, f(S) represents the proportion of some task that is accomplished by performing the sequence of actions S. We require that a job f have the following properties (here ⊕ is the concatenation operator):
1. (monotonicity) for any schedules S₁, S₂ ∈ S, we have f(S₁) ≤ f(S₁ ⊕ S₂) and f(S₂) ≤ f(S₁ ⊕ S₂)
2. (submodularity) for any schedules S₁, S₂ ∈ S and any action a ∈ V × R_{>0}, f_a(S₁ ⊕ S₂) ≤ f_a(S₁), where we define f_a(S) ≡ f(S ⊕ ⟨a⟩) − f(S).
We will evaluate schedules in terms of two objectives. The first objective, which we call benefit-maximization, is to maximize f(S) subject to the constraint ℓ(S) ≤ T, for some fixed T > 0, where ℓ(S) equals the sum of the durations of the actions in S. For example if S = ⟨(v₁, 3), (v₂, 3)⟩, then ℓ(S) = 6. The second objective is to minimize the cost of a schedule, which we define as

c(f, S) = \int_{t=0}^{\infty} \big(1 - f(S_{\langle t \rangle})\big)\, dt

where S_⟨t⟩ is the schedule that results from truncating schedule S at time t. For example if S = ⟨(v₁, 3), (v₂, 3)⟩ then S_⟨5⟩ = ⟨(v₁, 3), (v₂, 2)⟩.¹ One way to interpret this objective is to imagine
¹ More formally, if S = ⟨a₁, a₂, ...⟩, where aᵢ = (vᵢ, τᵢ), then S_⟨t⟩ = ⟨a₁, a₂, ..., a_{k−1}, a_k, (v_{k+1}, τ′)⟩, where k is the largest integer such that Σ_{i=1}^{k} τᵢ < t and τ′ = t − Σ_{i=1}^{k} τᵢ.
that f(S) is the probability that some desired event occurs as a result of performing the actions in S. For any non-negative random variable X, we have E[X] = ∫_{t=0}^{∞} P[X > t] dt. Thus c(f, S) is
the expected time we must wait before the desired event occurs if we execute actions according to
the schedule S. The following example illustrates these definitions.
Example 1. Let each activity v represent a randomized algorithm for solving some decision problem, and let the action (v, τ) represent running the algorithm (with a fresh random seed) for time τ. Fix some particular instance of the decision problem, and for any schedule S, let f(S) be the probability that one (or more) of the runs in the sequence S yields a solution to that instance. So f(S_⟨T⟩) is (by definition) the probability that performing the runs in schedule S yields a solution to the problem instance in time ≤ T, while c(f, S) is the expected time that elapses before a solution is obtained. It is clear that f(S) is monotone, because adding runs to the sequence S can only increase the probability that one of the runs is successful. The fact that f is submodular can be seen as follows. For any schedule S and action a, f_a(S) equals the probability that action a succeeds after every action in S has failed, which can also be written as (1 − f(S)) · f(⟨a⟩). This, together with the monotonicity of f, implies that for any schedules S₁, S₂ and any action a, we have f_a(S₁ ⊕ S₂) = (1 − f(S₁ ⊕ S₂)) · f(⟨a⟩) ≤ (1 − f(S₁)) · f(⟨a⟩) = f_a(S₁).
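As a concrete check of Example 1, the sketch below (our own Python; the success-probability curves in `p_success` are hypothetical and not from the paper) evaluates f(S) = 1 − Π_{(v,τ)∈S}(1 − p(v, τ)) for a schedule of independent runs, along with the marginal gain f_a(S):

```python
# Sketch: evaluate the job of Example 1 for a schedule of independent runs.
# p_success[v](tau) is the (assumed known here) probability that one run of
# algorithm v with time limit tau solves the fixed problem instance.

def job_value(schedule, p_success):
    """f(S) = probability that at least one run in S succeeds."""
    p_all_fail = 1.0
    for v, tau in schedule:
        p_all_fail *= 1.0 - p_success[v](tau)
    return 1.0 - p_all_fail

def marginal_gain(schedule, action, p_success):
    """f_a(S) = f(S + <a>) - f(S); shrinks as f(S) grows (submodularity)."""
    return job_value(schedule + [action], p_success) - job_value(schedule, p_success)

if __name__ == "__main__":
    # Hypothetical run-length distributions for two solvers.
    p_success = {"A": lambda t: 1.0 - 0.5 ** t, "B": lambda t: min(1.0, 0.1 * t)}
    S = [("A", 1), ("B", 2)]
    print(job_value(S, p_success))                 # f(S)
    print(marginal_gain(S, ("A", 1), p_success))   # diminishing returns
```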
In the online setting, an arbitrary sequence ⟨f^(1), f^(2), ..., f^(n)⟩ of jobs arrives one at a time, and we must finish each job (via some schedule) before moving on to the next job. When selecting a schedule S^(i) to use to finish job f^(i), we have knowledge of the previous jobs f^(1), f^(2), ..., f^(i−1)
but we have no knowledge of f (i) itself or of any subsequent jobs. In this setting we aim to minimize
regret, which measures the difference between the average cost (or average benefit) of the schedules
produced by our online algorithm and that of the best single schedule (in hindsight) for the given
sequence of jobs.
1.1 Problems that fit into this framework
A number of previously-studied problems can be cast as the task of computing a schedule S that minimizes c(f, S), where f is of the form

f(S) = \frac{1}{n} \sum_{i=1}^{n} \Big( 1 - \prod_{(v, \tau) \in S} \big(1 - p_i(v, \tau)\big) \Big).

This expression can be interpreted as follows: the job f consists of n subtasks, and p_i(v, τ) is the probability that investing time τ in activity v completes the i-th subtask. Thus, f(S) is the expected fraction of subtasks that are finished after performing the sequence of actions S. Assuming p_i(v, τ) is a non-decreasing function of τ for all i and v, it can be shown that any function f of this form is monotone and submodular. PIPELINED SET COVER [11, 15] can be defined as the special case in which for each activity v there is an associated time τ_v, and p_i(v, τ) = 1 if τ ≥ τ_v and p_i(v, τ) = 0 otherwise. MIN-SUM SET COVER [7] is the special case in which, additionally, τ_v = 1 or τ_v = ∞ for all v ∈ V. The problem of constructing efficient sequences of trials [5] corresponds to the case in which we are given a matrix q, and p_i(v, τ) = q_{v,i} if τ ≥ 1 and p_i(v, τ) = 0 otherwise.
The problem of maximizing f(S_⟨T⟩) is a slight generalization of the problem of maximizing a monotone submodular set function subject to a knapsack constraint [14, 20] (which in turn generalizes BUDGETED MAXIMUM COVERAGE [12], which generalizes MAX k-COVERAGE [16]). The only difference between the two problems is that, in the latter problem, f(S) may only depend on the set of actions in the sequence S, and not on the order in which the actions appear.
1.2 Applications
We now discuss three applications, the first of which is the focus of our experiments in §5.
1. Online algorithm portfolio design. An algorithm portfolio [9] is a schedule for interleaving the
execution of multiple (randomized) algorithms and periodically restarting them with a fresh random
seed. Previous work has shown that combining multiple heuristics for NP-hard problems into a portfolio can dramatically reduce average-case running time [8, 9, 19]. In particular, algorithms based
on chronological backtracking often exhibit heavy-tailed run length distributions, and periodically
restarting them with a fresh random seed can reduce the mean running time by orders of magnitude
[8]. As illustrated in Example 1, our algorithms can be used to learn an effective algorithm portfolio
online, in the course of solving a sequence of problem instances.
2. Database query processing. In database query processing, one must extract all the records in a
database that satisfy every predicate in a list of one or more predicates (the conjunction of predicates
comprises the query). To process the query, each record is evaluated against the predicates one
at a time until the record either fails to satisfy some predicate (in which case it does not match
the query) or all predicates have been examined. The order in which the predicates are examined
affects the time required to process the query. Munagala et al. [15] introduced and studied a problem
called P IPELINED S ET C OVER (discussed in ?1.1), which entails finding an evaluation order for the
predicates that minimizes the average time required to process a record. Our work addresses the
online version of this problem, which arises naturally in practice.
3. Sensor placement. Sensor placement is the task of assigning locations to a set of sensors so
as to maximize the value of the information obtained (e.g., to maximize the number of intrusions
that are detected by the sensors). Many sensor placement problems can be optimally solved by
maximizing a monotone submodular set function subject to a knapsack constraint [13], a special
case of our benefit-maximization problem (see ?1.1). Our online algorithms could be used to select
sensor placements when the same set of sensors is repeatedly deployed in an unknown or adversarial
environment.
1.3 Summary of results
We first consider the offline variant of our problem. As an immediate consequence of existing
results [6, 7], we find that, for any ε > 0, (i) achieving an approximation ratio of 4 − ε for the cost-minimization problem is NP-hard and (ii) achieving an approximation ratio of 1 − 1/e + ε for the benefit-maximization problem is NP-hard. We then present a greedy approximation algorithm that simultaneously achieves the optimal approximation ratios (of 4 and 1 − 1/e) for these two problems,
building on and generalizing previous work on special cases of these two problems [7, 20].
In the online setting we provide an online algorithm whose worst-case performance guarantees approach those of the offline greedy approximation algorithm asymptotically (as the number of jobs
approaches infinity). We then show how to modify our online algorithm for use in several different
?bandit? feedback settings. Finally, we prove information-theoretic lower bounds on regret. We
conclude with an experimental evaluation.
2 Related Work
As discussed in §1.1, the offline cost-minimization problem considered here generalizes MIN-SUM SET COVER [7], PIPELINED SET COVER [11, 15], and the problem of constructing efficient sequences of trials [5]. Several of these problems have been considered in the online setting. Munagala et al. [15] gave an online algorithm for PIPELINED SET COVER that is asymptotically O(log |V|)-competitive. Babu et al. [3] and Kaplan et al. [11] gave online algorithms for PIPELINED SET COVER that are asymptotically 4-competitive, but only in the special case where the
jobs are drawn independently at random from a fixed probability distribution (whereas our online
algorithm is asymptotically 4-competitive on an arbitrary sequence of jobs).
Our offline benefit-maximization problem generalizes the problem of maximizing a monotone submodular set function subject to a knapsack constraint. Previous work gave offline greedy approximation algorithms for this problem [14, 20], which generalized earlier algorithms for BUDGETED MAXIMUM COVERAGE [12] and MAX k-COVERAGE [16]. To our knowledge, none of these problems have previously been studied in an online setting. Note that our problem is quite different from
online set covering problems (e.g., [1]) that require one to construct a single collection of sets that
covers each element in a sequence of elements that arrive online.
In this paper we convert a specific greedy approximation algorithm into an online algorithm. Recently, Kakade et al. [10] gave a generic procedure for converting an α-approximation algorithm into an online algorithm that is asymptotically α-competitive. Their algorithm applies to linear
optimization problems, but not to the non-linear problems we consider here.
Independently of us, Radlinski et al. [17] developed a no-regret algorithm for the online version of
MAX k-COVERAGE, and applied it to online ranking. As it turns out, their algorithm is a special case of the algorithm OGunit that we present in §4.1.
3 Offline Greedy Algorithm
In the offline setting, we are given as input a job f : S → [0, 1]. Our goal is to compute a schedule S that achieves one of two objectives, either minimizing the cost c(f, S) or maximizing f(S) subject to the constraint ℓ(S) ≤ T.² As already mentioned, this offline problem generalizes MIN-SUM SET COVER under the former objective and generalizes MAX k-COVERAGE under the latter objective, which implies the following computational complexity result [6, 7].
Theorem 1. For any ε > 0, achieving a 4 − ε (resp. 1 − 1/e + ε) approximation ratio for the cost-minimization (resp. benefit-maximization) problem is NP-hard.
We now consider an arbitrary schedule G, whose j-th action is g_j = (v_j, τ_j). Let s_j = f_{g_j}(G_j)/τ_j, where G_j = ⟨g₁, g₂, ..., g_{j−1}⟩, and let ε_j = max_{(v,τ) ∈ V × R_{>0}} { f_{(v,τ)}(G_j)/τ } − s_j. We will prove bounds on the performance of G in terms of the ε_j values. Note that we can ensure ε_j = 0 for all j by greedily choosing g_j = argmax_{(v,τ) ∈ V × R_{>0}} f_{(v,τ)}(G_j)/τ (i.e., greedily appending actions to the schedule so as to maximize the resulting increase in f per unit time).
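A minimal sketch of this greedy rule, in our own Python; we restrict the argmax to a finite candidate list `actions`, whereas the paper optimizes over all of V × R_{>0}:

```python
# Sketch of the offline greedy schedule: repeatedly append the action that
# maximizes marginal benefit per unit time, f_a(G_j) / tau, so that eps_j = 0.

def greedy_schedule(f, actions, budget):
    """f maps a list of (v, tau) actions to [0, 1]; actions is a finite list."""
    G, spent = [], 0.0
    while spent < budget:                       # may overshoot by one action
        base = f(G)
        best, best_rate = None, 0.0
        for (v, tau) in actions:
            rate = (f(G + [(v, tau)]) - base) / tau
            if rate > best_rate:
                best, best_rate = (v, tau), rate
        if best is None:                        # no action increases f
            break
        G.append(best)
        spent += best[1]
    return G
```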
A key property is stated in the following lemma, which follows from the submodularity assumption (for the proof, see [18]).
Lemma 1. For any schedule S, any positive integer j, and any t > 0, f(S_⟨t⟩) ≤ f(G_j) + t · (s_j + ε_j).
Using Lemma 1, together with a geometric proof technique developed in [7], we now show that the
greedy algorithm achieves the optimal approximation ratio for the cost-minimization problem.
Theorem 2. Let S* = argmin_{S ∈ S} c(f, S). If ε_j = 0 for all j, then c(f, G) ≤ 4 · c(f, S*). More generally, let L be a positive integer, and let T = Σ_{j=1}^{L} τ_j. For any schedule S, define c_T(f, S) ≡ ∫_{t=0}^{T} (1 − f(S_⟨t⟩)) dt. Then c_T(f, G) ≤ 4 · c(f, S*) + Σ_{j=1}^{L} E_j τ_j, where E_j = Σ_{l<j} ε_l τ_l.
Proof. We consider the special case ε_j = 0 for all j; for the full proof see [18]. Let R_j = 1 − f(G_j); let x_j = R_j/(2 s_j); let y_j = R_j/2; and let h(x) = 1 − f(S*_⟨x⟩). By Lemma 1, h(x_j) ≥ R_j − R_j/2 = y_j. The monotonicity of f implies that h(x) is non-increasing and also that the sequence y₁, y₂, ... is non-increasing. These facts imply that ∫_{x=0}^{∞} h(x) dx ≥ Σ_{j≥1} x_j (y_j − y_{j+1}) (see Figure 1). The left hand side equals c(f, S*), and, using the fact that s_j = (R_j − R_{j+1})/τ_j, the right hand side simplifies to (1/4) Σ_{j≥1} R_j τ_j ≥ (1/4) c(f, G), proving c(f, G) ≤ 4 · c(f, S*).
Figure 1: An illustration of the inequality ∫_{x=0}^{∞} h(x) dx ≥ Σ_{j≥1} x_j (y_j − y_{j+1}).
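For completeness, the algebra behind the last simplification (our expansion of the compressed step, using s_j τ_j = R_j − R_{j+1}):

```latex
x_j \,(y_j - y_{j+1})
  \;=\; \frac{R_j}{2 s_j}\cdot\frac{R_j - R_{j+1}}{2}
  \;=\; \frac{R_j}{2 s_j}\cdot\frac{s_j \tau_j}{2}
  \;=\; \frac{R_j \tau_j}{4},
\qquad\text{so}\qquad
\sum_{j\ge 1} x_j \,(y_j - y_{j+1})
  \;=\; \frac{1}{4}\sum_{j\ge 1} R_j \tau_j
  \;\ge\; \frac{1}{4}\, c(f, G).
```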
The greedy algorithm also achieves the optimal approximation ratio for the benefit-maximization
problem, as can be shown using arguments similar to the ones in [14, 20]; see [18] for details.
Theorem 3. Let L be a positive integer, and let T = Σ_{j=1}^{L} τ_j. Then f(G_⟨T⟩) > (1 − 1/e) max_{S ∈ S} f(S_⟨T⟩) − Σ_{j=1}^{L} ε_j τ_j.
² Given a set of jobs {f^(1), f^(2), ..., f^(n)}, we can optimize the average schedule cost (or benefit) simply by applying our offline algorithm to the job f = (1/n) Σ_{i=1}^{n} f^(i) (since any convex combination of jobs is a job).
4 Online Greedy Algorithm
In the online setting we are fed, one at a time, a sequence ⟨f^(1), f^(2), ..., f^(n)⟩ of jobs. Prior to receiving job f^(i), we must specify a schedule S^(i). We then receive complete access to the function f^(i).
We measure performance using two different notions of regret. For the cost-minimization objective, we define

R_cost = \frac{1}{n} \sum_{i=1}^{n} c_T(S^{(i)}, f^{(i)}) \;-\; 4 \cdot \min_{S \in \mathcal{S}} \frac{1}{n} \sum_{i=1}^{n} c(S, f^{(i)}),

for some fixed T > 0. Here for any schedule S and job f, we define c_T(S, f) = ∫_{t=0}^{T} (1 − f(S_⟨t⟩)) dt to be the value of c(S, f) when the integral is truncated at time T. Some form of truncation is necessary because c(S^(i), f^(i)) could be infinite, and without bounding it we could not prove any finite bound on regret (our regret bounds will be stated as a function of T). For the benefit-maximization objective, we define

R_benefit = \Big(1 - \frac{1}{e}\Big) \max_{S \in \mathcal{S}} \frac{1}{n} \sum_{i=1}^{n} f^{(i)}(S_{\langle T \rangle}) \;-\; \frac{1}{n} \sum_{i=1}^{n} f^{(i)}(S^{(i)}).

Here we require that for each i, E[ℓ(S^(i))] = T, where the expectation is over the online algorithm's random bits.
That is, we allow the online algorithm to treat T as a budget in expectation, rather than a hard budget.
Our goal is to bound the worst-case expected values of R_cost and R_benefit. For simplicity, we consider the oblivious adversary model, in which the sequence of jobs is fixed in advance and does not change in response to the decisions made by our online algorithm. We confine our attention to schedules that consist of actions that come from some finite set A, and assume that the actions in A have integer durations (i.e., A ⊆ V × Z_{>0}).
4.1 Unit-cost actions
In the special case in which each action takes unit time (i.e., A ⊆ V × {1}), our online algorithm OGunit is very simple. OGunit runs T action-selection algorithms, E₁, E₂, ..., E_T, where T is the number of time steps for which our schedule is defined. The intent is that each action-selection algorithm is a no-regret algorithm such as randomized weighted majority (WMR) [4], which selects actions so as to maximize payoffs associated with the actions. Just before job f^(i) arrives, each action-selection algorithm E_t selects an action a^i_t. The schedule used by OGunit on job f^(i) is S^(i) = ⟨a^i_1, a^i_2, ..., a^i_T⟩. The payoff that E_t associates with action a is f^(i)_a(S^(i)_⟨t−1⟩).
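The round structure of OGunit is easy to state in code. The sketch below is ours, not the authors' implementation: a hedge-style exponential-weights selector stands in for WMR, and the `select`/`update`/`actions` interface is invented for this illustration.

```python
import math
import random

class WMRExpert:
    """Minimal exponential-weights selector (our simplification of WMR [4])."""
    def __init__(self, actions, eta=0.5):
        self.actions = list(actions)
        self.eta = eta
        self.weights = {a: 1.0 for a in self.actions}

    def select(self):
        # Sample an action with probability proportional to its weight.
        total = sum(self.weights.values())
        r = random.random() * total
        for a, w in self.weights.items():
            r -= w
            if r <= 0:
                return a
        return self.actions[-1]

    def update(self, payoffs):
        # Multiplicative update: reward actions with high marginal payoff.
        for a, g in payoffs.items():
            self.weights[a] *= math.exp(self.eta * g)

def og_unit_round(experts, f_i):
    """Run OG_unit on one job f_i (f_i maps a list of actions to [0, 1])."""
    S = [e.select() for e in experts]      # slot t contributes action t
    for t, e in enumerate(experts):
        base = f_i(S[:t])
        # Payoff of action a for slot t is the marginal gain over S[:t].
        e.update({a: f_i(S[:t] + [a]) - base for a in e.actions})
    return S
```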
Theorem 4. Algorithm OGunit has E[R_benefit] = O(√((T/n) ln |A|)) and E[R_cost] = O(T √((T/n) ln |A|)) in the worst case, when WMR [4] is the subroutine action-selection algorithm.
Proof. We will view OGunit as producing an approximate version of the offline greedy schedule for the job f = (1/n) Σ_{i=1}^{n} f^(i). First, view the sequence of actions selected by E_t as a single meta-action ā_t, and extend the domain of each f^(i) to include the meta-actions by defining f^(i)(S ⊕ ⟨ā_t⟩) = f^(i)(S ⊕ ⟨a^i_t⟩) for all S ∈ S (note each f^(i) remains monotone and submodular). Thus, the online algorithm produces a single schedule S̄ = ⟨ā₁, ā₂, ..., ā_T⟩ for all i. Let r_t be the regret experienced by action-selection algorithm E_t. By construction, r_t = max_{a ∈ A} { f_a(S̄_⟨t−1⟩) } − f_{ā_t}(S̄_⟨t−1⟩). Thus OGunit behaves exactly like the greedy schedule G for the function f, with ε_t = r_t. Thus, Theorem 3 implies that R_benefit ≤ Σ_{t=1}^{T} r_t ≡ R. Similarly, Theorem 2 implies that R_cost ≤ T · R. To complete the analysis, it remains to bound E[R]. WMR has worst-case expected regret O((1/n) √(G_max ln |A|)), where G_max is the maximum sum of payoffs for any single action.³ Because each payoff is at most 1 and there are n rounds, G_max ≤ n, so a trivial bound is E[R] = O(T √((1/n) ln |A|)). In fact, the worst case is when G_max = Θ(n/T) for all T action-selection algorithms, leading to an improved bound of E[R] = O(√((T/n) ln |A|)) (for details see [18]), which completes the proof.
³ This bound requires G_max to be known in advance; however, the same guarantee can be achieved by guessing a value of G_max and doubling the guess whenever it is proven wrong.
4.2 From unit-cost actions to arbitrary actions
In this section we generalize the online greedy algorithm presented in the previous section to accommodate actions with arbitrary durations. Like OGunit, our generalized algorithm OG makes use of a series of action-selection algorithms E₁, E₂, ..., E_L (for L to be determined). On each round i, OG constructs a schedule S^(i) as follows: for t = 1, 2, ..., L, it uses E_t to choose an action a^i_t = (v, τ) ∈ A, and appends this action to S^(i) with probability 1/τ. Let S^(i)_t denote the schedule that results from the first t steps of this process (so S^(i)_t contains between 0 and t actions). The payoff that E_t associates with an action a = (v, τ) equals (1/τ) f_a(S^(i)_{t−1}) (i.e., the increase in f per unit time that would have resulted from appending a to the schedule-under-construction).
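A sketch of one OG round, in the same invented expert interface as the OGunit sketch above; the two mechanisms from the text appear directly: an action (v, τ) is appended with probability 1/τ, and payoffs are scaled by 1/τ:

```python
import random

def og_round(experts, f_i, rng=random):
    """One OG round: slot t appends its chosen (v, tau) with prob. 1/tau,
    so each slot adds one unit of time in expectation."""
    S, prefixes = [], []
    for e in experts:
        prefixes.append(list(S))               # schedule-under-construction
        v, tau = e.select()
        if rng.random() < 1.0 / tau:
            S.append((v, tau))
    for e, prefix in zip(experts, prefixes):
        base = f_i(prefix)
        # Payoff of (v, tau) is its per-unit-time marginal gain over prefix.
        e.update({(v, tau): (f_i(prefix + [(v, tau)]) - base) / tau
                  for (v, tau) in e.actions})
    return S
```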
As in the previous section, we view each action-selection algorithm E_t as selecting a single meta-action ā_t. We extend the domain of each f^(i) to include the meta-actions by defining f^(i)(S ⊕ ⟨ā_t⟩) = f^(i)(S ⊕ ⟨a^i_t⟩) if a^i_t was appended to S^(i), and f^(i)(S ⊕ ⟨ā_t⟩) = f^(i)(S) otherwise. Thus, the online algorithm produces a single schedule S̄ = ⟨ā₁, ā₂, ..., ā_L⟩ for all i. Note that each f^(i) remains monotone and submodular.
For the purposes of analysis, we will imagine that each meta-action ā_t always takes unit time (whereas in fact, ā_t takes unit time per job in expectation). We show later that this assumption does not invalidate any of our arguments.
Let f = (1/n) Σ_{i=1}^{n} f^(i), and let S̄_t = ⟨ā₁, ā₂, ..., ā_t⟩. Thus S̄ can be viewed as a version of the greedy schedule from §3, with

ε_t = max_{(v,τ) ∈ A} { (1/τ) f_{(v,τ)}(S̄_{t−1}) } − f_{ā_t}(S̄_{t−1}),

where we are using the assumption that ā_t takes unit time. Let r_t be the regret experienced by E_t. Although r_t ≠ ε_t in general, the two quantities are equal in expectation (proof omitted).
Lemma 2. E[ε_t] = E[r_t].
We now prove a bound on E[R_benefit]. Because each f^(i) is monotone and submodular, f is monotone and submodular as well, so the greedy schedule's approximation guarantees apply to f. In particular, by Theorem 3, we have R_benefit ≤ Σ_{t=1}^{T} ε_t. Thus by Lemma 2, E[R_benefit] ≤ E[R], where R = Σ_{t=1}^{T} r_t.
To bound E[R_benefit], it remains to justify the assumption that each meta-action ā_t always takes unit time. First, note that the value of the objective function f(S̄) is independent of how long each meta-action ā_t takes. Thus, the only potential danger is that in making this assumption we have overlooked a constraint violation of the form E[ℓ(S^(i))] ≠ T. But by construction, E[ℓ(S^(i))] = L for each i, regardless of what actions are chosen by each action-selection algorithm. Thus if we set L = T there is no constraint violation. Combining the bound on E[R] stated in the proof of Theorem 4 with the fact that E[R_benefit] ≤ E[R] yields the following theorem.
Theorem 5. Algorithm OG, run with input L = T, has E[R_benefit] ≤ E[R]. If WMR [4] is used as the subroutine action-selection algorithm, then E[R] = O(√((T/n) ln |A|)).
The argument bounding E[R_cost] is similar, although somewhat more involved (for details, see [18]). One additional complication is that ℓ(S^(i)) is now a random variable, whereas in the definition of R_cost the cost of a schedule is always calculated up to time T. This can be addressed by making the probability that ℓ(S^(i)) < T sufficiently small, which can be done by setting L ≫ T and applying concentration of measure inequalities. However, E[R] grows as a function of L, so we do not want to make L too large. The (approximately) best bound is obtained by setting L = T ln n.
Theorem 6. Algorithm OG, run with input L = T ln n, has E[R_cost] = O(T ln n · E[R] + T/√n). In particular, E[R_cost] = O((ln n)^{3/2} T √((T/n) ln |A|)) if WMR [4] is used as the subroutine action-selection algorithm.
4.3 Dealing with limited feedback
Thus far we have assumed that, after specifying a schedule S^(i), the online algorithm receives complete access to the job f^(i). We now consider three more limited feedback settings that may arise in practice. In the priced feedback model, to receive access to f^(i) we must pay a price C, which is added to our regret. In the partially transparent feedback model, we only observe f^(i)(S^(i)_⟨t⟩) for each t > 0. In the opaque feedback model, we only observe f^(i)(S^(i)).
The priced and partially transparent feedback models arise naturally in the case where action (v, τ) represents running a deterministic algorithm v for τ time units, and f(S) = 1 if some action in S
yields a solution to some particular problem instance, and f (S) = 0 otherwise. If we execute a
schedule S and halt as soon as some action yields a solution, we obtain exactly the information that
is revealed in the partially transparent model. Alternatively, running each algorithm v until it returns
a solution would completely reveal the function f (i) , but incurs a computational cost, as reflected in
the priced feedback model.
Algorithm OG can be adapted to work in each of these three feedback settings; see [18] for the
specific bounds. In all cases, the high-level idea is to replace the unknown quantities used by OG
with (unbiased) estimates of those quantities. This technique has been used in a number of online
algorithms (e.g., see [2]).
4.4 Lower bounds on regret
We now state lower bounds on regret; for the proofs see the full paper [18]. Our proofs have the
same high-level structure as that of the lower bound given in [4], in that we define a distribution
over jobs that allows any online algorithm's expected performance to be easily bounded, and then
prove a bound on the expected performance of the best schedule in hindsight. The upper bounds in
Theorem 4 match the lower bounds in Theorem 7 up to logarithmic factors, although the latter apply
to standard regret as opposed to R_benefit and R_cost (which include factors of 1 − 1/e and 4).
Theorem 7. Let X = √((T/n) ln(|V|/T)). Then any online algorithm has worst-case expected regret Ω(X) (resp. Ω(T X)) for the online benefit-maximization (resp. cost-minimization) problem.
5 Experimental Evaluation on SAT 2007 Competition Data
The annual SAT solver competition (www.satcompetition.org) is designed to encourage the
development of efficient Boolean satisfiability solvers, which are used as subroutines in state-of-the-art model checkers, theorem provers, and planners. The competition consists of running each
submitted solver on a number of benchmark instances, with a per-instance time limit. Solvers are
ranked according to the instances they solve within each of three instance categories: industrial,
random, and hand-crafted.
We evaluated the online algorithm OG by using it to combine solvers from the 2007 SAT solver
competition. To do so, we used data available on the competition web site to construct a matrix
X, where X_{i,j} is the time that the j-th solver required on the i-th benchmark instance. We used this data to determine whether or not a given schedule would solve an instance within the time limit T (schedule S solves instance i if and only if, for some j, S_⟨T⟩ contains an action (h_j, τ) with τ ≥ X_{i,j}). As illustrated in Example 1, the task of maximizing the number of instances solved
within the time limit, in an online setting in which a sequence of instances must be solved one at a
time, is an instance of our online problem (under the benefit-maximization objective).
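The per-instance test just described is straightforward to code; this is our own reading of it, with `runtimes_i` standing for row i of the matrix X, keyed by solver name:

```python
def solves(schedule, runtimes_i, T):
    """True iff the time-T truncation of `schedule` solves instance i.
    runtimes_i[v] is X_{i,v}: the time solver v needs on instance i."""
    elapsed = 0.0
    for v, tau in schedule:
        if elapsed >= T:
            break
        run = min(tau, T - elapsed)        # truncate the last run at time T
        if runtimes_i.get(v, float("inf")) <= run:
            return True
        elapsed += run
    return False
```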
Within each instance category, we compared OG to the offline greedy schedule, to the individual
solver that solved the most instances within the time limit, and to a schedule that ran each solver
in parallel at equal strength. For these experiments, we ran OG in the full-information feedback
model, after finding that the number of benchmark instances was too small for OG to be effective
in the limited feedback models. Table 1 summarizes the results. In each category, the offline greedy
schedule and the online greedy algorithm outperform all solvers entered in the competition as well
as the naïve parallel schedule.
Table 1: Number of benchmark instances solved within the time limit.

Category       Offline greedy   Online greedy   Parallel schedule   Top solver
Industrial          147              149               132              139
Random              350              347               302              257
Hand-crafted        114              107                95               98
References
[1] Noga Alon, Baruch Awerbuch, and Yossi Azar. The online set cover problem. In Proceedings of the 35th STOC, pages 100–105, 2003.
[2] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[3] Shivnath Babu, Rajeev Motwani, Kamesh Munagala, Itaru Nishizawa, and Jennifer Widom. Adaptive ordering of pipelined stream filters. In Proc. Intl. Conf. on Management of Data, pages 407–418, 2004.
[4] Nicolò Cesa-Bianchi, Yoav Freund, David Haussler, David Helmbold, Robert Schapire, and Manfred Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, 1997.
[5] Edith Cohen, Amos Fiat, and Haim Kaplan. Efficient sequences of trials. In Proceedings of the 14th SODA, pages 737–746, 2003.
[6] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 45(4):634–652, 1998.
[7] Uriel Feige, László Lovász, and Prasad Tetali. Approximating min sum set cover. Algorithmica, 40(4):219–234, 2004.
[8] Carla P. Gomes and Bart Selman. Algorithm portfolios. Artificial Intelligence, 126:43–62, 2001.
[9] Bernardo A. Huberman, Rajan M. Lukose, and Tad Hogg. An economics approach to hard computational problems. Science, 275:51–54, 1997.
[10] Sham Kakade, Adam Kalai, and Katrina Ligett. Playing games with approximation algorithms. In Proceedings of the 39th STOC, pages 546–555, 2007.
[11] Haim Kaplan, Eyal Kushilevitz, and Yishay Mansour. Learning with attribute costs. In Proceedings of the 37th STOC, pages 356–365, 2005.
[12] Samir Khuller, Anna Moss, and Joseph (Seffi) Naor. The budgeted maximum coverage problem. Information Processing Letters, 70(1):39–45, 1999.
[13] Andreas Krause and Carlos Guestrin. Near-optimal nonmyopic value of information in graphical models. In Proceedings of the 21st UAI, pages 324–331, 2005.
[14] Andreas Krause and Carlos Guestrin. A note on the budgeted maximization of submodular functions. Technical Report CMU-CALD-05-103, Carnegie Mellon University, 2005.
[15] Kamesh Munagala, Shivnath Babu, Rajeev Motwani, Jennifer Widom, and Eiter Thomas. The pipelined set cover problem. In Proc. Intl. Conf. on Database Theory, pages 83–98, 2005.
[16] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1):265–294, 1978.
[17] Filip Radlinski, Robert Kleinberg, and Thorsten Joachims. Learning diverse rankings with multi-armed bandits. In Proceedings of the 25th ICML, pages 784–791, 2008.
[18] Matthew Streeter and Daniel Golovin. An online algorithm for maximizing submodular functions. Technical Report CMU-CS-07-171, Carnegie Mellon University, 2007.
[19] Matthew Streeter, Daniel Golovin, and Stephen F. Smith. Combining multiple heuristics online. In Proceedings of the 22nd AAAI, pages 1197–1203, 2007.
[20] Maxim Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Operations Research Letters, 32:41–43, 2004.
| 3569 |@word trial:3 version:4 proportion:1 nd:1 widom:2 prasad:1 incurs:1 accommodate:1 series:1 contains:2 selecting:2 daniel:3 existing:1 com:1 assigning:1 dx:2 must:6 written:1 subsequent:1 periodically:2 designed:1 ligett:1 bart:1 greedy:18 selected:1 guess:1 intelligence:1 warmuth:1 ith:2 smith:1 record:4 manfred:1 complication:1 location:1 org:1 mathematical:1 consists:2 prove:5 naor:1 combine:1 lov:1 expected:8 multi:1 decreasing:1 cpu:1 armed:1 solver:13 increasing:2 bounded:1 what:1 interpreted:1 minimizes:2 maxa:1 developed:2 aximum:2 hindsight:2 finding:2 guarantee:3 every:2 y3:1 bernardo:1 chronological:1 exactly:2 um:3 wrong:1 unit:10 appear:1 producing:1 before:4 positive:3 modify:1 treat:1 limit:5 consequence:1 ak:2 approximately:1 studied:3 examined:2 specifying:1 limited:3 yj:6 practice:2 regret:15 x3:1 procedure:1 danger:1 wait:1 pipelined:2 selection:10 operator:1 applying:2 optimize:1 www:1 deterministic:1 maximizing:10 attention:1 regardless:1 duration:3 truncating:1 independently:2 convex:1 economics:1 simplicity:1 helmbold:1 kushilevitz:1 haussler:1 proving:1 notion:1 resp:4 imagine:2 construction:3 pt:3 yishay:1 programming:1 us:1 pa:2 element:2 associate:2 database:4 aszl:1 solved:5 worst:6 ordering:1 ran:2 mentioned:1 subtask:1 environment:2 complexity:1 depend:1 solving:4 completely:1 easily:1 provers:1 effective:2 query:6 detected:1 artificial:1 edith:1 choosing:1 whose:2 heuristic:2 quite:1 solve:2 katrina:1 otherwise:4 g1:1 invested:1 itself:1 online:48 sequence:19 combining:3 entered:2 competition:7 motwani:2 intl:2 produce:2 adam:1 alon:1 job:33 solves:1 coverage:1 c:2 implies:5 come:1 submodularity:2 attribute:1 filter:1 munagala:4 require:3 fix:1 generalization:1 transparent:3 confine:1 considered:2 sufficiently:1 seed:3 matthew:3 achieves:4 a2:2 omitted:1 purpose:1 proc:2 largest:1 qv:1 weighted:1 amos:1 minimization:5 sensor:7 always:3 aim:1 rather:1 kalai:1 pn:6 ej:2 hj:1 og:10 conjunction:1 ax:4 focus:1 joachim:1 vk:1 intrusion:1 industrial:2 adversarial:1 greedily:2 el:1 bandit:3 subroutine:4 selects:2 arg:2 among:1 development:1 art:1 special:8 equal:6 construct:3 x4:1 represents:3 broad:1 icml:1 np:4 report:2 oblivious:1 simultaneously:1 resulted:1 ve:1 individual:1 algorithmica:1 n1:8 evaluation:3 violation:2 arrives:1 allocating:1 integral:1 gmax:6 encourage:1 necessary:1 desired:2 instance:19 earlier:1 boolean:1 cover:5 yoav:2 maximization:9 cost:14 successful:1 predicate:8 too:2 optimally:2 st:4 randomized:3 siam:1 receiving:1 together:2 na:2 aaai:1 cesa:2 management:1 opposed:1 choose:1 conf:2 expert:1 leading:1 return:1 potential:1 babu:3 inc:1 satisfy:2 ranking:2 vi:1 stream:1 later:1 view:3 eyal:1 actionselection:1 competitive:4 hf:2 carlos:2 parallel:3 minimize:2 appended:1 yield:5 ofthe:1 generalize:1 produced:1 none:1 submitted:1 whenever:1 definition:3 against:1 involved:1 e2:2 naturally:2 associated:2 proof:10 eiter:1 seffi:1 appends:1 knowledge:3 satisfiability:1 fiat:1 schedule:54 auer:1 dt:4 reflected:1 specify:1 response:1 improved:1 execute:2 evaluated:2 done:1 hait:2 just:1 shivnath:2 uriel:2 until:2 hand:4 receives:1 web:1 rajeev:2 google:2 reveal:1 grows:1 building:1 cald:1 y2:2 unbiased:1 former:1 awerbuch:1 ai2:1 illustrated:2 round:2 x5:1 game:1 covering:1 generalized:2 complete:5 theoretic:1 tn:5 performs:1 spending:1 recently:1 nonmyopic:1 behaves:1 cohen:1 discussed:2 slight:1 extend:2 interpret:1 mellon:3 multiarmed:1 ai:1 hogg:1 similarly:1 submodular:15 portfolio:5 moving:1 access:3 entail:1 invalidate:1 gj:7 sht:5 
inequality:2 meta:5 accomplished:1 seen:1 guestrin:2 additional:1 somewhat:1 converting:1 determine:1 maximize:5 ii:2 stephen:1 multiple:3 full:3 rj:4 sham:1 technical:2 match:2 long:1 deadline:1 e1:2 tad:1 a1:3 halt:1 variant:1 overage:6 cmu:3 metric:1 expectation:4 represent:2 achieved:1 receive:2 whereas:3 want:1 krause:2 addressed:1 completes:2 noga:1 checker:1 asz:1 subject:6 call:1 integer:5 near:2 revealed:1 affect:1 finish:2 fit:1 gave:4 xj:4 nonstochastic:1 reduce:2 simplifies:1 idea:1 andreas:2 whether:1 expression:1 ipe:1 peter:1 action:54 repeatedly:1 dramatically:1 generally:1 clear:1 category:4 schapire:2 outperform:1 elapses:1 per:4 diverse:1 carnegie:3 rajan:1 key:1 threshold:1 achieving:3 drawn:1 budgeted:2 ht:2 v1:3 asymptotically:5 baruch:1 monotone:10 fraction:3 sum:3 convert:1 run:8 letter:2 opaque:1 soda:1 arrive:3 planner:1 decision:3 summarizes:1 bit:1 ki:2 bound:20 ct:4 pay:1 haim:2 annual:1 activity:7 adapted:1 strength:1 placement:4 constraint:8 infinity:1 x2:1 sviridenko:1 kleinberg:1 argument:3 min:3 performing:5 according:4 combination:1 feige:2 kakade:2 joseph:1 making:2 s1:12 thorsten:1 ln:13 resource:2 previously:2 remains:4 turn:2 discus:1 jennifer:2 lined:1 yossi:1 fed:1 generalizes:6 available:1 operation:1 apply:2 observe:2 sjj:1 v2:3 generic:1 appending:2 knapsack:4 thomas:1 top:1 running:6 ensure:1 include:3 completed:2 graphical:1 approximating:2 objective:11 already:1 quantity:3 occurs:2 added:1 fa:11 concentration:1 rt:9 guessing:1 hai:4 exhibit:1 nemhauser:1 concatenation:1 majority:1 y5:1 trivial:1 fresh:3 assuming:1 length:1 y4:1 illustration:1 ratio:6 minimizing:1 robert:3 stoc:3 negative:1 kaplan:3 stated:3 intent:1 design:1 unknown:2 bianchi:2 upper:1 benchmark:4 finite:3 kamesh:2 truncated:1 immediate:1 payoff:6 defining:2 y1:2 mansour:1 arbitrary:5 subtasks:2 overlooked:1 introduced:1 david:2 pair:2 required:4 cast:1 address:1 adversary:1 max:5 event:2 natural:1 ranked:1 imply:1 finished:1 extract:1 moss:1 prior:1 geometric:1 nicol:2 freund:2 wolsey:1 allocation:2 proven:1 playing:1 pi:7 heavy:1 course:1 summary:1 truncation:1 soon:1 offline:13 side:2 allow:1 fg:1 benefit:11 ha1:2 feedback:11 calculated:1 priced:3 selman:1 collection:1 made:1 adaptive:1 far:1 sj:4 restarting:2 approximate:1 monotonicity:3 dealing:1 uai:1 sat:4 filip:1 pittsburgh:2 conclude:1 assumed:1 gomes:1 xi:2 alternatively:1 investing:2 streeter:3 tailed:1 table:2 additionally:1 learn:2 golovin:3 constructing:2 domain:2 vj:1 anna:1 s2:9 bounding:2 arise:2 azar:1 ait:4 x1:1 crafted:2 site:1 advice:1 deployed:1 fails:1 experienced:2 comprises:1 samir:1 interleaving:1 theorem:15 specific:2 list:1 consist:1 lukose:1 adding:1 maxim:1 magnitude:1 execution:1 illustrates:1 budget:2 generalizing:1 logarithmic:1 backtracking:1 simply:1 carla:1 failed:1 khuller:1 g2:1 doubling:1 partially:3 applies:1 corresponds:1 acm:2 goal:2 viewed:1 price:1 replace:1 fisher:1 experimentally:1 hard:6 change:1 infinite:1 determined:1 huberman:1 justify:1 lemma:6 called:2 experimental:2 succeeds:1 formally:1 select:1 radlinski:1 latter:3 arises:1 evaluate:2 |
2,834 | 357 | Learning Theory and Experiments with
Competitive Networks
Griff L. Bilbro
North Carolina State University
Box 7914
Raleigh, NC 27695-7914
David E. Van den Bout
North Carolina State University
Box 7914
Raleigh, NC 27695-7914
Abstract
We apply the theory of Tishby, Levin, and Solla (TLS) to two problems.
First we analyze an elementary problem for which we find the predictions
consistent with conventional statistical results. Second we numerically
examine the more realistic problem of training a competitive net to learn
a probability density from samples. We find TLS useful for predicting
average training behavior.
1 TLS APPLIED TO LEARNING DENSITIES
Recently a theory of learning has been constructed which describes the learning of a relation from examples (Tishby, Levin, and Solla, 1989; Schwartz, Samalam, Solla, and Denker, 1990). The original derivation relies on a statistical mechanics treatment of the probability of independent events in a system with a specified average value of an additive error function.
The resulting theory is not restricted to learning relations and it is not essentially statistical mechanical. The TLS theory can be derived from the principle of maximum entropy, a general inference tool which produces probabilities characterized by certain values of the averages of specified functions (Jaynes, 1979). A TLS theory can be constructed whenever the specified function is additive and associated with independent examples. In this paper we treat the problem of learning a probability density from samples.
Consider the model as some function p(z|w) of fixed form and adjustable parameters w which are to be chosen to approximate p̄(z), where the overline denotes the true density. All we know about p̄ are the elements of a training set T which are drawn
from it. Define an error e(z|w). By the principle of maximum entropy,

p(z|w) = \frac{1}{Z(\beta)}\, e^{-\beta\, e(z|w)}   (1)

can be interpreted as the unique density which contains no other information except a specified value of the average error

\langle e \rangle = \int dz\; p(z|w)\, e(z|w).   (2)
In Equation 1, Z is a normalization that is assumed to be independent of the value of w; the parameter β is called the sensitivity and is adjusted so that the average error is equal to some ε_T, the specified target error on the training set. We will use the convention that an integral operates on the entire expression that follows it.
The usual Bayes rule produces a density in w from p(z|w) and from a prior density p^{(0)}(w), which reflects at best a genuine prior probability or at least a restriction to the acceptable portion of the search space. Posterior to training on m certain examples,

p^{(m)}(w) = \frac{1}{Z_m}\, p^{(0)}(w) \prod_{i=1}^{m} p(z_i|w),   (3)

where Z_m is a normalization that depends on the particular set of examples as well as their number. In order to remove the effect of any particular set of examples, we can average this posterior density over all possible m examples:

\langle p^{(m)}(w) \rangle = \int \Big[ \prod_{i=1}^{m} dz_i\; \bar{p}(z_i) \Big]\, p^{(m)}(w).   (4)

This average posterior density models the expected density of nets or w after training. This distribution in w implies the following expected posterior density for a new example z_{m+1}:

p^{(m)}(z_{m+1}) = \int dw\; \langle p^{(m)}(w) \rangle\, p(z_{m+1}|w).   (5)

TLS compare this probability in z_{m+1} with the true target probability to obtain the Average Prediction Probability, or APP, after training:

P^{(m)} = \int dz_{m+1}\; \bar{p}(z_{m+1})\, p^{(m)}(z_{m+1}),   (6)

the average over both the training set z^{(m)} and an independent test example z_{m+1}.
The averages of Equations 4 and 6 are inconvenient to evaluate exactly because of the Z_m term in Equation 3. TLS propose an "annealed approximation" to the APP in which the average of the ratio of Equation 4 is replaced by the ratio of the averages. Equation 6 becomes

P^{(m)} = \frac{\int dw\; p^{(0)}(w)\, g^{m+1}(w)}{\int dw\; p^{(0)}(w)\, g^{m}(w)},   (7)

where

g(w) = \int dz\; \bar{p}(z)\, p(z|w).   (8)
Equation 7 is well suited for theoretical analysis and is also convenient for numerical predictions. To apply Equation 7 numerically, we will produce Monte Carlo estimates for the moments of g that involve sampling p^{(0)}(w). If the dimension of w is larger than 50, it is preferable to histogram g rather than evaluate the moments directly.
1.1 ANALYSIS OF AN ELEMENTARY EXAMPLE
In this section we theoretically analyze a learning problem with the TLS theory.
We will study the adjustment of the mean of a Gaussian density to represent a
finite number of samples. The utility of this elementary example is that it admits
an analytic solution for the APP of the previous section. All the relevant integrals
can be computed with the identity
\int_{-\infty}^{\infty} dz\; \exp\big(-a_1 (z-b_1)^2 - a_2 (z-b_2)^2\big) = \sqrt{\frac{\pi}{a_1+a_2}}\; \exp\Big(-\frac{a_1 a_2}{a_1+a_2}\,(b_1-b_2)^2\Big).   (9)
We take the true density to be a Gaussian of mean w̄ and variance 1/(2α):

\bar{p}(z) = \sqrt{\alpha/\pi}\; e^{-\alpha (z-\bar{w})^2}.   (10)
We model the prior density as a Gaussian with mean w₀ and variance 1/(2γ):

p^{(0)}(w) = \sqrt{\gamma/\pi}\; e^{-\gamma (w-w_0)^2}.   (11)
We choose the simplest error function,

e(z|w) = (z - w)^2,   (12)
the squared error between a sample z and the Gaussian "model" defined by its mean w, which is to become our estimate of w̄. In Equation 1, this error function leads to

p(z|w) = \sqrt{\beta/\pi}\; e^{-\beta (z-w)^2},   (13)

with Z(β) = √(π/β), which is independent of w as assumed. We determine β by solving ⟨e⟩ = ε_T for the error on the training set to get

\beta = \frac{1}{2\,\epsilon_T}.   (14)
The generalization, Equation 8, can now be evaluated with Equation 9
g(w)
= ~e-"(W-iii)3,
(14)
0/3 ,
0+/3
(15)
where
K=
is less than either
0
or
/3.
The denominator of Equation 7 becomes
(~)m/2 ~
7r
V~
exp(-
mK1'
mK+1'
(w-wo)2)
(16)
Learning Theory and Experiments with Competitive Networks
with a similar expression for the numerator.
The case of many examples or little prior knowledge is interesting. Consider Equations 7 and 16 in the limit mit > > f'
(m)
p
=
{K {Tn
(17)
Y;Ym+1'
which climbs to an asymptotic value of ~ for m - - t 00. In order to compare this
with intuition, consider that the sample mean of {ZlJ Z2J "'J zm} approaches w to
within a variance of 1/2ma:, so that
(p(m)(w))z
~
Jrn;
e- ma (z-w)3
(18)
which makes Equation 6 agree with Equation 17 for large enough {3. In this sense,
the statistical mechanical theory of learning differs from conventional Bayesian estimation only in its choice of an unconventional performance criterion APP.
2
GENERAL NUMERICAL PROCEDURE
In this section we apply the theory to the more realistic problem of learning a
continuous probability density from a finite sample set. We can estimate the moments of Equation 7 by the following Monte Carlo procedure. Given a training set
T = {Zt H~r drawn from the unknown density p on domain X with finite volume V J
an error function f( Z \w ), a training error fT J and a prior density p(O) (w) of vectors
such that each w specifies a candidate function,
=
1. Construct two sample sets: a prior set of P functions P
{wp } drawn from
p(O)(w) and a set of U input vectors U
{zu} drawn uniformly from X. For
each p in the prior set, tabulate the error fup
?(zulwp) for every point in U
and the error ftp = f(Zt\Wp) for every point in T.
=
=
2. Determine the sensitivity f3 by solving the equation (?)
()
f
=
= ?T where
Eu e-/J?.... fup
Eu e-/J'.. .
(19)
3. Estimate the average generalization of a given wp from Equation 8
(20)
4. The performance after m examples is the ratio of Equation 7. By construction
P is drawn from p(O) so that
(21)
849
850
Bilbro and Vern den Bout
2r---~--~----~--~---'
2r-------~--~~--~--~
.010
1.5
1.5
.013
.ol!l
:I!!
:81
.oIl
.0111
A-
mI
~
8:C
1
0.7
0.7
0
2D
40
60
80
100
0
20
Training Set Size
40
80
60
100
Training Set SIzIe
(a)
(b)
Figure 1: Predicted APP versus number of training samples for a 20-neuron competitive network trained to various target errors where the neuron weights were
initialized from (a) a uniform density, (b) an antisymmetrically skewed density.
3
COMPETITIVE LEARNING NETS
We consider competitive learning nets (CLNs) because they are familiar and useful to us (Van den Bout and Miller, 1990), because there exist two widely known
training strategies for CLN s (the neurons can learn either independently or under a
global interaction called conscience (DeSieno, 1988), and because CLNs can be applied to one-dimensional problems without being too trivial. Competitive learning
nets with conscience qualitatively change their behavior when they are trained on
finite sample sets containing fewer examples than neurons; except for that regime
we found the theory satisfactory. All experiments in this section were conducted
upon the following one-dimensional training density
15(z)
= { ~!;z
o <z< I,
otherwise.
=
In Figure 1 is the Average Prediction Probability (APP) for k
20 versus m,
for several values of target error fT and for two prior densitsities; first consider
predictions from the uniform prior. For fT = 0.01, APP practically attains its
asymptote of 1.5 by m
40 examples. Assuming the APP to be dominated in
the limit by the largest g, we expect a CLN trained to an error of 0.01 on a set of
40 examples to perform 1.5 times better than an untrained net on unseen samples
drawn from the same probability density. This leads to a predicted probable error
of about
=
fJWob
For k
= 2k
= 20, fpf'ob = 0.017 for fT = .01 and
1
pCm) ?
fpt'ob
(22)
= 0.021 for fT = 0.02.
We performed 5,000 training trials of a 20-neuron CLN on randomly selected sets of
Learning Theory and Experiments with Competitive Networks
0.04
r-------.---__._----.
0.04
?
...
?
0.03
?
?
2
W
. ?,. ?
?
(
)
iI
..-
0.03
2
w
iI
?
~
?
?? ?
?
?
~
0.02
0.02
0~1~-~~---~---~
o
0.01
0.02
Trlining Error
0~1~--~---~---~
o
0.03
(a)
0.01
0.02
Trlinlng ElTor
0.03
(b)
Figure 2: Experimentally determined and predicted values of total error across
the training density after competitive learning was performed using a 20-neuron
network trained to various target errors (a) with 40 samples, (b) with 20 samples.
40 samples from the training density. Each network was trained to a target error in
the range [0.005,0.03] on its 40 samples, and the average error on the total density
was then calculated for the trained network. Figure 2 is a plot of 500 of these
trials along with the predicted errors for various target errors. The probable error
is qualitatively correct and the seatter of actual experiments increases in width by
about the ratio of APPs for m 20 and m 40. For the ease of m 20 examples,
the same net can only be expected to exhibit probable errors of .019 and .023 for
corresponding training target errors, which is compared graphically in Figure 2 with
the experimentally determined errors for m = 20.
=
=
=
The APP curves saturate at a value of m that is insensitive to the prior density
from which the nets are drawn. The vertical seale does depend somewhat on the
prior however. Consider Figure 1, which also shows the APP curves for the same
k
20 net with the prior density antisymmetrically skewed away from the true
density by the following function:
=
p
(0)
(w)
l
={OV1-W
0 ~ W < 1,
otherwise.
For m > 20 the 6hape6 of the curves are almost unchanged, even though the vertical
scale is different: saturation occurs at about the same value of m. Even when
the prior greatly overrepresents poor nets, their effect on the prediction rapidly
diminishes with training set size. This is important because in actual training, the
effect of the initial configuration is also quickly lost. For m < 20 the predictions
are not valid in any case, since our simple error function does not reflect the actual
probability even approximately for m < k in these nets. It is for m < 20 where
the only significant differences between the two families of curves occur. We have
also been able to draw the same conclusions from less structured prior densities
generated by assigning positive normalized random numbers to intervals of the
851
852
Bilbro and v.m den Bout
domain. Moreover, we generally find that TLS predicts that about twice as many
samples as neurons are needed to train competitive nets of other sizes.
4
CONCLUSION
TLS can be applied to learning densities as well as relations. We considered the
effects of varying the number of examples, the target training error, and the choice
of prior density. In these experiments on learning a density as well as others dealing
with learning a binary output (Bilbro and Snyder, 1990), a ternary output (Chow,
Bilbro, and Yee, 1990), and a continuous output (Bilbro and Klenin, 1990) we
find if saturation occurs for m substantially less than the total number of available
samples, say m < ITI/2, that m is a good predictor of sufficient training set size.
Moreover there is evidence from a reformulation of the learning theory based on the
grand canonical ensemble that supports this statistical approach (Klenin,1990).
References
G. L. Bilbro and M. Klenin. (1990) Thermodynamic Models of Learning: Applications. Unpublished.
G. L. Bilbro and W. E. Snyder. (1990) Learning theory, linear separability, and
noisy data. CCSP-TR-90/7, Center for Communications and Signal Processing,
Box 7914, Raleigh, NC 27695-7914.
M. Y. Chow, G. L. Bilbro and S. O. Yee. (1990) Application of Learning Theory to
Single-Phase Induction Motor Incipient Fault Detection Artificial Neural Networks.
Submitted to International Journal of Neural Syltem,.
D. DeSieno. (1988) Adding a conscience to competitive learning. In IEEE International Conference on Neural Network" pages 1:117-1:124.
E. T. Jaynes. (1979) Where Do We Stand on Maximum Entropy? In R. D. Leven
and M. Tribus (Eds.), Mazimum Entropy Formali,m, M. I. T. Press, Cambridge,
pages 17-118.
M. Klenin. (1990) Learning Models and Thermostatistics: A Description of Overtraining and Generalization Capacities. NETR-90/3, Center for Communications
and Signal Processing, Neural Engineering Group, Box 7914, Raleigh, NC 276957914.
D. B. Schwartz, V. K. Samalan, S. A. Solla &. J. S. Denker. (1990) Exhaustive
Learning. Neural Computation.
N. Tishby, E. Levin, and S. A. Solla. (1989) Consistent inference of probabilities in
layered networks: Predictions and generalization. IJCNN, IEEE, New York, pages
II:403-410.
D. E. Van den Bout and T. K. Miller III. (1990) TInMANN: The integer markovian
artificial neural network. Accepted for publication in the Journal of Parallel and
Diltributed Computing.
| 357 |@word trial:2 carolina:2 tr:1 moment:3 initial:1 configuration:1 contains:1 tabulate:1 dzp:1 jaynes:2 assigning:1 bd:1 additive:2 numerical:2 realistic:2 analytic:1 motor:1 remove:1 asymptote:1 plot:1 fewer:1 selected:1 conscience:3 along:1 constructed:2 become:1 theoretically:1 overline:1 expected:3 behavior:2 examine:1 mechanic:1 ol:1 little:1 actual:3 becomes:2 moreover:2 interpreted:1 substantially:1 every:2 exactly:1 preferable:1 schwartz:1 positive:1 engineering:1 treat:1 limit:2 approximately:1 twice:1 dwp:2 ease:1 range:1 bilbro:10 unique:1 ternary:1 lost:1 differs:1 procedure:2 convenient:1 get:1 layered:1 yee:2 restriction:1 conventional:2 dz:2 center:2 annealed:1 graphically:1 independently:1 rule:1 target:9 gm:2 construction:1 element:1 predicts:1 ft:5 wj:1 eu:2 solla:2 intuition:1 trained:6 depend:1 solving:2 upon:1 vern:1 various:3 derivation:1 train:1 monte:2 artificial:2 exhaustive:1 larger:1 widely:1 say:1 otherwise:2 unseen:1 noisy:1 net:12 propose:1 interaction:1 zm:6 relevant:1 rapidly:1 description:1 produce:3 ftp:1 predicted:4 implies:1 convention:1 correct:1 generalization:4 probable:3 elementary:3 adjusted:1 practically:1 fpf:1 considered:1 exp:3 a2:2 estimation:1 diminishes:1 iw:1 largest:1 tool:1 reflects:1 mit:1 gaussian:4 rather:1 varying:1 publication:1 derived:1 zlj:1 greatly:1 attains:1 sense:1 inference:2 entire:1 chow:2 relation:3 equal:1 genuine:1 construct:1 f3:1 sampling:1 others:1 randomly:1 familiar:1 replaced:1 phase:1 detection:1 integral:2 initialized:1 inconvenient:1 theoretical:1 mk:1 markovian:1 uniform:2 predictor:1 levin:3 conducted:1 tishby:3 cln:3 too:1 density:31 grand:1 sensitivity:1 international:2 ym:1 quickly:1 squared:1 reflect:1 containing:1 choose:1 b2:2 north:2 depends:1 performed:2 analyze:2 portion:1 competitive:12 bayes:1 parallel:1 desieno:2 variance:3 miller:2 ensemble:1 apps:1 bayesian:1 carlo:2 app:10 submitted:1 overtraining:1 whenever:1 ed:1 associated:1 mi:1 treatment:1 thermostatistics:1 knowledge:1 evaluated:1 box:4 though:1 oil:1 effect:4 normalized:1 true:4 wp:3 satisfactory:1 numerator:1 skewed:2 width:1 criterion:1 tn:1 recently:1 fi:1 insensitive:1 volume:1 numerically:2 significant:1 cambridge:1 posterior:4 certain:2 binary:1 fault:1 somewhat:1 sol1a:3 determine:2 signal:2 ii:3 thermodynamic:1 characterized:1 a1:1 prediction:8 mk1:1 denominator:1 essentially:1 histogram:1 normalization:2 represent:1 interval:1 climb:1 integer:1 tinmann:1 iii:3 enough:1 followin:1 expression:2 utility:1 wo:3 york:1 tribus:1 useful:2 generally:1 involve:1 jrn:1 simplest:1 specifies:1 exist:1 canonical:1 snyder:2 group:1 incipient:1 reformulation:1 drawn:7 wand:1 almost:1 family:1 draw:1 ob:2 acceptable:1 occur:1 ijcnn:1 dominated:1 structured:1 verage:1 poor:1 describes:1 across:1 separability:1 den:6 restricted:1 equation:18 agree:1 needed:1 know:1 unconventional:1 ov1:1 available:1 apply:3 denker:2 away:1 netr:1 original:1 denotes:1 unchanged:1 occurs:2 strategy:1 usual:1 exhibit:1 capacity:1 trivial:1 induction:1 assuming:1 ratio:4 syltem:1 nc:4 zt:2 adjustable:1 unknown:1 perform:1 vertical:2 neuron:7 iti:1 clns:2 finite:4 communication:2 david:1 unpublished:1 mechanical:2 specified:5 bout:6 able:1 regime:1 saturation:2 fpt:1 event:1 predicting:1 prior:15 asymptotic:1 expect:1 interesting:1 versus:2 sufficient:1 consistent:2 principle:1 raleigh:4 van:4 curve:4 dimension:1 calculated:1 valid:1 stand:1 qualitatively:2 approximate:1 dealing:1 global:1 assumed:2 search:1 continuous:2 learn:2 untrained:1 domain:2 tl:10 en:1 
adz:1 candidate:1 saturate:1 zu:1 admits:1 evidence:1 adding:1 suited:1 entropy:4 pcm:1 adjustment:1 relies:1 ma:2 identity:1 change:1 experimentally:2 determined:2 except:2 operates:1 uniformly:1 principal:1 called:2 total:3 accepted:1 support:1 griff:1 evaluate:2 fup:2 |
2,835 | 3,570 | Short-Term Depression in VLSI Stochastic Synapse
Peng Xu, Timothy K. Horiuchi, and Pamela Abshire
Department of Electrical and Computer Engineering, Institute for Systems Research
University of Maryland, College Park, MD 20742
pxu,timmer,[email protected]
Abstract
We report a compact realization of short-term depression (STD) in a VLSI stochastic synapse. The behavior of the circuit is based on a subtractive single release model of STD. Experimental results agree well with simulation and exhibit
expected STD behavior: the transmitted spike train has negative autocorrelation
and lower power spectral density at low frequencies which can remove redundancy in the input spike train, and the mean transmission probability is inversely
proportional to the input spike rate which has been suggested as an automatic
gain control mechanism in neural systems. The dynamic stochastic synapse could
potentially be a powerful addition to existing deterministic VLSI spiking neural
systems.
1
Introduction
Synapses are the primary locations in neural systems where information is processed and transmitted. Synaptic transmission is a stochastic process by nature, i.e. it has been observed that at central
synapses transmission proceeds in an all-or-none fashion with a certain probability. The synaptic
weight has been modeled as R = npq [1], where n is the number of quantal release sites, p is the
probability of release per site, and q is some measure of the postsynaptic effect. The synapse undergoes constant changes in order to learn from and adapt to the ever-changing outside world. The
variety of synaptic plasticities differ in the triggering condition, time span, and involvement of preand postsynaptic activity. Regulation of the vesicle release probability has been considered as the
underlying mechanism for various synaptic plasticities [1?3]. The stochastic nature of the neural
computation has been investigated and the benefits of stochastic computation such as energy efficiency, communication efficiency, and computational efficiency have been shown [4?6]. Recently
there is increasing interest in probabilistic modeling of brain functions [7]. VLSI stochastic synapse
could provide a useful hardware tool to investigate stochastic nature of the synapse and also function
as the basic computing unit for VLSI implementation of stochastic neural computation.
Although adaptive deterministic VLSI synapses have been extensively studied and developed for
neurally inspired VLSI learning systems [8?13], stochastic synapses have been difficult to implement in VLSI because it is hard to properly harness the probabilistic behavior, normally provided
by noise. Although stochastic behavior in integrated circuits has been investigated in the context of
random number generators (RNGs) [14], these circuits either are too complicated to use for a stochastic synapse or suffer from poor randomness. Therefore other approaches were explored to bring
randomness into the systems. Stochastic transmission was implemented in software using a lookup
table and a pseudo random number generator [15]. Stochastic transition between potentiation and
depression has been demonstrated in bistable synapses driven by stochastic spiking behavior at the
network level for stochastic learning [16].
Previously we reported the first VLSI stochastic synapse. Experimental results demonstrated true
randomness as well as the adjustable transmission probability. The implementation with ? 15 transistors is compact for these added features, although there are much more compact deterministic
synapses with as few as five transistors. We also proposed the method to implement plasticity and
demonstrated the implementation of STD by modulating the probability of spike transmission. Like
its deterministic counterpart, this stochastic synapse operates on individual spike train inputs; its
stochastic character, however, creates the possibility of a broader range of computational primitives
such as rate normalization of Poisson spike trains, probabilistic multiplication, or coincidence detection. In this paper we extend the subtractive single release model of STD to the VLSI stochastic
synapse. We present the simulation of the new model. We describe a novel compact VLSI implementation of a stochastic synapse with STD and demonstrate extensive experimental results showing
the agreement with both simulation and theory over a range of conditions and biases.
2
VLSI Stochastic Synapse and Plasticity
Vicm
Vicm
Vdd2
Ibias
Vr
Vr
Vc
Vi+
Vg+
M1
M2
Vg-
Vw
M3
M5
Vp
C
M4
Vtran
Vbias
M7
Vo+
M6
Vdd
Vpre
Vh
Vi-
Vpre~
Vo-
Vdd
Vw
Vo+
Vo-
Figure 1: Schematic of the stochastic synapse with STD.
Previously we demonstrated a compact stochastic synapse circuit exhibiting true randomness and
consuming very little power (10-44 ?W). The core of the structure is a clocked, cross-coupled differential pair comparator with input voltages Vi+ and Vi? , as shown in the dashed box in Fig. 1.
It uses competition between two intrinsic circuit noise sources to generate random events. The differential design helps to reduce the influence from other noise sources. When a presynaptic spike
arrives, Vpre? goes low, and transistor M5 shuts off. Vo+ and Vo? are nearly equal and the circuit is
in its metastable state. When the two sides are closely matched, the imbalance between Vo+ and Vo?
caused by current noise in M1-M4 eventually triggers positive feedback, which drives one output
to Vc and the other close to ground. We use a dynamic buffer, shown in the dotted box in Fig. 1,
to generate rail-to-rail transmitted spikes Vtran . Vtran either goes high (with probability p) or stays
low (with probability 1 ? p) during an input spike, emulating stochastic transmission.
Fabrication mismatch in an uncompensated stochastic synapse circuit would likely permanently bias
the circuit to one solution. In this circuit, floating gate inputs to a pFET differential pair allow the
mismatch to be compensated. By controlling the common-mode voltage of the floating gates, we
operate the circuit such that hot-electron injection occurs only on the side where the output voltage
is close to ground. Over multiple clock cycles hot-electron injection works in negative feedback
to equalize the floating gate voltages, bringing the circuit into stochastic operation. The procedure
can be halted to achieve a specific probability or allowed to reach equilibrium (50% transmission
probability).
The transmission probability can be adjusted by changing the input offset or the floating gate
charges. The higher Vg+ is, ?the lower ?p is. ??
The probability tuning function is closely fitted by
?
, where ? is the input offset voltage for p = 50%,
an error function f (v) = 0.5 1 + erf v??
2?
? is the standard deviation characterizing the spread of the probability tuning, and v = Vi? ? Vi+
is the input offset voltage. Synaptic plasticity can be implemented by dynamically modulating the
probability. Input offset modulation is suitable for short-term plasticity. Short-term depression is
triggered by the transmitted input spikes Vtran to emulate the probability decrease because of vesicle depletion. Short-term facilitation is triggered by the input spikes Vpre to emulate the probability
increase because of presynaptic Ca2+ accumulation. Nonvolatile storage at the floating gate is suitable for long-term plasticity. STDP can be implemented by modulating the probability depending
on the precise timing relation between the pre- and postsynaptic spikes.
3
Short-Term Depression: Model and Simulation
Although long-term plasticity has attracted much attention because of its apparent association with
learning and memory, the functional role of short-term plasticity has only recently begun to be understood. Recent evidence suggests that short-term synaptic plasticity is involved in many functions
such as gain control [17], phase shift [18], coincidence detection, and network reconfiguration [19].
It has also been shown that depressing stochastic synapses can increase information transmission
efficiency by filtering out redundancy in presynaptic spike trains [5].
Activity dependent short-term changes in synaptic efficacy at the macroscopic level are determined
by activity dependent changes in vesicle release probability at the microscopic level. We will focus
on STD here. STD during repetitive stimulation results from a decrease in released vesicles. Since
there is a finite pool of vesicles, and released vesicles cannot be replenished immediately, a successful release triggered by one spike potentially reduces the probability of release triggered by the next
spike. We propose an STD model based on our VLSI stochastic synapse that closely emulates the
simple subtractive single release model [5, 20]. A presynaptic spike that is transmitted reduces the
input offset voltage v at the VLSI stochastic synapse by ?v, so that the transmission probability p(t)
is reduced. Between successful releases, v relaxes back to its maximum value vmax exponentially
with a time constant ?d so that p(t) relaxes back to its maximum value pmax as well. The model can
be written as
v(t+ ) = v(t? ) ? ?v, successful transmission at t
(1)
dv(t)
= vmax ? v(t)
(2)
?d
dt
p(t) = f (v(t))
(3)
For an input spike train with Poisson arrivals, the model can be expressed as a stochastic differential
equation
vmax ? v
dt ? ?v ? dNp?r(t)
(4)
dv =
?d
where dNp?r(t) is a Poisson counting process with rate p ? r(t), and r(t) is the input spike rate. By
taking the expectation E(?) on both sides, we obtain a differential equation
vmax ? E(v)
dE(v)
=
? ?v ? E(p)r(t)
(5)
dt
?d
When v is reduced, the probability that it will be reduced again becomes smaller.
??
? v is effectively
?
?
constrained to a small range where we can approximate the function f (v) = 0.5 1 + erf v??
2?
by a linear function f (v) = av + 0.5, where ? = 0 for simplicity. We can then solve for E(p) at
steady state:
avmax + 0.5
pmax
1
(6)
pss ?
?
?
1 + a?v?d r
a?v?d r
r
Therefore the steady state mean probability is inversely proportional to the input spike rate when
a?v?d r ? 1. This is consistent with prior work that modeled STD at the macroscopic level [17].
??
?
?
v
, obtained
We simulated the model (1)-(3). We use the function f (v) = 0.5 1 + erf ?2?2.16
from the best fit of the experimental data. Initially v is set to 5 mV which sets pmax close to 1.
Although the transformation from v to p is nonlinear, both simulation and experimental data show
that this implementation exhibits behavior similar to the model with the linear approximation and
the biological data. Fig. 2(a) and 2(b) show that the mean probability is a linear function of the
inverse of the input spike rate at various ?v and ?d for high input spike rates. Both ?v and ?d affect
the slope of the linear relation, following the trend suggested by (6): the bigger the ?v or the bigger
the ?d , the smaller the slope is. Fig. 3 shows a simulation of the transient probability for a period
of 200 ms. Fig. 4 shows that the output spike train exhibits negative autocorrelation at small time
intervals and lower power spectral density (PSD) at low frequencies. This is a direct consequence
of STD.
0.4
0.4
?v = 2 mV
?v = 4 mV
?v = 6 mV
0.35
0.3
0.2
p
0.25
0.2
p
?d = 200 ms
?d = 300 ms
0.3
0.25
0.15
0.15
0.1
0.1
0.05
0.05
0
0
?d = 100 ms
0.35
0.002
0.004
0.006
0.008
0
0
0.01
0.002
0.004
1/r
0.006
0.008
0.01
1/r
(a) ?v = 2, 4, 6 mV, ?d = 100 ms.
(b) ?d = 100, 200, 300 ms, ?v = 2 mV.
Figure 2: Mean probability as a function of input spike rate from simulation. Data were collected at
input rates from 100 Hz to 1000 Hz at 100 Hz intervals. The solid lines show the least mean square
fit for input rates from 400 Hz to 1000 Hz.
0.6
0.5
p(t)
0.4
0.3
0.2
0.1
0
0
20
40
60
80 100 120 140 160 180 200
Time (ms)
Figure 3: Simulated probability trajectory over 200 ms period. r = 100 Hz, ? = 100 ms, ?v = 2
mV.
0.1
20
0
0.06
PSD (dB)
Autocorrelation
0.08
0.04
?20
?40
0.02
?60
0
?0.02
0
10
20
30
Intervals
(a) Autocorrelation.
40
50
?80
0
10
20
30
Frequency (Hz)
40
50
(b) Power spectral density.
Figure 4: Characterization of the output spike train from the simulation of the stochastic synapse
with STD. r = 100 Hz, ?d = 200 ms, ?v = 6 mV, Vmax = 5 mV.
4
VLSI Implementation of Short-Term Depression
We implemented this model using the stochastic synapse circuit described above (see Fig. 1). Both
inputs are restored up to an equilibrium value Vicm by tunable resistors implemented by subthreshold
pFETs operating in the ohmic region. To change the transmission probability we only need to
modulate one side of the input, in this case Vi? . The resistor and capacitor provide for exponential
recovery of the voltage to its equilibrium value. The input Vi? is modulated by transistors M6
and M7 based on the result of the previous spike transmission. Every time a spike is transmitted
successfully, a pulse with height Vh and width Tp is generated at Vp . Tp is same as the input
spike pulse width. This pulse discharges the capacitor with a small current determined by Vw and
reduces Vi? by a small amount, thus decreasing the transmission probability. The value of the
tunable resistors is controlled by the gate voltage of the pFETs, Vr . When Vi? is reduced, the
probability that it will be reduced again becomes smaller. Since the probability tuning only occurs
in a small voltage range (? 10 mV), the change in Vi? is limited to this small range as well. Under
this special condition, the resistance implemented by the subthreshold pFET is linear and large (?
G?). With capacitance as small as 100 fF, the exponential time constant is tens of milliseconds and
is adjustable. Similar control circuits can be applied to Vi+ to implement short-term facilitation.
The update mechanism would then be driven by the presynaptic spike rather than the successfully
transmitted spike. The extra components on the left provide for future implementation of short-term
facilitation and also symmetrize the stochastic synapse, improving its randomness.
5
Experimental Results
The circuit has been fabricated in a commercially-available 0.5 ?m CMOS process with 2 polysilicon layers and 3 metal layers. The layout size of the stochastic synapse is 151.9 ?m ? 91.7 ?m
and the layout size of the STD block is 35 ?m ? 32.2 ?m. A 2-to-1 multiplexer with size 35 ?m
? 30 ?m is used to enable or disable STD. As a proof of concept, the layout of the circuit is quite
conservative. Assuming no loss of performance, the existing circuit area could be reduced by 50%.
The circuit uses a nominal power supply of 5 V for normal operation. The differential pair comparator uses a separate power supply for hot-electron injection. Each floating-gate pFET has a tunnelling
structure, which is a source-drain connected pFET with its gate connected to the floating node. A
separate power supply provides the tunnelling voltage to the shorted source and drain (tunnelling
node). When the tunnelling voltage is high enough (?14-15 V), electron tunnels through the silicon dioxide, from the floating gate to the tunnelling node. We use this phenomenon to remove
electrons from the floating gate only during initialization. Alternatively Ultra-Violet (UV) activated
conductances may be used to remove electrons from the gate to avoid the need for special power
supplies.
To begin the test, we first remove residual charges on the floating gates in the stochastic synapse.
We set Vicm = 2 V. We raise the power supply of the differential pair comparator to 5.3 V to
facilitate the hot-electron injection. We use the negative feedback operation of hot-electron injection
described above to automatically program the circuit into its stochastic regime. We halt the injection
by lowering the power supply to 5 V. During this procedure, STD is disabled, so that the probability
at this operating point is the synaptic transmission probability without any dynamics.
We then enable STD. We use a signal generator to generate pulse signals which serve as input
spikes. Although spike trains are better modeled by Poisson arrivals, the averaging behavior should
be similar for deterministic spike trains which make testing easier. We use Ibias = 100 nA. The
power consumption of the STD block is much smaller than the stochastic synapse. The total power
consumption is about 10 ?W.
We collect output spikes from the depressing stochastic synapse at an input spike rate of 100 Hz. We
divide time into bins according to the input spike rate so that in each bin there is either 1 or 0 output
spike. In this way, we convert the output spike train into a bit sequence s(k). We then compute the
normalized autocorrelation, defined as A(n) = E(s(k)s(k + n)) ? E 2 (s(k)), where n is the number of time intervals between two bits. A(0) gives the variance of the sequence. For two bits with
distance n > 0, A(n) = 0 if they are independent, indicating good randomness, and A(n) < 0 if
they are anticorrelated, indicating the depressing effect of preceding spikes on the later spikes. Fig.
5 shows the autocorrelation of the output spike trains at two different Vr . There is significant nega-
tive correlation at small time intervals and little correlation at large time intervals, as expected from
STD. Fig. 6 shows the PSD of the output spike trains from the same data shown in Fig. 5. Clearly,
the PSD is reduced at low frequencies. The time constant of STD increases with Vr so that the larger
Vr is, the longer the period of the negative autocorrelation is and the lower the frequencies where
power is reduced. This agrees with simulation results. Notice that the autocorrelation and PSD for
Vr = 1.59 V show very close similarity to the simulation results in Fig. 4. Normally redundant
information is represented by positive autocorrelation in the time domain, which is characterized by
power at low frequencies. By reducing the low frequency component of the spike train, redundant
information is suppressed and overall information transmission efficiency is improved. If the negative autocorrelation of the synaptic dynamics matches the positive autocorrelation in the input spike
train, the redundancy is cancelled and the output is uncorrelated [5].
Vr = 1.59 V
0.25
0.1
0.2
0.08
Autocorrelation
Autocorrelation
Vr = 1.56 V
0.15
0.1
0.05
0
0.06
0.04
0.02
0
?0.05
?0.1
0
10
20 30
Intervals
40
?0.02
0
50
10
20 30
Intervals
40
50
Figure 5: Autocorrelation of output spike trains from the VLSI stochastic synapse with STD for
an input spike rate of 100 Hz. Autocorrelation at zero time represents the sequence variance, and
negative autocorrelation at short time intervals indicates STD.
V = 1.56 V
V = 1.59 V
r
20
0
0
PSD (dB)
PSD (dB)
r
20
?20
?40
?60
?80
0
?20
?40
?60
10
20 30 40
Frequency (Hz)
50
?80
0
10
20 30 40
Frequency (Hz)
50
Figure 6: Power spectral density of output spike trains from the VLSI stochastic synapse with STD
for an input spike rate of 100 Hz. Lower PSD at low frequencies indicates STD.
We collect output spikes in response to 104 input spikes at input spike rates from 100 Hz to 1000
Hz with 100 Hz intervals. Fig. 7(a) shows that the mean transmission probability is inversely proportional to the input spike rate for various pulse widths when the rate is high enough. This matches
the theoretical prediction in (6) very well. By scaling the probability with the input spike rate, the
synapse tends to normalize the DC component of input frequency and preserve the neuron dynamic
range, thus avoiding saturation due to fast firing presynaptic neurons and retaining sensitivity to less
frequently firing neurons [17]. The slope of mean probability decreases as the pulse width increases.
Since the pulse width determines the discharging time of the capacitor at Vi? , the larger the pulse
width, the larger the ?v is and the smaller the slope is. Fig. 7(b) shows that a?v?d scales linearly
with the pulse width. The discharging current is approximately constant, thus ?v is proportional to
the pulse width.
1
0.9
0.8
0.7
10 us
20 us
30 us
40 us
50 us
0.04
0.03
a?v??d
p
0.6
0.5
0.4
0.02
0.3
0.2
0.01
0.1
0
0
0.002
0.004
0.006
0.008
10
0.01
20
30
40
50
Pulse width (?s)
1/r
(a) Mean probability as a function of input spike
rate for pulse width Tp =10, 20, 30, 40, 50 ?s.
Data were collected at input rates from 100 Hz to
1000 Hz at 100 Hz intervals. The dotted lines show
the least mean square fit from 200 Hz to 1000 Hz.
(b) a?v?d as a function of the pulse width. The
dotted line shows the least mean square fit, f (x) =
0.0008x + 0.0017.
Figure 7: Steady state behavior of VLSI stochastic synapse with STD for different pulse widths.
We perform the same experiments for different Vr and Vw . As Vr increases, the slope of mean
transmission probability as a linear function of 1r decreases. This is due to the increasing ?d = RC,
where the equivalent resistance R from the pFET increases with Vr . Fig. 8(a) shows that a?v?d
is approximately an exponential function of Vr , indicating that the equivalent R of the pFET is approximately exponential to its gate voltage Vr . For Vw , the slope of mean transmission probability
decreases as Vw increases. This is due to the increasing ?v with Vw . Fig. 8(b) shows that a?v?d is
approximately an exponential function of Vw , indicating that the discharging current from the transistor M6 is approximately exponential to its gate voltage Vw . This matches the I-V characteristics
of the MOSFET in subthreshold.
0.12
0.1
0.1
0.08
a?v??
a?v??d
0.12
d
0.14
0.08
0.06
0.06
0.04
0.04
0.02
0.02
1.55
1.56
1.57
1.58
1.59
Vr (V)
(a) a?v?d as a function of Vr . The dotted line shows
the least mean square fit, f (x) = e(44.54x?72.87) .
0
0.3
0.35
0.4
0.45
0.5
Vw (V)
(b) a?v?d as a function of Vw . The dotted
line shows the least mean square fit, f (x) =
e(15.47x?9.854) .
Figure 8: The effect of biases Vr and Vw on the depressing behavior.
6
Conclusion
We designed and tested a VLSI stochastic synapse with short-term depression. The behavior of
the depressing synapse agrees with theoretical predictions and simulation. The strength and time
duration of the depression can be tuned by the biases. The circuit is compact and consumes low
power. It is a good candidate to bring randomness and rich dynamics into VLSI spiking neural
systems, such as for rate-independent coincidence detection of Poisson spike trains. However, the
application of such dynamic stochastic synapses in large networks still remains a challenge.
References
[1] C. Koch, Biophysics of Computation: Information Processing in Single Neurons. New York,
NY: Oxford University Press, 1999.
[2] M. V. Tsodyks and H. Markram, ?The neural code between neocortical pyramidal neurons
depends on neurotransmitter release probability,? Proc. Natl. Acad. Sci. USA, vol. 94, pp. 719?
723, 1997.
[3] W. Senn, H. Markram, and M. Tsodyks, ?An algorithm for modifying neurotransmitter release
probability based on pre- and postsynaptic spike timing,? Neural Computation, vol. 13, pp.
35?67, 2000.
[4] W. Maass and A. M. Zador, ?Dynamic stochastic synapses as computational units,? Neural
Comput., vol. 11, no. 4, pp. 903?917, 1999.
[5] M. S. Goldman, P. Maldonado, and L. F. Abbott, ?Redundancy reduction and sustained firing
with stochastic depressing synapses,? J. Neurosci., vol. 22, no. 2, pp. 584?591, 2002.
[6] W. B. Levy and R. A. Baxter, ?Energy-efficient neuronal computation via quantal synaptic
failures,? J. Neurosci., vol. 22, no. 11, pp. 4746?4755.
[7] R. Rao, B. Olshausen, and M. Lewicki, Eds., Statistical Theories of the Brain. MIT Press,
2001.
[8] C. Diorio, P. Hasler, B. A. Minch, and C. Mead, ?A single-transistor silicon synapse,? IEEE
Trans. Electron Devices, vol. 43, pp. 1972?1980, Nov. 1996.
[9] P. H?afliger and M. Mahowald, ?Spike based normalizing Hebbian learning in an analog VLSI
artificial neuron,? Int. J. Analog Integr. Circuits Signal Process., vol. 18, no. 2-3, pp. 133?139,
1999.
[10] S.-C. Liu, ?Analog VLSI circuits for short-term dynamic synapses,? EURASIP Journal on
Applied Signal Processing, vol. 2003, pp. 620?628, 2003.
[11] E. Chicca, G. Indiveri, and R. Douglas, ?An adaptive silicon synapse,? in Proc. IEEE Int. Symp.
Circuits Systems, vol. 1, Bangkok, Thailand, May 2003, pp. 81?84.
[12] A. Bofill, A. F. Murray, and D. P. Thompson, ?Circuits for VLSI implementation of temporally
asymmetric Hebbian learning,? in Advances in Neural Information Processing Systems, S. B.
T. G. Dietterich and Z. Ghahramani, Eds. Cambridge, MA, USA: MIT Press, 2002.
[13] G. Indiveri, E. Chicca, and R. Douglas, ?A VLSI array of low-power spiking neurons and
bistable synapses with spike-timing dependent plasticity,? IEEE Trans. Neural Networks,
vol. 17, pp. 211?221, 2006.
[14] C. S. Petrie and J. A. Connelly, ?A noise-based IC random number generator for applications
in cryptography,? IEEE Trans. Circuits Syst. I, vol. 47, no. 5, pp. 615?621, May 2000.
[15] D. H. Goldberg, G. Cauwenberghs, and A. G. Andreou, ?Probabilistic synaptic weighting in
a reconfigurable network of VLSI integrate-and-fire neurons,? Neural Networks, vol. 14, pp.
781?793, 2001.
[16] S. Fusi, M. Annunziato, D. Badoni, A. Salamon, and D. J. Amit, ?Spike driven synaptic plasticity: theory, simulation, VLSI implementation,? Neural Computation, vol. 12, pp. 2227?2258,
2000.
[17] L. F. Abbott, J. A. Varela, K. Sen, and S. B. Nelson, ?Synaptic depression and cortical gain
control,? Science, vol. 275, pp. 220?224, 1997.
[18] F. S. Chance, S. B. Nelson, and L. F. Abbott, ?Synaptic depression and the temporal response
characteristics of V1 cells,? J. Neurosci., vol. 18, no. 12, pp. 4785?4799, 1998.
[19] F. Nadim and Y. Manor, ?The role of short-term synaptic dynamics in motor control,? Curr.
Opin. Neurobiol., vol. 10, pp. 683?690, Dec. 2000.
[20] R. S. Zucker, ?Short-term synaptic plasticity,? Ann. Rev. Neurosci., vol. 12, pp. 13?31, 1989.
| 3570 |@word pulse:14 simulation:12 solid:1 reduction:1 liu:1 efficacy:1 tuned:1 existing:2 current:4 attracted:1 written:1 plasticity:13 motor:1 remove:4 designed:1 opin:1 update:1 device:1 short:17 core:1 characterization:1 provides:1 node:3 location:1 five:1 height:1 rc:1 direct:1 m7:2 differential:7 supply:6 vpre:4 sustained:1 symp:1 autocorrelation:16 peng:1 expected:2 behavior:10 frequently:1 brain:2 inspired:1 decreasing:1 automatically:1 goldman:1 little:2 increasing:3 becomes:2 provided:1 begin:1 underlying:1 matched:1 circuit:24 neurobiol:1 developed:1 shuts:1 transformation:1 fabricated:1 pseudo:1 temporal:1 every:1 charge:2 control:5 unit:2 normally:2 discharging:3 timmer:1 positive:3 engineering:1 timing:3 understood:1 tends:1 consequence:1 acad:1 oxford:1 mead:1 firing:3 modulation:1 approximately:5 initialization:1 studied:1 dynamically:1 suggests:1 collect:2 limited:1 range:6 testing:1 block:2 implement:3 procedure:2 area:1 pre:2 cannot:1 close:4 storage:1 context:1 influence:1 accumulation:1 equivalent:2 deterministic:5 demonstrated:4 compensated:1 primitive:1 go:2 attention:1 layout:3 duration:1 zador:1 thompson:1 simplicity:1 recovery:1 immediately:1 chicca:2 m2:1 array:1 facilitation:3 discharge:1 controlling:1 trigger:1 nominal:1 us:3 goldberg:1 agreement:1 trend:1 npq:1 std:25 asymmetric:1 observed:1 role:2 coincidence:3 electrical:1 vbias:1 tsodyks:2 region:1 cycle:1 connected:2 diorio:1 decrease:5 consumes:1 equalize:1 dynamic:10 vdd:2 raise:1 vesicle:6 serve:1 creates:1 efficiency:5 various:3 emulate:2 represented:1 neurotransmitter:2 ohmic:1 train:18 horiuchi:1 fast:1 describe:1 mosfet:1 artificial:1 outside:1 apparent:1 quite:1 larger:3 solve:1 erf:3 triggered:4 sequence:3 transistor:6 sen:1 propose:1 connelly:1 realization:1 achieve:1 competition:1 normalize:1 transmission:20 cmos:1 help:1 depending:1 implemented:6 differ:1 exhibiting:1 closely:3 modifying:1 stochastic:46 vc:2 transient:1 bistable:2 enable:2 bin:2 potentiation:1 ultra:1 biological:1 adjusted:1 koch:1 considered:1 ground:2 stdp:1 normal:1 ic:1 equilibrium:3 electron:9 released:2 proc:2 modulating:3 agrees:2 successfully:2 tool:1 mit:2 clearly:1 manor:1 rather:1 avoid:1 voltage:14 broader:1 release:12 focus:1 indiveri:2 properly:1 ps:1 indicates:2 annunziato:1 dependent:3 integrated:1 initially:1 vlsi:26 relation:2 overall:1 retaining:1 constrained:1 special:2 equal:1 represents:1 park:1 nega:1 nearly:1 future:1 commercially:1 report:1 shorted:1 few:1 preserve:1 individual:1 m4:2 floating:10 phase:1 fire:1 psd:8 curr:1 detection:3 conductance:1 interest:1 investigate:1 possibility:1 maldonado:1 arrives:1 activated:1 natl:1 divide:1 theoretical:2 fitted:1 modeling:1 rao:1 tp:3 halted:1 mahowald:1 violet:1 deviation:1 fabrication:1 successful:3 too:1 afliger:1 reported:1 minch:1 density:4 sensitivity:1 stay:1 probabilistic:4 off:1 pool:1 na:1 again:2 central:1 multiplexer:1 syst:1 de:1 lookup:1 int:2 caused:1 mv:10 vi:13 depends:1 later:1 cauwenberghs:1 complicated:1 slope:6 square:5 variance:2 emulates:1 characteristic:2 subthreshold:3 vp:2 none:1 trajectory:1 drive:1 randomness:7 synapsis:12 reach:1 synaptic:16 ed:2 failure:1 energy:2 frequency:11 involved:1 pp:17 proof:1 gain:3 tunable:2 begun:1 back:2 salamon:1 higher:1 dt:3 harness:1 response:2 improved:1 synapse:32 depressing:6 box:2 clock:1 correlation:2 nonlinear:1 undergoes:1 mode:1 disabled:1 olshausen:1 facilitate:1 dietterich:1 effect:3 concept:1 true:2 normalized:1 counterpart:1 usa:2 maass:1 during:4 width:12 steady:3 clocked:1 m:10 
m5:2 neocortical:1 demonstrate:1 vo:8 bring:2 novel:1 recently:2 petrie:1 common:1 functional:1 spiking:4 stimulation:1 exponentially:1 extend:1 association:1 m1:2 analog:3 silicon:3 pfets:2 significant:1 cambridge:1 automatic:1 tuning:3 uv:1 zucker:1 longer:1 operating:2 similarity:1 recent:1 involvement:1 driven:3 certain:1 buffer:1 transmitted:7 disable:1 preceding:1 period:3 redundant:2 dashed:1 signal:4 neurally:1 multiple:1 reduces:3 hebbian:2 match:3 adapt:1 polysilicon:1 cross:1 long:2 characterized:1 bigger:2 halt:1 controlled:1 schematic:1 prediction:2 biophysics:1 basic:1 expectation:1 poisson:5 repetitive:1 normalization:1 cell:1 dec:1 addition:1 interval:11 pyramidal:1 source:4 macroscopic:2 extra:1 operate:1 umd:1 bringing:1 hz:21 db:3 capacitor:3 vw:12 counting:1 enough:2 relaxes:2 m6:3 variety:1 affect:1 fit:6 baxter:1 triggering:1 reduce:1 shift:1 suffer:1 resistance:2 york:1 depression:10 tunnel:1 useful:1 amount:1 thailand:1 extensively:1 ten:1 hardware:1 processed:1 reduced:8 generate:3 millisecond:1 notice:1 dotted:5 senn:1 per:1 vol:17 redundancy:4 badoni:1 varela:1 changing:2 douglas:2 abbott:3 hasler:1 lowering:1 v1:1 convert:1 inverse:1 powerful:1 ca2:1 fusi:1 scaling:1 bit:3 layer:2 activity:3 strength:1 software:1 span:1 injection:6 department:1 metastable:1 according:1 pfet:6 poor:1 smaller:5 postsynaptic:4 character:1 suppressed:1 rev:1 dv:2 depletion:1 equation:2 agree:1 previously:2 dioxide:1 remains:1 eventually:1 mechanism:3 integr:1 available:1 operation:3 spectral:4 cancelled:1 permanently:1 gate:14 ghahramani:1 murray:1 amit:1 capacitance:1 added:1 spike:58 occurs:2 restored:1 primary:1 md:1 exhibit:3 microscopic:1 distance:1 separate:2 maryland:1 simulated:2 sci:1 consumption:2 nelson:2 presynaptic:6 collected:2 bofill:1 assuming:1 code:1 modeled:3 quantal:2 regulation:1 difficult:1 potentially:2 negative:7 pmax:3 implementation:9 design:1 adjustable:2 perform:1 anticorrelated:1 imbalance:1 av:1 neuron:8 finite:1 emulating:1 ever:1 communication:1 precise:1 dc:1 tive:1 pair:4 extensive:1 andreou:1 tunnelling:5 trans:3 suggested:2 proceeds:1 mismatch:2 regime:1 challenge:1 program:1 saturation:1 memory:1 power:17 event:1 hot:5 suitable:2 residual:1 inversely:3 temporally:1 coupled:1 vh:2 prior:1 drain:2 multiplication:1 loss:1 proportional:4 filtering:1 vg:3 generator:4 integrate:1 metal:1 consistent:1 subtractive:3 uncorrelated:1 bias:4 side:4 allow:1 institute:1 characterizing:1 ibias:2 taking:1 markram:2 benefit:1 feedback:3 cortical:1 world:1 transition:1 rich:1 symmetrize:1 adaptive:2 vmax:5 approximate:1 compact:6 nov:1 consuming:1 alternatively:1 table:1 nature:3 learn:1 improving:1 investigated:2 domain:1 spread:1 linearly:1 neurosci:4 noise:5 arrival:2 allowed:1 cryptography:1 xu:1 neuronal:1 site:2 fig:14 ff:1 fashion:1 nonvolatile:1 vr:17 ny:1 resistor:3 exponential:6 comput:1 candidate:1 rail:2 levy:1 bangkok:1 weighting:1 specific:1 reconfigurable:1 showing:1 explored:1 offset:5 evidence:1 normalizing:1 intrinsic:1 effectively:1 easier:1 pamela:1 timothy:1 likely:1 expressed:1 lewicki:1 dnp:2 determines:1 chance:1 ma:1 comparator:3 modulate:1 ann:1 change:5 hard:1 eurasip:1 determined:2 operates:1 reducing:1 averaging:1 reconfiguration:1 conservative:1 total:1 experimental:6 m3:1 indicating:4 college:1 preand:1 modulated:1 phenomenon:1 tested:1 avoiding:1 |
2,836 | 3,571 | Non-stationary dynamic Bayesian networks
Joshua W. Robinson and Alexander J. Hartemink
Department of Computer Science
Duke University
Durham, NC 27708-0129
{josh,amink}@cs.duke.edu
Abstract
A principled mechanism for identifying conditional dependencies in time-series
data is provided through structure learning of dynamic Bayesian networks
(DBNs). An important assumption of DBN structure learning is that the data are
generated by a stationary process, an assumption that is not true in many important settings. In this paper, we introduce a new class of graphical models called
non-stationary dynamic Bayesian networks, in which the conditional dependence
structure of the underlying data-generation process is permitted to change over
time. Non-stationary dynamic Bayesian networks represent a new framework for
studying problems in which the structure of a network is evolving over time. We
define the non-stationary DBN model, present an MCMC sampling algorithm for
learning the structure of the model from time-series data under different assumptions, and demonstrate the effectiveness of the algorithm on both simulated and
biological data.
1 Introduction
Structure learning of dynamic Bayesian networks allows conditional dependencies to be identified
in time-series data with the assumption that the data are generated by a distribution that does not
change with time (i.e., it is stationary). An assumption of stationarity is adequate in many situations
since certain aspects of data acquisition or generation can be easily controlled and repeated. However, other interesting and important circumstances exist where that assumption does not hold and
potential non-stationarity cannot be ignored.
As one example, structure learning of DBNs has been used widely in reconstructing transcriptional
regulatory networks from gene expression data [1]. But during development, these regulatory networks are evolving over time, with certain conditional dependencies between gene products being created as the organism develops, while others are destroyed. As another example, dynamic
Bayesian networks have been used to identify the networks of neural information flow that operate
in the brains of songbirds [2]. However, as the songbird learns from its environment, the networks
of neural information flow are themselves slowly adapting to make the processing of sensory information more efficient. As yet another example, one can use a DBN to model traffic flow patterns.
The roads upon which traffic passes do not change on a daily basis, but the dynamic utilization of
those roads changes daily during morning rush, lunch, evening rush, and weekends.
If one collects time-series data describing the levels of gene products in the case of transcriptional
regulation, neural activity in the case of neural information flow, or traffic density in the case of traffic
flow, and attempts to learn a DBN describing the conditional dependencies in these time-series, one
could be seriously misled if the data-generation process is non-stationary.
Here, we introduce a new class of graphical model called a non-stationary dynamic Bayesian network (nsDBN), in which the conditional dependence structure of the underlying data-generation
process is permitted to change over time. In the remainder of the paper, we introduce and define the
nsDBN framework, present a simple but elegant algorithm for efficiently learning the structure of
an nsDBN from time-series data under different assumptions, and demonstrate the effectiveness of
these algorithms on both simulated and experimental data.
1.1 Previous work
In this paper, we are interested in identifying how the conditional dependencies between time-series
change over time; thus, we focus on the task of inferring network structure as opposed to parameters of the graphical model. In particular, we are not as interested in making predictions about
future data (such as spam prediction via a naïve Bayes classifier) as we are in analysis of collected
data to identify non-stationary relationships between variables in multivariate time-series. Here we
describe the few previous approaches to identifying non-stationary networks and discuss the advantages and disadvantages of each. The model we describe in this paper has none of the disadvantages
of the models described below primarily because it makes fewer assumptions about the relationships
between variables.
Recent work modeling the temporal progression of networks from the social networks community
includes an extension to the discrete temporal network model [3], in which the networks are
latent (unobserved) variables that generate observed time-series data [4]. Unfortunately, this technique has certain drawbacks: the variable correlations remain constant over time, only undirected
edges can be identified, and segment or epoch divisions must be identified a priori.
In the continuous domain, some research has focused on learning the structure of a time-varying
Gaussian graphical model [5] with a reversible-jump MCMC approach to estimate the time-varying
variance structure of the data. However, some limitations of this method include: the network
evolution is restricted to changing at most a single edge at a time and the total number of segments is
assumed known a priori. A similar algorithm, also based on Gaussian graphical models, iterates
between a convex optimization for determining the graph structure and a dynamic programming
algorithm for calculating the segmentation [6]. This approach is fast, has no single edge change
restriction, and the number of segments is calculated a posteriori; however, it does require that the
graph structure is decomposable. Additionally, both of the aforementioned approaches only identify
undirected edges and assume that the networks in each segment are independent, preventing data
and parameters from being shared between segments.
2 Brief review of structure learning of Bayesian networks
Bayesian networks are directed acyclic graphical models that represent conditional dependencies
between variables as edges. They define a simple decomposition of the complete joint distribution: a variable is conditionally independent of its non-descendants given its parents. Therefore, the joint distribution over the variables $x_i$ can be rewritten as $\prod_i P(x_i \mid \pi_i, \theta_i)$, where $\pi_i$ are the parents of $x_i$, and $\theta_i$ parameterizes the conditional probability distribution between a variable and its parents.
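As a concrete illustration of this factorization (a hypothetical three-variable network of our own, not an example from the paper), suppose $x_1$ and $x_2$ have no parents and both are parents of $x_3$; then
$$P(x_1, x_2, x_3) = P(x_1 \mid \theta_1)\, P(x_2 \mid \theta_2)\, P(x_3 \mid x_1, x_2, \theta_3),$$
with $\pi_3 = \{x_1, x_2\}$ and $\pi_1 = \pi_2 = \emptyset$.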
The posterior probability of a given network $G$ (i.e., the set of conditional dependencies) after having observed data $D$ is estimated via Bayes' rule: $P(G \mid D) \propto P(D \mid G)\,P(G)$. The structure prior $P(G)$
can be used to incorporate prior knowledge about the network structure, either about the existence
of specific edges or the topology more generally (e.g., sparse); if prior information is not available,
this is often assumed uniform. The marginal likelihood $P(D \mid G)$ can be computed exactly, given a conjugate prior for $\theta_i$. When the $\theta_i$ are independent and multinomially distributed, a Dirichlet conjugate prior is used, and the data are complete, the exact solution for the marginal likelihood
is the Bayesian-Dirichlet equivalent (BDe) metric [7]. Since we will be modifying it later in this
paper, we show the expression for the BDe metric here:
$$P(D \mid G) = \prod_{i=1}^{n} \prod_{j=1}^{q_i} \frac{\Gamma(\alpha_{ij})}{\Gamma(\alpha_{ij} + N_{ij})} \prod_{k=1}^{r_i} \frac{\Gamma(\alpha_{ijk} + N_{ijk})}{\Gamma(\alpha_{ijk})} \qquad (1)$$
where $q_i$ is the number of configurations of the parent set $\pi_i$, $r_i$ is the number of discrete states of variable $x_i$, $N_{ij} = \sum_{k=1}^{r_i} N_{ijk}$, $N_{ijk}$ is the number of times $X_i$ took on the value $k$ given the parent configuration $j$, and $\alpha_{ij}$ and $\alpha_{ijk}$ are Dirichlet hyper-parameters on various entries in $\theta$. If $\alpha_{ijk}$ is set everywhere to $\alpha/(q_i r_i)$, we get a special case of the BDe metric: the uniform BDe metric (BDeu).
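Because Equation (1) factorizes over nodes, the score is typically computed in log space one family (node plus parent set) at a time, using log-gamma functions for numerical stability. The following is a minimal sketch of that per-family computation under the uniform BDeu prior; the function name and the NumPy/SciPy usage are our own illustration, not code from the paper.

```python
import numpy as np
from scipy.special import gammaln

def family_log_bde(counts, alpha=1.0):
    """Log BDe contribution of one node under one parent set.

    counts[j, k] = N_ijk: the number of times the node took value k while
    its parents were in configuration j. Under the uniform BDeu prior the
    pseudocounts are alpha_ijk = alpha / (q * r) and alpha_ij = alpha / q.
    """
    q, r = counts.shape
    a_ijk = alpha / (q * r)
    a_ij = alpha / q
    n_ij = counts.sum(axis=1)  # N_ij = sum_k N_ijk
    score = np.sum(gammaln(a_ij) - gammaln(a_ij + n_ij))
    score += np.sum(gammaln(a_ijk + counts)) - counts.size * gammaln(a_ijk)
    return float(score)

# Toy family: two parent configurations, a binary child.
counts = np.array([[8.0, 2.0],
                   [1.0, 9.0]])
print(family_log_bde(counts))
```

The full log score of a network is then the sum of this quantity over all nodes, with one call per (node, parent set) family.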
Given a metric for evaluating the marginal likelihood $P(D \mid G)$, a technique for finding the best network(s) must be chosen. Heuristic search methods (e.g., simulated annealing, greedy hill-climbing)
may be used to find a best network or set of networks. Alternatively, sampling methods may be
used to estimate a posterior over all networks [8]. If the best network is all that is desired, heuristic searches will typically find it more quickly than sampling techniques. In settings where many
modes are expected, sampling techniques will more accurately capture posterior probabilities regarding various properties of the network.
Finally, once a search or sampling strategy has been selected, we must determine how to move
through the space of all networks. A move set defines a set of local traversal operators for moving
from a particular state (i.e., a network) to nearby states. Ideally, the move set includes changes that
allow posterior modes to be frequently visited. For example, it is reasonable to assume that networks
that differ by a single edge will have similar likelihoods. A well designed move set results in fast
convergence since less time is spent in the low probability regions of the state space. For Bayesian
networks, the move set is often chosen to be {add an edge, delete an edge, and reverse an edge} [8].
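As an illustration only (the paper gives no code for its move set), a single proposal step over edge sets might be sketched as:

```python
import random

def propose_move(edges, nodes):
    """Propose a neighboring network via one random structural change.

    edges: set of directed edges (u, v); nodes: list of node names.
    Acyclicity of the proposal still has to be checked by the caller.
    """
    move = random.choice(["add", "delete", "reverse"])
    new_edges = set(edges)
    if move == "add":
        u, v = random.sample(nodes, 2)
        new_edges.add((u, v))
    elif move == "delete" and new_edges:
        new_edges.remove(random.choice(sorted(new_edges)))
    elif move == "reverse" and new_edges:
        u, v = random.choice(sorted(new_edges))
        new_edges.discard((u, v))
        new_edges.add((v, u))
    return new_edges
```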
DBNs are an extension of Bayesian networks to time-series data, enabling cyclic dependencies
between variables to be modeled across time. Structure learning of DBNs is essentially the same
as described above, except that modeling assumptions are made regarding how far back in time one
variable can depend on another (minimum and maximum lag), and constraints need to be placed
on edges so that they do not go backwards in time. For notational simplicity, we assume hereafter
that the minimum and maximum lag are both 1. More detailed reviews of structure learning can be
found in [9, 10].
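Concretely, with both lags equal to 1, each observation is explained by candidate parents one time step earlier; a minimal sketch of that data layout (our own illustration) is:

```python
import numpy as np

def lagged_pairs(series):
    """Split an (N, n) multivariate time-series into (parents, children).

    Row t of `parents` holds the values at time t that may explain the
    values in row t of `children` (time t + 1); edges therefore never go
    backwards in time by construction.
    """
    series = np.asarray(series)
    return series[:-1], series[1:]
```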
3 Learning non-stationary dynamic Bayesian networks
We would like to extend the dynamic Bayesian network model to account for non-stationarity. In
this section, we detail how the structure learning procedure for DBNs must be changed to account
for non-stationarity when learning non-stationary DBNs (nsDBNs).
Assume that we observe the state of $n$ random variables at $N$ discrete times. Call this multivariate time-series data $D$, and further assume that it is generated according to a non-stationary process,
which is unknown. The process is non-stationary in the sense that the network of conditional dependencies prevailing at any given time is itself changing over time. We call the initial network of conditional dependencies $G_1$; subsequent networks are called $G_i$ for $i = 2, 3, \dots, m$. We define $\Delta g_i$ to be the set of edges that change (either added or deleted) between $G_i$ and $G_{i+1}$. The number of edge changes specified in $\Delta g_i$ is $S_i$. We define the transition time $t_i$ to be the time at which $G_i$ is replaced by $G_{i+1}$ in the data-generation process. We call the period of time between consecutive transition times (during which a single network of conditional dependencies is operative) an epoch. So we say that $G_1$ prevails during the first epoch, $G_2$ prevails during the second epoch, and
so forth. We will refer to the entire series of prevailing networks as the structure of the nsDBN.
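One concrete way to encode such a structure in code (our own illustrative representation; the paper specifies no implementation) stores $G_1$, the edge-change sets, and the transition times, with each $\Delta g_i$ applied as a toggle:

```python
from dataclasses import dataclass, field

@dataclass
class NsDBNStructure:
    g1: set                                     # edges of the initial network G1
    deltas: list = field(default_factory=list)  # [Delta g_1, ..., Delta g_{m-1}], each a set of edges
    times: list = field(default_factory=list)   # transition times [t_1, ..., t_{m-1}]

    def network_at(self, t):
        """Edge set of the network prevailing at observation time t.

        Each Delta g_i is applied as a symmetric difference: edges listed
        in the set are deleted if present and added if absent, matching
        the add-or-delete semantics of an edge-change set.
        """
        edges = set(self.g1)
        for t_i, delta in zip(self.times, self.deltas):
            if t < t_i:
                break
            edges ^= delta  # toggle each listed edge
        return edges
```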
Since we wish to learn a set of networks instead of one network, we must derive a new expression
for the marginal likelihood. Assume that there exist $m$ different epochs with $m - 1$ transition times $T = \{t_1, \dots, t_{m-1}\}$. The network $G_{i+1}$ prevailing in epoch $i + 1$ differs from network $G_i$ prevailing in epoch $i$ by a set of edge changes we call $\Delta g_i$. We would like to determine the sequence
of networks G1, ..., Gm that maximize the posterior:

P(G_1, \ldots, G_m \mid D, T) \propto P(D \mid G_1, \ldots, G_m, T)\, P(G_1, \ldots, G_m)    (2)

= P(D \mid G_1, \Delta g_1, \ldots, \Delta g_{m-1}, T)\, P(G_1, \Delta g_1, \ldots, \Delta g_{m-1})    (3)

= P(D \mid G_1, \Delta g_1, \ldots, \Delta g_{m-1}, T)\, P(G_1)\, P(\Delta g_1, \ldots, \Delta g_{m-1})    (4)
We assume the prior over networks can be further split into independent components describing the
initial network and subsequent edge changes, as demonstrated in Equation (4). As in the stationary
setting, if prior knowledge about particular edges or overall topology is available, an informative
prior can be placed on G1 . In the results reported here, we assume this to be uniform. We do,
however, place some prior assumptions on the ways in which edges change in the structure. First,
we assume that the networks evolve smoothly over time. To encode this prior knowledge, we place an exponential prior with rate λs on the total number of edge changes s = Σi Si. We also assume
that the networks evolve slowly over time (i.e., a transition does not occur at every observation) by placing another exponential prior with rate λm on the number of epochs m. The updated posterior
for an nsDBN structure is given as:
P(G_1, \Delta g_1, \ldots, \Delta g_{m-1} \mid D, T) \propto P(D \mid G_1, \Delta g_1, \ldots, \Delta g_{m-1}, T)\, e^{-\lambda_s s}\, e^{-\lambda_m m}
To evaluate the new likelihood, we choose to extend the BDe metric because after the parameters
have been marginalized away, edges are the only representation of conditional dependencies that
are left; this provides a useful definition of non-stationarity that is both simple to define and easy
to analyze. We will assume that any other sources of non-stationarity are either small enough to
not alter edges in the predicted network or large enough to be approximated by edge changes in the
predicted network.
In Equation (1), Nij and Nijk are calculated for a particular parent set over the entire dataset D.
However, in an nsDBN, a node may have multiple parent sets operative at different times. The
calculation for Nij and Nijk must therefore be modified to specify the intervals during which each
parent set is operative. Note that an interval may be defined over several epochs. Specifically, an
epoch is defined between adjacent transition times while an interval is defined over the epochs during
which a particular parent set is operative (which may include all epochs).
For each node i, the previous parent set πi in the BDe metric is replaced by a set of parent sets πih, where h indexes the interval Ih during which parent set πih is operative for node i. Let pi be the number of such intervals and let qih be the number of configurations of πih. Then we can write:
P(D \mid G_1, \ldots, G_m, T) \propto \prod_{i=1}^{n} \prod_{h=1}^{p_i} \prod_{j=1}^{q_i^h} \frac{\Gamma(\alpha_{ij}(I_h))}{\Gamma(\alpha_{ij}(I_h) + N_{ij}(I_h))} \prod_{k=1}^{r_i} \frac{\Gamma(\alpha_{ijk}(I_h) + N_{ijk}(I_h))}{\Gamma(\alpha_{ijk}(I_h))}    (5)
where the counts Nijk and pseudocounts αijk have been modified to apply only to the data in each interval Ih. The modified BDe metric will be referred to as nsBDe. We have chosen to set αijk(Ih) = (αijk |Ih|)/N (i.e., proportional to the length of the interval during which that particular parent set is operative).
We use a sampling approach rather than heuristic search because the posterior over structures includes many modes. Additionally, sampling allows us to answer questions like "what are the most likely transition times?", a question that would be difficult to answer in the context of heuristic search.
Because the number of possible nsDBN structures is so large (significantly greater than the number
of possible DBNs), we must be careful about what options are included in the move set. To achieve
quick convergence, we want to ensure that every move in the move set efficiently jumps between
posterior modes. Therefore, the majority of the next section is devoted to describing effective move
sets under different levels of uncertainty.
4 Different settings regarding the number and times of transitions
An nsDBN can be identified under a variety of settings that differ in the level of uncertainty about
the number of transitions and whether the transition times are known. The different settings are
abbreviated according to the type of uncertainty: whether the number of transitions is known (KN)
or unknown (UN) and whether the transition times themselves are known (KT) or unknown (UT).
When the number and times of transitions are known a priori (KNKT setting), we only need to
identify the most likely initial network G1 and sets of edge changes Δg1, ..., Δgm−1. Thus, we wish
to maximize Equation (4).
To create a move set that results in an effectively mixing chain, we consider which types of local
moves result in jumps between posterior modes. As mentioned earlier, structures that differ by
a single edge will probably have similar likelihoods. Additionally, structures that have slightly
different edge change sets will have similar likelihoods. The add edge, remove edge, add to edge
set, remove from edge set, and move from edge set moves are listed as (M1)–(M5) in Table 1 in
the Appendix.
Knowing in advance the times at which all the transitions occur is often unrealistic. When the
number of transitions is known but the times are unknown a priori (KNUT setting), the transition
times T must also be estimated a posteriori.
Figure 1: Structure learning of nsDBNs under several settings. A. True non-stationary data-generation process. Under the KNKT setting, the recovered structure is exactly this one. B. Under the KNUT setting, the algorithm learns the model-averaged nsDBN structure shown. C. Posterior probabilities of transition times when learning an nsDBN in the UNUT setting (with λs = 1 and λm = 5). The blue triangles represent the true transition times and the red dots represent one standard deviation from the mean probability obtained from several runs. D. Posterior probabilities of the number of epochs.
Structures with the same edge sets but slightly different transition times will probably have similar likelihoods. Therefore, we can add a new move that proposes a local shift to one of the transition times: let d be some small positive integer and let the new time t′i be drawn from a discrete uniform distribution t′i ~ DU(ti − d, ti + d), with the constraint that ti−1 < t′i < ti+1. Initially, we set the m − 1 transition times so that the epochs are roughly equal in length. The complete move set for this setting includes all of the moves described previously as well as the new local shift move, listed as (M6) in Table 1 in the Appendix.
Finally, when the number and times of transitions are unknown (UNUT setting), both m and T must
be estimated. While this is the most interesting setting, it is also the most difficult since one of
the unknowns is the number of unknowns. Using the reversible jump Markov chain Monte Carlo
sampling technique [11], we can further augment the move set to allow for the number of transitions
to change. Since the number of epochs m is allowed to vary, this is the only setting that incorporates
the prior on m.
To allow the number of transitions to change during sampling, we introduce merge and split operations to the move set. For the merge operation, two adjacent edge sets (Δgi and Δgi+1) are combined to create a new edge set. The transition time of the new edge set is selected to be the mean of the previous locations weighted by the size of each edge set: t′i = (Si ti + Si+1 ti+1)/(Si + Si+1). For the split operation, an edge set Δgi is randomly chosen and randomly partitioned into two new edge sets Δg′i and Δg′i+1, with all subsequent edge sets re-indexed appropriately. Each new transition time is selected as described above. The move set is completed with the inclusion of the add transition time and delete transition time operations. These moves are similar to the split and merge operations except they also increase or decrease s, the total number of edge changes in the structure. The four additional moves are listed as (M7)–(M10) in Table 1 in the Appendix; the merge and split bookkeeping is sketched below.
5 Results on simulated data
To evaluate the effectiveness of our method, we first apply it to a small, simulated dataset. The
first experiment is on a simulated ten-node network with six single-edge changes between seven epochs, where the length of each epoch varies between 20 and 400 observations. The true network
is shown in Figure 1A. For each of the three settings, we generate ten individual datasets and then
collect 250,000 samples from each, with the first 50,000 samples thrown out for burn-in. We repeat
the sample collection 25 times for each dataset to obtain variance estimates on posterior quantities
of interest. The sample collection takes about 25 seconds for each dataset on a 3.6GHz dual-core
Intel Xeon machine with 4 GB of RAM, but all runs can easily be executed in parallel. To obtain a
consensus (model averaged) structure prediction, an edge is considered present at a particular time
if the posterior probability of the edge is greater than 0.5.
In the KNKT setting, the sampler rapidly converges to the correct solution. The value of λm has no effect in this setting, and the value of λs is varied between 0.1 and 50. The predicted structure is identical to the true structure shown in Figure 1A for a broad range of values: 0.5 ≤ λs ≤ 10.0,
indicating robust and accurate learning.
In the KNUT setting, transition times are unknown and must be estimated a posteriori. The value
of λm still has no effect in this setting and the value of λs is again varied between 0.1 and 50. The predicted consensus structure is shown in Figure 1B for λs = 5.0; this choice of λs provides the
most accurate predictions.
The estimated structure and transition times are very close to the truth. All edges are correct, with
the exception of two missing edges in G1 , and the predicted transition times are all within 10 of the
true transition times. We also discovered that the convergence rate under the KNUT and the KNKT
settings were very similar for a given m. This implies that the posterior over transition times is quite
smooth; therefore, the mixing rate is not greatly affected when sampling transition times. Finally,
we consider the UNUT setting, when the number and times of transitions are both unknown.
We use the range 1 ≤ λs ≤ 5 because we know from the previous settings that the most accurate solutions were obtained from a prior within this range; the range 1 ≤ λm ≤ 50 is selected to provide
a wide range of estimates for the prior since we have no a priori knowledge of what it should be.
We can examine the posterior probabilities of transition times over all sampled structures, shown
in Figure 1C. Highly probable transition times correspond closely with the true transition times
indicated by blue triangles; nevertheless, some uncertainty exists about the exact locations of t3
and t4 since the fourth epoch is exceedingly short. We can also examine the posterior number of
epochs, shown in Figure 1D. The most probable posterior number of epochs is six, close to the true
number of seven.
To identify the best parameter settings for λs and λm, we examine the best F1-measure (the harmonic mean of the precision and recall) for each. The best F1-measure of 0.992 is obtained when λs = 5 and λm = 1, although nearly all choices result in an F1-measure above 0.90 (see Appendix).
To evaluate the scalability of our technique, we also simulated data from a 100 variable network
with an average of fifty edges over five epochs spanning 4800 observations, with one to three edges
changing between each epoch. Learning nsDBNs on these data for λs ∈ {1, 2, 5} and λm ∈ {2, 3, 5} results in F1-measures above 0.93, with the λs = 1 and λm = 5 assignment being best for this data, with an F1-measure of 0.953.
6 Results on Drosophila muscle development gene expression data
We also apply our method to identify non-stationary networks using Drosophila development gene
expression data from [12]. This data contains expression measurements over 66 time steps of 4028
Drosophila genes throughout development and growth during the embryonic, larval, pupal, and adult
stages of life. Using a subset of the genes involved in muscle development, some researchers have
identified a single directed network [13], while others have learned a time-varying undirected network [4]. To facilitate comparison with as many existing methods as possible, we apply our method
to the same data. Unfortunately, no other techniques predict non-stationary directed networks, so
our prediction in Figure 2C is compared to the stationary directed network in Figure 2A and the
non-stationary undirected network in Figure 2B.
While all three predictions share many edges, certain similarities between our prediction and one
or both of the other two predictions are of special interest. In all three predictions, a cluster seems
to form around myo61f, msp-300, up, mhc, prm, and mlc1. All of these genes except up are in the
Figure 2: Learning nsDBNs from the Drosophila muscle development data. A. The directed network reported by [13]. B. The undirected networks reported by [4]. C. The nsDBN structure learned under the KNKT setting with λs = 2.0. Only the edges that occurred in greater than 50 percent of the samples are shown, with thicker edges representing connections that occurred more frequently. D. Posterior probabilities of transition times using λm = λs = 2 under the UNUT setting. Blue triangles represent the borders of embryonic, larval, pupal, and adult stages. E. Posterior probability of the number of epochs under the UNUT setting.
myosin family, which contains genes involved in muscle contraction. Within the directed predictions,
msp-300 primarily serves as a hub gene that regulates the other myosin family genes. It is interesting
to note that the undirected method predicts connections between mlc1, prm, and mhc while neither
directed method make these predictions. Since msp-300 seems to serve as a regulator to these genes,
the method from [4] may be unable to distinguish between direct interactions and correlations due
to its undirected nature.
Despite the similarities, some notable differences exist between our prediction and the other two
predictions. First, we predict interactions from myo61f to both prm and up, neither of which is
predicted in the other methods, suggesting a greater role for myo61f during muscle development.
Also, we do not predict any interactions with twi. During muscle development in Drosophila, twi
acts as a regulator of mef2 that in turn regulates some myosin family genes, including mlc1 and
mhc [14]; our prediction of no direct connection from twi mirrors this biological behavior. Finally,
we note that in our predicted structure, actn never connects as a regulator (parent) to any other
genes, unlike in the network in Figure 2A. Since actn (actinin) only binds actin, we do not expect it
to regulate other muscle development genes, even indirectly.
We can also look at the posterior probabilities of transition times and epochs under the UNUT
setting. These plots are shown in Figure 2D and 2E, respectively. The transition times with high
posterior probabilities correspond well to the embryonic-to-larval and the larval-to-pupal transitions, but a posterior peak occurs well before the supposed time of the pupal-to-adult transition; this reveals
that the gene expression program governing the transition to adult morphology is active well before
the fly emerges from the pupa, as would clearly be expected. Also, we see that the most probable
number of epochs is three or four, mirroring closely the total number of developmental stages.
Since we could not biologically validate the fly network, we generated a non-stationary time-series
with the same number of nodes and a similar level of connectivity, to evaluate the accuracy of a recovered nsDBN on a problem of exactly this size. We generated data from an nsDBN with 66
observations and transition times at 30, 40, and 58 to mirror the number of observations in embryonic, larval, pupal, and adult stages of the experimental fly data. Since it is difficult to estimate the
amount of noise in the experimental data, we simulated noise at 1:1 to 4:1 signal-to-noise ratios.
Finally, since many biological processes have more variables than observations, we examined the
effect of increasing the number of experimental replicates. We found that the best F1-measures
(greater than 0.75 across all signal-to-noise ratios and experimental replicates) were obtained when
λm = λs = 2, which is why we used those values to analyze the Drosophila muscle network data.
7 Discussion
Non-stationary dynamic Bayesian networks provide a useful framework for learning Bayesian networks when the generating processes are non-stationary. Using the move sets described in this
paper, nsDBN learning is efficient even for networks of 100 variables, generalizable to situations of
varying uncertainty (KNKT, KNUT, and UNUT), and the predictions are stable over many choices
of hyper-parameters. Additionally, by using a sampling-based approach, our method allows us to
assess a confidence for each predicted edge, an advantage that neither [13] nor [4] shares.
We have demonstrated the feasibility of learning an nsDBN in all three settings using simulated data,
and in the KNKT and UNUT settings using real biological data. Although the predicted fly muscle
development networks are difficult to verify, simulated experiments of a similar scale demonstrate
highly accurate predictions, even with noisy data and few replicates.
Non-stationary DBNs offer all of the advantages of DBNs (identifying directed non-linear interactions between multivariate time-series) and are additionally able to identify non-stationarities in the
interactions between time-series. In future work, we hope to analyze data from other fields that
have traditionally used dynamic Bayesian networks and instead use nsDBNs to identify and model
previously unknown or uncharacterized non-stationary behavior.
References
[1] Nir Friedman, Michal Linial, Iftach Nachman, and Dana Pe'er. Using Bayesian networks to analyze expression data. In RECOMB 4, pages 127–135. ACM Press, 2000.
[2] V. Anne Smith, Jing Yu, Tom V. Smulders, Alexander J. Hartemink, and Erich D. Jarvis. Computational inference of neural information flow networks. PLoS Computational Biology, 2(11):1436–1449, 2006.
[3] Steve Hanneke and Eric P. Xing. Discrete temporal models of social networks. In Workshop on Statistical Network Analysis, ICML 23, 2006.
[4] Fan Guo, Steve Hanneke, Wenjie Fu, and Eric P. Xing. Recovering temporally rewiring networks: A model-based approach. In ICML 24, 2007.
[5] Makram Talih and Nicolas Hengartner. Structural learning with time-varying components: Tracking the cross-section of financial time series. Journal of the Royal Statistical Society B, 67(3):321–341, 2005.
[6] Xiang Xuan and Kevin Murphy. Modeling changing dependency structure in multivariate time series. In ICML 24, 2007.
[7] David Heckerman, Dan Geiger, and David Maxwell Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20(3):197–243, 1995.
[8] Claudia Tarantola. MCMC model determination for discrete graphical models. Statistical Modelling, 4(1):39–61, 2004.
[9] P. Krause. Learning probabilistic networks. The Knowledge Engineering Review, 13(4):321–351, 1998.
[10] Kevin Murphy. Learning Bayesian network structure from sparse data sets. Technical Report 990, Computer Science Department, University of California at Berkeley, 2001.
[11] Peter J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4):711–732, 1995.
[12] M. Arbeitman, E. Furlong, F. Imam, E. Johnson, B. Null, B. Baker, M. Krasnow, M. Scott, R. Davis, and K. White. Gene expression during the life cycle of Drosophila melanogaster. Science, 297(5590):2270–2275, 2002.
[13] Wentao Zhao, Erchin Serpedin, and Edward R. Dougherty. Inferring gene regulatory networks from time series data using the minimum description length principle. Bioinformatics, 22(17):2129–2135, 2006.
[14] T. Sandmann, L. Jensen, J. Jakobsen, M. Karzynski, M. Eichenlaub, P. Bork, and E. Furlong. A temporal map of transcription factor activity: mef2 directly regulates target genes at all stages of muscle development. Developmental Cell, 10(6):797–807, 2006.
Conditional Random Sampling
Ping Li
Dept. of Statistical Science
Cornell University
[email protected]
Kenneth W. Church
Microsoft Research
Microsoft Corporation
[email protected]
Trevor J. Hastie
Dept. of Statistics
Stanford University
[email protected]
Abstract
Conditional Random Sampling (CRS) was originally proposed for efficiently
computing pairwise (l2 , l1 ) distances, in static, large-scale, and sparse data. This
study modifies the original CRS and extends CRS to handle dynamic or streaming data, which much better reflect the real-world situation than assuming static
data. Compared with many other sketching algorithms for dimension reductions
such as stable random projections, CRS exhibits a significant advantage in that it
is ?one-sketch-for-all.? In particular, we demonstrate the effectiveness of CRS in
efficiently computing the Hamming norm, the Hamming distance, the lp distance,
and the ?2 distance. A generic estimator and an approximate variance formula are
also provided, for approximating any type of distances.
We recommend CRS as a promising tool for building highly scalable systems, in
machine learning, data mining, recommender systems, and information retrieval.
1 Introduction
Learning algorithms often assume a data matrix A ∈ ℝ^{n×D} with n observations and D attributes
and operate on the data matrix A through pairwise distances. The task of computing and maintaining
distances becomes non-trivial when the data (both n and D) are large and possibly dynamic.
For example, if A denotes a term-doc matrix at Web scale with each row representing one Web page,
then n ≈ O(10^10) (which may be verified by querying "A" or "The" in a search engine). Assuming 10^5 English words, the simplest uni-gram model requires the dimension D ≈ O(10^5); and a bi-gram model can boost the dimension to D ≈ O(10^10). The Google book search program currently provides
data sets on indexed digital books up to five-grams. Note that the term-doc matrix is ?transposable,?
meaning that one can treat either documents or terms as features, depending on applications.
Another example is the image data. The Caltech 256 benchmark contains n = 30, 608 images,
provided by two commercial firms. Using pixels as features, a 1024 × 1024 color image can be represented by a vector of dimension D = 1024² × 3 = 3,145,728. Using histogram-based features (e.g., [3]), D = 256³ = 16,777,216 is possible if one discretizes the RGB space into 256³ scales.
Text data are large and sparse, as most terms appear only in a small fraction of documents. For
example, a search engine reports 10^7 pagehits for the query "NIPS," which is not common to the general audience. Out of 10^10 pages, 10^7 pagehits indicate a sparsity of 99.9%. (We define sparsity as the percentage of zero elements.) In absolute magnitude, however, 10^7 is actually very large.
Not all large-scale data are sparse. Image data are usually sparse when features are represented by
histograms; they are, however, dense when pixel-based features are used.
1.1 Pairwise Distances Used in Machine Learning
The lp distance and the χ² distance are both popular. Denote by u1 and u2 the leading two rows in A ∈ ℝ^{n×D}. The lp distance (raised to the pth power) and the χ² distance are, respectively,

d_p(u_1, u_2) = \sum_{i=1}^{D} |u_{1,i} - u_{2,i}|^p, \qquad d_{\chi^2}(u_1, u_2) = \sum_{i=1}^{D} \frac{(u_{1,i} - u_{2,i})^2}{u_{1,i} + u_{2,i}} \quad \left(\tfrac{0}{0} = 0\right).
The χ² distance is only a special case of Hilbertian metrics, defined as

d_{H,\alpha,\beta}(u_1, u_2) = \sum_{i=1}^{D} \frac{2^{1/\beta}\left(u_{1,i}^{\alpha} + u_{2,i}^{\alpha}\right)^{1/\alpha} - 2^{1/\alpha}\left(u_{1,i}^{\beta} + u_{2,i}^{\beta}\right)^{1/\beta}}{2^{1/\beta} - 2^{1/\alpha}}, \qquad \alpha \in [1, \infty),\ \beta \in [1/2, \alpha] \text{ or } \beta \in [-\infty, -1].
Hilbertian metrics are defined over probability spaces [7] and are hence suitable for data generated from histograms, e.g., the "bag-of-words" model. For applications in text and images using SVM, empirical studies have demonstrated the superiority of Hilbertian metrics over lp distances [3, 7, 9].
More generally, we are interested in any linear summary statistics which can be written in the form:
d_g(u_1, u_2) = \sum_{i=1}^{D} g(u_{1,i}, u_{2,i}),    (1)
for any generic function g. An efficient method for computing (1) for any g would be desirable.
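For concreteness, a minimal NumPy rendering of these distances for dense vectors is sketched below; the function names are ours, and the 0/0 := 0 convention for the χ² distance is handled explicitly.

```python
import numpy as np

def d_p(u1, u2, p):
    """l_p distance raised to the p-th power, as in Section 1.1."""
    return np.sum(np.abs(u1 - u2) ** p)

def d_chi2(u1, u2):
    """Chi-square distance with the 0/0 := 0 convention."""
    num, den = (u1 - u2) ** 2, u1 + u2
    return np.sum(np.divide(num, den,
                            out=np.zeros_like(num, dtype=float),
                            where=den != 0))

def d_g(u1, u2, g):
    """Generic linear summary statistic of Equation (1)."""
    return sum(g(a, b) for a, b in zip(u1, u2))
```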
1.2 Bottleneck in Distance/Kernel-based Learning Algorithms
A ubiquitous task in learning is to compute, store, update, and retrieve various types of distances[17].
For popular kernel SVM solvers including the SMO algorithm[16], storing and computing kernels
is the major bottleneck[2], because computing kernels is expensive, and more seriously, storing the
full kernel matrix in memory is infeasible when the number of observations n > 105 .
One popular strategy is to evaluate kernels on the fly[2]. This works well in low-dimensional data
(i.e., relatively small D). With high-dimensional data, however, either computing distances ondemand becomes too slow or the data matrix A ? Rn?D itself may not fit in memory.
We should emphasize that this challenge is a universal issue in distance-based methods, not limited
to SVMs. For example, popular clustering algorithms and multi-dimensional scaling algorithms
require frequently accessing a (di)similarity matrix, which is usually distance-based.
In addition to computing and storing distances, another general issue is that, for many real-world
applications, entries of the data matrix may be frequently updated, for example, data streams[15].
There have been considerable studies on learning from dynamic data, e.g., [5, 1]. Since streaming
data are often not stored (even on disks), computing and updating distances becomes challenging.
1.3 Contributions and Paper Organization
Conditional Random Sampling (CRS)[12, 13] was originally proposed for efficiently computing
pairwise (l2 and l1 ) distances, in large-scale static data. The contributions of this paper are:
1. We extend CRS to handle dynamic data. For example, entries of a matrix may vary over
time, or the data matrix may not be stored at all. We illustrate that CRS has the one-sketch-for-all property, meaning that the same set of samples/sketches can be used for computing any linear summary statistics (1). This is a significant advantage over many other dimension reduction or data stream algorithms. For example, the method of stable random projections (SRP) [8, 10, 14] was designed for estimating the lp norms/distances for a fixed p with 0 < p ≤ 2. Recently, a new method named Compressed Counting [11] is able to very efficiently approximate the lp moments of data streams when p ≈ 1.
2. We introduce a modification to the original CRS and theoretically justify that this modification makes CRS rigorous, at least for computing the Hamming norm, an important
application in databases. We point out the original CRS was based on a heuristic argument.
3. We apply CRS for computing Hilbertian metrics [7], a popular family of distances for constructing kernels in SVM. We focus on a special case, by demonstrating that CRS is effective in approximating the χ² distance.
focuses on using CRS to estimate the Hamming norm of a single vector, based on which Section 5
provides a generic estimation procedure for CRS, for estimating any linear summary statistics, with
the focus on the Hamming distance and the ?2 distance. Finally, Section 6 concludes the paper.
2 Conditional Random Sampling (CRS), the Original Version
Conditional Random Sampling (CRS)[12, 13] is a local sampling strategy. Since distances are local
(i.e., one pair at a time), there is no need to consider the whole matrix at one time.
As the first step, CRS applies a random permutation on the columns of A ∈ ℝ^{n×D}. Figure 1(a)
provides an example of a column-permuted data matrix. The next step of CRS is to construct a
sketch for each row of the data matrix. A sketch can be viewed as a linked list which stores a small
fraction of the non-zero entries from the front of each row. Figure 1(b) demonstrates three sketches
corresponding to the three rows of the (column) permuted data matrix in Figure 1(a).
      1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
u1    5  0  0  1  0  7  0  0  0  8  0  1  0  8  0  2
u2    0  9  2  0  6  0  0  7  0  5  0  0  4  0  0 13
u3    0  4  0  0  2  0  0  0  8  0  0  3  0  0 12  0
(a) Permuted data matrix
K 1 : 1 {5} 4 {1} 6 {7} 10 {8}
K 2 : 2 {9} 3 {2} 5 {6} 8 {7}
K 3 : 2 {4} 5 {2} 9 {8} 12 {3}
(b) Sketches
Figure 1: (a): A data matrix with three rows and D = 16 columns. We assume the columns are
already permuted. (b): Sketches are the first ki non-zero entries ascending by IDs (here ki = 4).
In Figure 1, the sketch for row ui is denoted by Ki. Each element of Ki is a tuple "ID{val}," where "ID" is the column ID after the permutation and "{val}" is the value of that entry. Consider two rows u1 and u2. The last (largest) IDs of sketches K1 and K2 are max(ID(K1)) = 10 and max(ID(K2)) = 8, respectively. Here, "ID(K)" stands for the vector of IDs in the sketch K. It is clear that K1 and K2 contain all information about u1 and u2 from columns 1 to min(10, 8) = 8.
Had we directly taken the first Ds = 8 columns from the permuted data matrix, we would obtain the
same non-zero entries as in K1 and K2 , if we exclude elements in K1 and K2 whose IDs > Ds = 8.
In this example, the element 10{8} in sketch K1 is excluded.
On the other hand, since the columns are already permuted, any Ds columns constitute a random
sample of size Ds . This means, by only looking at sketches K1 and K2 , one can obtain a ?random?
sample of size Ds . By statistics theory, one can easily obtain an unbiased estimate of any linear
summary statistics from a random sample. Since Ds is unknown until we look at K1 and K2 together,
[13] viewed this as a random sample conditioning on Ds .
Note that the Ds varies pairwise. When considering the rows u1 and u3 , the sketches K1 and K3
suggest their Ds = min(max(ID(K1 )), max(ID(K3 ))) = min(10,12) = 10.
In this study, we point out that, although the "conditioning" argument appeared intuitive, it is only a
(good) heuristic. There are two ways to understand why this argument is not strictly correct.
Consider a true random sample of size Ds , directly obtained from the first Ds columns of the
permuted data matrix. Assuming sparse data, elements at the Ds th column should be most likely
zero. However, in the ?conditional random sample? obtained from CRS, at least one element at the
Ds th column is non-zero. Thus, the estimates of the original CRS are, strictly speaking, biased.
For a more obvious example, we can consider two rows with exactly one non-zero entry in each row
at the same column. The original CRS cannot obtain an unbiased estimate unless Ds = D.
3 CRS for Dynamic Data and Introduction to Stable Random Projections
The original CRS was proposed for static data. In reality, the "data matrix" may be frequently
updated. When data arrive in a streaming fashion, they often will not be stored (even on disks)[15].
Thus, a one-pass algorithm is needed to compute and update distances for training. Learning with
dynamic (or incremental) data has become an active topic of research, e.g., [5, 1].
3.1 Dynamic/Streaming Data
We first consider only one data vector u of length D (viewed as one row in the data matrix). At each time t, there is an input stream s_t = (i_t, I_t), i_t ∈ [1, D], which updates u (denoted by u_t) by

u_t[i_t] = H(u_{t-1}[i_t], I_t),

where I_t is the increment/decrement at time t and H is an updating function. The so-called Turnstile model [15] is extremely popular and assumes a linear updating function H, i.e.,

u_t[i_t] = u_{t-1}[i_t] + I_t.    (2)
For example, u_t[i_t] can represent the number of orders a "user" i has purchased up to time t, where a user may be identified by his/her IP address (i.e., i ∈ [1, D = 2^64]); I_t is the number of orders the user i orders (i.e., I_t > 0) or cancels (i.e., I_t < 0) at time t.
In terms of the data matrix A ∈ ℝ^{n×D}, we can view it as a collection of n data streams.
3.2 CRS for Streaming Data
For each stream u_t, we maintain a sketch K with length (i.e., capacity) k. Each entry of K is a tuple "ID{val}." Initially, all entries are empty. The procedure for sketch construction works as follows:
1. Generate a random permutation π : [1, D] → [1, D].
2. For each s_t = (i_t, I_t), if π[i_t] > max(ID(K)) and the capacity of K is reached, do nothing.
3. Suppose π[i_t] ≤ max(ID(K)) or the capacity of K is not reached. If an entry with ID = π[i_t] does not exist, insert a new entry. Otherwise, update that entry according to H.¹
4. Apply the procedure to each data stream using the same random permutation mapping π.
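A minimal Python rendering of this construction under the Turnstile model is sketched below; the dictionary representation and the eviction rule in step 3 reflect our reading, and all names are ours.

```python
import random

D, k = 2 ** 16, 50
perm = list(range(D))
random.shuffle(perm)                    # the shared permutation pi: [0, D) -> [0, D)

def update_sketch(K, i_t, I_t):
    """Apply one Turnstile update (i_t, I_t) to sketch K (dict: permuted ID -> value)."""
    pid = perm[i_t]
    if len(K) >= k and pid > max(K):    # arrives beyond the sketch's reach: do nothing
        return
    K[pid] = K.get(pid, 0) + I_t        # linear update H
    if len(K) > k:                      # keep only the k smallest permuted IDs
        del K[max(K)]

K1 = {}
for i_t, I_t in [(3, 5), (17, 2), (3, -1)]:   # a toy stream of (i_t, I_t) pairs
    update_sketch(K1, i_t, I_t)
```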
Once sketches are constructed, the estimation procedure will be the same regardless whether the
original data are dynamic or static. Thus, we will use static data to verify some estimators of CRS.
3.3 (Symmetric) Stable Random Projections (SRP)
Since the method of (symmetric) stable random projections (SRP)[8, 10] has become a standard
algorithm for data stream computations, we very briefly introduce SRP for the sake of comparisons.
The procedure of SRP is to multiply the data matrix A ∈ ℝ^{n×D} by a random matrix R ∈ ℝ^{D×k}, whose entries are i.i.d. samples from a standard (symmetric) stable distribution S(p, 1), 0 < p ≤ 2. Consider two rows, u1 and u2, in A. By properties of stable distributions, the projected vectors v1 = R^T u1 and v2 = R^T u2 have i.i.d. stable entries, i.e., for j = 1 to k,

v_{1,j} \sim S\left(p,\ F_p = \sum_{i=1}^{D} |u_{1,i}|^p\right), \qquad v_{1,j} - v_{2,j} \sim S\left(p,\ d_p = \sum_{i=1}^{D} |u_{1,i} - u_{2,i}|^p\right).
Thus, one can estimate an individual norm or distance from k samples. SRP is applicable to dynamic/streaming data, provided the data follow the Turnstile model in (2). Because the Turnstile model is linear and matrix multiplication is also linear, one can compute A R incrementally.
CRS, however, exhibits certain advantages over SRP:
- CRS is "one-sketch-for-all". The same sketch of CRS can approximate any linear summary statistics (1). SRP is limited to the lp norm and distance with 0 < p ≤ 2. One has to conduct SRP 10 times (and store 10 sets of sketches) if 10 different p values are needed.
- CRS allows "term-weighting" in dynamic data. In machine learning, the distances are often computed using weighted data (e.g., √u_{1,i} or log(1 + u_{1,i})), which is critical for good performance. For static data, one can first term-weight the data before applying SRP. For dynamic data, however, there is no way to trace back the original data after projections.
- CRS is not restricted to the Turnstile model.
- CRS is not necessarily less accurate, especially for sparse data or binary data.
4 Approximating Hamming Norms in Dynamic Data
Counting the Hamming norm (i.e., number of non-zeros) in an exceptionally long, dynamic vector
has important applications[4, 15]. For example, if a vector ut records the numbers of items users
have ordered, one meaningful question to ask may be "how many distinct users are there?"
The purpose of this section is three-fold. (1) This is the case in which we can rigorously analyze CRS and
propose a truly unbiased estimator. (2) This analysis brings better insights and more reasonable
estimators for pairs of data vectors. (3) In this case, despite its simplicity, CRS theoretically achieves
similar accuracy as stable random projections (SRP). Empirically, CRS (slightly) outperforms SRP.
¹ We leave it for particular applications to decide whether an entry updated to zero should be discarded or should be kept in the sketch. In reality, this case does not occur often. For example, the most important type of data streams [15] is "insertion-only," meaning that the values will never decrease.
4.1 The Proposed (Unbiased) Estimator and Variance
Suppose we have obtained the sketch K. For example, consider the first row in Figure 1: D = 16, k = 4, and the number of non-zeros f = 7. Lemma 1 (whose proof is omitted) proposes an unbiased estimator of f, denoted by f̂, and a biased estimator based on the maximum likelihood, f̂_mle.

Lemma 1

\hat{f} = \frac{D(k-1)}{Z-1}, \quad Z = \max(\mathrm{ID}(K)), \qquad E(\hat{f}) = f, \quad D - f \geq k > 1,

\mathrm{Var}(\hat{f}) < V_f^U = \frac{(D-f)f}{k-2}\,\frac{f}{D-1} - \frac{f^2 - fD}{D-1}, \qquad (k > 2),

\mathrm{Var}(\hat{f}) > V_f^L = V_f^U - \frac{(k-1)f(f-1)(f-2)D}{(k-2)(k-3)(D-1)(D-2)}, \qquad (k > 3).

Assume f/D is small and k/f is also small; then Var(f̂) = f²/k + O(1/k²).

The maximum likelihood estimator is \hat{f}_{mle} = \frac{k(D+1)}{Z} - 1.
? ?
Note that, since Var(f̂)/f² ≈ 1/k, independent of the data, the estimator f̂ actually has a worst-case complexity bound similar to that of SRP [10], although the precise constant is not easy to obtain.
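Assuming sketch IDs are 1-based as in Figure 1, both estimators of Lemma 1 amount to a few lines (names are ours):

```python
def hamming_norm_estimators(K, D):
    """K: dict of permuted ID -> value (1-based IDs), with |K| = k entries."""
    k, Z = len(K), max(K)               # Z = max(ID(K))
    f_hat = D * (k - 1) / (Z - 1)       # unbiased when D - f >= k > 1
    f_mle = k * (D + 1) / Z - 1         # maximum-likelihood, slightly biased
    return f_hat, f_mle

# Example with sketch K1 from Figure 1 (D = 16, k = 4, true f = 7):
f_hat, f_mle = hamming_norm_estimators({1: 5, 4: 1, 6: 7, 10: 8}, D=16)
```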
4.2 The Approximation Using the Conditioning Argument
D(k?1)
, appears to be the estimator for a hypergeometric
Interestingly, this estimator, f? = max(ID(K))?1
random sample of size Ds = max(ID(K)) ? 1. That is, suppose we randomly pick Ds balls (without replacement) from a pool of D balls and we observe that k 0 balls are red; then a natural (and
unbiased) estimator for the total number of red balls would be DDs k 0 ; here k 0 = k ? 1.
This seems to imply that the "conditioning" argument in the original CRS in Section 2 is "correct"
if we make a simple modification by using the Ds which is the original Ds minus 1. While this is
what we will recommend as the modified CRS, it is only a close approximation.
Consider f̂_app = f̂, where we assume f̂_app is the estimator for the hypergeometric distribution; then

\mathrm{Var}\left(\hat{f}_{app} \mid D_s = Z - 1\right) = \frac{D^2}{D_s^2}\, D_s \frac{f}{D}\left(1 - \frac{f}{D}\right) \frac{D - D_s}{D - 1} = \frac{D}{D-1}\left(\frac{D}{D_s} - 1\right) f \left(1 - \frac{f}{D}\right),

\mathrm{Var}\left(\hat{f}_{app}\right) = E\left[\mathrm{Var}\left(\hat{f}_{app} \mid D_s\right)\right] = \frac{D}{D-1}\, f \left(1 - \frac{f}{D}\right) E\left[\frac{D}{Z-1} - 1\right] = \frac{D}{D-1}\left(\frac{f}{k-1} - 1\right) f \left(1 - \frac{f}{D}\right).    (3)

4.3 Comparisons with Stable Random Projections (SRP)
Based on the observation that f = \lim_{p \to 0+} \sum_{i=1}^{D} |u_i|^p, [4] proposed using SRP to approximate the lp norm with very small p, as an approximation to f. For p → 0+, the recent work for SRP [10] proposed the harmonic mean estimator. Recall that after projections v = R^T u ∈ ℝ^k consists of i.i.d. stable samples with scale parameter F_p = \sum_{i=1}^{D} |u_i|^p. The harmonic mean estimator is
\hat{F}_{p,hm} = \frac{\pi/2}{\Gamma(-p)\sin\left(\frac{\pi}{2}p\right)} \cdot \frac{k - \left(\frac{-\pi \Gamma(-2p)\sin(\pi p)}{\left[\Gamma(-p)\sin\left(\frac{\pi}{2}p\right)\right]^2} - 1\right)}{\sum_{j=1}^{k} |v_j|^{-p}},

\mathrm{Var}\left(\hat{F}_{p,hm}\right) = \frac{F_p^2}{k}\left(\frac{-\pi \Gamma(-2p)\sin(\pi p)}{\left[\Gamma(-p)\sin\left(\frac{\pi}{2}p\right)\right]^2} - 1\right) + O\left(\frac{1}{k^2}\right),

\lim_{p \to 0+} \frac{\pi/2}{\Gamma(-p)\sin\left(\frac{\pi}{2}p\right)} \to 1, \qquad \lim_{p \to 0+} \left(\frac{-\pi \Gamma(-2p)\sin(\pi p)}{\left[\Gamma(-p)\sin\left(\frac{\pi}{2}p\right)\right]^2} - 1\right) \to 1.

Denote this estimator by f̂_srp (using p as small as possible), whose variance is Var(f̂_srp) ≈ f²/k, which is roughly equivalent to the variance of f̂, the unbiased estimator for CRS.
We empirically compared CRS with SRP. Four word vectors were selected; entries of each vector record the numbers of occurrences of the word in D = 2^16 Web pages. The data are very heavy-tailed. The percentage of zero elements (i.e., sparsity) varies from 58% to 95%.
Figure 2 presents the comparisons. (1): It is possible that CRS may outperform SRP non-negligibly. (2): The variance (3) based on the approximate "conditioning" argument is very accurate. (3): The unbiased estimator f̂ is more accurate than f̂_mle; the latter actually uses one more sample.
[Figure 2: four panels (THIS, HAVE, ADDRESS, CUSTOMER), each plotting the standardized MSE against the sample size k for CRS, CRS+mle, SRP, the 1/k asymptote, and the approximate variance.]
Figure 2: Comparing CRS with SRP for approximating Hamming norms in Web crawl data (four word vectors), using the normalized mean square errors (MSE, normalized by f²). "CRS" and "CRS+mle" respectively correspond to f̂ and f̂_mle, derived in Lemma 1. "SRP" corresponds to the harmonic mean estimator of SRP using p = 0.04. "1/k" is the theoretical asymptotic variance of both CRS and SRP. The curve labeled "Approx. Var" is the approximate variance in (3).
5 The Modified CRS Estimation Procedure
The modified CRS estimation procedure is based on the theoretical analysis for using CRS to approximate Hamming norms. Suppose we are interested in the distance between rows u1 and u2 and
we have access to sketches K1 and K2. Our suggested "equivalent" sample size Ds would be

D_s = \min\{Z_1 - 1,\ Z_2 - 1\}, \qquad Z_1 = \max(\mathrm{ID}(K_1)), \quad Z_2 = \max(\mathrm{ID}(K_2)).    (4)

We should not include elements in K1 and K2 whose IDs are larger than Ds. Considering K1 and K2 in Figure 1, the modified CRS adopts Ds = min(10 − 1, 8 − 1) = min(9, 7) = 7. Removing 10{8} from K1 and 8{7} from K2, we obtain a sample for u1 and u2:
ũ_{1,1} = 5, ũ_{1,4} = 1, ũ_{1,6} = 7;  ũ_{2,2} = 9, ũ_{2,3} = 2, ũ_{2,5} = 6.

All other sample entries are zero: ũ_{1,2} = ũ_{1,3} = ũ_{1,5} = ũ_{1,7} = 0, and ũ_{2,1} = ũ_{2,4} = ũ_{2,6} = ũ_{2,7} = 0.
5.1 A Generic Estimator and Approximate Variance
Rigorous theoretical analysis on one pair of sketches is difficult. We resort to the approximate "conditioning" argument using the modified Ds in (4). We consider a generic distance d_g(u_1, u_2) = \sum_{i=1}^{D} g(u_{1,i}, u_{2,i}), and assume that, conditioning on Ds, the sample \{ũ_{1,j}, ũ_{2,j}\}_{j=1}^{D_s} is exactly equivalent to a sample from Ds randomly selected columns without replacement. Under this assumption, an "unbiased" estimator of d_g(u_1, u_2) (and two special cases) would be

\hat{d}_g(u_1, u_2) = \frac{D}{D_s} \sum_{j=1}^{D_s} g(\tilde{u}_{1,j}, \tilde{u}_{2,j}), \qquad \hat{d}_p = \frac{D}{D_s} \sum_{j=1}^{D_s} |\tilde{u}_{1,j} - \tilde{u}_{2,j}|^p, \qquad \hat{d}_{\chi^2} = \frac{D}{D_s} \sum_{j=1}^{D_s} \frac{(\tilde{u}_{1,j} - \tilde{u}_{2,j})^2}{\tilde{u}_{1,j} + \tilde{u}_{2,j}}.
A generic (approximate) variance formula can be obtained as follows:

\mathrm{Var}\left(\hat{d}_g(u_1, u_2) \mid D_s\right) \approx \frac{D^2}{D_s^2}\, D_s \frac{D - D_s}{D-1} \left( E\left[g^2(\tilde{u}_{1,j}, \tilde{u}_{2,j})\right] - E^2\left[g(\tilde{u}_{1,j}, \tilde{u}_{2,j})\right] \right)

= \frac{D - D_s}{D-1} \frac{D^2}{D_s} \left( \frac{1}{D}\sum_{i=1}^{D} g^2(u_{1,i}, u_{2,i}) - \left(\frac{1}{D}\sum_{i=1}^{D} g(u_{1,i}, u_{2,i})\right)^2 \right) = \frac{D}{D-1}\left(\frac{D}{D_s} - 1\right)\left(d_{g^2} - \frac{d_g^2}{D}\right),

where d_{g^2} = \sum_{i=1}^{D} g^2(u_{1,i}, u_{2,i}). Taking the expectation over Ds,

\mathrm{Var}\left(\hat{d}_g(u_1, u_2)\right) \approx E\left[\mathrm{Var}\left(\hat{d}_g(u_1, u_2) \mid D_s\right)\right] = \frac{D}{D-1}\left( E\left[\max\left\{\frac{D}{Z_1-1}, \frac{D}{Z_2-1}\right\}\right] - 1\right) \left(d_{g^2} - \frac{d_g^2}{D}\right)

\approx \frac{D}{D-1}\left( \max\left\{ E\left[\frac{D}{Z_1-1}\right], E\left[\frac{D}{Z_2-1}\right] \right\} - 1\right) \left(d_{g^2} - \frac{d_g^2}{D}\right) = \frac{D}{D-1}\left( \max\left\{\frac{f_1}{k_1-1}, \frac{f_2}{k_2-1}\right\} - 1\right) \left(d_{g^2} - \frac{d_g^2}{D}\right).    (5)
Here, k1 and k2 are the sketch sizes of K1 and K2, respectively, and f1 and f2 are the numbers of non-zeros in the original data vectors u1 and u2, respectively. We have used the results in Lemma 1 and a common statistical approximation: E(max(x, y)) ≈ max(E(x), E(y)).
From (5), we know the variance is affected by two factors. If the data are very sparse, i.e., max{f1/(k1−1), f2/(k2−1)} is small, then the variance also tends to be small. If the data are heavy-tailed, i.e., D d_{g²} ≫ d_g², then the variance tends to be large. Text data are often highly sparse and heavy-tailed; but machine learning applications often need to use the weighted data (i.e., taking logarithms or binary quantization). This is why we expect CRS will be successful in real applications, although it in general does not have worst-case performance guarantees.
The next two subsections apply CRS to estimating the Hamming distance and the χ² distance. Empirical studies [3, 7, 9] have demonstrated that, in text and image data, using the Hamming distance or the χ² distance for kernel SVMs achieved good performance.
5.2 Estimating the Hamming Distance
Following the definition of Hamming distance in [4], h(u_1, u_2) = \sum_{i=1}^{D} 1\{u_{1,i} - u_{2,i} \neq 0\}, we estimate h using the modified CRS procedure, denoted by ĥ. The approximate variance (5) becomes

\mathrm{Var}\left(\hat{h}\right) \approx \frac{D}{D-1}\left( \max\left\{\frac{f_1}{k_1-1}, \frac{f_2}{k_2-1}\right\} - 1\right) \left(h - \frac{h^2}{D}\right).    (6)
We also apply SRP using small p and its most accurate harmonic mean estimator[10]. The empirical
comparisons in Figure 3 verify two points. (1): CRS can be considerably more accurate than SRP for
estimating Hamming distances in [4]. (2): The approximate variance formula (6) is very accurate.
[Figure 3: two panels (THIS–HAVE and ADDRESS–CUSTOMER), each plotting the standardized MSE against k for CRS, the approximate variance, and SRP.]
Figure 3: Approximating Hamming distances (h) using two pairs of words. The results are presented in terms of the normalized (by h²) MSE. The curves labeled "Approx. Var" correspond to the approximate variance of CRS in (6).
In this example, the seemingly impressive improvement of CRS over SRP is actually due to the fact that we used the definition of Hamming distance in [4]. An alternative definition of Hamming distance is h(u_1, u_2) = \sum_{i=1}^{D} \left[ 1\{u_{1,i} \neq 0 \text{ and } u_{2,i} = 0\} + 1\{u_{1,i} = 0 \text{ and } u_{2,i} \neq 0\} \right], which is basically the lp distance after a binary term-weighting. As we have commented, if using SRP in dynamic data, term-weighting is not possible; thus we only experimented with the definition in [4].
5.3 Estimating the χ² Distance

We apply CRS to estimating the χ² distance between u1 and u2: d_{\chi^2}(u_1, u_2) = \sum_{i=1}^{D} \frac{(u_{1,i} - u_{2,i})^2}{u_{1,i} + u_{2,i}}. According to (5), the estimation variance should be approximately

\frac{D}{D-1}\left( \max\left\{\frac{f_1}{k_1-1}, \frac{f_2}{k_2-1}\right\} - 1\right) \left( \sum_{i=1}^{D} \frac{(u_{1,i} - u_{2,i})^4}{(u_{1,i} + u_{2,i})^2} - \frac{d_{\chi^2}^2}{D} \right),    (7)

which is affected only by the second moments, because \sum_{i=1}^{D} \frac{(u_{1,i} - u_{2,i})^4}{(u_{1,i} + u_{2,i})^2} \leq \sum_{i=1}^{D} (u_{1,i} + u_{2,i})^2.
There are proven negative results [6] showing that, in the worst case, no efficient algorithms exist for approximating the χ² distances. CRS does not provide any worst-case guarantees; its performance relies on the assumption that, in machine learning applications, the data are often reasonably sparse and the second moments are reasonably bounded.
Figure 4 presents an empirical study, using the same four words, plus the UCI Dexter data. Even though the four words are fairly common (i.e., not very sparse) and heavy-tailed (no term-weighting was applied), CRS still achieved good performance in terms of the normalized MSE (e.g., ≤ 0.1) at reasonably small k. And again, the approximate variance formula (7) is accurate. Results on the Dexter data set (which is more realistic for machine learning) are encouraging. Only about k = 10 is needed to achieve small MSE.
[Figure 4: two panels for the word pairs THIS–HAVE and ADDRESS–CUSTOMER (standardized MSE vs. k for CRS and the approximate variance), and a third panel for Dexter showing the 10%, 50%, and 90% quantiles of the normalized MSE vs. k.]
Figure 4: Left two panels: CRS for approximating the χ² distance using two pairs of words (D = 2^16). The curves report the normalized MSE and the approximate variance in (7). Right-most panel: the Dexter data, D = 20000, with 300 data points. We estimate all pairwise (i.e., 44850 pairs) χ² distances using CRS. The three curves report the quantiles of the normalized MSEs.
6 Conclusion
The ubiquitous phenomenon of massive, high-dimensional, and possibly dynamic data, has brought
in serious challenges. It is highly desirable to achieve compact data presentation and efficiently
computing and retrieving summary statistics, in particular, various types of distances. Conditional
Random Sampling (CRS) provides a simple and effective mechanism to achieve this goal.
Compared with other "main stream" sketching algorithms such as stable random projections (SRP), the major advantage of CRS is that it is "one-sketch-for-all," meaning that the same set of sketches can approximate any linear summary statistics. This would be very convenient in practice.

The major disadvantage of CRS is that it relies heavily on the data sparsity and also on the assumption that in machine learning applications the "worst-case" data distributions are often avoided (e.g., through term-weighting). Also, the theoretical analysis is difficult, even though it is a simple algorithm.

Originally based on a heuristic argument, the preliminary version of CRS was proposed as a tool for computing pairwise l2 and l1 distances in static data. This paper provides a partial theoretical justification of CRS and various modifications, to make the algorithm more rigorous and to extend CRS for handling dynamic/streaming data. We demonstrate, empirically and theoretically, the effectiveness of CRS in approximating the Hamming norms/distances and the χ² distances.
Acknowledgement
Ping Li is partially supported by grant DMS-0808864 from the National Science Foundation, and
a gift from Microsoft. Trevor Hastie was partially supported by grant DMS-0505676 from the
National Science Foundation, and grant 2R01 CA 72028-07 from the National Institutes of Health.
References
[1] Charu C. Aggarwal, Jiawei Han, Jianyong Wang, and Philip S. Yu. On demand classification of data streams. In KDD, pages 503–508, 2004.
[2] Léon Bottou, Olivier Chapelle, Dennis DeCoste, and Jason Weston, editors. Large-Scale Kernel Machines. The MIT Press, 2007.
[3] Olivier Chapelle, Patrick Haffner, and Vladimir N. Vapnik. Support vector machines for histogram-based image classification. IEEE Trans. Neural Networks, 10(5):1055–1064, 1999.
[4] Graham Cormode, Mayur Datar, Piotr Indyk, and S. Muthukrishnan. Comparing data streams using Hamming norms (how to zero in). IEEE Transactions on Knowledge and Data Engineering, 15(3):529–540, 2003.
[5] Carlotta Domeniconi and Dimitrios Gunopulos. Incremental support vector machine construction. In ICDM, pages 589–592, 2001.
[6] Sudipto Guha, Piotr Indyk, and Andrew McGregor. Sketching information divergence. In COLT, pages 424–438, 2007.
[7] M. Hein and O. Bousquet. Hilbertian metrics and positive definite kernels on probability measures. In AISTATS, pages 136–143, 2005.
[8] Piotr Indyk. Stable distributions, pseudorandom generators, embeddings, and data stream computation. Journal of the ACM, 53(3):307–323, 2006.
[9] Yugang Jiang, Chongwah Ngo, and Jun Yang. Towards optimal bag-of-features for object categorization and semantic video retrieval. In CIVR, pages 494–501, 2007.
[10] Ping Li. Estimators and tail bounds for dimension reduction in lα (0 < α ≤ 2) using stable random projections. In SODA, 2008.
[11] Ping Li. Compressed Counting. In SODA, 2009.
[12] Ping Li and Kenneth W. Church. A sketch algorithm for estimating two-way and multi-way associations. Computational Linguistics, 33(3):305–354, 2007. Preliminary results appeared in HLT/EMNLP, 2005.
[13] Ping Li, Kenneth W. Church, and Trevor J. Hastie. Conditional random sampling: A sketch-based sampling technique for sparse data. In NIPS, pages 873–880, 2007.
[14] Ping Li. Computationally efficient estimators for dimension reductions using stable random projections. In ICDM, 2008.
[15] S. Muthukrishnan. Data streams: Algorithms and applications. Foundations and Trends in Theoretical Computer Science, 1:117–236, 2005.
[16] John C. Platt. Using analytic QP and sparseness to speed training of support vector machines. In NIPS, pages 557–563, 1998.
[17] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels. The MIT Press, 2002.
2,838 | 3,573 | Modeling the effects of memory on human online
sentence processing with particle filters
Roger Levy
Department of Linguistics
University of California, San Diego
[email protected]
Florencia Reali and Thomas L. Griffiths
Department of Psychology
University of California, Berkeley
{floreali,tom_griffiths}@berkeley.edu
Abstract
Language comprehension in humans is significantly constrained by memory, yet
rapid, highly incremental, and capable of utilizing a wide range of contextual
information to resolve ambiguity and form expectations about future input. In
contrast, most of the leading psycholinguistic models and fielded algorithms for
natural language parsing are non-incremental, have run time superlinear in input
length, and/or enforce structural locality constraints on probabilistic dependencies
between events. We present a new limited-memory model of sentence comprehension which involves an adaptation of the particle filter, a sequential Monte Carlo
method, to the problem of incremental parsing. We show that this model can
reproduce classic results in online sentence comprehension, and that it naturally
provides the first rational account of an outstanding problem in psycholinguistics,
in which the preferred alternative in a syntactic ambiguity seems to grow more
attractive over time even in the absence of strong disambiguating information.
1 Introduction
Nearly every sentence occurring in natural language can, given appropriate contexts, be interpreted
in more than one way. The challenge of comprehending a sentence is identifying the intended
interpretation from among these possibilities. More formally, each interpretation of a sentence w can be associated with a structural description T, and to comprehend a sentence is to infer T from w:
parsing the sentence to reveal its underlying structure. From a probabilistic perspective, this requires
computing the posterior distribution P(T | w) or some property thereof, such as the description T
with highest posterior probability. This probabilistic perspective has proven extremely valuable in
developing both effective methods by which computers can process natural language [1, 2] and
models of human language processing [3].
In real life, however, people receive nearly all linguistic input incrementally: sentences are spoken,
and written sentences are by and large read, from beginning to end. There is considerable evidence
that people also comprehend incrementally, making use of linguistic input moment by moment to resolve structural ambiguity and form expectations about future inputs [4, 5]. The incremental parsing
problem can, roughly, be stated as the problem of computing the posterior distribution P(T | w_{1...i}) for a partial input w_{1...i}. To be somewhat more precise, incremental parsing involves constructing a distribution over partial structural descriptions of w_{1...i} which implies the posterior P(T | w_{1...i}). A variety of "rational" models of online sentence processing [6, 7, 8, 9] take exactly this perspective, using the properties of P(T | w_{1...i}) or quantities derived from it to explain why people find some
sentences more difficult to comprehend than others.
Despite their success in capturing a variety of psycholinguistic phenomena, existing rational models of online sentence processing leave open a number of questions, both theoretical and empirical.
On the theoretical side, these models assume that humans are "ideal comprehenders" capable of computing P(T | w_{1...i}) despite its significant computational cost. This kind of idealization is common in rational models of cognition, but raises questions about how resource constraints might
affect language processing. For structured probabilistic formalisms in widespread use in compu-
tational linguistics, such as probabilistic context-free grammars (PCFGs), incremental processing
algorithms exist that allow the exact computation of the posterior (implicitly represented) in polynomial time [10, 11, 12], from which k-best structures [13] or samples from the posterior [14] can
be efficiently obtained. However, these algorithms are psychologically implausible for two reasons:
(1) their run time (both worst-case and practical) is superlinear in sentence length, whereas human
processing time is essentially linear in sentence length; and (2) the probabilistic formalisms utilized
in these algorithms impose strict locality conditions on the probabilistic dependence between events
at different levels of structure, whereas humans seem to be able to make use of arbitrary features of
(extra-)linguistic context in forming incremental posterior expectations [4, 5].
Theoretical questions about the mechanisms underlying online sentence processing are complemented by empirical data that are hard to explain purely in probabilistic terms. For example, one of
the most compelling phenomena in psycholinguistics is that of garden-path sentences, such as:
(1)
The woman brought the sandwich from the kitchen tripped.
Comprehending such sentences presents a significant challenge, and many readers fail completely
on their first attempt. However, the sophisticated dynamic programming algorithms typically used
for incremental parsing implicitly represent all possible continuations of a sentence, and are thus
able to recover the correct interpretation in a single pass. Another phenomenon that is hard to
explain simply in terms of the probabilities of interpretations of a sentence is the ?digging in? effect,
in which the preferred alternative in a syntactic ambiguity seems to grow more attractive over time
even in the absence of strong disambiguating information [15].
In this paper, we explore the hypothesis that these phenomena can be explained as the consequence
of constraints on the resources available for incremental parsing. Previous work has addressed the
issues of feature locality and resource constraints by adopting a pruning approach, in which hard
locality constraints on probabilistic dependence are abandoned and only high-probability candidate
structures are maintained after each step of incremental parsing [6, 16, 17, 18]. These approaches
can be thought of as focusing on holding on to the highest posterior-probability parse as often as
possible. Here, we look to the machine learning literature to explore an alternative approach focused
on approximating the posterior distribution P (T |w1 ...i ). We use particle filters [19], a sequential
Monte Carlo method commonly used for approximate probabilistic inference in an online setting, to
explore how the computational resources available influence the comprehension of sentences. This
approach builds on the strengths of rational models of online sentence processing, allowing us to
examine how performance degrades as the resources of the ideal comprehender decrease.
The plan of the paper is as follows. Section 2 introduces the key ideas behind particle filters, while
Section 3 outlines how these ideas can be applied in the context of incremental parsing. Section 4
illustrates the approach for the kind of garden-path sentence given above, and Section 5 presents an
experiment with human participants testing the predictions that the resulting model makes about the
digging-in effect. Section 6 concludes the paper.
2 Particle filters
Particle filters are a sequential Monte Carlo method typically used for probabilistic inference in contexts where the amount of data available increases over time [19]. The canonical setting in which a particle filter would be used involves a sequence of latent variables z_1, ..., z_n and a sequence of observed variables x_1, ..., x_n, with the goal of estimating P(z_n | x_{1...n}). The particle filter solves this problem recursively, relying on the fact that the chain rule gives

P(z_n \mid x_{1\ldots n}) \propto P(x_n \mid z_n) \sum_{z_{n-1}} P(z_n \mid z_{n-1}) \, P(z_{n-1} \mid x_{1\ldots n-1})    (1)
where we assume x_n and z_n are independent of all other variables given z_n and z_{n-1} respectively. Assume we know P(z_{n-1} | x_{1...n-1}). Then we can use this distribution to construct an importance sampler for P(z_n | x_{1...n}). We generate several values of z_{n-1} from P(z_{n-1} | x_{1...n-1}). Then, we draw z_n from P(z_n | z_{n-1}) for each instance of z_{n-1}, to give us a set of values from P(z_n | x_{1...n-1}). Finally, we assign each value of z_n a weight proportional to P(x_n | z_n), to give us an approximation to P(z_n | x_{1...n}). The particle filter is simply the recursive version of this algorithm, in which a similar approximation was used to construct P(z_{n-1} | x_{1...n-1}) from P(z_{n-2} | x_{1...n-2}), and so forth. The algorithm thus approximates P(z_{n-1} | x_{1...n-1}) with a weighted set of "particles" (discrete values of z_i) which are updated using P(z_n | z_{n-1}) and P(x_n | z_n) to provide an approximation to P(z_n | x_{1...n}). The particle filter thus has run-time linear
in the number of observations, and provides a way to explore the influence of memory capacity (reflected in the number of particles) on probabilistic inference (cf. [20, 21]). In this paper, we focus
on the conditions under which the particle filter fails as a source of information about the challenges
of limited memory capacity for online sentence processing.
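As a concrete illustration, the following minimal Python sketch performs one recursive step of the generic particle filter just described. The function names and the multinomial resampling scheme are illustrative assumptions rather than details fixed by the text.

    import numpy as np

    def particle_filter_step(particles, transition_sample, likelihood, x_n, rng):
        # Propagate each particle z_{n-1} through P(z_n | z_{n-1}).
        proposed = [transition_sample(z) for z in particles]
        # Weight by the likelihood P(x_n | z_n) of the new observation.
        weights = np.array([likelihood(x_n, z) for z in proposed], dtype=float)
        if weights.sum() == 0.0:
            raise RuntimeError("no particle can account for x_n (parse failure)")
        weights /= weights.sum()
        # Multinomial resampling returns an unweighted sample approximating P(z_n | x_{1...n}).
        idx = rng.choice(len(proposed), size=len(proposed), p=weights)
        return [proposed[i] for i in idx]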
3 Incremental parsing with particle filters
In this section we develop an algorithm for top-down, incremental particle-filter parsing. We first
lay out the algorithm, then consider options for representations and grammars.
3.1 The basic algorithm
We assume that the structural descriptions of a sentence are context-free trees, as might be produced
by a PCFG. Without loss of generality, we also assume that preterminal expansions are always
unary rewrites. A tree is generated incrementally in a sequence of derivation operations τ^{1...m}, such that no word can be generated unless all the words preceding it in the sentence have already been generated. The words of the sentence can thus be considered observations, and the hidden state is a partial derivation (D, S), where D is an incremental tree structure and S is a stack of items of the form ⟨N, Op⟩, where N is a target node in D and Op is a derivation operation type. Later in this
section, we outline three possible derivation orders.
The problem of inferring a distribution over partial derivations from observed words can be approximated using particle filters as outlined in Section 2. Assume a model that specifies a probability
distribution P(τ | (D, S), w_{1...i}) over the next derivation operation τ given the current partial derivation and the words already seen. By (D, S) ⇒^{τ^{1...j}} (D′, S′) we denote that the sequence of derivation operations τ^{1...j} takes the partial derivation (D, S) to a new partial derivation (D′, S′). Now consider a partial derivation (D_{i|}, S_{i|}) in which the most recent derivation operation has generated the i-th word in the input. Through the ⇒* relation, our model implies a probability distribution over new partial derivations in which the next operation would be the generation of the (i+1)-th word; call this distribution P((D_{|i+1}, S_{|i+1}) | (D_{i|}, S_{i|})). In the nomenclature of particle filters introduced above, partial derivations (D_{|i}, S_{|i}) thus correspond to latent variables z_i, words w_i to observations x_i, and our importance sampler involves drawing from P((D_{|i}, S_{|i}) | (D_{i-1|}, S_{i-1|})) and reweighting by P(w_i | (D_{|i}, S_{|i})). This differs from the standard particle filter only in that z_i is not necessarily independent of x_{1...i-1} given z_{i-1}.
3.2 Representations and grammars
We now describe three possible derivation orders that can be used with our approach. For each order,
a derivation operation τ^{Op} of a given type Op specifies a sequence of symbols Y^1 ... Y^k (possibly the empty sequence ε), and can be applied to a partial derivation: (D, [⟨N, Op⟩] ∘ S) ⇒^{τ^{Op}} (D′, A ∘ S), with ∘ denoting list concatenation. That is, a derivation operation involves popping the top item off the stack, choosing a derivation operation of the appropriate type, applying it to add some symbols to D yielding D′, and pushing a list of new items A back on the stack (a code sketch of this stack update follows the list of orders below). Derivation operations differ in the relationship between D and D′, and derivation orders differ in the contents of A.
Order 1: Expansion (Exp) only. D′ consists of D with node N expanded to have daughters Y^1 ... Y^k; and A = [⟨Y^1, Exp⟩, ..., ⟨Y^k, Exp⟩].
Order 2: Expansion and Right-Sister (Sis). The sequence of symbols specified by any τ^{Op} is of maximum length 1. Expansion operations affect D as above. For a right-sister operation τ^{Sis}, D′ consists of D with Y^1 added as the right sister of N (if τ^{Sis} specifies ε, then D = D′). A = [⟨Y^1, Exp⟩, ⟨Y^1, Sis⟩, ..., ⟨Y^k, Exp⟩, ⟨Y^k, Sis⟩].
Order 3: Expansion, Right-Sister, and Adjunction (Adj). The sequence of symbols specified by any τ^{Op} is of maximum length 1. Expansion and right-sister operations are as above. For an adjunction operation τ^{Adj}, D′ consists of D with Y^1 spliced in at the node N: that is, Y^1 replaces N in the tree, and N becomes the lone daughter of Y^1 (if τ^{Adj} specifies ε, then D = D′). A = [⟨Y^1, Exp⟩, ⟨Y^1, Sis⟩, ⟨Y^1, Adj⟩, ..., ⟨Y^k, Exp⟩, ⟨Y^k, Sis⟩, ⟨Y^k, Adj⟩].
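The following minimal Python sketch, with illustrative (assumed) class and field names, shows how an Expansion operation under derivation order 1 updates the partial derivation (D, S):

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Node:
        label: str
        children: List["Node"] = field(default_factory=list)

    def apply_expansion(stack: List[Tuple[Node, str]], daughters: List[str]) -> None:
        # Pop <N, Exp>, expand N to the given daughters, push new <Y, Exp> items.
        node, op = stack.pop()
        assert op == "Exp"
        node.children = [Node(y) for y in daughters]
        for child in reversed(node.children):      # leftmost daughter ends up on top
            stack.append((child, "Exp"))

    root = Node("ROOT")                            # initial state: (ROOT, [<ROOT, Exp>])
    stack = [(root, "Exp")]
    apply_expansion(stack, ["S"])                  # ROOT -> S
    apply_expansion(stack, ["NP", "VP"])           # S -> NP VP; NP is expanded next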
[Figure 1 graphic: three partial derivations (a), (b), (c) over the first word "Pat", each showing the incremental tree D (with nodes such as ROOT, S, NP, VP, N, VBD, ADVP, CC) and the stack S of pending items, e.g. ⟨VBD, Exp⟩, ⟨VP, Sis⟩, ⟨S2, Adj⟩.]
Figure 1: Three possible derivation orders for the sentence "Pat walked yesterday and Sally slept". In each case, the partial derivation (D_{|i}, S_{|i}) is shown for i = 2, up to just before the generation of the word "walked". The symbols ADVP, CC, and S3 in (a) will be generated later in the derivations of (b) and (c) as right-sister operations; the symbol S1 will be generated in (c) as an adjunction operation. During the incremental parsing of "walked" these partial derivations would be reweighted by P^{Exp}(walked | (D_{|i}, S_{|i})).
In all cases, the initial state of a derivation is a root symbol targeted for expansion:
(ROOT, [⟨ROOT, Exp⟩]), and a derivation is complete when the stack is empty. Figure 1 illustrates
the partial derivation state for each order just after the generation of a word in mid-sentence.
For each derivation operation type Op, it is necessary to define an underlying grammar and estimate
the parameters of a distribution P^{Op}(τ | (D, S)) over next derivation operations given the current state
of the derivation. For a sentence whose tree structure is known, the sequence of derivation operations
for derivation orders 1 and 2 is unambiguous and thus supervised training can be used for such a
model. For derivation order 3, a known tree structure still underspecifies the order of derivation
operations, so the underlying sequence of derivation operations could either be canonicalized or
treated as a latent variable in training. Finally, we note that a known PCFG could be encoded in a
model using any of these derivation orders; for PCFGs, the partial derivation representations used
in order 3 may be thought of as marginalizing over the unary chains on the right frontier of the
representations in order 2, which in turn may be thought of as marginalizing over the extra childless
nonterminals in the incremental representations of order 1. In the context of the particle filter,
the representations with more operation types could thus be expected to function as having larger
effective sample sizes for a fixed number of particles [22]. For the experiments reported in this paper,
we use derivation order 2 with a PCFG trained using unsmoothed relative-frequency estimation on
the parsed Brown corpus.
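As a reference point, unsmoothed relative-frequency estimation of a PCFG amounts to a pair of counts; the minimal sketch below assumes the rules have already been read off the treebank trees as (lhs, rhs) pairs:

    from collections import Counter

    def estimate_pcfg(treebank_rules):
        # P(lhs -> rhs) = count(lhs -> rhs) / count(lhs); rhs is a tuple of symbols.
        rules = list(treebank_rules)
        rule_counts = Counter(rules)
        lhs_counts = Counter(lhs for lhs, _ in rules)
        return {(lhs, rhs): c / lhs_counts[lhs] for (lhs, rhs), c in rule_counts.items()}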
This approach has several attractive features for the modeling of online human sentence comprehension. The number of particles can be considered a rough estimate of the quantity of working memory
resources devoted to the sentence comprehension task; as we will show in Section 5, sentences difficult to parse can become easier when more particles are used. After each word, the incremental
posterior over partial structures T can be read off the particle structures and weights. Finally, the
approximate surprisal of each word (a quantity argued to be correlated with many types of processing difficulty in sentence comprehension [8, 9, 23]) is essentially a by-product of the incremental parsing process: it is the negative log of the mean (unnormalized) weight P(w_i | (D_{|i}, S_{|i})).
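Concretely, given the unnormalized weights P(w_i | (D_{|i}, S_{|i})) of the N particles at word w_i, the surprisal estimate is one line (a minimal sketch; the function name is illustrative):

    import numpy as np

    def surprisal(unnormalized_weights):
        # Negative log of the mean particle weight P(w_i | (D_{|i}, S_{|i})).
        m = np.mean(unnormalized_weights)
        return -np.log(m) if m > 0 else np.inf    # inf signals total parse failure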
4 The garden-path sentence
To provide some intuitions about our approach, we illustrate its ability to model online disambiguation in sentence comprehension using the garden-path sentence given in Example 1. In this sentence,
a local structural ambiguity is introduced at the word brought due to the fact that this word could
be either (i) a past-tense verb, in which case it is the main verb of the sentence and The woman is
its complete subject; or (ii) a participial verb, in which case it introduces a reduced relative clause,
The woman is its recipient, and the subject of the main clause has not yet been completed. This
ambiguity is resolved in favor of (ii) by the word tripped, the main verb of the sentence. It is well
documented (e.g., [24]) that locally ambiguous sentences such as Example 1 are read more slowly
at the disambiguating region when compared with unambiguous counterparts (cf. The woman who
[Figure 2 graphic: word-by-word partial trees for "The woman brought the sandwich from the kitchen tripped", with the main-verb analysis on top and the reduced-relative analysis below, each annotated with its posterior probability; see the caption for the accompanying statistics.]
Figure 2: Incremental parsing of a garden-path sentence. Trees indicate the canonical structures for main-verb (above) and reduced-relative (below) interpretations. Numbers above the trees indicate the posterior probabilities of the main-verb and reduced-relative interpretations, marginalizing over precise details of parse structure, as estimated by a parser using 1000 particles. Since the grammar is quite noisy, the main-verb interpretation still has some posterior probability after disambiguation at tripped. Numbers in the second-to-last line indicate the proportion of particle filters with 20 particles that produce a viable parse tree including the given word. The final line indicates the variance (×10^{-3}) of particle weights after parsing each word.
was brought the sandwich from the kitchen tripped), and in cases where the local bias strongly favors
(i), many readers may fail to recover the correct reading altogether.
Figure 2 illustrates the behavior of the particle filter on the garden-path sentence in Example 1.
The word brought shifts the posterior strongly toward the main-verb interpretation. The rest of
the reduced relative clause has little effect on the posterior, but the disambiguator tripped shifts
the posterior in favor of the correct reduced-relative interpretation. In low-memory situations, as
represented by a particle filter with a small number of particles (e.g., 20), the parser is usually
able to construct an interpretation for the sentence up through the word kitchen, but fails at the
disambiguator, and when it succeeds the variance in particle weights is high.
5 Exploring the "digging in" phenomenon
An important feature distinguishing "rational" models of online sentence comprehension [6, 7, 8, 9] from what are sometimes called "dynamical systems" models [25, 15] is that the latter have an
internal feedback mechanism: in the absence of any biasing input, the activation of the leading
candidate interpretation tends to grow with the passage of time. A body of evidence exists in the
psycholinguistic literature that seems to support an internal feedback mechanism: increasing the
duration of a local syntactic ambiguity increases the difficulty of recovery at disambiguation to the
disfavored interpretation. It has been found, for example, that 2a and 3a, in which the second NP (the
gossip. . . /the deer. . . ) initially seems to be the object of the preceding verb, are harder to recover
from than 2b and 3b [26, 27, 15].
(2)
"NP/S" ambiguous sentences
a. Long (A-L): Tom heard the gossip about the neighbors wasn't true.
b. Short (A-S): Tom heard the gossip wasn't true.
(3)
"NP/Z" ambiguous sentences
a. Long (A-L): While the man hunted the deer that was brown and graceful ran into the woods.
b. Short (A-S): While the man hunted the deer ran into the woods.
From the perspective of exact rational inference (or even for rational pruning models such as [6]), this "digging in" effect is puzzling.¹ The result finds an intuitive explanation, however, in our
limited-memory particle-filter model. The probability of parse failure at the disambiguating word
w_i is a function of (among other things) the immediately preceding estimated posterior probability
of the disfavored interpretation. If this posterior probability is low, then the resampling of particles
performed after processing each word provides another point at which particles representing the
disfavored interpretation could be deleted. Consequently, total parse failure at the disambiguator
will become more likely the greater the length of the preceding ambiguous region.
We quantify these predictions by assuming that the more often no particle is able to integrate a given
word w_i in context (that is, the more often P(w_i | (D_{|i}, S_{|i})) = 0 for every particle), the more difficult, on average, people should find w_i to read. In the sentences of Examples 2-3, by far the most likely position for the incremental
parser to fail is at the disambiguating verb. We can also compare processing of these sentences with
syntactically similar but unambiguous controls.
(4)
"NP/S" unambiguous controls
a. Long (U-L): Tom heard that the gossip about the neighbors wasn't true.
b. Short (U-S): Tom heard that the gossip wasn't true.
(5)
"NP/Z" unambiguous controls
a. Long (U-L): While the man hunted, the deer that was brown and graceful ran into the woods.
b. Short (U-S): While the man hunted, the deer ran into the woods.
Figure 3a shows, for each sentence of each type, the proportion of runs in which the parser successfully integrated (assigned non-zero probability to) the disambiguating verb (was in Example 2a
and ran in Example 3a), among those runs in which the sentence was successfully parsed up to the
preceding word. Consistent with our intuitive explanation, both the presence of local ambiguity and
length of the preceding region make parse failure at the disambiguator more likely.
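Curves like those in Figure 3a can be reproduced in outline by tabulating failure locations over repeated particle-filter runs. The sketch below is a minimal Monte Carlo harness; the parse function it calls is hypothetical (assumed to return the index of the last word it managed to integrate):

    def failure_rate_at_disambiguator(sentence, disamb_index, n_particles, parse, n_runs=200):
        # parse(sentence, n_particles) -> index of the last word successfully integrated.
        failed = reached = 0
        for _ in range(n_runs):
            last_ok = parse(sentence, n_particles)
            if last_ok >= disamb_index - 1:        # survived the ambiguous region
                reached += 1
                if last_ok == disamb_index - 1:    # died exactly at the disambiguator
                    failed += 1
        return failed / max(reached, 1)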
In the remainder of this section we test this explanation with an offline sentence acceptability study
of digging-in effects. The experiment provides a way to make more detailed comparisons between
the model's predictions and sentence acceptability. Consistent with the predictions of the model,
ratings show differences in the magnitude of digging-in effects associated with different types of
structural ambiguities. As the working-memory resources (i.e. number of particles) devoted to comprehension of the sentence increase, the probability of successful comprehension goes up, but local
ambiguity and length of the second NP remain associated with greater comprehension difficulty.
5.1 Method
Thirty-two native English speakers from the university subject pool completed a questionnaire corresponding to the complexity-rating task. Forty experimental items were tested with four conditions per item, counterbalanced across questionnaires, plus 84 fillers, with sentence order pseudorandomized. Twenty experimental items were NP/S sentences and twenty were NP/Z sentences. We
used a 2 × 2 design with ambiguity and length of the ambiguous noun phrase as factors. In NP/S
sentences, structural ambiguity was manipulated by the presence/absence of the complementizer
that, while in NP/Z sentences, structural ambiguity was manipulated by the absence/presence of a
comma after the first verb. Participants were asked to rate how difficult each sentence was to understand on a scale from 0 to 10, 0 indicating "Very easy" and 10 "Very difficult".
5.2 Results and Discussion
Figure 3b shows the mean complexity rating for each sentence type. For both NP/S and NP/Z
sentences, the ambiguous long-subject (A-L) was rated the hardest to understand, and the unambiguous short-subject (U-S) condition was rated the easiest; these results are consistent with model
predictions. Within sentence type, the ratings were subjected to an analysis of variance (ANOVA)
with two factors: ambiguity and length. In the case of NP/S sentences there was a main effect of
ambiguity, F1(1, 31) = 12.8, p < .001, F2(1, 19) = 47.8, p < .0001, and length, F1(1, 31) = 4.9,
¹ For these examples, it might be argued that noun phrase length is a weakly misleading cue (objects tend to be longer than subjects), and that these "digging in" examples might also be analyzable as cases of exact rational inference [9].
However, the effects of length in some of the relevant experiments are quite strong. The explanation we offer
here would magnify the effects of weakly misleading cues, and also extend to where cues are neutral or even
favor the ultimately correct interpretation.
[Figure 3 graphic: (a) Model results: panels for NP/S and NP/Z plotting the proportion of parse successes at the disambiguator (0.0-1.0) against the number of particles (0-400), one curve per condition (U-S, A-S, U-L, A-L). (b) Behavioral results: mean difficulty rating (0-6) for the same four conditions of NP/S and NP/Z sentences.]
Figure 3: Frequency of irrevocable garden path in particle-filter parser as a function of number of
particles, and mean empirical difficulty rating, for NP/S and NP/Z sentences.
p = .039, F2(1, 19) = 32.9, p < .0001, and the interaction between factors was significant, F1(1, 31) = 8.28, p = .007, F2(1, 19) = 5.56, p = .029. In the case of NP/Z sentences there was a main effect of ambiguity, F1(1, 31) = 63.6, p < .0001, F2(1, 19) = 150.9, p < .0001, and length, F1(1, 31) = 127.2, p < .0001, F2(1, 19) = 124.7, p < .0001, and the interaction between factors was significant by subjects only, F1(1, 31) = 4.6, p = .04, F2(1, 19) = 1.6, p = .2. The
experiment thus bore out most of the model's predictions, with ambiguity and length combining to
make sentence processing more difficult. One reason that our model may underestimate the effect
of subject length on ease of understanding, at least in the NP/Z case, is the tendency of subject NPs
to be short in English, which was not captured in the grammar used by the model.
6 Conclusion and Future Work
In this paper we have presented a new incremental parsing algorithm based on the particle filter
and shown that it provides a useful foundation for modeling the effect of memory limitations in
human sentence comprehension, including a novel solution to the problem posed by "digging-in" effects [15] for rational models. In closing, we point out two issues, both involving the problem of resampling prominent in particle filter research, in which we believe future research may help
deepen our understanding of language processing.
The first issue involves the question of when to resample. In this paper, we have taken the approach of generating values of z_{n-1} from which to draw P(z_n | z_{n-1}, x_{1...n-1}) by sampling with replacement (i.e., resampling) after every word from the multinomial over P(z_{n-1} | x_{1...n-1}) represented by the
weighted particles. This approach has the problem that particle diversity can be lost rapidly, as it
decreases monotonically with the number of observations. Another option is to resample only when
the variance in particle weights exceeds a predefined threshold, sampling without replacement when
this variance is low [22]. As Figure 2 shows, a word that resolves a garden-path generally creates
high weight variance. Our preliminary investigations indicate that associating variance-sensitive
resampling with processing difficulty leads to qualitatively similar predictions to the total parse
failure approach taken in Section 5, but further investigation is required.
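A common concrete form of this variance-sensitive criterion in the particle-filter literature (e.g., [19, 22]) is the effective sample size; a minimal sketch:

    import numpy as np

    def effective_sample_size(weights):
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return 1.0 / np.sum(w ** 2)

    # e.g., resample only when effective_sample_size(w) < 0.5 * len(w)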
The other issue involves how to resample. Since particle diversity can never increase, when parts
of the space of possible T are missed by chance early on, they can never be recovered. As a consequence, applications of the particle filter in machine learning and statistics tend to supplement the
basic algorithm with additional steps such as running Markov chain Monte Carlo on the particles
in order to re-introduce diversity (e.g., [28]). Further work would be required, however, to specify an MCMC algorithm over trees given an input prefix. Both of these issues may help achieve
a deeper understanding of the details of reanalysis in garden-path recovery [29]. For example, the
initial reaction of many readers to the sentence The horse raced past the barn fell is to wonder what
a ?barn fell? is. With variance-sensitive resampling, this observation could be handled by smoothing
the probabilistic grammar; with diversity-introducing MCMC, it might be handled by tree-changing
operations chosen during reanalysis.
Acknowledgments
RL would like to thank Klinton Bicknell and Gabriel Doyle for useful comments and suggestions.
FR and TLG were supported by grants BCS-0631518 and BCS-070434 from the National Science
Foundation.
References
[1] C. D. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. MIT Press, 1999.
[2] D. Jurafsky and J. H. Martin. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice-Hall, second edition, 2008.
[3] D. Jurafsky. Probabilistic modeling in psycholinguistics: Linguistic comprehension and production. In Rens Bod, Jennifer Hay, and Stefanie Jannedy, editors, Probabilistic Linguistics, pages 39–95. MIT Press, 2003.
[4] M. K. Tanenhaus, M. J. Spivey-Knowlton, K. Eberhard, and J. C. Sedivy. Integration of visual and linguistic information in spoken language comprehension. Science, 268:1632–1634, 1995.
[5] G. T. Altmann and Y. Kamide. Incremental interpretation at verbs: restricting the domain of subsequent reference. Cognition, 73(3):247–264, 1999.
[6] D. Jurafsky. A probabilistic model of lexical and syntactic access and disambiguation. Cognitive Science, 20(2):137–194, 1996.
[7] N. Chater, M. Crocker, and M. Pickering. The rational analysis of inquiry: The case for parsing. In M. Oaksford and N. Chater, editors, Rational Models of Cognition. Oxford, 1998.
[8] J. Hale. A probabilistic Earley parser as a psycholinguistic model. In Proceedings of NAACL, volume 2, pages 159–166, 2001.
[9] R. Levy. Expectation-based syntactic comprehension. Cognition, 106:1126–1177, 2008.
[10] J. Earley. An efficient context-free parsing algorithm. Communications of the ACM, 13(2):94–102, 1970.
[11] A. Stolcke. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):165–201, 1995.
[12] M.-J. Nederhof. The computational complexity of the correct-prefix property for TAGs. Computational Linguistics, 25(3):345–360, 1999.
[13] L. Huang and D. Chiang. Better k-best parsing. In Proceedings of the International Workshop on Parsing Technologies, 2005.
[14] M. Johnson, T. L. Griffiths, and S. Goldwater. Bayesian inference for PCFGs via Markov chain Monte Carlo. In Proceedings of Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics, 2007.
[15] W. Tabor and S. Hutchins. Evidence for self-organized sentence processing: Digging-in effects. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(2):431–450, 2004.
[16] B. Roark. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249–276, 2001.
[17] M. Collins and B. Roark. Incremental parsing with the perceptron algorithm. In Proceedings of the ACL, 2004.
[18] J. Henderson. Lookahead in deterministic left-corner parsing. In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together, 2004.
[19] A. Doucet, N. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer, 2001.
[20] A. N. Sanborn, T. L. Griffiths, and D. J. Navarro. A more rational model of categorization. In Proceedings of the 28th Annual Conference of the Cognitive Science Society, Mahwah, NJ, 2006. Erlbaum.
[21] N. Daw and A. Courville. The pigeon as particle filter. In Advances in Neural Information Processing Systems 20, Cambridge, MA, 2008. MIT Press.
[22] A. Doucet, N. de Freitas, K. Murphy, and S. Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Advances in Neural Information Processing Systems, 2000.
[23] N. Smith and R. Levy. Optimal processing times in reading: a formal model and empirical investigation. In Proceedings of the 30th Annual Meeting of the Cognitive Science Society, 2008.
[24] M. C. MacDonald. Probabilistic constraints and syntactic ambiguity resolution. Language and Cognitive Processes, 9(2):157–201, 1994.
[25] M. J. Spivey and M. K. Tanenhaus. Syntactic ambiguity resolution in discourse: Modeling the effects of referential content and lexical frequency. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24(6):1521–1543, 1998.
[26] L. Frazier and K. Rayner. Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences. Cognitive Psychology, 14:178–210, 1982.
[27] F. Ferreira and J. M. Henderson. Recovery from misanalyses of garden-path sentences. Journal of Memory and Language, 31:725–745, 1991.
[28] N. Chopin. A sequential particle filter method for static models. Biometrika, 89:539–552, 2002.
[29] P. Sturt, M. J. Pickering, and M. W. Crocker. Structural change and reanalysis difficulty in language comprehension. Journal of Memory and Language, 40:143–150, 1999.
2,839 | 3,574 | Multi-label Multiple Kernel Learning
Shuiwang Ji
Arizona State University
Tempe, AZ 85287
[email protected]
Liang Sun
Arizona State University
Tempe, AZ 85287
[email protected]
Rong Jin
Michigan State University
East Lansing, MI 48824
[email protected]
Jieping Ye
Arizona State University
Tempe, AZ 85287
[email protected]
Abstract
We present a multi-label multiple kernel learning (MKL) formulation in which
the data are embedded into a low-dimensional space directed by the instance-label correlations encoded into a hypergraph. We formulate the problem in the
kernel-induced feature space and propose to learn the kernel matrix as a linear
combination of a given collection of kernel matrices in the MKL framework. The
proposed learning formulation leads to a non-smooth min-max problem, which
can be cast into a semi-infinite linear program (SILP). We further propose an approximate formulation with a guaranteed error bound which involves an unconstrained convex optimization problem. In addition, we show that the objective
function of the approximate formulation is differentiable with Lipschitz continuous gradient, and hence existing methods can be employed to compute the optimal
solution efficiently. We apply the proposed formulation to the automated annotation of Drosophila gene expression pattern images, and promising results have
been reported in comparison with representative algorithms.
1 Introduction
Spectral graph-theoretic methods have been used widely in unsupervised and semi-supervised learning recently. In this paradigm, a weighted graph is constructed for the data set, where the nodes
represent the data points and the edge weights characterize the relationships between vertices. The
structural and spectral properties of graph can then be exploited to perform the learning task. One
fundamental limitation of using traditional graphs for this task is that they can only represent pairwise relationships between data points, and hence higher-order information cannot be captured [1].
Hypergraphs [1, 2] generalize traditional graphs by allowing edges, called hyperedges, to connect
more than two vertices, thereby being able to capture the relationships among multiple vertices.
In this paper, we propose to use a hypergraph to capture the correlation information for multi-label
learning [3]. In particular, we propose to construct a hypergraph for multi-label data in which
all data points annotated with a common label are included in a hyperedge, thereby capturing the
similarity among data points with a common label. By exploiting the spectral properties of the
constructed hypergraph, we propose to embed the multi-label data into a lower-dimensional space
in which data points with a common label tend to be close to each other. We formulate the multi-label
learning problem in the kernel-induced feature space, and show that the well-known kernel canonical
correlation analysis (KCCA) [4] is a special case of the proposed framework. As the kernel plays an
essential role in the formulation, we propose to learn the kernel matrix as a linear combination of a
given collection of kernel matrices in the multiple kernel learning (MKL) framework. The resulting
formulation involves a non-smooth min-max problem, and we show that it can be cast into a semiinfinite linear program (SILP). To further improve the efficiency and reduce the non-smoothness
effect of the SILP formulation, we propose an approximate formulation by introducing a smoothing
term into the original problem. The resulting formulation is unconstrained and convex. In addition,
the objective function of the approximate formulation is shown to be differentiable with Lipschitz
continuous gradient. We can thus employ Nesterov's method [5, 6], which solves smooth convex
problems with the optimal convergence rate, to compute the solution efficiently.
We apply the proposed formulation to the automated annotation of Drosophila gene expression
pattern images, which document the spatial and temporal dynamics of gene expression during
Drosophila embryogenesis [7]. Comparative analysis of such images can potentially reveal new
genetic interactions and yield insights into the complex regulatory networks governing embryonic
development. To facilitate pattern comparison and searching, groups of images are annotated with a
variable number of labels by human curators in the Berkeley Drosophila Genome Project (BDGP)
high-throughput study [7]. However, the number of available images produced by high-throughput
in situ hybridization is now rapidly increasing. It is therefore tempting to design computational
methods to automate this task [8]. Since the labels are associated with groups of a variable number
of images, we propose to extract invariant features from each image and construct kernels between
groups of images by employing the vocabulary-guided pyramid match algorithm [9]. By applying
various local descriptors, we obtain multiple kernel matrices and the proposed multi-label MKL
formulation is applied to obtain an optimal kernel matrix for the low-dimensional embedding. Experimental results demonstrate the effectiveness of the kernel matrices obtained by the proposed
formulation. Moreover, the approximate formulation is shown to yield similar results to the original
formulation, while it is much more efficient.
2 Multi-label Learning with Hypergraph
An essential issue in learning from multi-label data is how to exploit the correlation information
among labels. We propose to capture such information through a hypergraph as described below.
2.1 Hypergraph Spectral Learning
Hypergraphs generalize traditional graphs by allowing hyperedges to connect more than two vertices, thus capturing the joint relationships among multiple vertices. We propose to construct a
hypergraph for multi-label data in which each data point is represented as a vertex. To document the
joint similarity among data points annotated with a common label, we propose to construct a hyperedge for each label and include all data points annotated with a common label into one hyperedge.
Following the spectral graph embedding theory [10], we propose to compute the low-dimensional
embedding through a linear transformation W by solving the following optimization problem:
min_W  tr( W^T φ(X) L φ(X)^T W )                                        (1)
subject to  W^T ( φ(X) φ(X)^T + λI ) W = I,
where φ(X) = [φ(x_1), . . . , φ(x_n)] is the data matrix consisting of n data points in the feature
space, φ is the feature mapping, L is the normalized Laplacian matrix derived from the hypergraph,
and λ > 0 is the regularization parameter. In this formulation, the instance-label correlations are
encoded into L through the hypergraph, and data points sharing a common label tend to be close to
each other in the embedded space.
It follows from the representer theorem [11] that W = φ(X)B for some matrix B ∈ R^{n×k}, where
k is the number of labels. By noting that L = I − C for some matrix C, the problem in Eq. (1) can
be reformulated as
max_B  tr( B^T (K C K) B )                                              (2)
subject to  B^T ( K² + λK ) B = I,
where K = φ(X)^T φ(X) is the kernel matrix. Kernel canonical correlation analysis (KCCA) [4] is
a widely-used method for dimensionality reduction. It can be shown [4] that KCCA is obtained by
substituting C = Y^T (Y Y^T)^{-1} Y in Eq. (2), where Y ∈ R^{k×n} is the label indicator matrix. Thus,
KCCA is a special case of the proposed formulation.
2.2 A Semi-infinite Linear Program Formulation
It follows from the theory of kernel methods [11] that the kernel K in Eq. (2) uniquely determines the
feature mapping φ. Thus, kernel selection (learning) is one of the central issues in kernel methods.
Following the MKL framework [12], we propose to learn an optimal kernel matrix by integrating
multiple candidate kernel matrices, that is,
K ∈ 𝒦 = { K = Σ_{j=1}^p θ_j K_j : θ^T e = 1, θ ≥ 0 },                   (3)
where {K_j}_{j=1}^p are the p candidate kernel matrices, {θ_j}_{j=1}^p are the weights for the linear combination, and e is the vector of all ones of length p. We have assumed in Eq. (3) that all the candidate
kernel matrices are normalized to have a unit trace value. It has been shown [8] that the optimal
weights maximizing the objective function in Eq. (2) can be obtained by solving a semi-infinite linear program (SILP) [13] in which a linear objective is optimized subject to an infinite number of
linear constraints, as summarized in the following theorem:
Theorem 2.1. Given a set of p kernel matrices {Kj }pj=1 , the optimal kernel matrix in K that maximizes the objective function in Eq. (2) can be obtained by solving the following SILP problem:
max_{θ,γ}  γ                                                            (4)
subject to  θ ≥ 0,  θ^T e = 1,
            Σ_{j=1}^p θ_j S_j(Z) ≥ γ,  for all Z ∈ R^{n×k},             (5)
where S_j(Z), for j = 1, . . . , p, is defined as
S_j(Z) = Σ_{i=1}^k [ (1/4) z_i^T z_i + (1/(4λ)) z_i^T K_j z_i − z_i^T h_i ],    (6)
Z = [z_1, . . . , z_k], H is obtained from C such that HH^T = C, and H = [h_1, . . . , h_k].
Note that the matrix C is symmetric and positive semidefinite. Moreover, for the L considered in
this paper, we have rank(C) = k. Hence, H ∈ R^{n×k} is always well-defined. The SILP formulation
in Theorem 2.1 can be solved by the column generation technique as in [14].
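The quantities in Theorem 2.1 are simple to compute in practice. The following NumPy sketch (our illustration, not code from the paper; function and variable names are ours) assembles a combined kernel from simplex weights θ as in Eq. (3) and evaluates S_j(Z) from Eq. (6):

import numpy as np

def combined_kernel(Ks, theta):
    # K = sum_j theta_j * K_j with theta >= 0 and sum(theta) = 1 (Eq. (3)).
    # Each K_j is assumed pre-normalized to unit trace.
    return sum(t * K for t, K in zip(theta, Ks))

def S_values(Ks, Z, H, lam):
    # S_j(Z) = sum_i [ z_i^T z_i / 4 + z_i^T K_j z_i / (4*lam) - z_i^T h_i ]  (Eq. (6)).
    # Z and H are n-by-k with columns z_i and h_i; returns the vector (S_1, ..., S_p).
    quad = 0.25 * np.sum(Z * Z)     # sum_i z_i^T z_i / 4, independent of j
    cross = np.sum(Z * H)           # sum_i z_i^T h_i
    return np.array([quad + np.trace(Z.T @ K @ Z) / (4.0 * lam) - cross for K in Ks])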
3 The Approximate Formulation
The multi-label kernel learning formulation proposed in Theorem 2.1 involves optimizing a linear
objective subject to an infinite number of constraints. The column generation technique used to solve
this problem adds constraints to the problem successively until all the constraints are satisfied. Since
the convergence rate of this algorithm is slow, the problem solved at each iteration may involve a
large number of constraints, and hence is computationally expensive. In this section, we propose an
approximate formulation by introducing a smoothing term into the original problem. This results in
an unconstrained and smooth convex problem. We propose to employ existing methods to solve the
smooth convex optimization problem efficiently in the next section.
By rewriting the formulation in Theorem 2.1 as
max_{θ: θ^T e = 1, θ ≥ 0}  min_Z  Σ_{j=1}^p θ_j S_j(Z)
and exchanging the minimization and maximization, the SILP formulation can be expressed as
min_Z  f(Z)                                                             (7)
where f (Z) is defined as
f(Z) = max_{θ: θ^T e = 1, θ ≥ 0}  Σ_{j=1}^p θ_j S_j(Z).                 (8)
The maximization problem in Eq. (8) with respect to θ leads to a non-smooth objective function for
f(Z). To reduce this effect, we introduce a smoothing term and modify the objective to f_μ(Z) as
f_μ(Z) = max_{θ: θ^T e = 1, θ ≥ 0} { Σ_{j=1}^p θ_j S_j(Z) − μ Σ_{j=1}^p θ_j log θ_j },    (9)
where μ is a positive constant controlling the approximation. The following lemma shows that the
problem in Eq. (9) can be solved analytically:
Lemma 3.1. The optimization problem in Eq. (9) can be solved analytically, and the optimal value
can be expressed as
f_μ(Z) = μ log( Σ_{j=1}^p exp( S_j(Z)/μ ) ).                            (10)
Proof. Define the Lagrangian function for the optimization problem in Eq. (9) as
L = Σ_{j=1}^p θ_j S_j(Z) − μ Σ_{j=1}^p θ_j log θ_j + Σ_{j=1}^p η_j θ_j + γ( Σ_{j=1}^p θ_j − 1 ),    (11)
where {η_j}_{j=1}^p and γ are Lagrangian dual variables. Taking the derivative of the Lagrangian function with respect to θ_j and setting it to zero, we obtain that θ_j = exp( (S_j(Z) + η_j + γ − μ)/μ ).
It follows from the complementarity condition that η_j θ_j = 0 for j = 1, . . . , p. Since θ_j ≠ 0, we
have η_j = 0 for j = 1, . . . , p. By removing {η_j}_{j=1}^p and substituting θ_j into the objective function
in Eq. (9), we obtain that f_μ(Z) = μ − γ. Since μ − γ = S_j(Z) − μ log θ_j, we have
θ_j = exp( (S_j(Z) − f_μ(Z))/μ ).                                       (12)
Following 1 = Σ_{j=1}^p θ_j = Σ_{j=1}^p exp( (S_j(Z) − f_μ(Z))/μ ), we obtain Eq. (10).
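Lemma 3.1 is the familiar log-sum-exp (soft-max) smoothing of the maximum. A minimal, numerically stable NumPy sketch of Eqs. (10) and (12) (names are ours, not the paper's):

import numpy as np

def f_mu_and_theta(S_vals, mu):
    # S_vals: array of S_j(Z) for j = 1, ..., p; mu > 0 is the smoothing parameter.
    # Returns f_mu(Z) = mu * log(sum_j exp(S_j(Z)/mu))            (Eq. (10))
    # and theta_j = exp((S_j(Z) - f_mu(Z)) / mu), a soft-max      (Eq. (12)).
    s = S_vals / mu
    m = s.max()                                    # shift to avoid overflow
    f_mu = mu * (m + np.log(np.sum(np.exp(s - m))))
    theta = np.exp(s - f_mu / mu)                  # non-negative, sums to 1
    return f_mu, theta

Consistent with Lemma 3.3 below, f_mu computed this way always lies between max_j S_j(Z) and max_j S_j(Z) + μ log p.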
The above discussion shows that we can approximate the original non-smooth constrained min-max
problem in Eq. (7) by the following smooth unconstrained minimization problem:
min_Z  f_μ(Z),                                                          (13)
where f_μ(Z) is defined in Eq. (10). We show in the following two lemmas that the approximate
formulation in Eq. (13) is convex and has a guaranteed approximation bound controlled by μ.
Lemma 3.2. The problem in Eq. (13) is a convex optimization problem.
Proof. The optimization problem in Eq. (13) can be expressed equivalently as
min_{Z, {u_j}_{j=1}^p, {v_j}_{j=1}^p}  μ log( Σ_{j=1}^p exp( u_j + v_j − Σ_{i=1}^k z_i^T h_i ) )    (14)
subject to  u_j ≥ (1/4) Σ_{i=1}^k z_i^T z_i,
            v_j ≥ (1/(4λ)) Σ_{i=1}^k z_i^T K_j z_i,  j = 1, . . . , p.
Since the log-exponential-sum function is a convex function and the two constraints are second-order
cone constraints, the problem in Eq. (13) is a convex optimization problem.
Lemma 3.3. Let f(Z) and f_μ(Z) be defined as above. Then we have f_μ(Z) ≥ f(Z) and |f_μ(Z) − f(Z)| ≤ μ log p.
Proof. The term −Σ_{j=1}^p θ_j log θ_j defines the entropy of {θ_j}_{j=1}^p when it is considered as a probability distribution, since θ ≥ 0 and θ^T e = 1. Hence, this term is non-negative and f_μ(Z) ≥ f(Z). It
is known from the property of entropy that −Σ_{j=1}^p θ_j log θ_j is maximized with a uniform {θ_j}_{j=1}^p,
i.e., θ_j = 1/p for j = 1, . . . , p. Thus, we have −Σ_{j=1}^p θ_j log θ_j ≤ log p and |f_μ(Z) − f(Z)| =
μ( −Σ_{j=1}^p θ_j log θ_j ) ≤ μ log p. This completes the proof of the lemma.
4 Solving the Approximate Formulation Using Nesterov's Method
Nesterov's method (known as "the optimal method" in [5]) is an algorithm for solving smooth
convex problems with the optimal rate of convergence. In this method, the objective function needs
to be differentiable with Lipschitz continuous gradient. In order to apply this method to solve the
proposed approximate formulation, we first compute the Lipschitz constant for the gradient of the function f_μ(Z), as summarized in the following lemma:
Lemma 4.1. Let f_μ(Z) be defined as in Eq. (10). Then the Lipschitz constant L of the gradient of
f_μ(Z) can be bounded from above as
L ≤ L_μ,                                                                (15)
where L_μ is defined as
L_μ = 1/2 + (1/(2λ)) max_{1≤j≤p} λ_max(K_j) + (1/(8μλ²)) tr(Z^T Z) max_{1≤i,j≤p} λ_max( (K_i − K_j)(K_i − K_j)^T ),    (16)
and λ_max(·) denotes the maximum eigenvalue. Moreover, the distance from the origin to the optimal
set of Z can be bounded as tr(Z^T Z) ≤ R_μ², where R_μ² is defined as
R_μ² = ( √( Σ_{i=1}^k ||[C_j]_i||² ) + √( 4μ log p + tr( C_j^T (I + (1/λ) K_j) C_j ) ) )²,    (17)
C_j = 2( I + (1/λ) K_j )^{-1} H, and [C_j]_i denotes the i-th column of C_j.
Proof. To compute the Lipschitz constant for the gradient of f_μ(Z), we first compute the first and
second order derivatives as follows:
∇f_μ(Z) = Σ_{j=1}^p g_j ( vec(Z)/2 + vec(K_j Z)/(2λ) ) − vec(H),        (18)
∇²f_μ(Z) = Σ_{j=1}^p g_j ( I/2 + D_k(K_j)/(2λ) )
         + (1/(8μ)) Σ_{i,j=1}^p g_i g_j ( vec(K_i Z)/λ − vec(K_j Z)/λ )( vec(K_i Z)/λ − vec(K_j Z)/λ )^T,    (19)
where vec(·) converts a matrix into a vector, D_k(K_j) ∈ R^{(n·k)×(n·k)} is a block diagonal matrix
with the kth diagonal block as K_j, and g_j = exp(S_j(Z)/μ) / Σ_{i=1}^p exp(S_i(Z)/μ). Then we have
L ≤ 1/2 + (1/(2λ)) max_{1≤j≤p} λ_max(K_j) + (1/(8μλ²)) max_{1≤i,j≤p} tr( Z^T (K_i − K_j)(K_i − K_j)^T Z ) ≤ L_μ,
where L_μ is defined in Eq. (16).
We next derive the upper bound for tr(Z^T Z). To this end, we first rewrite S_j(Z) as
S_j(Z) = (1/4) tr( (Z − C_j)^T (I + (1/λ) K_j) (Z − C_j) ) − (1/4) tr( C_j^T (I + (1/λ) K_j) C_j ).
Since min f_μ(Z) ≤ f_μ(0) = μ log p, and f_μ(Z) ≥ S_j(Z), we have S_j(Z) ≤ μ log p for j =
1, . . . , p. It follows that (1/4) tr( (Z − C_j)^T (Z − C_j) ) ≤ μ log p + (1/4) tr( C_j^T (I + (1/λ) K_j) C_j ). By using
this inequality, it can be verified that tr(Z^T Z) ≤ R_μ², where R_μ² is defined in Eq. (17).
Nesterov's method for solving the proposed approximate formulation is presented in Table 1.
After the optimal Z is obtained from Nesterov's method, the optimal {θ_j}_{j=1}^p can be computed
from Eq. (12). It follows from the convergence proof in [5] that after N iterations, as long as
f_μ(X^i) ≤ f_μ(X^0) for i = 1, . . . , N, we have
f_μ(Z^{N+1}) − f_μ(Z*) ≤ 4 L_μ R_μ² / (N + 1)²,                         (20)
Table 1: Nesterov's method for solving the proposed multi-label MKL formulation.
- Initialize X^0 = Z^1 = Q^0 = 0 ∈ R^{n×k}, t_0 = 1, L_0 = 1/2 + (1/(2λ)) max_{1≤j≤p} λ_max(K_j), and
  μ = 1/N, where N is the predefined number of iterations
- for i = 1, . . . , N do
  - Set X^i = Z^i − (1/t_{i−1}) (Z^i + Q^{i−1})
  - Compute f_μ(X^i) and ∇f_μ(X^i)
  - Set L = L_{i−1}
  - while f_μ(X^i − ∇f_μ(X^i)/L) > f_μ(X^i) − (1/(2L)) tr( (∇f_μ(X^i))^T ∇f_μ(X^i) ) do
    - L = L × 2
  - end while
  - Set L_i = L
  - Set Z^{i+1} = X^i − (1/L_i) ∇f_μ(X^i),  Q^i = Q^{i−1} + (t_{i−1}/L_i) ∇f_μ(X^i)
  - Set t_i = (1/2) ( 1 + √(1 + 4 t_{i−1}²) )
- end for
where Z* = argmin_Z f_μ(Z). Furthermore, since f_μ(Z^{N+1}) ≥ f(Z^{N+1}) and f_μ(Z*) ≤ f(Z*) +
μ log p, we have
f(Z^{N+1}) − f(Z*) ≤ μ log p + 4 L_μ R_μ² / (N + 1)².                   (21)
By setting μ = O(1/N), we have that L_μ ∼ O(1/μ) ∼ O(N). Hence, the convergence rate of
Nesterov's method is on the order of O(1/N). This is significantly better than the convergence rates
of O(1/N^{1/3}) and O(1/N^{1/2}) for the SILP and the gradient descent method, respectively.
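A direct translation of the loop in Table 1 into NumPy is short; the sketch below is ours (f and grad_f stand for f_μ and ∇f_μ of Eqs. (10) and (18), and L0 is the initial Lipschitz estimate from Table 1):

import numpy as np

def nesterov(f, grad_f, shape, L0, N):
    # Nesterov's method with the doubling line search of Table 1.
    X = np.zeros(shape); Z = np.zeros(shape); Q = np.zeros(shape)
    t, L = 1.0, L0
    for _ in range(N):
        X = Z - (Z + Q) / t                      # X^i = Z^i - (Z^i + Q^{i-1}) / t_{i-1}
        fX, g = f(X), grad_f(X)
        while f(X - g / L) > fX - np.sum(g * g) / (2.0 * L):
            L *= 2.0                             # grow L until the descent condition holds
        Z = X - g / L                            # Z^{i+1}
        Q = Q + (t / L) * g                      # Q^i
        t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    return Z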
5 Experiments
In this section, we evaluate the proposed formulation on the automated annotation of gene expression
pattern images. The performance of the approximate formulation is also validated.
Experimental Setup The experiments use a collection of gene expression pattern images retrieved
from the FlyExpress database (http://www.flyexpress.net). We apply nine local descriptors (SIFT, shape context, PCA-SIFT, spin image, steerable filters, differential invariants, complex
filters, moment invariants, and cross correlation) on regular grids of 16 and 32 pixels in radius and
spacing on each image. These local descriptors are commonly used in computer vision problems
[15]. We also apply Gabor filters with different wavelet scales and filter orientations on each image
to obtain global features of 384 and 2592 dimensions. Moreover, we sample the pixel values of each
image to obtain features of 10240, 2560, and 640 dimensions. After generating the features, we
apply the vocabulary-guided pyramid match algorithm [9] to construct kernels between the image
sets. A total of 23 kernel matrices (2 grid sizes × 9 local descriptors + 2 Gabor + 3 pixel) are constructed. Then the proposed MKL formulation is employed to obtain the optimal integrated kernel
matrix based on which the low-dimensional embedding is computed. We use the expansion-based
approach (star and clique) to construct the hypergraph Laplacian, since it has been shown [1] that
the Laplacians constructed in this way are similar to those obtained directly from a hypergraph. The
performance of kernel matrices (either single or integrated) is evaluated by applying the support
vector machine (SVM) for each term using the one-against-rest scheme. The F1 score is used as
the performance measure, and both macro-averaged and micro-averaged F1 scores across labels are
reported. In each case, the entire data set is randomly partitioned into training and test sets with a
ratio of 1:1. This process is repeated ten times, and the averaged performance is reported.
Performance Evaluation It can be observed from Tables 2 and 3 that in terms of both macro and
micro F1 scores, the kernels integrated by either star or clique expansions achieve the highest performance on almost all of the data sets. In particular, the integrated kernels outperform the best
individual kernel significantly on all data sets. This shows that the proposed formulation is effective
Table 2: Performance of integrated kernels and the best individual kernel (denoted as BIK) in terms
of macro F1 score. The numbers of terms used are 20, 30, and 40, and the numbers of image sets
used are 1000, 1500, and 2000. "SILP", "APP", "SVM1", and "Uniform" denote the performance
of kernels combined with the SILP formulation, the approximate formulation, the 1-norm SVM formulation proposed in [12] applied for each label separately, and the case where all kernels are given
the same weight, respectively. The subscripts "star" and "clique" denote the way the Laplacian is
constructed, and "KCCA" denotes the case where C = Y^T (Y Y^T)^{-1} Y.
# of labels  |          20          |          30          |          40
# of sets    |  1000   1500   2000  |  1000   1500   2000  |  1000   1500   2000
SILPstar     | 0.4396 0.4903 0.4575 | 0.3852 0.4437 0.4162 | 0.3768 0.4019 0.3927
SILPclique   | 0.4536 0.5125 0.4926 | 0.4065 0.4747 0.4563 | 0.4145 0.4346 0.4283
SILPKCCA     | 0.3987 0.4635 0.4477 | 0.3497 0.4240 0.4063 | 0.3538 0.3872 0.3759
APPstar      | 0.4404 0.4930 0.4703 | 0.3896 0.4494 0.4267 | 0.3900 0.4100 0.3983
APPclique    | 0.4510 0.5125 0.4917 | 0.4060 0.4741 0.4563 | 0.4180 0.4338 0.4281
APPKCCA      | 0.4029 0.4805 0.4586 | 0.3571 0.4313 0.4146 | 0.3642 0.3914 0.3841
SVM1         | 0.3780 0.4640 0.4356 | 0.3523 0.4352 0.4200 | 0.3741 0.4048 0.3955
Uniform      | 0.3727 0.4703 0.4480 | 0.3513 0.4410 0.4191 | 0.3719 0.4111 0.3986
BIK          | 0.4241 0.4515 0.4344 | 0.3782 0.4312 0.3996 | 0.3914 0.3954 0.3827
Table 3: Performance in terms of micro F1 score. See the caption of Table 2 for explanations.
# of labels  |          20          |          30          |          40
# of sets    |  1000   1500   2000  |  1000   1500   2000  |  1000   1500   2000
SILPstar     | 0.4861 0.5199 0.4847 | 0.4472 0.4837 0.4473 | 0.4277 0.4470 0.4305
SILPclique   | 0.5039 0.5422 0.5247 | 0.4682 0.5127 0.4894 | 0.4610 0.4796 0.4660
SILPKCCA     | 0.4581 0.4994 0.4887 | 0.4209 0.4737 0.4532 | 0.4095 0.4420 0.4271
APPstar      | 0.4852 0.5211 0.4973 | 0.4484 0.4875 0.4582 | 0.4355 0.4541 0.4346
APPclique    | 0.5013 0.5421 0.5239 | 0.4673 0.5124 0.4894 | 0.4633 0.4793 0.4658
APPKCCA      | 0.4612 0.5174 0.5018 | 0.4299 0.4828 0.4605 | 0.4194 0.4488 0.4350
SVM1         | 0.4361 0.5024 0.4844 | 0.4239 0.4844 0.4632 | 0.3947 0.4234 0.4188
Uniform      | 0.4390 0.5096 0.4975 | 0.4242 0.4939 0.4683 | 0.3999 0.4358 0.4226
BIK          | 0.4614 0.4735 0.4562 | 0.4189 0.4484 0.4178 | 0.3869 0.3905 0.3781
in combining multiple kernels and exploiting the complementary information contained in different
kernels constructed from various features. Moreover, the proposed formulation based on a hypergraph outperforms the classical KCCA consistently.
SILP versus the Approximate Formulation In terms of classification performance, we can observe
from Tables 2 and 3 that the SILP and the approximate formulations are similar. More precisely,
the approximate formulations perform slightly better than SILP in almost all cases. This may be
due to the smoothness nature of the formulations and the simplicity of the computational procedure
employed in Nesterov's method, so that it is less prone to numerical problems. Figure 1 compares
the computation time and the kernel weights of SILPstar and APPstar . It can be observed that in
general the approximate formulation is significantly faster than SILP, especially when the number
of labels and the number of image sets are large, while they both yield very similar kernel weights.
6 Conclusions and Future Work
We present a multi-label learning formulation that incorporates instance-label correlations by a hypergraph. We formulate the problem in the kernel-induced feature space and propose to learn the
kernel matrix in the MKL framework. The resulting formulation leads to a non-smooth min-max
problem, and it can be cast as an SILP. We propose an approximate formulation by introducing a
smoothing term and show that the resulting formulation is an unconstrained convex problem that can
be solved by Nesterov's method. We demonstrate the effectiveness and efficiency of the method
on the task of automated annotation of gene expression pattern images.
[Figure 1 here. (a) Comparison of computation time — y-axis: Computation time (in seconds), 0-400; x-axis: the number of labels and the number of image sets, 20(1000) through 40(2000). (b) Comparison of kernel weights — y-axis: Weight for kernels, 0-0.4; x-axis: Kernel number, 1-23. Both panels compare SILPstar and APPstar.]
Figure 1: Comparison of computation time and kernel weights for SILPstar and APPstar . The left
panel plots the computation time of two formulations on one partition of the data set as the number
of labels and image sets increase gradually, and the right panel plots the weights assigned to each of
the 23 kernels by SILPstar and APPstar on a data set of 40 labels and 1000 image sets.
The experiments in this paper focus on the annotation of gene expression pattern images. The
proposed formulation can also be applied to the task of multiple object recognition in computer
vision. We plan to pursue other applications in the future. Experimental results indicate that the
best individual kernel may not lead to a large weight by the proposed MKL formulation. We plan to
perform a detailed analysis of the weights in the future.
Acknowledgements
This work is supported in part by research grants from National Institutes of Health (HG002516 and
1R01-GM079688-01) and National Science Foundation (IIS-0612069 and IIS-0643494).
References
[1] S. Agarwal, K. Branson, and S. Belongie. Higher order learning with graphs. In ICML, pages 17-24, 2006.
[2] D. Zhou, J. Huang, and B. Schölkopf. Learning with hypergraphs: Clustering, classification, and embedding. In NIPS, pages 1601-1608. 2007.
[3] Z. H. Zhou and M. L. Zhang. Multi-instance multi-label learning with application to scene classification. In NIPS, pages 1609-1616. 2007.
[4] D. R. Hardoon, S. R. Szedmak, and J. R. Shawe-Taylor. Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12):2639-2664, 2004.
[5] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2003.
[6] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, 2005.
[7] P. Tomancak et al. Systematic determination of patterns of gene expression during Drosophila embryogenesis. Genome Biology, 3(12), 2002.
[8] S. Ji, L. Sun, R. Jin, S. Kumar, and J. Ye. Automated annotation of Drosophila gene expression patterns using a controlled vocabulary. Bioinformatics, 24(17):1881-1888, 2008.
[9] K. Grauman and T. Darrell. Approximate correspondences in high dimensions. In NIPS, pages 505-512. 2006.
[10] F. R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[11] B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond. MIT Press, 2002.
[12] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. E. Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27-72, 2004.
[13] R. Hettich and K. O. Kortanek. Semi-infinite programming: Theory, methods, and applications. SIAM Review, 35(3):380-429, 1993.
[14] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. Journal of Machine Learning Research, 7:1531-1565, July 2006.
[15] K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10):1615-1630, 2005.
2,840 | 3,575 | Deflation Methods for Sparse PCA
Lester Mackey
Computer Science Division
University of California, Berkeley
Berkeley, CA 94703
Abstract
In analogy to the PCA setting, the sparse PCA problem is often solved by iteratively alternating between two subtasks: cardinality-constrained rank-one variance maximization and matrix deflation. While the former has received a great
deal of attention in the literature, the latter is seldom analyzed and is typically
borrowed without justification from the PCA context. In this work, we demonstrate that the standard PCA deflation procedure is seldom appropriate for the
sparse PCA setting. To rectify the situation, we first develop several deflation alternatives better suited to the cardinality-constrained context. We then reformulate
the sparse PCA optimization problem to explicitly reflect the maximum additional
variance objective on each round. The result is a generalized deflation procedure
that typically outperforms more standard techniques on real-world datasets.
1 Introduction
Principal component analysis (PCA) is a popular change of variables technique used in data compression, predictive modeling, and visualization. The goal of PCA is to extract several principal
components, linear combinations of input variables that together best account for the variance in a
data set. Often, PCA is formulated as an eigenvalue decomposition problem: each eigenvector of
the sample covariance matrix of a data set corresponds to the loadings or coefficients of a principal
component. A common approach to solving this partial eigenvalue decomposition is to iteratively
alternate between two subproblems: rank-one variance maximization and matrix deflation. The first
subproblem involves finding the maximum-variance loadings vector for a given sample covariance
matrix or, equivalently, finding the leading eigenvector of the matrix. The second involves modifying
the covariance matrix to eliminate the influence of that eigenvector.
A primary drawback of PCA is its lack of sparsity. Each principal component is a linear combination
of all variables, and the loadings are typically non-zero. Sparsity is desirable as it often leads to
more interpretable results, reduced computation time, and improved generalization. Sparse PCA
[8, 3, 16, 17, 6, 18, 1, 2, 9, 10, 12] injects sparsity into the PCA process by searching for "pseudo-eigenvectors", sparse loadings that explain a maximal amount of variance in the data.
In analogy to the PCA setting, many authors attempt to solve the sparse PCA problem by iteratively alternating between two subtasks: cardinality-constrained rank-one variance maximization
and matrix deflation. The former is an NP-hard problem, and a variety of relaxations and approximate solutions have been developed in the literature [1, 2, 9, 10, 12, 16, 17]. The latter subtask
has received relatively little attention and is typically borrowed without justification from the PCA
context. In this work, we demonstrate that the standard PCA deflation procedure is seldom appropriate for the sparse PCA setting. To rectify the situation, we first develop several heuristic deflation
alternatives with more desirable properties. We then reformulate the sparse PCA optimization problem to explicitly reflect the maximum additional variance objective on each round. The result is a
generalized deflation procedure that typically outperforms more standard techniques on real-world
datasets.
The remainder of the paper is organized as follows. In Section 2 we discuss matrix deflation as it relates to PCA and sparse PCA. We examine the failings of typical PCA deflation in the sparse setting
and develop several alternative deflation procedures. In Section 3, we present a reformulation of the
standard iterative sparse PCA optimization problem and derive a generalized deflation procedure
to solve the reformulation. Finally, in Section 4, we demonstrate the utility of our newly derived
deflation techniques on real-world datasets.
Notation
I is the identity matrix. S^p_+ is the set of all symmetric, positive semidefinite matrices in R^{p×p}.
Card(x) represents the cardinality of, or number of non-zero entries in, the vector x.
2 Deflation methods
A matrix deflation modifies a matrix to eliminate the influence of a given eigenvector, typically by
setting the associated eigenvalue to zero (see [14] for a more detailed discussion). We will first
discuss deflation in the context of PCA and then consider its extension to sparse PCA.
2.1 Hotelling's deflation and PCA
In the PCA setting, the goal is to extract the r leading eigenvectors of the sample covariance matrix,
A_0 ∈ S^p_+, as its eigenvectors are equivalent to the loadings of the first r principal components.
Hotelling's deflation method [11] is a simple and popular technique for sequentially extracting these
eigenvectors. On the t-th iteration of the deflation method, we first extract the leading eigenvector
of A_{t−1},
x_t = argmax_{x: x^T x = 1}  x^T A_{t−1} x                              (1)
and we then use Hotelling's deflation to annihilate x_t:
A_t = A_{t−1} − x_t x_t^T A_{t−1} x_t x_t^T.                            (2)
The deflation step ensures that the (t+1)-st leading eigenvector of A_0 is the leading eigenvector of
A_t. The following proposition explains why.
Proposition 2.1. If λ_1 ≥ . . . ≥ λ_p are the eigenvalues of A ∈ S^p_+, x_1, . . . , x_p are the corresponding
eigenvectors, and Â = A − x_j x_j^T A x_j x_j^T for some j ∈ 1, . . . , p, then Â has eigenvectors x_1, . . . , x_p
with corresponding eigenvalues λ_1, . . . , λ_{j−1}, 0, λ_{j+1}, . . . , λ_p.
Proof.
Â x_j = A x_j − x_j x_j^T A x_j x_j^T x_j = A x_j − x_j x_j^T A x_j = λ_j x_j − λ_j x_j = 0 x_j.
Â x_i = A x_i − x_j x_j^T A x_j x_j^T x_i = A x_i − 0 = λ_i x_i,  ∀i ≠ j.
Thus, Hotelling's deflation preserves all eigenvectors of a matrix and annihilates a selected eigenvalue while maintaining all others. Notably, this implies that Hotelling's deflation preserves positive-semidefiniteness. In the case of our iterative deflation method, annihilating the t-th leading eigenvector of A_0 renders the (t+1)-st leading eigenvector dominant in the next round.
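In code, the PCA iteration of Eqs. (1) and (2) takes only a few lines; a NumPy sketch (ours, not from the paper):

import numpy as np

def pca_via_hotelling(A, r):
    # Extract the r leading eigenvectors of A, deflating with Eq. (2) after each round.
    A = A.copy()
    vecs = []
    for _ in range(r):
        _, V = np.linalg.eigh(A)                 # eigh returns ascending eigenvalues
        x = V[:, -1]                             # leading eigenvector of the current matrix
        vecs.append(x)
        A = A - (x @ A @ x) * np.outer(x, x)     # A <- A - x x^T A x x^T
    return np.column_stack(vecs)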
2.2 Hotelling's deflation and sparse PCA
In the sparse PCA setting, we seek r sparse loadings which together capture the maximum amount
of variance in the data. Most authors [1, 9, 16, 12] adopt the additional constraint that the loadings
be produced in a sequential fashion. To find the first such "pseudo-eigenvector", we can consider a
cardinality-constrained version of Eq. (1):
x_1 = argmax_{x: x^T x = 1, Card(x) ≤ k_1}  x^T A_0 x.                  (3)
That leaves us with the question of how to best extract subsequent pseudo-eigenvectors. A common
approach in the literature [1, 9, 16, 12] is to borrow the iterative deflation method of the PCA setting.
Typically, Hotelling's deflation is utilized by substituting an extracted pseudo-eigenvector for a true
eigenvector in the deflation step of Eq. (2). This substitution, however, is seldom justified, for the
properties of Hotelling's deflation, discussed in Section 2.1, depend crucially on the use of a true
eigenvector.
To see what can go wrong when Hotelling's deflation is applied to a non-eigenvector, consider the
following example.
Example. Let C = [ 2 1 ; 1 1 ], a 2 × 2 matrix. The eigenvalues of C are λ_1 = 2.6180 and λ_2 = 0.3820. Let x = (1, 0)^T, a sparse pseudo-eigenvector, and Ĉ = C − x x^T C x x^T, the corresponding deflated matrix. Then Ĉ = [ 0 1 ; 1 1 ] with eigenvalues λ̂_1 = 1.6180 and λ̂_2 = −0.6180. Thus, Hotelling's deflation does not in general preserve positive-semidefiniteness when applied to a non-eigenvector.
That S^p_+ is not closed under pseudo-eigenvector Hotelling's deflation is a serious failing, for most
iterative sparse PCA methods assume a positive-semidefinite matrix on each iteration. A second,
related shortcoming of pseudo-eigenvector Hotelling's deflation is its failure to render a pseudo-eigenvector orthogonal to a deflated matrix. If A is our matrix of interest, x is our pseudo-eigenvector
with variance λ = x^T A x, and Â = A − x x^T A x x^T is our deflated matrix, then Âx = Ax −
x x^T A x x^T x = Ax − λx is zero iff x is a true eigenvector. Thus, even though the "variance" of
x w.r.t. Â is zero (x^T Â x = x^T A x − x^T x x^T A x x^T x = λ − λ = 0), "covariances" of the form
y^T Â x for y ≠ x may still be non-zero. This violation of the Cauchy-Schwarz inequality betrays a
lack of positive-semidefiniteness and may encourage the reappearance of x as a component of future
pseudo-eigenvectors.
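Both failures are easy to confirm numerically; this snippet (a sanity check we added, not part of the original paper) reproduces the 2 × 2 example above:

import numpy as np

C = np.array([[2.0, 1.0], [1.0, 1.0]])
x = np.array([1.0, 0.0])                      # sparse pseudo-eigenvector
C_hat = C - (x @ C @ x) * np.outer(x, x)      # Hotelling's deflation, Eq. (2)
print(np.linalg.eigvalsh(C_hat))              # ~[-0.618, 1.618]: not positive-semidefinite
print(C_hat @ x)                              # [0., 1.]: x is not annihilated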
2.3 Alternative deflation techniques
In this section, we will attempt to rectify the failings of pseudo-eigenvector Hotelling's deflation by
considering several alternative deflation techniques better suited to the sparse PCA setting. Note
that any deflation-based sparse PCA method (e.g. [1, 9, 16, 12]) can utilize any of the deflation
techniques discussed below.
2.3.1 Projection deflation
Given a data matrix Y ∈ R^{n×p} and an arbitrary unit vector x ∈ R^p, an intuitive way to remove
the contribution of x from Y is to project Y onto the orthocomplement of the space spanned by x:
Ŷ = Y (I − x x^T). If A is the sample covariance matrix of Y, then the sample covariance of Ŷ is
given by Â = (I − x x^T) A (I − x x^T), which leads to our formulation for projection deflation:
Projection deflation
A_t = A_{t−1} − x_t x_t^T A_{t−1} − A_{t−1} x_t x_t^T + x_t x_t^T A_{t−1} x_t x_t^T = (I − x_t x_t^T) A_{t−1} (I − x_t x_t^T)    (4)
Note that when x_t is a true eigenvector of A_{t−1} with eigenvalue λ_t, projection deflation reduces to
Hotelling's deflation:
A_t = A_{t−1} − x_t x_t^T A_{t−1} − A_{t−1} x_t x_t^T + x_t x_t^T A_{t−1} x_t x_t^T
    = A_{t−1} − λ_t x_t x_t^T − λ_t x_t x_t^T + λ_t x_t x_t^T
    = A_{t−1} − x_t x_t^T A_{t−1} x_t x_t^T.
However, in the general case, when x_t is not a true eigenvector, projection deflation maintains the
desirable properties that were lost to Hotelling's deflation. For example, positive-semidefiniteness
is preserved:
∀y,  y^T A_t y = y^T (I − x_t x_t^T) A_{t−1} (I − x_t x_t^T) y = z^T A_{t−1} z
where z = (I − x_t x_t^T) y. Thus, if A_{t−1} ∈ S^p_+, so is A_t. Moreover, A_t is rendered left and right
orthogonal to x_t, as (I − x_t x_t^T) x_t = x_t − x_t = 0 and A_t is symmetric. Projection deflation therefore
annihilates all covariances with x_t: ∀v, v^T A_t x_t = x_t^T A_t v = 0.
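In code, projection deflation is one line of linear algebra (a sketch we added):

import numpy as np

def projection_deflation(A, x):
    # A_t = (I - x x^T) A_{t-1} (I - x x^T), with x assumed unit-norm (Eq. (4)).
    P = np.eye(len(x)) - np.outer(x, x)
    return P @ A @ P    # stays symmetric, and PSD whenever A is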
2.3.2 Schur complement deflation
Since our goal in matrix deflation is to eliminate the influence, as measured through variance and
covariances, of a newly discovered pseudo-eigenvector, it is reasonable to consider the conditional
variance of our data variables given a pseudo-principal component. While this conditional variance
is non-trivial to compute in general, it takes on a simple closed form when the variables are normally
distributed. Let x ∈ R^p be a unit vector and W ∈ R^p be a Gaussian random vector, representing the
joint distribution of the data variables. If W has covariance matrix Σ, then (W, Wx) has covariance
matrix
V = [ Σ, Σx ; x^T Σ, x^T Σ x ],
and Var(W | Wx) = Σ − (Σ x x^T Σ)/(x^T Σ x) whenever x^T Σ x ≠ 0 [15].
That is, the conditional variance is the Schur complement of the vector variance x^T Σ x in the full
covariance matrix V . By substituting sample covariance matrices for their population counterparts,
we arrive at a new deflation technique:
Schur complement deflation
A_t = A_{t−1} − (A_{t−1} x_t x_t^T A_{t−1}) / (x_t^T A_{t−1} x_t)       (5)
Schur complement deflation, like projection deflation, preserves positive-semidefiniteness. To
see this, suppose A_{t−1} ∈ S^p_+. Then, ∀v,
v^T A_t v = v^T A_{t−1} v − (v^T A_{t−1} x_t x_t^T A_{t−1} v)/(x_t^T A_{t−1} x_t) ≥ 0,
as v^T A_{t−1} v · x_t^T A_{t−1} x_t ≥ (v^T A_{t−1} x_t)² ≥ 0 by the Cauchy-Schwarz inequality and x_t^T A_{t−1} x_t ≥ 0
as A_{t−1} ∈ S^p_+.
Furthermore, Schur complement deflation renders x_t left and right orthogonal to A_t, since A_t is
symmetric and A_t x_t = A_{t−1} x_t − (A_{t−1} x_t x_t^T A_{t−1} x_t)/(x_t^T A_{t−1} x_t) = A_{t−1} x_t − A_{t−1} x_t = 0.
Additionally, Schur complement deflation reduces to Hotelling's deflation when x_t is an eigenvector
of A_{t−1} with eigenvalue λ_t ≠ 0:
A_t = A_{t−1} − (A_{t−1} x_t x_t^T A_{t−1})/(x_t^T A_{t−1} x_t)
    = A_{t−1} − (λ_t x_t x_t^T λ_t)/λ_t
    = A_{t−1} − x_t x_t^T A_{t−1} x_t x_t^T.
While we motivated Schur complement deflation with a Gaussianity assumption, the technique admits a more general interpretation as a column projection of a data matrix. Suppose Y ∈ R^{n×p}
is a mean-centered data matrix, x ∈ R^p has unit norm, and Ŷ = (I − (Y x x^T Y^T)/||Y x||²) Y, the projection
of the columns of Y onto the orthocomplement of the space spanned by the pseudo-principal component, Y x. If Y has sample covariance matrix A, then the sample covariance of Ŷ is given by
Â = (1/n) Y^T (I − (Y x x^T Y^T)/||Y x||²)^T (I − (Y x x^T Y^T)/||Y x||²) Y = (1/n) Y^T (I − (Y x x^T Y^T)/||Y x||²) Y = A − (A x x^T A)/(x^T A x).
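A corresponding NumPy sketch of Eq. (5) (ours; it assumes A is symmetric, so that x^T A = (Ax)^T):

import numpy as np

def schur_complement_deflation(A, x):
    # A_t = A_{t-1} - A_{t-1} x x^T A_{t-1} / (x^T A_{t-1} x)   (Eq. (5))
    Ax = A @ x
    return A - np.outer(Ax, Ax) / (x @ Ax)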
2.3.3 Orthogonalized deflation
While projection deflation and Schur complement deflation address the concerns raised by performing a single deflation in the non-eigenvector setting, new difficulties arise when we attempt to
sequentially deflate a matrix with respect to a series of non-orthogonal pseudo-eigenvectors.
Whenever we deal with a sequence of non-orthogonal vectors, we must take care to distinguish
between the variance explained by a vector and the additional variance explained, given all previous vectors. These concepts are equivalent in the PCA setting, as true eigenvectors of a matrix
are orthogonal, but, in general, the vectors extracted by sparse PCA will not be orthogonal. The
additional variance explained by the t-th pseudo-eigenvector, xt , is equivalent to the variance explained by the component of xt orthogonal to the space spanned by all previous pseudo-eigenvectors,
q_t = x_t − P_{t−1} x_t, where P_{t−1} is the orthogonal projection onto the space spanned by x_1, . . . , x_{t−1}.
On each deflation step, therefore, we only want to eliminate the variance associated with q_t. Annihilating the full vector x_t will often lead to "double counting" and could re-introduce components
parallel to previously annihilated vectors. Consider the following example:
Example. Let C_0 = I. If we apply projection deflation w.r.t. x_1 = (√2/2, √2/2)^T, the result is
C_1 = [ 1/2 −1/2 ; −1/2 1/2 ], and x_1 is orthogonal to C_1. If we next apply projection deflation to C_1 w.r.t.
x_2 = (1, 0)^T, the result, C_2 = [ 0 0 ; 0 1/2 ], is no longer orthogonal to x_1.
The authors of [12] consider this issue of non-orthogonality in the context of Hotelling's deflation.
Their modified deflation procedure is equivalent to Hotelling's deflation (Eq. (2)) for t = 1 and can
be easily expressed in terms of a running Gram-Schmidt decomposition for t > 1:
Orthogonalized Hotelling's deflation (OHD)
q_t = (I − Q_{t−1} Q_{t−1}^T) x_t / ||(I − Q_{t−1} Q_{t−1}^T) x_t||
A_t = A_{t−1} − q_t q_t^T A_{t−1} q_t q_t^T                             (6)
where q_1 = x_1, and q_1, . . . , q_{t−1} form the columns of Q_{t−1}. Since q_1, . . . , q_{t−1} form an orthonormal
basis for the space spanned by x_1, . . . , x_{t−1}, we have that Q_{t−1} Q_{t−1}^T = P_{t−1}, the aforementioned
orthogonal projection.
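The orthogonalization is a running Gram-Schmidt step that can wrap any single-vector deflation; a sketch (ours), where deflate is, e.g., Hotelling's deflation for OHD or projection_deflation above for the orthogonalized projection variant:

import numpy as np

def orthogonalized_deflation(A, x, Q, deflate):
    # Eq. (6) style round: orthogonalize x against the columns of Q, normalize, deflate.
    q = x - Q @ (Q.T @ x)            # Q starts as np.empty((p, 0)) on round 1
    q = q / np.linalg.norm(q)
    return deflate(A, q), np.column_stack([Q, q])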
Since the first round of OHD is equivalent to a standard application of Hotelling's deflation, OHD
inherits all of the weaknesses discussed in Section 2.2. However, the same principles may be applied
to projection deflation to generate an orthogonalized variant that inherits its desirable properties.
Schur complement deflation is unique in that it preserves orthogonality in all subsequent rounds.
That is, if a vector v is orthogonal to A_{t−1} for any t, then
A_t v = A_{t−1} v − (A_{t−1} x_t x_t^T A_{t−1} v)/(x_t^T A_{t−1} x_t) = 0,
as A_{t−1} v = 0. This further implies the following proposition.
Proposition 2.2. Orthogonalized Schur complement deflation is equivalent to Schur complement
deflation.
Proof. Consider the t-th round of Schur complement deflation. We may write x_t = o_t + p_t, where
p_t is in the subspace spanned by all previously extracted pseudo-eigenvectors and o_t is orthogonal
to this subspace. Then we know that A_{t−1} p_t = 0, as p_t is a linear combination of x_1, . . . , x_{t−1},
and A_{t−1} x_i = 0, ∀i < t. Thus, x_t^T A_{t−1} x_t = p_t^T A_{t−1} p_t + o_t^T A_{t−1} p_t + p_t^T A_{t−1} o_t + o_t^T A_{t−1} o_t = o_t^T A_{t−1} o_t.
Further, A_{t−1} x_t x_t^T A_{t−1} = A_{t−1} p_t p_t^T A_{t−1} + A_{t−1} p_t o_t^T A_{t−1} + A_{t−1} o_t p_t^T A_{t−1} + A_{t−1} o_t o_t^T A_{t−1} =
A_{t−1} o_t o_t^T A_{t−1}. Hence,
A_t = A_{t−1} − (A_{t−1} o_t o_t^T A_{t−1})/(o_t^T A_{t−1} o_t) = A_{t−1} − (A_{t−1} q_t q_t^T A_{t−1})/(q_t^T A_{t−1} q_t),
as q_t = o_t/||o_t||.
Table 1 compares the properties of the various deflation techniques studied in this section.
Method             | x_t^T A_t x_t = 0 | A_t x_t = 0 | A_t ∈ S^p_+ | A_s x_t = 0, ∀s > t
Hotelling's        |        ✓          |      ✗      |      ✗      |         ✗
Projection         |        ✓          |      ✓      |      ✓      |         ✗
Schur complement   |        ✓          |      ✓      |      ✓      |         ✓
Orth. Hotelling's  |        ✓          |      ✗      |      ✗      |         ✗
Orth. Projection   |        ✓          |      ✓      |      ✓      |         ✓
Table 1: Summary of sparse PCA deflation method properties
3 Reformulating sparse PCA
In the previous section, we focused on heuristic deflation techniques that allowed us to reuse the
cardinality-constrained optimization problem of Eq. (3). In this section, we explore a more principled alternative: reformulating the sparse PCA optimization problem to explicitly reflect our maximization objective on each round.
Recall that the goal of sparse PCA is to find r cardinality-constrained pseudo-eigenvectors which
together explain the most variance in the data. If we additionally constrain the sparse loadings to
be generated sequentially, as in the PCA setting and the previous section, then a greedy approach of
maximizing the additional variance of each new vector naturally suggests itself.
On round t, the additional variance of a vector x is given by (q^T A_0 q)/(q^T q), where A_0 is the data covariance matrix, q = (I − P_{t−1})x, and P_{t−1} is the projection onto the space spanned by previous
pseudo-eigenvectors x_1, . . . , x_{t−1}. As q^T q = x^T (I − P_{t−1})(I − P_{t−1})x = x^T (I − P_{t−1})x, maximizing additional variance is equivalent to solving a cardinality-constrained maximum generalized
eigenvalue problem,
max_x  x^T (I − P_{t−1}) A_0 (I − P_{t−1}) x
subject to  x^T (I − P_{t−1}) x = 1,
            Card(x) ≤ k_t.                                              (7)
If we let q_s = (I − P_{s−1}) x_s, ∀s ≤ t − 1, then q_1, . . . , q_{t−1} form an orthonormal basis for the space
spanned by x_1, . . . , x_{t−1}. Writing I − P_{t−1} = I − Σ_{s=1}^{t−1} q_s q_s^T = Π_{s=1}^{t−1} (I − q_s q_s^T) suggests a
generalized deflation technique that leads to the solution of Eq. (7) on each round. We imbed the
technique into the following algorithm for sparse PCA:
Algorithm 1 Generalized Deflation Method for Sparse PCA
Given: A_0 ∈ S^p_+, r ∈ N, {k_1, . . . , k_r} ⊂ N
Execute:
1. B_0 ← I
2. For t := 1, . . . , r
   - x_t ← argmax_{x: x^T B_{t−1} x = 1, Card(x) ≤ k_t}  x^T A_{t−1} x
   - q_t ← B_{t−1} x_t
   - A_t ← (I − q_t q_t^T) A_{t−1} (I − q_t q_t^T)
   - B_t ← B_{t−1} (I − q_t q_t^T)
   - x_t ← x_t / ||x_t||
Return: {x_1, . . . , x_r}
Adding a cardinality constraint to a maximum eigenvalue problem renders the optimization problem
NP-hard [10], but any of several leading sparse eigenvalue methods, including GSLDA of [10],
DCPCA of [12], and DSPCA of [1] (with a modified trace constraint), can be adapted to solve this
cardinality-constrained generalized eigenvalue problem.
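A sketch of Algorithm 1 in NumPy (ours), with the cardinality-constrained subproblem left abstract: sparse_gev(A, B, k) must return argmax x^T A x subject to x^T B x = 1 and Card(x) ≤ k, e.g. via an adapted GSLDA, DCPCA, or DSPCA solver (not implemented here):

import numpy as np

def generalized_deflation_spca(A0, ks, sparse_gev):
    p = A0.shape[0]
    A, B = A0.copy(), np.eye(p)
    loadings = []
    for k in ks:
        x = sparse_gev(A, B, k)                # cardinality-constrained generalized EV
        q = B @ x                              # q_t = B_{t-1} x_t
        P = np.eye(p) - np.outer(q, q)
        A = P @ A @ P                          # A_t = (I - q q^T) A_{t-1} (I - q q^T)
        B = B @ P                              # B_t = B_{t-1} (I - q q^T)
        loadings.append(x / np.linalg.norm(x))
    return np.column_stack(loadings)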
4 Experiments
In this section, we present several experiments on real world datasets to demonstrate the value added
by our newly derived deflation techniques. We run our experiments with Matlab implementations
of DCPCA [12] (with the continuity correction of [9]) and GSLDA [10], fitted with each of the
following deflation techniques: Hotelling's (HD), projection (PD), Schur complement (SCD), orthogonalized Hotelling's (OHD), orthogonalized projection (OPD), and generalized (GD).
4.1 Pit props dataset
The pit props dataset [5] with 13 variables and 180 observations has become a de facto standard for
benchmarking sparse PCA methods. To demonstrate the disparate behavior of differing deflation
methods, we utilize each sparse PCA algorithm and deflation technique to successively extract six
sparse loadings, each constrained to have cardinality less than or equal to k_t = 4. We report the
additional variances explained by each sparse vector in Table 2 and the cumulative percentage variance explained on each iteration in Table 3. For reference, the first 6 true principal components of
the pit props dataset capture 87% of the variance.
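For reference, the per-round additional variance reported below can be recomputed from the extracted loadings using the definition of Section 3; a sketch (ours):

import numpy as np

def additional_variances(A0, X):
    # Additional variance of column x_t of X w.r.t. A0: q^T A0 q / (q^T q),
    # where q is the component of x_t orthogonal to the span of x_1, ..., x_{t-1}.
    Q = np.empty((A0.shape[0], 0))
    out = []
    for t in range(X.shape[1]):
        q = X[:, t] - Q @ (Q.T @ X[:, t])
        out.append((q @ A0 @ q) / (q @ q))
        Q = np.column_stack([Q, q / np.linalg.norm(q)])
    return np.array(out)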
      |                    DCPCA                      |                    GSLDA
      |  HD     PD     SCD    OHD    OPD    GD       |  HD     PD     SCD    OHD    OPD    GD
PC 1  | 2.938  2.938  2.938  2.938  2.938  2.938     | 2.938  2.938  2.938  2.938  2.938  2.938
PC 2  | 2.209  2.209  2.076  2.209  2.209  2.209     | 2.107  2.280  2.065  2.107  2.280  2.280
PC 3  | 0.935  1.464  1.926  0.935  1.464  1.477     | 1.988  2.067  2.243  1.985  2.067  2.072
PC 4  | 1.301  1.464  1.164  0.799  1.464  1.464     | 1.352  1.304  1.120  1.335  1.305  1.360
PC 5  | 1.206  1.057  1.477  0.901  1.058  1.178     | 1.067  1.120  1.164  0.497  1.125  1.127
PC 6  | 0.959  0.980  0.725  0.431  0.904  0.988     | 0.557  0.853  0.841  0.489  0.852  0.908
Table 2: Additional variance explained by each of the first 6 sparse loadings extracted from the Pit
Props dataset.
On the DCPCA run, Hotelling's deflation explains 73.4% of the variance, while the best performing
methods, Schur complement deflation and generalized deflation, explain approximately 79% of the
variance each. Projection deflation and its orthogonalized variant also outperform Hotelling's deflation, while orthogonalized Hotelling's shows the worst performance with only 63.2% of the variance
explained. Similar results are obtained when the discrete method of GSLDA is used. Generalized
deflation and the two projection deflations dominate, with GD achieving the maximum cumulative
variance explained on each round. In contrast, the more standard Hotelling's and orthogonalized
Hotelling's underperform the remaining techniques.
      |                    DCPCA                      |                    GSLDA
      |  HD     PD     SCD    OHD    OPD    GD       |  HD     PD     SCD    OHD    OPD    GD
PC 1  | 22.6%  22.6%  22.6%  22.6%  22.6%  22.6%     | 22.6%  22.6%  22.6%  22.6%  22.6%  22.6%
PC 2  | 39.6%  39.6%  38.6%  39.6%  39.6%  39.6%     | 38.8%  40.1%  38.5%  38.8%  40.1%  40.1%
PC 3  | 46.8%  50.9%  53.4%  46.8%  50.9%  51.0%     | 54.1%  56.0%  55.7%  54.1%  56.0%  56.1%
PC 4  | 56.8%  62.1%  62.3%  52.9%  62.1%  62.2%     | 64.5%  66.1%  64.4%  64.3%  66.1%  66.5%
PC 5  | 66.1%  70.2%  73.7%  59.9%  70.2%  71.3%     | 72.7%  74.7%  73.3%  68.2%  74.7%  75.2%
PC 6  | 73.4%  77.8%  79.3%  63.2%  77.2%  78.9%     | 77.0%  81.2%  79.8%  71.9%  81.3%  82.2%
Table 3: Cumulative percentage variance explained by the first 6 sparse loadings extracted from the
Pit Props dataset.
4.2 Gene expression data
The Berkeley Drosophila Transcription Network Project (BDTNP) 3D gene expression data
[4] contains gene expression levels measured in each nucleus of developing Drosophila embryos and averaged across many embryos and developmental stages. Here, we analyze 03 1160524183713 s10436-29ap05-02.vpc, an aggregate VirtualEmbryo containing 21 genes and
5759 example nuclei. We run GSLDA for eight iterations with cardinality pattern 9,7,6,5,3,2,2,2
and report the results in Table 4.
      |      GSLDA additional variance explained     |     GSLDA cumulative percentage variance
      |  HD     PD     SCD    OHD    OPD    GD       |  HD     PD     SCD    OHD    OPD    GD
PC 1  | 1.784  1.784  1.784  1.784  1.784  1.784     | 21.0%  21.0%  21.0%  21.0%  21.0%  21.0%
PC 2  | 1.464  1.453  1.453  1.464  1.453  1.466     | 38.2%  38.1%  38.1%  38.2%  38.1%  38.2%
PC 3  | 1.178  1.178  1.179  1.176  1.178  1.187     | 52.1%  51.9%  52.0%  52.0%  51.9%  52.2%
PC 4  | 0.716  0.736  0.716  0.713  0.721  0.743     | 60.5%  60.6%  60.4%  60.4%  60.4%  61.0%
PC 5  | 0.444  0.574  0.571  0.460  0.571  0.616     | 65.7%  67.4%  67.1%  65.9%  67.1%  68.2%
PC 6  | 0.303  0.306  0.278  0.354  0.244  0.332     | 69.3%  71.0%  70.4%  70.0%  70.0%  72.1%
PC 7  | 0.271  0.256  0.262  0.239  0.313  0.304     | 72.5%  74.0%  73.4%  72.8%  73.7%  75.7%
PC 8  | 0.223  0.239  0.299  0.257  0.245  0.329     | 75.1%  76.8%  77.0%  75.9%  76.6%  79.6%
Table 4: Additional variance and cumulative percentage variance explained by the first 8 sparse
loadings of GSLDA on the BDTNP VirtualEmbryo.
The results of the gene expression experiment show a clear hierarchy among the deflation methods.
The generalized deflation technique performs best, achieving the largest additional variance on every
round and a final cumulative variance of 79.6%. Schur complement deflation, projection deflation,
and orthogonalized projection deflation all perform comparably, explaining roughly 77% of the total
variance after 8 rounds. In last place are the standard Hotelling's and orthogonalized Hotelling's
deflations, both of which explain less than 76% of variance after 8 rounds.
5 Conclusion
In this work, we have exposed the theoretical and empirical shortcomings of Hotelling's deflation in
the sparse PCA setting and developed several alternative methods more suitable for non-eigenvector
deflation. Notably, the utility of these procedures is not limited to the sparse PCA setting. Indeed,
the methods presented can be applied to any of a number of constrained eigendecomposition-based
problems, including sparse canonical correlation analysis [13] and linear discriminant analysis [10].
Acknowledgments
This work was supported by AT&T through the AT&T Labs Fellowship Program.
References
[1] A. d'Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. G. Lanckriet. A Direct Formulation for Sparse PCA using Semidefinite Programming. In Advances in Neural Information Processing Systems (NIPS). Vancouver, BC, December 2004.
[2] A. d'Aspremont, F. R. Bach, and L. E. Ghaoui. Full regularization path for sparse principal component analysis. In Proceedings of the 24th International Conference on Machine Learning. Z. Ghahramani, Ed. ICML '07, vol. 227. ACM, New York, NY, 177-184, 2007.
[3] J. Cadima and I. Jolliffe. Loadings and correlations in the interpretation of principal components. Applied Statistics, 22:203-214, 1995.
[4] C. C. Fowlkes, C. L. Luengo Hendriks, S. V. Keränen, G. H. Weber, O. Rübel, M.-Y. Huang, S. Chatoor, A. H. DePace, L. Simirenko, C. Henriquez, et al. Cell 133, pp. 364-374, 2008.
[5] J. Jeffers. Two case studies in the application of principal components. Applied Statistics, 16, 225-236, 1967.
[6] I. T. Jolliffe and M. Uddin. A Modified Principal Component Technique based on the Lasso. Journal of Computational and Graphical Statistics, 12:531-547, 2003.
[7] I. T. Jolliffe. Principal Component Analysis. Springer Verlag, New York, 1986.
[8] I. T. Jolliffe. Rotation of principal components: choice of normalization constraints. Journal of Applied Statistics, 22:29-35, 1995.
[9] B. Moghaddam, Y. Weiss, and S. Avidan. Spectral bounds for sparse PCA: Exact and greedy algorithms. Advances in Neural Information Processing Systems, 18, 2006.
[10] B. Moghaddam, Y. Weiss, and S. Avidan. Generalized spectral bounds for sparse LDA. In Proc. ICML, 2006.
[11] Y. Saad. Projection and deflation methods for partial pole assignment in linear state feedback. IEEE Trans. Automat. Contr., vol. 33, pp. 290-297, Mar. 1998.
[12] B. K. Sriperumbudur, D. A. Torres, and G. R. G. Lanckriet. Sparse eigen methods by DC programming. Proceedings of the 24th International Conference on Machine Learning, pp. 831-838, 2007.
[13] D. Torres, B. K. Sriperumbudur, and G. Lanckriet. Finding Musically Meaningful Words by Sparse CCA. Neural Information Processing Systems (NIPS) Workshop on Music, the Brain and Cognition, 2007.
[14] P. White. The Computation of Eigenvalues and Eigenvectors of a Matrix. Journal of the Society for Industrial and Applied Mathematics, Vol. 6, No. 4, pp. 393-437, Dec. 1958.
[15] F. Zhang (Ed.). The Schur Complement and Its Applications. Kluwer, Dordrecht, Springer, 2005.
[16] Z. Zhang, H. Zha, and H. Simon. Low-rank approximations with sparse factors I: Basic algorithms and error analysis. SIAM J. Matrix Anal. Appl., 23 (2002), pp. 706-727.
[17] Z. Zhang, H. Zha, and H. Simon. Low-rank approximations with sparse factors II: Penalized methods with discrete Newton-like iterations. SIAM J. Matrix Anal. Appl., 25 (2004), pp. 901-920.
[18] H. Zou, T. Hastie, and R. Tibshirani. Sparse Principal Component Analysis. Technical Report, Statistics Department, Stanford University, 2004.
2,841 | 3,576 | Grouping Contours Via a Related Image
Praveen Srinivasan
GRASP Laboratory
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Liming Wang
Fudan University
Shanghai, PRC 200433
[email protected]
Jianbo Shi
GRASP Laboratory
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Abstract
Contours have been established in the biological and computer vision literature
as a compact yet descriptive representation of object shape. While individual
contours provide structure, they lack the large spatial support of region segments
(which lack internal structure). We present a method for further grouping of contours in an image using their relationship to the contours of a second, related
image. Stereo, motion, and similarity all provide cues that can aid this task; contours that have similar transformations relating them to their matching contours
in the second image likely belong to a single group. To find matches for contours, we rely only on shape, which applies directly to all three modalities without
modification, in contrast to the specialized approaches developed for each independently. Visually salient contours are extracted in each image, along with a set
of candidate transformations for aligning subsets of them. For each transformation, groups of contours with matching shape across the two images are identified
to provide a context for evaluating matches of individual contour points across the
images. The resulting contexts of contours are used to perform a final grouping
on contours in the original image while simultaneously finding matches in the related image, again by shape matching. We demonstrate grouping results on image
pairs consisting of stereo, motion, and similar images. Our method also produces
qualitatively better results than a baseline method that does not use the inferred
contexts.
1 Introduction
Researchers in biological vision have long hypothesized that image contours (ordered sets of edge
pixels, or contour points) are a compact yet descriptive representation of object shape. In computer
vision, there has been substantial interest in extracting contours from images as well as using object
models based on contours for object recognition ([15, 5]), and 3D image interpretation [11].
We examine the problem of grouping contours in a single image aided by a related image, such as
stereo pair, a frame from the same motion sequence, or a similar image. Relative motion of contours
in one image to their matching contours in the other provides a cue for grouping. The contours
themselves are detected bottom-up without a model, and are provided as input to our method. While
contours already represent groupings of edges, they typically lack large spatial support. Region segments, on the other hand, have large spatial support, but lack the structure that contours provide.
Therefore, additional grouping of contours can give us both qualities. This has important applications for object recognition and scene understanding, since these larger groups of contours are often
large pieces of objects.
Figure 1 shows a single image in the 1st column, with contours; in the other columns, top row,
are different images related by stereo, motion and similarity to the first, shown with their contours.
Below each of these images are idealized groupings of contours in the original image. Note that
internal contours on cars and buildings are grouped, providing rich, structured shape information
over a larger image region.
[Figure 1 panels: columns Stereo, Motion, Similarity; rows Image 2, Image 1, and grouping in image 1 using related images.]
Figure 1: Contours (white) in the image on the left can be further grouped using the contours of a
second, related image (top row). The bottom row shows idealized groupings in the original image
according to the inter-image relationship.
2 Related Work
Stereo, motion, and similar image matching have been studied largely in isolation, and often with
different purposes in mind than perceptual grouping. Much of the stereo literature focuses on perpixel depth recovery; however, as [7] noted, stereo can be used for perceptual grouping without
requiring precise depth estimation. Motion is often used for estimating optical flow or dense segmentation of images into groups of pixels undergoing similar motion [13]. These approaches to
motion and stereo are largely region-based, and therefore do not provide the same internal structure
that groups of contours provide. Similar image matching has been used for object recognition [1],
but is rarely applied to image segmentation.
In work on contours, [12] matched contour points in the context of aerial imagery, but use constraints such as ordering of matches along scanlines that are not appropriate for motion or similar
images, and do not provide grouping information. [9] grouped image pixels into contours according
to similar motion using optical flow as a local cue. While the result addresses the long-standing
aperture problem, it does not extend to large inter-image deformations or matching similar images.
[8] grouped and matched image regions across different images and unstable segmentations (as we
do with contours), but the regions lack internal structure. [2, 6] used stereo pairs of images to detect
depth discontinuities as potential object boundaries. However, these methods will not detect and
group group contours in the interior of fronto-parallel surfaces.
3 Grouping Criteria
We present definitions and basic criteria for grouping contours. The inputs to our method are:
1. Images: $I_1, I_2$; for each image $I_i$, $i \in \{1, 2\}$, we also have:
2. A set of points (typically image edges) $I_i^P$; $p \in I_i^P$, $p \in \mathbb{R}^2$. We restrict the set of points to those that lie on image contours, defined next.
3. A set of contours $I_i^C$, where the $j$th contour $C_i^j \in I_i^C$ is an ordered subset of points in $I_i^P$: $C_i^j = [p_{k_1}, p_{k_2}, \ldots, p_{k_n}] \in I_i^C$.
We would like to infer groups $G_1, \ldots, G_{nGroups}$, each with the following attributes:
1. A transformation $T_i$ that aligns a subset of contours (e.g., corresponding to an object) in $I_1$ to $I_2$. $\mathbf{T}$ is the set of all $T_i$.
2. A subset of contours $Con_i$ in each image, known as a context, such that the two subsets have similar overall shape. $Con_i = \{Con_i^1, Con_i^2\}$; $Con_i^1 \in \{0,1\}^{|I_1^C|}$, $Con_i^2 \in \{0,1\}^{|I_2^C|}$. Each $Con_i^j$ is a vector that indicates which contours are in the context for image $I_j$. $\mathbf{Con}$ is the set of all $Con_i$.
We further define the following variables on contours $C_1^j = [p_{k_1}, \ldots, p_{k_n}]$ (see the sketch after this list):
1. A group label $l_j$; $l_j = a$ implies that $C_1^j$ belongs to group $G_a$. $L = \{l_j\}$, the set of all labels.
2. Matches $Match_j = [q_{r_1}, \ldots, q_{r_n}]$, $q_{r_i} \in I_2^P$, s.t. $p_{k_i}$ matches $q_{r_i}$. $\mathbf{Match} = \{Match_j\}$, the set of all matches for each contour.
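To make these inputs and outputs concrete, here is a minimal Python sketch of the data structures; all names are ours, for illustration, and do not come from the authors' implementation.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class ImageData:
    points: np.ndarray    # I_i^P: (nPoints, 2) array of edge-point coordinates
    contours: list        # I_i^C: each contour is an index array into `points`

@dataclass
class Group:
    T: np.ndarray         # similarity transform aligning a subset of I1's contours to I2
    con1: np.ndarray      # {0,1} context indicator over the contours of I1
    con2: np.ndarray      # {0,1} context indicator over the contours of I2

# Per-contour outputs for image 1:
#   labels[j] = a   means contour C_1^j belongs to group G_a
#   matches[j]      is an array of point indices into I_2^P, one per point of C_1^j
```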
[Figures 2 and 3: illustration panels (a)-(h) showing contour selection indicators, matched closest edges, and shape context matrices; see captions below.]
Figure 2 (left): matching closest points cannot reject false positives; simply enlarging the feature size rejects true positives; increasing the feature size and selecting the correct context fixes the problem. Figure 3 (right): the realized shape context from choosing a subset of contours can be summarized as the multiplication of a shape context matrix M and a binary indicator vector.
We would like the groups to possess the following criteria:
1. Good continuation - for contours that overlap significantly, we prefer that they are present
in the same group, if they are grouped at all.
2. Common fate by shape: contours with similar transformations mapping them to their
matches should be grouped. We also require that each contour point in a grouped contour and its matching point in the second image have similar local shape. Shape in each
image is defined with respect to a subset of image contours known as a context; we will
explain the importance of choosing the correct context for shape comparison, as first noted
in [16].
3. Maximality/simplicity: We would like to group as many of the contours as possible into
as few groups as possible, while still maintaining the similarity of local shape described
above.
We will encode these criteria in our cost function $F(\mathbf{T}, \mathbf{Con}, L, \mathbf{Match})$, which we seek to minimize. Our cost function has the following properties, which we develop in the following sections: 1) For fixed contexts $\widehat{\mathbf{Con}}$ and transformations $\widehat{\mathbf{T}}$, $\min_{\mathbf{Match}, L} F(\widehat{\mathbf{T}}, \widehat{\mathbf{Con}}, L, \mathbf{Match})$ is a Markov random field (MRF) that can be minimized exactly via graph cuts ([3]). This corresponds to a standard computational formulation for graph matching (in this case, there is one graph in each image, over the contours). 2) For fixed matches $\widehat{\mathbf{Match}}$, transformations $\widehat{\mathbf{T}}$, and labels $\widehat{L}$, $F$ decomposes as the sum over $i = \{1, \ldots, nGroups\}$ and we can minimize independently: $\min_{Con_i} F_i(\widehat{T}_i, Con_i, \{\widehat{Match}_j\}_{l(j)=i})$ as an integer linear program. This can be easily relaxed to a closely
related linear program (LP), allowing for an efficient approximation. This combination of the MRF
standard graph matching technique with an LP for inferring context for accurate matching by shape
is our main contribution.
The layout of our paper is as follows: we explain the problem and importance of selecting contours
as context for accurate matching and grouping, outline our computational solution (LP) for inferring
Con given T, our technique for choosing T, followed by finding L and matches Match based
on the inferred contexts (via graph cuts). Results using our method follow, and we demonstrate
improvement over a baseline that lacks that benefits of our context selection procedure.
4 Matching and Context Selection
We can evaluate the hypothesis of a particular contour point match by comparing the local shape
around the point and around its match. Although local features such as curvature and simple proximity (in the case of roughly aligned contours) have been used for matching ([12]), inconsistencies
in the input contours across two images make them prone to error. Local features exhibit completeness (a good score for correctly matching shapes), but not soundness (a bad score for non-matching shapes). Figure 2 illustrates this distinction. Two aligned sets of contours are shown in a),e). In a),
the contours do not match, while in e), a "7" shape is common to both. In b) and f), matching of
[Figure 4 content, reconstructed.] Input: images with roughly aligned contours. Per-context shape context matrices $SC_1^1, SC_1^2, SC_1^3, \ldots$ and $SC_2^1, \ldots$ (each of size $\#bins \times \#contours$) are stacked per image into $SC_1$ and $SC_2$. The selection cost is
$$\mathrm{SelectionCost}(selc_1, selc_2) = \lambda\,\|SC_1\, selc_1 - SC_2\, selc_2\|_{L_1} - \|\min(SC_1\, selc_1,\ SC_2\, selc_2)\|_{L_1}$$
$$= \sum_{k=1}^{\#SC} \left[\, \lambda\, \|SC_1^k\, selc_1 - SC_2^k\, selc_2\|_{L_1} - \|\min(SC_1^k\, selc_1,\ SC_2^k\, selc_2)\|_{L_1} \,\right] = \sum_{k=1}^{\#SC} \mathrm{SCMatchCost}(SC_1^k\, selc_1,\ SC_2^k\, selc_2,\ \lambda),$$
with $\mathrm{SCMatchCost}(s_1, s_2, \lambda) = \sum_{i=1}^{\#bin} \mathrm{BinMatchCost}(s_1^i, s_2^i, \lambda)$ and $\mathrm{BinMatchCost}(a, b, \lambda) = \lambda|a - b| - \min(a, b)$, where $s_1, s_2 \in \mathbb{R}^{\#bin}$ and $a, b \in \mathbb{R}$. The cost trades off mismatch against intersection: lower mismatch and higher intersection are both better. Relaxation: $c_i^j \in \{0,1\} \rightarrow [0,1]\ \forall i,j$, giving the linear program $\min_{c_1, c_2} \mathrm{SelectionCost}(c_1, c_2)$ s.t. $c_i^j \in [0,1]\ \forall i,j$.
Figure 4: The context selection process for roughly aligned sets of contours. See text for full description.
closest points between two roughly aligned shapes finds matches in both examples due to the small
support of local features, even though there is no valid match in a).
However, increasing the support of the feature does not solve the problem. As an example, we use
the shape context, an image feature that has been widely used for shape matching ([1]). Briefly, a
shape context provides a log-polar spatial histogram that records the number of points that fall into a
particular bin. In Figure 2 c,g), shape contexts (darker bins mean larger bin count) with large spatial
support placed according to the rough alignment exhibit high dissimilarity in both cases, failing to
find a match in a). The large feature failed because contours in each image that had no match in
the other image were used in computing the shape context. Inferring which contours to include and
which to omit would give better features, as in Figure 2 d),h). This fixes the completeness problem,
while retaining soundness: no combination of contours in the two images in a) can produce matching
shapes. Therefore, with rough alignment we can cast the first step of shape matching as context
selection: which subset of contours, or context, to use for feature computation. Given the correct
context, matching individual contour points is much easier.
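As a concrete illustration, here is a minimal sketch of a log-polar shape context histogram; the radius and bin counts are illustrative choices (Section 6 reports a 90-pixel radius plus extra orientation bins, which we omit here).

```python
import numpy as np

def shape_context(center, points, radius=90.0, n_r=5, n_theta=12):
    """Flat bin-count vector of a log-polar histogram of `points` around `center`."""
    d = points - center
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.arctan2(d[:, 1], d[:, 0])
    keep = (r > 0) & (r <= radius)
    r_edges = np.linspace(np.log(1.0), np.log(radius), n_r + 1)[1:-1]  # interior log-radius edges
    t_edges = np.linspace(-np.pi, np.pi, n_theta + 1)[1:-1]            # interior angle edges
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (np.digitize(np.log(r[keep]), r_edges),
                     np.digitize(theta[keep], t_edges)), 1)
    return hist.ravel()
```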
4.1 Selection Problem
We can neatly summarize the effect of a particular context selection on a shape context as seen in
Figure 3. a) shows two contours which we can select from. b) shows the shape contexts for each
individual contour. The bin counts for a shape context can be encoded as a vector, represented
by the shape contexts and their vector representations alongside. In c), we put the vector form of
these shape contexts into a matrix SC, where each column corresponds to one contour. SC has
dimensions nBins by nContours, where nBins is the number of bins in the shape context (and the
length of the associated vector). The entry $SC(i, j)$ is the bin count for bin $i$ and contour $j$. For each contour $C_i^j$ in an image $I_i$, we associate a selection indicator variable $sel_i^j \in \{0, 1\}$, which indicates whether or not the contour is selected; the vector of these indicator variables is $sel_i$. Then the shape context bin counts realized by a particular selection of contours is $SC\, sel_i$, simply the multiplication of the matrix $SC$ and the vector $sel_i$. d) shows the effect on the shape context histogram of various
context selections.
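A tiny runnable example of this matrix-vector view (the numbers are made up):

```python
import numpy as np

SC = np.array([[3, 0, 2],
               [1, 4, 0]])   # (nBins=2, nContours=3): column j holds contour j's bin counts
sel = np.array([1, 0, 1])    # select contours 0 and 2
print(SC @ sel)              # -> [5 1]: realized bin counts using only the selected contours
```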
4.2 Shape context matching cost
The effectiveness of selection depends significantly on the shape context matching cost. Traditional
matching costs (Chi-square, L1 , L2 ) only measure similarity, but selecting no contours in either
image gives a perfect matching cost, since in both shape contexts, all bins will be 0. While similarity
is important, so is including more contours rather than fewer (maximality).
Our shape context matching cost, $\mathrm{SCMatchCost}(s_1, s_2)$ in Figure 4, is a linear combination of the $L_1$ distance between shape context vectors $s_1$ and $s_2$ (similarity), and the intersection distance (maximality, one of our original grouping criteria), the $L_1$ norm of $\min(s_1, s_2)$ where min is element-
[Figure 5 panels: (a) input images with contours; (b) SIFT matches; (c) image 1 contours transformed; (d) contour selection; (e) selection result.]
Figure 5: For a pair of images, SIFT matches propose different transformations of the contours in
image 1 to align with contours in image 2. The selection process is run for each transformation to
infer a context suitable for evaluating contour point matches via shape.
wise. The intersection term encourages higher bin counts in each shape context and therefore the inclusion of more contours. The parameter $\lambda$ trades off between similarity and maximality; typically $\lambda$ is on the order of 1.
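A minimal numpy sketch evaluating this cost for fixed selections, following the definitions in Figure 4 (the value $\lambda = 0.9$ is an illustrative choice, not the authors' setting):

```python
import numpy as np

def bin_match_cost(a, b, lam):
    return lam * np.abs(a - b) - np.minimum(a, b)

def sc_match_cost(s1, s2, lam):
    """Cost of two corresponding realized shape-context vectors s1, s2."""
    return bin_match_cost(s1, s2, lam).sum()

def selection_cost(SC1_list, SC2_list, sel1, sel2, lam=0.9):
    """SCi_list[k]: (nBins, nContours_i) matrix of the k-th shape-context pair."""
    return sum(sc_match_cost(S1 @ sel1, S2 @ sel2, lam)
               for S1, S2 in zip(SC1_list, SC2_list))
```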
4.3 Computational Solution
Our formulation of the problem follows the construction first presented in [16], which studied the
role of context selection for object recognition. Figure 4 shows the formulation of the overall selection cost SelectionCost. This minimizes $F_i(\widehat{T}_i, Con_i, \{\widehat{Match}_j\}_{l(j)=i})$ over $Con_i$. We begin with two input images, where the contours are in rough alignment (by applying known $T_i$ to $I_1^C$). Multiple shape contexts are placed in each image on a uniform grid (an approximation of $\{\widehat{Match}_j\}_{l(j)=i}$, since
we initially have no matches). Like-colored (in the figure) shape contexts will be compared across
images. Our goal is to select contours in each image to minimize the sum of SCMatchCost for each
pair of shape contexts. For each shape context j in each image i, we compute the corresponding
shape context matrix $SC_i^j$. All the $SC_i^j$ in a particular image $I_i$ are stacked to form matrix $SC_i$. $SC_i$ for each image has been color coded to show the $SC_i^j$ matrix corresponding to each shape context.
We introduce the indicator vectors $selc_1 = [selc_1^1, \ldots, selc_1^m]$ and $selc_2 = [selc_2^1, \ldots, selc_2^n]$ for images $I_1, I_2$. $selc_i^j = 1$ implies that contour $C_i^j$ is selected. $SC_i\, selc_i$ is then the realized bin counts for all the shape contexts in image $I_i$ under selection $selc_i$. We seek to choose $selc_1$ and $selc_2$ such that $SC_1\, selc_1 \approx SC_2\, selc_2$ in a shape sense; entries of $SC_1\, selc_1$ and $SC_2\, selc_2$, or realized bin counts,
are in correspondence, so we can score these pairs of bin counts using BinMatchCost. A compact
summary of this cost function SelectionCost is shown in Figure 4; its decomposition as the sum of
SCMatchCost terms, which are each in turn a sum over BinMatchCost terms is shown.
The minimization of SelectionCost over selc1 and selc2 is in fact an integer linear program (L1
distance and min are easily encoded with additional variables and linear constraints). By relaxing
each $selc_i^j \in \{0,1\}$ to $[0,1]$, we obtain a linear program (LP) which can be solved efficiently using
standard solvers (e.g. SDPT3). Although other methods exist for solving integer linear programs,
such as branch-and-bound, we found that directly discretizing the $selc_i^j$ with a fixed threshold worked well. Then $\widehat{Con}_i = \{selc_1, selc_2\}$.
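A sketch of the relaxed selection problem, written here with cvxpy as the solver interface (our choice for illustration; the text above mentions SDPT3). The objective is convex because the negated elementwise minimum of affine expressions is convex:

```python
import cvxpy as cp

def select_contours(SC1, SC2, lam=0.9, thresh=0.5):
    """SC1, SC2: stacked (nBins*#SC, nContours_i) shape-context matrices for the two images."""
    c1 = cp.Variable(SC1.shape[1])
    c2 = cp.Variable(SC2.shape[1])
    h1, h2 = SC1 @ c1, SC2 @ c2                  # realized bin counts in each image
    cost = lam * cp.norm1(h1 - h2) - cp.sum(cp.minimum(h1, h2))
    cp.Problem(cp.Minimize(cost),
               [c1 >= 0, c1 <= 1, c2 >= 0, c2 <= 1]).solve()
    # discretize with a fixed threshold, which the text above reports works well
    return (c1.value > thresh).astype(int), (c2.value > thresh).astype(int)
```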
4.4 Multiple Context Selections for Image Matching
Now that we have established how to do selection in the case where we are given $T_i$, we now apply it
in images where there may be multiple objects that are related across the two images by different
alignments. We first need to infer the set of candidate transformations T; for our purposes, we will
restrict them to be similarity transforms, although we note that non-linear or piecewise linear (e.g.,
articulation) transformations could certainly be used. A simple method for proposing transformations in the two images is via SIFT ([10]) feature matches. A SIFT match provides scale, orientation,
and translation (a similarity transform). RANSAC with multiple matches can be used to estimate
full homographies, similar to [14].
Figure 5 depicts an idealized selection process for two images (only the contours are shown). For
groups of SIFT matches that describe similar transformations, a transformation Ti is extracted and
warps the contours in image 1 to line up with those of image 2, in c). The selection problem is
formulated separately for each set of aligned contours d). The solution vectors of the SelectionCost LP for each $T_i$ provide a context $\{\widehat{Con}_i^1, \widehat{Con}_i^2\}$ ($\{selc_1, selc_2\}$ previously) of matching contours,
e). Two correct transforms align the car and person, and the selection result includes the respective
contours (rows 1,2 of e). A third, wrong transform results in an empty selection (row 3 of e). We
can view the context selection procedure for minimizing Fi as choosing the context of contours so
as to best reduce the matching cost of the hypothesized inter-image matches for contours with label
i, under the transformation Ti . In a sense, we are optimizing the local features via an LP, which
traditional graph matching techniques do not do. The result of this optimization will appear in the
unary term of the label/match MRF described next.
5 Graph Cuts for Group Assignment and Matching
We previously computed context selections (as solutions to the SelectionCost LP), which found groups of contours in each image that have similar shape, $\widehat{\mathbf{Con}} = \{\widehat{Con}_1, \ldots, \widehat{Con}_{nGroups}\}$, under transformations $\widehat{\mathbf{T}}$. Given these, we seek to compute $L$ and $\mathbf{Match}$. Some labels in $1, \ldots, nGroups$ may not be assigned to any contours, satisfying our simplicity criterion for grouping. Note that a contour $C_1^j$ need not be selected as context in a particular group $a$ in order to have $l_j = a$. Recall that with respect to the original cost function, we seek to optimize: $\min_{\mathbf{Match}, L} F(\widehat{\mathbf{T}}, \widehat{\mathbf{Con}}, L, \mathbf{Match})$. We phrase this label assignment problem as inference in a Markov network (MN). The MN encodes the joint distribution over the labels $L$ as a product of potentials: $P(L) = \frac{1}{Z} \prod_j \phi(l_j) \prod_{j,k} \psi(l_j, l_k)$, where $Z$ is a normalization constant.
The binary potentials $\psi(l_j, l_k)$ encode the preference that overlapping contours $C_1^j, C_1^k$ have the same label:
$$\psi(l_j = a, l_k = b) = \begin{cases} 1 & a = b \\ 1 - \beta & a \neq b \end{cases} \qquad (1)$$
where $0 \leq \beta \leq 1$ controls the penalty of having different labels. This is a simple smoothing potential to encourage continuity. Two contours overlap if they contain at least one point in common.
The unary potential $\phi(l_j)$ encodes how well contour $C_1^j = [p_{k_1}, p_{k_2}, \ldots, p_{k_n}]$ can be matched in the second image with respect to the context $\{\widehat{Con}_a^1, \widehat{Con}_a^2\}$. The log-unary potential decomposes as the sum of matching costs of the individual points $p_{k_i}$ to their best match in image $I_2$, with respect to the context $\{\widehat{Con}_a^1, \widehat{Con}_a^2\}$:
$$\log \phi(l_j = a) \propto -\sum_{i=1}^{n} \left[ \min_{q \in I_2^P} \mathrm{MatchCostInContext}(p_{k_i}, q, a) \right] \qquad (2)$$
where $\mathrm{MatchCostInContext}(p, q, a) = \mathrm{SCMatchCost}(SC_1^{T_a(p)}\, \widehat{Con}_a^1,\ SC_2^q\, \widehat{Con}_a^2)$, and $SC_1^{T_a(p)}$ and $SC_2^q$ are respectively the shape context matrix computed for a shape context centered at $T_a(p)$ using the contours in image 1 under transformation $T_a$, and the matrix for a shape context centered at $q$ using the contours in image 2.
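A sketch of this unary term, reusing sc_match_cost from the earlier sketch; sc_mat1 and sc_mat2 are hypothetical helpers (not from the paper) returning the (nBins × nContours) matrix for a shape context centered at a given point:

```python
def log_unary(contour_pts, T_a, con1_a, con2_a, cand_pts, sc_mat1, sc_mat2, lam=0.9):
    """log phi(l_j = a), up to a constant, for one contour of image 1."""
    total = 0.0
    for p in contour_pts:
        s1 = sc_mat1(T_a(p)) @ con1_a                     # realized counts around T_a(p)
        best = min(sc_match_cost(s1, sc_mat2(q) @ con2_a, lam)
                   for q in cand_pts)                      # best match in image 2
        total += best
    return -total
```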
We compute the exact MAP estimate in the MN using the $\alpha$-$\beta$ swap graph cut algorithm ([3]), which can maximize this type of energy. Instead of using all contours in image 1 as nodes in the MN, we only allow contours that were selected in at least one of the contexts $Con_i^1$; likewise, we only permit matches to points in image 2 that appear in a contour selected in at least one $Con_j^2$. This better
allows us to deal with contours that appear only in one image and thus cannot be reliably grouped
based on relative motion.
[Figure 6 panels, top block: Image 1, Image 2, our result, baseline result for stereo, motion, and similarity pairs; bottom block: Image 1, Image 2, our grouping, dense correspondences for stereo, motion, and similar pairs.]
Figure 6: Baseline comparison (top) and additional results (bottom). Top: Columns 1,2: original images with input contours, each colored. Columns 3,4: grouping results for our method and baseline;
groups of contours are a single color. In stereo pairs, like colors indicate similar disparity. Bottom:
Columns 1,2: original images with input contours, each colored. Column 3: our grouping result.
Columns 4,5: matches across images indicated by like colors. Please view in color.
5.1 Baseline Comparison
As a baseline comparison, we attempted grouping using an MN that involved no selection information. The binary potential remained the same, while the unary potential $\phi(l_j = a)$ was a function of the distance of each contour point in contour $C_1^j$ to its closest match in $I_2^P$, under the transformation $T_a$:
$$\log \phi(l_j = a) \propto -\sum_{i=1}^{n} \left[ \min_{q \in I_2^P} \left( \|T_a(p_{k_i}) - q\|_{L_2}^2,\ \mathrm{occlusionThresh}^2 \right) \right] \qquad (3)$$
The constant occlusionThresh serves as a threshold in case a contour point had no nearby match in $I_2^P$ under the transformation $T_a$. Points which had no match within occlusionThresh distance were
marked as occluded for the hypothesis $l_j = a$. If more than half the points in the final assignment $l_j^*$ for a contour were occluded, we marked the entire contour as occluded, and it was not displayed.
Since we omitted all selection information, all contours in the 1st image were included in the MN
as nodes, and their contour points were allowed to match to any contour point in $I_2^P$. We again optimized the MN energy with the $\alpha$-$\beta$ swap graph cut. Free parameters were tuned by hand to
produce the best result possible.
6 Experiments
We tested our method and the baseline over stereo, motion and similar image pairs. Input contours
in each image were extracted automatically using the method of [15]. SIFT matches were extracted
from images, keeping only confident matches as described in [10]; matches proposing similar transformations were pruned to a small set, typically 10-20. Because of the high quality of the inferred
contexts, we used large shape contexts (radius 90 pixels, in images of size 400 by 500), which
made matching very robust. The shape contexts were augmented with edge orientation bins in addition to the standard radial and angular bins. Shape contexts were placed on a uniform grid atop the
registered contours (via Ti ) with a spacing 50 pixels in the x and y dimensions. Image pairs were
taken from the Caltech 101 dataset [4] and from a stereo rig with 1m baseline mounted on a car
from our lab (providing stereo and motion images). The running time of our unoptimized MATLAB
implementation was several minutes for each image pair.
Figure 6, top block, shows the results of our method and the baseline method on stereo, motion and
similar images. We can see that our method provides superior groupings that better respect object
boundaries. Groups for stereo image pairs are colored according to disparity. Due to the lack of large
context, the baseline method is able to find a good match for a given contour point under almost any
group hypothesis lj = a, since in cluttered regions, there are always nearby matches. However, by
using a much larger, optimized context, our method exploits large-scale shape information and is
better able to infer about occlusion, as well as layer assignment. We present additional results on
different images in Figure 6, bottom block, and also show the dense correspondences. Interesting
groups found in our results include facades of buildings, people, and a car (top row).
7 Conclusion
We introduced the problem of grouping of contours in an image using a related image, such as stereo,
motion or similar, as an important step for object recognition and scene understanding. Grouping
depends on the ability to match contours across images to determine their relative motion. Selecting
a good context for shape evaluation was key to robust simultaneous and grouping of contours across
images. A baseline method similar to our proposed method, but without context, produced worse
groupings on stereo, motion and similar images. Future work will include trying to learn 3D object
models from stereo and motion images, and a probabilistic formulation of the matching framework.
Introducing learning to improve the grouping result is also an area of significant interest; some shape
configurations are more reliable for matching than others.
References
[1] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE
PAMI, 2002.
[2] S. Birchfield and C. Tomasi. Depth discontinuities by pixel-to-pixel stereo. In ICCV, 1998.
[3] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 2001.
[4] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. PAMI, 2006.
[5] V. Ferrari, L. Fevrier, F. Jurie, and C. Schmid. Groups of adjacent contour segments for object detection.
PAMI, 2008.
[6] M. Gelautz and D. Markovic. Recognition of object contours from stereo images: an edge combination
approach. 3D PVT, 2004.
[7] W.E.L. Grimson. Why stereo vision is not always about 3d reconstruction. In MIT AI Memo, Technical
Report AIM-1435, 1993.
[8] V. Hedau, H. Arora, and N. Ahuja. Matching images under unstable segmentation. In CVPR 2008.
[9] C. Liu, W. T. Freeman, and E.H. Adelson. Analysis of contour motions. In NIPS, 2006.
[10] D. Lowe. Distinctive image features from scale-invariant keypoints. In IJCV, 2003.
[11] D.G. Lowe and T.O. Binford. The recovery of three-dimensional structure from image curves. PAMI,
1985.
[12] D. Sherman and S. Peleg. Stereo by incremental matching of contours. PAMI, 1990.
[13] J.Y.A. Wang and E.H. Adelson. Layered representation for motion analysis. In CVPR, 1993.
[14] J. Wills, S. Agarwal, and S. Belongie. A feature-based approach for dense segmentation and estimation
of large disparity motion. IJCV, 2006.
[15] Q. Zhu, G. Song, and J. Shi. Untangling cycles for contour grouping. In ICCV 2007.
[16] Qihui Zhu, Liming Wang, Yang Wu, and Jianbo Shi. Contour context selection for object detection: A
set-to-set contour matching approach. In ECCV, 2008.
2,842 | 3,577 | Generative versus discriminative training of RBMs
for classification of fMRI images
Geoffrey E. Hinton
Department of Computer Science
University of Toronto
Toronto, Canada
[email protected]
Tanya Schmah
Department of Computer Science
University of Toronto
Toronto, Canada
[email protected]
Richard S. Zemel
Department of Computer Science
University of Toronto
Toronto, Canada
[email protected]
Steven L. Small
Department of Neurology
The University of Chicago
Chicago, USA
[email protected]
Stephen Strother
The Rotman Research Institute Baycrest
Toronto, Canada
[email protected]
Abstract
Neuroimaging datasets often have a very large number of voxels and a very small
number of training cases, which means that overfitting of models for this data
can become a very serious problem. Working with a set of fMRI images from
a study on stroke recovery, we consider a classification task for which logistic
regression performs poorly, even when L1- or L2- regularized. We show that
much better discrimination can be achieved by fitting a generative model to each
separate condition and then seeing which model is most likely to have generated
the data. We compare discriminative training of exactly the same set of models,
and we also consider convex blends of generative and discriminative training.
1 Introduction
Pattern classification approaches to analyzing functional neuroimaging data have become increasingly popular [12] [3] [4]. These approaches allow one to use well-founded classification methods
to test whether the imaging data contains enough information to discriminate between different conditions. They may also lead to insight into underlying neural representations, highlighting brain
regions that are most informative with respect to particular experimental variables.
One difficulty in applying these models is the paucity of data: the number of images available to
analyze is typically very small relative to the data dimensionality, particularly if one does not want to
restrict a priori the input to subsets of the voxels. Generative models are therefore of great interest,
because building a density model of the imaging data itself can often uncover features of the data
that are useful for classification as well as for generation. In regimes in which the number of training
examples is relatively small, it has been shown that classifiers based on generative models can outperform discriminative classifiers, e.g., naive Bayes classifiers can beat logistic regression [11].
In this paper we investigate ways of using generative models to improve the discrimination of different conditions in functional neuroimaging data. Our primary interest with respect to the imaging
data is to elucidate the brain changes that occur during recovery from a stroke. Towards this aim,
we define an early-late discrimination task to see if the learning approach can find properties that
distinguish pre-recovery from post-recovery scans.
2 Restricted Boltzmann Machines
A set of fMRI volumes can be modeled using a two-layer network called a ?Restricted Boltzmann
Machine? (RBM) [5], in which stochastic ?visible units? are connected to stochastic ?hidden units?
using symmetrically weighted connections. The visible units of the RBM correspond to voxels,
while the hidden units can be thought of as feature detectors. In the typical RBM, both visible and
hidden units are binary, but we use a version in which the visible units are continuous and have
Gaussian marginal distributions [15] [7] [1]. For simplicity, and since we are free to scale the data,
we choose unit variance for the marginal distributions of the visible units.
The energy of the joint configuration (v, h) of the visible and hidden units is
$$E(v, h) := -\sum_{i,j} v_i w_{ij} h_j - \sum_j c_j h_j + \frac{1}{2}\sum_i (v_i - b_i)^2, \qquad (1)$$
where $w_{ij}$, $b_i$, $c_j$ are fixed parameters. The joint distribution over visible and hidden variables is
$$P(v, h) := \frac{1}{Z} \exp(-E(v, h)), \qquad (2)$$
with partition function $Z := \int du \sum_g \exp(-E(u, g))$.
The marginal distribution over the visible units can be expressed as:
$$P(v) = \sum_h P(v, h) = \frac{1}{Z} \exp(-F(v)), \qquad (3)$$
where $F$ is the free energy:
$$F(v) = -\log\left(\sum_h \exp(-E(v, h))\right) = -\sum_j \log\left(1 + \exp\left(\sum_i v_i w_{ij} + c_j\right)\right) + \frac{1}{2}\sum_i (v_i - b_i)^2. \qquad (4)$$
The marginal distribution over the visible units is typically intractable because of the partition function Z. However Gibbs sampling can be used to sample from an approximation to the marginal
distribution, since the conditional probability distributions P (v|h) and P (h|v) are tractable:
$$P(v|h) = \prod_i \mathcal{N}\left(b_i + \sum_j w_{ij} h_j,\ 1\right), \qquad P(h|v) = \prod_j \sigma\left(\sum_i v_i w_{ij} + c_j\right),$$
where $\sigma$ is the logistic function, $\sigma(z) := 1/(1 + \exp(-z))$. Note that the conditional probabilities of
the hidden units are the same as for binary-only RBMs.
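A minimal numpy sketch of the free energy (4) and the two conditionals, with W the nVisible × nHidden weight matrix (the vectorization and names are ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def free_energy(v, W, b, c):
    """F(v) for a Gaussian-visible, binary-hidden RBM; v: (nVisible,) vector."""
    return -np.sum(np.logaddexp(0.0, v @ W + c)) + 0.5 * np.sum((v - b) ** 2)

def hidden_probs(v, W, c):
    return sigmoid(v @ W + c)                          # P(h_j = 1 | v)

def sample_visible(h, W, b, rng):
    return b + W @ h + rng.standard_normal(b.shape)    # v | h ~ N(b + W h, I)
```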
The aim of generative training of an RBM is to model the marginal distribution of the visible units
P (v). In maximum likelihood learning, the aim is to minimize the negative log probability of the
training data,
$$L_{gen} = -\sum_{v \in S} \log P(v|\theta), \qquad (5)$$
where $S$ is the training set and $\theta$ is the vector of all parameters $w_{ij}, b_i, c_j$. The gradient of this
function is intractable, however there is an approximation to maximum likelihood learning called
Contrastive Divergence (CD), which works well in practice [5]. We use an n-step version of CD, with
n equal to either 3 or 6. At each iteration, the parameter increments are:
$$\Delta w_{ij} = \langle v_i h_j \rangle_0 - \langle v_i h_j \rangle_n, \qquad \Delta b_i = \langle v_i - b_i \rangle_0 - \langle v_i - b_i \rangle_n, \qquad \Delta c_j = \langle h_j \rangle_0 - \langle h_j \rangle_n$$
In this definition, angle brackets denote expected value over a certain distribution over the visible
units, with the hidden units distributed according to the conditional distribution P (h|v). A subscript
0 indicates that the data distribution is used, i.e., visible units are given values corresponding to observed fMRI volumes; while a subscript n indicates that n steps of Gibbs sampling have been done,
beginning at data points, to give an approximation to an expected value over the true distribution
P (v).
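A minimal sketch of one CD-n update on a minibatch, using the helpers above; the learning rate and the choice of sampling the hidden states inside the chain are illustrative:

```python
import numpy as np

def cd_n_update(V, W, b, c, n=3, lr=1e-3, rng=None):
    """One CD-n step; V: (nCases, nVisible) minibatch of volumes."""
    rng = rng or np.random.default_rng(0)
    H0 = sigmoid(V @ W + c)                         # hidden probabilities at the data
    Vk, Hk = V, H0
    for _ in range(n):                              # n steps of Gibbs sampling
        Hs = (rng.random(Hk.shape) < Hk).astype(float)
        Vk = b + Hs @ W.T + rng.standard_normal(Vk.shape)
        Hk = sigmoid(Vk @ W + c)
    m = V.shape[0]
    W += lr * (V.T @ H0 - Vk.T @ Hk) / m            # <v_i h_j>_0 - <v_i h_j>_n
    b += lr * ((V - b).mean(0) - (Vk - b).mean(0))  # <v_i - b_i>_0 - <v_i - b_i>_n
    c += lr * (H0.mean(0) - Hk.mean(0))             # <h_j>_0 - <h_j>_n
    return W, b, c
```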
3 Classification using multiple RBMs
We consider binary classification tasks. The methods clearly generalize to arbitrary numbers of
classes.
3.1 Classification via (mostly) generative training
We begin by generatively training two independent RBMs, one for each data class. For maximum
likelihood learning, the cost function is the negative log probability of the training data:
$$L_{gen} = L_{gen}^A(S_A) + L_{gen}^B(S_B) := -\sum_{v \in S_A} \log P(v|\theta_A) - \sum_{v \in S_B} \log P(v|\theta_B), \qquad (6)$$
where $S_A$ and $S_B$ are the training data from classes A and B, and $\theta_A$ and $\theta_B$ are the parameter
vectors for the two RBMs. In practice we regularize by adding a term to this cost function that
corresponds to putting a prior distribution on the weights wij .
In general, given probabilistic generative models for each of two classes, A and B, data can be
classified by Bayes' theorem. For brevity, we write "A" for "v is of class A", and similarly for B. If we assume that $v$ is a priori equally likely to belong to both classes, then
$$P(A|v) = \frac{P(v|A)}{P(v|A) + P(v|B)}.$$
If the distributions P (v|A) and P (v|B) are defined by RBMs, then they can be expressed in terms
of free energies FA and FB and partition functions ZA and ZB , as in Equation (3). Substituting this
into Bayes' theorem gives
$$P(A|v) = \sigma(F_B(v) - F_A(v) - T), \qquad (7)$$
where $T := \log(Z_A/Z_B)$. The free energies in this formula can be calculated easily using (4).
However the partition functions are intractable for RBMs with large numbers of hidden and visible
units. For this reason, we replace the unknown "threshold" $T$ with an independent parameter $\tau$,
and fit it discriminatively. (Thus this method is not pure generative training.) The aim of discriminative training is to model the conditional probability of the class labels given the visible units.
In maximum likelihood learning, the cost function to be minimized is the negative log conditional
probability of the class labels of the training data,
$$L_{disc} = -\sum_{v \in S} \log P(\text{class of } v \mid v, \theta_A, \theta_B, \tau). \qquad (8)$$
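Classification then reduces to a free-energy comparison. A minimal sketch, reusing free_energy and sigmoid from the Section 2 sketch; fitting τ by a one-dimensional search on training data is our reading of "fit discriminatively":

```python
def prob_class_A(v, rbm_A, rbm_B, tau):
    """P(A | v) = sigma(F_B(v) - F_A(v) - tau); each rbm is a (W, b, c) triple."""
    return sigmoid(free_energy(v, *rbm_B) - free_energy(v, *rbm_A) - tau)

# Predict class A whenever prob_class_A(v, rbm_A, rbm_B, tau) > 0.5.
```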
3.2 Classification via discriminative training
As an alternative to generative training, the function Ldisc (defined in the previous equation) can
be minimized directly, with respect to all parameters simultaneously: the wij , bi and cj of both
RBMs and the threshold parameter $\tau$. We use exactly the same model of $P(\text{class of } v \mid v)$ as before,
summarized in Equation (7). By substituting Equations (7) and (4) into Equation (8) the gradient of
Ldisc with respect to all parameters can be calculated exactly.
Substituting Equations (7) into Equation (8) gives
$$C = -\sum_{v \in S_A} \log \sigma(F_B(v) - F_A(v) - \tau) - \sum_{v \in S_B} \log \sigma(\tau + F_A(v) - F_B(v)),$$
where SA and SB are the sets of training data in classes A and B.
Since $\frac{d}{dz} \log \sigma(z) = \sigma(-z)$, the partial derivative of $C$ with respect to the threshold parameter is:
$$\frac{\partial C}{\partial \tau} = \sum_{v \in S_A} \sigma(\tau + F_A(v) - F_B(v)) - \sum_{v \in S_B} \sigma(F_B(v) - F_A(v) - \tau).$$
The free energies depend on the weights of the RBMs (suppressed in the above equation for ease of
notation). If the parameters for the two RBMs are not linked in any way, then any given parameter
$\theta$ affects either model A or model B but not both, so either $\partial F_B/\partial \theta = 0$ or $\partial F_A/\partial \theta = 0$. From (4) it follows that
$$\frac{\partial}{\partial w_{ij}} F(v) = -p_j v_i, \qquad \frac{\partial}{\partial c_j} F(v) = -p_j, \qquad \frac{\partial}{\partial b_i} F(v) = b_i - v_i,$$
where $p_j := \sigma(z_j) = P(h_j|v)$. It follows that, setting $M(v) := \sigma(F_B(v) - F_A(v) - \tau)$, the derivatives for the parameters of model A are:
$$\frac{\partial C}{\partial w_{ij}} = \sum_{v \in S_A} (1 - M(v))(-v_i p_j) + \sum_{v \in S_B} M(v)(v_i p_j),$$
$$\frac{\partial C}{\partial c_j} = \sum_{v \in S_A} (1 - M(v))(-p_j) + \sum_{v \in S_B} M(v)(p_j),$$
$$\frac{\partial C}{\partial b_i} = \sum_{v \in S_A} (1 - M(v))(b_i - v_i) + \sum_{v \in S_B} M(v)(v_i - b_i);$$
where $p_j := P_A(h_j|v)$. The formulae for model B are the same with opposite sign, and with $p_j := P_B(h_j|v)$. Note that there is no need to assume that both RBMs have the same number of
hidden units. We note that discriminative training of a single RBM was suggested in [6] and [8].
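A numpy sketch of these gradients for model A on per-class batches (the vectorization and names are ours; model B's gradients follow with opposite sign and its own hidden probabilities, as noted above). It reuses free_energy and sigmoid from the Section 2 sketch:

```python
import numpy as np

def disc_grads_model_A(VA, VB, rbm_A, rbm_B, tau):
    """Gradients of the discriminative cost C w.r.t. model A's (W, b, c)."""
    WA, bA, cA = rbm_A
    def M(V):                                   # M(v) = sigma(F_B(v) - F_A(v) - tau)
        diff = np.array([free_energy(v, *rbm_B) - free_energy(v, *rbm_A) for v in V])
        return sigmoid(diff - tau)
    wA = 1.0 - M(VA)                            # per-case weights (1 - M(v)), v in S_A
    wB = M(VB)                                  # per-case weights M(v), v in S_B
    pA = sigmoid(VA @ WA + cA)                  # P_A(h_j | v) for v in S_A
    pB = sigmoid(VB @ WA + cA)                  # P_A(h_j | v) for v in S_B
    dW = -(VA * wA[:, None]).T @ pA + (VB * wB[:, None]).T @ pB
    dc = -(wA[:, None] * pA).sum(0) + (wB[:, None] * pB).sum(0)
    db = (wA[:, None] * (bA - VA)).sum(0) + (wB[:, None] * (VB - bA)).sum(0)
    return dW, db, dc
```

Minimizing C then amounts to ordinary gradient descent steps, e.g. `W -= lr * dW`, on all parameters of both models and τ simultaneously.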
4 Experiments on fMRI data
4.1 Data and preprocessing
For our numerical experiments, we use the fMRI data from a study of recovery from stroke described in [14]. A stroke permanently damages part of the brain (the "lesion"), resulting in loss
of the corresponding function. Some function can be recovered over a period of months or years.
Since the lesion is still present, the patient must have learned to used other parts of the brain to
compensate. Studying stroke patients during recovery with fMRI can help determine what changes
in brain function occur during recovery, and to what degree these changes correlate with degree
of recovery. The study of [14] analysed mean volumes of activation over 4 regions of interest in
each hemisphere. The main conclusion of that paper is that patients with good recovery have higher
activations (averaged over all sessions) in the ipsilateral cerebellum.
Twelve subjects were studied at 1,2,3 and 6 months post-stroke. Due to data irregularities, we study
only 9 of these subjects in this paper; Each of the four imaging sessions consisted of four continuous
recording runs. During each run, the subject alternated two kinds of hand movement: tapping finger
and thumb together, or wrist flexion/extension; with rest breaks in between. The movement was
paced auditorily at 1Hz. During a single run, only one hand moved; during the following run, the
other hand moved. Within a run, the experimental design was : (3 seconds rest, 6 seconds finger tap,
3 seconds rest, 6 seconds wrist flexion), repeated 8 times.
The fMRI images, called "volumes", are recorded every 4 seconds. The volumes are made up of 24 axial (i.e. horizontal) slices of thickness 6mm, and within each slice the pixel size is 2mm × 2mm.
The data for all 9 subjects has been co-registered and motion-corrected using the Automated Image
Registration (AIR) package [16]. For computational ease, we retain only 7 horizontal fMRI slices
out of an available 24 (slices 2,3,4,5,21,22,23, with 24 being the top of the head), resulting in 10499
voxels. The choice of slices is based on prior assumptions about what parts of the brain are involved
in finger and wrist motion. We temporally filter the data by dividing each ?active? image (finger
or wrist) by the mean of the previous two rest images. We linearly scaled all of the data for each
subject in such a way that the each voxel has mean 0 and variance approximately 1. So as to avoid
the long transients intrinsic in fMRI imaging, we discard the first image from each movement block,
and all rest images.
4.2 Classification tasks
We have studied two binary classification tasks. The first task is to predict whether a given fMRI
volume was recorded "early" in the study, defined as the first or second recording session (1 or 2 months post-stroke), or "late" in the study, defined as the third or fourth recording session (3 or 6
months post-stroke). This task addresses our interest in the long-term changes in brain organisation
and function during stroke recovery. The second task is to predict whether the volume was recorded
during finger or wrist movement. Both classification tasks are complex, in the sense that each of
the two classes is known to be heterogeneous. For example, in the early vs. late task, the "early" group is known to contain volumes in four sub-classes: healthy finger movement, healthy wrist movement, impaired finger movement and impaired wrist movement; and similarly for the "late"
group. In addition, there are many sources of variability between volumes that are extraneous to the
classification task and that are present in any fMRI study, including physiological noise, fatigue and
attention.
4.3 Classification methods and testing procedures
We used compared four basic methods: generatively- and discriminatively- trained pairs of RBMs;
logistic regression and K nearest neighbours. Each method was tested on individual fMRI slices
and also on the set of 7 slices described above. For the RBMs, minimization of the cost function
was by gradient descent, while for logistic regression we used the conjugate gradient algorithm as
implemented by Carl Rasmussen's minimize.m.¹
Data for each subject is treated separately. For each subject, the data is split into three subsets:
75% training, 12.5% validation and 12.5% test. The splitting is done by first partitioning the data
into 32 half-runs, each of which contains either all of the finger movement volumes or all of the
wrist movement volumes for one run. One half-run contains 8 groups of 5 consecutively-recorded
volumes. From each of these half-runs, one of the 8 groups was randomly chosen and assigned to
the validation set, a second group was randomly assigned to the test set, and the remaining 6 were
assigned to the training set. This random splitting of each half-run into training, validation and test
sets was done 20 times with different random seeds, leading to 20 uncorrelated splittings. Each
classification method is evaluated on each of the 20 different random splits for each subject.
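A sketch of one such random splitting; the group structure follows the text and the names are ours:

```python
import numpy as np

def split_half_runs(n_half_runs=32, groups_per_run=8, seed=0):
    """Per half-run: 1 group of 5 volumes to validation, 1 to test, 6 to training."""
    rng = np.random.default_rng(seed)
    train, val, test = [], [], []
    for run in range(n_half_runs):
        order = rng.permutation(groups_per_run)
        val.append((run, order[0]))
        test.append((run, order[1]))
        train.extend((run, g) for g in order[2:])
    return train, val, test
```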
Logistic regression was L1-regularized, and the value of the regularization hyperparameter was
chosen by validation. The number of nearest neighbours, K, was also chosen by validation. The
RBMs were Cauchy-regularized, which we found to be a slight improvement over L1 regularization.
When testing on individual fMRI slices (which vary in size), we used 500 hidden units; while 3500
hidden units were used when testing on the larger set of 7 slices, which contained 10499 voxels.
The RBM training algorithm has many variations and hyperparameters, and is very slow to run on
data of this size, so rather than doing formal validation, we adjusted the algorithm informally via
many experiments on data from two of the subjects, mostly using only slice 4 but sometimes using
all 7 slices. These subjects were then included in the test data, so we have not observed a strict
separation of training and test data for the RBM-based methods. We note that our implementation
of the discriminative gradient inadvertently used a residual variance of 1/2 instead of 1.
¹ We had originally used conjugate gradient for discriminatively-trained RBMs as well but we found, late
in the study, that gradient descent ran faster and gave better results. We haven?t investigated this, beyond
numerical verification of our gradients, but it suggests that care should be taken using conjugate gradient with
very high-dimensional data.
We also studied various blends of generative and discriminative training of a pair of RBMs, in which
the cost function is a convex combination of the negative log likelihood functions,
$$L_\alpha = (1 - \alpha)L_{gen} + \alpha L_{disc}. \qquad (9)$$
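Training on the blend just mixes the two gradients; a one-step sketch assuming generative and discriminative gradient routines like those sketched earlier:

```python
def blended_step(params, grads_gen, grads_disc, alpha, lr):
    """One descent step on L_alpha = (1 - alpha) * L_gen + alpha * L_disc."""
    return [p - lr * ((1 - alpha) * gg + alpha * gd)
            for p, gg, gd in zip(params, grads_gen, grads_disc)]
```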
4.4 Results
The following two tables show mean misclassification errors, averaged over all 9 subjects and all
20 splittings of the data from each subject. Following each mean error is the standard deviation
of the means for the 9 subjects. The first table shows mean misclassification errors for the early vs. late classification task:
                 Slice 2       Slice 3       Slice 4       Slice 5       Slice 21      Slice 22      Slice 23      All 7 slices
log. reg.        28.6% ± 7.6   27.1% ± 6.7   28.1% ± 8.2   26.3% ± 7.0   24.2% ± 8.9   23.7% ± 6.9   24.1% ± 4.7   20.1% ± 8.7
K near. neigh.   11.4% ± 6.0   12.3% ± 6.1   11.3% ± 5.4   12.6% ± 5.7   16.8% ± 7.4   15.3% ± 6.5   13.0% ± 4.7   10.0% ± 4.1
discrim. RBM      7.6% ± 2.8   10.4% ± 2.2    9.7% ± 2.3   10.0% ± 2.1   10.6% ± 4.4    9.2% ± 3.5    7.7% ± 2.6   —
gen. RBM          3.0% ± 2.0    2.8% ± 1.9    2.4% ± 1.8    4.2% ± 3.2    4.8% ± 3.4    3.7% ± 2.1    5.2% ± 3.7    0.2% ± 0.2
In all cases, the generatively-trained RBMs outperform all of the other methods tested. We omitted discriminatively training an RBM pair on the large dataset of all 7 slices together, due to the computational expense.²
The next table shows mean error rates for the finger vs. wrist classification task:
                 Slice 4       All 7 slices
log. reg.        17.0% ± 2.8    7.9% ± 3.0
K near. neigh.   11.7% ± 1.5   10.6% ± 2.3
discrim. RBM      9.7% ± 2.3   (6.9% ± 2.3)
gen. RBM         21.8% ± 4.6   11.5% ± 1.5
For this task, we did discriminatively train an RBM pair on the entire dataset; however, due to the
computational expense we used only 1000 hidden units instead of the 3500 used in generative training. Experiments on one subject suggest that the results for discriminative training are not very
sensitive to the number of hidden units.
Figure 1 shows the performance of several convex blends of generative and discriminative training,
tested on fMRI Slice 4. Due to the computational intensity, only 5 splittings of the data were tested
for each blend. Note that the for the early vs. late task, pure generative training outperforms all other
blends; while for the finger vs. wrist task, pure discriminative training outperforms all other blends.
5 Discussion
This study shows that generative models, and in particular, Restricted Boltzmann Machines, can be
very useful in discrimination tasks. It has been shown before that generative training can make use
of unlabelled data to improve discriminative performance [7] [6]. The present study, like that of Ng
and Jordan [11], shows that generative training can improve discriminative performance even if all
data is labelled.
² As noted earlier, we began by using conjugate gradient to train these models, and found it to be extremely
slow. Now that we have switched to gradient descent, discriminative training should be of comparable speed to
the generative training, which is still very computationally intensive for this dataset.
Figure 1: Misclassification rates for a combination of $(1 - \alpha)$ times generative training plus $\alpha$ times discriminative training, as in Equation (9). The $\alpha$ axis has been warped to emphasize values near 0 and 1. For each $\alpha$ value, the mean error rate across all subjects is marked with a circle. The smaller
dots, joined by a vertical bar, show mean error rates for individual subjects.
We studied two methods of training a pair of RBM models: one almost entirely generative, and one
discriminative. To use the terminology of Ng and Jordan, the two algorithms form a generative-discriminative pair, since they use exactly the same models of the input data and differ only in the
training criterion. We found that training a pair of RBM models generatively rather than discriminatively yielded better discriminative performance for one of the two tasks studied. This is consistent
with the results of Ng and Jordan, who studied the generative-discriminative pair consisting of naive
Bayes and logistic regression and found that naive Bayes can outperform logistic regression. Their
theoretical and experimental results suggest that generative training is more likely to be superior to
discriminative training when the number of training examples is small compared to the dimension
of the input data space. Since fMRI studies are in this regime, generative training looks promising
for fMRI-based classification tasks.
The two tasks studied in the present work are: (i) classify fMRI volumes as having been recorded in
either the earlier or later part of the study; and (ii) classify fMRI volumes as corresponding to either
finger or wrist movement. We found that generative training yielded better results for the early vs.
late task, while discriminative training was superior for the finger vs. wrist task.
Why does the relative performance of the two methods vary so much between the two tasks? One
general observation is that generative training is trying to model many different features at once,
many of which may be irrelevant to the discrimination task; whereas, by definition, discriminative
models always focus on the task at hand. Thus there is a possibility for generative models to be
'distracted' (from the point of view of discrimination) by rich structure in the data that is extraneous
to the discrimination task. It seems reasonable that the more structure there is in the images that is
irrelevant to the discrimination task, the poorer will be the discriminative power of the generative
models. We hypothesize that a lot of the complex structure in the fMRI volumes is relevant to
early vs. late classification, but that most of it is irrelevant to finger vs. wrist classification. In
other words, we hypothesize that the long-term changes during stroke recovery are complex and
distributed throughout the brain and that, by contrast, the differences in brain activation between
finger and wrist movements are relatively simple.
It is interesting to compare these results with those of [13] which shows, using the same data as the
present study, that linear classification methods perform better than non-linear ones on the finger vs.
wrist classification task, while for the early vs. late classification task the reverse is true.
We have evaluated blends of generative and discriminative training, as other authors have found
that a combination can outperform both pure generative and pure discriminative training [9][2].
However, this did not occur in our experiments for either of the classification tasks.
From the point of view of neuroscience or medicine, this work has two ultimate aims. The first is to
elucidate neural changes that occur during recovery from a stroke. This is why we chose to study the
early vs. late task. This classification task may shed light on neural representations, as the regions
that change over time will be those that are useful for making this discrimination. The present study
identifies a specific method that is very successful at the early vs. late classification task, but does
not go on to address the problem of 'opening the box', i.e., shedding light on how the classification
method works. Interpreting a set of RBM parameters is known to be more difficult than for linear
models, but there are avenues available such as automatic relevance determination [10] that can
indicate which voxels are most significant in the discrimination. The second aim is to find general
classification methods that can eventually be applied in clinical studies to classify patients as likely
responders or non-responders to certain treatments on the basis of fMRI scans. We have shown
that RBM-based models warrant further investigation in this context. In future work we intend to
evaluate such models for their power to generalize strongly across subjects and recording sessions.
Acknowledgments
We thank Natasa Kovacevic for co-registering and motion-correcting the fMRI data used in this
study. This work was supported by the Brain Network Recovery Group through a grant from the
James S. McDonnell Foundation (No. 22002082).
References
[1] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In
Advances in Neural Information Processing Systems 19, pages 153? 160. MIT Press, 2007.
[2] C. M. Bishop and J. Lasserre. Generative or discriminative? getting the best of both worlds. Bayesian
Statistics, 8:3 ? 24, 2007.
[3] L. K. Hansen. Multivariate strategies in functional magnetic resonance imaging. Brain and Language,
102:186?191, August 2007.
[4] S. J. Hanson and Y.O. Halchenko. Brain Reading Using Full Brain Support Vector Machines for Object
Recognition: There Is No ?Face? Identification Area. Neural Computation, 20(2):486?503, 2008.
[5] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation,
14(8):1711?1800, 2002.
[6] G. E. Hinton. To recognize shapes, first learn to generate images. In P. Cisek, T. Drew, and J. Kalaska,
editors, Computational Neuroscience: Theoretical Insights into Brain Function. Elsevier, 2007.
[7] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science,
313(5786):504 ? 507, July 2006.
[8] H. Larochelle and Y. Bengio. Classification using discriminative restricted boltzmann machines. In ICML
?08: Proceedings of the 25th international conference on Machine learning. ACM, 2008.
[9] Andrew McCallum, Chris Pal, Greg Druck, and Xuerui Wang. Multi-conditional learning: Generative/discriminative training for clustering and classification. In To appear in AAAI ?06: American Association for Artificial Intelligence National Conference on Artificial Intelligence, 2006.
[10] R. M. Neal. Bayesian Learning for Neural Networks. Springer Verlag, 1996.
[11] A. Ng and M. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression
and naive Bayes. Advances in Neural Information Processing Systems 14: Proceedings of the 2002 [sic]
Conference, 2002.
[12] A.J. O?Toole, F. Jiang, H. Abdi, N. Penard, J.P. Dunlop, and M.A. Parent. Theoretical, statistical, and practical perspectives on pattern-based classification approaches to functional neuroimaging analysis. Journal
of Cognitive Neuroscience, 19(11):1735?1752, 2007.
[13] T. Schmah, G. Yourganov, R. S. Zemel, G. E. Hinton, S. L. Small, and S. Strother. A comparison of
classification methods for longitudinal fmri studies. in preparation.
[14] S. L. Small, P. Hlustik, D. C. Noll, C. Genovese, and A. Solodkin. Cerebellar hemispheric activation
ipsilateral to the paretic hand correlates with functional recovery after stroke. Brain, 125(7):1544, 2002.
[15] M. Welling, M. Rosen-Zvi, and G. E. Hinton. Exponential family harmoniums with an application to
information retrieval. In Advances in Neural Information Processing Systems 17. MIT Press, 2005.
[16] R. P. Woods, S. T. Grafton, C. J. Holmes, S. R. Cherry, and J. C. Mazziotta. Automated image registration:
I. general methods and intrasubject, intramodality validation. Journal of Computer Assisted Tomography,
22:139?152, 1998.
Mixed Membership Stochastic Blockmodels
Edoardo M. Airoldi 1,2, David M. Blei 1, Stephen E. Fienberg 3,4 & Eric P. Xing 4*
1 Department of Computer Science, 2 Lewis-Sigler Institute, Princeton University
3 Department of Statistics, 4 School of Computer Science, Carnegie Mellon University
[email protected]
Abstract
In many settings, such as protein interactions and gene regulatory networks, collections of author-recipient email, and social networks, the data consist of pairwise measurements, e.g., presence or absence of links between pairs of objects.
Analyzing such data with probabilistic models requires non-standard assumptions,
since the usual independence or exchangeability assumptions no longer hold. In
this paper, we introduce a class of latent variable models for pairwise measurements: mixed membership stochastic blockmodels. Models in this class combine
a global model of dense patches of connectivity (blockmodel) with a local model
to instantiate node-specific variability in the connections (mixed membership).
We develop a general variational inference algorithm for fast approximate posterior inference. We demonstrate the advantages of the mixed membership stochastic
blockmodel with applications to social networks and protein interaction networks.
1 Introduction
The problem of modeling relational information among objects, such as pairwise relations represented as graphs, arises in a number of settings in machine learning. For example, scientific literature connects papers by citation, the Web connects pages by links, and protein-protein interaction
data connect proteins by physical interaction records. In these settings, we often wish to infer hidden
attributes of the objects from the pairwise observations. For example, we might want to compute
a clustering of the web-pages, predict the functions of a protein, or assess the degree of relevance
of a scientific abstract to a scholar's query. Unlike traditional attribute data measured over individual objects, relational data violate the classical independence or exchangeability assumptions
made in machine learning and statistics. The objects are dependent by their very nature, and this
interdependence suggests that a different set of assumptions is more appropriate.
Recently proposed models aim at resolving relational information into a collection of connectivity
motifs. Such models are based on assumptions that often ignore useful technical necessities, or important empirical regularities. For instance, exponential random graph models [11] summarize the
variability in a collection of paired measurements with a set of relational motifs, but do not provide a
representation useful for making unit-specific predictions. Latent space models [4] project individual units of analysis into a low-dimensional latent space, but do not provide a group structure into
such space useful for clustering. Stochastic blockmodels [8, 6] resolve paired measurements into
groups and connectivity between pairs of groups, but constrain each unit to instantiate the connectivity patterns of a single group as observed in most applications. Mixed membership models, such
as latent Dirichlet allocation [1], have emerged in recent years as a flexible modeling tool for data
where the single group assumption is violated by the heterogeneity within a unit of analysis, e.g., a
document, or a node in a graph. They have been successfully applied in many domains, such as document analysis [1], image processing [7], and population genetics [9]. Mixed membership models
associate each unit of analysis with multiple groups rather than a single group, via a membership
probability-like vector. (* A longer version of this work is available online, at http://jmlr.csail.mit.edu/papers/v9/airoldi08a.html)
The concurrent membership of a datum in different groups can capture its different aspects, such as different underlying topics for words constituting each document. The mixed
membership formalism is a particularly natural idea for relational data, where the objects can bear
multiple latent roles or cluster-memberships that influence their relationships to others. Existing
mixed membership models, however, are not appropriate for relational data because they assume
that the data are conditionally independent given their latent membership vectors. Conditional independence assumptions that technically instantiate mixed membership in recent work, however, are
inappropriate for relational data settings. In such settings, an object is described by its relationships to others. Thus assuming that the ensemble of mixed membership vectors helps govern the
relationships of each object would be more appropriate.
Here we develop mixed membership models for relational data and we describe a fast variational
inference algorithm for inference and estimation. Our model captures the multiple roles that objects exhibit in interaction with others, and the relationships between those roles in determining the
observed interaction matrix. We apply our model to protein interaction and social networks.
2 The Basic Mixed Membership Blockmodel
Observations consist of pairwise measurements, represented as a graph G = (N, Y), where Y(p, q)
denotes the measurement taken on the pair of nodes (p, q). In this section we consider observations
consisting of a single binary matrix, where Y(p, q) \in {0, 1}, i.e., the data can be represented with a
directed graph. The model generalizes to two important settings, however, as we discuss below: a
collection of matrices and/or other types of measurements. We summarize a collection of pairwise
measurements with a mapping from nodes to sets of nodes, called blocks, and pairwise relations
among the blocks themselves. Intuitively, the inference process aims at identifying nodes that are
similar to one another in terms of their connectivity to blocks of nodes. Similar nodes are mapped
to the same block. Individual nodes are allowed to instantiate connectivity patterns of multiple
blocks. Thus, the goal of the analysis with a Mixed Membership Blockmodel (MMB) is to identify
(i) the mixed membership mapping of nodes, i.e., the units of analysis, to a fixed number of blocks,
K, and (ii) the pairwise relations among the blocks. Pairwise measurements among N nodes are
then generated according to latent distributions of block-membership for each node and a matrix of
block-to-block interaction strength. Latent per-node distributions are specified by simplicial vectors.
Each node is associated with a randomly drawn vector, say \vec{\pi}_i for node i, where \pi_{i,g} denotes the
probability of node i belonging to group g. In this fractional sense, each node can belong to multiple
groups with different degrees of membership. The probabilities of interactions between different
groups are defined by a matrix of Bernoulli rates B_{(K \times K)}, where B(g, h) represents the probability
of having a connection from a node in group g to a node in group h. The indicator vector \vec{z}_{p \to q}
denotes the specific block membership of node p when it connects to node q, while \vec{z}_{p \leftarrow q} denotes
the specific block membership of node q when it is connected from node p. The complete generative
process for a graph G = (N, Y) is as follows:
- For each node p \in N:
  - Draw a K-dimensional mixed membership vector \vec{\pi}_p \sim \mathrm{Dirichlet}(\vec{\alpha}).
- For each pair of nodes (p, q) \in N \times N:
  - Draw the membership indicator for the initiator, \vec{z}_{p \to q} \sim \mathrm{Multinomial}(\vec{\pi}_p).
  - Draw the membership indicator for the receiver, \vec{z}_{p \leftarrow q} \sim \mathrm{Multinomial}(\vec{\pi}_q).
  - Sample the value of their interaction, Y(p, q) \sim \mathrm{Bernoulli}(\vec{z}_{p \to q}^{\top} B \, \vec{z}_{p \leftarrow q}).
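To make the generative process concrete, the following NumPy sketch (ours, not the authors' code; all names are illustrative) samples a graph from the model:

    import numpy as np

    def sample_mmb(N, K, alpha, B, seed=0):
        rng = np.random.default_rng(seed)
        pi = rng.dirichlet(alpha, size=N)          # one simplicial vector per node
        Y = np.zeros((N, N), dtype=int)
        for p in range(N):
            for q in range(N):
                z_to = rng.multinomial(1, pi[p])   # initiator indicator
                z_from = rng.multinomial(1, pi[q]) # receiver indicator
                Y[p, q] = rng.binomial(1, z_to @ B @ z_from)
        return pi, Y

    # e.g., pi, Y = sample_mmb(N=20, K=3, alpha=np.full(3, 0.3), B=0.9 * np.eye(3) + 0.05)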
Note that the group membership of each node is context dependent, i.e., each node may assume
different membership when interacting with different peers. Statistically, each node is an admixture
of group-specific interactions. The two sets of latent group indicators are denoted by {\vec{z}_{p \to q} : p, q \in N} =: Z_{\to} and {\vec{z}_{p \leftarrow q} : p, q \in N} =: Z_{\leftarrow}. Further, the pairs of group memberships that underlie
interactions, e.g., (\vec{z}_{p \to q}, \vec{z}_{p \leftarrow q}) for Y(p, q), need not be equal; this fact is useful for characterizing
asymmetric interaction networks. Equality may be enforced when modeling symmetric interactions.
The joint probability of the data Y and the latent variables {\vec{\pi}_{1:N}, Z_{\to}, Z_{\leftarrow}} sampled according to
the MMB is:
    p(Y, \vec{\pi}_{1:N}, Z_{\to}, Z_{\leftarrow} | \vec{\alpha}, B) = \prod_{p,q} P(Y(p, q) | \vec{z}_{p \to q}, \vec{z}_{p \leftarrow q}, B) \, P(\vec{z}_{p \to q} | \vec{\pi}_p) \, P(\vec{z}_{p \leftarrow q} | \vec{\pi}_q) \prod_p P(\vec{\pi}_p | \vec{\alpha}).
[Figure 1: The graphical model of the mixed membership blockmodel (MMB). We did not draw all
the arrows out of the block model B for clarity. All the pairwise measurements, Y(p, q), depend on it.]
Introducing Sparsity. Adjacency matrices encoding binary pairwise measurements often contain
a large number of zeros, or non-interactions; they are sparse. It is useful to distinguish two sources
of non-interaction: they may be the result of the rarity of interactions in general, or they may be
an indication that the pair of relevant blocks rarely interact. In applications to the social sciences, for
instance, nodes may represent people and blocks may represent social communities. In this setting,
it is reasonable to expect that a large portion of the non-interactions is due to limited opportunities
for contact between people in a large population, or to the design of the questionnaire, rather than to
deliberate choices, the structure of which the blockmodel is trying to estimate. It is useful to account
for these two sources of sparsity at the model level. A good estimate of the portion of zeros that
should not be explained by the blockmodel B reduces the bias of the estimates of B's elements.
We introduce a sparsity parameter \rho \in [0, 1] in the model above to characterize the source of non-interaction. Instead of sampling a relation Y(p, q) directly from the Bernoulli with parameter specified as
above, we down-weight the probability of a successful interaction to (1 - \rho) \cdot \vec{z}_{p \to q}^{\top} B \, \vec{z}_{p \leftarrow q}. This is
the result of assuming that the probability of a non-interaction comes from a mixture, 1 - \sigma_{pq} =
(1 - \rho) \cdot \vec{z}_{p \to q}^{\top} (1 - B) \, \vec{z}_{p \leftarrow q} + \rho, where the weight \rho captures the portion of zeros that should not be
explained by the blockmodel B. A large value of \rho will cause the interactions in the matrix to be
weighted more than non-interactions in determining plausible values for {\vec{\alpha}, B, \vec{\pi}_{1:N}}.
Recall that {\vec{\alpha}, B} are constant quantities to be estimated, while {\vec{\pi}_{1:N}, Z_{\to}, Z_{\leftarrow}} are unknown variable quantities whose posterior distribution needs to be determined. Below, we detail the variational
expectation-maximization (EM) procedure to carry out approximate estimation and inference.
2.1 Variational E-Step
During the E-step, we update the posterior distribution over the unknown variable quantities
{\vec{\pi}_{1:N}, Z_{\to}, Z_{\leftarrow}}. The normalizing constant of the posterior is the marginal probability of the data,
which requires an intractable integral over the simplicial vectors \vec{\pi}_p:
    p(Y | \vec{\alpha}, B) = \int_{\vec{\pi}_{1:N}} \sum_{z_{p \to q}, z_{p \leftarrow q}} p(Y, \vec{\pi}_{1:N}, Z_{\to}, Z_{\leftarrow} | \vec{\alpha}, B).    (1)
We appeal to mean-field variational methods [5] to approximate the posterior of interest. The main
idea behind variational methods is to posit a simple distribution of the latent variables with free
parameters, which are fit to make the approximation close in Kullback-Leibler divergence to the
true posterior of interest. The log of the marginal probability in Equation 1 can be bounded as follows,
    \log p(Y | \alpha, B) \ge E_q[ \log p(Y, \vec{\pi}_{1:N}, Z_{\to}, Z_{\leftarrow} | \alpha, B) ] - E_q[ \log q(\vec{\pi}_{1:N}, Z_{\to}, Z_{\leftarrow}) ],    (2)
by introducing a distribution of the latent variables q that depends on a set of free parameters.
We specify q as the mean-field fully-factorized family, q(\vec{\pi}_{1:N}, Z_{\to}, Z_{\leftarrow} | \vec{\gamma}_{1:N}, \Phi_{\to}, \Phi_{\leftarrow}), where
{\vec{\gamma}_{1:N}, \Phi_{\to}, \Phi_{\leftarrow}} is the set of free variational parameters that must be set to tighten the bound. We
tighten the bound with respect to the variational parameters, to minimize the KL divergence between
q and the true posterior. The updates for the variational multinomial parameters are
    \hat{\phi}_{p \to q, g} \propto e^{E_q[\log \pi_{p,g}]} \prod_h \left( B(g, h)^{Y(p,q)} (1 - B(g, h))^{1 - Y(p,q)} \right)^{\phi_{p \leftarrow q, h}},    (3)
    \hat{\phi}_{p \leftarrow q, h} \propto e^{E_q[\log \pi_{q,h}]} \prod_g \left( B(g, h)^{Y(p,q)} (1 - B(g, h))^{1 - Y(p,q)} \right)^{\phi_{p \to q, g}},    (4)
for g, h = 1, ..., K. The update for the variational Dirichlet parameters \gamma_{p,k} is
    \hat{\gamma}_{p,k} = \alpha_k + \sum_q \phi_{p \to q, k} + \sum_q \phi_{p \leftarrow q, k},    (5)
for all nodes p = 1, ..., N and k = 1, ..., K.
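A direct NumPy transcription of Equations (3)-(5) (a sketch in our own notation, assuming E_q[log pi] is computed from the Dirichlet parameters via digamma functions) reads:

    import numpy as np
    from scipy.special import digamma

    def update_phis(Y_pq, B, gamma_p, gamma_q, phi_to, phi_from):
        # One pass of Equations (3)-(4) for a single pair (p, q).
        log_lik = Y_pq * np.log(B) + (1 - Y_pq) * np.log(1 - B)  # K x K matrix
        e_log_pi_p = digamma(gamma_p) - digamma(gamma_p.sum())   # E_q[log pi_p]
        e_log_pi_q = digamma(gamma_q) - digamma(gamma_q.sum())
        new_to = np.exp(e_log_pi_p + log_lik @ phi_from)         # Eq. (3), unnormalized
        new_from = np.exp(e_log_pi_q + log_lik.T @ phi_to)       # Eq. (4), unnormalized
        return new_to / new_to.sum(), new_from / new_from.sum()

    def update_gamma(alpha, phi_to_all, phi_from_all):
        # Eq. (5); phi_*_all have shape (N, N, K), summed over the partner index q.
        return alpha + phi_to_all.sum(axis=1) + phi_from_all.sum(axis=1)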
Nested Variational Inference. To improve convergence, we developed a nested variational inference scheme based on an alternative schedule of updates to the traditional ordering [5]. In a naive
iteration scheme for variational inference, one initializes the variational Dirichlet parameters \vec{\gamma}_{1:N}
and the variational multinomial parameters (\vec{\phi}_{p \to q}, \vec{\phi}_{p \leftarrow q}) to non-informative values, and then iterates the following two steps until convergence: (i) update \vec{\phi}_{p \to q} and \vec{\phi}_{p \leftarrow q} for all edges (p, q),
and (ii) update \vec{\gamma}_p for all nodes p \in N. At each variational inference cycle one needs to allocate
NK + 2N^2 K scalars. In our experiments, the naive variational algorithm often failed to converge,
or converged only after many iterations. We attribute this behavior to dependence between \vec{\gamma}_{1:N} and
B in the model, which is not accounted for by the naive algorithm. The nested variational inference
algorithm retains a portion of this dependence across iterations by following a particular path to convergence. We keep the block of free parameters (\vec{\phi}_{p \to q}, \vec{\phi}_{p \leftarrow q}) at their optimal values conditionally
on the other variational parameters. These parameters are involved in the updates of parameters
in \vec{\gamma}_{1:N} and in B, thus effectively providing a channel to maintain some dependence among them.
From a computational perspective, the nested algorithm trades time for space, thus allowing us to
deal with large graphs. At each variational cycle we allocate NK + 2K scalars only. The algorithm
can be parallelized and, empirically, leads to a better likelihood bound per unit of running time.
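Schematically, the nested schedule can be written as follows (our sketch, reusing update_phis from above); the phi's are recomputed on the fly at their conditionally optimal values rather than stored for the whole graph, which is why only NK + 2K scalars are live at a time:

    def nested_e_step(Y, B, alpha, gamma, n_outer=20, n_inner=10):
        N, K = gamma.shape
        for _ in range(n_outer):
            new_gamma = np.tile(alpha, (N, 1))
            for p in range(N):
                for q in range(N):
                    phi_to = phi_from = np.full(K, 1.0 / K)
                    # Hold the phi's at their conditional optimum for this dyad.
                    for _ in range(n_inner):
                        phi_to, phi_from = update_phis(Y[p, q], B, gamma[p],
                                                       gamma[q], phi_to, phi_from)
                    new_gamma[p] += phi_to     # p's contribution as initiator
                    new_gamma[q] += phi_from   # q's contribution as receiver
            gamma = new_gamma
        return gamma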
2.2 M-Step
During the M-step, we maximize the lower bound in Equation 2, used as a surrogate for the likelihood, with respect to the unknown constants {\vec{\alpha}, B}. In other words, we compute the empirical
Bayes estimates of the hyper-parameters. The M-step is equivalent to finding the MLE using expected sufficient statistics under the variational distribution. We consider the maximization step for
each parameter in turn. A closed-form solution for the approximate maximum likelihood estimate
of \vec{\alpha} does not exist. We used linear-time Newton-Raphson, with gradient and Hessian
    \frac{\partial L_{\vec{\alpha}}}{\partial \alpha_k} = N \left( \psi\!\left(\sum_k \alpha_k\right) - \psi(\alpha_k) \right) + \sum_p \left( \psi(\gamma_{p,k}) - \psi\!\left(\sum_k \gamma_{p,k}\right) \right), and
    \frac{\partial L_{\vec{\alpha}}}{\partial \alpha_{k_1} \partial \alpha_{k_2}} = N \left( I_{(k_1 = k_2)} \, \psi'(\alpha_{k_1}) - \psi'\!\left(\sum_k \alpha_k\right) \right),
to find optimal values for \vec{\alpha}, numerically. The approximate MLE of B is
    \hat{B}(g, h) = \frac{\sum_{p,q} Y(p, q) \, \phi_{p \to q, g} \, \phi_{p \leftarrow q, h}}{\sum_{p,q} \phi_{p \to q, g} \, \phi_{p \leftarrow q, h}},    (6)
for every pair (g, h) \in [1, K]^2. Finally, the approximate MLE of the sparsity parameter \rho is
    \hat{\rho} = \frac{\sum_{p,q} (1 - Y(p, q)) \sum_{g,h} \phi_{p \to q, g} \, \phi_{p \leftarrow q, h}}{\sum_{p,q} \sum_{g,h} \phi_{p \to q, g} \, \phi_{p \leftarrow q, h}}.    (7)
Alternatively, we can fix \rho prior to the analysis; the density of the interaction matrix is estimated
with \hat{d} = \sum_{p,q} Y(p, q) / N^2, and the sparsity parameter is set to \hat{\rho} = (1 - \hat{d}). This latter estimator
attributes all the information in the non-interactions to the point mass, i.e., to latent sources other
than the block model B or the mixed membership vectors \vec{\pi}_{1:N}. It can be used, however, as a quick
recipe to reduce the computational burden during exploratory analyses.
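As a sketch in our own notation, the closed-form M-step updates of Equations (6)-(7) take a few lines of NumPy, given the stacked variational multinomials phi_to and phi_from of shape (N, N, K):

    import numpy as np

    def m_step_B_rho(Y, phi_to, phi_from, eps=1e-12):
        # weights[p, q, g, h] = phi_to[p, q, g] * phi_from[p, q, h], the expected
        # count of block pair (g, h) behind each dyad (p, q).
        weights = np.einsum('pqg,pqh->pqgh', phi_to, phi_from)
        denom = weights.sum(axis=(0, 1))                                # K x K
        B_hat = np.einsum('pq,pqgh->gh', Y, weights) / (denom + eps)    # Eq. (6)
        rho_hat = (np.einsum('pq,pqgh->', 1 - Y, weights)
                   / (weights.sum() + eps))                             # Eq. (7)
        return B_hat, rho_hat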
Several model selection strategies exist for hierarchical models. In our setting, model selection
translates into the choice of the number of blocks, K. Below, we chose K with held-out likelihood
in a cross-validation experiment, on large networks, and with approximate BIC, on small networks.
2.3 Summarizing and De-Noising Pairwise Measurements
It is useful to consider two data analysis perspectives the MMB can offer: (i) it summarizes the
data, Y, in terms of the global blockmodel, B, and the node-specific mixed memberships, the \pi's;
(ii) it de-noises the data, Y, in terms of the global blockmodel, B, and interaction-specific single
memberships, Zs. In both cases the model depends on a small set of unknown constants to be
estimated: \alpha and B. The likelihood is the same in both cases, although the reasons for including the
set of latent variables Zs differ. When summarizing data, we could integrate out the Zs analytically;
this leads to numerical optimization of a smaller set of variational parameters, the \gamma's. We choose to
keep the Zs to simplify inference. When de-noising, the Zs are instrumental in estimating posterior
expectations of each interaction individually, a network analog to the Kalman filter. The posterior
expectation of an interaction is computed as \vec{\pi}_p^{\top} B \, \vec{\pi}_q and \vec{\phi}_{p \to q}^{\top} B \, \vec{\phi}_{p \leftarrow q}, in the two cases.
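In code, the two posterior predictions are one line each (our sketch; pi is the N x K matrix of posterior mean memberships, and phi_to, phi_from are as above):

    import numpy as np

    def predict_summary(pi, B):
        # Coarse reconstruction: E[Y(p, q)] from node-level memberships.
        return pi @ B @ pi.T

    def predict_denoised(phi_to, phi_from, B):
        # Finer reconstruction: each dyad re-weighted by its own indicators.
        return np.einsum('pqg,gh,pqh->pq', phi_to, B, phi_from)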
3 Empirical Results
We evaluated the MMB on simulated data and on three collections of pairwise measurements. Results on simulated data sampled according to the model show that variational EM accurately recovers the mixed membership map, \vec{\pi}_{1:N}, and the blockmodel, B. Cross-validation suggests an accurate
estimate for K. Nested variational scheduling of parameter updates makes inference parallelizable
and typically reaches a better solution than the naive scheduling.
First, we consider whom-do-like relations among 18 novices in a New England monastery. The
unsupervised analysis demonstrates the type of patterns that MMB recovers from data, and allows us
to contrast the summaries of the original measurements achieved through prediction and de-noising.
The data was collected by Sampson during his stay at the monastery, while novices were preparing
to join the monastic order [10]. Sampson's original analysis is rooted in direct anthropological
observations. He made a strong case for the existence of tight factions among the novices: the loyal
opposition (whose members joined the monastery first), the young turks (who joined later on), the
outcasts (who were not accepted in the two main factions), and the waverers (who did not take sides).
The events that took place during Sampson's stay at the monastery supported his observations:
members of the young turks resigned or were expelled over religious differences (John and Gregory).
Scholars in the social sciences typically regard the faction labels assigned by Sampson to the novices
(and his conclusions, more in general) as ground truth to the extent of assessing the quality of results
of quantitative analyses; we shall do the same here. Using the nested variational EM algorithm
above, we fit an array of mixed membership blockmodels with different values of K, and collected
model estimates {\hat{\alpha}, \hat{B}} and posterior mixed membership vectors \hat{\vec{\pi}}_{1:18} for the novices. We used an
approximation of BIC to choose the value of K supported by the data. This criterion selects \hat{K} = 3,
the same number of proper groups that Sampson identified based on anthropological observations;
the waverers are interstitial members, rather than a group. Figure 2 shows the patterns that the mixed
membership blockmodel with \hat{K} = 3 recovers from data. In particular, the top-left panel shows a
graphical representation of the blockmodel \hat{B}. The block that we can identify a-posteriori with the
loyal opposition is portrayed as central to the monastery, while the block identified with the outcasts
shows the lowest internal coherence, in accordance with Sampson's observations. The top-right
panel illustrates the posterior means of the mixed membership scores, E[\vec{\pi} | Y], for the 18 monks in
the monastery. The model (softly) partitions the monks according to Sampson's classification, with
Young Turks, Loyal Opposition, and Outcasts dominating each corner respectively. Notably, we can
quantify the central role played by John Bosco and Gregory, who exhibit relations in all three groups,
as well as the uncertain affiliations of Ramuald and Victor; Amand's uncertain affiliation, however,
is not captured. The bottom panels contrast the different resolutions of the original adjacency matrix
of whom-do-like sociometric relations (left panel) obtained with the two analyses MMB enables.
Figure 2: Top-Left: Estimated blockmodel, \hat{B}. Top-Right: Posterior mixed membership vectors,
\hat{\vec{\pi}}_{1:18}, projected in the simplex. The estimates correspond to a model with \hat{B} top-left, and \hat{\rho} = 0.058.
Numbered points can be mapped to monks' names using the legend on the right. The colors identify
the four factions defined by Sampson's anthropological observations. Bottom: Original adjacency
matrix of whom-do-like sociometric relations (left), relations predicted using approximate MLEs
for \vec{\pi}_{1:N} and B (center), and relations de-noised using the model including Zs indicators (right).
If the goal of the analysis is to find a parsimonious summary of the data, the amount of relational
information that is captured by \hat{\alpha}, \hat{B}, and E[\vec{\pi} | Y] leads to a coarse reconstruction of the original
sociomatrix (central panel). If the goal of the analysis is to de-noise a collection of pairwise
measurements, the amount of relational information that is revealed by \hat{\alpha}, \hat{B}, and E[Z_{\to}, Z_{\leftarrow} | Y] leads
to a finer reconstruction of the original sociomatrix, Y: relations in Y are re-weighted according to
how much sense they make to the model (right panel). Substantively, the unsupervised analysis of the
sociometric relations with MMB offers quantitative support to several of Sampson's observations.
Second, we consider a friendship network among a group of 69 students in grades 7-12. The analysis
here directly compares clustering results obtained with MMB to published results obtained with
competing models, in a setting where a fair amount of social segregation is expected [2, 3].
The data is a collection of friendship relations among 69 students in a school surveyed in the National Study of Adolescent Health. The original population in the school of interest consisted of 71
students. Two students expressed no friendship preferences and were excluded from the analysis.
We used the variational EM algorithm to fit an array of mixed membership blockmodels with different
values of K, collected model estimates, and used an approximation to BIC to select K. This procedure identifies \hat{K} = 6 as the model size that best explains the data; note that six is the number of
grade-groups in the student population. The blocks are clearly interpretable a-posteriori in terms of
grades, thus providing a mapping between grades and blocks. Conditionally on such a mapping, we
assign students to the grade they are most associated with, according to their posterior-mean mixed
membership vectors, E[\vec{\pi}_n | Y]. To be fair in the comparison with competing models, we assign
students a unique grade, even though MMB allows for mixed membership. Table 1 computes the
correspondence of grades to blocks by quoting the number of students in each grade-block pair, for
MMB versus the mixture blockmodel (MB) in [2], and the latent space cluster model (LSCM) in
[3]. The higher the sum of counts on the diagonal, the better the correspondence; the
higher the sum of counts off the diagonal, the worse the correspondence. MMB performs
best by allocating 63 students to their grades, versus 57 for MB and 37 for LSCM. Correspondence
only partially captures goodness of fit; however, it is a good metric in the setting we consider, where a fair amount of clustering is present. The extra flexibility MMB offers over MB and
LSCM reduces bias in the prediction of the membership of students to blocks in this problem.
In other words, mixed membership does not absorb noise in this example; rather, it accommodates
variability in the friendship relation that is instrumental in producing better predictions.
              MMB Clusters                 MB Clusters                LSCM Clusters
Grade    1   2   3   4   5   6      1   2   3   4   5   6      1   2   3   4   5   6
7       13   1   0   0   0   0     13   1   0   0   0   0     13   1   0   0   0   0
8        0   9   2   0   0   1      0  10   2   0   0   0      0  11   1   0   0   0
9        0   0  16   0   0   0      0   0  10   0   0   6      0   0   7   6   3   0
10       0   0   0  10   0   0      0   0   0  10   0   0      0   0   0   0   3   7
11       0   0   1   0  11   1      0   0   1   0  11   1      0   0   0   0   3  10
12       0   0   0   0   0   4      0   0   0   0   0   4      0   0   0   0   0   4
Table 1: Grade levels versus (highest) expected posterior membership for the 69 students, according
to three alternative models. MMB is the proposed mixed membership stochastic blockmodel, MB
is the mixture blockmodel in [2], and LSCM is the latent space cluster model in [3].
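The correspondence score quoted in the text is just the diagonal mass of a grade-by-cluster table; as a quick arithmetic check (our code, using the MMB block of Table 1):

    import numpy as np

    mmb = np.array([[13, 1,  0,  0,  0, 0],
                    [ 0, 9,  2,  0,  0, 1],
                    [ 0, 0, 16,  0,  0, 0],
                    [ 0, 0,  0, 10,  0, 0],
                    [ 0, 0,  1,  0, 11, 1],
                    [ 0, 0,  0,  0,  0, 4]])
    correspondence = int(np.trace(mmb))   # students placed in their own grade: 63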
Third, we consider physical interactions among 871 proteins in yeast. The analysis allows us to evaluate the utility of MMB in summarizing and de-noising complex connectivity patterns quantitatively,
using an independent set of functional annotations: consider two models that suggest different sets
of interactions as reliable; we prefer the model that reveals functionally relevant interactions.
The pairwise measurements consist of a hand-curated collection of physical protein interactions
made available by the Munich Institute of Protein Sequencing (MIPS). The yeast genome database
provides independent functional annotations for each protein, which we use for evaluating the functional content of the protein networks estimated with the MMB from the MIPS data, as detailed
below. We explored a large model space, K = 2 . . . 225, and used five-fold cross-validation to identify a blockmodel B that reduces the dimensionality of the physical interactions among proteins in
the training set, while revealing robust aspects of connectivity that can be leveraged to predict physical interactions among proteins in the test set. We determined that a fairly parsimonious model,
K = 50, provides a good description of the observed physical interaction network. This finding
supports the hypothesis that proteins derived from the MIPS data are interpretable in terms functional biological contexts. Alternatively, the blocks might encode signal at a finer resolution, such
as that of protein complexes. If that was the case, however, we would expect the optimal number of
blocks to be significantly higher; 871/5 ? 175, given an average size of five proteins in a protein
complex. We then evaluated the functional content of the posterior induced by MMB. The goal is
to assess to what extent MMB reveals substantive information about the functionality of proteins
that can be used to inform subsequent analyses. To do this, first, we fit a model on the whole data
set to estimate the blockmodel, B_{(50 \times 50)}, and the mixed membership vectors between proteins and
blocks, \vec{\pi}_{1:871}, and second, we either impute physical interactions by thresholding the posterior
expectations computed using blockmodel and node-specific memberships (summarization task), or
we de-noise the observed interactions using blockmodel and pair-specific memberships (de-noising
task). Posterior expectations of each interaction are in [0, 1]. Thresholding such expectations at q,
for instance, leads to a collection of binary physical interactions that are reliable with probability p ≥ q. We used an independent set of functional annotations from the yeast database (SGD
at www.yeastgenome.org) to decide which interactions are functionally meaningful; namely those
between pairs of proteins that share at least one functional annotation. In this sense, between two
models that suggest different sets of interactions as reliable, our evaluation assigns a higher score
to the model that reveals the functionally relevant interactions according to SGD.
[Figure 3: Functional content of the MIPS collection of protein interactions (yellow diamond) on a
precision-recall plot (precision vs. unnormalized recall), compared against other published collections
of interactions and microarray data, and to the posterior estimates of the MMB models, computed as
described in the text. Legend: MMB (K=50; MIPS de-noised with Zs & B); MMB (K=50; MIPS
summarized with \pi's & B).]
Figure 3 shows the
functional content of the original MIPS collection of physical interactions (point no. 2), and of the
collections of interactions computed using (B, \pi's), the light blue line, and using (B, Zs),
the dark blue line, thresholded at ten different levels: precision-recall curves. The posterior
means of the \pi's provide a parsimonious representation for the MIPS collection, and lead to precise
protein interaction estimates, in moderate amount (light blue line). The posterior means of the Zs provide a
richer representation for the data, and describe most of the functional content of the MIPS collection
with high precision (dark blue line). Importantly, the estimated networks corresponding to lower levels
of recall for both model variants feature a more precise functional content than the
original network. This means that the proposed latent block structure is helpful in effectively denoising the collection of interactions by ranking them properly. On closer inspection, dense blocks
of predicted interactions contain known functional predictions that were not in the MIPS collection,
thus effectively improving the quality of the protein binding data that instantiate cellular activity
of specific biological contexts, such as biopolymer catabolism and homeostasis. In conclusion, our
results suggest that MMB successfully reduces the dimensionality of the data, while discovering information about the multiple functionality of proteins that can be used to inform follow-up analyses.
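A sketch of this evaluation loop (our code; expected is a matrix of posterior expectations from either prediction above, and same_function is a boolean matrix derived from the SGD annotations):

    import numpy as np

    def precision_recall_at(expected, same_function, thresholds):
        results = []
        for q in thresholds:
            predicted = expected >= q                    # impute interactions at level q
            hits = np.logical_and(predicted, same_function).sum()
            precision = hits / max(predicted.sum(), 1)   # fraction functionally supported
            recall = hits                                # unnormalized, as in Figure 3
            results.append((q, precision, recall))
        return results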
Remarks. A. In the relational setting, cross-validation is feasible if the blockmodel estimated
on training data can be expected to hold on test data; for this to happen the network must be of
reasonable size, so that we can expect members of each block to be in both training and test sets.
In this setting, scheduling of variational updates is important; nested variational scheduling leads to
efficient and parallelizable inference. B. MMB includes two sources of variability, B and the \pi's, that are
apparently in competition for explaining the data, possibly raising an identifiability issue. This is
not the case, however, as the blockmodel B captures global/asymmetric relations, while the mixed
membership vectors \pi's capture local/symmetric relations. This difference practically eliminates the
issue, unless there is no signal in the data to begin with. C. MMB generalizes to two important cases.
First, multiple data collections Y_{1:M} on the same objects can be generated by the same latent vectors.
This might be useful, for instance, for analyzing multivariate sociometric relations simultaneously.
Second, in the MMB the data generating distribution is a Bernoulli, but B can be a matrix of
parameters for any kind of distribution. For instance, technologies for measuring interactions
between pairs of proteins, such as mass spectrometry and tandem affinity purification, return
a probabilistic assessment about the presence of interactions, thus setting the range to Y \in [0, 1].
References
[1] D. M. Blei, A. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research,
3:993-1022, 2003.
[2] P. Doreian, V. Batagelj, and A. Ferligoj. Discussion of "Model-based clustering for social networks".
Journal of the Royal Statistical Society, Series A, 170, 2007.
[3] M. S. Handcock, A. E. Raftery, and J. M. Tantrum. Model-based clustering for social networks. Journal
of the Royal Statistical Society, Series A, 170:1-22, 2007.
[4] P. D. Hoff, A. E. Raftery, and M. S. Handcock. Latent space approaches to social network analysis.
Journal of the American Statistical Association, 97:1090-1098, 2002.
[5] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. Introduction to variational methods for graphical
models. Machine Learning, 37:183-233, 1999.
[6] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with
an infinite relational model. In Proc. of the 21st National Conference on Artificial Intelligence, 2006.
[7] F.-F. Li and P. Perona. A Bayesian hierarchical model for learning natural scene categories. IEEE Computer Vision and Pattern Recognition, 2005.
[8] K. Nowicki and T. A. B. Snijders. Estimation and prediction for stochastic blockstructures. Journal of
the American Statistical Association, 96:1077-1087, 2001.
[9] J. K. Pritchard, M. Stephens, N. A. Rosenberg, and P. Donnelly. Association mapping in structured
populations. American Journal of Human Genetics, 67:170-181, 2000.
[10] F. S. Sampson. A Novitiate in a Period of Change: An Experimental and Case Study of Social Relationships.
PhD thesis, Cornell University, 1968.
[11] S. Wasserman, G. Robins, and D. Steinley. A brief review of some recent research. In: Statistical Network
Analysis: Models, Issues and New Directions, Lecture Notes in Computer Science. Springer-Verlag, 2007.
A rational model of preference learning
and choice prediction by children
Thomas L. Griffiths
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720, USA
tom [email protected]
Christopher G. Lucas
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720, USA
[email protected]
Fei Xu
Department of Psychology
University of British Columbia
Vancouver, B.C., Canada V6T 1Z4
[email protected]
Christine Fawcett
Max-Planck-Institute for Psycholinguistics
Wundtlaan 1, Postbus 310
6500AH, Nijmegen, The Netherlands
[email protected]
Abstract
Young children demonstrate the ability to make inferences about the preferences
of other agents based on their choices. However, there exists no overarching account of what children are doing when they learn about preferences or how they
use that knowledge. We use a rational model of preference learning, drawing on
ideas from economics and computer science, to explain the behavior of children
in several recent experiments. Specifically, we show how a simple econometric
model can be extended to capture two- to four-year-olds? use of statistical information in inferring preferences, and their generalization of these preferences.
1 Introduction
Economists and computer scientists are often concerned with inferring people's preferences from
their choices, developing econometric methods (e.g., [1, 2]) and collaborative filtering algorithms
(e.g., [3, 4, 5]) that will allow them to assess the subjective value of an item or determine which
other items a person might like. However, identifying the preferences of others is also a key part of
social cognitive development, allowing children to understand how people act and what they want.
Young children are thus often in the position of economists or computer scientists, trying to infer
the nebulous preferences of the people around them from the choices they make. In this paper, we
explore whether the inferences that children draw about preferences can be explained within the
same kind of formalism as that used in economics and computer science, testing the hypothesis that
children are making rational inferences from the limited data available to them.
Before about 18 months of age, children seem to assume that everyone likes the same things as
themselves, having difficulty understanding the subjective nature of preferences (e.g., [6]). However,
shortly after coming to recognize that different agents can maintain different preferences, children
demonstrate a remarkably sophisticated ability to draw conclusions about the preferences of others
from their behavior. For example, two-year-olds seem to be capable of using shared preferences
between an agent and themselves as the basis for generalization of other preferences [7], while three- and four-year-olds can use statistical information to reason about preferences, inferring a preference
for an object when an agent chooses the object more often than expected by chance [8].
This literature in developmental psychology is paralleled by work in econometrics on statistical
models for inferring preferences from choices. In this paper, we focus on an approach that grew out
of the Nobel prize-winning work of McFadden (see [1] for a review), exploring a class of models
known as mixed multinomial logit models [2]. These models assume that agents assign some utility
to every option in a choice, and choose in a way that is stochastically related to these utilities. By
observing the choices people make, we can recover their utilities by applying statistical inference,
providing a simple rational standard against which the inferences of children can be compared.
Research on preferences in computer science has tended to go beyond modeling individual choice,
focusing on predicting which options people will like based not just on their own previous choice
patterns but also drawing on the choices of other people ? a problem known as collaborative filtering [3]. This work has led to the development of the now-ubiquitous recommendation systems that
suggest which items one might like to purchase based on previous purchases, and has reached notoriety through the recent Netflix challenge. Economists have also explored models for the choices
of multiple agents, using hierarchical Bayesian statistics [9]. These models combine information
across agents to make inferences about the properties or value of different options.
Our contribution in this paper is to bring together these different threads of research to develop rational models of children?s inferences about preferences. Section 2 summarizes developmental work
on children?s inferences about preferences. Section 3 outlines the basic idea behind rational choice
models, drawing on previous work in economics and computer science. We then consider how these
models can be used to explain developmental data, with Section 4 concerned with inferences about
preferences from choices, and Section 5 focusing on inferences about the properties of objects from
preferences. Section 6 discusses the implications of our results and concludes the paper.
2 Children's inferences about preferences
The basic evidence that children do not differentiate the preferences of others before about 18 months
of age comes from Repacholi and Gopnik [6]. Subsequent work has built on these results to explore
the kinds of cues that children can use in inferring preferences, and how children generalize consistent patterns of preferences.
2.1 Learning preferences from statistical evidence
While 18-month-olds are able to infer preferences from affective responses, we often need to make
inferences from more impoverished data, such as the patterns of choices that people make when
faced with various options. Recent work by Kushnir and colleagues [8] provided the first evidence
that 3- and 4-year-old children can use statistical sampling information as the basis for inferring an
agent's preference for toys. Three groups of children were tested in a simple task. Each child was
shown a big box of toys. For the first group, the box was filled with just one type of toy (e.g., 100%
red discs). For the second group, the box was filled with two types of toys (e.g., 50% red discs and
50% blue plastic flowers). For the third group, the box was also filled with two types of toys, but in
different proportions (e.g., 18% red discs and 82% blue plastic flowers). A puppet named Squirrel
came in to play a game with the child. Squirrel looked into the box and picked out five toys. The
sample always consisted of five red discs for all three conditions. Then the child was given three toys
- a red disc (the target), a blue plastic flower (the alternative), and a yellow cylinder (the distractor) -
and was asked to give Squirrel the one he liked. Each child received two trials with different objects.
The results of the experiment showed that the children chose the target (the red disc) 0.96, 1.29, and
1.67 times (out of 2) in the 100%, 50%, and 18% conditions, respectively, suggesting that children
used the non-random sampling behavior of Squirrel as the basis for inferring his preferences.
2.2 Generalizing from shared preferences
Recognizing that preferences can vary from one agent to another also establishes an opportunity
to discover that those preferences can differ in the degree to which they are related to one's own.
Fawcett and Markson [7] asked under what conditions children would use shared preferences between themselves and another agent as the basis for generalization, using a task similar to the
"collaborative filtering" problem explored in computer science. Their experiments began with four
blocks of training involving two actors. In each block the actors introduced two objects from a common category, including toys, television shows and foods. Each actor expressed liking the object
she introduced and dislike for the other's object. One actor had preferences that were matched to the
child's in all blocks, in that her objects had features chosen to be more interesting to the child. After
each actor reacted to the objects, the child was given an opportunity to play with the objects, and his
or her preference for one object over the other was judged by independent coders, based on relative
interest in and play with each object.
After the training blocks, the first test block began. Each actor brought out a new object that was
described as being in the same category as the training objects, but was hidden from the child by an
opaque container. Each actor then reacted to her novel object in a manner that varied by condition.
In the like condition, the actor's reaction was to examine the object and describe it as her favorite
object of the category. In the dislike condition, she examined the object and expressed dislike. In
the indifferent condition the actor did not examine the toy, and professed ignorance about it. The
child was then given an opportunity to choose one hidden object to play with. Finally, a second test
block began, identical to the first except that the hidden objects were members of a different category
from those seen in training. In Experiment 1, members of the new category could be taken to share
features with members of the training category, e.g., toys versus books, while in Experiment 2 the
new category was chosen to minimize such overlap, e.g., food versus television shows. Children
consistently chose the test items that were favored by the agent who shared their own preferences
during training, for both toys and the similar category, books. In contrast, when a highly distant
category was used during test, children did not show any systematic generalization behaviors. These
results suggest that children use shared preferences as the basis for generalization, but they also take
into account whether the categories are related or not.
2.3 Summary and prospectus
Recent results in developmental psychology indicate that young children are capable of making
remarkably sophisticated inferences about the preferences of others. This raises the question of
how they make these inferences, and whether the kinds of conclusions that children draw from
the behavior of others are justified. We explore this question in the remainder of the paper. In
the tradition of rational analysis [10], we consider the problem of how one might optimally infer
people?s preferences from their choices, and compare the predictions of such a model with the
developmental data. The results of this analysis will help us understand how children might conceive
of the relationship between the choices that people make and the preferences they have.
3 A rational model connecting choice and preference
In developing a rational account of how an agent might learn others' preferences from choice information, we must first posit a specific ecological relationship between people's preferences and their
choices, and then determine how an agent would make optimal inferences from others? behavior
given knowledge of this relationship. Fortunately, the relationship between preferences and choices
has been the subject of extensive research in economics and psychology.
One of the most basic models of choice behavior is the Luce-Shepard choice rule [11, 12], which
asserts that when presented with a set of J options with utilities u = (u_1, ..., u_J), people will choose option i with probability

P(c = i | u) = exp(u_i) / Σ_j exp(u_j)    (1)
where j ranges over the options considered in the choice. Given this choice rule, learning about
an agent's preferences is a simple matter of Bayesian inference. Specifically, having observed a sequence of choices c = (c_1, ..., c_N), we can compute a posterior distribution over the utilities of the options involved by applying Bayes' rule

p(u | c) = P(c | u) p(u) / ∫ P(c | u) p(u) du    (2)
where p(u) is a probability density expressing the prior probability of a vector of utilities u, and
the likelihood P (c|u) is obtained by assuming that the choices are independent given u, being the
product of the probabilities of the individual choices as in Equation 1.
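As a concrete illustration, the sketch below simulates choices from Equation 1 and approximates the posterior of Equation 2 by simple Monte Carlo over prior draws. The utilities, prior scale, and sample counts are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def luce_shepard(u):
    """Equation 1: softmax choice probabilities over option utilities u."""
    e = np.exp(u - u.max())           # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical agent with fixed utilities over three options (illustrative).
true_u = np.array([2.0, 0.5, -1.0])
choices = rng.choice(3, size=20, p=luce_shepard(true_u))

# Equation 2 by Monte Carlo: draw utility vectors from the prior p(u) and
# weight each draw by the likelihood of the observed choices.
samples = rng.normal(0.0, 2.0, size=(20_000, 3))
log_softmax = samples - np.log(np.exp(samples).sum(axis=1, keepdims=True))
log_lik = log_softmax[:, choices].sum(axis=1)
w = np.exp(log_lik - log_lik.max())
print((w[:, None] * samples).sum(axis=0) / w.sum())  # posterior mean utilities
```

The recovered posterior mean preserves the ordering of the true utilities; only utility differences matter, since Equation 1 is invariant to adding a constant to u.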
While this simple model is sufficient to capture preferences among a constrained set of objects, most
models used in econometrics aim to predict the choices that agents will make about novel objects.
This can be done by assuming that options have features that determine their utility, with the utility
of option i being a function of the utility of its features. If we let x_i be a binary vector indicating whether an option possesses each of a finite set of features, and θ_a the utility that agent a assigns to those features¹, we can express the utility of option i for agent a as the inner product of these two vectors. The probability of agent a choosing option i is then

P(c = i | X, θ_a) = exp(θ_a^T x_i) / Σ_j exp(θ_a^T x_j)    (3)
where X collects the features of all of the options. We can also integrate out θ_a to obtain the choice probabilities given just the features of the options, with

P(c = i | X) = ∫ [ exp(θ_a^T x_i) / Σ_j exp(θ_a^T x_j) ] p(θ_a) dθ_a    (4)
This corresponds to the mixed multinomial logit (MML; [2]) model, which has been used for several
decades in econometrics to model discrete-choice preferences in populations of consumers.
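As a sketch of Equations 3 and 4, the integral in Equation 4 can be approximated by averaging Equation 3 over prior draws of θ_a. The one-hot feature matrix and the prior scale are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def choice_probs(theta, X):
    """Equation 3: P(c = i | X, theta) for an options-by-features matrix X."""
    s = X @ theta
    e = np.exp(s - s.max())
    return e / e.sum()

# Equation 4 by Monte Carlo: average Equation 3 over draws from p(theta).
X = np.eye(3)                          # three options, one distinct feature each
thetas = rng.normal(0.0, np.sqrt(2.0), size=(20_000, 3))   # theta ~ N(0, 2I)
marginal = np.mean([choice_probs(t, X) for t in thetas], axis=0)
print(marginal)    # close to uniform: a zero-mean prior favors no option
```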
The MML model and the Luce-Shepard rule on which it is built are theoretically appealing for
several reasons. First, the Luce-Shepard rule reflects the choice probabilities that result when agents
seek to maximize their utility in the presence of noise on utilities that follows a Weibull distribution
[13], and is thus compatible with the standard assumptions of statistical decision theory. Second,
the MML can approximate the distribution of choices for essentially any heterogeneous population
of utility-maximizing agents given appropriate choice of p(θ_a) [1]. Finally, this approach has been
widely used and generally successful in applications in marketing and econometrics (e.g., [9, 14]).
While a wide range of random utility models can be represented with an appropriate choice of prior over θ_a, one common variation supposes that θ_a follows a Gaussian distribution around a population mean which in turn has a Gaussian prior. Given that the individuals in the experiments we examine are only dealing with two actors, we will assume a single-parameter prior in which different agents' preferences are independent and preferences for individual features are uncorrelated, with a Gaussian distribution with mean zero and variance σ²: θ_a ∼ N(0, σ²I).
The model outlined in this section provides a way to optimally answer the question of how to infer
the preferences of an agent from their choices. In the remainder of the paper, we explore how well
this simple rational model accounts for the inferences that children make about preferences, applying
the model to the key developmental phenomena introduced in the previous section.
4 Using statistical information to infer preferences
The experiment conducted by Kushnir and colleagues [8], discussed in Section 2.1, provides evidence that children are sensitive to statistical information when inferring the preferences of agents.
In this section, we examine whether this inference is consistent with the predictions of the rational
model outlined above. We first consider how to apply the MML model in this context, then discuss
the model predictions and alternative explanations.
4.1 Applying the MML model
The child's goal is to learn what Squirrel's preferences are, so as to offer an appropriate toy. Let θ_a be Squirrel's preferences, c = (c_1 ... c_N) the sequence of N choices Squirrel makes, and X_n = [x_{n1} ... x_{nJ_n}]^T the observed features of Squirrel's J_n options at choice event n. The set {X_1, ..., X_N} will be denoted with X. Estimating θ_a entails computing p(θ_a | c, X) ∝ P(c | θ_a, X) p(θ_a), analogous to the inference of u in Equation 2. The probability of Squirrel's choices is P(c | X, θ_a) = ∏_{n=1}^{N} P(c_n | X_n, θ_a), where P(c_n = j | X_n, θ_a) is given by Equation 3.
We chose to represent the objects as having minimal and orthogonal feature vectors, so that red discs (Squirrel's target toy) had features [1 0 0]^T, blue flowers (the alternative option in his choices) had features [0 1 0]^T, and yellow cylinders (the distractor) had features [0 0 1]^T, respectively. The
¹We will refer to the utilities of features as "preferences" to distinguish them from the utilities of options.
[Figure 1 panels: (a) "Predictions vs choice probabilities for σ² = 2", plotting P(choice) for the target, alternate, and distractor objects (model vs. children) in the 100%, 50%, and 18% target conditions; (b) "Variance (σ²) parameter versus sum squared error", plotting SSE against σ² from 1 to 10.]
Figure 1: Model predictions for data in [8]. (a) Predicted probability that objects will be selected,
plotted against observed proportions. (b) Sensitivity of model to setting of variance parameter.
number of options in each choice Squirrel made from the box was the total number of objects
in the box (38), with each type of object being represented with the appropriate frequency. The
N = 5 choices made by Squirrel thus provide the data c from which its preferences can be inferred.
We constructed an approximation to the posterior distribution over θ_a given c using importance sampling (see, e.g., [15] for details), drawing a sample of values of θ_a from the prior distribution p(θ_a) and giving each value weight proportional to the corresponding likelihood P(c | X, θ_a).
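A minimal sketch of this importance sampler, assuming the 18% condition corresponds to 7 target toys and 31 alternatives in the box of 38 and ignoring the slight depletion of the box over the five draws; both simplifications are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma2 = 2.0
thetas = rng.normal(0.0, np.sqrt(sigma2), size=(100_000, 2))   # [target, alt]

# Each of Squirrel's choices is over 38 objects: 7 share the target's feature
# and 31 share the alternative's, so the softmax normalizer counts them.
logZ = np.log(7 * np.exp(thetas[:, 0]) + 31 * np.exp(thetas[:, 1]))
log_lik = 5 * (thetas[:, 0] - logZ)          # five independent target choices
w = np.exp(log_lik - log_lik.max())          # importance weights
print((w[:, None] * thetas).sum(axis=0) / w.sum())   # posterior mean of theta
```

The posterior mean places the target's utility well above the alternative's, and the gap widens as the target becomes rarer in the box.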
The child must now select an object to give Squirrel, with one target, one alternative, and one
distractor as options. If we suppose each child is matching Squirrel's choice distribution, we can use the Luce-Shepard choice rule (Equation 3) to predict the rates at which children should choose the different objects for a particular value of θ_a, P(c_child = j | X, θ_a). The probability of a particular choice is then obtained by averaging over θ_a, with

P(c_child = j | X, c) = ∫ P(c_child = j | X, θ_a) p(θ_a | X, c) dθ_a    (5)
which we compute using the approximate posterior distribution yielded by importance sampling.
All simulations presented here use 10⁶ samples, and were performed for a range of values of σ², the parameter that determines the variance of the prior on θ_a.
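Continuing the sketch above, Equation 5 can be approximated by averaging the Luce-Shepard probabilities over the same importance-weighted posterior samples; the distractor adds a third preference dimension that Squirrel's choices never constrained, so it keeps its prior distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma2 = 2.0
thetas = rng.normal(0.0, np.sqrt(sigma2), size=(100_000, 3))  # [tgt, alt, dis]
logZ = np.log(7 * np.exp(thetas[:, 0]) + 31 * np.exp(thetas[:, 1]))
log_lik = 5 * (thetas[:, 0] - logZ)          # as in the 18% condition above
w = np.exp(log_lik - log_lik.max())

e = np.exp(thetas - thetas.max(axis=1, keepdims=True))
p_child = e / e.sum(axis=1, keepdims=True)           # Equation 3 per sample
print((w[:, None] * p_child).sum(axis=0) / w.sum())  # Equation 5: offer probs
```

The resulting probabilities order the objects as target > distractor > alternative, matching the qualitative pattern in Figure 1 (a).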
4.2 Results
Figure 1 (a) compares the predictions of the model with σ² = 2 to the participants' choice probabilities. The sum squared error (SSE) of the predictions was .0758 when compared with the observed probabilities of selecting the target object, with the correlation of the model predictions and observed data being r = 0.93. Figure 1 (b) shows that the goodness of fit is generally insensitive to the variance of the prior σ², provided σ² > 1. This is essentially the only free parameter of the
model, as it sets the scale for the features X, indicating that there is a close correspondence between
the predictions of the rational model and the inferences of the children under a variety of reasonable
assumptions about the distribution of preferences.
The only conspicuous difference between the model's predictions and the children's choices was the
tendency of children to choose the target object more frequently than alternatives even when it was
the only object in the box. This can be explained by observing that under the cover story used in the
experiment, Squirrel was freely choosing to select objects from the box, implicitly indicating that
it was choosing these objects over other unobserved options. As a simple test of this explanation,
we generated new predictions under the assumption that each choice included one other unobserved
option, with features orthogonal to the choices in the box. This improved the fit of the model,
resulting in an SSE of .05 and a correlation of r = .95.
4.3 Alternative explanations
Kushnir and colleagues [8] suggest that the children in the experiment may be learning preferences
by using statistical information to identify situations where the agent's behavior is not consistent
with random sampling. We do not dispute that this may be correct; our analysis does not entail a
commitment to a procedure by which children make inferences about preferences, but to the idea
that whatever the process is, it should provide a good solution to the problem with which children
are faced given the constraints under which they operate. We will add two observations. The first
is that we need to consider how such an explanation might be generalized to explain behavior in
other preference-learning situations, and if not, what additional processes might be at work. The
second is that it is not difficult to test salient variations on the sampling-versus-preference view
that are inconsistent with our own, as they predict that children make a dichotomous judgment (distinguishing random from biased sampling) rather than one that reveals the extent of a preference
in addition to its presence and valence. The former predicts that over a wide range of sets of evidence
that indicate an agent strongly prefers objects of type X to those of type Y, one can generate evidence
consistent with a weaker preference for type W over Y (over more data points) that will lead children
to offer the agent objects of type W over X.
5 Generalizing preferences to novel objects
The study of Fawcett and Markson [7] introduced in Section 2.2 provides a way to go beyond simple
estimation of preferences from choices, exploring how children solve the "collaborative filtering"
problem of generalizing preferences to novel objects. We will outline how this can be captured
using the MML model, present simulation results, and then consider alternative explanations.
5.1 Applying the MML model
Forming an appropriate generalization in this task requires two kinds of inferences. The first inference the child must make (learning the actors' preferences by computing p(θ_a | X, c) for a ∈ {1, 2}) is the same as that necessary for the first set of experiments discussed above. The second inference is estimating the two hidden objects' features via those preferences. In order to solve this problem,
we need to modify the model slightly to allow us to predict actions other than choices. Specifically, we need to define how preferences are related to affective responses, since the actors simply
indicated their affective response to the novel object.
We will refer to the actor whose preferences matched those of the child as Actor 1, and the other
actor as Actor 2. Let the features of Actor a's preferred object in round n be x_{na}, the features of the same-category novel object be x_{sa}, and the features of the different-category novel object be x_{da}. When the category is irrelevant, we will use x_{*a} ∈ {x_{sa}, x_{da}} to indicate the features of the novel object. The goal of the child is to infer x_{*a} (and thus whether they themselves will like the novel object) from the observed affective response of agent a, the features of the objects from the previous rounds X, and the choices of the agent on the previous rounds c. This can be done by evaluating

P(x_{*a} | X, c, r_a) = ∫ P(x_{*a} | θ_a, r_a) p(θ_a | X, c) dθ_a    (6)
where P(x_{*a} | θ_a, r_a) is the posterior distribution over the features of the novel object given the preferences and affective response of the agent. Computing this distribution requires defining a likelihood P(r_a | x_{*a}, θ_a) and a prior on features P(x_{*a}). We deal with these problems in turn.
The likelihood P(r_a | x_{*a}, θ_a) reflects the probability of the agent producing a particular affective response given the properties of the object and the agent's preferences. In the experiment, the
affective responses produced by the actors were of two types. In the like condition, the actor declared
this to be her favorite object. If one takes the actor's statement at face value and supposes that the actor has encountered an arbitrarily large number of such objects, then

P(x_{*a} | θ_a, r_a = "like") = 1 if x_{*a} = argmax_x θ_a^T x, else 0    (7)

In the dislike condition, the action (saying "there's a toy in here, but I don't like it") communicates negative utility, or at least utility below some threshold which we will take to be zero, hence

P(x_{*a} | θ_a, r_a = "dislike") = 1 if θ_a^T x_{*a} < 0, else 0    (8)
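The sketch below applies Equations 7 and 8 for one fixed sample of the actor's preferences; the full computation of Equation 6 averages this over p(θ_a | X, c). The preference vector and the four-feature space are illustrative assumptions, while the 0.5 feature prior follows the prior described shortly.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = np.array([1.2, -0.8, 0.4, -0.1])   # one posterior sample (illustrative)

n, d = 100_000, 4
x = rng.random((n, d)) < 0.5               # candidate objects, 0.5 feature prior

best = theta > 0                           # Equation 7: the "favorite" object
like = (x == best).all(axis=1)             #   carries exactly the liked features
u = x.astype(float) @ theta
dislike = u < 0                            # Equation 8: negative net utility

print(x[like].mean(axis=0))     # -> [1, 0, 1, 0]: features track preferences
print(x[dislike].mean(axis=0))  # disliked features over-, liked under-represented
```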
[Figure 2 plot: "Model predictions versus child choices, 4 features, σ² = 2", showing P(choice = 1) for the model and the children across conditions C4LS, C4LD, C4DS, C4DD, C3LS, C3LD, C3DS, C3DD.]
Figure 2: Model predictions for data in Experiment 1 of [7], excluding cases where children
had fewer than 4 chances to play with training
objects. The prefixes C4 and C3 denote cases
where children chose to play with the objects
presented by the matched actor (i.e., Actor 1)
in training 4 and 3 times out of 4, respectively.
The third character denotes the like (L) or dislike
(D) condition, and the fourth character denotes
whether the object is in the same category (S)
or a different category (D) from those seen in
training. P (choice = 1) is the probability of
selecting Actor 1's novel object.
In defining the prior distributions from which the features of both the observed objects and the
novel objects are sampled, it is important to represent differences between categories. Our feature
vectors were concatenations of category-1 features, category-2 features, and multiple-category features, where each feature was present with probability 0.5 if its category could possess the feature,
otherwise zero. We arbitrarily chose four features per category, for a total of twelve.
Having computed a posterior distribution over x_{*a} using this prior and likelihood, the child must
combine this information with his or her own preferences to select an object. Unfortunately, we do
not have direct access to the utilities of the children in this experiment, so we must estimate them
from the children's choice data. We did this using the same procedure as for the adults' utilities. That
done, we apply the choice rule a final time and obtain choice probabilities for the objects selected
by the children on the final trial.
5.2 Results
Figure 2 shows the rates at which children chose Actor 1's object and the model's predictions for the same. With the variance parameter set to σ² = 2 and twelve features, the correlation between the
predictions and the data was r = 0.88. When examining only the 4-year-old participants (N = 68),
the correlation rose to r = 0.94. The number of features has little influence on predictive accuracy:
with 30 features, the correlations were r = 0.85 and r = 0.93 for all participants and 4-year-old
participants, respectively. The model predicts less-extreme probabilities than were observed in the
choices of the children, in particular in the cases where children chose to play with one of Actor 2's objects. This may be attributed to Actor 1's objects having features one would expect people to like
a priori, making the zero-mean preference prior inappropriate.
5.3 Alternative explanations
The model described above provides good predictions of children's inferences in this experiment,
but also attributes relatively complex beliefs to the child. A natural question is whether a simpler
mechanism might explain behavior in this case. Fawcett and Markson [7] discussed several alternative explanations of their findings, so this section will briefly recapitulate their discussion with a
specific view towards alternative models.
The simplest model might suggest that children were simply learning associations between specific
behaviors by the matching actor and the presence of a desirable object. The most basic model of this
kind, assuming that the matching actor is generally associated with desirable objects, is falsified by
the children showing no bias towards the matching actor's objects in the dislike condition. A more
elaborate version, in which children associate positive affect by the matching actor with desirable
objects, is inconsistent with children's stronger preferences when the novel objects were similar to
the training objects. This evidence also runs against explanations that the children are selecting
objects based on liking the matched actor or believing that the matched actor is a more reliable
judge of quality. Any of these alternatives might be made to fit the data via ad hoc assumptions
about subjective feature correlations or category similarity, but we see no reason to adopt a more
complex model without better explanatory power.
6 Conclusion
We have outlined a simple rational model for inferences about preferences from choices, drawing on
ideas from economics and computer science, and shown that this model produces predictions that
closely parallel the behavior of children reasoning about the preferences of others. These results shed
light on how children may think about choice, desire, and other minds, and highlight new questions
and possible extensions. In future work, we intend to explore whether the developmental shift
discovered by Repacholi and Gopnik [6] can be explained in terms of rational model selection under
an MML view. We believe that a hierarchical MML and the ?Bayesian Ockham?s razor? provide a
simple account, resembling a recent Bayesian treatment of false-belief learning [16]. Our approach
also provides a framework for predicting how children might make inferences to preferences at the
population level and exploring the information provided by correlated features.
Acknowledgments. This work was supported by AFOSR grant number FA9550-07-1-0351, NSERC and
SSHRC Canada, and the McDonnell Causal Learning Collaborative.
References
[1] D. McFadden and K. E. Train. Mixed MNL models of discrete response. Journal of Applied Econometrics, 15:447-470, 2000.
[2] J. Hayden Boyd and R. E. Mellman. Effect of fuel economy standards on the U.S. automotive market: An hedonic demand analysis. Transportation Research A, 14:367-378, 1980.
[3] D. Goldberg, D. Nichols, B. M. Oki, and D. Terry. Using collaborative filtering to weave an information tapestry. Communications of the ACM, 35(12):61-70, 1992.
[4] J. A. Konstan, B. N. Miller, D. Maltz, J. L. Herlocker, L. R. Gordon, and J. Riedl. Grouplens: applying
collaborative filtering to usenet news. Communications of the ACM, 40:77-87, 1997.
[5] J. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithms for collaborative filtering.
In Proceedings of the Fourteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI 98),
San Francisco, CA, 1998. Morgan Kaufmann.
[6] B. M. Repacholi and A. Gopnik. Early reasoning about desires: Evidence from 14- and 18-month-olds.
Developmental Psychology, 33(1):12-21, 1997.
[7] C. A. Fawcett and L. Markson. Children reason about shared preferences. Revised manuscript submitted for publication, Developmental Psychology, under review.
[8] T. Kushnir, F. Xu, and H. Wellman. Preschoolers use sampling information to infer the preferences of
others. In 28th Annual Conference of the Cognitive Science Society, 2008.
[9] K. E. Train, D. McFadden, and M. Ben-Akiva. The demand for local telephone service: A fully discrete
model of residential calling patterns and service choices. The RAND Journal of Economics, 18(1):109-123, 1987.
[10] J. R. Anderson. The adaptive character of thought. Erlbaum, Hillsdale, NJ, 1990.
[11] R. D. Luce. Individual choice behavior. John Wiley, New York, 1959.
[12] R. N. Shepard. Stimulus and response generalization: A stochastic model relating generalization to
distance in psychological space. Psychometrika, 22:325-345, 1957.
[13] D. McFadden. Conditional logit analysis of qualitative choice behavior. In P. Zarembka, editor, Frontiers
in Econometrics. Academic Press, New York, 1973.
[14] D. Revelt and K. E. Train. Mixed logit with repeated choices: Households' choices of appliance efficiency level. The Review of Economics and Statistics, 80(4):647-657, 1998.
[15] R. M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, University of Toronto, 1993.
[16] N. D. Goodman, C. L. Baker, E. Baraff Bonawitz, V. K. Mansinghka, A. Gopnik, H. Wellman, L. Schulz,
and J. B. Tenenbaum. Intuitive theories of mind: A rational approach to false belief. In 28th Annual
Conference of the Cognitive Science Society, 2006.
Continuous Speech Recognition by
Linked Predictive Neural Networks
Joe Tebelskis, Alex Waibel, Bojan Petek, and Otto Schmidbauer
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
We present a large vocabulary, continuous speech recognition system based
on Linked Predictive Neural Networks (LPNN's). The system uses neural networks as predictors of speech frames, yielding distortion measures
which are used by the One Stage DTW algorithm to perform continuous
speech recognition. The system, already deployed in a Speech to Speech
Translation system, currently achieves 95%, 58%, and 39% word accuracy
on tasks with perplexity 5, 111, and 402 respectively, outperforming several simple HMMs that we tested. We also found that the accuracy and
speed of the LPNN can be slightly improved by the judicious use of hidden
control inputs. We conclude by discussing the strengths and weaknesses
of the predictive approach.
1 INTRODUCTION
Neural networks are proving to be useful for difficult tasks such as speech recognition, because they can easily be trained to compute smooth, nonlinear, nonparametric functions from any input space to output space. In speech recognition, the
function most often computed by networks is classification, in which spectral frames
are mapped into a finite set of classes, such as phonemes. In theory, classification
networks approximate the optimal Bayesian discriminant function [1], and in practice they have yielded very high accuracy [2, 3, 4]. However, integrating a phoneme
classifier into a speech recognition system is nontrivial, since classification decisions
tend to be binary, and binary phoneme-level errors tend to confound word-level
hypotheses. To circumvent this problem, neural network training must be carefully
integrated into word level training [1, 5]. An alternative function which can be computed by networks is prediction, where spectral frames are mapped into predicted
spectral frames. This provides a simple way to get non-binary distortion measures,
with straightforward integration into a speech recognition system. Predictive networks have been used successfully for small vocabulary [6, 7] and large vocabulary
[8, 9] speech recognition systems. In this paper we describe our prediction-based
LPNN system [9], which performs large vocabulary continuous speech recognition,
and which has already been deployed within a Speech to Speech Translation system
[10]. We present our experimental results, and discuss the strengths and weaknesses
of the predictive approach.
2 LINKED PREDICTIVE NEURAL NETWORKS
The LPNN system is based on canonical phoneme models, which can be logically
concatenated in any order (using a "linkage pattern") to create templates for different words; this makes the LPNN suitable for large vocabulary recognition.
Each canonical phoneme is modeled by a short sequence of neural networks. The
number of nets in the sequence, N >= 1, corresponds to the granularity of the
phoneme model. These phone modeling networks are nonlinear, multilayered, feedforward, and "predictive" in the sense that, given a short section of speech, the
networks are required to extrapolate the raw speech signal, rather than to classify
it. Thus, each predictive network produces a time-varying model of the speech
signal which will be accurate in regions corresponding to the phoneme for which
that network has been trained, but inaccurate in other regions (which are better
modeled by other networks). Phonemes are thus "recognized" indirectly, by virtue
of the relative accuracies of the different predictive networks in various sections of
speech. Note, however, that phonemes are not classified at the frame level. Instead,
continuous scores (prediction errors) are accumulated for various word candidates,
and a decision is made only at the word level, where it is finally appropriate.
2.1 TRAINING AND TESTING ALGORITHMS
The purpose of the training procedure is both (a) to train the networks to become
better predictors, and (b) to cause the networks to specialize on different phonemes.
Given a known training utterance, the training procedure consists of three steps:
1. Forward Pass: All the networks make their predictions across the speech sample, and we compute the Euclidean distance matrix of prediction errors between
predicted and actual speech frames. (See Figure 1.)
2. Alignment Step: We compute the optimal time-alignment path between the
input speech and corresponding predictor nets, using Dynamic Time Warping.
3. Backward Pass: Prediction error is backpropagated into the networks according
to the segmentation given by the alignment path. (See Figure 2.)
Hence backpropagation causes the nets to become better predictors, and the alignment path induces specialization of the networks for different phonemes.
Testing is performed using the One Stage algorithm [11], which is a classical extension of the Dynamic Time Warping algorithm for continuous speech.
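A toy numpy sketch of one training iteration under the three steps above, substituting a single linear predictor per state for the paper's multilayered networks and using a simplified stay-or-advance alignment in place of full DTW; sizes and the learning rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

D, K = 16, 6                                 # spectral dims; states in the word
frames = rng.normal(size=(40, D))            # stand-in training utterance
W = [rng.normal(scale=0.1, size=(D, D)) for _ in range(K)]  # one net per state

def forward(frames, W):
    """Step 1: prediction-error distance matrix over (frame, state) pairs."""
    T = len(frames) - 1
    dist = np.empty((T, len(W)))
    for t in range(T):
        for k, Wk in enumerate(W):
            dist[t, k] = np.sum((frames[t] @ Wk - frames[t + 1]) ** 2)
    return dist

def dtw_path(dist):
    """Step 2: monotonic alignment (stay in a state or advance by one)."""
    T, K = dist.shape
    cost = np.full((T, K), np.inf)
    back = np.zeros((T, K), dtype=int)
    cost[0, 0] = dist[0, 0]
    for t in range(1, T):
        for k in range(K):
            stay = cost[t - 1, k]
            adv = cost[t - 1, k - 1] if k > 0 else np.inf
            back[t, k] = k if stay <= adv else k - 1
            cost[t, k] = dist[t, k] + min(stay, adv)
    path, k = [], K - 1
    for t in range(T - 1, -1, -1):
        path.append(k)
        k = back[t, k]
    return path[::-1]

lr = 1e-3
for it in range(10):
    dist = forward(frames, W)
    path = dtw_path(dist)                    # alignment step
    for t, k in enumerate(path):             # Step 3: backprop along the path
        err = frames[t] @ W[k] - frames[t + 1]
        W[k] -= lr * np.outer(frames[t], err)  # gradient of the squared error
print(forward(frames, W)[np.arange(len(path)), path].sum())  # total distortion
```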
Figure 1: The forward pass during training. Canonical phonemes are modeled by
sequences of N predictive networks, shown as triangles (here N=3). Words are
represented by "linkage patterns" over these canonical phoneme models (shown in
the area above the triangles), according to the phonetic spelling of the words. Here
we are training on the word "ABA". In the forward pass, prediction errors (shown
as black circles) are computed for all predictors, for each frame of the input speech.
As these prediction errors are routed through the linkage pattern, they fill a distance
matrix (upper right).
Figure 2: The backward pass during training. After the DTW alignment path has
been computed, error is backpropagated into the various predictors responsible for
each point along the alignment path. The back propagated error signal at each such
point is the vector difference between the predicted and actual frame. This teaches
the networks to become better predictors, and also causes the networks to specialize
on different phonemes.
3 RECOGNITION EXPERIMENTS
We have evaluated the LPNN system on a database of continuous speech recorded
at CMU. The database consists of 204 English sentences using a vocabulary of 402
words, comprising 12 dialogs in the domain of conference registration. Training
and testing versions of this database were recorded in a quiet office by multiple
speakers for speaker-dependent experiments. Recordings were digitized at a sampling rate of 16 kHz. A Hamming window and an FFT were computed, to produce
16 melscale spectral coefficients every 10 msec. In our experiments we used 40
context-independent phoneme models (including one for silence), each of which had
a 6-state phoneme topology similar to the one used in the SPICOS system [12].
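A rough sketch of such a front end, assuming a 32 ms Hamming window and a triangular mel filterbank; the paper does not spell out these details, so they are our assumptions.

```python
import numpy as np

def mel_frames(signal, fs=16000, n_fft=512, hop=160, n_mel=16):
    """Hamming-windowed FFT every 10 ms, reduced to n_mel log mel energies."""
    win = np.hamming(n_fft)
    mel = lambda f: 2595 * np.log10(1 + f / 700.0)
    imel = lambda m: 700 * (10 ** (m / 2595.0) - 1)
    edges = imel(np.linspace(0, mel(fs / 2), n_mel + 2))
    bins = np.floor((n_fft + 1) * edges / fs).astype(int)
    fb = np.zeros((n_mel, n_fft // 2 + 1))
    for i in range(n_mel):                 # triangular filters on the mel scale
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = np.linspace(0, 1, c - l, endpoint=False)
        fb[i, c:r] = np.linspace(1, 0, r - c, endpoint=False)
    out = []
    for start in range(0, len(signal) - n_fft, hop):
        spec = np.abs(np.fft.rfft(signal[start:start + n_fft] * win)) ** 2
        out.append(np.log(fb @ spec + 1e-10))
    return np.array(out)

speech = np.random.default_rng(0).normal(size=16000)   # 1 s noise stand-in
print(mel_frames(speech).shape)                        # about (97, 16)
```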
Figure 3: Actual and predicted spectrograms.
Figure 3 shows the result of testing the LPNN system on a typical sentence. The
top portion is the actual spectrogram for this utterance; the bottom portion shows
the frame-by-frame predictions made by the networks specified by each point along
the optimal alignment path. The similarity of these two spectrograms indicates that
the hypothesis forms a good acoustic model of the unknown utterance (in fact the
hypothesis was correct in this case). In our speaker-dependent experiments using
two male speakers, our system averaged 95%, 58%, and 39% word accuracy on
tasks with perplexity 5, 111, and 402 respectively.
In order to confirm that the predictive networks were making a positive contribution to the overall system, we performed a set of comparisons between the LPNN
and several pure HMM systems. When we replaced each predictive network by a
univariate Gaussian whose mean and variance were determined analytically from
the labeled training data, the resulting HMM achieved 44% word accuracy, compared to 60% achieved by the LPNN under the same conditions (single speaker,
perplexity 111). When we also provided the HMM with delta coefficients (which
were not directly available to the LPNN), it achieved 55%. Thus the LPNN was
outperforming each of these simple HMMs.
4 HIDDEN CONTROL EXPERIMENTS
In another series of experiments, we varied the LPNN architecture by introducing
hidden control inputs, as proposed by Levin [7]. The idea, illustrated in Figure 4,
is that a sequence of independent networks is replaced by a single network which is
modulated by an equivalent number of "hidden control" input bits that distinguish
the state.
Sequence of
Predictive Networks
Hidden Control
Network
Figure 4: A sequence of networks corresponds to a single Hidden Control network.
A theoretical advantage of hidden control architectures is that they reduce the
number of free parameters in the system. As the number of networks is reduced, each
one is exposed to more training data, and - up to a certain point - generalization
may improve. The system can also run faster, since partial results of redundant
forward pass computations can be saved. (Notice, however, that the total number
of forward passes is unchanged.) Finally, the savings in memory can be significant.
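A minimal sketch of the idea in Figure 4: one shared predictor receives the spectral frame plus control bits that select the state. The layer sizes and the one-hot control code are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

D, S, H = 16, 2, 30                    # frame dims, control states, hidden units
W1 = rng.normal(scale=0.1, size=(D + S, H))   # shared weights for all states
W2 = rng.normal(scale=0.1, size=(H, D))

def predict(frame, state):
    ctrl = np.eye(S)[state]            # hidden control input bits
    h = np.tanh(np.concatenate([frame, ctrl]) @ W1)
    return h @ W2                      # predicted next frame

frame = rng.normal(size=D)
print(predict(frame, 0) - predict(frame, 1))  # the control bits modulate output
```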
In our experiments, we found that by replacing 2-state phoneme models by equivalent Hidden Control networks, recognition accuracy improved slightly and the system ran much faster. On the other hand, when we replaced all of the phonemic
networks in the entire system by a single Hidden Control network (whose hidden
control inputs represented the phoneme as well as its state), recognition accuracy
degraded significantly. Hence, hidden control may be useful, but only if it is used
judiciously.
5 CURRENT LIMITATIONS OF PREDICTIVE NETS
While the LPNN system is good at modeling the acoustics of speech, it presently
tends to suffer from poor discrimination. In other words, for a given segment
of speech, all of the phoneme models tend to make similarly good predictions,
rendering all phoneme models fairly confusable. For example, Figure 5 shows an
actual spectrogram and the frame-by-frame predictions made by the /eh/ model
and the /z/ model. Disappointingly, both models are fairly accurate predictors for
the entire utterance.
This problem arises because each predictor receives training in only a small region
of input acoustic space (i.e., those frames corresponding to that phoneme). Consequently, when a predictor is shown any other input frames, it will compute an
Figure 5: Actual spectrogram, and corresponding predictions by the /eh/ and /z/
phoneme models.
undefined output, which may overlap with the outputs of other predictors. In other
words, the predictors are currently only trained on positive instances, because it is
not obvious what predictive output target is meaningful for negative instances; and
this leads to problematic "undefined regions" for the predictors. Clearly some type
of discriminatory training technique should be introduced, to yield better performance in prediction-based recognizers.
6
CONCLUSION
We have studied the performance of Linked Predictive Neural Networks for large vocabulary, continuous speech recognition. Using a 6-state phoneme topology, without
duration modeling or other optimizations, the LPNN achieved an average of 95%,
58%, and 39% accuracy on tasks with perplexity 5, 111, and 402, respectively. This
was better than the performance of several simple HMMs that we tested. Further
experiments revealed that the accuracy and speed of the LPNN can be slightly
improved by the judicious use of hidden control inputs.
The main advantages of predictive networks are that they produce non-binary distortion measures in a simple and elegant way, and that by virtue of their nonlinearity
they can model the dynamic properties of speech (e.g., curvature) better than linear predictive models [13]. Their main current weakness is that they have poor
discrimination, since their strictly positive training causes them all to make confusably accurate predictions in any context. Future research should concentrate
on improving the discriminatory power of the LPNN, by such techniques as corrective training, explicit context dependent phoneme modeling, and function word
modeling.
Acknowledgements
The authors gratefully acknowledge the support of DARPA, the National Science
Foundation, ATR Interpreting Telephony Research Laboratories, and NEC Corporation. B. Petek also acknowledges support from the University of Ljubljana and
the Research Council of Slovenia. O. Schmidbauer acknowledges support from his
employer, Siemens AG, Germany.
References
[1] H. Bourlard and C. J. Wellekens. Links Between Markov Models and Multilayer
Perceptrons. Pattern Analysis and Machine Intelligence, 12:12, December 1990.
[2] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. Lang. Phoneme Recognition
Using Time-Delay Neural Networks. IEEE Transactions on Acoustics, Speech, and
Signal Processing, March 1989.
[3] M. Miyatake, H. Sawai, and K. Shikano. Integrated Training for Spotting Japanese
Phonemes Using Large Phonemic Time-Delay Neural Networks. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, April 1990.
[4] E. McDermott and S. Katagiri. Shift-Invariant, Multi-Category Phoneme Recognition
using Kohonen's LVQ2. In Proc. IEEE International Conference on Acoustics, Speech,
and Signal Processing, May 1989.
[5] P. Haffner, M. Franzini, and A. Waibel. Integrating Time Alignment and Connectionist Networks for High Performance Continuous Speech Recognition. In Proc. IEEE
International Conference on Acoustics, Speech, and Signal Processing, May 1991.
[6] K. Iso and T. Watanabe. Speaker-Independent Word Recognition Using a Neural
Prediction Model. In Proc. IEEE International Conference on Acoustics, Speech, and
Signal Processing, April 1990.
[7] E. Levin. Speech Recognition Using Hidden Control Neural Network Architecture.
In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing,
April 1990.
[8] J. Tebelskis and A. Waibel. Large Vocabulary Recognition Using Linked Predictive
Neural Networks. In Proc. IEEE International Conference on Acoustics, Speech, and
Signal Processing, April 1990.
[9] J. Tebelskis, A. Waibel, B. Petek, and O. Schmidbauer. Continuous Speech Recognition Using Linked Predictive Neural Networks. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, May 1991.
[10] A. Waibel, A. Jain, A. McNair, H. Saito, A. Hauptmann, and J. Tebelskis. A Speechto-Speech Translation System Using Connectionist and Symbolic Processing Strategies. In Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, May 1991.
[11] H. Ney. The Use of a One-Stage Dynamic Programming Algorithm for Connected
Word Recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing,
32:2, April 1984.
[12] H. Ney, A. Noll. Phoneme Modeling Using Continuous Mixture Densities. In Proc.
IEEE International Conference on Acoustics, Speech, and Signal Processing, April
1988.
[13] N. Tishby. A Dynamic Systems Approach to Speech Processing. In Proc. IEEE
International Conference on Acoustics, Speech, and Signal Processing, April 1990.
Mortal Multi-Armed Bandits
Ravi Kumar
Yahoo! Research
Sunnyvale, CA 94089
[email protected]
Deepayan Chakrabarti
Yahoo! Research
Sunnyvale, CA 94089
[email protected]
Filip Radlinski*
Microsoft Research
Cambridge, UK
[email protected]
Eli Upfal†
Brown University
Providence, RI 02912
[email protected]
Abstract
We formulate and study a new variant of the k-armed bandit problem, motivated
by e-commerce applications. In our model, arms have (stochastic) lifetime after
which they expire. In this setting an algorithm needs to continuously explore new
arms, in contrast to the standard k-armed bandit model in which arms are available
indefinitely and exploration is reduced once an optimal arm is identified with near-certainty. The main motivation for our setting is online advertising, where ads
have limited lifetime due to, for example, the nature of their content and their
campaign budgets. An algorithm needs to choose among a large collection of ads,
more than can be fully explored within the typical ad lifetime.
We present an optimal algorithm for the state-aware (deterministic reward function) case, and build on this technique to obtain an algorithm for the state-oblivious
(stochastic reward function) case. Empirical studies on various reward distributions, including one derived from a real-world ad serving application, show that
the proposed algorithms significantly outperform the standard multi-armed bandit
approaches applied to these settings.
1
Introduction
Online advertisements (ads) are a rapidly growing source of income for many Internet content
providers. The content providers and the ad brokers who match ads to content are paid only when
ads are clicked; this is commonly referred to as the pay-per-click model. In this setting, the goal of
the ad brokers is to select ads to display from a large corpus, so as to generate the most ad clicks and
revenue. The selection problem involves a natural exploration vs. exploitation tradeoff: balancing
exploration for ads with better click rates against exploitation of the best ads found so far.
Following [17, 16], we model the ad selection task as a multi-armed bandit problem [5]. A multiarmed bandit models a casino with k slot machines (one-armed bandits), where each machine (arm)
has a different and unknown expected payoff. The goal is to sequentially select the optimal sequence
of slot machines to play (i.e., slot machine arms to pull) to maximize the expected total reward.
Considering each ad as a slot machine, that may or may not provide a reward when presented to
users, allows any multi-armed bandit strategy to be used for the ad selection problem.
*Most of this work was done while the author was at Yahoo! Research.
†Part of this work was done while the author was visiting the Department of Information Engineering at the
University of Padova, Italy, supported in part by the FP6 EC/IST Project 15964 AEOLUS. Work supported in
part by NSF award DMI-0600384 and ONR Award N000140610607.
A standard assumption in the multi-armed bandit setting, however, is that each arm exists perpetually. Although the payoff function of an arm is allowed to evolve over time, the evolution is assumed
to be slow. Ads, on the other hand, are regularly created while others are removed from circulation.
This occurs as advertisers' budgets run out, when advertising campaigns change, when holiday
shopping seasons end, and due to other factors beyond the control of the ad selection system. The
advertising problem is even more challenging as the set of available ads is often huge (in the tens of
millions), while standard multi-armed bandit strategies converge only slowly and require time linear
in the number of available options.
In this paper we initiate the study of a rapidly changing variant of the multi-armed bandit problem.
We call it the mortal multi-armed bandit problem since ads (or equivalently, available bandit arms)
are assumed to be born and die regularly. In particular, we will show that while the standard multiarmed bandit setting allows for algorithms that only deviate from the optimal total payoff by O(ln t)
[21], in the mortal arm setting a regret of Ω(t) is possible.
Our analysis of the mortal multi-arm bandit problem considers two settings. First, in the less realistic
but simpler state-aware (deterministic reward) case, pulling arm i always provides a reward that
equals the expected payoff of the arm. Second, in the more realistic state-oblivious (stochastic
reward) case, the reward from arm i is a binomial random variable indicating the true payoff of the
arm only in expectation. We provide an optimal algorithm for the state-aware case. This algorithm
is based on characterizing the precise payoff threshold below which repeated arm pulls become
suboptimal. This characterization also shows that there are cases when a linear regret is inevitable.
We then extend the algorithm to the state-oblivious case, and show that it is near-optimal. Following
this, we provide a general heuristic recipe for modifying standard multi-armed bandit algorithms to
be more suitable in the mortal-arm setting. We validate the efficacy of our algorithms on various
payoff distributions including one empirically derived from real ads. In all cases, we show that the
algorithms presented significantly outperform standard multi-armed bandit approaches.
2
Modeling mortality
Suppose we wish to select the ads to display on a webpage. Every time a user visits this webpage,
we may choose one ad to display. Each ad has a different potential to provide revenue, and we wish
to sequentially select the ads to maximize the total expected revenue. Formally, say that at time t,
we have ads A(t) = {ad_t^1, . . . , ad_t^k} from which we must pick one to show. Each ad_t^i has a payoff
μ_t^i ∈ [0, 1] that is drawn from some known cumulative distribution F(·)¹. Presenting ad_t^i at time
t provides a (financial) reward R(μ_t^i); the reward function R(·) will be specified below.
If the pool of available ads A(t) were static, or if the payoffs were only slowly changing with t,
this problem could be solved using any standard multi-armed bandit approach. As described earlier,
in reality the available ads are rapidly changing. We propose the following simple model for this
change: at the end of each time step t, one or more ads may die and be replaced with new ads. The
process then continues with time t + 1. Note that since change happens only through replacement
of ads, the number of ads k = |A(t)| remains fixed. Also, as long as an ad is alive, we assume that
its payoff is fixed.
Death can be modeled in two ways, and we will address both in this work. An ad i may have a
budget Li that is known a priori and revealed to the algorithm. The ad dies immediately after it has
been selected Li times; we assume that Li values are drawn from a geometric distribution, with an
expected budget of L. We refer to this case as budgeted death. Alternatively, each ad may die with
a fixed probability p after every time step, whether it was selected or not. This is equivalent to each
ad being allocated a lifetime budget Li , drawn from a geometric distribution with parameter p, that
is fixed when the arm is born but is never revealed to the algorithm; in this case new arms have an
expected lifetime of L = 1/p. We call this timed death. In both death settings, we assume in our
theoretical analysis that at any time there is always at least one previously unexplored ad available.
This reflects reality where the number of ads is practically unlimited.
Finally, we model the reward function in two ways, the first being simpler to analyze and the latter
more realistic. In the state-aware (deterministic reward) case, we assume R(μ_t^i) = μ_t^i. This
provides us with complete information about each ad immediately after it is chosen to be displayed.
¹We limit our analysis to the case where F(·) is stationary and known, as we are particularly interested in
the long-term steady-state setting.
In the state-oblivious (stochastic reward) case, we take R(μ_t^i) to be a random variable that is 1 with
probability μ_t^i and 0 otherwise.
The mortal multi-armed bandit setting requires different performance measures than the ones used
with static multi-armed bandits. In the static setting, very little exploration is needed once an optimal
arm is identified with near-certainty; therefore the quality measure is the total regret over time. In
our setting the algorithm needs to continuously explore newly available arms. We therefore study the
long term, steady-state, mean regret per time step of various solutions. We define this regret as the
expected payoff of the best currently alive arm minus the payoff actually obtained by the algorithm.
3
Related work
Our work is most related to the study of dynamic versions of the multi-arm bandit (MAB) paradigm
where either the set of arms or their expected reward may change over time. Motivated by task
scheduling, Gittins [10] proposed a policy where only the state of the active arm (the arm currently
being played) can change in a given step, and proved its optimality for the Bayesian formulation
with time discounting. This seminal result gave rise to a rich line of work, a proper review of which
is beyond the scope of this paper. In particular, Whittle [23] introduced an extension termed restless
bandits [23, 6, 15], where the states of all arms can change in each step according to a known (but
arbitrary) stochastic transition function. Restless bandits have been shown to be intractable: e.g.,
even with deterministic transitions the problem of computing an (approximately) optimal strategy is
PSPACE-hard [18]. The sleeping bandits problem, where the set of strategies is fixed but only a subset of
them available in each step, were studied in [9, 7] and recently, using a different evaluation criteria,
in [13]. Strategies with expected rewards that change gradually over time were studied in [19]. The
mixture-of-experts paradigm is related [11], but assumes that data tuples are provided to each expert,
instead of the tuples being picked by the algorithm, as in the bandit setting.
Auer et al. [3] adopted an adversarial approach: they defined the adversarial MAB problem where
the reward distributions are allowed to change arbitrarily over time, and the goal is to approach
the performance of the best time-invariant policy. This formulation has been further studied in
several other papers. Auer et al. [3, 1] also considered a more general definition of regret, where
the comparison is to the best policy that can change arms a limited number of times. Due to the
overwhelming strength of the adversary, the guarantees obtained in this line of work are relatively
weak when applied to the setting that we consider in this paper.
Another aspect of our model is that unexplored arms are always available. Related work broadly
comes in three flavors. First, new arms can become available over time; the optimality of Gittins'
index was shown to extend to this case [22]. The second case is that of infinite-armed bandits with
discrete arms, first studied by [4] and recently extended to the case of unknown payoff distributions
and an unknown time horizon [20]. Finally, the bandit arms may be indexed by numbers from the
real line, implying uncountably infinite bandit arms, but where "nearby" arms (in terms of distance
along the real line) have similar payoffs [12, 14]. However, none of these approaches allows for
arms to appear then disappear, which as we show later critically affects any regret bounds.
4
Upper bound on mortal reward
In this section we show that in the mortal multi-armed bandit setting, the regret per time step of any
algorithm can never go to zero, unlike in the standard MAB setting. Specifically, we develop an
upper bound on the mean reward per step of any such algorithm for the state-aware, budgeted death
case. We then use reductions between the different models to show that this bound holds for the
state-oblivious, timed death cases as well.
We prove the bound assuming we always have new arms available. The expected reward of an arm
is drawn from a cumulative distribution F(·) with support in [0, 1]. For X ~ F(·), let E[X] be the
expectation of X over F(·). We assume that the lifetime of an arm has an exponential distribution
with parameter p, and denote its expectation by L = 1/p. The following function captures the
tradeoff between exploration and exploitation in our setting and plays a major role in our analysis:

    Γ(μ) = ( E[X] + (1 − F(μ))(L − 1) E[X | X ≥ μ] ) / ( 1 + (1 − F(μ))(L − 1) ).    (1)
Theorem 1. Let μ̄(t) denote the maximum mean reward that any algorithm for the state-aware
mortal multi-armed bandit problem can obtain in t steps in the budgeted death case. Then
lim_{t→∞} μ̄(t) ≤ max_μ Γ(μ).
Proof sketch. We distinguish between fresh arm pulls, i.e., pulls of arms that were not pulled before,
and repeat arm pulls. Assume that the optimal algorithm pulls τ(t) distinct (fresh) arms in t steps,
and hence makes t − τ(t) repeat pulls. The expected number of repeat pulls to an arm before it
expires is (1 − p)/p. Thus, using Wald's equation [8], the expected number of different arms the
algorithm must use for the repeat pulls is (t − τ(t)) · p/(1 − p). Let ℓ(t) ≤ τ(t) be the number
of distinct arms that get pulled more than once. Using Chernoff bounds, we can show that for any
δ > 0, for sufficiently large t, with probability ≥ 1 − 1/t² the algorithm uses at least ℓ(t) =
p(t − τ(t))/(1 − p) · (1 − δ) different arms for the repeat pulls. Call this event E1(δ).
Next, we upper bound the expected reward of the best ℓ(t) arms found in τ(t) fresh probes. For any
h > 0, let μ(h) = F⁻¹(1 − (ℓ(t)/τ(t))(1 − h)). In other words, the probability of picking an arm
with expected reward greater or equal to μ(h) is (ℓ(t)/τ(t))(1 − h). Applying the Chernoff bound,
for any δ, h > 0 there exists t(δ, h) such that for all t ≥ t(δ, h) the probability that the algorithm
finds at least ℓ(t) arms with expected reward at least μ(δ, h) = μ(h)(1 − δ) is bounded by 1/t². Call
this event E2(δ, h).
Let E(δ, h) be the event E1(δ) ∧ ¬E2(δ, h). The expected reward of the algorithm in this event after
t steps is then bounded by τ(t)E[X] + (t − τ(t))E[X | X ≥ μ(δ, h)] Pr(E(δ, h)) + (t − τ(t))(1 −
Pr(E(δ, h))). As δ, h → 0, Pr(E(δ, h)) → 1, and the expected reward per step when the algorithm
pulls τ(t) fresh arms is given by

    lim sup_{t→∞} μ̄(t) ≤ (1/t) ( τ(t)E[X] + (t − τ(t))E[X | X ≥ μ] ),

where μ = F⁻¹(1 − ℓ(t)/τ(t)) and ℓ(t) = (t − τ(t))p/(1 − p). After some calculations, we get
lim sup_{t→∞} μ̄(t) ≤ max_μ Γ(μ).
In Section 5.1 we present an algorithm that achieves this performance bound in the state-aware case.
The following two simple reductions establish the lower bound for the timed death and the state-oblivious models.
Lemma 2. Assuming that new arms are always available, any algorithm for the timed death model
obtains at least the same reward per timestep in the budgeted death model.
Although we omit the proof due to space constraints, the intuition behind this lemma is that an arm
in the timed case can die no sooner than in the budgeted case (i.e., when it is always pulled). As a
result, we get:
Lemma 3. Let μ̄_det(t) and μ̄_sto(t) denote the respective maximum mean expected rewards that any
algorithm for the state-aware and state-oblivious mortal multi-armed bandit problems can obtain
after running for t steps. Then μ̄_sto(t) ≤ μ̄_det(t).
We now present two applications of the upper bound. The first simply observes that if the time to
find an optimal arm is greater than the lifetime of such an arm, then the mean reward per step of any
algorithm must be smaller than the best value. This is in contrast to the standard MAB problem with
the same reward distribution, where the mean regret per step tends to 0.
Corollary 4. Assume that the expected reward of a bandit arm is 1 with probability p < 1/2 and
1 − ε otherwise, for some ε ∈ (0, 1]. Let the lifetime of arms have geometric distribution with the
same parameter p. The mean reward per step of any algorithm for this supply of arms is at most
1 − ε + εp, while the maximum expected reward is 1, yielding an expected regret per step of Ω(1).
Corollary 5. Assume arm payoffs are drawn from a uniform distribution, F(x) = x, x ∈ [0, 1].
Consider the timed death case with parameter p ∈ (0, 1). Then the mean reward per step is bounded
by (1 − √p)/(1 − p), and the expected regret per step of any algorithm is Ω(√p).
5
Bandit algorithms for mortal arms
In this section we present and analyze a number of algorithms specifically designed for the mortal
multi-armed bandit task. We develop the optimal algorithm for the state-aware case and then modify
the algorithm to the state-oblivious case, yielding near-optimal regret. We also study a subset approach that can be used in tandem with any standard multi-armed bandit algorithm to substantially
improve performance in the mortal multi-armed bandit setting.
5.1
The state-aware case
We now show that the algorithm DetOpt is optimal for this deterministic reward setting.
Algorithm DetOpt
input: Distribution F(·), expected lifetime L
μ* ← argmax_μ Γ(μ)    [Γ is defined in (1)]
while we keep playing
    i ← random new arm
    Pull arm i; R ← R(μ_i) = μ_i
    if R > μ*    [If arm is good, stay with it]
        Pull arm i every turn until it expires
    end if
end while
Assume the same setting as in the previous section, with a constant supply of new arms. The
expected reward of an arm is drawn from cumulative distribution F(·). Let X be a random variable
with that distribution, and E[X] be its expectation over F(·). Assume that the lifetime of an arm
has an exponential distribution with parameter p, and denote its expectation by L = 1/p. Recall
Γ(μ) from (1) and let μ* = argmax_μ Γ(μ). Now,
Theorem 6. Let DetOpt(t) denote the mean per-turn reward obtained by DetOpt after running
for t steps with μ* = argmax_μ Γ(μ); then lim_{t→∞} DetOpt(t) = max_μ Γ(μ).
Note that the analysis of the algorithm holds for both budgeted and timed death models.
5.2
The state-oblivious case
We now present a modified version of DetOpt for the state-oblivious case. The intuition behind
this modification, Stochastic, is simple: instead of pulling an arm once to determine its payoff
μ_i, the algorithm pulls each arm n times and abandons it unless it looks promising. A variant, called
Stochastic with Early Stopping, abandons the arm earlier if its maximum possible future
reward will still not justify its retention. For n = O((log L)/ε²), Stochastic gets an expected
reward per step of Ω(μ* − ε) and is thus near-optimal; the details are omitted due to space constraints.
Algorithm Stochastic
input: Distribution F(·), expected lifetime L
μ* ← argmax_μ Γ(μ)    [Γ is defined in (1)]
while we keep playing
    i ← random new arm; r ← 0    [Play a random arm n times]
    for d = 1, . . . , n
        Pull arm i; r ← r + R(μ_i)
    end for
    if r > nμ*    [If it is good, stay with it forever]
        Pull arm i every turn until it dies
    end if
end while

Algorithm Stochastic with Early Stopping
input: Distribution F(·), expected lifetime L
μ* ← argmax_μ Γ(μ)    [Γ is defined in (1)]
while we keep playing
    i ← random new arm; r ← 0; d ← 0    [Play a random arm as long as necessary]
    while d < n and n − d ≥ nμ* − r
        Pull arm i; r ← r + R(μ_i); d ← d + 1
    end while
    if r > nμ*    [If it is good, stay with it forever]
        Pull arm i every turn until it dies
    end if
end while
The subset heuristic. Why can't we simply use a standard multi-armed bandit (MAB) algorithm
for mortal bandits as well? Intuitively, MAB algorithms invest a lot of pulls on all arms (at least
logarithmic in the total number of pulls) to guarantee convergence to the optimal arm. This is
necessary in the traditional bandit settings, but in the limit as t → ∞, the cost is recouped and leads
to sublinear regret. However, such an investment is not justified for mortal bandits: the most gain
we can get from an arm is L (if the arm has payoff 1), which reduces the importance of convergence
to the best arm. In fact, as shown by Corollary 4, converging to a reasonably good arm suffices.
However, standard MAB algorithms do identify better arms very well. This suggests the following
epoch-based heuristic: (a) select a subset of k/c arms uniformly at random from the total k arms at
the beginning of each epoch, (b) operate a standard bandit algorithm on these until the epoch ends,
and repeat. Intuitively, step (a) reduces the load on the bandit algorithm, allowing it to explore less
and converge faster, in return for finding an arm that is probably optimal only among the k/c subset.
Picking the right c and the epoch length then depends on balancing the speed of convergence of
the bandit algorithm, the arm lifetimes, and the difference between the k-th and the k/c-th order
statistics of the arm payoff distribution; in our experiments, c is chosen empirically.
Using the subset heuristic, we propose an extension of the UCB1 algorithm² [2], called UCB1-k/c,
for the state-oblivious case. Note that this is just one example of the use of this heuristic; any standard bandit algorithm could have been used in place of UCB1 here. In the next section, UCB1-k/c
is shown to perform far better than UCB1 in the mortal arms setting.
The AdaptiveGreedy heuristic. Empirically, simple greedy MAB algorithms have previously
been shown to perform well due to fast convergence. Hence for the purpose of evaluation, we also
compare to an adaptive greedy heuristic for mortal bandits. Note that the ε_n-greedy algorithm [2]
does not apply directly to mortal bandits since the probability ε_t of random exploration decays to
zero for large t, which can leave the algorithm with no good choices should the best arm expire.
Algorithm UCB1-k/c
input: k-armed bandit, c
while we keep playing
    S ← k/c random arms
    dead ← 0
    A_UCB1(S) ← Initialize UCB1 over arms S
    repeat
        i ← arm selected by A_UCB1(S)
        Pull arm i, provide reward to A_UCB1(S)
        x ← total arms that died this turn
        Check for newly dead arms in S, remove any
        dead ← dead + x
    until dead ≥ k/2 or |S| = 0
end while

Algorithm AdaptiveGreedy
input: k-armed bandit, c
Initialization: ∀i ∈ [1, k], r_i, n_i ← 0
while we keep playing
    m ← argmax_i r_i/n_i    [Find best arm so far]
    p_m ← r_m/n_m
    With probability min(1, c · p_m)
        j ← m
    Otherwise    [Pull a random arm]
        j ← uniform(1, k)
    r ← R(j)
    r_j ← r_j + r    [Update the observed rewards]
    n_j ← n_j + 1
end while
6
Empirical evaluation
In this section we evaluate the performance of UCB1-k/c, Stochastic, Stochastic with
Early Stopping, and AdaptiveGreedy in the mortal-arm, state-oblivious setting. We also compare these to the UCB1 algorithm [2], which does not consider arm mortality in its policy but is among
the faster converging standard multi-armed bandit algorithms. We present the results of simulation
studies using three different distributions of arm payoffs F(·).
Uniformly distributed arm payoffs. Our performance analyses assume that the cumulative payoff
distribution F(·) of new arms is known. A particularly simple one is the uniform distribution,
μ_t^i ~ uniform(0, 1). Figure 1(a) shows the performance of these algorithms as a function of the
expected lifetime of each arm, using a timed death and state-oblivious model. The evaluation was
performed over k = 1000 arms, with each curve showing the mean regret per turn obtained by each
algorithm when averaged over ten runs. Each run was simulated for ten times the expected lifetime
of the arms, and all parameters were empirically optimized for each algorithm and each lifetime.
Repeating the evaluation with k = 100, 000 arms produces qualitatively very similar performance.
We first note the striking difference between UCB1 and UCB1 K/C, with the latter performing far
better. In particular, even with the longest lifetimes, each arm can be sampled in expectation at most
100 times. With such limited sampling, UCB1 spends almost all the time exploring and generates
almost the same regret of 0.5 per turn as would an algorithm that pulls arms at random.
In contrast, UCB1 K/C is able to obtain a substantially lower regret by limiting the exploration to a
subset of the arms. This demonstrates the usefulness of the K/C idea: by running the UCB1 algorithm
on an appropriately sized subset of arms, the overall regret per turn is reduced drastically. In practice,
²UCB1 plays the arm j, previously pulled n_j times, with highest mean historical payoff plus
√((2 ln n)/n_j).
[Figure 1: two panels, (a) and (b), each plotting regret per time step against expected arm lifetime (100 to 100,000) for UCB1, UCB1-k/c, Stochastic, Stochastic with Early Stopping, and AdaptiveGreedy.]
Figure 1: Comparison of the regret per turn obtained by five different algorithms assuming that new
arm payoffs come from the (a) uniform distribution and (b) beta(1, 3) distribution.
with k = 1000 arms, the best performance was obtained with k/c between 4 and 40, depending on
the arm lifetime.
Second, we see that Stochastic outperformed UCB1-k/c with optimally chosen parameters.
Moreover, Stochastic with Early Stopping performs as well as AdaptiveGreedy, which
matches the best performance we were able to obtain by any algorithm. This demonstrates that (a)
the state-oblivious version of the optimal deterministic algorithm is effective in general, and (b) the
early stopping criterion allows arms with poor payoff to be quickly weeded out.
Beta distributed arm payoffs. While the strategies discussed perform well when arm payoffs are
uniformly distributed, it is unlikely that in a real setting the payoffs would be so well distributed.
In particular, if there are occasional arms with substantially higher payoffs, we could expect any
algorithm that does not exhaustively search available arms may obtain very high regret per turn.
Figure 1(b) shows the results when the arm payoff probabilities are drawn from the beta(1, 3) distribution. We chose this distribution as it has finite support yet tends to select small payoffs for most
arms while selecting high payoffs occasionally. Once again, we see that Stochastic with Early
Stopping and AdaptiveGreedy perform best, with the relative ranking of all other algorithms
the same as in the uniform case above. The absolute regret of the algorithms we have proposed is
increased relative to that seen in Figure 1(a), but still substantially better than that of the UCB1. In
fact, the regret of the UCB1 has increased more under this distribution than any other algorithm.
Real-world arm payoffs. Considering the application that motivated this work, we now evaluate the
performance of the four new algorithms when the arm payoffs come from the empirically observed
distribution of clickthrough rates on real ads served by a large ad broker.
Figure 2(a) shows a histogram of the payoff probabilities for a random sample of approximately
300 real ads belonging to a shopping-related category when presented on web pages classified as
belonging to the same category. The probabilities have been linearly scaled such that all ads have
payoff between 0 and 1. We see that the distribution is unimodal, and is fairly tightly concentrated.
By sampling arm payoffs from a smoothed version of this empirical distribution, we evaluated the
performance of the algorithms presented earlier. Figure 2(b) shows that the performance of all the
algorithms is consistent with that seen for both the uniform and beta payoff distributions. In particular, while the mean regret per turn is somewhat higher than that seen for the uniform distribution,
it is still lower than when payoffs are from the beta distribution. As before, Stochastic with
Early Stopping and AdaptiveGreedy perform best, indistinguishable from each other.
7
Conclusions
We have introduced a new formulation of the multi-armed bandit problem motivated by the real
world problem of selecting ads to display on webpages. In this setting the set of strategies available
to a multi-armed bandit algorithm changes rapidly over time. We provided a lower bound of linear
regret under certain payoff distributions. Further, we presented a number of algorithms that perform
substantially better in this setting than previous multi-armed bandit algorithms, including one that is
optimal under the state-aware setting, and one that is near-optimal under the state-oblivious setting.
Finally, we provided an extension that allows any previous multi-armed bandit algorithm to be used
[Figure 2: panel (a), titled "Ad Payoff Distribution", plots fraction of arms against payoff probability (scaled); panel (b) plots regret per time step against expected arm lifetime for the five algorithms.]
Figure 2: (a) Distribution of real world ad payoffs, scaled linearly such that the maximum payoff is
1 and (b) Regret per turn under the real-world ad payoff distribution.
in the case of mortal arms. Simulations on multiple payoff distributions, including one derived from
real-world ad serving application, demonstrate the efficacy of our approach.
Acknowledgments
We would like to thank the anonymous reviewers for their helpful comments and suggestions.
References
[1] P. Auer. Using confidence bounds for exploitation-exploration trade-offs. J. Machine Learning Research,
3:397–422, 2002.
[2] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multi-armed bandit problem. Machine
Learning, 47:235–256, 2002.
[3] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem.
SIAM J. Comput., 32(1):48–77, 2002.
[4] D. A. Berry, R. W. Chen, A. Zame, D. C. Heath, and L. A. Shepp. Bandit problems with infinitely many
arms. The Annals of Statistics, 25(5):2103–2116, 1997.
[5] D. A. Berry and B. Fristedt. Bandit Problems: Sequential Allocation of Experiments. Chapman and Hall,
London, UK, 1985.
[6] D. Bertsimas and J. Nino-Mora. Restless bandits, linear programming relaxations, and a primal-dual
index heuristic. Operations Research, 48(1):80–90, 2000.
[7] A. Blum and Y. Mansour. From external to internal regret. In 18th COLT, pages 621–636, 2005.
[8] W. Feller. An Introduction to Probability Theory and Its Applications, Volume 2. Wiley, 1971.
[9] Y. Freund, R. Schapire, Y. Singer, and M. K. Warmuth. Using and combining predictors that specialize.
In 29th STOC, pages 334–343, 1997.
[10] J. C. Gittins and D. M. Jones. A dynamic allocation index for the sequential design of experiments. In
J. G. et al., editor, Progress in Statistics, pages 241–266. North-Holland, 1974.
[11] M. Herbster and M. K. Warmuth. Tracking the best expert. Machine Learning, 32:151–178, 1998.
[12] R. Kleinberg. Online Decision Problems with Large Strategy Sets. PhD thesis, MIT, 2005.
[13] R. D. Kleinberg, A. Niculescu-Mizil, and Y. Sharma. Regret bounds for sleeping experts and bandits. In
21st COLT, pages 425–436, 2008.
[14] A. Krause and C. Guestrin. Nonmyopic active learning of Gaussian processes: An exploration-exploitation approach. In 24th ICML, pages 449–456, 2007.
[15] J. Nino-Mora. Restless bandits, partial conservation laws and indexability. Adv. Appl. Prob., 33:76–98,
2001.
[16] S. Pandey, D. Agarwal, D. Chakrabarti, and V. Josifovski. Bandits for taxonomies: A model-based
approach. In SDM, pages 216–227, 2007.
[17] S. Pandey, D. Chakrabarti, and D. Agarwal. Multi-armed bandit problems with dependent arms. In ICML,
pages 721–728, 2007.
[18] C. H. Papadimitriou and J. N. Tsitsiklis. The complexity of optimal queueing network control. In 9th
CCC, pages 318–322, 1994.
[19] A. Slivkins and E. Upfal. Adapting to a changing environment: The Brownian restless bandits. In 21st
COLT, pages 343–354, 2008.
[20] O. Teytaud, S. Gelly, and M. Sebag. Anytime many-armed bandits. In CAP, 2007.
[21] T. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Adv. Appl. Math., 6:4–22, 1985.
[22] P. Whittle. Arm-acquiring bandits. The Annals of Probability, 9(2):284–292, 1981.
[23] P. Whittle. Restless bandits: Activity allocation in a changing world. J. of Appl. Prob., 25A:287–298,
1988.
Beyond Novelty Detection: Incongruent Events, when
General and Specific Classifiers Disagree
Abstract
Unexpected stimuli are a challenge to any machine learning algorithm. Here we
identify distinct types of unexpected events, focusing on "incongruent events", when "general level" and "specific level" classifiers give conflicting predictions.
We define a formal framework for the representation and processing of incongruent events: starting from the notion of label hierarchy, we show how partial order
on labels can be deduced from such hierarchies. For each event, we compute its
probability in different ways, based on adjacent levels (according to the partial
order) in the label hierarchy. An incongruent event is an event where the probability computed based on some more specific level (in accordance with the partial
order) is much smaller than the probability computed based on some more general
level, leading to conflicting predictions. We derive algorithms to detect incongruent events from different types of hierarchies, corresponding to class membership
or part membership. Respectively, we show promising results with real data on
two specific problems: Out Of Vocabulary words in speech recognition, and the
identification of a new sub-class (e.g., the face of a new individual) in audio-visual
facial object recognition.
1 Introduction
Machine learning builds models of the world using training data from the application domain and
prior knowledge about the problem. The models are later applied to future data in order to estimate
the current state of the world. An implied assumption is that the future is stochastically similar to
the past. The approach fails when the system encounters situations that are not anticipated from the
past experience. In contrast, successful natural organisms identify new unanticipated stimuli and
situations and frequently generate appropriate responses.
By definition, an unexpected event is one whose probability to confront the system is low, based
on the data that has been observed previously. In line with this observation, much of the computational work on novelty detection focused on the probabilistic modeling of known classes, identifying
outliers of these distributions as novel events (see e.g. [1, 2] for recent reviews). More recently, oneclass classifiers have been proposed and used for novelty detection without the direct modeling of
data distribution [3, 4]. There are many studies on novelty detection in biological systems [5], often
focusing on regions of the hippocampus [6].
To advance beyond the detection of outliers, we observe that there are many different reasons why
some stimuli could appear novel. Our work, presented in Section 2, focuses on unexpected events
which are indicated by the incongruence between prediction induced by prior experience (training)
and the evidence provided by the sensory data. To identify an item as incongruent, we use two
parallel classifiers. One of them is strongly constrained by specific knowledge (both prior and dataderived), the other classifier is more general and less constrained. Both classifiers are assumed
to yield class-posterior probability in response to a particular input signal. A sufficiently large
discrepancy between posterior probabilities induced by input data in the two classifiers is taken as
indication that an item is incongruent.
Thus, in comparison with most existing work on novelty detection, one new and important characteristic of our approach is that we look for a level of description where the novel event is highly
probable. Rather than simply respond to an event which is rejected by all classifiers, which more
often than not requires no special attention (as in pure noise), we construct and exploit a hierarchy of
representations. We attend to those events which are recognized (or accepted) at some more abstract
levels of description in the hierarchy, while being rejected by the more concrete classifiers.
There are various ways to incorporate prior hierarchical knowledge and constraints within different
classifier levels, as discussed in Section 3. One approach, used to detect images of unexpected incongruous objects, is to train the more general, less constrained classifier using a larger more diverse
set of stimuli, e.g., the facial images of many individuals. The second classifier is trained using a
more specific (i.e. smaller) set of specific objects (e.g., the set of Einstein's facial images). An
incongruous item (e.g., a new individual) could then be identified by a smaller posterior probability
estimated by the more specific classifiers relative to the probability from the more general classifier.
A different approach is used to identify unexpected (out-of-vocabulary) lexical items. The more
general classifier is trained to sequentially classify speech sounds (phonemes) from relatively short
segments of the input speech signal (thus yielding an unconstrained sequence of phoneme labels);
the more constrained classifier is trained to classify a particular set of words (highly constrained
sequences of phoneme labels) from the information available in the whole speech sentence. A word
that did not belong to the expected vocabulary of the more constrained recognizer could then be
identified by discrepancy in posterior probabilities of phonemes derived from both classifiers.
Our second contribution in Section 2 is the presentation of a unifying theoretical framework for
these two approaches. Specifically, we consider two kinds of hierarchies: Part membership as in
biological taxonomy or speech, and Class membership, as in human categorization (or levels of
categorization). We define a notion of partial order on such hierarchies, and identify those events
whose probability as computed using different levels of the hierarchy does not agree. In particular,
we are interested in those events that receive high probability at more general levels (for example,
the system is certain that the new example is a dog), but low probability at more specific levels (in the
same example, the system is certain that the new example is not any known dog breed). Such events
correspond to many interesting situations that are worthy of special attention, including incongruous
scenes and new sub-classes, as shown in Section 3.
2 Incongruent Events - unified approach
2.1 Introducing label hierarchy
The set of labels represents the knowledge base about stimuli, which is either given (by a teacher in
supervised learning settings) or learned (in unsupervised or semi-supervised settings). In cognitive
systems such knowledge is hardly ever a set; often, in fact, labels are given (or can be thought of) as
a hierarchy. In general, hierarchies can be represented as directed graphs. The nodes of the graphs
may be divided into distinct subsets that correspond to different entities (e.g., all objects that are
animals); we call these subsets ?levels?. We identify two types of hierarchies:
Part membership, as in biological taxonomy or speech. For example, eyes, ears, and nose combine
to form a head; head, legs and tail combine to form a dog.
Class membership, as in human categorization ? where objects can be classified at different levels
of generality, from sub-ordinate categories (most specific level), to basic level (intermediate level),
to super-ordinate categories (most general level). For example, a Beagle (sub-ordinate category) is
also a dog (basic level category), and it is also an animal (super-ordinate category).
The two hierarchies defined above induce constraints on the observed features in different ways. In
the class-membership hierarchy, a parent class admits a higher number of combinations of features than any of its children, i.e., the parent category is less constrained than its child classes. In
contrast, a parent node in the part-membership hierarchy imposes stricter constraints on the observed
features than a child node. This distinction is illustrated by the simple "toy" example shown in Fig. 1.
Roughly speaking, in the class-membership hierarchy (right panel), the parent node is the disjunction
of the child categories. In the part-membership hierarchy (left panel), the parent category represents
a conjunction of the children categories. This difference in the effect of constraints between the two
representations is, of course, reflected in the dependency of the posterior probability on the class,
conditioned on the observations.
Figure 1: Examples. Left: part-membership hierarchy; the concept of a dog requires a conjunction of parts: a head, legs and tail. Right: class-membership hierarchy; the concept of a dog is defined as the disjunction of more specific concepts: Afghan, Beagle and Collie.
In order to treat different hierarchical representations uniformly we invoke the notion of partial
order. Intuitively speaking, different levels in each hierarchy are related by a partial order: the more
specific concept, which corresponds to a smaller set of events or objects in the world, is always
smaller than the more general concept, which corresponds to a larger set of events or objects.
To illustrate this point, consider Fig. 1 again. For the part-membership hierarchy example (left panel), the concept of "dog" requires a conjunction of parts as in DOG = LEGS ∧ HEAD ∧ TAIL, and therefore, for example, DOG ⊆ LEGS ⟹ DOG ⪯ LEGS. Thus

DOG ⪯ LEGS,  DOG ⪯ HEAD,  DOG ⪯ TAIL.

In contrast, for the class-membership hierarchy (right panel), the class of dogs requires the disjunction of the individual members as in DOG = AFGHAN ∨ BEAGLE ∨ COLLIE, and therefore, for example, DOG ⊇ AFGHAN ⟹ DOG ⪰ AFGHAN. Thus

DOG ⪰ AFGHAN,  DOG ⪰ BEAGLE,  DOG ⪰ COLLIE.
2.2 Definition of Incongruent Events
Notations
We assume that the data is represented as a Graph {G, E} of Partial Orders (GPO). Each node in G is a random variable which corresponds to a class or concept (or event). Each directed link in E corresponds to a partial order relationship as defined above, where there is a link from node a to node b iff a ⪯ b.
For each node (concept) a, define A_s = {b ∈ G : b ⪯ a}, the set of all nodes (concepts) b more specific (smaller) than a in accordance with the given partial order; similarly, define A_g = {b ∈ G : a ⪯ b}, the set of all nodes (concepts) b more general (larger) than a in accordance with the given partial order.
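To make the notation concrete, the following is a minimal sketch (ours, not the authors' code) of representing a GPO and computing A_s and A_g by graph reachability; the class name, API, and the example edges are illustrative.

```python
# A minimal GPO sketch (our illustration): an edge (a, b) encodes a <= b,
# i.e., a is more specific than b; A_s and A_g are reachability sets.
from collections import defaultdict

class GPO:
    def __init__(self, edges):
        self.up = defaultdict(set)     # a -> direct more-general concepts
        self.down = defaultdict(set)   # b -> direct more-specific concepts
        for a, b in edges:
            self.up[a].add(b)
            self.down[b].add(a)

    def _reach(self, start, adj):
        seen, stack = set(), [start]
        while stack:
            for nxt in adj[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    def A_s(self, a):   # all concepts more specific than a
        return self._reach(a, self.down)

    def A_g(self, a):   # all concepts more general than a
        return self._reach(a, self.up)

# Class-membership example from Fig. 1 (right panel):
gpo = GPO([("afghan", "dog"), ("beagle", "dog"), ("collie", "dog"),
           ("dog", "animal")])
print(gpo.A_s("dog"))   # {'afghan', 'beagle', 'collie'}
print(gpo.A_g("dog"))   # {'animal'}
```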
For each concept a and training data T, we train up to 3 probabilistic models which are derived from T in different ways, in order to determine whether the concept a is present in a new data point X:

• Q_a(X): a probabilistic model of class a, derived from training data T without using the partial order relations in the GPO.

• If |A_s| > 1:
  Q^s_a(X): a probabilistic model of class a which is based on the probability of concepts in A_s, assuming their independence of each other. Typically, the model incorporates some relatively simple conjunctive and/or disjunctive relations among concepts in A_s.

• If |A_g| > 1:
  Q^g_a(X): a probabilistic model of class a which is based on the probability of concepts in A_g, assuming their independence of each other. Here too, the model typically incorporates some relatively simple conjunctive and/or disjunctive relations among concepts in A_g.
Examples
To illustrate, we use the simple examples shown in Fig. 1, where our concept of interest a is the concept "dog":
In the part-membership hierarchy (left panel), |A_g| = 3 (head, legs, tail). We can therefore learn 2 models for the class "dog" (Q^s_dog is not defined):
1. Q_dog - obtained using training pictures of "dogs" and "not dogs" without body part labels.
2. Q^g_dog - obtained using the outcome of models for head, legs and tail, which were trained on the same training set T with body part labels. For example, if we assume that concept a is the conjunction of its part member concepts as defined above, and assuming that these part concepts are independent of each other, we get

Q^g_{dog} = \prod_{b \in A_g} Q_b = Q_{Head} \cdot Q_{Legs} \cdot Q_{Tail}    (1)
In the class-membership hierarchy (right panel), |A_s| = 3 (Afghan, Beagle, Collie). If we further assume that a class-membership hierarchy is always a tree, then |A_g| = 1. We can therefore learn 2 models for the class "dog" (Q^g_dog is not defined):
1. Q_dog - obtained using training pictures of "dogs" and "not dogs" without breed labels.
2. Q^s_dog - obtained using the outcome of models for Afghan, Beagle and Collie, which were trained on the same training set T with only specific dog type labels. For example, if we assume that concept a is the disjunction of its sub-class concepts as defined above, and assuming that these sub-class concepts are independent of each other, we get

Q^s_{dog} = \sum_{b \in A_s} Q_b = Q_{Afghan} + Q_{Beagle} + Q_{Collie}
Incongruent events
In general, we expect the different models to provide roughly the same probability for the presence of concept a in data X. A mismatch between the predictions of the different models should raise a red flag, possibly indicating that something new and interesting has been observed. In particular, we are interested in the following discrepancy:
Definition: Observation X is incongruent if there exists a concept a such that

Q^g_a(X) \gg Q_a(X)   or   Q_a(X) \gg Q^s_a(X).    (2)
Alternatively, observation X is incongruent if a discrepancy exists between the inferences of the two classifiers: either the classifier based on the more general descriptions from level g accepts X while the direct classifier rejects it, or the direct classifier accepts X while the classifier based on the more specific descriptions from level s rejects it. In either case, the concept receives high probability at the more general level (according to the GPO), but much lower probability when relying only on the more specific level.
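As an illustration of how this test could be operationalized (our sketch, not the authors' implementation), the "≫" relation is replaced below by a hypothetical ratio threshold that would be tuned in practice.

```python
# A minimal sketch (ours) of the incongruence test of Eq. (2); `ratio` is a
# hypothetical threshold standing in for "much greater than".
def is_incongruent(Q_a, Q_g_a=None, Q_s_a=None, ratio=10.0):
    """Q_a: direct model Q_a(X); Q_g_a: Q^g_a(X) if |A_g| > 1;
    Q_s_a: Q^s_a(X) if |A_s| > 1."""
    if Q_g_a is not None and Q_g_a > ratio * Q_a:
        return True    # general level accepts, direct classifier rejects
    if Q_s_a is not None and Q_a > ratio * Q_s_a:
        return True    # direct classifier accepts, specific level rejects
    return False

# Example: a confident "dog" classifier but no confident breed model,
# suggesting a new sub-class.
print(is_incongruent(Q_a=0.9, Q_s_a=0.02))   # True
```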
Let us discuss again the examples we have seen before, to illustrate why this definition indeed captures interesting "surprises":
• In the part-membership hierarchy (left panel of Fig. 1), we have

Q^g_{dog} = Q_{Head} \cdot Q_{Legs} \cdot Q_{Tail} \gg Q_{dog}

In other words, while the probability of each part is high (since the multiplication of those probabilities is high), the "dog" classifier is rather uncertain about the existence of a dog in this data.
How can this happen? Maybe the parts are configured in an unusual arrangement for a dog (as in a 3-legged cat), or maybe we encounter a donkey with a cat's tail (as in Shrek 3). Those are two examples of the kind of unexpected events we are interested in.
• In the class-membership hierarchy (right panel of Fig. 1), we have

Q^s_{dog} = Q_{Afghan} + Q_{Beagle} + Q_{Collie} \ll Q_{dog}

In other words, while the probability of each sub-class is low (since the sum of these probabilities is low), the "dog" classifier is certain about the existence of a dog in this data.
How may such a discrepancy arise? Maybe we are seeing a new type of dog that we haven't seen before - a Pointer. The dog model, if correctly capturing the notion of "dogness", should be able to identify this new object, while models of previously seen dog breeds (Afghan, Beagle and Collie) correctly fail to recognize the new object.
3 Incongruent events: algorithms
Our definition for incongruent events in the previous section is indeed unified, but as a result quite
abstract. In this section we discuss two different algorithmic implementations, one generative and
one discriminative, which were developed for the part membership and class membership hierarchies respectively (see definition in Section 1). In both cases, we use the notation Q(x) for the class
probability as defined above, and p(x) for the estimated probability.
3.1 Part membership - a generative algorithm
Consider the left panel of Fig. 1. The event in the top node is incongruent if its probability is low,
while the probability of all its descendants is high.
In many applications, such as speech recognition, one computes the probability of events (sentences)
based on a generative model (corresponding to a specific language) which includes a dictionary of
parts (words). At the top level the event probability is computed conditional on the model; in which
case typically the parts are assumed to be independent, and the event probability is computed as
the multiplication of the parts probabilities conditioned on the model. For example, in speech processing and assuming a specific language (e.g., English), the probability of the sentence is typically
computed by multiplying the probability of each word using an HMM model trained on sentences
from a specific language. At the bottom level, the probability of each part is computed independently
of the generative model.
More formally, consider an event u composed of parts w_k. Using the generative model of events and assuming the conditional independence of the parts given this model, the prior probability of the event is given by the product of the prior probabilities of the parts,

p(u|L) = \prod_k p(w_k|L)    (3)

where L denotes the generative model (e.g., the language).
For measurement X, we compute Q(X) as follows:

Q(X) = p(X|L) = \sum_u p(X|u, L)\, p(u|L) \approx p(X|\hat{u}, L)\, p(\hat{u}|L) = p(X|\hat{u}) \prod_k p(w_k|L)    (4)

using p(X|u, L) = p(X|u) and (3), and where \hat{u} = arg max_u p(u|L) is the most likely interpretation. At the risk of notation abuse, {w_k} now denote the parts which compose the most likely event \hat{u}. We assume that the first sum is dominated by the maximal term.
Given a part-membership hierarchy, we can use (1) to compute the probability Q^g(X) directly, without using the generative model L:

Q^g(X) = p(X) = \sum_u p(X|u)\, p(u) \approx p(X|\hat{u})\, p(\hat{u}) = p(X|\hat{u}) \prod_k p(w_k)    (5)
It follows from (4) and (5) that

\frac{Q(X)}{Q^g(X)} \approx \prod_k \frac{p(w_k|L)}{p(w_k)}    (6)
We can now conclude that X is an incongruent event according to our definition if there exists at least one part k in the final event \hat{u} such that p(w_k) \gg p(w_k|L) (assuming all other parts have roughly the same conditional and unconditional probabilities). In speech processing, a sentence is incongruent if it includes an incongruent word - a word whose probability based on the generative language model is low, but whose direct probability (not constrained by the language model) is high.
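A minimal sketch (ours) of the resulting per-part test: a word in the most likely interpretation is flagged when its unconstrained probability greatly exceeds its language-model probability. The probabilities and the threshold are hypothetical placeholders.

```python
# Flag parts (words) w_k with p(w_k) >> p(w_k | L), following Eq. (6);
# all numbers below are illustrative.
import math

def oov_scores(p_unconstrained, p_constrained, eps=1e-12):
    """Per-part log ratio log p(w_k) - log p(w_k|L); large values suggest OOV."""
    return [math.log(pu + eps) - math.log(pc + eps)
            for pu, pc in zip(p_unconstrained, p_constrained)]

parts  = ["the", "cat", "zlyk", "sat"]
p_free = [0.20, 0.10, 0.08, 0.12]    # direct (e.g., acoustic) evidence
p_lm   = [0.18, 0.09, 1e-6, 0.11]    # language model gives "zlyk" ~ no mass
flagged = [w for w, s in zip(parts, oov_scores(p_free, p_lm))
           if s > math.log(10.0)]
print(flagged)   # ['zlyk']
```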
Example: Out Of Vocabulary (OOV) words
For the detection of OOV words, we performed experiments using a Large Vocabulary Continuous
Speech Recognition (LVCSR) system on the Wall Street Journal Corpus (WSJ). The evaluation
set consists of 2.5 hours. To introduce OOV words, the vocabulary was restricted to the 4968 most
frequent words from the language training texts, leaving the remaining words unknown to the model.
A more detailed description is given in [7].
In this task, we have shown that the comparison between two parallel classifiers, based on strong
and weak posterior streams, is effective for the detection of OOV words, and also for the detection
of recognition errors. Specifically, we use the derivation above to detect out of vocabulary words,
by comparing their probability when computed based on the language model, and when computed
based on mere acoustic modeling. The best performance was obtained by the system when a Neural
Network (NN) classifier was used for the direct estimation of frame-based OOV scores. The network
was directly fed by posteriors from the strong and the weak systems. For the WSJ task, we achieved
performance of around 11% Equal-Error-Rate (EER) (Miss/False Alarm probability), see Fig. 2.
Figure 2: Several techniques used to detect OOV: (i) Cmax: Confidence measure computed ONLY from
strongly constrained Large Vocabulary Continuous Speech Recognizer (LVCSR), with frame-based posteriors.
(ii) LVCSR+weak features: Strongly and weakly constrained recognizers, compared via the KL-divergence
metric. (iii) LVCSR+NN posteriors: Combination of strong and weak phoneme posteriors using NN classifier.
(iv) all features: fusion of (ii) and (iii) together.
3.2 Class membership - a discriminative algorithm
Consider the right panel of Fig. 1. The general class in the top node is incongruent if its probability
is high, while the probability of all its sub-classes is low. In other words, the classifier of the
parent object accepts the new observation, but all the children object classifiers reject it. Brute
force computation of this definition may follow the path taken by traditional approaches to novelty
detection, e.g., looking for rejection by all one-class classifiers corresponding to sub-class objects. The results we obtained by this method were mediocre, probably because generative models are
not well suited for the task. Instead, it seems like discriminative classifiers, trained to discriminate
between objects at the sub-class level, could be more successful. We note that unlike traditional
approaches to novelty detection, which must use generative models or one-class classifiers in the
absence of appropriate discriminative data, our dependence on object hierarchy provides discriminative data as a by-product. In other words, after the recognition by a parent-node classifier, we may
use classifiers trained to discriminate between its children to implement a discriminative novelty
detection algorithm.
Specifically, we used the approach described in [8] to build a unified representation for all objects
in the sub-class level, which is the representation computed for the parent object whose classifier
had accepted (positively recognized) the object. In this feature space, we build a classifier for each
sub-class based on the majority vote between pairwise discriminative classifiers. Based on these
classifiers, each example (accepted by the parent classifier) is assigned to one of the sub-classes, and
the average margin over classifiers which agree with the final assignment is calculated. The final
classifier then uses a threshold on this average margin to identify each object as a known sub-class or a new sub-class. Previous research in the area of face identification can be viewed as an implicit use of this proposed framework, see e.g. [9].
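A minimal sketch (ours, with an assumed `pairwise` classifier interface) of the margin-based decision described above.

```python
# Majority vote over pairwise sub-class classifiers, then the average margin of
# the classifiers that agree with the winner; a small average margin suggests a
# new sub-class. `pairwise[(i, j)](x)` is assumed to return (winner, margin).
from collections import Counter
from itertools import combinations

def novelty_margin(x, classes, pairwise):
    votes, margins = Counter(), {}
    for i, j in combinations(classes, 2):
        winner, margin = pairwise[(i, j)](x)
        votes[winner] += 1
        margins.setdefault(winner, []).append(margin)
    assigned = votes.most_common(1)[0][0]
    avg = sum(margins[assigned]) / len(margins[assigned])
    return assigned, avg   # threshold `avg` to call known vs. new sub-class
```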
Example: new face recognition from audio-visual data
We tested our algorithm on audio-visual speaker verification. In this setup, the general parent category level is "speech" (audio) and "face" (visual), and the different individuals are the offspring (sub-class) levels. The task is to identify an individual as belonging to the trusted group of individuals vs. being unknown, i.e., known sub-class vs. new sub-class in a class-membership hierarchy. The unified representation of the visual cues was built using the approach described in [8]. All objects in the sub-class level (different individuals) were represented using the representation learnt for the parent level ("face"). For the audio cues we used the Perceptual Linear Predictive (PLP)
Cepstral features [10] as the unified representation. We used SVM classifiers with RBF kernel as the
pairwise discriminative classifiers for each of the different audio/visual representations separately.
Data was collected for our experiments using a wearable device, which included stereo panoramic
vision sensors and microphone arrays. In the recorded scenario, individuals walked towards the
device and then read aloud an identical text; we acquired 30 sequences with 17 speakers (see Fig. 3
for an example). We tested our method by choosing six speakers as members of the trusted group,
while the rest were assumed unknown.
The method was applied separately using each one of the different modalities, and also in an integrated manner using both modalities. For this fusion the audio signal and visual signal were
synchronized, and the winning classification margins of both signals were normalized to the same
scale and averaged to obtain a single margin for the combined method.
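A small sketch (ours) of the fusion rule; the text states only that the margins were normalized to the same scale and averaged, so the z-scoring below is our assumption.

```python
# Fuse per-frame winning margins from the two modalities.
import numpy as np

def fuse_margins(audio_margins, visual_margins):
    def zscore(m):
        m = np.asarray(m, dtype=float)
        return (m - m.mean()) / (m.std() + 1e-12)
    return 0.5 * (zscore(audio_margins) + zscore(visual_margins))

combined = fuse_margins([1.2, 0.3, 2.0], [0.8, 0.1, 1.5])
```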
Since the goal is to identify novel incongruent events, true positive and false positive rates were
calculated by considering all frames from the unknown test sequences as positive events and the
known individual test sequences as negative events. We compared our method to novelty detection
based on one-class SVM [3] extended to our multi-class case. Decision was obtained by comparing
the maximal margin over all one-class classifiers to a varying threshold. As can be seen in Fig. 3,
our method performs substantially better in both modalities as compared to the "standard" one-class approach for novelty detection. Performance is further improved by fusing both modalities.
4 Summary
Unexpected events are typically identified by their low posterior probability. In this paper we employed label hierarchy to obtain a few probability values for each event, which allowed us to tease
apart different types of unexpected events. In general there are 4 possibilities, based on the classifiers' responses at two adjacent levels:
    Specific level   General level   Possible reason
1   reject           reject          noisy measurements, or a totally new concept
2   reject           accept          incongruent concept
3   accept           reject          inconsistent with partial order, models are wrong
4   accept           accept          known concept
[Figure 3 plot: ROC curves (true positive rate vs. false positive rate) for audio, visual, audio-visual, audio (OC-SVM), and visual (OC-SVM).]
Figure 3: Left: Example: one frame used for the visual verification task. Right: True Positive vs. False
Positive rates when detecting unknown vs. trusted individuals. The unknown are regarded as positive events.
Results are shown for the proposed method using both modalities separately and the combined method (solid
lines). For comparison, we show results with a more traditional novelty detection method using One Class
SVM (dashed lines).
We focused above on the second type of events - incongruent concepts, which have not been studied
previously in isolation. Such events are characterized by some discrepancy between the response of
two classifiers, which can occur for a number of different reasons. Context: in a given context such as the English language, a sentence containing a Czech word is assigned low probability. In the visual domain, in a given context such as a street scene, otherwise high-probability events such as "car" and "elephant" are not likely to appear together. New sub-class: a new object has been encountered, of some known generic type but unknown specifics.
We described how our approach can be used to design new algorithms to address these problems,
showing promising results on real speech and audio-visual facial datasets.
References
[1] Markou, M., Singh, S.: Novelty detection: a review - part 1: statistical approaches. Signal Processing 83 (2003) 2499-2521
[2] Markou, M., Singh, S.: Novelty detection: a review - part 2: neural network based approaches. Signal Processing 83 (2003) 2481-2497
[3] Scholkopf, B., Williamson, R.C., Smola, A.J., Shawe-Taylor, J., Platt, J.: Support vector method for novelty detection. In: Proc. NIPS. Volume 12. (2000) 582-588
[4] Lanckriet, G.R.G., Ghaoui, L.E., Jordan, M.I.: Robust novelty detection with single-class MPM. In: Proc. NIPS. Volume 15. (2003) 929-936
[5] Berns, G.S., Cohen, J.D., Mintun, M.A.: Brain regions responsive to novelty in the absence of awareness. Science 276 (1997) 1272-1275
[6] Rokers, B., Mercado, E., Allen, M.T., Myers, C.E., Gluck, M.A.: A connectionist model of septohippocampal dynamics during conditioning: Closing the loop. Behavioral Neuroscience 116 (2002) 48-62
[7] Burget, L., Schwarz, P., Matejka, P., Hannemann, M., Rastrow, A., White, C., Khudanpur, S., Hermansky,
H., Cernocky, J.: Combination of strongly and weakly constrained recognizers for reliable detection of
OOVs. In: Proceedings of IEEE Int. Conf. on Acoustics, Speech, and Signal Processing (ICASSP). (2008)
[8] Bar-Hillel, A., Weinshall, D.: Subordinate class recognition using relational object models. Proc. NIPS
19 (2006)
[9] Lanitis, A., Taylor, C.J., Cootes, T.F.: A unified approach to coding and interpreting face images. In: Proc. ICCV. (1995) 368-373
[10] Hermansky, H.: Perceptual linear predictive (PLP) analysis of speech. The Journal of the Acoustical
Society of America 87 (1990) 1738
2,848 | 3,582 | A general framework for investigating how far the
decoding process in the brain can be simplified
Masafumi Oizumi (1), Toshiyuki Ishii (2), Kazuya Ishibashi (1), Toshihiko Hosoya (2), Masato Okada (1, 2)
[email protected]
[email protected],[email protected]
[email protected], [email protected]
(1) University of Tokyo, Kashiwa-shi, Chiba, JAPAN
(2) RIKEN Brain Science Institute, Wako-shi, Saitama, JAPAN
Abstract
"How is information decoded in the brain?" is one of the most difficult and important questions in neuroscience. Whether neural correlation is important or not
in decoding neural activities is of special interest. We have developed a general
framework for investigating how far the decoding process in the brain can be simplified. First, we hierarchically construct simplified probabilistic models of neural responses that ignore more than Kth-order correlations by using a maximum
entropy principle. Then, we compute how much information is lost when information is decoded using the simplified models, i.e., "mismatched decoders". We
introduce an information theoretically correct quantity for evaluating the information obtained by mismatched decoders. We applied our proposed framework to
spike data for vertebrate retina. We used 100-ms natural movies as stimuli and
computed the information contained in neural activities about these movies. We
found that the information loss is negligibly small in population activities of ganglion cells even if all orders of correlation are ignored in decoding. We also found
that if we assume stationarity for long durations in the information analysis of dynamically changing stimuli like natural movies, pseudo correlations seem to carry
a large portion of the information.
1 Introduction
An ultimate goal of neuroscience is to elucidate how information is encoded and decoded by neural
activities. To investigate what information is encoded by neurons in a certain area of the brain, the
mutual information between stimuli and neural responses is often calculated. In the analysis of
mutual information, it is implicitly assumed that encoded information is decoded by an optimal
decoder, which exactly matches the encoder. In other words, the brain is assumed to have full
knowledge of the encoding process. Generally, if the neural activities are correlated, the amount of
data needed for the optimal decoding scales exponentially with the number of neurons. Since a large
amount of data and many complex computations are needed for optimal decoding, the assumption
of an optimal decoder in the brain is doubtful.
The reason mutual information is widely used in neuroscience despite the doubtfulness of the optimal decoder is that we are completely ignorant of how information is decoded in the brain. Thus,
we simply evaluate the maximal amount of information that can be extracted from neural activities
by calculating the mutual information. To address this lack of knowledge, we can ask a different
question: "How much information can be obtained by a decoder that has partial knowledge of the encoding process?" [10, 14] We call this type of decoder a "simplified decoder" or a "mismatched decoder". For example, an independent decoder is a simplified decoder; it takes only the marginal
distribution of the neural responses into consideration and ignores the correlations between neuronal
activities. The independent decoder is of particular importance because several studies have shown
that maximum likelihood estimation can be implemented by a biologically plausible network [2, 4].
If it is experimentally shown that a sufficiently large portion of information is obtained by the independent decoder, we can say that the brain may function in a manner similar to the independent
decoder. In this context, Nirenberg et al. computed the amount of information obtained by the independent decoder in pairs of retinal ganglion cell activities [10]. They showed that no pair of
cells showed a loss of information greater than 11%. Because only pairs of cells were considered
in their analysis, it had still not been elucidated whether correlations are important in population activities.
To elucidate whether correlations are important or not in population activities, we have developed
a general framework for investigating the importance of correlation in decoding neural activities.
When population activities are analyzed, we have to deal with not only second-order correlations
but also higher-order correlations in general. Therefore, we need to hierarchically construct simplified decoders that account for up to Kth-order correlations, where K = 1, 2, ..., N. By computing
how much information is obtained by the simplified decoders, we investigate how many orders of
correlation should be taken into account to extract enough information. To compute the information
obtained by the mismatched decoders, we introduce an information-theoretically correct quantity derived by Merhav et al. [8]. The information for mismatched decoders previously proposed by Nirenberg and Latham is a lower bound on the correct information [5, 11]. Because this lower bound can be very loose and their proposed information can be negative when many cells are analyzed, as is shown in this paper, we need to accurately evaluate the information obtained by mismatched decoders.
The plan of the paper is as follows. In Section 2, we describe a way of computing the information
that can be extracted from neural activities by mismatched decoders using the information derived
by Merhav et al. [8]. Using analytical computation, we demonstrate how the information for mismatched
decoders previously proposed by Nirenberg and Latham differs from the correct information derived
by Merhav et al., especially when many cells are analyzed. In Section 3, we apply our framework to
spike data for ganglion cells in the salamander retina. We first describe the method of hierarchically
constructing simplified decoders by using the maximum entropy principle [12]. We then compute the
information obtained with the simplified decoders. We find that more than 90% of the information
can be extracted from the population activities of ganglion cells even if all orders of correlations
are ignored in decoding. We also describe the problem of previous studies [10, 12] in which the
stationarity of stimuli is assumed for a duration that is too long. Using a toy model, we demonstrate
that pseudo correlations seem to carry a large portion of the information because of the stationarity
assumption.
2 Information for mismatched decoders
Let us consider how much information about stimuli can be extracted from neural responses. We
assume that we experimentally obtain the conditional probability distribution p(r|s) that neural responses r are evoked by stimulus s. We can say that the stimulus is encoded by neural response r,
which obeys the distribution p(r|s). We call p(r|s) the "encoding model". The maximal amount of information obtained with the optimal decoder can be evaluated by using the mutual information:

I = -\int dr\, p(r) \log_2 p(r) + \int dr \sum_s p(s)\, p(r|s) \log_2 p(r|s),    (1)

where p(r) = \sum_s p(r|s)\, p(s) and p(s) is the prior probability of stimuli. In the optimal decoder, the
probability distribution q(r|s) that exactly matches the encoding model p(r|s) is used for decoding;
that is, q(r|s) = p(r|s). We call q(r|s) the "decoding model". We can also compute the maximal amount of information obtained by a decoder using a decoding model q(r|s) that does not match the encoding model p(r|s) by using an equation derived by Merhav et al. [8]:

I^*(\beta) = -\int dr\, p(r) \log_2 \sum_s p(s)\, q(r|s)^\beta + \int dr \sum_s p(s)\, p(r|s) \log_2 q(r|s)^\beta,    (2)
where \beta takes the value that maximizes I^*(\beta). Thus, \beta is the value that satisfies \partial I^*/\partial\beta = 0. We call a decoder using the mismatched decoding model a "mismatched decoder".
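To make Eq. 2 concrete, here is a minimal sketch (ours, not the authors' code) for a discrete response variable r, with the integrals replaced by sums; the clipping constant and the search interval for \beta are our assumptions.

```python
# A sketch of I*(beta) (Eq. 2) for discrete responses.
import numpy as np
from scipy.optimize import minimize_scalar

def I_star(p_r_given_s, p_s, q_r_given_s):
    """p_r_given_s, q_r_given_s: (S, R) arrays; p_s: (S,) prior.
    Returns (I* in bits, maximizing beta)."""
    q = np.clip(q_r_given_s, 1e-12, None)      # guard log(0)
    p_r = p_s @ p_r_given_s                    # marginal p(r)
    def neg_I(beta):
        qb = q ** beta
        term1 = -np.sum(p_r * np.log2(p_s @ qb))
        term2 = np.sum(p_s[:, None] * p_r_given_s * np.log2(qb))
        return -(term1 + term2)
    res = minimize_scalar(neg_I, bounds=(1e-3, 20.0), method="bounded")
    return -res.fun, res.x

# Sanity check: with q = p, the optimum is beta = 1 and I* equals the
# mutual information of Eq. 1.
```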
Figure 1: Comparison between the correct information I^* derived by Merhav et al. and Nirenberg-Latham information I^{NL}. A: Difference between I^*/I (solid line) and I^{NL}/I (dotted line) in a Gaussian model where correlations and derivatives of mean firing rates are uniform. Correlation parameter c = 0.01. B: Difference between I_1^*/I (solid line) and I_1^{NL}/I (dotted line) when the spike data in Figure 3A are used. For this spike data and other spike data analyzed, Nirenberg-Latham information provides a tight lower bound on the correct information, possibly because the number of cells is small.
Previously, Nirenberg and Latham proposed that the information obtained by mismatched decoders can be evaluated by using [11]

I^{NL} = -\int dr\, p(r) \log_2 \sum_s p(s)\, q(r|s) + \int dr \sum_s p(s)\, p(r|s) \log_2 q(r|s).    (3)

We call their proposed information "Nirenberg-Latham information". If we set \beta = 1 in Eq. 2, we obtain Nirenberg-Latham information, I^*(1) = I^{NL}. Thus, Nirenberg-Latham information does not give the correct information; instead, it simply provides a lower bound on the correct information, I^*(\beta), which is the maximum value with respect to \beta [5, 8]. The lower bound provided by Nirenberg-Latham information can be very loose, and the Nirenberg-Latham information can be negative when many cells are analyzed.
Theoretical evaluation of information I, I^*, and I^{NL}
We consider the problem where the mutual information is computed when stimulus s, which is a single variable, and a slightly different stimulus s + Δs are presented. We assume the prior probabilities of the stimuli, p(s) and p(s + Δs), are equal: p(s) = p(s + Δs) = 1/2. Neural responses evoked by the stimuli are denoted by r, which is considered here to be the neuron firing rate. When the difference between two stimuli is small, the conditional probability p(r|s + Δs) can be expanded with respect to Δs as p(r|s + Δs) = p(r|s) + p'(r|s)Δs + (1/2)p''(r|s)(Δs)^2 + ..., where ' represents differentiation with respect to s. Using the expansion, to leading order of Δs, we can write the mutual information I as

I = \frac{\Delta s^2}{8} \int dr \frac{(p'(r|s))^2}{p(r|s)},    (4)

where \int dr\, (p'(r|s))^2 / p(r|s) is the Fisher information. Thus, we can see that the mutual information is proportional to the Fisher information when Δs is small. Similarly, the correct information I^* for the mismatched decoders and the Nirenberg-Latham information I^{NL} can be written as
mismatched decoders and the Nirenberg-Latham information I N L can be written as
?Z
?2 ?Z
??1
?s2
p? (r|s)q ? (r|s)
p(r|s)(q ? (r|s))2
dr
dr
,
(5)
I? =
8
q(r|s)
q(r|s)2
? Z
!
? ?
?2
Z
?s2
q (r|s)
p? (r|s)q(r|s)
NL
I
=
? drp(r|s)
+ 2 dr
.
(6)
8
q(r|s)
q(r|s)
Taking into consideration the proportionality of the mutual information to the Fisher information, we can interpret that [\int dr\, p'(r|s) q'(r|s)/q(r|s)]^2 [\int dr\, p(r|s)(q'(r|s))^2/q(r|s)^2]^{-1} in Eq. 5 is a Fisher-information-like quantity for mismatched decoders.
Let us consider the case in which the encoding model p(r|s) obeys the Gaussian distribution

p(r|s) = \frac{1}{Z} \exp\left( -\frac{1}{2} (r - f(s))^T C^{-1} (r - f(s)) \right),    (7)
where T stands for the transpose operation, f(s) is the mean firing rates given stimulus s, and C is the covariance matrix. We consider an independent decoding model q(r|s) that ignores correlations:

q(r|s) = \frac{1}{Z_D} \exp\left( -\frac{1}{2} (r - f(s))^T C_D^{-1} (r - f(s)) \right),    (8)
0. If the Gaussian integral is performed for Eqs. 4-5, I, I ? , and I N L can be written as
?s2 ?T
f (s)C?1 f ? (s),
(9)
8
?
2
?s2 (f ?T (s)C?1
D f (s))
I? =
,
(10)
?1
?1
8 f ?T (s)CD CCD f ? (s)
?
?s2 ? ?T
?1 ?
?1 ?
?T
?f (s)C?1
(11)
INL =
D CCD f (s) + 2f (s)CD f (s) .
8
The correct information obtained by the independent decoder for the Gaussian model (Eq. 10) is
inversely proportional to the decoding error of s when the independent decoder is applied, which
was computed from the generalized Cram?er Rao bound by Wu et al. [14].
I=
As a simple example, we consider a uniform correlation model [1, 14] in which the covariance matrix C is given by C_{ij} = \sigma^2 [\delta_{ij} + c(1 - \delta_{ij})] and assume that the derivatives of the firing rates are uniform: that is, f'_i = f'. In this case, I, I^*, and I^{NL} can be computed using

I = \frac{\Delta s^2}{8} \frac{N f'^2}{\sigma^2 (Nc + 1 - c)},    (12)

I^* = \frac{\Delta s^2}{8} \frac{N f'^2}{\sigma^2 (Nc + 1 - c)},    (13)

I^{NL} = \frac{\Delta s^2}{8} \frac{(-c(N-1) + 1)\, N f'^2}{\sigma^2},    (14)

where N is the number of cells. We can see that I^* is equal to I, which means that information is not lost even if correlation is ignored in the decoding process. Figure 1A shows I^{NL}/I and I^*/I when the degree of correlation c is 0.01. As shown in Figure 1A, the difference between the correct information I^* and Nirenberg-Latham information I^{NL} is very large when the number of cells N is large. When N > (c+1)/c, I^{NL} is negative. This analysis showed that using Nirenberg-Latham information I^{NL} as a lower bound on the correct information I^* can lead to wrong conclusions, especially when many cells are analyzed.
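The following short numeric check (ours) evaluates Eqs. 9-11 directly for the uniform correlation model and reproduces the behavior of Figure 1A: I^*/I stays at 1 for all N, while I^{NL}/I decreases and becomes negative once N > (c+1)/c.

```python
# A minimal numeric check (ours) of Eqs. 9-11 in the uniform correlation model
# (the constant factor ds^2/8 is omitted since it cancels in the ratios).
import numpy as np

def ratios(N, c=0.01, sigma=1.0, fprime=1.0):
    C = sigma**2 * (c * np.ones((N, N)) + (1 - c) * np.eye(N))
    CD_inv = np.diag(1.0 / np.diag(C))
    f = np.full(N, fprime)                                        # uniform f'_i
    I = f @ np.linalg.solve(C, f)                                 # Eq. 9
    I_star = (f @ CD_inv @ f)**2 / (f @ CD_inv @ C @ CD_inv @ f)  # Eq. 10
    I_NL = -f @ CD_inv @ C @ CD_inv @ f + 2 * f @ CD_inv @ f      # Eq. 11
    return I_star / I, I_NL / I

for N in (2, 50, 200):
    print(N, ratios(N))   # I*/I = 1 everywhere; I^NL/I < 0 for N > (c+1)/c
```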
3 Analysis of information in population activities of ganglion cells

3.1 Methods
We analyzed the data obtained when N = 7 retinal ganglion cells were simultaneously recorded using a multielectrode array. The stimulus was a natural movie, which was 200 s long and repeated 45 times. We divided the movie into many short natural movies and considered them as the stimuli over which the information contained in neural activities is computed. For instance, when it was divided into 10-s-long natural movies, there were 20 stimuli. Figure 2A shows the response of the seven retinal ganglion cells to natural movies from 0 to 10 s in length. To apply information theoretic techniques, we first discretized time into small time bins Δτ and indicated whether a spike was emitted or not in each time bin with a binary variable: σ_i = 1 means that cell i spiked and σ_i = 0 means that it did not spike. We set the length of the time bin, Δτ, to 5 ms so that it was short enough to avoid two spikes falling into the same bin. In this way, the spike pattern of the ganglion cells was transformed into an N-letter binary word, σ = {σ_1, σ_2, ..., σ_N}, as shown in Figure 2B.
Figure 2: A: Raster plot of seven retinal ganglion cells responding to a natural movie. B: Transformation of spike trains into binary words.
Then, we determined the frequency with which a particular spike pattern σ was observed during each stimulus and estimated the conditional probability distribution p_data(σ|s) from the experimental data. Using these conditional probabilities, we evaluated the information contained in N-letter binary words σ.
Generally, the joint probability of N binary variables can be written as [9]

p_N(\sigma) = \frac{1}{Z} \exp\left( \sum_i \theta_i \sigma_i + \sum_{i<j} \theta_{ij} \sigma_i \sigma_j + \cdots + \theta_{12...N}\, \sigma_1 \sigma_2 \cdots \sigma_N \right).    (15)

This type of probability distribution is called a log-linear model. Because the number of parameters in a log-linear model is equal to the number of all possible configurations of an N-letter binary word σ, we can determine the values of the parameters so that the log-linear model p_N(σ) exactly matches the empirical probability distribution p_data(σ): that is, p_N(σ) = p_data(σ).
To compute the information for mismatched decoders, we construct simplified models of neural responses that partially match the empirical distribution p_data(σ). The simplest model is an "independent model" p_1(σ), in which only the average of each σ_i agrees with the experimental data: that is, ⟨σ_i⟩_{p_1(σ)} = ⟨σ_i⟩_{p_data(σ)}. There are many possible probability distributions that satisfy these constraints. In accordance with the maximum entropy principle [12], we choose the one that maximizes the entropy H = -\sum_\sigma p_1(\sigma) \log p_1(\sigma). The resulting maximum entropy distribution is

p_1(\sigma) = \frac{1}{Z_1} \exp\left( \sum_i \theta_i^{(1)} \sigma_i \right),    (16)
in which the model parameters θ^{(1)} are determined so that the constraints are satisfied. This model corresponds to a log-linear model in which all orders of correlation parameters {θ_{ij}, θ_{ijk}, ..., θ_{12...N}} are omitted. If we perform maximum likelihood estimation of the model parameters θ^{(1)} in the log-linear model, the result is that the average σ_i under the log-linear model equals the average σ_i found in the data: that is, ⟨σ_i⟩_{p_1(σ)} = ⟨σ_i⟩_{p_data(σ)}. This result is identical to the constraints of the maximum entropy model. Generally, the maximum entropy method is equivalent to maximum likelihood fitting of a log-linear model [6].
Similarly, we can consider a "second-order correlation model" p_2(σ), which is consistent with not only the averages of σ_i but also the averages of all products σ_i σ_j found in the data. Maximizing the entropy with the constraints ⟨σ_i⟩_{p_2(σ)} = ⟨σ_i⟩_{p_data(σ)} and ⟨σ_i σ_j⟩_{p_2(σ)} = ⟨σ_i σ_j⟩_{p_data(σ)}, we obtain

p_2(\sigma) = \frac{1}{Z_2} \exp\left( \sum_i \theta_i^{(2)} \sigma_i + \sum_{i<j} \theta_{ij}^{(2)} \sigma_i \sigma_j \right),    (17)
in which the model parameters θ^{(2)} are determined so that the constraints are satisfied. The procedure described above can also be used to construct a "Kth-order correlation model" p_K(σ).
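To illustrate the construction, here is a minimal sketch (ours) of fitting p_1 by exact gradient ascent on the log-likelihood for small N, which is equivalent to the maximum entropy moment matching described above; extending the feature vector with the products σ_iσ_j gives p_2, and so on up to p_K.

```python
# A minimal sketch (ours) of maximum entropy / log-linear fitting for small N,
# enumerating all 2^N binary words. Matching <sigma_i> is the maximum entropy
# condition for p_1 (Eq. 16); for p_1 alone, theta_i = log(m_i / (1 - m_i))
# in closed form, but the gradient loop generalizes to p_2 (Eq. 17), p_3, ...
import itertools
import numpy as np

def fit_independent_model(mean_sigma, lr=0.5, steps=2000):
    """mean_sigma: length-N array of <sigma_i> measured from data."""
    N = len(mean_sigma)
    states = np.array(list(itertools.product([0, 1], repeat=N)))  # (2^N, N)
    theta = np.zeros(N)
    for _ in range(steps):
        logits = states @ theta
        p = np.exp(logits - logits.max())
        p /= p.sum()
        theta += lr * (mean_sigma - p @ states)   # moment-matching gradient
    return theta, p

theta, p1 = fit_independent_model(np.array([0.10, 0.05, 0.20]))
```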
Figure 3: Dependence of the amount of information obtained by simplified decoders on the number of ganglion cells analyzed. The same spike data, obtained from retinal ganglion cells responding to a natural movie, were used to obtain the analysis results shown in panels A and B. A: 10-s-long natural movie. B: 100-ms-long natural movie.
If we substitute the simplified models of neural responses p_K(σ|s) into the mismatched decoding model q(σ|s) in Eq. 2, we can compute the amount of information that can be obtained when more than Kth-order correlations are ignored in the decoding,

I_K^*(\beta) = -\sum_\sigma p_N(\sigma) \log_2 \sum_s p(s)\, p_K(\sigma|s)^\beta + \sum_s p(s) \sum_\sigma p_N(\sigma|s) \log_2 p_K(\sigma|s)^\beta.    (18)

By evaluating the ratio of information, I_K^*/I, we can infer how many orders of correlation should be taken into account to extract enough information.
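As an illustration (ours) of how Eq. 18 could be evaluated in practice, the sketch below takes the empirical conditionals p_N(σ|s) and a simplified model p_K(σ|s) as arrays over all 2^N words and returns the ratio I_K^*/I; it mirrors the I^*(β) routine sketched after Eq. 2.

```python
# A minimal sketch (ours) of Eq. 18 over binary words. p_full and p_simple
# are (S, 2^N) arrays of p_N(sigma|s) and p_K(sigma|s); p_s is the (S,) prior.
import numpy as np
from scipy.optimize import minimize_scalar

def info_ratio(p_full, p_simple, p_s):
    p_sigma = p_s @ p_full                       # p_N(sigma)
    def I(q, beta):
        qb = np.clip(q, 1e-12, None) ** beta
        return (-np.sum(p_sigma * np.log2(p_s @ qb))
                + np.sum(p_s[:, None] * p_full * np.log2(qb)))
    I_opt = I(p_full, 1.0)                       # optimal decoder, Eq. 1
    res = minimize_scalar(lambda b: -I(p_simple, b),
                          bounds=(1e-3, 20.0), method="bounded")
    return -res.fun / I_opt                      # I*_K / I
```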
3.2 Results
First, we investigated how the ratio of information obtained by an independent model, I_1^*/I, and that obtained by a second-order correlation model, I_2^*/I, changed when the number of cells analyzed was changed. We set the length of the stimulus to 10 s. We could obtain 20 kinds of stimuli from a 200-s-long natural movie (see Methods). In previous studies, stimuli of comparable length (7 s for Nirenberg et al.'s study [10] and 20 s for Schneidman et al.'s study [12]) were used. When two neurons were analyzed, there were 21 possible combinations for choosing 2 cells out of 7 cells, which is the total number of cells simultaneously recorded. We computed the average value of I_K^*/I for K = 1, 2 over all possible combinations of cells. Figure 3A shows that I_1^*/I and I_2^*/I monotonically decreased when the number of cells was increased. A comparison between the correct information, I_1^*/I, and Nirenberg-Latham information, I_1^{NL}/I, where I_1^{NL} = I_1^*(\beta = 1), is shown in Figure 1B. When only two cells were considered, I_1^*/I exceeded 90%, which means that ignoring correlation leads to only a small loss of information. This is consistent with the result obtained by Nirenberg et al. [10]. However, when all cells (N = 7) were used in the analysis, I_1^*/I became only about 60%. Thus, correlation seems to be much more important for decoding when population activities are considered than when only two cells are considered. At least, we can say that qualitatively different things occur when large populations of cells are analyzed, as Schneidman et al. pointed out [12].
We should be careful about concluding from the results shown in Figure 3A that correlation is important for decoding. In this analysis, we considered 10-s-long stimuli and assumed stationarity during each stimulus. By stationarity we mean that we assumed spikes are generated by a single process that can be described by a single conditional distribution p(σ|s). Because natural movies change much more rapidly and our visual system has much higher time resolution than 10 s [13], we also considered shorter stimuli. In Figure 3B, we computed I_1^*/I and I_2^*/I over 100-ms-long natural movies. In this case, we could obtain 2000 stimuli from the 200-s-long natural movie. When the length of each stimulus was 100 ms, no spikes occurred while some stimuli were presented. We removed those stimuli and used the remaining stimuli for the analysis. In this case, the information obtained by the independent model, I_1^*, was more than 90% of I even when all cells (N = 7) were considered. Although 100 ms may still be too long to be regarded as a single process, the result shown in Figure 3B reflects a situation that our brain has to deal with, one that is more realistic than that reflected in Figure 3A. Figure 4A shows the dependence of the information obtained by simplified decoders on the length of the stimulus. In this analysis, we changed the length of the stimulus from 100 ms to 10 s and computed I_1^*/I and I_2^*/I for the activities of N = 7 cells. We also analyzed additional experimental data obtained when N = 6 retinal ganglion cells were simultaneously recorded from
Figure 4: Dependence of the amount of information obtained by simplified decoders on the length of stimuli. The stimulus was the same natural movie for both panels, but spike data obtained from the retinas of different salamanders were used in panels A and B. A: Seven simultaneously recorded ganglion cells. B: Six simultaneously recorded ganglion cells. C: Artificial spike data generated according to the firing rates shown in Figure 5A.
Figure 5: Firing rates of two model cells. Rate of cell #1 shown in top panel; rate of cell #2 is shown
in bottom panel. A: Firing rates from 0 to 2 s. B: Firing rates (solid line) and mean firing rates
(dashed line) when stimulus was 1 s long. C: Firing rates (solid line) and mean firing rates (dashed
line) when stimulus was 500 ms long.
another salamander retina. The same 200-s-long natural movie was used as a stimulus for Figure 4B
as for Figure 4A, and the activities of N = 6 cells were analyzed. Figure 4B shows the result. We
can clearly see the same tendency as shown in Figures 4A and B: the amount of information decoded
by the simplified decoders monotonically increased as the length of the stimulus was shortened.
To clarify the reason the correlation becomes less important as the stimulus is shortened, we used
the toy model shown in Figure 5. We considered the case in which two cells fire independently
in accordance with a Poisson process and performed an analysis similar to the one we did for the
actual spike data. We used simulated spike data for the two cells generated in accordance with the
firing rates shown in Figure 5A. For a 2-s stimulus, the firing rates change sinusoidally with time.
We divided the 2-s-long stimulus into two 1-s-long stimuli, s_1 and s_2, as shown in Figure 5B. Then, we computed the mutual information I and the information obtained by the independent model I_1^* over s_1 and s_2. Because the two cells fired independently, there were essentially no correlations between the two cells. However, there was pseudo correlation due to the assumption of stationarity for the dynamically changing stimulus. The pseudo correlation was high for s_1 and low for s_2. This means that "correlation" plays an important role in discriminating the two stimuli, s_1 and s_2. In contrast, the mean firing rates of the two cells during each stimulus were equal for s_1 and s_2. Therefore, if the stimulus is 1 s long, we cannot discriminate the two stimuli by using the independent model; that is, I_1^* = 0. We also considered the case in which the stimulus was 0.5 s long, as shown in Figure 5C. In this case, pseudo correlations again appeared but there was a significant difference in the mean firing rates between the stimuli. Thus, the independent model can be used to extract almost all the information. The dependence of I_1^*/I on the stimulus length is shown in Figure 4C. Behaviors
similar to those represented in Figure 4C were also observed in the analysis of the actual spike data for retinal ganglion cells (Figures 4A and 4B). Even if we observe that correlation carries a significantly large portion of the information for stimuli that are long compared with the speed of change in the firing rates, this may simply be caused by meaningless pseudo correlation. To assess the role of correlation in information processing, the stimuli used should be short enough that the neural responses to them can be regarded as generated by a single process.
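The pseudo-correlation effect of the toy model is easy to reproduce in simulation; the sketch below (ours, with hypothetical rate values) draws two independent Poisson cells with anti-phase sinusoidal rates and shows that pooling bins across a 1-s window yields a nonzero correlation coefficient even though the cells never interact.

```python
# A minimal simulation (ours) of pseudo correlation from assuming stationarity
# over a window in which the firing rates actually vary.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.005, 2.0
t = np.arange(0.0, T, dt)
rate1 = 20.0 * (1 + np.sin(np.pi * t))          # Hz, hypothetical
rate2 = 20.0 * (1 - np.sin(np.pi * t))          # anti-phase
trials = 500
s1 = rng.random((trials, t.size)) < rate1 * dt  # independent Poisson spikes
s2 = rng.random((trials, t.size)) < rate2 * dt

half = t.size // 2                              # pool the first 1 s as "s1"
x = s1[:, :half].ravel().astype(float)
y = s2[:, :half].ravel().astype(float)
print(np.corrcoef(x, y)[0, 1])  # nonzero (negative here): pseudo correlation
```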
4 Summary and Discussion
We described a general framework for investigating how far the decoding process in the brain can
be simplified. We computed the amount of information that can be extracted by using simplified
decoders constructed using a maximum entropy model, i.e., mismatched decoders. We showed
that more than 90% of the information encoded in retinal ganglion cell activities can be decoded
by using an independent model that ignores correlation. Our results imply that the brain uses a
simplified decoding strategy in which correlation is ignored.
When we computed the information obtained by the independent model, we regarded a 100-ms-long
natural movie as one stimulus. However, when we considered stimuli that were long compared with
the speed of the change in the firing rates as one stimulus, correlation carried a large portion of
information. This is due to pseudo correlation, which is observed if stationarity is assumed for long
durations. The human visual system can process visual information in less than 150 ms [13]. We
should set the length of the stimulus appropriately by taking the time resolution of our visual system
into account.
Our results do not imply that no kind of correlation carries much information, because we dealt only with spikes correlated within a 5-ms time bin. In our analysis, we did not analyze the
correlation on a longer time scale, which can be observed in the activities of retinal ganglion cells
[7]. We also did not investigate the information carried by the relative timing of spikes [3]. Further
investigations are needed for these types of correlation. Our approach of comparing the mutual
information with the information obtained by simplified decoders can also be used for studying
other types of correlations.
References
[1] Abbott, L. F., & Dayan, P. (1999). Neural Comput., 11, 91-101.
[2] Deneve, S., Latham, P. E., & Pouget, A. (1999). Nature Neurosci., 2, 740-745.
[3] Gollish, S., & Meister, M. (2008). Science, 319, 1108-1111.
[4] Jazayeri, M. & Movshon, J. A. (2006). Nature Neurosci., 9, 690-696.
[5] Latham, P. E., & Nirenberg, S. (2005). J. Neurosci., 25, 5195-5206.
[6] MacKay, D. (2003). Information Theory, Inference and Learning Algorithms (Cambridge Univ.
Press, Cambridge, England).
[7] Meister, M., & Berry, M. J. II (1999). Neuron, 22, 435-450.
[8] Merhav, N., Kaplan, G., Lapidoth, A., & Shamai Shitz, S. (1994). IEEE Trans. Inform. Theory,
40, 1953-1967.
[9] Nakahara, H., & Amari, S. (2002). Neural Comput., 14, 2269-2316.
[10] Nirenberg, S., Carcieri, S. M., Jacobs, A. L., & Latham, P. E. (2001). Nature, 411, 698-701.
[11] Nirenberg, S., & Latham, P. (2003). Proc. Natl. Acad. Sci. USA, 100, 7348-7353.
[12] Schneidman, E., Berry, M. J. II, Segev, R., & Bialek. W. (2006). Nature, 440, 1007-1012.
[13] Thorpe, S., Fize, D., & Marlot, C. (1996). Nature, 381, 520-522.
[14] Wu, S., Nakahara, H., & Amari, S. (2001). Neural Comput., 13, 775-797.
2,849 | 3,583 | A Scalable Hierarchical Distributed Language Model
Andriy Mnih
Department of Computer Science
University of Toronto
[email protected]
Geoffrey Hinton
Department of Computer Science
University of Toronto
[email protected]
Abstract
Neural probabilistic language models (NPLMs) have been shown to be competitive with and occasionally superior to the widely-used n-gram language models.
The main drawback of NPLMs is their extremely long training and testing times.
Morin and Bengio have proposed a hierarchical language model built around a
binary tree of words, which was two orders of magnitude faster than the non-hierarchical model it was based on. However, it performed considerably worse
than its non-hierarchical counterpart in spite of using a word tree created using
expert knowledge. We introduce a fast hierarchical language model along with
a simple feature-based algorithm for automatic construction of word trees from
the data. We then show that the resulting models can outperform non-hierarchical
neural models as well as the best n-gram models.
1 Introduction
Statistical language modelling is concerned with building probabilistic models of word sequences.
Such models can be used to discriminate probable sequences from improbable ones, a task important
for performing speech recognition, information retrieval, and machine translation. The vast majority
of statistical language models are based on the Markov assumption, which states that the distribution of a word depends only on some fixed number of words that immediately precede it. While
this assumption is clearly false, it is very convenient because it reduces the problem of modelling
the probability distribution of word sequences of arbitrary length to the problem of modelling the
distribution on the next word given some fixed number of preceding words, called the context. We
will denote this distribution by P(w_n | w_{1:n-1}), where w_n is the next word and w_{1:n-1} is the context (w_1, ..., w_{n-1}).
n-gram language models are the most popular statistical language models due to their simplicity
and surprisingly good performance. These models are simply conditional probability tables for
P(w_n | w_{1:n-1}), estimated by counting the n-tuples in the training data and normalizing the counts
appropriately. Since the number of n-tuples is exponential in n, smoothing the raw counts is essential
for achieving good performance. There is a large number of smoothing methods available for n-gram
models [4]. In spite of the sophisticated smoothing methods developed for them, n-gram models are
unable to take advantage of large contexts since the data sparsity problem becomes extreme. The
main reason for this behavior is the fact that classical n-gram models are essentially conditional
probability tables where different entries are estimated independently of each other. These models
do not take advantage of the fact that similar words occur in similar contexts, because they have no
concept of similarity. Class-based n-gram models [3] aim to address this issue by clustering words
and/or contexts into classes based on their usage patterns and then using this class information to
improve generalization. While it can improve n-gram performance, this approach introduces a very
rigid kind of similarity, since each word typically belongs to exactly one class.
An alternative and much more flexible approach to counteracting the data sparsity problem is to
represent each word using a real-valued feature vector that captures its properties, so that words
used in similar contexts will have similar feature vectors. Then the conditional probability of the
next word can be modelled as a smooth function of the feature vectors of the context words and the
next word. This approach provides automatic smoothing, since for a given context similar words
are now guaranteed to be assigned similar probabilities. Similarly, similar contexts are now likely to
have similar representations resulting in similar predictions for the next word. Most models based
on this approach use a feed-forward neural network to map the feature vectors of the context words
to the distribution for the next word (e.g. [12], [5], [9]). Perhaps the best known model of this type is
the Neural Probabilistic Language Model [1], which has been shown to outperform n-gram models
on a dataset of about one million words.
2 The hierarchical neural network language model
The main drawback of the NPLM and other similar models is that they are very slow to train and
test [10]. Since computing the probability of the next word requires explicitly normalizing over all
words in the vocabulary, the cost of computing the probability of the given next word and the cost of
computing the full distribution over the next word are virtually the same: they take time linear in the
vocabulary size. Since computing the exact gradient in such models requires repeatedly computing
the probability of the next word given its context and updating the model parameters to increase that
probability, training time is also linear in the vocabulary size. Typical natural language datasets have
vocabularies containing tens of thousands of words, which means that training NPLM-like models
the straightforward way is usually too computationally expensive in practice. One way to speed
up the process is to use a specialized importance sampling procedure to approximate the gradients
required for learning [2]. However, while this method can speed up training substantially, testing
remains computationally expensive.
The hierarchical NPLM introduced in [10], provides an exponential reduction in time complexity of
learning and testing as compared to the NPLM. It achieves this reduction by replacing the unstructured vocabulary of the NPLM by a binary tree that represents a hierarchical clustering of words in
the vocabulary. Each word corresponds to a leaf in the tree and can be uniquely specified by the
path from the root to that leaf. If N is the number of words in the vocabulary and the tree is balanced, any word can be specified by a sequence of O(log N) binary decisions indicating which of the two children of the current node is to be visited next. This setup replaces one N-way choice by a sequence of O(log N) binary choices. In probabilistic terms, one N-way normalization is replaced by a sequence of O(log N) local (binary) normalizations. As a result, a distribution over words in
the vocabulary can be specified by providing the probability of visiting the left child at each of the
nodes. In the hierarchical NPLM, these local probabilities are computed by giving a version of the
NPLM the feature vectors for the context words as well as a feature vector for the current node as
inputs. The probability of the next word is then given by the probability of making a sequence of
binary decisions that corresponds to the path to that word.
When applied to a dataset of about one million words, this model outperformed class-based trigrams,
but performed considerably worse than the NPLM [10]. The hierarchical model however was more
than two orders of magnitude faster than the NPLM. The main limitation of this work was the
procedure used to construct the tree of words for the model. The tree was obtained by starting
with the WordNet IS-A taxonomy and converting it into a binary tree through a combination of
manual and data-driven processing. Our goal is to replace this procedure by an automated method
for building trees from the training data without requiring expert knowledge of any kind. We will
also explore the performance benefits of using trees where each word can occur more than once.
3 The log-bilinear model
We will use the log-bilinear language model (LBL) [9] as the foundation of our hierarchical model
because of its excellent performance and simplicity. Like virtually all neural language models, the
LBL model represents each word with a real-valued feature vector. We will denote the feature vector
for word w by rw and refer to the matrix containing all these feature vectors as R. To predict the
next word w_n given the context w_{1:n-1}, the model computes the predicted feature vector \hat{r} for the next word by linearly combining the context word feature vectors:

\hat{r} = \sum_{i=1}^{n-1} C_i r_{w_i},   (1)

where C_i is the weight matrix associated with the context position i. Then the similarity between the
predicted feature vector and the feature vector for each word in the vocabulary is computed using
the inner product. The similarities are then exponentiated and normalized to obtain the distribution
over the next word:
P(w_n = w | w_{1:n-1}) = \frac{\exp(\hat{r}^T r_w + b_w)}{\sum_j \exp(\hat{r}^T r_j + b_j)}.   (2)

Here b_w is the bias for word w, which is used to capture the context-independent word frequency.
Note that the LBL model can be interpreted as a special kind of a feed-forward neural network
with one linear hidden layer and a softmax output layer. The inputs to the network are the feature
vectors for the context words, while the matrix of weights from the hidden layer to the output layer
is simply the feature vector matrix R. The vector of activities of the hidden units corresponds to the
the predicted feature vector for the next word. Unlike the NPLM, the LBL model needs to compute
the hidden activities only once per prediction and has no nonlinearities in its hidden layer. In spite
of its simplicity the LBL model performs very well, outperforming both the NPLM and the n-gram
models on a fairly large dataset [9].
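As a concrete illustration, the following sketch (our own, not the authors' code; all array names are assumptions) computes the LBL distribution over the next word from Eqs. 1 and 2:

    import numpy as np

    def lbl_next_word_distribution(context_ids, R, C, b):
        # context_ids: indices of the n-1 context words
        # R: (vocab_size, D) matrix of word feature vectors
        # C: list of n-1 (D, D) position-specific weight matrices
        # b: (vocab_size,) vector of per-word biases
        r_hat = sum(C[i] @ R[w] for i, w in enumerate(context_ids))  # Eq. 1
        scores = R @ r_hat + b          # inner products with every word
        scores -= scores.max()          # subtract max for numerical stability
        p = np.exp(scores)
        return p / p.sum()              # Eq. 2: normalized distribution

Note that both the scores and the normalization are linear in the vocabulary size, which is exactly the cost the hierarchical model below avoids.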
4 The hierarchical log-bilinear model
Our hierarchical language model is based on the hierarchical model from [10]. The distinguishing
features of our model are the use of the log-bilinear language model for computing the probabilities
at each node and the ability to handle multiple occurrences of each word in the tree. Note that the
idea of using multiple word occurrences in a tree was proposed in [10], but it was not implemented.
The first component of the hierarchical log-bilinear model (HLBL) is a binary tree with words at its
leaves. For now, we will assume that each word in the vocabulary is at exactly one leaf. Then each
word can be uniquely specified by a path from the root of the tree to the leaf node the word is at.
The path itself can be encoded as a binary string d of decisions made at each node, so that d_i = 1
corresponds to the decision to visit the left child of the current node. For example, the string ?10?
corresponds to a path that starts at the root, visits its left child, and then visits the right child of that
child. This allows each word to be represented by a binary string which we will call a code.
The second component of the HLBL model is the probabilistic model for making the decisions
at each node, which in our case is a modified version of the LBL model. In the HLBL model,
just like in its non-hierarchical counterpart, context words are represented using real-valued feature
vectors. Each of the non-leaf nodes in the tree also has a feature vector associated with it that is
used for discriminating the words in the left subtree form the words in the right subtree of the node.
Unlike the context words, the words being predicted are represented using their binary codes that are
determined by the word tree. However, this representation is still quite flexible, since each binary
digit in the code encodes a decision made at a node, which depends on that node?s feature vector.
In the HLBL model, the probability of the next word being w is the probability of making the
sequences of binary decisions specified by the word?s code, given the context. Since the probability
of making a decision at a node depends only on the predicted feature vector, determined by the
context, and the feature vector for that node, we can express the probability of the next word as a
product of probabilities of the binary decisions:
P(w_n = w | w_{1:n-1}) = \prod_i P(d_i | q_i, w_{1:n-1}),   (3)

where d_i is the i-th digit in the code for word w, and q_i is the feature vector for the i-th node in the path
corresponding to that code. The probability of each decision is given by
P(d_i = 1 | q_i, w_{1:n-1}) = \sigma(\hat{r}^T q_i + b_i),   (4)

where \sigma(x) is the logistic function and \hat{r} is the predicted feature vector computed using Eq. 1. b_i in the equation is the node's bias that captures the context-independent tendency to visit the left child when leaving this node.
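To make Eqs. 3 and 4 concrete, here is a minimal sketch (ours, with assumed data structures: `code` is the word's binary string and `path_nodes` holds the indices of the internal nodes on its path):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hlbl_word_prob(code, path_nodes, r_hat, Q, node_bias):
        # code:       binary decisions d_i on the path to the word's leaf
        # path_nodes: indices of the internal nodes on that path
        # r_hat:      predicted feature vector from Eq. 1
        # Q:          (num_nodes, D) matrix of node feature vectors
        # node_bias:  (num_nodes,) vector of node biases
        prob = 1.0
        for d, node in zip(code, path_nodes):
            p_left = sigmoid(r_hat @ Q[node] + node_bias[node])   # Eq. 4
            prob *= p_left if d == 1 else (1.0 - p_left)          # Eq. 3
        return prob

The cost here is linear in the code length rather than in the vocabulary size.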
The definition of P(w_n = w | w_{1:n-1}) can be extended to multiple codes per word by including a summation over all codes for w as follows:

P(w_n = w | w_{1:n-1}) = \sum_{d \in D(w)} \prod_i P(d_i | q_i, w_{1:n-1}),   (5)
where D(w) is a set of codes corresponding to word w. Allowing multiple codes per word can allow
better prediction of words that have multiple senses or multiple usage patterns. Using multiple codes
per word also makes it easy to combine several separate word hierarchies into a single one to
reflect the fact that no single hierarchy can express all the relationships between words.
Using the LBL model instead of the NPLM for computing the local probabilities allows us to avoid
computing the nonlinearities in the hidden layer which makes our hierarchical model faster at making predictions than the hierarchical NPLM. More importantly, the hierarchical NPLM needs to
compute the hidden activities once for each of the O(log N ) decisions, while the HLBL model
computes the predicted feature vector just once per prediction. However, the time complexity of
computing the probability for a single binary decision in an LBL model is still quadratic in the
feature vector dimensionality D, which might make the use of high-dimensional feature vectors
too computationally expensive. We make the time complexity linear in D by restricting the weight
matrices C_i to be diagonal.[Footnote 1] Note that for a context of size 1, this restriction does not reduce the
representational power of the model because the context weight matrix C1 can be absorbed into the
word feature vectors. And while this restriction does make the models with larger contexts slightly
less powerful, we believe that this loss is more than compensated for by much faster training times
which allow using more complex trees.
HLBL models can be trained by maximizing the (penalized) log-likelihood. Since the probability of
the next word depends only on the context weights, the feature vectors of the context words, and the
feature vectors of the nodes on the paths from the root to the leaves containing the word in question,
only a (logarithmically) small fraction of the parameters need to be updated for each training case.
5 Hierarchical clustering of words
The first step in training a hierarchical language model is constructing a binary tree of words for the
model to use. This can be done by using expert knowledge, data-driven methods, or a combination of
the two. For example, in [10] the tree was constructed from the IS-A taxonomy DAG from WordNet
[6]. After preprocessing the taxonomy by hand to ensure that each node had only one parent, datadriven hierarchical binary clustering was performed on the children of the nodes in the taxonomy
that had more than two children, resulting in a binary tree.
We are interested in using a pure learning approach applicable in situations where the expert knowledge is unavailable. It is also not clear that using expert knowledge, even when it is available,
will lead to superior performance. Hierarchical binary clustering of words based on the their usage
statistics is a natural choice for generating binary trees of words automatically. This task is similar
to the task of clustering words into classes for training class-based n-gram models, for which a large
number of algorithms has been proposed. We considered several of these algorithms before deciding to use our own algorithm which turned out to be surprisingly effective in spite of its simplicity.
However, we will mention two existing algorithms that might be suitable for producing binary word
hierarchies. Since we wanted an algorithm that scaled well to large vocabularies, we restricted our
attention to the top-down hierarchical clustering algorithms, as they tend to scale better than their
agglomerative counterparts [7]. The algorithm from [8] produces exactly the kind of binary trees
we need, except that its time complexity is cubic in the vocabulary size.[Footnote 2] We also considered the
distributional clustering algorithm [11] but decided not to use it because of the difficulties involved
in using contexts of more than one word for clustering. This problem is shared by most n-gram
clustering algorithms, so we will describe it in some detail. Since we would like to cluster words for
easy prediction of the next word based on its context, it is natural to describe each word in terms of
the contexts that can precede it. For example, for a single-word context one such description is the
[Footnote 1: Thus the feature vector for the next word can now be computed as \hat{r} = \sum_{i=1}^{n-1} c_i \odot r_{w_i}, where c_i is a vector of context weights for position i and \odot denotes the elementwise product of two vectors.]
[Footnote 2: More precisely, the time complexity of the algorithm is cubic in the number of the frequent words, but that is still too slow for our purposes.]
distribution of words that precede the word of interest in the training data. The problem becomes
apparent when we consider using larger contexts: the number of contexts that can potentially precede a word grows exponentially in the context size. This is the very same data sparsity problem that
affects the n-gram models, which is not surprising, since we are trying to describe words in terms of
exponentially large (normalized) count vectors. Thus, clustering words based on such large-context
representations becomes non-trivial due to the computational cost involved as well as the statistical
difficulties caused by the sparsity of the data.
We avoid these difficulties by operating on low-dimensional real-valued word representations in our
tree-building procedure. Since we need to train a model to obtain word feature vectors, we perform
the following bootstrapping procedure: we generate a random binary tree of words, train an HLBL
model based on it, and use the distributed representations it learns to represent words when building
the word tree.
Since each word is represented by a distribution over contexts it appears in, we need a way of
compressing such a collection of contexts down to a low-dimensional vector. After training the
HLBL model, we summarize each context w1:n?1 with the predicted feature vector produced from
it using Eq. 1. Then, we condense the distribution of contexts that precede a given word into a
feature vector by computing the expectation of the predicted representation w.r.t. that distribution.
Thus, for the purposes of clustering each word is represented by its average predicted feature vector.
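A sketch of this summarization step (ours; `predict_r_hat` stands for Eq. 1 applied by the bootstrapped HLBL model, and `contexts_by_word` is an assumed mapping from each word to the contexts that precede it in the training data):

    import numpy as np

    def word_cluster_features(contexts_by_word, predict_r_hat):
        # Average the predicted feature vectors over the contexts that
        # precede each word; the result is that word's clustering feature.
        return {w: np.mean([predict_r_hat(c) for c in contexts], axis=0)
                for w, contexts in contexts_by_word.items()}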
After computing the low-dimensional real-valued feature vectors for words, we recursively apply a
very simple clustering algorithm to them. At each step, we fit a mixture of two Gaussians to the
feature vectors and then partition them into two subsets based on the responsibilities of the two
mixture components for them. We then partition each of the subsets using the same procedure, and
so on. The recursion stops when the current set contains only two words. We fit the mixtures by
running the EM algorithm for 10 steps.[Footnote 3] The algorithm updates both the means and the spherical
covariances of the components. Since the means of the components are initialized based on a random
partitioning of the feature vectors, the algorithm is not deterministic and will produce somewhat
different clusterings on different runs. One appealing property of this algorithm is that the running
time of each iteration is linear in the vocabulary size, which is a consequence of representing words
using feature vectors of fixed dimensionality. In our experiments, the algorithm took only a few
minutes to build a hierarchy for a vocabulary of nearly 18000 words based on 100-dimensional
feature vectors.
The goal of an algorithm for generating trees for hierarchical language models is to produce trees
that are well-supported by the data and are reasonably well-balanced so that the resulting models
generalize well and are fast to train and test. To explore the trade-off between these two requirements, we tried several splitting rules in our tree-building algorithm. The rules are based on the
observation that the responsibility of a component for a datapoint can be used as a measure of confidence about the assignment of the datapoint to the component. Thus, when the responsibilities of
both components for a datapoint are close to 0.5, we cannot be sure that the datapoint should be in
one component but not the other.
Our simplest rule aims to produce a balanced tree at any cost. It sorts the responsibilities and
splits the words into two disjoint subsets of equal size based on the sorted order. The second rule
makes splits well-supported by the data even if that results in an unbalanced tree. It achieves that
by assigning the word to the component with the higher responsibility for the word. The third
and the most sophisticated rule is an extension of the second rule, modified to assign a point to
both components whenever both responsibilities are within ε of 0.5, for some pre-specified ε. This rule is designed to produce multiple codes for words that are difficult to cluster. We will refer to the algorithms that use these rules as BALANCED, ADAPTIVE, and ADAPTIVE(ε) respectively.
Finally, as a baseline for comparison with the above algorithms, we will use an algorithm that
generates random balanced trees. It starts with a random permutation of the words and recursively
builds the left subtree based on the first half of the words and the right subtree based on the second
half of the words. We will call this algorithm RANDOM.
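The sketch below (ours; it uses scikit-learn's GaussianMixture for brevity and is only one plausible rendering of the procedure) shows one level of the recursion with the ADAPTIVE(ε) rule; setting eps = 0 recovers plain ADAPTIVE:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def split_words(words, features, eps):
        # Fit a two-component spherical Gaussian mixture with 10 EM steps.
        gmm = GaussianMixture(n_components=2, covariance_type='spherical',
                              max_iter=10).fit(features)
        resp = gmm.predict_proba(features)[:, 0]  # responsibilities of comp. 0
        # Words whose responsibilities lie within eps of 0.5 are assigned
        # to both subsets, which gives them multiple codes in the tree.
        left = [w for w, r in zip(words, resp) if r > 0.5 - eps]
        right = [w for w, r in zip(words, resp) if r < 0.5 + eps]
        return left, right

The recursion is then applied to each subset until a subset contains only two words.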
[Footnote 3: Running EM for more than 10 steps did not make a significant difference in the quality of the resulting trees.]
Table 1: Trees of words generated by the feature-based algorithm. The mean code length is the sum
of lengths of codes associated with a word, averaged over the distribution of the words in the training
data. The run-time complexity of the hierarchical model is linear in the mean code length of the tree
used. The mean number of codes per word refers to the number of codes per word averaged over the
training data distribution. Since each non-leaf node in a tree has its own feature vector, the number
of free parameters associated with the tree is linear in this quantity.
Tree label   Generating algorithm   Mean code length   Mean number of codes per word   Number of non-leaf nodes
T1           RANDOM                 14.2               1.0                             17963
T2           BALANCED               14.3               1.0                             17963
T3           ADAPTIVE               16.1               1.0                             17963
T4           ADAPTIVE(0.25)         24.2               1.3                             22995
T5           ADAPTIVE(0.4)          29.0               1.7                             30296
T6           ADAPTIVE(0.4) × 2      69.1               3.4                             61014
T7           ADAPTIVE(0.4) × 4      143.2              6.8                             121980
Table 2: The effect of the feature dimensionality and the word tree used on the test set perplexity of
the model.
Feature dimensionality   Perplexity using a random tree   Perplexity using a non-random tree   Reduction in perplexity
25                       191.6                            162.4                                29.2
50                       166.4                            141.7                                24.7
75                       156.4                            134.8                                21.6
100                      151.2                            131.3                                19.9
6 Experimental results
We compared the performance of our models on the APNews dataset containing the Associated
Press news stories from 1995 and 1996. The dataset consists of a 14 million word training set,
a 1 million word validation set, and 1 million word test set. The vocabulary size for this dataset
is 17964. We chose this dataset because it had already been used to compare the performance of
neural models to that of n-gram models in [1] and [9], which allowed us to compare our results to
the results in those papers. Except for where stated otherwise, the models used for the experiments
used 100 dimensional feature vectors and a context size of 5. The details of the training procedure
we used are given in the appendix. All models were compared based on their perplexity score on
the test set.
We started by training a model that used a tree generated by the RANDOM algorithm (tree T1 in
Table 1). The feature vectors learned by this model were used to build a tree using the BALANCED
algorithm (tree T2). We then trained models of various feature vector dimensionality on each of
these trees to see whether a highly expressive model can compensate for using a poorly constructed
tree. The test scores for the resulting models are given in Table 2. As can be seen from the scores,
using a non-random tree results in much better model performance. Though the gap in performance
can be reduced by increasing the dimensionality of feature vectors, using a non-random tree drastically improves performance even for the model with 100-dimensional feature vectors. It should be
noted however, that models that use the random tree are not entirely hopeless. For example, they
outperform the unigram model which achieved the perplexity of 602.0 by a very large margin. This
suggests that the HLBL architecture is sufficiently flexible to make effective use of a random tree
over words.
Since increasing the feature dimensionality beyond 100 did not result in a substantial reduction in
perplexity, we used 100-dimensional feature vectors for all of our models in the following experiments. Next we explored the effect of the tree building algorithm on the performance of the resulting
HLBL model. To do that, we used the RANDOM, BALANCED, and ADAPTIVE algorithms to
generate one tree each. The ADAPTIVE(ε) algorithm was used to generate two trees: one with ε set
Table 3: Test set perplexity results for the hierarchical LBL models. All the distributed models in the comparison used 100-dimensional feature vectors and a context size of 5. LBL is the non-hierarchical log-bilinear model. KNn is a Kneser-Ney n-gram model. The scores for LBL, KN3, and KN5 are from [9]. The timing for LBL is based on our implementation of the model.

Model type   Tree used   Tree generating algorithm   Perplexity   Minutes per epoch
HLBL         T1          RANDOM                      151.2        4
HLBL         T2          BALANCED                    131.3        4
HLBL         T3          ADAPTIVE                    127.0        4
HLBL         T4          ADAPTIVE(0.25)              124.4        6
HLBL         T5          ADAPTIVE(0.4)               123.3        7
HLBL         T6          ADAPTIVE(0.4) × 2           115.7        16
HLBL         T7          ADAPTIVE(0.4) × 4           112.1        32
LBL          -           -                           117.0        6420
KN3          -           -                           129.8        -
KN5          -           -                           123.2        -
to 0.25 and the other with ε set to 0.4. We then generated a 2× overcomplete tree by running the ADAPTIVE(ε = 0.4) algorithm twice and creating a tree with a root node that had the two generated trees as its subtrees. Since the ADAPTIVE(ε) algorithm involves some randomization we tried to improve the model performance by allowing the model to choose dynamically between two possible clusterings. Finally, we generated a 4× overcomplete tree using the same approach. Table 1 lists the generated trees as well as some statistics for them. Note that trees generated using ADAPTIVE(ε) with ε > 0 result in models with more parameters due to the greater number of tree-nodes and thus
tree-node feature vectors, as compared to trees generated using methods producing one code/leaf
per word.
Table 3 shows the test set perplexities and time per epoch for the resulting models along with the
perplexities for models from [9]. The results show that the performance of the HLBL models based
on non-random trees is comparable to that of the n-gram models. As expected, building word trees
adaptively improves model performance. The general trend that emerges is that bigger trees tend to
lead to better performing models. For example, a model based on a single tree produced using the
ADAPTIVE(0.4) algorithm, performs as well as the 5-gram but not as well as the non-hierarchical
LBL model. However, using a 2? overcomplete tree generated using the same algorithm results in a
model that outperforms both the n-gram models and the LBL model, and using a 4? overcomplete
tree leads to a further reduction in perplexity. The time-per-epoch statistics reported for the neural
models in Table 3 shows the great speed advantage of the HLBL models over the LBL model.
Indeed, the slowest of our HLBL models is over 200 times faster than the LBL model.
7 Discussion and future work
We have demonstrated that a hierarchical neural language model can actually outperform its non-hierarchical counterparts and achieve state-of-the-art performance. The key to making a hierarchical
model perform well is using a carefully constructed hierarchy over words. We have presented a
simple and fast feature-based algorithm for automatic construction of such hierarchies. Creating
hierarchies in which every word occurred more than once was essential to getting the models to
perform better.
An inspection of trees generated by our adaptive algorithm showed that the words with the largest
numbers of codes (i.e. the word that were replicated the most) were not the words with multiple
distinct senses. Instead, the algorithm appeared to replicate the words that occurred relatively infrequently in the data and were therefore difficult to cluster. The failure to use multiple codes for
words with several very different senses is probably a consequence of summarizing the distribution
over contexts with a single mean feature vector when clustering words. The "sense multimodality"
of context distributions would be better captured by using a small set of feature vectors found by
clustering the contexts.
Finally, since our tree building algorithm is based on the feature vectors learned by the model, it
is possible to periodically interrupt training of such a model to rebuild the word tree based on the
feature vectors provided by the model being trained. This modified training procedure might produce
better models by allowing the word hierarchy to adapt to the probabilistic component of the model
and vice versa.
Appendix: Details of the training procedure
The models have been trained by maximizing the log-likelihood using stochastic gradient ascent.
All model parameters other than the biases were initialized by sampling from a Gaussian of small
variance. The biases for the tree nodes were initialized so that the distribution produced by the model
with all the non-bias parameters set to zero matched the base rates of the words in the training set.
Models were trained using the learning rate of 10^{-3} until the perplexity on the validation set started to increase. Then the learning rate was reduced to 3 × 10^{-5} and training was resumed until the validation perplexity started increasing again. All model parameters were regularized using a small L2 penalty.
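In outline, the schedule corresponds to the following loop (a sketch under assumptions: `model` is taken to expose an `sgd_epoch` update and a validation `perplexity` method, neither of which is specified in the paper):

    def train_with_schedule(model, train_data, valid_data):
        for learning_rate in (1e-3, 3e-5):   # the two stages described above
            best = float('inf')
            while True:
                model.sgd_epoch(train_data, learning_rate)
                ppl = model.perplexity(valid_data)
                if ppl > best:   # validation perplexity started to increase
                    break
                best = ppl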
Acknowledgments
We thank Martin Szummer for his comments on a draft of this paper. This research was supported
by NSERC and CFI. GEH is a fellow of the Canadian Institute for Advanced Research.
References
[1] Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137-1155, 2003.
[2] Yoshua Bengio and Jean-Sébastien Senécal. Quick training of probabilistic neural nets by importance sampling. In AISTATS'03, 2003.
[3] P.F. Brown, R.L. Mercer, V.J. Della Pietra, and J.C. Lai. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479, 1992.
[4] Stanley F. Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. In Proceedings of the Thirty-Fourth Annual Meeting of the Association for Computational Linguistics, pages 310-318, San Francisco, 1996.
[5] Ahmad Emami, Peng Xu, and Frederick Jelinek. Using a connectionist model in a syntactical based language model. In Proceedings of ICASSP, volume 1, pages 372-375, 2003.
[6] C. Fellbaum et al. WordNet: an electronic lexical database. Cambridge, Mass: MIT Press, 1998.
[7] J. Goodman. A bit of progress in language modeling. Technical report, Microsoft Research, 2000.
[8] John G. McMahon and Francis J. Smith. Improving statistical language model performance with automatically generated word hierarchies. Computational Linguistics, 22(2):217-247, 1996.
[9] A. Mnih and G. Hinton. Three new graphical models for statistical language modelling. Proceedings of the 24th International Conference on Machine Learning, pages 641-648, 2007.
[10] Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In Robert G. Cowell and Zoubin Ghahramani, editors, AISTATS'05, pages 246-252, 2005.
[11] F. Pereira, N. Tishby, and L. Lee. Distributional clustering of English words. Proceedings of the 31st Conference of the Association for Computational Linguistics, pages 183-190, 1993.
[12] Holger Schwenk and Jean-Luc Gauvain. Connectionist language modeling for large vocabulary continuous speech recognition. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, pages 765-768, 2002.
2,850 | 3,584 | Evaluating probabilities under high-dimensional
latent variable models
Iain Murray and Ruslan Salakhutdinov
Department of Computer Science
University of Toronto
Toronto, ON. M5S 3G4. Canada.
{murray,rsalakhu}@cs.toronto.edu
Abstract
We present a simple new Monte Carlo algorithm for evaluating probabilities of
observations in complex latent variable models, such as Deep Belief Networks.
While the method is based on Markov chains, estimates based on short runs are
formally unbiased. In expectation, the log probability of a test set will be underestimated, and this could form the basis of a probabilistic bound. The method is
much cheaper than gold-standard annealing-based methods and only slightly more
expensive than the cheapest Monte Carlo methods. We give examples of the new
method substantially improving simple variational bounds at modest extra cost.
1 Introduction
Latent variable models capture underlying structure in data by explaining observations as part of a
more complex, partially observed system. A large number of probabilistic latent variable models
have been developed, most of which express a joint distribution P (v, h) over observed quantities v
and their unobserved counterparts h. Although it is by no means the only way to evaluate a model,
a natural question to ask is "what probability P(v) is assigned to a test observation?".
In some models the latent variables associated with a test input can be easily summed out: P(v) = \sum_h P(v, h). As an example, standard mixture models have a single discrete mixture component
indicator for each data point; the joint probability P (v, h) can be explicitly evaluated for each setting
of the latent variable.
More complex graphical models explain data through the combination of many latent variables.
This provides richer representations, but provides greater computational challenges. In particular,
marginalizing out many latent variables can require complex integrals or exponentially large sums.
One popular latent variable model, the Restricted Boltzmann Machine (RBM), is unusual in that
the posterior over hiddens P (h|v) is fully-factored, which allows efficient evaluation of P (v) up
to a constant. Almost all other latent variable models have posterior dependencies amongst latent
variables, even if they are independent a priori.
Our current work is motivated by recent work on evaluating RBMs and their generalization to Deep
Belief Networks (DBNs) [1]. For both types of models, a single constant was accurately approximated so that P (v, h) could be evaluated point-wise. For RBMs, the remaining sum over hidden
variables was performed analytically. For DBNs, test probabilities were lower-bounded through
a variational technique. Perhaps surprisingly, the bound was unable to reveal any significant improvement over RBMs in an experiment on MNIST digits. It was unclear whether this was due to
looseness of the bound, or to there being no difference in performance.
A more accurate method for summing over latent variables would enable better and broader evaluation of DBNs. In section 2 we consider existing Monte Carlo methods. Some of them are certainly
more accurate, but prohibitively expensive for evaluating large test sets. We then develop a new
cheap Monte Carlo procedure for evaluating latent variable models in section 3. Like the variational
method used previously, our method is unlikely to spuriously over-state test-set performance. Our
presentation is for general latent variable models, however for a running example, we use DBNs
(see section 4 and [2]). The benefits of our new approach are demonstrated in section 5.
2 Probability of observations as a normalizing constant
The probability of a data vector, P (v), is the normalizing constant relating the posterior over hidden
variables to the joint distribution in Bayes rule, P (h|v) = P (h, v)/P (v). A large literature on
computing normalizing constants exists in physics, statistics and computer science. In principle,
there are many methods that could be applied to evaluating the probability assigned to data by a
latent variable model. We review a subset of these methods, with notation and intuitions that will
help motivate and explain our new algorithm.
In what follows, all auxiliary distributions Q and transition operators T are conditioned on the
current test case v; this is not shown in the notation to reduce clutter. Further, all of these methods
assume that we can evaluate P (h, v). Graphical models with undirected connections will require
the separate estimation of a single constant as in [1].
2.1 Importance sampling
Importance sampling can in principle find the normalizing constant of any distribution. The algorithm involves averaging a simple ratio under samples from some convenient tractable distribution
over the hidden variables, Q(h). Provided Q(h) ≠ 0 whenever P(h, v) ≠ 0, we obtain:

P(v) = \sum_h Q(h) \frac{P(h, v)}{Q(h)} \approx \frac{1}{S} \sum_{s=1}^{S} \frac{P(h^{(s)}, v)}{Q(h^{(s)})},   h^{(s)} \sim Q(h^{(s)}).   (1)
Importance sampling relies on the sampling distribution Q(h) being similar to the target distribution
P(h|v). Specifically, the variance of the estimator is an α-divergence between the distributions [3].
Finding a tractable Q(h) with small divergence is difficult in high-dimensional problems.
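A minimal sketch of the estimator (ours; the caller is assumed to supply log P(h, v) for the fixed test case and a tractable Q with sampling and log-density routines):

    import numpy as np

    def importance_sampling_estimate(log_joint, q_sample, q_logpdf, S=1000):
        # log_joint: h -> log P(h, v) for the fixed test case v
        # q_sample:  () -> h drawn from Q(h)
        # q_logpdf:  h -> log Q(h)
        log_w = []
        for _ in range(S):
            h = q_sample()
            log_w.append(log_joint(h) - q_logpdf(h))
        log_w = np.array(log_w)
        m = log_w.max()                               # log-sum-exp for stability
        return np.exp(m) * np.mean(np.exp(log_w - m))  # estimate of P(v), Eq. 1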
2.2 The harmonic mean method
Using Q(h) = P(h|v) in (1) gives an "estimator" that requires knowing P(v). As an alternative, the
harmonic mean method, also called the reciprocal method, gives an unbiased estimate of 1/P (v):
\frac{1}{P(v)} = \sum_h \frac{P(h)}{P(v)} = \sum_h \frac{P(h|v)}{P(v|h)} \approx \frac{1}{S} \sum_{s=1}^{S} \frac{1}{P(v|h^{(s)})},   h^{(s)} \sim P(h^{(s)}|v).   (2)
In practice correlated samples from MCMC are used; then the estimator is asymptotically unbiased.
It was clear from the original paper and its discussion that the harmonic mean estimator can behave
very poorly [4]. Samples in the tails of the posterior have large weights, which makes it easy to
construct distributions where the estimator has infinite variance. A finite set of samples will rarely
include any extremely large weights, so the estimator's empirical variance can be misleadingly low.
In many problems, the estimate of 1/P (v) will be an underestimate with high probability. That is,
the method will overestimate P (v) and often give no indication that it has done so.
Sometimes the estimator will have manageable variance. Also, more expensive versions of the
estimator exist with lower variance. However, it is still prone to overestimate test probabilities: if 1/\hat{P}_{HME}(v) is the harmonic mean estimator in (2), Jensen's inequality gives P(v) = 1 \big/ E[1/\hat{P}_{HME}(v)] \le E[\hat{P}_{HME}(v)]. Similarly log P(v) will be overestimated in expectation.
Hence the average of a large number of test log probabilities is highly likely to be an overestimate.
Despite these problems the estimator has received significant attention in statistics, and has been
used for evaluating latent variable models in recent machine learning literature [5, 6]. This is understandable: all of the existing, more accurate methods are harder to implement and take considerably
longer to run. In this paper we propose a method that is nearly as easy to use as the harmonic mean
method, but with better properties.
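For contrast, a sketch of the harmonic mean estimator (ours; the posterior samples would typically come from MCMC, and the caveats above apply in full):

    import numpy as np

    def harmonic_mean_estimate(log_cond_lik, posterior_samples):
        # log_cond_lik:      h -> log P(v | h)
        # posterior_samples: states h^(s) drawn (approximately) from P(h | v)
        log_l = np.array([log_cond_lik(h) for h in posterior_samples])
        m = (-log_l).max()
        inv_p = np.exp(m) * np.mean(np.exp(-log_l - m))  # estimate of 1/P(v), Eq. 2
        return 1.0 / inv_p   # prone to overestimating P(v); see text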
2.3 Importance sampling based on Markov chains
Paradoxically, introducing auxiliary variables and making a distribution much higher-dimensional
than it was before, can help find an approximating Q distribution that closely matches the target
distribution. As an example we give a partial review of Annealed Importance Sampling (AIS) [7], a
special case of a larger family of Sequential Monte Carlo (SMC) methods (see, e.g., [8]). Some of
this theory will be needed in the new method we present in section 3.
Annealing algorithms start with a sample from some tractable distribution P1 . Steps are taken with
a series of operators T_2, T_3, ..., T_S, whose stationary distributions, P_s, are "cooled" towards the distribution of interest. The probability over the resulting sequence H = {h^{(1)}, h^{(2)}, ..., h^{(S)}} is:
Q_{AIS}(H) = P_1(h^{(1)}) \prod_{s=2}^{S} T_s(h^{(s)} \leftarrow h^{(s-1)}).   (3)
To compute importance weights, we need to define a "target" distribution on the same state-space:

P_{AIS}(H) = P(h^{(S)}|v) \prod_{s=2}^{S} \tilde{T}_s(h^{(s-1)} \leftarrow h^{(s)}).   (4)

Because h^{(S)} has marginal P(h|v) = P(h, v)/P(v), P_{AIS}(H) has our target, P(v), as its normalizing constant. The \tilde{T} operators are the reverse operators of those used to define Q_{AIS}.
For any transition operator T that leaves a distribution P(h|v) stationary, there is a unique corresponding "reverse operator" \tilde{T}, which is defined for any point h' in the support of P:

\tilde{T}(h \leftarrow h') = \frac{T(h' \leftarrow h)\,P(h|v)}{\sum_h T(h' \leftarrow h)\,P(h|v)} = \frac{T(h' \leftarrow h)\,P(h|v)}{P(h'|v)}.   (5)
The sum in the denominator is known because T leaves the posterior stationary. Operators that
are their own reverse operator are said to satisfy "detailed balance" and are also known as "reversible". Many transition operators used in practice, such as Metropolis-Hastings, are reversible.
Non-reversible operators are usually composed from a sequence of reversible operations, such as the
component updates in a Gibbs sampler. The reverse of these (so-called) non-reversible operators is
constructed from the same reversible base operations, but applied in reverse order.
The definitions above allow us to write:
Q_{AIS}(H) = P_{AIS}(H) \frac{Q_{AIS}(H)}{P_{AIS}(H)} = P_{AIS}(H) \frac{P_1(h^{(1)})}{P(h^{(S)}|v)} \prod_{s=2}^{S} \frac{T_s(h^{(s)} \leftarrow h^{(s-1)})}{\tilde{T}_s(h^{(s-1)} \leftarrow h^{(s)})}

= P_{AIS}(H)\,P(v) \underbrace{\left[ \frac{P_1(h^{(1)})}{P(h^{(S)}, v)} \prod_{s=2}^{S} \frac{P_s^*(h^{(s)})}{P_s^*(h^{(s-1)})} \right]}_{1/w(H)} = P_{AIS}(H)\,P(v)/w(H).   (6)

We can usually evaluate the P_s^*, which are unnormalized versions of the stationary distributions of the Markov chain operators. Therefore the AIS importance weight w(H) = 1/[...] is tractable as long as we can evaluate P(h, v). The AIS importance weight provides an unbiased estimate:

E_{Q_{AIS}(H)}[w(H)] = P(v) \sum_H P_{AIS}(H) = P(v).   (7)

As with standard importance sampling, the variance of the estimator depends on a divergence between P_{AIS} and Q_{AIS}. This can be made small, at large computational expense, by using hundreds or thousands of steps S, allowing the neighboring intermediate distributions P_s(h) to be close.
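A sketch of one AIS run (ours; the indexable containers T and log_f are assumptions made for readability, with log_f[1] the log of the normalized P_1 and log_f[S](h) = log P(h, v)):

    import numpy as np

    def ais_weight(p1_sample, T, log_f, S):
        # Returns one importance weight w(H); averaging many such weights
        # gives an unbiased estimate of P(v), by Eq. 7.
        h = p1_sample()
        log_w = 0.0
        for s in range(2, S + 1):
            log_w += log_f[s](h) - log_f[s - 1](h)  # telescoping weight update
            h = T[s](h)       # one Markov step leaving P_s stationary
        return np.exp(log_w)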
2.4 Chib-style estimators
Bayes rule implies that for any special hidden state h^*, P(v) = P(h^*, v)/P(h^*|v).   (8)
This trivial identity suggests a family of estimators introduced by Chib [9]. First, we choose a
particular hidden state h^*, usually one with high posterior probability, and then estimate P(h^*|v). We would like to obtain an estimator that is based on a sequence of states H = {h^{(1)}, h^{(2)}, ..., h^{(S)}} generated by a Markov chain that explores the posterior distribution P(h|v). The most naive estimate of P(h^*|v) is the fraction of states in H that are equal to the special state, \sum_s I(h^{(s)} = h^*)/S. Obviously this estimator is impractical as it equals zero with high probability when applied to high-dimensional problems. A "Rao-Blackwellized" version of this estimator, \hat{p}(H), replaces the indicator function with the probability of transitioning from h^{(s)} to the special state under a Markov chain transition operator that leaves the posterior stationary. This can be derived directly from the operator's stationary condition:
P(h^*|v) = \sum_h T(h^* \leftarrow h)\,P(h|v) \approx \hat{p}(H) \equiv \frac{1}{S} \sum_{s=1}^{S} T(h^* \leftarrow h^{(s)}),   \{h^{(s)}\} \sim \mathcal{P}(H),   (9)
where \mathcal{P}(H) is the joint distribution arising from S steps of a Markov chain. If the chain has stationary distribution P(h|v) and could be initialized at equilibrium so that
\mathcal{P}(H) = P(h^{(1)}|v) \prod_{s=2}^{S} T(h^{(s)} \leftarrow h^{(s-1)}),   (10)
then \hat{p}(H) would be an unbiased estimate of P(h^*|v). For ergodic chains the stationary distribution
is achieved asymptotically and the estimator is consistent regardless of how it is initialized.
If T is a Gibbs sampling transition operator, the only way of moving from h to h^* is to draw each element of h^* in turn. If updates are made in index order from 1 to M, the move has probability:
T(h^* \leftarrow h) = \prod_{j=1}^{M} P(h^*_j \mid h^*_{1:(j-1)}, h_{(j+1):M}).   (11)
Equations (9, 11) have been used in schemes for monitoring the convergence of Gibbs samplers [10].
It is worth emphasizing that we have only outlined the simplest possible scheme inspired by Chib's
general approach. For some Markov chains, there are technical problems with the above construction, which require an extension explained in the appendix. Moreover the approach above is not what
Chib recommended. In fact, [11] explicitly favors a more elaborate procedure involving sampling
from a sequence of distributions. This opens up the possibility of many sophisticated developments,
e.g. [12, 13]. However, our focus in this work is on obtaining more useful results from simple cheap
methods. There are also well-known problems with the Chib approach [14], to which we will return.
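In code, the simple scheme reads as follows (a sketch, ours; T_prob would be the Gibbs expression of Eq. 11 when T is a Gibbs sweep):

    import numpy as np

    def chib_style_estimate(h_star, log_joint, T_prob, chain):
        # h_star:    chosen high posterior probability hidden state
        # log_joint: h -> log P(h, v)
        # T_prob:    (h_to, h_from) -> T(h_to <- h_from)
        # chain:     states h^(1..S) from a chain with stationary P(h | v)
        p_hstar = np.mean([T_prob(h_star, h) for h in chain])   # Eq. 9
        return np.exp(log_joint(h_star)) / p_hstar              # Eq. 8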
3 A new estimator for evaluating latent-variable models
We start with the simplest Chib-inspired estimator based on equations (8,9,11). Like many Markov
chain Monte Carlo algorithms, (9) provides only (asymptotic) unbiasedness. For our purposes this
is not sufficient. Jensen's inequality tells us

P(v) = \frac{P(h^*, v)}{P(h^*|v)} = \frac{P(h^*, v)}{E[\hat{p}(H)]} \le E\left[ \frac{P(h^*, v)}{\hat{p}(H)} \right].   (12)
That is, we will overestimate the probability of a visible vector in expectation. Jensen's inequality
also says that we will overestimate log P (v) in expectation.
Ideally we would like an accurate estimate of log P (v). However, if we must suffer some bias,
then a lower bound that does not overstate performance will usually be preferred. An underestimate
of P(v) would result from overestimating P(h^*|v). The probability of the special state h^* will often be overestimated in practice if we initialize our Markov chain at h^*. There are, however,
simple counter-examples where this does not happen. Instead we describe a construction based on a
sequence of Markov steps starting at h^* that does have the desired effect. We draw a state sequence
from the following carefully designed distribution, using the algorithm in figure 1:
Q(H) = \frac{1}{S} \sum_{s=1}^{S} \tilde{T}(h^{(s)} \leftarrow h^*) \prod_{s'=s+1}^{S} T(h^{(s')} \leftarrow h^{(s'-1)}) \prod_{s'=1}^{s-1} \tilde{T}(h^{(s')} \leftarrow h^{(s'+1)}).   (13)
If the initial state were drawn from P(h|v) instead of \tilde{T}(h^{(s)} \leftarrow h^*), then the algorithm would give
a sample from an equilibrium sequence with distribution \mathcal{P}(H) defined in (10). This can be checked by repeated substitution of (5). This allows us to express Q in terms of \mathcal{P}, as we did for AIS:

Q(H) = \left[ \frac{1}{S} \sum_{s=1}^{S} \frac{\tilde{T}(h^{(s)} \leftarrow h^*)}{P(h^{(s)}|v)} \right] \mathcal{P}(H) = \frac{1}{P(h^*|v)} \left[ \frac{1}{S} \sum_{s=1}^{S} T(h^* \leftarrow h^{(s)}) \right] \mathcal{P}(H).   (14)
Inputs: v, observed test vector
        h^*, a (preferably high posterior probability) hidden state
        S, number of Markov chain steps
        T, Markov chain operator that leaves P(h|v) stationary

1. Draw s ~ Uniform({1, ..., S})
2. Draw h^{(s)} ~ \tilde{T}(h^{(s)} \leftarrow h^*)
3. for s' = (s+1) : S
4.     Draw h^{(s')} ~ T(h^{(s')} \leftarrow h^{(s'-1)})
5. for s' = (s-1) : -1 : 1
6.     Draw h^{(s')} ~ \tilde{T}(h^{(s')} \leftarrow h^{(s'+1)})
7. \hat{P}(v) = P(v, h^*) \Big/ \frac{1}{S} \sum_{s'=1}^{S} T(h^* \leftarrow h^{(s')})

Figure 1: Algorithm for the proposed method. [The accompanying graphical model, omitted here, shows Q(H | s = 3) for S = 4.] At each generated state T(h^* \leftarrow h^{(s')}) is evaluated (step 7), roughly doubling the cost of sampling. The reverse operator, \tilde{T}, was defined in section 2.3.
The quantity in square brackets is the estimator for P(h^*|v) given in (9). The expectation of the reciprocal of this quantity under draws from Q(H) is exactly the quantity needed to compute P(v):

E_{Q(H)}\left[ 1 \Big/ \frac{1}{S} \sum_{s=1}^{S} T(h^* \leftarrow h^{(s)}) \right] = \sum_H \frac{1}{P(h^*|v)}\,\mathcal{P}(H) = \frac{1}{P(h^*|v)}.   (15)
Although we are using the simple estimator from (9), by drawing H from a carefully constructed
Markov chain procedure, the estimator is now unbiased in P (v). This is not an asymptotic result. As
long as no division by zero has occurred in the above equations, the estimator is unbiased in P (v)
for finite runs of the Markov chain. Jensen's inequality implies that log P(v) is underestimated in expectation.
Neal noted that Chibs method will return incorrect answers in cases where the Markov chain does not
mix well amongst modes [14]. Our new proposed method will suffer from the same problem. Even
if no transition probabilities are exactly zero, unbiasedness does not exclude being on a particular
side of the correct answer with very high probability. Poor mixing may cause P (h? |v) to be overestimated with high probability, which would result in an underestimate of P (v), i.e., an overly
conservative estimate of test performance.
The variance of the estimator is generally unknown, as it depends on the (generally unavailable)
auto-covariance structure of the Markov chain. We can note one positive property: for the ideal
Markov chain operator that mixes in one step, the estimator has zero variance and gives the correct
answer immediately. Although this extreme will not actually occur, it does indicate that on easy
problems, good answers can be returned more quickly than by AIS.
4 Deep Belief Networks
In this section we provide a brief overview of Deep Belief Networks (DBNs), recently introduced
by [2]. DBNs are probabilistic generative models that can contain many layers of hidden variables.
Each layer captures strong high-order correlations between the activities of hidden features in the
layer below. The top two layers of the DBN model form a Restricted Boltzmann Machine (RBM)
which is an undirected graphical model, but the lower layers form a directed generative model. The
original paper introduced a greedy, layer-by-layer unsupervised learning algorithm that consists of
learning a stack of RBMs one layer at a time.
Consider a DBN model with two layers of hidden features. The model's joint distribution is:

    P(v, h1, h2) = P(v|h1) P(h1, h2),    (16)
where P(v|h1) represents a sigmoid belief network, and P(h1, h2) is the joint distribution defined
by the second-layer RBM. By explicitly summing out h2, we can easily evaluate an unnormalized
probability P*(v, h1) = Z P(v, h1).
[Figure 2 here: two panels, MNIST digits (left) and image patches (right), plotting estimated test log-probability against the number of Markov chain steps (5 to 40), with curves for the AIS estimator, our proposed estimator, and the estimate of the variational lower bound.]

Figure 2: AIS, our proposed estimator and a variational method were used to sum over the hidden states for each of 50 randomly sampled test cases to estimate their average log probability. The three methods shared the same AIS estimate of a single global normalization constant Z.
Using an approximating factorial posterior distribution Q(h1|v), obtained as a byproduct of the greedy learning procedure, and an AIS estimate of the model's partition function Z, [1] proposed obtaining an estimate of a variational lower bound:
    log P(v) ≥ Σ_{h1} Q(h1|v) log P*(v, h1) − log Z + H(Q(h1|v)).    (17)
The entropy term H(·) can be computed analytically, since Q is factorial, and the expectation term
was estimated by a simple Monte Carlo approximation:
    Σ_{h1} Q(h1|v) log P*(v, h1) ≈ (1/S) Σ_{s=1..S} log P*(v, h1^(s)),   where h1^(s) ~ Q(h1|v).    (18)
Instead of the variational approach, we could also adopt AIS to estimate Σ_{h1} P*(v, h1). This
would be computationally very expensive, since we would need to run AIS for each test case.
In the next section we show that variational lower bounds can be quite loose. Running AIS on the
entire test set, containing many thousands of test cases, is computationally too demanding. Our
proposed estimator requires the same single AIS estimate of Z as the variational method, so that
we can evaluate P (v, h1 ). It then provides better estimates of log P (v) by approximately summing
over h1 for each test case in a reasonable amount of computer time.
5 Experimental Results
We present experimental results on two datasets: the MNIST digits and a dataset of image
patches, extracted from images of natural scenes taken from the collection of Van Hateren
(http://hlab.phys.rug.nl/imlib/). The MNIST dataset contains 60,000 training and 10,000 test images of ten handwritten digits (0 to 9), with 28×28 pixels. The image dataset consisted of 130,000
training and 20,000 test 20×20 patches. The raw image intensities were preprocessed and whitened
as described in [15]. Gibbs sampling was used as a Markov chain transition operator throughout.
All log probabilities quoted use natural logarithms, giving values in nats.
5.1 MNIST digits
In our first experiment we used a deep belief network (DBN) taken from [1]. The network had two
hidden layers with 500 and 2000 hidden units, and was greedily trained by learning a stack of two
RBMs one layer at a time. Each RBM was trained using the Contrastive Divergence (CD) learning
rule. The estimate of the lower bound on the average test log probability, using (17), was −86.22.
To estimate how loose the variational bound is, we randomly sampled 50 test cases, 5 of each class,
and ran AIS for each test case to estimate the true test log probability. Computationally, this is
equivalent to estimating 50 additional partition functions. Figure 2, left panel, shows the results.
The estimate of the variational bound was −87.05 per test case, whereas the estimate of the true test
log probability using AIS was −85.20. Our proposed estimator, averaged over 10 runs, provided
an answer of −85.22. The special state h* for each test example v was obtained by first sampling
from the approximating distribution Q(h|v), and then performing deterministic hill-climbing in
log p(v, h) to get to a local mode.
AIS used a hand-tuned temperature schedule designed to equalize the variance of the intermediate
log weights [7]. We needed 10,000 intermediate distributions to get stable results, which took about
3.6 days on a Pentium Xeon 3.00GHz machine, whereas for our proposed estimator we only used
S = 40, which took about 50 minutes. For a more direct comparison we tried giving AIS 50 minutes,
which allows 100 temperatures. This run gave an estimate of −89.59, which is lower than the lower
bound and tells us nothing. Giving AIS ten times more time, 1000 temperatures, gave −86.05. This
is higher than the lower bound, but still worse than our estimator at S = 40, or even S = 5.
Finally, using our proposed estimator, the average test log probability on the entire MNIST test data
was −84.55. The difference of about 2 nats shows that the variational bound in [1] was rather tight,
although a very small improvement of the DBN over the RBM is now revealed.
5.2 Image Patches
In our second experiment we trained a two-layer DBN model on the image patches of natural scenes.
The first layer RBM had 2000 hidden units and 400 Gaussian visible units. The second layer represented a semi-restricted Boltzmann machine (SRBM) with 500 hidden and 2000 visible units. The
SRBM contained visible-to-visible connections, and was trained using Contrastive Divergence together with mean-field. Details of training can be found in [15]. The overall DBN model can be
viewed as a directed hierarchy of Markov random fields with hidden-to-hidden connections.
To estimate the model's partition function, we used AIS with 15,000 intermediate distributions and
100 annealing runs. The estimated lower bound on the average test log probability (see Eq. 17),
using a factorial approximate posterior distribution Q(h1|v), which we also get as a byproduct of
the greedy learning algorithm, was −583.73. The estimate of the true test log probability, using our
proposed estimator, was −563.39. In contrast to the model trained on MNIST, the difference of over
20 nats shows that, for model comparison purposes, the variational lower bound is quite loose.
For comparison, we also trained square ICA and a mixture of factor analyzers (MFA) using code
from [16, 17]. Square ICA achieves a test log probability of −551.14, and MFA with 50 mixture
components and a 30-dimensional latent space achieves −502.30, clearly outperforming DBNs.
6 Discussion
Our new Monte Carlo procedure is formally unbiased in estimating P (v). In practice it is likely to
underestimate the (log-)probability of a test set. Although the algorithm involves Markov chains,
importance sampling underlies the estimator. Therefore the methods discussed in [18] could be used
to bound the probability of accidentally over-estimating a test set probability.
In principle our procedure is a general technique for estimating normalizing constants. It would not
always be appropriate however, as it would suffer the problems outlined in [14]. As an example our
method will not succeed in estimating the global normalizing constant of an RBM.
For our method to work well, a state drawn from T̃(h^(s) ← h*) should look like it could be part
of an equilibrium sequence H ~ P(H). The details of the algorithm arose by developing existing
Monte Carlo estimators, but the starting state h(s) could be drawn from any arbitrary distribution:
    Q_var(H) = [ (1/S) Σ_{s=1}^S q(h^(s)) / P(h^(s)|v) ] P(H) = P(v) [ (1/S) Σ_{s=1}^S q(h^(s)) / P(h^(s), v) ] P(H).    (19)
As before, the reciprocal of the quantity in square brackets would give an estimate of P(v). If an
approximation q(h) is available that captures more mass than T̃(h ← h*), this generalized estimator
could perform better. We are hopeful that our method will be a natural next step in a variety of
situations where improvements are sought over a deterministic approximation.
Acknowledgments
This research was supported by NSERC and CFI. Iain Murray was supported by the government of
Canada. We thank Geoffrey Hinton and Radford Neal for useful discussions, Simon Osindero for
providing preprocessed image patches of natural scenes, and the reviewers for useful comments.
References
[1] Ruslan Salakhutdinov and Iain Murray. On the quantitative analysis of Deep Belief Networks. In Proceedings of the International Conference on Machine Learning, volume 25, pages 872–879, 2008.
[2] Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[3] Tom Minka. Divergence measures and message passing. TR-2005-173, Microsoft Research, 2005.
[4] Michael A. Newton and Adrian E. Raftery. Approximate Bayesian inference with the weighted likelihood bootstrap. Journal of the Royal Statistical Society, Series B (Methodological), 56(1):3–48, 1994.
[5] Thomas L. Griffiths, Mark Steyvers, David M. Blei, and Joshua B. Tenenbaum. Integrating topics and syntax. In Advances in Neural Information Processing Systems (NIPS*17). MIT Press, 2005.
[6] Hanna M. Wallach. Topic modeling: beyond bag-of-words. In Proceedings of the 23rd International Conference on Machine Learning, pages 977–984. ACM Press, New York, NY, USA, 2006.
[7] Radford M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
[8] Pierre Del Moral, Arnaud Doucet, and Ajay Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society B, 68(3):1–26, 2006.
[9] Siddhartha Chib. Marginal likelihood from the Gibbs output. Journal of the American Statistical Association, 90(432):1313–1321, December 1995.
[10] Christian Ritter and Martin A. Tanner. Facilitating the Gibbs sampler: the Gibbs stopper and the griddy-Gibbs sampler. Journal of the American Statistical Association, 87(419):861–868, 1992.
[11] Siddhartha Chib and Ivan Jeliazkov. Marginal likelihood from the Metropolis–Hastings output. Journal of the American Statistical Association, 96(453), 2001.
[12] Antonietta Mira and Geoff Nicholls. Bridge estimation of the probability density at a point. Statistica Sinica, 14:603–612, 2004.
[13] Francesco Bartolucci, Luisa Scaccia, and Antonietta Mira. Efficient Bayes factor estimation from the reversible jump output. Biometrika, 93(1):41–52, 2006.
[14] Radford M. Neal. Erroneous results in "Marginal likelihood from the Gibbs output", 1999. Available from http://www.cs.toronto.edu/~radford/chib-letter.html.
[15] Simon Osindero and Geoffrey Hinton. Modeling image patches with a directed hierarchy of Markov random fields. In Advances in Neural Information Processing Systems (NIPS*20). MIT Press, 2008.
[16] Aapo Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
[17] Zoubin Ghahramani and Geoffrey E. Hinton. The EM algorithm for mixtures of factor analyzers. Technical Report CRG-TR-96-1, University of Toronto, 1997.
[18] Vibhav Gogate, Bozhena Bidyuk, and Rina Dechter. Studies in lower bounding probability of evidence using the Markov inequality. In 23rd Conference on Uncertainty in Artificial Intelligence (UAI), 2007.
A Real-valued latents and Metropolis–Hastings
There are technical difficulties with the original Chib-style approach applied to Metropolis–Hastings
and continuous latent variables. The continuous version of equation (9),
    P(h*|v) = ∫ T(h* ← h) P(h|v) dh ≈ (1/S) Σ_{s=1}^S T(h* ← h^(s)),   h^(s) ~ P(H),    (20)
doesn't work if T is the Metropolis–Hastings operator. The Dirac delta function at h = h* contains
a significant part of the integral, which is ignored by samples from P(h|v) with probability one.
Following [11], the fix is to instead integrate over the generalized detailed balance relationship (5).
Chib and Jeliazkov implicitly took the h = h* point out of all of their integrals. We do the same:
    P(h*|v) = ∫ T̃(h* ← h) P(h|v) dh / ∫ T(h ← h*) dh.    (21)
The numerator can be estimated as before. As both integrals omit h = h*, the denominator
is less than one when T contains a delta function. For Metropolis–Hastings, T(h ← h*) =
q(h; h*) min{1, a(h; h*)}, where a(h; h*) is an easy-to-compute acceptance ratio. Sampling from
q(h; h*) and averaging min(1, a(h; h*)) provides an estimate of the denominator.
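A sketch of that denominator estimate (our code; q_sample and accept_ratio are assumed supplied):

    import numpy as np

    def mh_denominator(h_star, q_sample, accept_ratio, n, rng):
        draws = [q_sample(h_star, rng) for _ in range(n)]
        return float(np.mean([min(1.0, accept_ratio(h, h_star)) for h in draws]))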
In our importance sampling approach there is no need to separately approximate an additional quantity. The algorithm in figure 1 still applies if the T's are interpreted as probability density functions.
If, due to a rejection, h* is drawn in step 2, then the sum in step 7 will contain an infinite term, giving
a trivial underestimate P(v) = 0. (Steps 3–6 need not be performed in this case.) On repeated runs,
the average estimate is still unbiased, or an underestimate for chains that cannot mix. Alternatively,
the variational approach (19) could be applied together with Metropolis–Hastings sampling.
Sparse Online Learning via Truncated Gradient
John Langford
Yahoo! Research
[email protected]
Lihong Li
Department of Computer Science
Rutgers University
[email protected]
Tong Zhang
Department of Statistics
Rutgers University
[email protected]
Abstract
We propose a general method called truncated gradient to induce sparsity in the
weights of online-learning algorithms with convex loss. This method has several
essential properties. First, the degree of sparsity is continuous: a parameter controls the rate of sparsification from no sparsification to total sparsification. Second,
the approach is theoretically motivated, and an instance of it can be regarded as
an online counterpart of the popular L1 -regularization method in the batch setting. We prove small rates of sparsification result in only small additional regret
with respect to typical online-learning guarantees. Finally, the approach works
well empirically. We apply it to several datasets and find for datasets with large
numbers of features, substantial sparsity is discoverable.
1 Introduction
We are concerned with machine learning over large datasets. As an example, the largest dataset
we use in this paper has over 10^7 sparse examples and 10^9 features, using about 10^11 bytes. In this
setting, many common approaches fail, simply because they cannot load the dataset into memory
or they are not sufficiently efficient. There are roughly two approaches which can work: one is
to parallelize a batch learning algorithm over many machines (e.g., [3]), the other is to stream the
examples to an online-learning algorithm (e.g., [2, 6]). This paper focuses on the second approach.
Typical online-learning algorithms have at least one weight for every feature, which is too expensive
in some applications for a couple of reasons. The first is space constraints: if the state of the online-learning algorithm overflows RAM, it cannot run efficiently. A similar problem occurs if the state
overflows the L2 cache. The second is test-time constraints: reducing the number of features can
significantly reduce the computational time to evaluate a new sample.
This paper addresses the problem of inducing sparsity in learned weights while using an online-learning algorithm. Natural solutions do not work for our problem. For example, either simply
adding L1 regularization to the gradient of an online weight update or simply rounding small weights
to zero are problematic. However, these two ideas are closely related to the algorithm we propose
and more detailed discussions are found in section 3. A third solution is black-box wrapper approaches which eliminate features and test the impact of the elimination. These approaches typically
run an algorithm many times which is particularly undesirable with large datasets.
Similar problems have been considered in various settings before. The Lasso algorithm [12] is
commonly used to achieve L1 regularization for linear regression. This algorithm does not work
automatically in an online fashion. There are two formulations of L1 regularization. Consider a
loss function L(w, zi) which is convex in w, where zi = (xi, yi) is an input–output pair. One is the
convex-constraint formulation
    w̄ = argmin_w Σ_{i=1}^n L(w, zi)   subject to ||w||_1 ≤ s,    (1)
where s is a tunable parameter. The other is soft-regularization with a tunable parameter g:
    w̄ = argmin_w Σ_{i=1}^n L(w, zi) + g ||w||_1.    (2)
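For illustration only, the soft-regularized objective in (2) is straightforward to evaluate for a candidate weight vector; the square loss below is our choice for the example, not part of the formulation.

    import numpy as np

    def l1_objective(w, X, y, g):
        # sum_i L(w, zi) + g * ||w||_1 with L(w, (x, y)) = (w . x - y)^2
        residuals = X @ w - y
        return float(np.sum(residuals ** 2) + g * np.sum(np.abs(w)))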
With appropriately chosen g, the two formulations are equivalent. The convex-constraint formulation has a simple online version using the projection idea in [14]. It requires the projection of
weight w into an L1 ball at every online step. This operation is difficult to implement efficiently
for large-scale data with many features even if all features are sparse, although important progress
was made recently so the complexity is logarithmic in the number of features [5]. In contrast, the
soft-regularization formulation (2) is efficient for a batch setting [8], so we pursue it here in an online
setting where it has complexity independent of the number of features. In addition to the L1-regularization formulation (2), the family of online-learning algorithms we consider also includes some
non-convex sparsification techniques.
The Forgetron [4] is an online-learning algorithm that manages memory use. It operates by decaying the weights on previous examples and then rounding these weights to zero when they become
small. The Forgetron is stated for kernelized online algorithms, while we are concerned with the
simple linear setting. When applied to a linear kernel, the Forgetron is not computationally or space
competitive with approaches operating directly on feature weights.
At a high level, our approach is weight decay to a default value. This simple method enjoys strong
performance guarantees (section 3). For instance, the algorithm never performs much worse than a
standard online-learning algorithm, and the additional loss due to sparsification is controlled continuously by a single real-valued parameter. The theory gives a family of algorithms with convex loss
functions for inducing sparsity?one per online-learning algorithm. We instantiate this for square
loss in section 4 and show how this algorithm can be implemented efficiently in large-scale problems with sparse features. For such problems, truncated gradient enjoys the following properties:
(i) It is computationally efficient: the number of operations per online step is linear in the number
of nonzero features, and independent of the total number of features; (ii) It is memory efficient: it
maintains a list of active features, and can insert (when the corresponding weight becomes nonzero)
and delete (when the corresponding weight becomes zero) features dynamically.
Theoretical results stating how much sparsity is achieved using this method generally require additional assumptions which may or may not be met in practice. Consequently, we rely on experiments
in section 5 to show truncated gradient achieves good sparsity in practice. We compare truncated
gradient to a few others on small datasets, including the Lasso, online rounding of coefficients to
zero, and L1 -regularized subgradient descent. Details of these algorithms are given in section 3.
2 Online Learning with Stochastic Gradient Descent
We are interested in the standard sequential prediction problems where for i = 1, 2, . . .:
1. An unlabeled example xi arrives.
2. We make a prediction ŷi based on the current weights wi = [wi1, . . . , wid] ∈ Rd.
3. We observe yi, let zi = (xi, yi), and incur some known loss L(wi, zi) convex in wi.
4. We update weights according to some rule: wi+1 ← f(wi).
We want an update rule f that allows us to bound the sum of losses, Σ_{i=1}^t L(wi, zi), as well as achieve
sparsity. For this purpose, we start with the standard stochastic gradient descent (SGD) rule, which
is of the form:
    f(wi) = wi − η ∇1L(wi, zi),    (3)
where ∇1L(a, b) is a subgradient of L(a, b) with respect to the first variable a. The parameter η > 0
is often referred to as the learning rate. In the analysis, we only consider a constant learning rate for
simplicity. In theory, it might be desirable to have a decaying learning rate ηi which becomes smaller
when i increases, to get the so-called no-regret bound without knowing T in advance. However, if
T is known in advance, one can select a constant η accordingly so the regret vanishes as T → ∞.
Since the focus of the present paper is on weight sparsity, rather than choosing the learning rate, we
use a constant learning rate in the analysis because it leads to simpler bounds.
The above method has been widely used in online learning (e.g., [2, 6]). Moreover, it is argued
to be efficient even for solving batch problems where we repeatedly run the online algorithm over
training data multiple times. For example, the idea has been successfully applied to solve large-scale
standard SVM formulations [10, 13]. In the scenario outlined in the introduction, online-learning
methods are more suitable than some traditional batch learning methods. However, the learning rule
(3) itself does not achieve sparsity in the weights, which we address in this paper. Note that variants
of SGD exist in the literature, such as exponentiated gradient descent (EG) [6]. Since our focus is
sparsity, not SGD vs. EG, we shall only consider modifications of (3) for simplicity.
3 Sparse Online Learning
In this section, we first examine three methods for achieving sparsity in online learning, including
a novel algorithm called truncated gradient. As we shall see, all these ideas are closely related.
Then, we provide theoretical justifications for this algorithm, including a general regret bound and
a fundamental connection to the Lasso.
3.1 Simple Coefficient Rounding
In order to achieve sparsity, the most natural method is to round small coefficients (whose magnitudes are below a threshold θ > 0) to zero after every K online steps. That is, if i/K is not an
integer, we use the standard SGD rule (3); if i/K is an integer, we modify the rule as:
    f(wi) = T0(wi − η ∇1L(wi, zi), θ),    (4)

where θ ≥ 0 is a threshold, T0(v, θ) = [T0(v1, θ), . . . , T0(vd, θ)] for a vector v = [v1, . . . , vd] ∈ Rd,
T0(vj, θ) = vj I(|vj| ≥ θ), and I(·) is the set-indicator function. In other words, we first perform a
standard stochastic gradient descent step, and then round small coefficients to zero. The effect is to remove
nonzero but small weights.
in each step wi is modified by only a small amount. If a coefficient is zero, it remains small after
one online update, and the rounding operation pulls it back to zero. Consequently, rounding can be
done only after every K steps (with a reasonably large K); in this case, nonzero coefficients have
sufficient time to go above the threshold θ. However, if K is too large, then in the training stage, we
must keep many more nonzero features in the intermediate steps before they are rounded to zero. In
the extreme case, we may simply round the coefficients in the end, which does not solve the storage
problem in the training phase at all. The sensitivity in choosing appropriate K is a main drawback of
this method; another drawback is the lack of theoretical guarantee for its online performance. These
issues motivate us to consider more principled solutions.
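A minimal sketch of rule (4): a plain SGD step, with small weights rounded to zero every K-th step (grad_L is assumed supplied by the user).

    import numpy as np

    def T0(v, theta):
        # zero out coefficients whose magnitude is below the threshold theta
        return np.where(np.abs(v) >= theta, v, 0.0)

    def rounding_step(w, z, grad_L, eta, theta, i, K):
        w = w - eta * grad_L(w, z)     # standard SGD update (3)
        if i % K == 0:                 # round small coefficients every K steps
            w = T0(w, theta)
        return w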
3.2 L1-Regularized Subgradient
In the experiments, we also combined rounding-in-the-end-of-training with a simple online subgradient method for L1 regularization with a regularization parameter g > 0:
    f(wi) = wi − η ∇1L(wi, zi) − η g sgn(wi),    (5)

where for a vector v = [v1, . . . , vd], sgn(v) = [sgn(v1), . . . , sgn(vd)], and sgn(vj) = 1 if vj > 0,
sgn(vj) = −1 if vj < 0, and sgn(vj) = 0 if vj = 0. In the experiments, the online method (5)
with rounding at the end is used as a simple baseline. Notice this method does not produce sparse
weights online, simply because only in very rare cases do two floats add up to 0. Therefore, it is not
feasible in large-scale problems for which we cannot keep all features in memory.
3.3 Truncated Gradient
In order to obtain an online version of the simple rounding rule in (4), we observe that direct rounding to zero is too aggressive. A less aggressive version is to shrink the coefficient to zero by a
smaller amount. We call this idea truncated gradient, where the amount of shrinkage is controlled
by a gravity parameter gi > 0:
    f(wi) = T1(wi − η ∇1L(wi, zi), η gi, θ),    (6)

where for a vector v = [v1, . . . , vd] ∈ Rd and scalars α, θ ≥ 0, T1(v, α, θ) = [T1(v1, α, θ), . . . , T1(vd, α, θ)], with

    T1(vj, α, θ) = max(0, vj − α)   if vj ∈ [0, θ];
    T1(vj, α, θ) = min(0, vj + α)   if vj ∈ [−θ, 0];
    T1(vj, α, θ) = vj               otherwise.
Again, the truncation can be performed every K online steps. That is, if i/K is not an integer, we
let gi = 0; if i/K is an integer, we let gi = Kg for a gravity parameter g > 0. The reason for
doing so (instead of a constant g) is that we can perform a more aggressive truncation with gravity
parameter Kg after each K steps. This can potentially lead to better sparsity. We also note that when
ηKg ≥ θ, truncated gradient coincides with (4). But in practice, as is also verified by the theory, one
should adopt a small g; hence, the new learning rule (6) is expected to differ from (4).
In general, the larger the parameters g and θ are, the more sparsity is expected. Due to the extra
truncation T1, this method can lead to sparse solutions, which is confirmed empirically in section 5.
A special case, which we use in the experiment, is to let θ = g in (6). In this case, we can use only
one parameter g to control sparsity. Since ηKg ≤ θ when ηK is small, the truncation operation
is less aggressive than the rounding in (4). At first sight, the procedure appears to be an ad hoc
way to fix (4). However, we can establish a regret bound (in the next subsection) for this method,
showing it is theoretically sound. Therefore, it can be regarded as a principled variant of rounding.
Another important special case of (6) is setting θ = ∞, in which all weight components shrink in
every online step. The method is a modification of the L1-regularized subgradient descent rule (5).
The parameter gi ≥ 0 controls the sparsity achieved with the algorithm, and setting gi = 0 gives
exactly the standard SGD rule (3). As we show in section 3.5, this special case of truncated gradient
can be regarded as an online counterpart of L1 regularization since it approximately solves an L1
regularization problem in the limit of η → 0. We also show the prediction performance of truncated gradient, measured by total loss, is comparable to standard stochastic gradient descent while
introducing sparse weight vectors.
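The update (6) in code, as a sketch under the same assumptions (grad_L supplied by the user; gravity Kg applied every K-th step):

    import numpy as np

    def T1(v, alpha, theta):
        # shrink coefficients lying in [-theta, theta] towards zero by alpha
        shrunk = np.sign(v) * np.maximum(0.0, np.abs(v) - alpha)
        return np.where(np.abs(v) <= theta, shrunk, v)

    def truncated_gradient_step(w, z, grad_L, eta, g, theta, i, K):
        w = w - eta * grad_L(w, z)     # standard SGD step (3)
        if i % K == 0:                 # truncate every K steps with gravity K*g
            w = T1(w, eta * K * g, theta)
        return w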
3.4 Regret Analysis
Throughout the paper, we use ||·||_1 for the 1-norm and ||·|| for the 2-norm. For reference, we make the
following assumption regarding the loss function:

Assumption 3.1 We assume L(w, z) is convex in w, and there exist non-negative constants A and
B such that (∇1L(w, z))² ≤ A L(w, z) + B for all w ∈ Rd and z ∈ Rd+1.

For linear prediction problems, we have a general loss function of the form L(w, z) = φ(wᵀx, y).
The following are some common loss functions φ(·, ·) with corresponding choices of parameters A
and B (which are not unique), under the assumption sup_x ||x|| ≤ C. All of them can be used for
binary classification where y ∈ {±1}, but the last one is more often used in regression where y ∈ R:
Logistic: φ(p, y) = ln(1 + exp(−py)), with A = 0 and B = C²; SVM (hinge loss): φ(p, y) =
max(0, 1 − py), with A = 0 and B = C²; Least squares (square loss): φ(p, y) = (p − y)², with
A = 4C² and B = 0.
The main result is Theorem 3.1 which is parameterized by A and B. The proof will be provided in
a longer paper.
Theorem 3.1 (Sparse Online Regret) Consider the sparse online update rule (6) with w1 = [0, . . . , 0]
and η > 0. If Assumption 3.1 holds, then for all w̄ ∈ Rd we have

    ((1 − 0.5Aη)/T) Σ_{i=1}^T [ L(wi, zi) + (gi/(1 − 0.5Aη)) ||wi+1 · I(wi+1 ≤ θ)||_1 ]
        ≤ (η/2)B + ||w̄||²/(2ηT) + (1/T) Σ_{i=1}^T [ L(w̄, zi) + gi ||w̄ · I(wi+1 ≤ θ)||_1 ],

where ||v · I(|v′| ≤ θ)||_1 = Σ_{j=1}^d |vj| I(|v′j| ≤ θ) for vectors v = [v1, . . . , vd] and v′ = [v′1, . . . , v′d].
The theorem is stated with a constant learning rate η. As mentioned earlier, it is possible to obtain
a result with a variable learning rate where η = ηi decays as i increases. Although this may lead to a
no-regret bound without knowing T in advance, it introduces extra complexity to the presentation of
the main idea. Since the focus is on sparsity rather than optimizing the learning rate, we do not include
such a result for clarity. If T is known in advance, then in the above bound one can simply take
η = O(1/√T) and the regret is of order O(1/√T).
In the above theorem, the right-hand side involves a term gi ||w̄ · I(wi+1 ≤ θ)||_1 that depends on
wi+1, which is not easily estimated. To remove this dependency, a trivial upper bound of θ = ∞
can be used, leading to the L1 penalty gi ||w̄||_1. In the general case of θ < ∞, we cannot remove the
wi+1 dependency because the effective regularization condition (as shown on the left-hand side)
is the non-convex penalty gi ||w · I(|w| ≤ θ)||_1. Solving such a non-convex formulation is hard
both in the online and batch settings. In general, we only know how to efficiently discover a local
minimum which is difficult to characterize. Without a good characterization of the local minimum,
it is not possible for us to replace gi ||w̄ · I(wi+1 ≤ θ)||_1 on the right-hand side by gi ||w̄ · I(w̄ ≤
θ)||_1, because such a formulation implies we could efficiently solve a non-convex problem with a
simple online update rule. Still, when θ < ∞, one naturally expects the right-hand side penalty
gi ||w̄ · I(wi+1 ≤ θ)||_1 is much smaller than the corresponding L1 penalty gi ||w̄||_1, especially when
wj has many components close to 0. Therefore the situation with θ < ∞ can potentially yield better
performance on some data.
Theorem 3.1 also implies a tradeoff between sparsity and regret performance. We may simply
consider the case where gi = g is a constant. When g is small, we have less sparsity but the regret
term g ||w̄ · I(wi+1 ≤ θ)||_1 ≤ g ||w̄||_1 on the right-hand side is also small. When g is large, we are
able to achieve more sparsity but the regret g ||w̄ · I(wi+1 ≤ θ)||_1 on the right-hand side also becomes
large. Such a tradeoff between sparsity and prediction accuracy is empirically studied in section 5,
where we achieve significant sparsity with only a small g (and thus a small decrease in performance).
Now consider the case θ = ∞ and gi = g. When T → ∞, if we let η → 0 and ηT → ∞, then

    (1/T) Σ_{i=1}^T [ L(wi, zi) + g ||wi||_1 ] ≤ inf_{w̄ ∈ Rd} [ (1/T) Σ_{i=1}^T L(w̄, zi) + 2g ||w̄||_1 ] + o(1)
follows from Theorem 3.1. In other words, if we let L0(w, z) = L(w, z) + g ||w||_1 be the L1-regularized loss, then the L1-regularized regret is small when η → 0 and T → ∞. This implies
truncated gradient can be regarded as the online counterpart of L1 -regularization methods. In the
stochastic setting where the examples are drawn iid from some underlying distribution, the sparse
online gradient method proposed in this paper solves the L1 regularization problem.
3.5 Stochastic Setting
SGD-based online-learning methods can be used to solve large-scale batch optimization problems.
In this setting, we can go through training examples one-by-one in an online fashion, and repeat
multiple times over the training data. To simplify the analysis, instead of assuming we go through
example one by one, we assume each additional example is drawn from the training data randomly
with equal probability. This corresponds to the standard stochastic optimization setting, in which
observed samples are iid from some underlying distribution. The following result is a simple consequence of Theorem 3.1. For simplicity, we only consider the case with θ = ∞ and constant gravity
gi = g. The expectation E is taken over sequences of indices i1, . . . , iT.
Theorem 3.2 (Stochastic Setting) Consider a set of training data zi = (xi, yi) for 1 ≤ i ≤ n. Let

    R(w, g) = (1/n) Σ_{i=1}^n L(w, zi) + g ||w||_1

be the L1-regularized loss over the training data. Let w̄1 = w1 = 0, and define recursively for t ≥ 1:

    wt+1 = T1(wt − η ∇1L(wt, zit), gη),    w̄t+1 = w̄t + (wt+1 − w̄t)/(t + 1),

where each it is drawn from {1, . . . , n} uniformly at random. If Assumption 3.1 holds, then for all
T and w̄ ∈ Rd:

    E[(1 − 0.5Aη) R(w̄T, g/(1 − 0.5Aη))] ≤ E[((1 − 0.5Aη)/T) Σ_{i=1}^T R(wi, g/(1 − 0.5Aη))] ≤ (η/2)B + ||w̄||²/(2ηT) + R(w̄, g).
Observe that if we let η → 0 and ηT → ∞, the bound in Theorem 3.2 becomes E[R(w̄T, g)] ≤
E[(1/T) Σ_{t=1}^T R(wt, g)] ≤ inf_{w̄} R(w̄, g) + o(1). In other words, on average w̄T approximately solves
the batch L1-regularization problem inf_w (1/n) Σ_{i=1}^n L(w, zi) + g ||w||_1 when T is large. If we
choose a random stopping time T, then the above inequalities say that on average wT also solves
this L1-regularization problem approximately. Thus, we use the last solution wT instead of the aggregated solution w̄T in experiments. Since L1 regularization is often used to achieve sparsity in the
batch learning setting, the connection of truncated gradient to L1 regularization can be regarded as
an alternative justification for the sparsity ability of this algorithm.
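A sketch of the recursion in Theorem 3.2 (illustrative names; step stands for the update wt+1 = T1(wt − η∇1L(wt, zit), gη)):

    import numpy as np

    def averaged_run(w0, data, step, T, rng):
        w, w_bar = w0.copy(), w0.copy()
        for t in range(1, T + 1):
            z = data[rng.integers(len(data))]   # it ~ Uniform({1, ..., n})
            w = step(w, z)                      # w_{t+1}
            w_bar += (w - w_bar) / (t + 1)      # running average of the iterates
        return w, w_bar   # the experiments report the last iterate rather than w_bar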
4 Efficient Implementation of Truncated Gradient for Square Loss
The truncated gradient update rule (6) can be applied to least-squares regression using square loss,
leading to f(wi) = T1(wi + 2η(yi − ŷi)xi, ηgi, θ), where the prediction is given by ŷi = Σ_j wij xij.
We altered an efficient SGD implementation, Vowpal Wabbit [7], for least-squares regression
according to truncated gradient. The program operates in an entirely online fashion. Features are
hashed instead of being stored explicitly, and weights can be easily inserted into or deleted from the
table dynamically. So the memory footprint is essentially just the number of nonzero weights, even
when the total numbers of data and features are astronomically large.
In many online-learning situations such as web applications, only a small subset of the features
have nonzero values for any example x. It is thus desirable to deal with sparsity only in this small
subset rather than in all features, while simultaneously inducing sparsity on all feature weights. The
approach we take is to store a time-stamp τj for each feature j. The time-stamp is initialized to the
index of the example where feature j becomes nonzero for the first time. During online learning, at
each step i, we only go through the nonzero features j of example i, and calculate the un-performed
shrinkage of wj between τj and the current time i. These weights are then updated, and their time
stamps are reset to i. This lazy-update idea of delaying the shrinkage calculation until needed is
the key to efficient implementation of truncated gradient. The implementation satisfies efficiency
requirements outlined at the end of the introduction section. A similar time-stamp trick can be
applied to the other two algorithms given in section 3.
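A sketch of the lazy time-stamp idea for square loss (our reading of the scheme, not the Vowpal Wabbit source): each active feature j keeps a weight w[j] and a stamp[j], and the shrinkage owed since the feature was last seen is applied in one go when it next appears.

    import math

    def predict_and_update(w, stamp, x, y, eta, g, theta, i):
        # x maps feature -> value for the nonzero features of example i only
        for j in x:                                   # catch up on owed shrinkage
            if j in w and abs(w[j]) <= theta:
                owed = eta * g * (i - stamp[j])
                w[j] = math.copysign(max(0.0, abs(w[j]) - owed), w[j])
                if w[j] == 0.0:
                    del w[j]                          # weight leaves the table
            stamp[j] = i
        y_hat = sum(w.get(j, 0.0) * x[j] for j in x)  # prediction
        for j in x:                                   # SGD step for square loss
            w[j] = w.get(j, 0.0) + 2.0 * eta * (y - y_hat) * x[j]
        return y_hat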
5 Empirical Results
We applied the algorithm, with the efficiently implemented sparsify option, as described in the
previous section, to a selection of datasets, including eleven datasets from the UCI repository [1],
the much larger dataset rcv1 [9], and a private large-scale dataset Big_Ads related to ad interest
prediction. While UCI datasets are useful for benchmark purposes, rcv1 and Big_Ads are more
interesting since they embody real-world datasets with large numbers of features, many of which
are less informative for making predictions than others. The UCI datasets we used do not have
many features, and it seems a large fraction of these features are useful for making predictions. For
comparison purposes and to better demonstrate the behavior of our algorithm, we also added 1000
random binary features to those datasets. Each feature has value 1 with prob. 0.05 and 0 otherwise.
In the first set of experiments, we are interested in how much reduction in the number of features is
possible without affecting learning performance significantly; specifically, we require the accuracy
be reduced by no more than 1% for classification tasks, and the total square loss be increased by no
more than 1% for regression tasks. As common practice, we allowed the algorithm to run on the
training data set for multiple passes with a decaying learning rate. For each dataset, we performed 10-fold cross validation over the training set to identify the best set of parameters, including the learning
rate η, the gravity g, the number of passes over the training set, and the decay of the learning rate across these
passes. This set of parameters was then used on the whole training set. Finally, the learned classifier/regressor was evaluated on the test set. We fixed K = 1 and θ = ∞ in this set of experiments.
The effects of K and θ are included in an extended version of this paper. Figure 1 shows the fraction
of reduced features after sparsification is applied to each dataset. For UCI datasets with randomly
added features, truncated gradient was able to reduce the number of features by a fraction of more
than 90%, except for the ad dataset in which only 71% reduction was observed. This less satisfying
result might be improved by a more extensive parameter search in cross validation. However, if
[Figure 1 here: two bar charts over the datasets (ad, crx, housing, krvskp, magic04, shroom, spam, wbc, wdbc, wpbc, zoo, rcv1, Big_Ads), with bars for the base data and for the versions with 1000 extra random features. Left panel: fraction of features left. Right panel: ratio of AUC.]

Figure 1: Left: the amount of features left after sparsification for each dataset without 1% performance loss. Right: the ratio of AUC with and without sparsification.
we tolerated 1.3% decrease in accuracy (instead of 1%) during cross validation, truncated gradient
was able to achieve 91.4% reduction, indicating a large reduction is still possible at the tiny additional accuracy loss of 0.3%. Even for the original UCI datasets without artificially added features,
some of the less useful features were removed while the same level of performance was maintained.
For classification tasks, we also studied how truncated gradient affects AUC (Area Under the ROC
Curve), a standard metric for classification. We use AUC here because it is insensitive to threshold,
unlike accuracy. Using the same sets of parameters from 10-fold cross validation described above,
we found the criterion was not affected significantly by sparsification and in some cases, it was actually improved, due to removal of some irrelevant features. The ratios of the AUC with and without
sparsification for all classification tasks are plotted in Figure 1. Often these ratios are above 98%.
The previous results do not exercise the full power of the approach presented here because they are
applied to datasets where the standard Lasso is computationally viable. We have also applied this
approach to a large non-public dataset Big_Ads where the goal is predicting which of two ads was
clicked on given context information (the content of ads and query information). Here, accepting a
0.9% increase in classification error allows us to reduce the number of features from about 3 × 10^9
to about 24 × 10^6, a factor of 125 decrease in the number of features.
The next set of experiments compares truncated gradient to other algorithms regarding their abilities
to tradeoff feature sparsification and performance. Again, we focus on the AUC metric in UCI
classification tasks. The algorithms for comparison include: (i) the truncated gradient algorithm with
K = 10 and θ = ∞; (ii) the truncated gradient algorithm with K = 10 and θ = g; (iii) the rounding
algorithm with K = 10; (iv) the L1 -regularized subgradient algorithm with K = 10; and (v) the
Lasso [12] for batch L1 regularization (a publicly available implementation [11] was used). We have
chosen K = 10 since it worked better than K = 1, and this choice was especially important for the
coefficient rounding algorithm. All unspecified parameters were identified using cross validation.
Note that we do not attempt to compare these algorithms on rcv1 and Big_Ads simply because their
sizes are too large for the Lasso and subgradient descent. Figure 2 gives the results on datasets
ad and spambase. Results on other datasets were qualitatively similar. On all datasets, truncated
gradient (with ? = ?) is consistently competitive with the other online algorithms and significantly
outperformed them in some problems, implying truncated gradient is generally effective. Moreover,
truncated gradient with ? = g behaves similarly to rounding (and sometimes better). This was
expected as truncated gradient with ? = g can be regarded as a principled variant of rounding with
valid theoretical justification. It is also interesting to observe the qualitative behavior of truncated
gradient was often similar to LASSO, especially when very sparse weight vectors were allowed
(the left sides in the graphs). This is consistent with theorem 3.2 showing the relation between
these two algorithms. However, LASSO usually performed worse when the allowed number of
nonzero weights was large (the right side of the graphs). In this case, LASSO seemed to overfit
while truncated gradient was more robust to overfitting. The robustness of online learning is often
attributed to early stopping, which has been extensively studied (e.g., in [13]).
Finally, it is worth emphasizing that these comparison experiments shed some light on the relative
strengths of these algorithms in terms of feature sparsification, without considering which one can
be efficiently implemented. For large datasets with sparse features, only truncated gradient and the
ad hoc coefficient rounding algorithm are applicable.
[Figure 2 here: AUC versus number of features (log scale, 10^0 to 10^3) on the ad (left) and spambase (right) datasets, comparing Trunc. Grad. (θ = ∞), Trunc. Grad. (θ = g), Rounding, Sub-gradient, and Lasso.]

Figure 2: Comparison of the five algorithms in two sample UCI datasets.
6 Conclusion
This paper covers the first efficient sparsification technique for large-scale online learning with
strong theoretical guarantees. The algorithm, truncated gradient, is the natural extension of Lasso-style regression to the online-learning setting. Theorem 3.1 proves the technique is sound: it never
harms performance much compared to standard stochastic gradient descent in adversarial situations.
Furthermore, we show the asymptotic solution of one instance of the algorithm is essentially equivalent to Lasso regression, thus justifying the algorithm's ability to produce sparse weight vectors
when the number of features is intractably large. The theorem is verified experimentally in a number of problems. In some cases, especially for problems with many irrelevant features, this approach
achieves a one or two orders of magnitude reduction in the number of features.
References
[1] A. Asuncion and D.J. Newman. UCI machine learning repository, 2007. UC Irvine.
[2] N. Cesa-Bianchi, P.M. Long, and M. Warmuth. Worst-case quadratic loss bounds for prediction using linear functions and gradient descent. IEEE Transactions on Neural Networks, 7(3):604–619, 1996.
[3] C.-T. Chu, S.K. Kim, Y.-A. Lin, Y. Yu, G. Bradski, A.Y. Ng, and K. Olukotun. Map-reduce for machine learning on multicore. In Advances in Neural Information Processing Systems 20, pages 281–288, 2008.
[4] O. Dekel, S. Shalev-Shwartz, and Y. Singer. The Forgetron: A kernel-based perceptron on a fixed budget. In Advances in Neural Information Processing Systems 18, pages 259–266, 2006.
[5] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of ICML-08, pages 272–279, 2008.
[6] J. Kivinen and M.K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997.
[7] J. Langford, L. Li, and A.L. Strehl. Vowpal Wabbit (fast online learning), 2007. http://hunch.net/~vw/.
[8] Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems 19 (NIPS-07), 2007.
[9] D.D. Lewis, Y. Yang, T.G. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397, 2004.
[10] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In Proceedings of ICML-07, pages 807–814, 2007.
[11] K. Sjöstrand. Matlab implementation of LASSO, LARS, the elastic net and SPCA, June 2005. Version 2.0, http://www2.imm.dtu.dk/pubdb/p.php?3897.
[12] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[13] T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proceedings of ICML-04, pages 919–926, 2004.
[14] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of ICML-03, pages 928–936, 2003.
overfit:1 web:1 lack:1 logistic:1 vj0:1 effect:2 counterpart:3 regularization:19 hence:1 nonzero:10 eg:2 deal:1 round:3 during:2 auc:8 maintained:1 coincides:1 criterion:1 generalized:1 demonstrate:1 performs:1 l1:24 duchi:1 novel:1 recently:1 common:3 behaves:1 empirically:3 insensitive:1 jl:1 significant:1 honglak:1 rd:7 outlined:2 similarly:1 lihong:2 hashed:1 longer:1 operating:1 add:1 base:2 optimizing:1 inf:3 irrelevant:2 scenario:1 store:1 inequality:1 binary:2 kwk1:1 yi:5 minimum:2 additional:5 aggregated:1 ii:2 multiple:3 desirable:2 sound:2 full:1 calculation:1 cross:5 long:1 lin:1 justifying:1 controlled:2 impact:1 prediction:11 variant:3 regression:8 essentially:2 expectation:1 rutgers:4 metric:2 chandra:1 kernel:2 sometimes:1 achieved:2 addition:1 want:1 affecting:1 float:1 appropriately:1 extra:4 unlike:1 pass:3 kwi:1 subject:1 ascent:1 integer:4 call:1 www2:1 vw:1 yang:1 intermediate:1 iii:1 spca:1 concerned:2 affect:1 zi:18 lasso:14 identified:1 reduce:4 idea:7 regarding:2 knowing:2 tradeoff:3 grad:4 t0:5 motivated:1 penalty:4 repeatedly:1 matlab:1 generally:2 useful:3 detailed:1 amount:4 extensively:1 reduced:2 http:2 exist:2 problematic:1 notice:1 estimated:2 per:2 tibshirani:1 shall:2 affected:1 key:1 threshold:4 achieving:2 drawn:3 deleted:1 clarity:1 verified:2 v1:7 ram:1 graph:2 subgradient:7 olukotun:1 fraction:5 sum:1 run:4 prob:1 parameterized:1 family:2 throughout:1 comparable:1 entirely:1 bound:10 fold:2 quadratic:1 strength:1 constraint:4 worked:1 wbc:2 min:3 discoverable:1 rcv1:6 department:2 according:2 truncate:1 ball:2 smaller:3 across:1 wi:33 modification:2 making:2 taken:1 computationally:3 ln:1 remains:1 fail:1 needed:1 know:1 singer:3 end:4 available:1 operation:4 apply:1 observe:4 appropriate:1 batch:10 alternative:1 robustness:1 original:1 include:2 hinge:1 k1:12 especially:5 overflow:2 establish:1 prof:1 society:1 added:3 occurs:1 traditional:1 gradient:46 vd:7 trivial:1 reason:2 assuming:1 index:2 ratio:5 difficult:2 potentially:2 stated:2 negative:1 implementation:6 perform:2 bianchi:1 upper:1 datasets:19 benchmark:2 descent:13 truncated:30 situation:3 extended:1 delaying:1 pair:1 extensive:1 connection:2 learned:2 nip:1 address:2 able:3 below:1 usually:1 sparsity:27 program:1 including:5 memory:5 max:2 royal:1 power:1 suitable:1 natural:3 rely:1 regularized:7 predicting:1 indicator:1 kivinen:1 raina:1 altered:1 dtu:1 byte:1 text:1 literature:1 l2:1 removal:1 relative:1 asymptotic:1 loss:21 interesting:2 srebro:1 versus:1 validation:5 degree:1 sufficient:1 consistent:1 tiny:1 strehl:1 wpbc:2 repeat:1 last:2 truncation:4 intractably:1 enjoys:2 side:8 exponentiated:2 perceptron:1 sparse:15 curve:1 default:1 dimension:1 world:1 valid:1 seemed:1 commonly:1 made:1 qualitatively:1 collection:1 spam:2 transaction:1 sj:1 keep:2 active:1 overfitting:1 imm:1 harm:1 xi:5 shwartz:2 continuous:1 un:1 search:1 table:1 reasonably:1 robust:1 elastic:1 artificially:1 vj:16 main:3 whole:1 allowed:3 referred:1 roc:1 fashion:3 tong:1 sub:3 exercise:1 stamp:4 third:1 theorem:12 emphasizing:1 load:1 showing:2 list:1 decay:3 svm:3 dk:1 essential:1 adding:1 sequential:1 magnitude:2 budget:1 wdbc:2 logarithmic:1 simply:8 lazy:1 kxk:1 strand:1 scalar:1 corresponds:1 satisfies:1 astronomically:1 lewis:1 goal:1 presentation:1 consequently:2 replace:1 feasible:1 experimentally:1 hard:1 content:1 included:1 typical:2 specifically:1 reducing:1 operates:2 wt:6 uniformly:1 except:1 called:3 total:5 indicating:1 select:1 rajat:1 evaluate:1 |
2,852 | 3,586 | Adaptive Forward-Backward Greedy Algorithm for
Sparse Learning with Linear Models
Tong Zhang
Statistics Department
Rutgers University, NJ
[email protected]
Abstract
Consider linear prediction models where the target function is a sparse linear combination of a set of basis functions. We are interested in the problem of identifying
those basis functions with non-zero coefficients and reconstructing the target function from noisy observations. Two heuristics that are widely used in practice are
forward and backward greedy algorithms. First, we show that neither idea is adequate. Second, we propose a novel combination that is based on the forward
greedy algorithm but takes backward steps adaptively whenever beneficial. We
prove strong theoretical results showing that this procedure is effective in learning
sparse representations. Experimental results support our theory.
1 Introduction
Consider a set of input vectors x_1, …, x_n ∈ R^d, with corresponding desired output variables y_1, …, y_n. The task of supervised learning is to estimate the functional relationship y ≈ f(x) between the input x and the output variable y from the training examples {(x_1, y_1), …, (x_n, y_n)}. The quality of prediction is often measured through a loss function φ(f(x), y). In this paper, we consider the linear prediction model f(x) = w^T x. As in boosting or kernel methods, nonlinearity can
be introduced by including nonlinear features in x.
We are interested in the scenario that d ≫ n. That is, there are many more features than the number of samples. In this case, unconstrained empirical risk minimization is inadequate because the solution overfits the data. The standard remedy for this problem is to impose a constraint on w to obtain a regularized problem. An important target constraint is sparsity, which corresponds to the (non-convex) L0 regularization, where we define ||w||_0 = |{j : w_j ≠ 0}| = k. If we know the sparsity parameter k, a good learning method is L0 regularization:

ŵ = arg min_{w ∈ R^d} (1/n) Σ_{i=1}^n φ(w^T x_i, y_i)   subject to ||w||_0 ≤ k.    (1)
If k is not known, then one may regard k as a tuning parameter, which can be selected through cross-validation. This method is often referred to as subset selection in the literature. Sparse learning is an essential topic in machine learning, which has attracted considerable interest recently. Generally
speaking, one is interested in two closely related themes: feature selection, or identifying the basis
functions with non-zero coefficients; estimation accuracy, or reconstructing the target function from
noisy observations. It can be shown that the solution of the L0 regularization problem in (1) achieves good prediction accuracy if the target function can be approximated by a sparse w̄. It can also solve the feature selection problem under extra identifiability assumptions. However, a fundamental
difficulty with this method is the computational cost, because the number of subsets of {1, . . . , d}
of cardinality k (corresponding to the nonzero components of w) is exponential in k. There are no
efficient algorithms to solve the subset selection formulation (1).
Due to this computational difficulty, in practice, there are three standard methods for learning sparse
representations by solving approximations of (1). The first approach is L1 -regularization (Lasso).
The idea is to replace the L0 regularization in (1) by L1 regularization. It is the closest convex
approximation to (1). It is known that L1 regularization often leads to sparse solutions. Its performance has been theoretically analyzed recently. For example, if the target is truly sparse, then it
was shown in [10] that under some restrictive conditions referred to as irrepresentable conditions,
L1 regularization solves the feature selection problem. The prediction performance of this method
has been considered in [6, 2, 1, 9]. Despite its popularity, there are several problems with L1 regularization: first, the sparsity is not explicitly controlled, and good feature selection property requires
strong assumptions; second, in order to obtain very sparse solution, one has to use a large regularization parameter that leads to suboptimal prediction accuracy because the L1 penalty not only shrinks
irrelevant features to zero, but also shrinks relevant features to zero. A sub-optimal remedy is to
threshold the resulting coefficients; however this requires additional tuning parameters, making the
resulting procedures more complex and less robust. The second approach to approximately solve
the subset selection problem is forward greedy algorithm, which we will describe in details in Section 2. The method has been widely used by practitioners. The third approach is backward greedy
algorithm. Although this method is widely used by practitioners, there isn't any theoretical analysis when n ≪ d (which is the case we are interested in here). The reason will be discussed later.
In this paper, we are particularly interested in greedy algorithms because they have been widely
used but the effectiveness has not been well analyzed. As we shall explain later, neither the standard
forward greedy idea nor the standard backward greedy idea is adequate for our purpose. However,
the flaws of these methods can be fixed by a simple combination of the two ideas. This leads to a
novel adaptive forward-backward greedy algorithm which we present in Section 3. The general idea
works for all loss functions. For least squares loss, we obtain strong theoretical results showing that
the method can solve the feature selection problem under moderate conditions.
For clarity, this paper only considers the fixed design formulation. To simplify notations in our
description, we will replace the optimization problem in (1) with a more general formulation. Instead of working with n input data vectors x_i ∈ R^d, we work with d feature vectors f_j ∈ R^n (j = 1, …, d), and y ∈ R^n. Each f_j corresponds to the j-th feature component of x_i for i = 1, …, n; that is, f_{j,i} = x_{i,j}. Using this notation, we can generally rewrite (1) in the form ŵ = arg min_{w ∈ R^d} R(w) subject to ||w||_0 ≤ k, where the weight vector w = [w_1, …, w_d] ∈ R^d and R(w) is a real-valued cost function which we are interested in optimizing. For least squares regression, we have R(w) = n^{-1} ||Σ_j w_j f_j − y||_2^2. In the following, we also let e_j ∈ R^d be the vector of zeros, except for the j-th component which is one. For convenience, we also introduce the following notations.
Definition 1.1 Define supp(w) = {j : w_j ≠ 0} as the set of nonzero coefficients of a vector w = [w_1, …, w_d] ∈ R^d. For a weight vector w ∈ R^d, we define the mapping f : R^d → R^n as f(w) = Σ_{j=1}^d w_j f_j. Given f̄ ∈ R^n and F ⊂ {1, …, d}, let ŵ(F, f̄) = arg min_{w ∈ R^d} ||f(w) − f̄||_2^2 subject to supp(w) ⊂ F, and let ŵ(F) = ŵ(F, y) be the solution of the least squares problem using features F.
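To make ŵ(F) concrete: it is an ordinary least squares solve restricted to the active columns. The following Python sketch (the helper names restricted_lsq and cost are ours, not the paper's) is reused by the greedy-algorithm sketches later in this section; it assumes the features are stored as the columns of an n-by-d NumPy array:

import numpy as np

def restricted_lsq(F, feats, y):
    # Least squares fit using only the features indexed by F:
    # w_hat(F) = arg min ||f(w) - y||_2^2 subject to supp(w) in F.
    F = sorted(F)
    w = np.zeros(feats.shape[1])
    if F:
        w_F, *_ = np.linalg.lstsq(feats[:, F], y, rcond=None)
        w[F] = w_F
    return w

def cost(w, feats, y):
    # R(w) = (1/n) * ||f(w) - y||_2^2 for least squares regression.
    return np.sum((feats @ w - y) ** 2) / feats.shape[0]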
2 Forward and Backward Greedy Algorithms
Forward greedy algorithms have been widely used in applications. The basic algorithm is presented
in Figure 1. Although a number of variations exist, they all share the basic form of greedily picking
an additional feature at every step to aggressively reduce the cost function. The intention is to make
most significant progress at each step in order to achieve sparsity. In this regard, the method can be
considered as an approximation algorithm for solving (1).
A major flaw of this method is that it can never correct mistakes made in earlier steps. As an
illustration, we consider the situation plotted in Figure 2 with least squares regression. In the figure,
y can be expressed as a linear combination of f1 and f2 but f3 is closer to y. Therefore using the
forward greedy algorithm, we will find f3 first, then f1 and f2 . At this point, we have already found
all good features as y can be expressed by f1 and f2 , but we are not able to remove f3 selected
in the first step. The above argument implies that forward greedy method is inadequate for feature
selection. The method only works when small subsets of the basis functions {fj } are near orthogonal
2
Input: f_1, …, f_d, y ∈ R^n and ε > 0
Output: F^(k) and w^(k)
let F^(0) = ∅ and w^(0) = 0
for k = 1, 2, …
    let i^(k) = arg min_i min_α R(w^(k−1) + α e_i)
    let F^(k) = {i^(k)} ∪ F^(k−1)
    let w^(k) = ŵ(F^(k))
    if (R(w^(k−1)) − R(w^(k)) ≤ ε) break
end
Figure 1: Forward Greedy Algorithm
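A minimal Python transcription of Figure 1, reusing the hypothetical helpers above. For the least squares cost with normalized columns ((1/n)||f_j||_2^2 = 1), the inner step arg min_i min_α R(w^(k−1) + α e_i) reduces to picking the column most correlated with the current residual, which is what the sketch does:

def forward_greedy(feats, y, eps):
    d = feats.shape[1]
    F, w = set(), np.zeros(d)
    while True:
        r = y - feats @ w                         # residual of the current fit
        i = int(np.argmax(np.abs(feats.T @ r)))   # best single-coordinate step
        w_new = restricted_lsq(F | {i}, feats, y) # w^(k) = full refit on F^(k)
        if cost(w, feats, y) - cost(w_new, feats, y) <= eps:
            break                                 # progress fell below eps
        F.add(i)
        w = w_new
    return F, w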
Figure 2: Failure of Forward Greedy Algorithm (the target y lies in the span of f_1 and f_2, but f_3 is the single basis function closest to y).
In the general case (which is the case we are interested in in this paper), Figure 2 shows that the forward greedy algorithm will make errors that are not corrected later on.
In order to remedy the problem, the so-called backward greedy algorithm has been widely used by
practitioners. The idea is to train a full model with all the features, and greedily remove one feature
(with the smallest increase of cost function) at a time. Although at the first sight, backward greedy
method appears to be a reasonable idea that addresses the problem of forward greedy algorithm, it is
computationally very costly because it starts with a full model with all features. Moreover, there are
no theoretical results showing that this procedure is effective. In fact, under our setting, the method
may only work when d ≪ n (see, for example, [3]), which is not the case we are interested in. In the case d ≫ n, during the first step, we start with a model with all features, which can immediately overfit the data with perfect prediction. In this case, the method has no ability to tell which feature is irrelevant and which feature is relevant because removing any feature still completely overfits the data. Therefore the method will completely fail when d ≫ n, which explains why there is no
theoretical result for this method.
3 Adaptive Forward-Backward Greedy Algorithm
The main strength of forward greedy algorithm is that it always works with a sparse solution explicitly, and thus computationally efficient. Moreover, it does not significantly overfit the data due
to the explicit sparsity. However, a major problem is its inability to correct any error made by the
algorithm. On the other hand, backward greedy steps can potentially correct such an error, but need
to start with a good model that does not completely overfit the data ? it can only correct errors with
a small amount of overfitting. Therefore a combination of the two can solve the fundamental flaws
of both methods. However, a key design issue is how to implement a backward greedy strategy
that is provably effective. Some heuristics exist in the literature, although without any effectiveness
proof. For example, the standard heuristics, described in [5] and implemented in SAS, includes
another threshold ε′ in addition to ε: a feature is deleted if the cost-function increase incurred by performing the deletion is no more than ε′. Unfortunately we cannot provide an effectiveness proof for this heuristic: if the threshold ε′ is too small, then it cannot delete any spurious features introduced in the forward steps; if it is too large, then one cannot make progress because good features are also deleted. In practice it can be hard to pick a good ε′, and even the best choice may be ineffective.
This paper takes a more principled approach, where we specifically design a forward-backward
greedy procedure with adaptive backward steps that are carried out automatically. The procedure
has provably good performance and fixes the drawbacks of forward greedy algorithm illustrated in
Figure 2. There are two main considerations in our approach: we want to take reasonably aggressive
backward steps to remove any errors caused by earlier forward steps, and to avoid maintaining a
large number of basis functions; we want to take backward step adaptively and make sure that any
backward greedy step does not erase the gain made in the forward steps. Our algorithm, which we
refer to as FoBa, is listed in Figure 3. It is designed to balance the above two aspects. Note that we
only take a backward step when the increase of cost function is no more than half of the decrease of
cost function in earlier forward steps. This implies that if we take ℓ forward steps, then no matter how many backward steps are performed, the cost function is decreased by at least an amount of ℓε/2. It follows that if R(w) ≥ 0 for all w ∈ R^d, then the algorithm terminates after no more than 2R(0)/ε steps. This means that the procedure is computationally efficient.
Input: f_1, …, f_d, y ∈ R^n and ε > 0
Output: F^(k) and w^(k)
let F^(0) = ∅ and w^(0) = 0
let k = 0
while true
    let k = k + 1
    // forward step
    let i^(k) = arg min_i min_α R(w^(k−1) + α e_i)
    let F^(k) = {i^(k)} ∪ F^(k−1)
    let w^(k) = ŵ(F^(k))
    let δ^(k) = R(w^(k−1)) − R(w^(k))
    if (δ^(k) ≤ ε)
        k = k − 1
        break
    endif
    // backward step (can be performed after each few forward steps)
    while true
        let j^(k) = arg min_{j ∈ F^(k)} R(w^(k) − w_j^(k) e_j)
        let δ′ = R(w^(k) − w_{j^(k)}^(k) e_{j^(k)}) − R(w^(k))
        if (δ′ > 0.5 δ^(k)) break
        let k = k − 1
        let F^(k) = F^(k+1) − {j^(k+1)}
        let w^(k) = ŵ(F^(k))
    end
end
Figure 3: FoBa: Forward-Backward Greedy Algorithm
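The same ingredients give a compact Python sketch of Figure 3, again using the hypothetical helpers introduced earlier. One simplification relative to the figure: the figure tracks a separate gain δ^(k) for every level k, while the sketch below compares deletions against the most recent forward gain only, so it is an illustration of the idea rather than a literal transcription of the bookkeeping:

def foba(feats, y, eps):
    d = feats.shape[1]
    F, w = set(), np.zeros(d)
    while True:
        # forward step (same selection rule as forward_greedy above)
        r = y - feats @ w
        i = int(np.argmax(np.abs(feats.T @ r)))
        w_fwd = restricted_lsq(F | {i}, feats, y)
        delta = cost(w, feats, y) - cost(w_fwd, feats, y)
        if delta <= eps:
            break                    # forward progress below threshold
        F, w = F | {i}, w_fwd
        # adaptive backward steps: delete while the cost increase stays
        # below half of the recent forward gain
        while len(F) > 1:
            j = min(F, key=lambda j: cost(restricted_lsq(F - {j}, feats, y), feats, y))
            w_del = restricted_lsq(F - {j}, feats, y)
            if cost(w_del, feats, y) - cost(w, feats, y) > 0.5 * delta:
                break                # deletion would erase too much progress
            F, w = F - {j}, w_del
    return F, w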
Now, consider an application of FoBa to the example in Figure 2. Again, in the first three forward
steps, we will be able to pick f3 , followed by f1 and f2 . After the third step, since we are able
to express y using f1 and f2 only, by removing f3 in the backward step, we do not increase the
cost. Therefore at this stage, we are able to successfully remove the incorrect basis f3 while keeping
the good features f1 and f2 . This simple illustration demonstrates the effectiveness of FoBa. In
the following, we formally characterize this intuitive example, and prove the effectiveness of FoBa
for feature selection as well as parameter estimation. Our analysis assumes the least squares loss.
However, it is possible to handle more general loss functions with a more complicated derivation.
We introduce the following definition, which characterizes how linearly independent small subsets
of {f_j} of size k are. For k ≪ n, the number ρ(k) can be bounded away from zero even when d ≫ n. For example, for random basis functions f_j, we may take ln d = O(n/k) and still have ρ(k) bounded away from zero. This quantity is the smallest eigenvalue of the k × k diagonal blocks of the d × d design matrix [f_i^T f_j]_{i,j=1,…,d}, and has appeared in recent analyses of L1 regularization
methods such as [2, 8]. We shall refer to it as the sparse eigenvalue condition. This condition
is the least restrictive condition when compared to other conditions in the literature [1].
Definition 3.1 Define for all 1 ≤ k ≤ d: ρ(k) = inf { (1/n) ||f(w)||_2^2 / ||w||_2^2 : ||w||_0 ≤ k }.
Assumption 3.1 Consider the least squares loss R(w) = (1/n) ||f(w) − y||_2^2. Assume that the basis functions are normalized such that (1/n) ||f_j||_2^2 = 1 for all j = 1, …, d, and assume that {y_i}_{i=1,…,n} are independent (but not necessarily identically distributed) sub-Gaussians: there exists σ ≥ 0 such that for all i and all t ∈ R, E_{y_i} e^{t(y_i − E y_i)} ≤ e^{σ^2 t^2 / 2}.

Both Gaussian and bounded random variables are sub-Gaussian under the above definition. For example, from Hoeffding's inequality, if a random variable ξ ∈ [a, b], then E_ξ e^{t(ξ − Eξ)} ≤ e^{(b−a)^2 t^2 / 8}. If a random variable is Gaussian, ξ ∼ N(0, σ^2), then E_ξ e^{tξ} ≤ e^{σ^2 t^2 / 2}.
The following theorem is stated with an explicit ε for convenience. In applications, one can always run the algorithm with a smaller ε and use cross-validation to determine the optimal stopping point.
Theorem 3.1 Consider the FoBa algorithm in Figure 3, where Assumption 3.1 holds. Assume also that the target is sparse: there exists w̄ ∈ R^d such that w̄^T x_i = E y_i for i = 1, …, n, and F̄ = supp(w̄). Let k̄ = |F̄|, and let ε > 0 be the stopping criterion in Figure 3. Let s ≤ d be an integer which either equals d or satisfies the condition 8k̄ ≤ s ρ(s)^2. If min_{j ∈ supp(w̄)} |w̄_j|^2 ≥ (64/(25 ρ(s))) ε, and if for some η ∈ (0, 1/3), ε ≥ 64 ρ(s)^{-2} σ^2 · 2 ln(2d/η)/n, then with probability larger than 1 − 3η, when the algorithm terminates we have F^(k) = F̄ and ||w^(k) − w̄||_2 ≤ σ sqrt(k̄/(n ρ(k̄))) [1 + sqrt(20 ln(1/η))].
The result shows that one can identify the correct set of features F̄ as long as the weights w̄_j are not close to zero when j ∈ F̄. This condition is necessary for all feature selection algorithms
including previous analysis of Lasso. The theorem can be applied as long as eigenvalues of small
s × s diagonal blocks of the design matrix [f_i^T f_j]_{i,j=1,…,d} are bounded away from zero (i.e., the sparse
eigenvalue condition). This is the situation under which the forward greedy step can make mistakes,
but such mistakes can be corrected using FoBa. Because the conditions of the theorem do not prevent
forward steps from making errors, the example described in Figure 2 indicates that it is not possible
to prove a similar result for the forward greedy algorithm. The result we proved is also better than
that of Lasso, which can successfully select features under irrepresentable conditions of [10]. It is
known that the sparse eigenvalue condition considered here is generally weaker [8, 1].

Our result relies on the assumption that |w̄_j| (j ∈ F̄) is larger than the noise level O(σ sqrt(ln d / n)) in
order to select features effectively. If any nonzero weight is below the noise level, then no algorithm
can distinguish it from zero with large probability. That is, in this case, one cannot reliably perform
feature selection due to the noise. Therefore FoBa is near optimal in terms of its ability to perform reliable feature selection, except for the constant hiding in the O(·) notation. For targets that are not truly sparse,
similar results can be obtained. In this case, it is not possible to correctly identify all the features
with large probability. However, we can show that FoBa can still select part of the features reliably,
with good parameter estimation accuracy. Such results can be found in the full version of the paper,
available from the author's website.
4 Experiments
We compare FoBa (described in Section 3) to forward-greedy and L1-regularization on artificial and real data. The results show that in practice, FoBa is closer to subset selection than the other two approaches, in the sense that FoBa achieves smaller training error given any sparsity level. In order to compare with Lasso, we use the LARS [4] package in R, which generates a path of actions for adding and deleting features along the L1 solution path. For example, a path of {1, 3, 5, −3, …} means that in the first three steps, features 1, 3, 5 are added, and the next step removes feature 3.
Using such a solution path, we can compare Lasso to Forward-greedy and FoBa under the same
framework. Similar to the Lasso path, FoBa also generates a path with both addition and deletion
operations, while forward-greedy algorithm only adds features without deletion.
Our experiments compare the performance of the three algorithms using the corresponding feature
addition/deletion paths. We are interested in features selected by the three algorithms at any sparsity
level k, where k is the desired number of features presented in the final solution. Given a path, we
can keep an active feature set by adding or deleting features along the path. For example, for path
{1, 3, 5, −3}, we have two potential active feature sets of size k = 2: {1, 3} (after two steps) and
{1, 5} (after four steps). We then define the k best features as the active feature set of size k with
the smallest least squares error because this is the best approximation to subset selection (along the
path generated by the algorithm). From the above discussion, we do not have to set ε explicitly in
the FoBa procedure. Instead, we just generate a solution path which is five times as long as the
maximum desired sparsity k, and then generate the best k features for any sparsity level using the
above described procedure.
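The active-set bookkeeping just described is straightforward to implement. A sketch (the signed, 1-based path convention matches the {1, 3, 5, −3} example above; helper functions as before):

def best_k_features(path, feats, y, k):
    # Walk the addition/deletion path and keep the size-k active set
    # with the smallest least squares refit error.
    active, best_F, best_err = set(), None, float("inf")
    for step in path:
        if step > 0:
            active.add(step - 1)        # add feature (0-based index)
        else:
            active.discard(-step - 1)   # delete feature
        if len(active) == k:
            err = cost(restricted_lsq(active, feats, y), feats, y)
            if err < best_err:
                best_F, best_err = set(active), err
    return best_F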
4.1 Simulation Data
Since for real data we do not know the true feature set F̄, simulation is needed to compare feature selection performance. We generate n = 100 data points of dimension d = 500. The target vector w̄ is truly sparse with k̄ = 5 nonzero coefficients generated uniformly from 0 to 10. The noise level is σ^2 = 0.1. The basis functions f_j are randomly generated with moderate correlation: that
is, some basis functions are correlated to the basis functions spanning the true target. Note that
if there is no correlation (i.e., fj are independent random vectors), then both forward-greedy and
L1 -regularization work well because the basis functions are near orthogonal (this is the well-known
case considered in the compressed sensing literature). Therefore in this experiment, we generate
moderate correlation so that the performance of the three methods can be differentiated. Such moderate correlation does not violate the sparse eigenvalue condition in our analysis, but violates the
more restrictive conditions for forward-greedy method and Lasso.
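The paper does not spell out the exact construction of the correlated design, so the generator below is only one plausible instantiation of the stated setup (n = 100, d = 500, k̄ = 5, σ^2 = 0.1), with columns mixed through a few shared factors to induce moderate correlation:

rng = np.random.default_rng(0)
n, d, k_bar, sigma2 = 100, 500, 5, 0.1

# Moderately correlated random design: independent Gaussian columns plus a
# low-rank shared component, normalized so that (1/n) ||f_j||_2^2 = 1.
shared = rng.standard_normal((n, k_bar)) @ rng.standard_normal((k_bar, d))
feats = rng.standard_normal((n, d)) + 0.5 * shared
feats /= np.sqrt((feats ** 2).mean(axis=0))

w_bar = np.zeros(d)
support = rng.choice(d, size=k_bar, replace=False)
w_bar[support] = rng.uniform(0.0, 10.0, size=k_bar)
y = feats @ w_bar + np.sqrt(sigma2) * rng.standard_normal(n)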
                               FoBa           Forward-greedy   L1
least squares training error   0.093 ± 0.02   0.16 ± 0.089     0.25 ± 0.14
parameter estimation error     0.057 ± 0.2    0.52 ± 0.82      1.1 ± 1
feature selection error        0.76 ± 0.98    1.8 ± 1.1        3.2 ± 0.77

Table 1: Performance comparison on simulation data at sparsity level k = 5
Table 1 shows the performance of the three methods (including two versions of FoBa), where we
repeat the experiments 50 times, and report the average ± standard deviation. We use the three
methods to select five best features, using the procedure described above. We report three metrics.
Training error is the squared error of the least squares solution with the selected five features. Parameter estimation error is the 2-norm of the estimated parameter (with the five features) minus the
true parameter. Feature selection error is the number of incorrectly selected features. It is clear
from the table that for this data, FoBa achieves significantly smaller training error than the other two
methods, which implies that it is closest to subset selection. Moreover, the parameter estimation
performance and feature selection performance are also better. The two versions of FoBa perform
very similarly for this data.
4.2 Real Data
Instead of listing results for many datasets without gaining much insight, we present a more detailed
study on a typical dataset, which reflect typical behaviors of the algorithms. Our study shows that
FoBa does what it is designed to do well: that is, it gives a better approximation to subset selection
than either forward-greedy or L1 regularization. Moreover, the difference between aggressive FoBa
and conservative FoBa becomes more significant.
In this study, we use the standard Boston Housing data, which is the housing data for 506 census tracts of Boston from the 1970 census, available from the UCI Machine Learning Database
Repository: http://archive.ics.uci.edu/ml/. Each census tract is a data-point, with 13 features (we
add a constant offset one as the 14th feature), and the desired output is the housing price. In the
experiment, we randomly partition the data into 50 training plus 456 test points. We perform the
experiments 50 times, and for each sparsity level from 1 to 10, we report the average training and test
squared error. The results are plotted in Figure 4. From the results, we can see that FoBa achieves
better training error for any given sparsity, which is consistent with the theory and the design goal of
FoBa. Moreover, it achieves better test accuracy with small sparsity level (corresponding to a more
sparse solution). With large sparsity level (corresponding to a less sparse solution), the test error
increases more quickly with FoBa. This is because it searches a larger space by more aggressively mimicking subset selection, which makes it more prone to overfitting. However, at the best sparsity level
of 2 or 3 (for aggressive and conservative FoBa, respectively), FoBa achieves significantly better test
error. Moreover, we can observe that with a small sparsity level (a more sparse solution), L1 regularization performs poorly, due to the bias caused by using a large L1 penalty.

Figure 4: Performance of the algorithms on Boston Housing data. Left: average training squared error versus sparsity; Right: average test squared error versus sparsity.
For completeness, we also compare FoBa to the backward-greedy algorithm and the classical heuristic forward-backward greedy algorithm as implemented in SAS (see its description at the beginning
of Section 3). We still use the Boston Housing data, but plot the results separately, in order to avoid
cluttering. As we have pointed out, there is no theory for the SAS version of forward-backward
greedy algorithm. It is difficult to select an appropriate backward threshold ε′: a too-small value leads to few backward steps, while a too-large value leads to overly aggressive deletion, so that the procedure terminates very early. In this experiment, we pick a value of 10, because it is a reasonably large quantity that does not lead to an extremely quick termination of the procedure. The performance of the algorithms is reported in Figure 5. From the results, we can see that the backward greedy algorithm performs reasonably well on this problem. Note that for this data, d ≪ n, which is the scenario in which backward greedy does not start with a completely overfitted full model. Still, it is inferior to FoBa at small sparsity levels, which means that some degree of overfitting still occurs. Note that the backward-greedy algorithm cannot be applied in our simulation data experiment, because d ≫ n, which causes immediate overfitting. From the graph, we also see that FoBa is more effective than the SAS implementation of the forward-backward greedy algorithm. The latter does not perform significantly better than the forward-greedy algorithm with our choice of ε′. Unfortunately, using a larger backward threshold ε′ will lead to an undesirable early termination of the algorithm. This is why the provably effective adaptive backward strategies introduced in this paper are superior.
5 Discussion
This paper investigates the problem of learning sparse representations using greedy algorithms. We
showed that neither forward greedy nor backward greedy algorithms are adequate by themselves.
However, through a novel combination of the two ideas, we showed that an adaptive forward-backward
greedy algorithm, referred to as FoBa, can effectively solve the problem under reasonable conditions. FoBa is designed to be a better approximation to subset selection. Under the sparse eigenvalue
condition, we obtained strong performance bounds for FoBa for feature selection and parameter estimation. In fact, to the author's knowledge, in terms of sparsity, the bounds developed for FoBa in
this paper are superior to all earlier results in the literature for other methods.

Figure 5: Performance of greedy algorithms on Boston Housing data. Left: average training squared error versus sparsity; Right: average test squared error versus sparsity.
Our experiments also showed that FoBa achieves its design goal: that is, it gives smaller training
error than either forward-greedy or L1 regularization for any given level of sparsity. Therefore the
experiments are consistent with our theory. In real data, better sparsity helps on some data such
as Boston Housing. However, we shall point out that while FoBa always achieves better training
error for a given sparsity in our experiments on other datasets (thus it achieves our design goal), L1 regularization sometimes achieves better test performance. This is not surprising because sparsity is
small weights, which is encoded in the L1 regularization formulation but not in greedy algorithms,
can lead to better generalization performance on some data (when such a prior is appropriate).
References
[1] Peter Bickel, Yaacov Ritov, and Alexandre Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 2008. To appear.
[2] Florentina Bunea, Alexandre Tsybakov, and Marten H. Wegkamp. Sparsity oracle inequalities for the Lasso. Electronic Journal of Statistics, 1:169-194, 2007.
[3] Christophe Couvreur and Yoram Bresler. On the optimality of the backward greedy algorithm for the subset selection problem. SIAM J. Matrix Anal. Appl., 21(3):797-808, 2000.
[4] Bradley Efron, Trevor Hastie, Iain Johnstone, and Robert Tibshirani. Least angle regression. Annals of Statistics, 32(2):407-499, 2004.
[5] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer, 2001.
[6] Vladimir Koltchinskii. Sparsity in penalized empirical risk minimization. Annales de l'Institut Henri Poincaré, 2008.
[7] Joel A. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Info. Theory, 50(10):2231-2242, 2004.
[8] Cun-Hui Zhang and Jian Huang. Model-selection consistency of the Lasso in high-dimensional linear regression. Technical report, Rutgers University, 2006.
[9] Tong Zhang. Some sharp performance bounds for least squares regression with L1 regularization. The Annals of Statistics, 2009. To appear.
[10] Peng Zhao and Bin Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541-2567, 2006.
| 3586 |@word repository:1 version:4 norm:1 termination:2 simulation:4 pick:3 minus:1 bradley:1 wd:2 surprising:1 attracted:1 partition:1 remove:5 designed:3 plot:1 greedy:57 selected:5 half:1 website:1 beginning:1 completeness:1 boosting:1 zhang:3 five:4 along:3 become:1 incorrect:1 prove:3 introduce:2 theoretically:1 peng:1 behavior:1 themselves:1 nor:2 automatically:1 kwk0:4 cardinality:1 erase:1 hiding:1 cluttering:1 notation:3 moreover:6 bounded:4 what:1 developed:1 nj:1 every:1 k2:1 demonstrates:1 yn:2 appear:2 mistake:3 despite:1 path:12 foba:40 approximately:1 plus:1 koltchinskii:1 dantzig:1 appl:1 practice:4 block:2 implement:1 procedure:11 poincar:1 empirical:2 significantly:3 intention:1 convenience:2 irrepresentable:2 selection:27 close:1 undesirable:1 cannot:5 risk:2 quick:1 marten:1 convex:2 identifying:2 immediately:1 insight:1 iain:1 handle:1 variation:1 annals:3 target:10 element:1 approximated:1 particularly:1 database:1 wj:6 decrease:1 overfitted:1 principled:1 pd:1 complexity:1 solving:2 rewrite:1 f2:7 basis:12 completely:4 derivation:1 train:1 effective:5 describe:1 artificial:1 tell:1 heuristic:5 widely:6 solve:6 valued:1 larger:4 encoded:1 compressed:1 ability:2 statistic:5 noisy:2 final:1 housing:7 eigenvalue:7 propose:1 relevant:2 uci:2 poorly:1 achieve:1 description:2 intuitive:1 crossvalidation:1 perfect:1 tract:2 help:1 stat:1 measured:1 progress:2 sa:6 strong:4 solves:1 implemented:2 implies:3 closely:1 correct:5 drawback:1 f4:1 lars:1 violates:1 bin:1 explains:1 f1:9 fix:1 generalization:1 hold:1 considered:4 ic:1 mapping:1 algorithmic:1 major:2 achieves:10 early:2 smallest:3 bickel:1 purpose:1 estimation:7 successfully:2 bunea:1 minimization:2 always:4 sight:1 gaussian:3 avoid:2 ej:3 l0:4 indicates:1 greedily:2 sense:1 flaw:3 stopping:2 spurious:1 interested:9 provably:3 arg:5 issue:1 equal:1 never:1 f3:7 kw:1 yu:1 mimic:1 report:4 simplify:1 few:2 randomly:2 n1:3 friedman:1 interest:1 fd:2 joel:1 analyzed:2 truly:3 closer:2 necessary:1 minw:2 orthogonal:2 institut:1 desired:4 plotted:2 theoretical:5 delete:1 earlier:4 cost:9 deviation:1 subset:13 inadequate:2 too:4 characterize:1 reported:1 adaptively:2 fundamental:2 siam:1 picking:1 wegkamp:1 quickly:1 w1:2 again:1 squared:6 reflect:1 huang:1 hoeffding:1 f5:1 zhao:1 supp:4 aggressive:4 potential:1 de:1 wk:1 includes:1 coefficient:5 matter:1 explicitly:3 caused:2 eyi:3 later:3 break:3 performed:2 overfits:2 characterizes:1 start:4 complicated:1 identifiability:1 square:10 accuracy:5 listing:1 identify:2 explain:1 minj:2 simultaneous:1 whenever:1 trevor:1 definition:4 failure:1 proof:2 gain:1 proved:1 dataset:1 knowledge:2 efron:1 back:1 appears:1 alexandre:2 supervised:1 formulation:4 ritov:1 shrink:2 just:1 stage:1 correlation:4 overfit:3 working:1 hand:1 tropp:1 ei:2 nonlinear:1 quality:1 k22:2 normalized:1 true:5 remedy:3 regularization:19 aggressively:2 nonzero:4 illustrated:1 during:1 inferior:1 criterion:1 performs:2 l1:20 fj:12 consideration:1 novel:3 recently:2 yaacov:1 superior:2 functional:1 discussed:1 significant:3 refer:2 rd:12 unconstrained:1 consistency:2 tuning:2 similarly:1 pointed:1 nonlinearity:1 etc:1 add:2 closest:2 recent:1 showed:3 irrelevant:2 moderate:4 inf:1 scenario:2 inequality:2 christophe:1 yi:3 additional:2 impose:1 determine:1 fist:1 full:4 violate:1 technical:1 cross:1 long:3 controlled:1 prediction:7 regression:5 basic:2 metric:1 rutgers:3 kernel:1 addition:3 want:2 separately:1 decreased:1 jian:1 extra:1 archive:1 ineffective:1 sure:1 subject:3 kwk22:1 effectiveness:5 
practitioner:3 integer:1 near:3 yk22:2 identically:1 fit:2 hastie:2 lasso:11 suboptimal:1 reduce:1 idea:9 greed:1 penalty:2 peter:1 speaking:1 cause:1 oder:1 adequate:3 action:1 generally:3 clear:1 listed:1 detailed:1 amount:2 tsybakov:2 generate:4 http:1 exist:2 estimated:1 overly:1 popularity:1 correctly:1 tibshirani:2 shall:3 endif:1 express:1 key:1 four:1 threshold:5 deleted:2 clarity:1 prevent:1 neither:3 backward:39 graph:1 annales:1 run:1 package:1 angle:1 tzhang:1 reasonable:2 electronic:1 florentina:1 investigates:1 bound:3 followed:1 distinguish:1 oracle:1 strength:1 constraint:2 generates:2 aspect:1 argument:1 min:3 extremely:1 optimality:1 performing:1 department:1 combination:6 beneficial:1 terminates:3 reconstructing:2 smaller:4 cun:1 making:2 census:3 computationally:3 ln:4 fail:1 needed:1 know:2 end:3 available:2 gaussians:1 operation:1 observe:1 away:3 differentiated:1 appropriate:2 assumes:1 maintaining:1 yoram:1 restrictive:3 classical:1 already:1 quantity:2 added:1 occurs:1 strategy:2 costly:1 diagonal:2 topic:1 considers:1 reason:1 spanning:1 relationship:1 illustration:2 mini:2 balance:1 vladimir:1 difficult:2 unfortunately:2 robert:1 potentially:1 info:1 stated:1 design:8 reliably:2 implementation:1 anal:1 perform:5 observation:2 datasets:2 incorrectly:1 immediate:1 situation:2 y1:2 rn:5 sharp:1 introduced:3 deletion:5 trans:1 address:1 able:4 below:1 appeared:1 sparsity:32 including:3 reliable:1 gaining:1 deleting:2 difficulty:1 regularized:1 carried:1 isn:1 prior:2 literature:5 kf:3 loss:6 bresler:1 versus:4 validation:1 degree:1 consistent:2 share:1 prone:1 penalized:1 repeat:1 keeping:1 bias:1 weaker:1 johnstone:1 sparse:23 distributed:1 regard:2 dimension:1 xn:2 forward:50 made:3 adaptive:6 author:2 henri:1 selector:1 keep:1 ml:1 overfitting:4 active:3 xi:5 search:1 why:2 table:3 reasonably:3 robust:1 complex:1 necessarily:1 main:2 linearly:1 noise:4 x1:2 referred:3 tong:2 sub:3 theme:1 kfj:1 explicit:2 exponential:1 third:2 removing:2 theorem:4 showing:3 sensing:1 offset:1 essential:1 exists:2 adding:2 effectively:2 hui:1 boston:6 expressed:2 springer:1 corresponds:2 satisfies:1 relies:1 goal:3 replace:2 price:1 considerable:1 hard:1 specifically:1 except:2 corrected:2 uniformly:1 wt:2 typical:2 conservative:2 called:1 experimental:1 formally:1 select:5 support:1 latter:1 inability:1 correlated:1 |
2,853 | 3,587 | Unifying the Sensory and Motor Components
of Sensorimotor Adaptation
Adrian Haith
School of Informatics
University of Edinburgh, UK
[email protected]
Carl Jackson
School of Psychology
University of Birmingham, UK
[email protected]
Chris Miall
School of Psychology
University of Birmingham, UK
[email protected]
Sethu Vijayakumar
School of Informatics
University of Edinburgh, UK
[email protected]
Abstract
Adaptation of visually guided reaching movements in novel visuomotor environments (e.g. wearing prism goggles) comprises not only motor adaptation but also substantial sensory adaptation, corresponding to shifts in the
perceived spatial location of visual and proprioceptive cues. Previous computational models of the sensory component of visuomotor adaptation have
assumed that it is driven purely by the discrepancy introduced between visual and proprioceptive estimates of hand position and is independent of
any motor component of adaptation. We instead propose a unified model in
which sensory and motor adaptation are jointly driven by optimal Bayesian
estimation of the sensory and motor contributions to perceived errors. Our
model is able to account for patterns of performance errors during visuomotor adaptation as well as the subsequent perceptual aftereffects. This
unified model also makes the surprising prediction that force field adaptation will elicit similar perceptual shifts, even though there is never any
discrepancy between visual and proprioceptive observations. We confirm
this prediction with an experiment.
1 Introduction
When exposed to a novel visuomotor environment, for instance while wearing prism goggles,
subjects initially exhibit large directional errors during reaching movements but are able to
rapidly adapt their movement patterns and approach baseline performance levels within
around 30-50 reach trials. Such visuomotor adaptation is multifaceted, comprising both
sensory and motor components [5]. The sensory components of adaptation can be measured
through alignment tests in which subjects are asked to localize either a visual target or their
unseen fingertip, with their other (also unseen) fingertip (without being able to make contact
between hands). These tests reveal substantial shifts in the perceived spatial location of both
visual and proprioceptive cues, following adaptation to shifted visual feedback [7].
While a shift in visual spatial perception will be partially reflected in reaches towards visual
targets, sensory adaptation alone cannot fully account for the completeness of visuomotor adaptation, since the shifts in visual perception are always substantially less than the
experimentally-imposed shift. There must therefore be some additional motor component
of adaptation, i.e. some change in the relationship between the planned movement and the issued motor command. This argument is reinforced by the finding that patterns of reach aftereffects following visuomotor adaptation depend strongly on the motor task performed during adaptation [5].

Figure 1: Graphical model of a single reach in a motor adaptation experiment. Motor command u_t, and visual and proprioceptive observations of hand position, v_t and p_t, are available to the subject. Three distinct disturbances affect observations: a motor disturbance r_t^y may affect the hand position y_t given the motor command u_t; visual and proprioceptive disturbances, r_t^v and r_t^p, may affect the respective observations given hand position.
From a modelling point of view, the sensory and motor components of adaptation have
previously only been addressed in isolation of one another. Previously proposed models of
sensory adaptation have assumed that it is driven purely by discrepancies between hand
position estimates from different sensory modalities. Ghahramani et al. [2] proposed a
computational model based on a maximum likelihood principle, details of which we give in
Section 3. On its own, this sensory adaptation model cannot provide a complete description
of visuomotor adaptation since it does not fully account for improvements in performance
from trial to trial. It can, however, be plausibly combined with a conventional error-driven
motor adaptation model in which the performance error is calculated using the maximum
likelihood estimate of hand position. The resulting composite model could plausibly account
for both performance improvements and perceptual shifts during visuomotor adaptation.
According to this view, sensory and motor adaptation are very much independent processes,
one driven by sensory discrepancy and the other driven by (estimated) task performance
error.
In Section 4, we argue for a more unified view of sensory and motor adaptation in which
all three components of adaptation are jointly guided by optimal Bayesian inference of the
corresponding potential sources of error experienced on each trial, given noisy visual and
proprioceptive observations of performance and noisy motor execution. This unified sensory
and motor adaptation model is also able to account for both performance improvements and
perceptual shifts during visuomotor adaptation. However, our unified model also makes the
surprising prediction that a motor disturbance, e.g. an external force applied to hand via
a manipulandum, will also elicit sensory adaptation. The MLE-based model predicts no
such sensory adaptation, since there is never any discrepancy between sensory modalities.
We test this prediction directly with an experiment (Section 5) and find that force field
adaptation does indeed lead to sensory as well as motor adaptation.
2 Modelling framework
Before describing the details of the models, we first outline a basic mathematical framework for describing reaching movements in the context of a motor adaptation experiment,
representing the assumptions common to both the MLE-based and the Bayesian adaptation models. Figure 1 illustrates a graphical model of a single reaching movement during
an adaptation experiment, from the subject's point of view. The multiple components of
visuomotor adaptation described above correspond to three distinct potential sources of
observed outcome error (across both observation modalities) in a single reaching trial.
On trial t, the subject generates a (known) motor command ut . This motor command ut
leads to a final hand position yt , which also depends on some (unknown) motor disturbance
rv
rp
Figure 2: MLE-based sensor adaptation model.
Visual and proprioceptive disturbances rv , rp are
treated as parameters of the model. Estimates r?tv
and r?tp of these parameters are maintained via an
online EM-like procedure.
yt
vt
pt
rty (e.g. an external force applied to the hand) and motor noise ?ut . We assume the final
hand position yt is given by
yt = ut + rty + ?ut ,
(1)
u
2
where ?t ? N (0, ?u ). Although this is a highly simplified description of the forward dynamics of the reaching movement, it can be regarded as a first-order approximation to the true
dynamics. Similar assumptions have proved very successful elsewhere in models of force
field adaptation, e.g. [1]
The experimenter ultimately measures the hand position yt , however this is not directly
observed by the subject. Instead, noisy and potentially shifted observations are available
through visual and proprioceptive modalities,
vt
pt
=
=
yt + rtv + ?vt ,
yt + rtp + ?pt ,
(2)
(3)
where the observation noises ?vt and ?pt are zero-mean and Gaussian with variances ?v2 and
?p2 , respectively.
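Equations (1)-(3) define a simple generative model that is easy to simulate; a minimal Python sketch (function and variable names are ours, not the paper's):

import numpy as np

def simulate_trial(u, r, sigmas, rng):
    # One reach under equations (1)-(3).
    # u: motor command; r = (r_v, r_p, r_y): current disturbances;
    # sigmas = (sigma_v, sigma_p, sigma_u): noise standard deviations.
    r_v, r_p, r_y = r
    sigma_v, sigma_p, sigma_u = sigmas
    y = u + r_y + sigma_u * rng.standard_normal()   # hand position, eq. (1)
    v = y + r_v + sigma_v * rng.standard_normal()   # visual observation, eq. (2)
    p = y + r_p + sigma_p * rng.standard_normal()   # proprioceptive obs., eq. (3)
    return y, v, p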
We denote the full set of potential disturbances on trial t by

r_t = (r_t^v, r_t^p, r_t^y)^T.    (4)

We assume that the subject maintains an internal estimate r̂_t = (r̂_t^v, r̂_t^p, r̂_t^y)^T of the total disturbance r_t and selects his motor commands on each trial accordingly. For reaches to a
visual target located at v_t^*, the appropriate motor command is given by

u_t = v_t^* − r̂_t^v − r̂_t^y.    (5)
Adaptation can be viewed as a process of iteratively updating the disturbance estimate, r̂_t,
following each trial given the new (noisy) observations vt and pt and the motor command
ut . Exactly how the subject uses the information available to infer the current disturbances
is the subject of subsequent sections of this paper.
3 Existing sensory adaptation models
The prevailing view of sensory adaptation centres around the principle of maximum likelihood estimation and was first proposed by Ghahramani et al. [2] in the context of combining
discrepant visual and auditory cues in a target location task. It has nevertheless been widely accepted as a model of how the nervous system deals with visual and proprioceptive
cues. Van Beers et al. [7], for instance, based an analysis of the relative uncertainty of
visual and proprioceptive estimates of hand location on this principle.
We suppose that, given the subject's current estimates of the visual and proprioceptive disturbances, r̂_t^v and r̂_t^p, the visual and proprioceptive estimates of hand position are given by

ŷ_t^v = v_t − r̂_t^v,    (6)
ŷ_t^p = p_t − r̂_t^p,    (7)
respectively. These distinct estimates of hand position are combined via maximum likelihood
estimation [7] into a single fused estimate of hand position. The maximum likelihood estimate
(MLE) of the true hand position yt is given by
ŷ_t^MLE = (σ_p^2/(σ_v^2 + σ_p^2)) ŷ_t^v + (σ_v^2/(σ_v^2 + σ_p^2)) ŷ_t^p.    (8)
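In code, the fusion step of equations (6)-(8) is a single weighted average (a sketch; the names are ours):

def mle_fuse(v, p, r_hat_v, r_hat_p, sigma_v2, sigma_p2):
    y_v = v - r_hat_v                        # visual estimate, eq. (6)
    y_p = p - r_hat_p                        # proprioceptive estimate, eq. (7)
    w_v = sigma_p2 / (sigma_v2 + sigma_p2)   # weight on the visual estimate
    w_p = sigma_v2 / (sigma_v2 + sigma_p2)   # weight on the proprioceptive estimate
    return w_v * y_v + w_p * y_p             # fused MLE, eq. (8)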
Figure 3: Bayesian combined sensory and motor adaptation model. The subject assumes that disturbances vary randomly, but smoothly, from trial to trial.
The MLE-based sensory adaptation model states that subjects adapt their future visual and proprioceptive estimates of hand location towards the MLE in such a way that the MLE itself remains unchanged. The corresponding updates are given by

r̂_{t+1}^v = r̂_t^v + β w_p [ŷ_t^p − ŷ_t^v],    (9)
r̂_{t+1}^p = r̂_t^p + β w_v [ŷ_t^v − ŷ_t^p],    (10)

where β is some fixed adaptation rate and w_v, w_p are the MLE weights from (8). This adaptation principle can be interpreted as an
online expectation-maximization (EM) procedure in the graphical model shown in Figure
2. In this model, rv and rp are treated as parameters of the model. The E-step of the EM
procedure corresponds to finding the MLE of yt and the M-step corresponds to gradient
ascent on the likelihood with respect to r̂^v and r̂^p.
3.1 Extending the MLE model to account for the motor component of adaptation
As it stands, the MLE-based model described above only accounts for sensory adaptation
and does not provide a complete description of sensorimotor adaptation. Visual adaptation
will affect the estimated location of a visual target, and therefore also the planned movement,
but the effect on performance will not be enough to account for complete (or nearly complete)
adaptation. The performance gain from this component of adaptation will be equal to the
discrepancy between the initial visual estimate of hand position and the MLE, which will be
substantially less than the experimentally imposed shift.
This sensory adaptation model can, however, be plausibly combined with a conventional error-driven state space model [6, 1] of motor adaptation to yield an additional motor component of adaptation r̂_t^y. The hand position MLE ŷ_t^MLE can be used in place of the usual uni-modal observation assumed in these models when calculating the endpoint error. The resulting update for the estimated motor disturbance r̂_t^y on trial t is given by

r̂_{t+1}^y = r̂_t^y + α(ŷ_t^* − ŷ_t^MLE),    (11)

where ŷ_t^* = (v^* − r̂_t^v) is the estimated desired hand location, and α is some fixed adaptation rate.
This combined model reflects the view that sensory and motor adaptation are distinct
processes. The sensory adaptation component is driven purely by discrepancy between the
senses, while the motor adaptation component only has access to a single, fused estimate of
hand position and is driven purely by estimated performance error.
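Putting equations (9)-(11) together, one trial of the composite MLE-based model can be sketched as follows. The ordering of the sensory and motor updates within a trial is left implicit in the text, so the sketch applies the sensory updates first; alpha and beta denote the motor and sensory adaptation rates:

def mle_model_step(v, p, v_star, r_hat, sigma_v2, sigma_p2, alpha, beta):
    r_v, r_p, r_y = r_hat
    y_v, y_p = v - r_v, p - r_p                  # eqs. (6)-(7)
    w_v = sigma_p2 / (sigma_v2 + sigma_p2)
    w_p = sigma_v2 / (sigma_v2 + sigma_p2)
    y_mle = w_v * y_v + w_p * y_p                # eq. (8)
    r_v = r_v + beta * w_p * (y_p - y_v)         # eq. (9): vision toward proprioception
    r_p = r_p + beta * w_v * (y_v - y_p)         # eq. (10): proprioception toward vision
    y_star = v_star - r_v                        # estimated desired hand location
    r_y = r_y + alpha * (y_star - y_mle)         # eq. (11): motor update
    return (r_v, r_p, r_y)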
4 Unified Bayesian sensory and motor adaptation model
Figure 4: Model comparison with visuomotor adaptation data. The Bayesian model (solid blue line) and MLE-based model (dashed red line) were fitted to performance data (filled circles) from a visuomotor adaptation experiment [4]. Both models made qualitatively similar predictions about how adaptation was distributed across components.

We propose an alternative approach to solving the sensorimotor adaptation problem. Rather than treat the visual shifts r^v and r^p as parameters, we consider all the disturbances (including r_t^y) as dynamic random variables. We assume that the subject's beliefs about how these disturbances evolve over time are characterised by a trial-to-trial disturbance dynamics model given by

r_{t+1} = A r_t + ξ_t,    (12)
where A is some diagonal matrix and ξ_t is a random drift term with zero mean and diagonal covariance matrix Q, i.e.

ξ_t ∼ N(0, Q).    (13)
A and Q are both diagonal to reflect the fact that each disturbance evolves independently. We denote the diagonal elements of A by a = (a^v, a^p, a^u) and the diagonal of Q by q = (q^v, q^p, q^u). The vector a describes the timescales over which each disturbance persists,
while q describes the amount of random variation from trial to trial, or volatility of each
disturbance. These parameters reflect the statistics of the usual fluctuations in sensory
calibration errors and motor plant dynamics, which the sensorimotor system must adapt to
on an ongoing basis. (Similar assumptions have previously been made elsewhere [3, 4]).
Combining these assumptions with the statistical model of each individual trial described
in Section 2 (and Figure 1), gives rise to a dynamical model of the disturbances and their
impact on reaching movements, across all trials. This model, representing the subject's beliefs about how his sensorimotor performance is liable to vary over time, is illustrated in Figure 3. We propose that the patterns of adaptation and the sensory aftereffects exhibited
by subjects correspond to optimal inference of the disturbances rt within this model, given
the observations on each trial.
The linear dynamics and Gaussian noise of the observer's model mean that exact inference is
straightforward and equivalent to a Kalman filter. The latent state tracked by the Kalman
filter is the vector of disturbances rt = (rtv , rtp , rty )T , with state dynamics given by (12). The
observations vt and pt are related to the disturbances via
[ v_t ]   [ u_t ]   [ 1  0  1 ]
[ p_t ] = [ u_t ] + [ 0  1  1 ] (r_t + ε_t),    (14)

where ε_t = (ε_t^v, ε_t^p, ε_t^u)^T. We can write this in a more conventional form as
z_t = H r_t + H ε_t,    (15)

where z_t = (v_t − u_t, p_t − u_t)^T and H is the matrix of 1's and 0's in equation (14). The observation noise covariance is given by

R = E[(H ε_t)(H ε_t)^T] = [ σ_v^2 + σ_u^2    σ_u^2           ]
                          [ σ_u^2            σ_p^2 + σ_u^2   ].    (16)
The standard Kalman filter update equations can be used to predict how a subject will
update estimates of the disturbances following each trial and therefore how he will select
his actions on the next trial, leading to a full prediction of performance from the first trial
onwards.
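Concretely, the inference step of the unified model is a standard two-observation Kalman filter over the three-dimensional disturbance vector. A sketch using the quantities of equations (12)-(16) (the covariance bookkeeping P is implied by the Kalman filter, though the paper does not write it out):

import numpy as np

def bayes_adapt_step(r_hat, P, v, p, u, A, Q, R):
    # H maps disturbances to the observations z_t = (v_t - u_t, p_t - u_t),
    # as in equation (14).
    H = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])
    # Predict: disturbances drift between trials, eq. (12).
    r_pred = A @ r_hat
    P_pred = A @ P @ A.T + Q
    # Update with this trial's observations, eqs. (15)-(16).
    z = np.array([v - u, p - u])
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    r_new = r_pred + K @ (z - H @ r_pred)
    P_new = (np.eye(3) - K @ H) @ P_pred
    return r_new, P_new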
5 Model comparison and experiments
We have described two alternative models of visuomotor adaptation which we have claimed
can account for both the motor and sensory components of adaptation. We fitted both
models to performance data from a visuomotor adaptation experiment [4] to validate this claim.

Figure 5: (a) Experimental setup; (b) sample trajectories and performance error measure.

In the study from which this data was taken, subjects performed visually guided
reaching movements to a number of targets. Visual feedback of hand position (given via a
cursor on a screen) was rotated by 30° relative to the starting position of each movement.
The mean directional error (averaged over targets and over subjects) over trials is plotted in
Figure 4. The Matlab function lsqnonlin was used to find the parameters for each model
which minimized the squared error between the data and the predictions of each model.
There were 5 free parameters for the MLE-based model (σ_v^2, σ_p^2, σ_u^2, α, β). For the Bayesian model we assumed that all disturbances had the same timescale, i.e. all elements of a were the same, leaving 7 free parameters (σ_v^2, σ_p^2, σ_u^2, q^v, q^p, q^u, a). The results of the fits are shown
in Figure 4. The spread of adaptation across components of the model was qualitatively
similar between the two models, although no data on perceptual aftereffects was available
from this study for quantitative comparison. The Bayesian model clearly displays a closer fit
to the data and the Akaike information criterion (AIC) confirmed that this was not simply
due to extra parameters (AIC = 126.7 for the Bayesian model vs AIC = 159.6 for the
MLE-based model).
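For reference, the AIC comparison quoted above needs each model's maximized likelihood; for a least squares fit with Gaussian errors this reduces, up to an additive constant that cancels between models fitted to the same data, to the standard expression below (our own restatement, not from the paper):

import numpy as np

def aic_gaussian(residuals, n_params):
    # AIC = 2k - 2 ln(max likelihood); with the noise variance profiled out
    # this is 2k + n ln(RSS / n), up to a model-independent constant.
    n = residuals.size
    rss = float(np.sum(residuals ** 2))
    return 2 * n_params + n * np.log(rss / n)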
Although the Bayesian model appears to describe the data better, this analysis is by no
means conclusive. Furthermore, the similar scope of predictions between the two models
means that gathering additional data from alignment tests may not provide any further
leverage to distinguish between the two models. There is, however, a more striking difference
in predictions between the two models. While the MLE-based model predicts there will be
sensory adaptation only when there is a discrepancy between the senses, the Bayesian model
predicts that there will also be sensory adaptation in response to a motor disturbance (such as an external force applied to the hand). Just as a purely visual disturbance can lead
to a multifaceted adaptive response, so can a purely motor disturbance, with both motor
and sensory components predicted, even though there is never any discrepancy between the
senses. This prediction enables us to distinguish decisively between the two models.
5.1
Experimental Methods
We experimentally tested the hypothesis that force field adaptation would lead to sensory
adaptation. We tested 11 subjects who performed a series of trials consisting of reaching
movements interleaved with perceptual alignment tests.
Subjects grasped the handle of a robotic manipulandum with their right hand. The hand
was not visible directly, but a cursor displayed via a mirror/flat-screen monitor setup (Figure 5(a)) was exactly co-planar and aligned with the handle of the manipulandum. In
the movement phase, subjects made an out-and-back reaching movement towards a visual
target with their right hand. In the visual localization phase, a visual target was displayed
pseudorandomly in one of 5 positions and the subjects moved their left fingertip to the
perceived location of the target. In the proprioceptive localization phase, the right hand
was passively moved to a random target location, with no visual cue of its position, and
subjects moved their left fingertip to the perceived location of the right hand. Left fingertip
Figure 6: (a) Average lateral (in direction of the perturbation) localization error across
subjects before vs after adaptation, for vision and proprioception. Error bars indicate
standard errors. (b) The same plots for the y-direction.
positions were recorded using a Polhemus motion tracker. Neither hand was directly visible
at any time during the experiment.
Subjects were given 25 baseline trials with zero external force, after which a force field was
gradually introduced. A leftward lateral force Fx was applied to the right hand during the
reaching phase. The magnitude of the force was proportional to the forward velocity $\dot{y}$ of
the hand, i.e.
$$F_x = -a\dot{y}. \qquad (17)$$
The force was applied only on the outward part of the movement (i.e. only when $\dot{y} > 0$).
After steadily incrementing a during 50 adaptation trials, the force field was then kept
constant at $a = 0.3$ N/(cm s$^{-1}$) for a further 25 post-adaptation test trials. All subjects
received a catch trial at the very end in which the force field was turned off.
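A minimal sketch of this perturbation schedule (trial counts from the text; treating the ramp as linear is an assumption):

    def lateral_force(a, y_dot):
        """Velocity-dependent force of eq. (17), applied on outward movements only."""
        return -a * y_dot if y_dot > 0 else 0.0

    def field_gain(trial, n_baseline=25, n_ramp=50, a_max=0.3):
        """Gain a in N/(cm/s): zero at baseline, ramped over 50 trials, then constant."""
        if trial < n_baseline:
            return 0.0
        if trial < n_baseline + n_ramp:
            return a_max * (trial - n_baseline + 1) / n_ramp
        return a_max

    # e.g. the force on a hand moving outward at 20 cm/s in the test phase:
    f = lateral_force(field_gain(80), y_dot=20.0)   # -6 N, i.e. leftward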
The particular force field used was chosen so that the cursor trajectories (and motor commands required to counter the perturbation) would be as close as possible to those used
to generate the linear trajectories required when exposed to a visuomotor shift (such as
that described in [7]). Figure 5(b) shows two trajectories from a typical subject, one from
the post-adaptation test phase and one from the catch trial after adaptation. In the initial
outward part of the catch-trial trajectory the movement is very straight, implying
that motor commands similar to those required by a visuomotor shift were used.
5.2 Results
We compared the average performance in the visual and proprioceptive alignment tests
before and after adaptation in the velocity-dependent force field. The results are summarized
in Figure 6(a). Most subjects exhibited small but significant shifts in performance in both
the visual and proprioceptive alignment tests. Two subjects exhibited shifts which were
more than two standard deviations away from the average shift and were excluded from the
analysis. We found significant lateral shifts in both visual and proprioceptive localization
error in the direction of the perturbation (both p < .05, one-tailed paired t-test). Figure
6(b) shows the same data for the direction perpendicular to the perturbation. Although the
initial localization bias was high, there was no significant shift in this direction following
adaptation.
We quantified each subject's performance on each trial as the perpendicular distance of the
furthest point in the trajectory from the straight line between the starting point and the
target (Fig. 5(b)). We fitted the Bayesian and MLE-based models to the data following the
same procedure as before, only this time penalizing the disagreement between the model
and the data for the alignment tests, in addition to the reaching performance. Figure 7
illustrates the averaged data along with the model fits. Both models were able to account
reasonably well for the trends in reaching performance across trials (7(a)). Figures 7(b) and
7(c) show the model fits for the perceptual localization task. The Bayesian model is able to
account for both the extent of the shift and the timecourse of this shift during adaptation.
Figure 7: Trial-by-trial data and model fits. (a) Reaching error, (b) Visual alignment test
error, (c) Proprioceptive alignment test error. The Bayesian (solid blue lines) and MLE-based (dashed red lines) models were fitted to averaged data across subjects (circles).
Since there was never any sensory discrepancy, the MLE-based model predicted no change
in the localization task.
6 Conclusions and discussion
Our experimental results demonstrate that adaptation of reaching movements in a force
field results in shifts in visual and proprioceptive spatial perception. This novel finding
strongly supports the Bayesian model, which predicted such adaptation, and refutes the
MLE-based model, which did not. The Bayesian model was able to account for the trends
in both reaching performance and alignment test errors on a trial-to-trial basis.
Several recent models have similarly described motor adaptation as a process of Bayesian
inference of the potential causes of observed error. Körding et al. [3] proposed a model of
saccade adaptation and Krakauer et al. [4] modelled visuomotor adaptation based on this
principle. Our work extends the framework of these models to include multiple observation
modalities instead of just one, and multiple classes of disturbances which affect the different
observation modalities in different, experimentally measurable ways.
Overall, our results suggest that the nervous system solves the problems of sensory and
motor adaptation in a principled and unified manner, supporting the view that sensorimotor
adaptation proceeds according to optimal estimation of encountered disturbances.
References
[1] Opher Donchin, Joseph T Francis, and Reza Shadmehr. Quantifying generalization from trial-by-trial behavior of adaptive systems that learn with basis functions: theory and experiments in human motor control. J Neurosci, 23(27):9032–9045, Oct 2003.
[2] Z. Ghahramani, D.M. Wolpert, and M.I. Jordan. Computational models for sensorimotor integration. In P.G. Morasso and V. Sanguineti, editors, Self-Organization, Computational Maps and Motor Control, pages 117–147. North-Holland, Amsterdam, 1997.
[3] Konrad P. Körding, Joshua B. Tenenbaum, and Reza Shadmehr. The dynamics of memory as a consequence of optimal adaptation to a changing body. Nat Neurosci, 10(6):779–786, June 2007.
[4] John W Krakauer, Pietro Mazzoni, Ali Ghazizadeh, Roshni Ravindran, and Reza Shadmehr. Generalization of motor learning depends on the history of prior action. PLoS Biol, 4(10):e316, Sep 2006.
[5] M.C. Simani, L.M. McGuire, and P.N. Sabes. Visual-shift adaptation is composed of separable sensory and task-dependent effects. J Neurophysiol, 98:2827–2841, Nov 2007.
[6] K A Thoroughman and R Shadmehr. Learning of action through adaptive combination of motor primitives. Nature, 407(6805):742–747, Oct 2000.
[7] Robert J van Beers, Daniel M Wolpert, and Patrick Haggard. When feeling is more important than seeing in sensorimotor adaptation. Curr Biol, 12(10):834–837, May 2002.
How memory biases affect information transmission:
A rational analysis of serial reproduction
Jing Xu and Thomas L. Griffiths
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720-1650
{jing.xu,tom_griffiths}@berkeley.edu
Abstract
Many human interactions involve pieces of information being passed from one
person to another, raising the question of how this process of information transmission is affected by the capacities of the agents involved. In the 1930s, Sir
Frederic Bartlett explored the influence of memory biases in "serial reproduction"
of information, in which one person's reconstruction of a stimulus from memory
becomes the stimulus seen by the next person. These experiments were done using relatively uncontrolled stimuli such as pictures and stories, but suggested that
serial reproduction would transform information in a way that reflected the biases
inherent in memory. We formally analyze serial reproduction using a Bayesian
model of reconstruction from memory, giving a general result characterizing the
effect of memory biases on information transmission. We then test the predictions of this account in two experiments using simple one-dimensional stimuli.
Our results provide theoretical and empirical justification for the idea that serial
reproduction reflects memory biases.
1 Introduction
Most of the facts that we know about the world are not learned through first-hand experience, but
are the result of information being passed from one person to another. This raises a natural question:
how are such processes of information transmission affected by the capacities of the agents involved?
Decades of memory research have charted the ways in which our memories distort reality, changing
the details of experiences and introducing events that never occurred (see [1] for an overview). We
might thus expect that these memory biases would affect the transmission of information, since such
a process relies on each person remembering a fact accurately.
The question of how memory biases affect information transmission was first investigated in detail
in Sir Frederic Bartlett's "serial reproduction" experiments [2]. Bartlett interpreted these studies
as showing that people were biased by their own culture when they reconstruct information from
memory, and that this bias became exaggerated through serial reproduction. Serial reproduction
has become one of the standard methods used to simulate the process of cultural transmission, and
several subsequent studies have used this paradigm (e.g., [3, 4]). However, this phenomenon has not
been systematically and formally analyzed, and most of these studies have used complex stimuli that
are semantically rich but hard to control. In this paper, we formally analyze and empirically evaluate
how information is changed by serial reproduction and how this process relates to memory biases.
In particular, we provide a rational analysis of serial reproduction (in the spirit of [5]), considering
how information should change when passed along a chain of rational agents.
Biased reconstructions are found in many tasks. For example, people are biased by their knowledge
of the structure of categories when they reconstruct simple stimuli from memory. One common
effect of this kind is that people judge stimuli that cross boundaries of two different categories to
be further apart than those within the same category, although the distances between the stimuli
are the same in the two situations [6]. However, biases need not reflect suboptimal performance.
If we assume that memory is solving the problem of extracting and storing information from the
noisy signal presented to our senses, we can analyze the process of reconstruction from memory as
a Bayesian inference. Under this view, reconstructions should combine prior knowledge about the
world with the information provided by noisy stimuli. Use of prior knowledge will result in biases,
but these biases ultimately make memory more accurate [7].
If this account of reconstruction from memory is true, we would expect the same inference process
to occur at every step of serial reproduction. The effects of memory biases should thus be accumulated. Assuming all participants share the same prior knowledge about the world, serial reproduction
should ultimately reveal the nature of this knowledge. Drawing on recent work exploring other processes of information transmission [8, 9], we show that a rational analysis of serial reproduction
makes exactly this prediction. To test the predictions of this account, we explore the special case
where the task is to reconstruct a one-dimensional stimulus using the information that it is drawn
from a fixed Gaussian distribution. In this case we can precisely characterize behavior at every step
of serial reproduction. Specifically, we show that this defines a simple first-order autoregressive, or
AR(1), process, allowing us to draw on a variety of results characterizing such processes. We use
these predictions to test the Bayesian models of serial reproduction in two laboratory experiments
and show that the predictions hold for serial reproduction both between- and within-subjects.
The plan of the paper is as follows. Section 2 lays out the Bayesian account of serial reproduction.
In Section 3 we show how this Bayesian account corresponds to the AR(1) process. Sections 4 and
5 present two experiments testing the model's prediction that serial reproduction reveals memory
biases. Section 6 concludes the paper.
2 A Bayesian view of serial reproduction
We will outline our Bayesian approach to serial reproduction by first considering the problem of
reconstruction from memory, and then asking what happens when the solution to this problem is
repeated many times, as in serial reproduction.
2.1 Reconstruction from memory
Our goal is to give a rational account of reconstruction from memory, considering the underlying
computational problem and finding the optimal solution to that problem. We will formulate the
problem of reconstruction from memory as a problem of inferring and storing accurate information
about the world from noisy sensory data. Given a noisy stimulus $x$, we seek to recover the true state
of the world $\theta$ that generated that stimulus, storing an estimate $\hat\theta$ in memory. The optimal solution
to this problem is provided by Bayesian statistics. Previous experience provides a "prior" distribution on possible states of the world, $p(\theta)$. On observing $x$, this can be updated to a "posterior"
distribution $p(\theta|x)$ by applying Bayes' rule
$$p(\theta|x) = \frac{p(x|\theta)\,p(\theta)}{\int p(x|\theta)\,p(\theta)\,d\theta} \qquad (1)$$
where $p(x|\theta)$, the "likelihood", indicates the probability of observing $x$ if $\theta$ is the true state of
the world. Having computed $p(\theta|x)$, a number of schemes could be used to select an estimate $\hat\theta$
to store. Perhaps the simplest such scheme is sampling from the posterior, with $\hat\theta \sim p(\theta|x)$.
This analysis provides a general schema for modeling reconstruction from memory, applicable for
any form of $x$ and $\theta$. A simple example is the special case where $x$ and $\theta$ vary along a single
continuous dimension. In the experiment presented later in the paper we take this dimension to be
the width of a fish, showing people a fish and asking them to reconstruct its width from memory, but
the dimension of interest could be any subjective quantity such as the perceived length, loudness,
duration, or brightness of a stimulus. Assume that previous experience establishes that $\theta$ has a
Gaussian distribution, with $\theta \sim N(\mu_0, \sigma_0^2)$, and that the noise process means that $x$ has a Gaussian
distribution centered on $\theta$, $x|\theta \sim N(\theta, \sigma_x^2)$. In this case, we can use standard results from Bayesian
statistics [10] to show that the outcome of Equation 1 is also a Gaussian distribution, with $p(\theta|x)$
being $N(\lambda x + (1-\lambda)\mu_0, \; \lambda\sigma_x^2)$, where $\lambda = 1/(1 + \sigma_x^2/\sigma_0^2)$.
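These posterior formulas amount to only a few lines of code; the sketch below (parameter values are arbitrary illustrations) returns both the posterior parameters and a sampled reconstruction:

    import numpy as np

    def reconstruct(x, mu0, var0, var_x, rng=np.random.default_rng(0)):
        """Gaussian posterior over theta given stimulus x, plus a posterior sample."""
        lam = 1.0 / (1.0 + var_x / var0)          # lambda = 1/(1 + sigma_x^2/sigma_0^2)
        post_mean = lam * x + (1.0 - lam) * mu0   # compromise between data and prior
        post_var = lam * var_x
        theta_hat = rng.normal(post_mean, np.sqrt(post_var))  # estimate stored in memory
        return theta_hat, post_mean, post_var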
The analysis presented in the previous paragraph makes a clear prediction: that the reconstruction $\hat\theta$
should be a compromise between the observed value $x$ and the mean of the prior $\mu_0$, with the terms
of the compromise being set by the ratio of the noise in the data $\sigma_x^2$ to the uncertainty in the prior $\sigma_0^2$.
This model thus predicts a systematic bias in reconstruction that is not a consequence of an error of
memory, but the optimal solution to the problem of extracting information from a noisy stimulus.
Huttenlocher and colleagues [7] have conducted several experiments testing this account of memory
biases, showing that people's reconstructions interpolate between observed stimuli and the mean of
a trained distribution as predicted. Using a similar notion of reconstruction from memory, Hemmer
and Steyvers [11] have conducted experiments to show that people formed appropriate Bayesian
reconstructions for realistic stimuli such as images of fruit, and seemed capable of drawing on prior
knowledge at multiple levels of abstraction in doing so.
2.2 Serial reproduction
With a model of how people might approach the problem of reconstruction from memory in hand,
we are now in a position to analyze what happens in serial reproduction, where the stimuli that
people receive on one trial are the results of a previous reconstruction. On the $n$th trial, a participant
sees a stimulus $x_n$. The participant then computes $p(\theta|x_n)$ as outlined in the previous section, and
stores a sample $\hat\theta$ from this distribution in memory. When asked to produce a reconstruction, the
participant generates a new value $x_{n+1}$ from a distribution that depends on $\hat\theta$. If the likelihood,
$p(x|\theta)$, reflects perceptual noise, then it is reasonable to assume that $x_{n+1}$ will be sampled from this
distribution, substituting $\hat\theta$ for $\theta$. This value of $x_{n+1}$ is the stimulus for the next trial.
Viewed from this perspective, serial reproduction defines a stochastic process: a sequence of random
variables evolving over time. In particular, it is a Markov chain, since the reconstruction produced
on the current trial depends only on the value produced on the preceding trial (e.g. [12]). The
transition probabilities of this Markov chain are
$$p(x_{n+1}|x_n) = \int p(x_{n+1}|\theta)\,p(\theta|x_n)\,d\theta \qquad (2)$$
being the probability that $x_{n+1}$ is produced as a reconstruction for the stimulus $x_n$. If this Markov
chain is ergodic (see [12] for details) it will converge to a stationary distribution $\pi(x)$, with $p(x_n|x_1)$
tending to $\pi(x_n)$ as $n \to \infty$. That is, after many reproductions, we should expect the probability
of seeing a particular stimulus being produced as a reproduction to stabilize to a fixed distribution.
Identifying this distribution will help us understand the consequences of serial reproduction.
The transition probabilities given in Equation 2 have a special form, being the result of sampling
a value from the posterior distribution $p(\theta|x_n)$ and then sampling a value from the likelihood
$p(x_{n+1}|\theta)$. In this case, it is possible to identify the stationary distribution of the Markov chain
[8, 9]. The stationary distribution of this Markov chain is the prior predictive distribution
$$\pi(x) = \int p(x|\theta)\,p(\theta)\,d\theta \qquad (3)$$
being the probability of observing the stimulus $x$ when $\theta$ is sampled from the prior. This happens
because this Markov chain is a Gibbs sampler for the joint distribution on $x$ and $\theta$ defined by
multiplying $p(x|\theta)$ and $p(\theta)$ [9]. This gives a clear characterization of the consequences of serial
reproduction: after many reproductions, the stimuli being produced will be sampled from the prior
distribution assumed by the participants. Convergence to the prior predictive distribution provides
a formal justification for the traditional claims that serial reproduction reveals cultural biases, since
those biases would be reflected in the prior.
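This convergence is easy to verify by simulation. In the sketch below (all numbers illustrative), each iteration samples an estimate from the posterior and a reconstruction from the likelihood; the long-run samples match the prior predictive distribution $N(\mu_0, \sigma_x^2 + \sigma_0^2)$ regardless of the starting stimulus:

    import numpy as np

    rng = np.random.default_rng(1)
    mu0, var0, var_x = 4.0, 1.0, 0.25        # prior mean/variance, perceptual noise
    lam = 1.0 / (1.0 + var_x / var0)

    x, samples = 10.0, []                    # start far from the prior mean
    for n in range(20000):
        theta_hat = rng.normal(lam * x + (1 - lam) * mu0, np.sqrt(lam * var_x))
        x = rng.normal(theta_hat, np.sqrt(var_x))   # reconstruction = next stimulus
        samples.append(x)

    tail = np.array(samples[1000:])
    print(tail.mean(), tail.var())           # approx. mu0 and var_x + var0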
In the special case of reconstruction of stimuli that vary along a single dimension, we can also
analytically compute the probability density functions for the transition probabilities and stationary
distribution. Applying Equation 2 using the results summarized in the previous section, we have
$x_{n+1}|x_n \sim N(\mu_n, \sigma_x^2 + \sigma_n^2)$, where $\mu_n = \lambda x_n + (1-\lambda)\mu_0$ and $\sigma_n^2 = \lambda\sigma_x^2$. Likewise, Equation
3 indicates that the stationary distribution is $N(\mu_0, \sigma_x^2 + \sigma_0^2)$. The rate at which the Markov chain
converges to the stationary distribution depends on the value of $\lambda$. When $\lambda$ is close to 1, convergence
is slow since $\mu_n$ is close to $x_n$. As $\lambda$ gets closer to 0, $\mu_n$ is more influenced by $\mu_0$ and convergence is
faster. Since $\lambda = 1/(1 + \sigma_x^2/\sigma_0^2)$, the convergence rate thus depends on the ratio of the participant's
perceptual noise to the variance of the prior distribution, $\sigma_x^2/\sigma_0^2$. More perceptual noise results in
faster convergence, since the specific value of $x_n$ is trusted less; while more uncertainty in the prior
results in slower convergence, since $x_n$ is given greater weight.
3 Serial reproduction of one-dimensional stimuli as an AR(1) process
The special case of serial reproduction of one-dimensional stimuli can also give us further insight
into the consequences of modifying our assumptions about storage and reconstruction from memory, by exploiting a further property of the underlying stochastic process: that it is a first-order
autoregressive process, abbreviated to AR(1). The general form of an AR(1) process is
$$x_{n+1} = c + \phi x_n + \epsilon_{n+1} \qquad (4)$$
where $\epsilon_{n+1} \sim N(0, \sigma_\epsilon^2)$.
Equation 4 has the familiar form of a regression equation, predicting one
variable as a linear function of another, plus Gaussian noise. It defines a stochastic process because
each variable is being predicted from that which precedes it in sequence. AR(1) models are widely
used to model timeseries data, being one of the simplest models for capturing temporal dependency.
Just as showing that a stochastic process is a Markov chain provides information about its dynamics
and asymptotic behavior, showing that it reduces to an AR(1) process provides access to a number
of results characterizing the properties of these processes. If $|\phi| < 1$ the process has a stationary
distribution that is Gaussian with mean $c/(1-\phi)$ and variance $\sigma_\epsilon^2/(1-\phi^2)$. The autocovariance at
a lag of $n$ is $\phi^n \sigma_\epsilon^2/(1-\phi^2)$, and thus decays geometrically in $\phi$. An AR(1) process thus converges
to its stationary distribution at a rate determined by $\phi$.
It is straightforward to show that the stochastic process defined by serial reproduction, where a sample from the posterior distribution on $\theta$ is stored in memory and a new value $x$ is sampled from the
likelihood, is an AR(1) process. Using the results in the previous section, at the $(n+1)$th iteration
$$x_{n+1} = (1-\lambda)\mu_0 + \lambda x_n + \epsilon_{n+1} \qquad (5)$$
where $\lambda = 1/(1 + \sigma_x^2/\sigma_0^2)$ and $\epsilon_{n+1} \sim N(0, \sigma_x^2 + \sigma_n^2)$ with $\sigma_n^2 = \lambda\sigma_x^2$. This is an AR(1) process
with $c = (1-\lambda)\mu_0$, $\phi = \lambda$, and $\sigma_\epsilon^2 = \sigma_x^2 + \sigma_n^2$. Since $\lambda$ is less than 1 for any $\sigma_0^2$ and $\sigma_x^2$, we can
find the stationary distribution by substituting these values into the expressions given above.
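As a check, substituting these values into the AR(1) formulas recovers the stationary distribution derived in Section 2.2:
$$\frac{c}{1-\phi} = \frac{(1-\lambda)\mu_0}{1-\lambda} = \mu_0, \qquad \frac{\sigma_\epsilon^2}{1-\phi^2} = \frac{(1+\lambda)\sigma_x^2}{(1-\lambda)(1+\lambda)} = \frac{\sigma_x^2}{1-\lambda} = \sigma_x^2 + \sigma_0^2,$$
since $1-\lambda = \sigma_x^2/(\sigma_x^2 + \sigma_0^2)$; this matches the prior predictive distribution $N(\mu_0, \sigma_x^2 + \sigma_0^2)$.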
Identifying serial reproduction for single-dimensional stimuli as an AR(1) process allows us to relax
our assumptions about the way that people are storing and reconstructing information. The AR(1)
model can accommodate different assumptions about memory storage and reconstruction.1 All these
ways of characterizing serial reproduction lead to the same basic prediction: that repeatedly reconstructing stimuli from memory will result in convergence to a distribution whose mean corresponds
to the mean of the prior. In the remainder of the paper we test this prediction.
In the following sections, we present two serial reproduction experiments conducted with stimuli
that vary along only one dimension (width of fish). The first experiment follows previous research
in using a between-subjects design, with the reconstructions of one participant serving as the stimuli
for the next. The second experiment uses a within-subjects design in which each person reconstructs
stimuli that they themselves produced on a previous trial, testing the potential of this design to reveal
the memory biases of individuals.
4 Experiment 1: Between-subjects serial reproduction
This experiment directly tested the basic prediction that the outcome of serial reproduction will
reflect people's priors. Two groups of participants were trained on different distributions of a one-dimensional quantity (the width of a schematic fish) that would serve as a prior for reconstructing
similar stimuli from memory. The two distributions differed in their means, allowing us to examine
whether the mean of the distribution produced by serial reproduction is affected by the prior.
¹ In the memorization phase, the participant's memory $\hat\theta$ can be 1) a sample from the posterior distribution $p(\theta|x_n)$, as assumed above, or 2) a value such that $\hat\theta = \arg\max_\theta p(\theta|x_n)$, which is also the expected value of the Gaussian posterior $p(\theta|x_n)$. In the reproduction phase, the participant's reproduction $x_{n+1}$ can be 1) a noisy reconstruction, which is a sample from the likelihood $p(x_{n+1}|\hat\theta)$, as assumed above, or 2) a perfect reconstruction from memory, such that $x_{n+1} = \hat\theta$. This defines four different models of serial reproduction, all of which correspond to AR(1) processes that differ only in the variance $\sigma_\epsilon^2$ (although maximizing $p(\theta|x_n)$ and then storing a perfect reconstruction is degenerate, with $\sigma_\epsilon^2 = 0$). In all four cases serial reproduction thus converges to a Gaussian stationary distribution with mean $\mu_0$, but with different variances.
4.1 Method
The experiment followed the same basic procedure as Bartlett's classic experiments [2]. Participants
were 46 members of the university community. Stimuli were the same as those used in [7]: fish with
elliptical bodies and fan-shaped tails. All the fish stimuli varied only in one dimension, the width of
the fish, ranging from 2.63cm to 5.76cm. The stimuli were presented on an Apple iMac computer
by a Matlab script using PsychToolBox extensions [13, 14].
Participants were first trained to discriminate fish-farm and ocean fish. The width of the fish-farm
fish was normally distributed and that of the ocean fish was uniformly distributed between 2.63 and
5.75cm. Two groups of participants were trained on one of the two distributions of fish-farm fish
(prior distributions A and B), with different means and same standard deviations. In condition A,
$\mu_0 = 3.66$ cm, $\sigma_0 = 1.3$ cm; in condition B, $\mu_0 = 4.72$ cm, $\sigma_0 = 1.3$ cm.
In the training phase, participants first received a block of 60 trials. On each trial, a stimulus was
presented at the center of a computer monitor and participants tried to predict which type of fish it
was by pressing one of the keys on the keyboard and they received feedback about the correctness of
the prediction. The participants were then tested for 20 trials on their knowledge of the two types of
fish. The procedure was the same as the training block except there was no feedback. The training-testing loop was repeated until the participants reached 80% correct in using the optimal decision
strategy. If a participant could not pass the test after five iterations, the experiment halted.
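The optimal decision strategy referred to here is the Bayes rule that compares posterior odds under the two trained distributions; a sketch with the condition A parameters (the base rate p_farm is an assumption of the sketch, since the mix of trial types is not specified here):

    from scipy.stats import norm, uniform

    farm = norm(loc=3.66, scale=1.3)                # condition A fish-farm widths (cm)
    ocean = uniform(loc=2.63, scale=5.75 - 2.63)    # ocean fish: uniform on [2.63, 5.75]

    def optimal_choice(width, p_farm=0.5):
        """Label a fish by comparing posterior odds of the two types."""
        farm_odds = p_farm * farm.pdf(width)
        ocean_odds = (1.0 - p_farm) * ocean.pdf(width)
        return 'farm' if farm_odds > ocean_odds else 'ocean'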
In the reproduction phase, the participants were told that they were to record fish sizes for the fish
farm. On each trial, a fish stimulus was flashed at the center of the screen for 500ms and then
disappeared. Another fish of random size appeared at one of four possible positions near the center
of the screen, and the participants used the up and down arrow keys to adjust the width of the fish until
they thought it matched the fish they just saw. The fish widths seen by the first participant in each
condition were 120 values randomly sampled from a uniform distribution from 2.63 to 5.75cm.
The first participant tried to memorize these random samples and then gave the reconstructions.
Each subsequent participant in each condition was then presented with the data generated by the
previous participant and they again tried to reconstruct those fish widths. Thus, each participant's
data constitute one slice of time in 120 serial reproduction chains.
At the end of the experiment, the participants were given a final 50-trial test to check if their prior
distributions had drifted. Ten participants' data were excluded from the chains based on three criteria: 1) final testing score was less than 80% of optimal performance; 2) the difference between
the reproduced value and stimulus shown was greater than the difference between the largest and
the smallest stimuli in the training distribution on any trial; 3) there were no adjustments from the
starting value of the fish width for more than half of the trials.
4.2 Results and Discussion
There were 18 participants in each condition, resulting in 18 generations of serial reproduction. Figure 1 shows the initial and final distributions of the reconstructions, together with the autoregression
plots for the two conditions. The mean reconstructed fish widths produced by the first participants
in conditions A and B were 4.22 and 4.21cm respectively, which were not statistically significantly
different (t(238) = 0.09, p = 0.93). For the final participants in each chain, the mean reconstructed
fish widths were 3.20 and 3.68cm respectively, a statistically significant difference (t(238) = 6.93,
p < 0.001). The difference in means matches the direction of the difference in the training provided
in conditions A and B, although the overall size of the difference is reduced and the means of the
stationary distributions were lower than those of the distributions used in training.
The autoregression plots provide a further quantitative test of the predictions of our Bayesian model.
The basic prediction of the model is that reconstruction should look like regression, and this is
exactly what we see in Figure 1. The correlation between the stimulus $x_n$ and its reconstruction $x_{n+1}$
is the correlation between the AR(1) model's predictions and the data, and this correlation was high
in both conditions, being 0.91 and 0.86 (p < 0.001) for conditions A and B respectively. Finally,
we examined whether the Markov assumption underlying our analysis was valid, by computing the
Figure 1: Initial and final distributions for the two conditions in Experiment 1. (a) The distribution of
stimuli and Gaussian fits to reconstructions for the first participants in the two conditions. (b) Gaussian fits to reconstructions generated by the 18th participants in each condition. (c) Autoregression
plot for $x_{n+1}$ as a function of $x_n$ for the two conditions.
correlation between $x_{n+1}$ and $x_{n-1}$ given $x_n$. The resulting partial correlation was low for both
conditions, being 0.04 and 0.01 in conditions A and B respectively (both p < 0.05).
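Both diagnostics reduce to simple regressions. The sketch below computes the lag-1 correlation and the partial correlation of $x_{n+1}$ with $x_{n-1}$ given $x_n$ by correlating residuals, one way of carrying out the check described above:

    import numpy as np

    def markov_diagnostics(x):
        """Lag-1 correlation and partial correlation corr(x_{n+1}, x_{n-1} | x_n)."""
        x = np.asarray(x, dtype=float)
        x_prev, x_mid, x_next = x[:-2], x[1:-1], x[2:]
        r_lag1 = np.corrcoef(x_mid, x_next)[0, 1]

        def residuals(y, z):
            slope, intercept = np.polyfit(z, y, 1)
            return y - (slope * z + intercept)

        partial = np.corrcoef(residuals(x_next, x_mid),
                              residuals(x_prev, x_mid))[0, 1]
        return r_lag1, partial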
5 Experiment 2: Within-subjects serial reproduction
The between-subjects design allows us to reproduce the process of information transmission, but
our analysis suggests that serial reproduction might also have promise as a method for investigating
the memory biases of individuals. To explore the potential of this method, we tested the model
with a within-subjects design, in which a participant's reproduction in the current trial became the
stimulus for that same participant in a later trial. Each participant's responses over the entire experiment thus produced a chain of reproductions. Each participant produced three such chains, starting
from widely separated initial values. Control trials and careful instructions were used so that the
participants would not realize that some of the stimuli were their own reproductions.
5.1 Method
Forty-six undergraduates from the university research participation pool participated in the experiment.
The basic procedure was the same as Experiment 1, except in the reproduction phase. Each participant's responses in this phase formed three chains of 40 trials. The chains started with three original
stimuli with width values of 2.63cm, 4.19cm, and 5.76cm, then in the following trials, the stimuli
participants saw were their own reproductions in the previous trials in the same chain. To prevent
participants from realizing this fact, chain order was randomized and the Markov chain trials were
intermixed with 40 control trials in which widths were drawn from the prior distribution.
5.2 Results and Discussion
Participants' data were excluded based on the same criteria as used in Experiment 1, with a lower
testing score of 70% of optimal performance and one additional criterion relevant to the within-subjects case: participants were also excluded if the three chains did not converge, with the criterion
for convergence being that the lower and upper chains must cross the middle chain. After these
screening procedures, 40 participants' data were accepted, with 21 in condition A and 19 in condition B. It took most participants about 20 trials for the chains to converge, so only the second half of
the chains (trials 21-40) were analyzed further.
The locations of the stationary distributions were measured by computing the means of the reproduced fish widths for each participant. For conditions A (3.66cm) and B (4.72cm), the average of
these means was 3.32 and 4.01 cm respectively (t(38) = 2.41, p = 0.021).
Figure 2: Stimuli, training distributions and stationary distributions for Experiment 2. Each data
point in the right panel shows the mean of the last 20 iterations for a single participant. Boxes show
the 95% confidence interval around the mean for each condition.
Figure 3: Chains and stationary distributions for individual participants from the two conditions.
(a) The three Markov chains generated by each participant, starting from three different values.
(b) Training distributions for each condition. (c) Gaussian fits for the last 20 iterations of each
participant's data. (d) Autoregression for the last 20 iterations of each participant's data.
The right panel of Figure 2 shows the mean values for these two conditions. The basic prediction of the model was borne
out: participants converged to distributions that differed significantly in their means when they were
exposed to data suggesting a different prior. However, the means were in general lower than those
of the prior. This effect was less prominent in the control trials, which produced means of 3.63 and
4.53cm respectively.2
Figure 3 shows the chains, training distributions, the Gaussian fits and the autoregression for the
second half of the Markov chains for two participants in the two conditions. Correlation analysis
showed that the AR(1) model's predictions are highly correlated with the data generated by each
participant, with mean correlations being 0.90 and 0.81 for conditions A and B respectively. The
² Since both experiments produced stationary distributions with means lower than those of the training distributions, we conducted a separate experiment examining the reconstructions that people produced without
training. The mean fish width produced by 20 participants was 3.43cm, significantly less than the mean of the
initial values of each chain, 4.19cm (t(19) = 3.75, p < 0.01). This result suggested that people seem to have
an a priori expectation that fish will have widths smaller than those used as our category means, suggesting that
people in the experiments are using a prior that is a compromise between this expectation and the training data.
correlations are significant for all participants. The mean partial correlation between $x_{t+1}$ and $x_{t-1}$
given xt was low, being 0.07 and 0.11 for conditions A and B respectively, suggesting that the
Markov assumption was satisfied. The partial correlations were significant (p < 0.05) for only one
participant in condition B.
6 Conclusion
We have presented a Bayesian account of serial reproduction, and tested the basic predictions of this
account using two strictly controlled laboratory experiments. The results of these experiments are
consistent with the predictions of our account, with serial reproduction converging to a distribution
that is influenced by the prior distribution established through training. Our analysis connects the
biases revealed by serial reproduction with the more general Bayesian strategy of combining prior
knowledge with noisy data to achieve higher accuracy [7]. It also shows that serial reproduction can
be analyzed using Markov chains and first-order autoregressive models, providing the opportunity
to draw on a rich body of work on the dynamics and asymptotic behavior of such processes. These
connections allows us to provide a formal justification for the idea that serial reproduction changes
the information being transmitted in a way that reflects the biases of the people transmitting it,
establishing that this result holds under several different characterizations of the processes involved
in storage and reconstruction from memory.
Acknowledgments
This work was supported by grant number 0704034 from the National Science Foundation.
References
[1] D. L. Schacter, J. T. Coyle, G. D. Fischbach, M. M. Mesulam, and L. E. Sullivan, editors. Memory
distortion: How minds, brains, and societies reconstruct the past. Harvard University Press, Cambridge,
MA, 1995.
[2] F. C. Bartlett. Remembering: a study in experimental and social psychology. Cambridge University Press,
Cambridge, 1932.
[3] A. Bangerter. Transformation between scientific and social representations of conception: The method of
serial reproduction. British Journal of Social Psychology, 39:521–535, 2000.
[4] J. Barrett and M. Nyhof. Spreading nonnatural concepts: The role of intuitive conceptual structures in
memory and transmission of cultural materials. Journal of Cognition and Culture, 1:69–100, 2001.
[5] J. R. Anderson. The adaptive character of thought. Erlbaum, Hillsdale, NJ, 1990.
[6] A. M. Liberman, F. S. Cooper, D. P. Shankweiler, and M. Studdert-Kennedy. Perception of the speech
code. Psychological Review, 74:431–461, 1967.
[7] J. Huttenlocher, L. V. Hedges, and J. L. Vevea. Why do categories affect stimulus judgment? Journal of
Experimental Psychology: General, pages 220–241, 2000.
[8] T. L. Griffiths and M. L. Kalish. A Bayesian view of language evolution by iterated learning. In B. G.
Bara, L. Barsalou, and M. Bucciarelli, editors, Proceedings of the Twenty-Seventh Annual Conference of
the Cognitive Science Society, pages 827–832. Erlbaum, Mahwah, NJ, 2005.
[9] T. L. Griffiths and M. L. Kalish. Language evolution by iterated learning with bayesian agents. Cognitive
Science, 31:441–480, 2007.
[10] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian data analysis. Chapman & Hall, New
York, 1995.
[11] P. Hemmer and M. Steyvers. A bayesian account of reconstructive memory. In Proceedings of the 30th
Annual Conference of the Cognitive Science Society, 2008.
[12] J. R. Norris. Markov Chains. Cambridge University Press, Cambridge, UK, 1997.
[13] D. H. Brainard. The Psychophysics Toolbox. Spatial Vision, 10:433–436, 1997.
[14] D. G. Pelli. The VideoToolbox software for visual psychophysics: Transforming numbers into movies.
Spatial Vision, 10:437–442, 1997.
Bayesian Model of Behaviour in Economic Games
Brooks King-Casas
Computational Psychiatry Unit
Baylor College of Medicine.
Houston, TX 77030. USA
[email protected]
Debajyoti Ray
Computation and Neural Systems
California Institute of Technology
Pasadena, CA 91125. USA
[email protected]
Peter Dayan
Gatsby Computational Neuroscience Unit
University College London
London. WC1N 3AR. UK
[email protected]
P. Read Montague
Human NeuroImaging Lab
Baylor College of Medicine.
Houston, TX 77030. USA
[email protected]
Abstract
Classical game theoretic approaches that make strong rationality assumptions have
difficulty modeling human behaviour in economic games. We investigate the role
of finite levels of iterated reasoning and non-selfish utility functions in a Partially
Observable Markov Decision Process model that incorporates game theoretic notions of interactivity. Our generative model captures a broad class of characteristic
behaviours in a multi-round Investor-Trustee game. We invert the generative process for a recognition model that is used to classify 200 subjects playing this game
against randomly matched opponents.
1 Introduction
Trust tasks such as the Dictator, Ultimatum and Investor-Trustee games provide an empirical basis
for investigating social cooperation and reciprocity [11]. Even in completely anonymous settings,
human subjects show rich patterns of behavior that can be seen in terms of such personality concepts
as charity, envy and guilt. Subjects also behave as if they model these aspects of their partners
in games, for instance acting to avoid being taken advantage of. Different subjects express quite
different personalities, or types, and also have varying abilities at modelling their opponents.
The burgeoning interaction between economic psychology and neuroscience requires formal treatments of these issues. From the perspective of neuroscience, such treatments can provide a precise
quantitative window into neural structures involved in assessing utilities of outcomes, capturing risk
and probabilities associated with interpersonal interactions, and imputing intentions and beliefs to
others. In turn, evidence from brain responses associated with these factors should elucidate the neural algorithms of complex interpersonal choices, and thereby illuminate economic decision-making.
Here, we consider a sequence of paradigmatic trust tasks that have been used to motivate a variety
of behaviorally-based economic models. In brief, we provide a formalization in terms of partially
observable Markov decision processes, approximating type-theoretic Bayes-Nash equilibria [8] using finite hierarchies of belief, where subjects' private types are construed as parameters of their
inequity averse utility functions [2]. Our inference methods are drawn from machine learning.
Figure 1a shows a simple one-round trust game. In this, an Investor is paired against a randomly
assigned Trustee. The Investor can either choose a safe option with a low payoff for both, or take
a risk and pass the decision to the Trustee who can either choose to defect (and thus keep more for
herself) or choose the fair option that leads to more gains for both players (though less profitable
for herself alone than if she defected). Figure 1b shows the more sophisticated game we consider,
namely a multi-round, sequential, version of the Trust game [15].
Figure 1: (a) In a simple Trust game, the Investor can take a safe option with a payoff of $[Investor=20, Trustee=20] (i.e. the Investor gets $20 and the Trustee gets $20). The game ends if the Investor chooses the safe option; alternatively, he can pass the decision to the Trustee. The Trustee can now choose a fair option $[25,25] or choose to defect $[15,30]. (b) In the multi-round version of the Trust game, the Investor gets $20 at every round. He can invest any (integer) part; this quantity is trebled on the way to the Trustee. In turn, she has the option of repaying any (integer) amount of her resulting allocation to the Investor. The game continues for 10 rounds.
The fact that even in a purely anonymized setting, Investors invest at all, and Trustees reciprocate at
all in games such as that of figure 1a, is a challenge to standard, money-maximizing doctrines (which
expect to find the Nash equilibrium where neither happens), and pose a problem for modeling. One
popular strategy is to retain the notion that subjects attempt to optimize their utilities, but to include
in these utilities social factors that penalize cases in which opponents win either more (crudely envy,
parameterized by $\alpha$) or less (guilt, parameterized by $\beta$) than themselves [2]. One popular inequity-aversion utility function [2] characterizes player $i$ by the type $T_i = (\alpha_i, \beta_i)$ of her utility function:
$$U(\alpha_i, \beta_i) = x_i - \alpha_i \max\{(x_j - x_i), 0\} - \beta_i \max\{(x_i - x_j), 0\} \qquad (1)$$
where $x_i$, $x_j$ are the amounts received by players $i$ and $j$ respectively.
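The utility in Equation 1 is a one-liner to implement; the sketch below applies it to the Trustee's choice in Figure 1a (the alpha and beta values are illustrative assumptions):

    def inequity_averse_utility(x_i, x_j, alpha_i, beta_i):
        """Equation (1): alpha_i penalizes disadvantageous inequity (envy),
        beta_i penalizes advantageous inequity (guilt)."""
        envy = alpha_i * max(x_j - x_i, 0.0)
        guilt = beta_i * max(x_i - x_j, 0.0)
        return x_i - envy - guilt

    # Trustee choosing between the fair split and defection in Figure 1a:
    fair = inequity_averse_utility(25, 25, alpha_i=1.0, beta_i=0.5)    # 25.0
    defect = inequity_averse_utility(30, 15, alpha_i=1.0, beta_i=0.5)  # 22.5

A sufficiently guilt-averse Trustee thus prefers the fair option even though it pays her less.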
In the multi-round version of figure 1b, reputation formation comes into play [15]. Investors have
the possibility of gaining higher rewards from giving money to the Trustee; and, at least until the
final round, the Trustee has an incentive to maintain a reputation of trustworthiness in order to coax
the Investor to offer more (against any Nash tendencies associated with solipsistic utility functions).
Social utility functions such as that of equation 1 mandate probing, belief manipulation and the like.
We cast such tasks as Bayesian Games. As in the standard formulation [8], players know their own
types but not those of their opponents; dyads are thus playing games of incomplete information.
A player also has prior beliefs about their opponent that are updated in a Bayesian manner after
observing the opponent's actions. Their own actions also influence their opponent's beliefs. This
leads to an infinite hierarchy of beliefs: what the Trustee thinks of the Investor; what the Trustee
thinks the Investor thinks of him; what the Trustee thinks the Investor thinks the Trustee thinks of
her; and so on. If players have common prior beliefs over the possible types in the game, and this
prior is common knowledge, then (at least one) subjective equilibrium known as the Bayes-Nash
Equilibrium (BNE), exists [8]. Algorithms to compute BNE solutions have been developed but, in
the general case, are NP-hard [6] and thus infeasible for complex multi-round games [9].
One obvious approach to this complexity is to consider finite rather than infinite belief hierarchies.
This has both theoretical and empirical support. First, a finite hierarchy of beliefs can provably
approximate the equilibrium solution that arises in an infinite belief hierarchy arbitrarily closely [10],
an idea that has indeed been employed in practice to compute equilibria in a multi-agent setting [5].
Second, based on a whole wealth of games such as the p-Beauty game [11], it has been suggested
that human subjects only employ a very restricted number of steps of strategic thinking. According
to cognitive hierarchy theory, a celebrated account of this, this number is on average a mere 1.5 [13].
In order to capture the range of behavior exhibited by subjects in these games, we built a finite
belief hierarchy model, using inequity averse utility functions in the context of a partially observable
hidden Markov model of the ignorance each subject has about its opponent's type and in the light of
sequential choice. We used inference strategies from machine learning to find approximate solutions
to this model. In this paper, we use this generative model to investigate the qualitative classes of
behaviour that can emerge in these games.
Figure 2: Each player's decision-making requires solving a POMDP, which involves solving the opponent's POMDP. Higher-order beliefs are required as each player's action influences the opponent's beliefs, which in turn influence their policy.
2 Partially Observable Markov Games
As in the framework of Bayesian games, player i's inequity aversion type $T_i = (\alpha_i, \beta_i)$ is known to it, but not to the opponent. Player i does have a prior distribution over the type of the other player j, $b_i^{(0)}(T_j)$; and, if suitably sophisticated, can also have higher-order priors over the whole hierarchy of recursive beliefs about types. We denote the collection of priors as $\vec{b}_i^{(0)} = \{b_i^{(0)}, b_i^{(0)\prime}, b_i^{(0)\prime\prime}, \ldots\}$.
Play proceeds sequentially, with player i choosing action $a_i^{(t)}$ at time t according to the expected future value of this choice. In this (hidden) Markovian setting, this value, called a Q-value, depends on the stage (given the finite horizon), the current beliefs of the player $\vec{b}_i^{(t)}$ (which are sufficient statistics for the past observations), and the policies $P(a_i^{(t)} = a \mid D^{(t)})$ (which depend on the observations $D^{(t)}$) of both players up to time t:
$$Q_i^{(t)}(\vec{b}_i^{(t)}, a_i^{(t)}) = U_i^{(t)}(\vec{b}_i^{(t)}, a_i^{(t)}) + \sum_{a_j^{(t)} \in A_j^{(t)}} P(a_j^{(t)} \mid \{D^{(t)}, a_i^{(t)}\}) \sum_{a_i^{(t+1)} \in A_i^{(t+1)}} Q_i^{(t+1)}(\vec{b}_i^{(t+1)}, a_i^{(t+1)})\, P(a_i^{(t+1)} \mid \{D^{(t)}, a_i^{(t)}, a_j^{(t)}\}) \quad (2)$$
where we arbitrarily define the softmax policy,
$$P(a_i^{(t)} = a \mid D^{(t)}) = \exp\big(\beta Q_i^{(t)}(\vec{b}_i^{(t)}, a)\big) \Big/ \sum_b \exp\big(\beta Q_i^{(t)}(\vec{b}_i^{(t)}, b)\big) \quad (3)$$
akin to Quantal Response Equilibrium [12], which depends on player i's beliefs about player j, which are, in turn, updated using Bayes' rule based on the likelihood function $P(a_j^{(t)} \mid \{D^{(t)}, a_i^{(t)}\})$:
$$b_i^{(t+1)}(T_j) = P(T_j \mid a_j^{(t)}, a_i^{(t)}, b_i^{(t)}) = P(a_j^{(t)} \mid T_j, a_i^{(t)})\, b_i^{(t)}(T_j) \Big/ \sum_{T_j'} P(a_j^{(t)} \mid T_j', a_i^{(t)})\, b_i^{(t)}(T_j') \quad (4)$$
switching between history-based ($D^{(t)}$) and belief-based ($b_i^{(t)}(T_j)$) representations. Given the interdependence of beliefs and actions, we expect to see probing (to find out the type and beliefs of one's opponent) and belief manipulation (being nice now to take advantage of one's opponent later).
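A minimal sketch of the two core updates, the softmax policy of equation (3) and the Bayesian type update of equation (4), over a discrete set of opponent types; the inverse-temperature argument `beta` and the toy likelihood table are illustrative assumptions, not the paper's code:

```python
import numpy as np

def softmax_policy(q_values, beta=1.0):
    """Equation (3): action probabilities from Q-values."""
    z = beta * np.asarray(q_values, dtype=float)
    z -= z.max()                      # numerical stabilization
    p = np.exp(z)
    return p / p.sum()

def update_belief(belief, likelihood, a_j):
    """Equation (4): posterior over opponent types T_j after observing a_j.

    belief:     shape (n_types,), current b_i(T_j)
    likelihood: shape (n_types, n_actions), P(a_j | T_j, a_i)
    """
    posterior = belief * likelihood[:, a_j]
    return posterior / posterior.sum()

# Two Trustee types (beta_T = 0.3 and 0.7) with a uniform prior; a generous
# return (action index 1) is assumed more likely under the guiltier type.
belief = np.array([0.5, 0.5])
likelihood = np.array([[0.8, 0.2],    # beta_T = 0.3
                       [0.3, 0.7]])   # beta_T = 0.7
belief = update_belief(belief, likelihood, a_j=1)
print(belief)                          # -> approx [0.22, 0.78]
```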
If the other player's decisions are assumed to emerge from equivalent softmax choices, then for the subject to calculate this likelihood, they must also solve their opponent's POMDP. This leads to an infinite recursion (illustrated in fig. 2). In order to break this, we assume that each player has k levels of strategic thinking as in the Cognitive Hierarchy framework [13]. Thus each k-level player assumes that his opponent is a (k-1)-level player. At the lowest level of the recursion, the 0-level player uses a simple likelihood to update their opponent's beliefs.
The utility $U_i^{(t)}(a_i^{(t)})$ is calculated at every round for each player i for action $a_i^{(t)}$ by marginalizing over the current beliefs $b_i^{(t)}$. It is extremely challenging to compute with belief states, since they are probability distributions, and are therefore continuous-valued rather than discrete. To make this computationally reasonable, we discretize the values of the types. As an example, if there are only two types for a player, the belief state, which is a continuous probability distribution over the interval [0, 1], is discretized to take K values $b_{i1} = 0, \ldots, b_{iK} = 1$. The utility of an action is obtained by marginalizing over the beliefs as:
$$U_i^{(t)}(a_i^{(t)}) = \sum_{k=1}^{K} b_{ik}^{(t)}\, Q_i^{(t)}(b_{ik}, a_i^{(t)}) \quad (5)$$
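A sketch of the marginalization in equation (5) with the belief discretized onto K grid points; the generic `q_fn` interface stands in for a Q-value evaluated at a grid belief and is our assumption:

```python
import numpy as np

def expected_utility(belief_weights, grid_beliefs, action, q_fn):
    """Equation (5): U(a) = sum_k b_ik * Q(b_ik, a), with the continuous
    belief over [0, 1] replaced by K grid points."""
    return sum(w * q_fn(b, action)
               for w, b in zip(belief_weights, grid_beliefs))

K = 5
grid_beliefs = np.linspace(0.0, 1.0, K)               # b_i1 = 0, ..., b_iK = 1
belief_weights = np.array([0.1, 0.2, 0.4, 0.2, 0.1])  # discretized belief

# Toy Q-function: investing (action 1) pays off only if the opponent is
# believed to be cooperative.
q_fn = lambda b, a: (2.0 * b - 0.5) if a == 1 else 0.0
print(expected_utility(belief_weights, grid_beliefs, 1, q_fn))  # 0.5
```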
Furthermore, we solve the resulting POMDP using a mixture of explicit expansion of the tree from
the current start point to three stages ahead, and a stochastic, particle-filter-based scheme (as in [7]),
from four stages ahead to the end of the game.
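The following schematic shows one way such a mixed evaluation could look (exact expansion for the first stages, Monte-Carlo rollouts thereafter); the function names, the generic `step` interface and the particle count are our illustrative scaffolding, not the authors' implementation:

```python
import random

def q_value(state, action, depth, horizon, actions, step,
            exact_depth=3, n_particles=20):
    """Value of `action` in `state`: expand the tree exactly for
    `exact_depth` stages, then fall back to sampled rollouts."""
    if depth >= horizon:
        return 0.0
    reward, next_state = step(state, action)
    if depth + 1 >= horizon:
        return reward
    if depth < exact_depth:
        # explicit expansion: enumerate all continuations
        future = max(q_value(next_state, a, depth + 1, horizon,
                             actions, step, exact_depth, n_particles)
                     for a in actions)
    else:
        # stochastic, particle-style estimate: average random rollouts
        future = sum(rollout(next_state, depth + 1, horizon, actions, step)
                     for _ in range(n_particles)) / n_particles
    return reward + future

def rollout(state, depth, horizon, actions, step):
    total = 0.0
    while depth < horizon:
        reward, state = step(state, random.choice(actions))
        total += reward
        depth += 1
    return total

# Toy usage: every move pays 1 until the 10-round horizon.
step = lambda s, a: (1.0, s)
print(q_value(0, 0, depth=0, horizon=10, actions=[0, 1], step=step))  # 10.0
```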
One characteristic of this explicit process model, or algorithmic approach, is that it is possible to
consider what happens when the priors of the players differ. In this case, as indeed also for the
case of only a finite belief hierarchy, there is typically no formal Bayes-Nash equilibrium. We also
verified our algorithm against the QRE and BNE solutions provided by GAMBIT ([14]) on a 1 and
2 round Trust game for k = 1, 2 respectively. However unlike the BNE solution in the extensive
form game, our algorithm gives rise to belief manipulation and effects at the end of the game.
3 Generative Model for Investor-Trustee Game
Reputation-formation plays a particularly critical role in the Investor-Trustee game, with even the
most selfish players trying to benefit from cooperation, at least in the initial rounds. In order to
reduce complexity in analyzing this, we set $\alpha_I = \beta_I = 0$ (i.e., a purely selfish Investor) and consider 2 values of $\beta_T$ (0.3 and 0.7) such that in the last round the Trustee with type $\beta_T = 0.3$ will not return any amount to the Investor and will choose the fair outcome if $\beta_T = 0.7$. We generate a rich tapestry of behavior by varying the prior expectations as to $\beta_T$ and the values of strategic level k (0, 1, 2) for the players.
3.1 Factors Affecting Behaviour
As an example, fig. 3 shows the evolution of the Players' Q-values and 1st-order beliefs of the Investor and 2nd-order beliefs of the Trustee (i.e., her beliefs as to the Investor's beliefs about her value of $\beta_T$) over the course of a single game. Here, both players have kI = kT = 1 (i.e. they are strategic players), but the Trustee is actually less guilty ($\beta_T = 0.3$).
In the first round, the Investor gives $15, and receives back $30 from the Trustee. This makes
the Investor's beliefs about $\beta_T$ go from being uniform to being about 0.75 for $\beta_T = 0.7$ and 0.25 for $\beta_T = 0.3$ (showing the success of the Trustee's exercise in belief manipulation). This causes the Q-value for the action corresponding to giving $20 to be highest, inspiring the Investor's generosity in round 2. Equally, the Trustee's (2nd-order) beliefs after receiving $15 in the first round peak for the value $\beta_T = 0.7$, corresponding to thinking that the Investor believes the Trustee is Nice. In subsequent rounds, the Trustee's nastiness limits what she returns, and so the Investor ceases
giving high amounts. In response, in rounds 5 and 7, the Trustee tries to coax the Investor. We find
this 'reciprocal give and take' to be a characteristic behaviour of strategic Investors and Trustees
(with k = 1). For naive Players with k = 0, a return of a very low amount for a high amount
invested would lead to a complete breakdown of Trust formation.
Fig. 4 shows the statistics of dyadic interactions between Investors and Trustees with Uniform priors.
The amount given by the Investor varies significantly depending on whether or not he is strategic,
and also on his priors. In round 1, Investors with kI = 0 and 1 offer $20 first (the optimal probing
action based on uniform prior beliefs) and Investors with kI = 2 offer $15. The corresponding amount returned by the Trustee depends significantly on kT. A Trustee with kT = 0 and low $\beta_T$ will return nothing, whereas an unconditionally cooperative Trustee (high $\beta_T$) returns roughly the same amount as received. Irrespective of the Trustee's $\beta_T$ type, the amount returned by strategic Trustees with
kT = 1, 2 is higher (between 1.5 and 2 times the amount received).
In round 2 we find that the low amount received causes trust to break down for Investors with
kI = 0. In fact, naive Investors and Trustees do not form Trust in this game. Strategic Trustees return
more initially and are able to coax naive Investors to give higher amounts in the game. Generally
unconditionally cooperative Trustees return more, and form Trust throughout the game if they are
strategic or if they are playing against strategic Investors. Trustees with low $\beta_T$ defect towards the
end of the game but coax more investment in the beginning of the game.
Figure 3: The generated game shows the amount given by an Investor with kI = 1 and a Trustee
with $\beta_T = 0.3$ and kT = 1. The red bar indicates the amount given by the Investor and the blue bar
is the amount returned by the Trustee (after receiving 3 times amount given by the Investor). The
figures on the right reveal the inner workings of the algorithm: Q-values through the rounds of the
game for 5 different actions of the Investor (0, 5, 10, 15, 20) and 5 actions of the Trustee between
values 0 and 3 times the amount given by the Investor. Also shown are the Investor's 1st-order beliefs (left bar for $\beta_T = 0.3$ and right bar for $\beta_T = 0.7$) and the Trustee's 2nd-order beliefs over the rounds.
Figure 4: The dyadic interactions between the Investor and Trustee across the 10 rounds of the
game. The top half shows the Investor playing against a Trustee with low $\beta_T$ (= 0.3) and the bottom half is the Trustee with high $\beta_T$ (= 0.7): unconditionally cooperative. The top dyad shows the amount
given the Investor and the bottom dyad shows the amount returned by Trustee. Within each dyad
the rows represent the strategic (kI ) levels of Investor (0, 1 or 2) and the columns represent kT
level of the Trustee (0, 1 or 2). The dyads are shown here for the first 2 and final 2 rounds. Two
particular examples are highlighted within the dyads: Investor with kI = 0 and Trustee with kT = 2,
uncooperative ($\beta_T^{low}$) and Investor kI = 1 and Trustee kT = 2, cooperative ($\beta_T^{high}$). Lighter colours
reveal higher amounts (with amount given by Investor in first round being 15 dollars).
The effect of strategic level is more dramatic for the Investor, since his ability to defect at any
point places him in effective charge of the interaction. Strategic Investors give more money in the
game than naive Investors. Consequently they also get more return on their investment because of
the beneficial effects of this on their reputations. A further observation is that strategic Investors
are more immune to the Trustee's actions. While this means that break-downs in the game due to
mistakes of the Trustee (or unfortunate choices from her softmax) are more easily corrected by the
strategic Investor, he is also more likely to continue investing even if the Trustee doesn't reciprocate.
It is also worth noting the differences between k = 1 and k = 2 players. The latter typically offer
less in the game and are also less susceptible to the actions of their opponent. Overall in this game,
the Investors with kI = 1 make the most money playing against a cooperative Trustee
while kI = 0 Investors make the least. The best dyad consists of a kI = 1 Investor playing with a
cooperative Trustee with kT = 0 or 1.
A very wide range of patterns of dyadic interaction, including the main observations of [15], can
thus be captured by varying just the limited collection of parameters of our model.
4 Recognition and Classification
One of the main reasons to build this generative model for play is to have a refined method for
classifying individual players on the basis of the dyadic behaviour. We do this by considering the
statistical inverse of the generative model as a recognition model. Denote the sequence of plays in the 10-round Investor-Trustee game as $D = \{[a_1^{(1)}, a_2^{(1)}], \ldots, [a_1^{(10)}, a_2^{(10)}]\}$. Since the game is Markovian we can calculate the probability of player i taking the action sequence $\{a_i^{(t)}, t = 1, \ldots, 10\}$ given his type $T_i$ and prior beliefs $\vec{b}_i^{(0)}$ as:
$$P(\{a_i^t\} \mid T_i, \vec{b}_i^{(0)}) = P(a_i^{(1)} \mid T_i, \vec{b}_i^{(0)}) \prod_{t=2}^{10} P(a_i^{(t)} \mid D^{(t)}, T_i) \quad (6)$$
where $P(a_i^{(1)} \mid T_i, \vec{b}_i^{(0)})$ is the probability of the initial action $a_i^{(1)}$ given by the softmax distribution and prior beliefs $\vec{b}_i^{(0)}$, and $P(a_i^{(t)} \mid D^{(t)}, T_i)$ is the probability of action $a_i^{(t)}$ after updating beliefs $\vec{b}_i^{(t)}$ from previous beliefs $\vec{b}_i^{(t-1)}$ upon the observation of the past sequence of moves $D^{(t)}$. This is a likelihood function for $(T_i, \vec{b}_i^{(0)})$, and so can be used for posterior inference about type given D. We classify the players for their utility function ($\beta_T$ value for the Trustee), strategic (ToM) levels and prior beliefs using the MAP value $(T_i^*, \vec{b}_i^{(0)*}) = \arg\max_{T_i, \vec{b}_i^{(0)}} P(D \mid T_i, \vec{b}_i^{(0)})$.
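A sketch of the MAP classification implied by equation (6): score each candidate (type, prior) pair by its log-likelihood over the observed action sequence and keep the best; the candidate grid and the generic `action_prob` interface are our illustrative assumptions, not the paper's code:

```python
import math

def log_likelihood(actions, type_i, prior, action_prob):
    """Equation (6) in log form: sum_t log P(a_i^(t) | history, T_i)."""
    total = 0.0
    for t, a in enumerate(actions):
        total += math.log(action_prob(a, t, actions[:t], type_i, prior))
    return total

def classify(actions, candidates, action_prob):
    """MAP estimate (T_i*, b_i^(0)*) over a grid of candidates."""
    return max(candidates,
               key=lambda c: log_likelihood(actions, c[0], c[1], action_prob))

# Toy usage: two Trustee types; the guiltier type returns more often.
action_prob = (lambda a, t, hist, ty, prior:
               (0.7 if a == 1 else 0.3) if ty == 0.7
               else (0.3 if a == 1 else 0.7))
candidates = [(0.3, None), (0.7, None)]
print(classify([1, 1, 0, 1], candidates, action_prob))  # -> (0.7, None)
```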
We used our recognition model to classify subject pairs playing the 10-round Investor-Trustee game
[15]. The data included 48 student pairs playing an Impersonal task for which the opponents' identities were hidden and 54 student pairs playing a Personal task for which partners met.
Each Investor-Trustee pair was classified for their level of strategic thinking k and the Trustee's $\beta_T$ type (cooperative/uncooperative; see the table in Figure 5). We are able to capture some characteristic behaviours with our model. The highlighted interactions reveal that many of the pairs in the
Impersonal task consisted of strategic Investors and cooperative Trustees, who formed trust in the
game with the levels of investment decreasing towards the end of the game. We also highlight the
difference between strategic and non-strategic Investors. An Investor with kI = 0 will not form
trust if the Trustee does not return a significant amount initially whilst an Investor with kI = 2 will
continue offering money in the game even if the Trustee gives back less than fair amounts in return.
There is also a strong correlation between the proportion of Trustees classified as being cooperative:
estimated as 48%, 30%, on the Impersonal and Personal tasks respectively and the corresponding
Return on Investment (how much the Investor receives for the amount Invested): 120%, 109%.
Although the recognition model captures key characteristics, we do not expect the Trustees to have
the specified values of $\beta_T^{low} = 0.3$ and $\beta_T^{high} = 0.7$. To test the robustness of the recognition model we generated behaviours (450 dyads) with different values of $\beta_T$ ($\beta_T^{low} = [0, 0.1, 0.2, 0.3, 0.4]$ and $\beta_T^{high} = [0.6, 0.7, 0.8, 0.9, 1.0]$) that were classified using the recognition model. Figure 5 shows
how confidently players of the given type were classified to have that type.
We find that the recognition model tends to misclassify Trustees with low $\beta_T$ as having kT = 2.
This is because the Trustees with those characteristics will offer high amounts to coax the Investor.
Investor are shown to be correctly classified in most cases. Overall the recognition model has a
tendency to assign higher kT to the Trustees than their true type, though the model correctly assigns
the right cooperative/uncooperative type to the Trustee.
Figure 5: Subject pairs are classified into levels of Theory of Mind for the Investor (rows) and
Trustee (columns). The number of subject-pairs with the classification are shown in each entry
along with whether the Trustee was classified as uncooperative / cooperative ($\beta_T^{low}$, $\beta_T^{high}$). The
subjects play an Impersonal game where they do not know the identities of the opponent and a
Personal game where identities are revealed.
We reveal the dominant or unique behavioural classification within tables (highlighted): Impersonal
(kI = 1, kT = 2, cooperative) group averaged over 10 subjects, Personal group (kI = 0, kT = 0,
uncooperative) averaged over 3 subjects, and Personal group with (kI = 2, kT = 0, uncooperative)
averaged over 11 subjects.
We also show the classification confidence for the types given the behaviour was generated from our
model with other values of $\beta_T$ for the Trustee, as well as the type that the player is most likely to be classified as in brackets. (A Trustee with low $\beta_T$ and kT = 1 is very likely to be misclassified as a player with kT = 2, while a player with kT = 2 will mostly be classified with kT = 2.)
5 Discussion
We built a generative model that captures classes of observed behavior in multi-round trust tasks.
The critical features of the model are a social utility function, with parameters covering different
types of subjects; partial observability, accounting for subjects' ignorance about their opponents;
an explicit and finite cognitive hierarchy to make approximate equilibrium calculations marginally
tractable; and partly deterministic and partly sample-based evaluation methods.
Despite its descriptive adequacy, we do not claim that it is uniquely competent. We also do not
suggest a normative rationale for pieces of the model such as the social utility function. Nevertheless,
the separation between the vagaries of utility and the exactness of inference is attractive, not the least
by providing clearly distinct signals as to the inner workings of the algorithm that can be extremely
useful to capture neural findings. Indeed, the model is relevant to a number of experimental findings,
including those due to [15], [18], [19]. The underlying foundation in reinforcement learning is
congenial, given the substantial studies of the neural bases of this [20].
The model does directly license some conclusions. For instance, we postulate that higher activation
will be observed in regions of the brain associated with theory of mind for Investors that give more
in the game, and for Trustees that can coax more. However, unlike [13] our Naive players still build
models, albeit unsophisticated ones, of the other player (in contrast to level 0 players who assume
the opponent to play a random strategy). So this might lead to an investigation of how sophisticated
and naive theory of mind models are built by subjects in the game.
We also constructed the recognition model, which is the statistical inverse to this generative model.
While we showed this to capture a broad class of behaviours, it only explains the coarse features
of the behaviour. We need to incorporate some of the other parameters of our model, such as
the Investor's envy and the temperature parameter of the softmax distribution in order to capture
the nuances in the interactions. Further it would be interesting to use the recognition model in
pathological populations, looking at such conditions as autism and borderline personality disorder.
Finally, this computational model provides a guide for designing experiments to probe aspects of
social utility, strategic thinking levels and prior beliefs, as well as inviting ready extensions to related
tasks such as Public Goods games. The inference method may also have wider application, for
instance to identifying which of a collection of Bayes-Nash equilibria is most likely to arise, given
psychological factors about human utilities.
Acknowledgments
We thank Wako Yoshida, Karl Friston and Terry Lohrenz for useful discussions.
References
[1] K.A. McCabe, M.L. Rigdon and V.L. Smith. Positive Reciprocity and Intentions in Trust Games (2003).
Journal of Economic Behaviour and Organization.
[2] E. Fehr and K.M. Schmidt. A Theory of Fairness, Competition and Cooperation (1999). The Quarterly
Journal of Economics.
[3] E. Fehr and S. Gachter. Fairness and Retaliation: The Economics of Reciprocity (2000). Journal of Economic Perspectives.
[4] E. Fehr and U. Fischbacher. Social norms and human cooperation (2004). TRENDS in Cog. Sci. 8:4.
[5] P.J. Gmytrasiewicz and P. Doshi. A Framework for Sequential Planning in Multi-Agent Settings (2005).
Journal of Artificial Intelligence Research.
[6] V. Conitzer and T. Sandholm (2002). Complexity Results about Nash Equilibria. Technical Report CMUCS-02-135, School of Computer Science, Carnegie-Mellon University.
[7] S. Thrun. Monte Carlo POMDPs (2000). Advances in Neural Information Processing Systems 12.
[8] J.C. Harsanyi (1967). Games with Incomplete Information Played by "Bayesian" Players, I-III. Management
Science.
[9] J.F. Mertens and S. Zamir. Formulation of Bayesian analysis for games with incomplete information (1985).
International Journal of Game Theory.
[10] Y. Nyarko. Convergence in Economic Models with Bayesian Hierarchies of Beliefs (1997). Journal of
Economic Theory.
[11] C. Camerer. Behavioural Game Theory: Experiments in Strategic Interaction (2003). Princeton Univ.
[12] R. McKelvey and T. Palfrey. Quantal Response Equilibria for Extensive Form Games (1998). Experimental Economics 1:9-41.
[13] C. Camerer, T-H. Ho and J-K. Chong. A Cognitive Hierarchy Model of Games (2004). The Quarterly
Journal of Economics.
[14] R.D. McKelvey, A.M. McLennan and T.L. Turocy (2007). Gambit: Software Tools for Game Theory.
[15] B. King-Casas, D. Tomlin, C. Anen, C.F. Camerer, S.R. Quartz and P.R. Montague (2005). Getting to
know you: Reputation and Trust in a two-person economic exchange. Science 308:78-83.
[16] D. Tomlin, M.A. Kayali, B. King-Casas, C. Anen, C.F. Camerer, S.R. Quartz and P.R. Montague (2006).
Agent-specific responses in cingulate cortex during economic exchanges. Science 312:1047-1050.
[17] L.P. Kaelbling, M.L. Littman and A.R. Cassandra. Planning and acting in partially observable stochastic
domains (1998). Artificial Intelligence.
[18] K. McCabe, D. Houser, L. Ryan, V. Smith, T. Trouard. A functional imaging study of cooperation in
two-person reciprocal exchange. Proc. Natl. Acad. Sci. USA 98:11832-35.
[19] K. Fliessbach, B. Weber, P. Trautner, T. Dohmen, U. Sunde, C.E. Elger and A. Falk. Social Comparison
Affects Reward-Related Brain Activity in the Human Ventral Striatum (2007). Science 318:1302-1305.
[20] B. Lau and P. W. Glimcher (2008). Representations in the Primate Striatum during Matching Behaviour.
Neuron 58.
2,856 | 359 | Connectionist Approaches to the Use of
Markov Models for Speech Recognition
Hervé Bourlard †,‡
† L & H Speechproducts
Koning Albert 1 laan, 64
1780 Wemmel, BELGIUM
Nelson Morgan ‡ & Chuck Wooters ‡
‡ Intl. Computer Science Institute
1947 Center St., Suite 600
Berkeley, CA 94704, USA
ABSTRACT
Previous work has shown the ability of Multilayer Perceptrons
(MLPs) to estimate emission probabilities for Hidden Markov Models (HMMs). The advantages of a speech recognition system incorporating both MLPs and HMMs are the best discrimination and
the ability to incorporate multiple sources of evidence (features,
temporal context) without restrictive assumptions of distributions
or statistical independence. This paper presents results on the
speaker-dependent portion of DARPA's English language Resource
Management database. Results support the previously reported
utility of MLP probability estimation for continuous speech recognition. An additional approach we are pursuing is to use MLPs as
nonlinear predictors for autoregressive HMMs. While this is shown
to be more compatible with the HMM formalism, it still suffers
from several limitations. This approach is generalized to take account of time correlation between successive observations, without
any restrictive assumptions about the driving noise.
1 INTRODUCTION
We have been working on continuous speech recognition using moderately large
vocabularies (1000 words) [1,2]. While some of our research has been in speaker-independent recognition [3], we have primarily used a German speaker-dependent
database called SPICOS [1,2]. In our previously reported work, we developed a
hybrid MLP/HMM algorithm in which an MLP is trained to generate the output
probabilities of an HMM [1,2]. Given speaker-dependent training, we have been able
to recognize 50-60 % of the words in the SPICOS test sentences. While this is not a
state-of-the-art level of performance, it was accomplished with single-state phoneme
models, no triphone or allophone representations, no function word modeling, etc.,
and so may be regarded as a "baseline" system. The main point to using such a
simple system is simplicity for comparison of the effectiveness of alternate probability estimation techniques. While we are working on extending our technique to
more complex systems, the current paper describes the application of the baseline
system (with a few changes, such as different VQ features) to the speaker-dependent
portion of the English language Resource Management (RM) database (continuous
utterances built up from a lexicon of roughly 1000 words) [4]. While this exercise
was primarily intended to confirm that the previous result, which showed the utility
of MLPs for the estimation of HMM output probabilities, was not restricted to the
limited data set of our first experiments, it also shows how to improve further the
initial scheme.
However, potential problems remain. In order to improve local discrimination, the
MLP is usually provided with contextual inputs [1,2,3] or recurrent links. Unfortunately, in these cases, the dynamic programming recurrences of the Viterbi algorithm are no longer strictly valid when the local probabilities are generated by these
contextual MLPs. To solve this problem, we have started considering, as initially
proposed in [9] and [10], another approach in which MLP is used as a nonlinear
predictor. Along this line, a new approach is suggested and preliminary results are
reported.
2 METHODS AND RESULTS
As shown by both theoretical [5] and experimental [1] results, MLP output values
may be considered to be estimates of a posteriori probabilities. Either these or
some other related quantity (such as the output normalized by the prior probability
of the corresponding class) may be used in a Viterbi search to determine the best
time-warped succession of states to explain the observed speech measurements.
This hybrid approach has the potential of exploiting the interpolating capabilities
of MLPs while using Dynamic Time Warping (DTW) to capture the dynamics of
speech. As described in [2], the practical application of the technique requires cross-validation during training to determine the stopping point, division by the priors at
the output to generate likelihoods, optimized word transition penalties, and training
sentence alignment via iterations of the Viterbi algorithm.
For the RM data, initial development was done on a single speaker to confirm that
the techniques we developed previously [2] were still applicable. Although we experimented slightly with this data, the system we ended up with was substantially
unchanged, with the exception of the program modifications required to use different vector quantized (VQ) features. Input features used were based on the front
Connectionist Approaches to the Use of Markov Models for Speech Recognition
end for SRI's DECIPHER system [6], including vector quantized mel-cepstrum (12
coefficients), vector-quantized difference of mel-cepstrum, quantized energy, and
quantized difference of energy. Both vector quantization codebooks contained 256
prototypes, while energy and delta energy were quantized into 25 levels. A feature
vector was calculated for each 10 ms of input speech. Since each feature was represented by a simple binary input vector with only one bit 'on', each 10 ms frame
of speech signal was represented by a 562-dimensional binary vector with only 4
bits 'on'. Some experiments were run with no context (i.e., only one frame was
input to the network for each classification). To show the advantage of contextual
information, other experiments were run with nine frames of input to the network,
allowing four frames of contextual information on each side of the current frame
being classified. In this case, the input field contained 9 × 562 = 5058 units. The
size of the output layer was kept fixed at 61 units, corresponding to the 61 phonemes
to be recognized. As we found in our SPICOS experiments, a hidden layer was not
useful for this problem, probably because of the high dimension of the binary input
space and, as a consequence, of the large number of parameters. Of course, it could
be argued that a hidden layer should reduce this huge number of parameters, and
thus improve generalization. However, networks with no hidden units always outperformed experimental systems with hidden layers, on both the frame and word
levels. The ability of the simpler nets to generalize well, despite the sheer number of parameters, was probably due to the cross-validation technique used during the
MLP training [7]. However, as shown in [3], hidden layers are useful for the case
of continuous input features. In this case, the dimension of the input layer of the
MLP is much lower (even with contextual information), so that large hidden layers
(e.g., 1000 units) may be useful.
For each speaker, we used 400 sentences for training, 100 for cross-validation, and
a final 100 for recognition tests. Starting from an initial segmentation (derived
from the average length of the phonemes), a Viterbi algorithm was then iterated
with standard emission probabilities (i.e., by counting, no contextual information
and assuming independence of the features) to generate a final segmentation which
provided us with initial targets for the MLP training. Training of the MLP was
done by an error-back propagation algorithm, using an entropy criterion. In each
iteration, the complete training set was presented, and the parameters were updated after each training pattern (stochastic gradient). To avoid overtraining of
the MLP, improvement on the cross-validation set was checked after each iteration.
If the classification rate on the cross-validation set had not improved more than a
small threshold, the learning rate of the gradient procedure was reduced by a factor
of two. Compared with the results reported in [11], it has been observed recently
that it was still possible to improve significantly the recognition performance [11] by
starting from a lower initial learning constant and by adapting the segmentation of
the training sentences to the MLP. This has been done by using the final segmentation of the standard Viterbi as a new starting point of a Viterbi training embedding
now the MLP for estimating the emission probabilities. In this case, each iteration
of the Viterbi is followed by a new optimization of the MLP (according to the new
Table 1: Word Recognition Performance on RM database (Perplexity = 1000)

speaker   ML     MLP(9)   + FWM
jws04     48.2   62.3
bef03     39.3   56.7
cmr02     59.5   70.9
dtb03     49.8   61.2
das12     63.8   76.5     81.8
ers07     45.4   58.3
dms04     58.0   69.1
tab07     60.8   70.5
hxs06     60.9   76.3
rkm05     37.9   60.2
pgh01     53.8   63.6     50.4
mean      52.2   65.4
segmentation generated by the Viterbi alignment). Recognition performance resulting from this process is reported in the column "MLP(9)" of Table 1. Comparison with results presented in [11] clearly shows the additional improvement (which was also observed at the frame level) that can be gained from such modifications.
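A sketch of the cross-validation-controlled training schedule described above (a generic training loop; the function names, threshold and epoch cap are ours):

```python
def train_with_cv_schedule(train_epoch, cv_accuracy, lr=0.1,
                           threshold=0.005, min_lr=1e-4, max_epochs=200):
    """One full pass over the training set per iteration; halve the learning
    rate whenever the cross-validation score fails to improve by more than
    `threshold`, as in the schedule described in the text."""
    best = cv_accuracy()
    for _ in range(max_epochs):
        if lr <= min_lr:
            break
        train_epoch(lr)                # stochastic gradient over all patterns
        score = cv_accuracy()
        if score - best <= threshold:
            lr *= 0.5                  # reduce learning rate by a factor of two
        best = max(best, score)
    return best
```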
3 RECOGNITION AND DISCUSSION
For recognition, the output layer of the MLP was evaluated for each frame, and
(after division by the prior probability of each phoneme) was used as emission
probabilities in a discrete HMM system. In this case, each phoneme k was thus
associated with a single conditional density evaluated on the k-th output unit of the
MLP. In our system, in order to model state duration, each phoneme was modeled by
an HMM with a single state $q_k$ repeated D/2 times, where D is the prior estimate of the duration of the phoneme as observed on the training set. Only self-loops
and sequential transitions were permitted. A Viterbi decoding was then used for
recognition of the first thirty sentences of the cross-validation set to optimize word
transition probabilities. Note that this same simplified HMM was used for both the
Maximum Likelihood (ML) reference system (estimating probabilities directly from
relative frequencies) and the MLP system, and that the same input features were
used for both.
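A sketch of this decoding step: the MLP posteriors are divided by the class priors to give scaled likelihoods, which then drive a standard Viterbi recursion in the log domain (the generic transition structure here is our simplification, not the exact duration model of the paper):

```python
import numpy as np

def viterbi_scaled(posteriors, priors, log_trans):
    """posteriors: (T, S) MLP outputs p(state | frame);
    priors: (S,) state priors; log_trans: (S, S) log transition probs.
    Uses p(frame | state) up to a constant factor, i.e. posteriors / priors,
    and returns the best state sequence."""
    log_emit = np.log(posteriors) - np.log(priors)   # scaled likelihoods
    T, S = log_emit.shape
    delta = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    delta[0] = log_emit[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # (from, to)
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[t]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```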
The first two columns of Table 1 show the recognition rates (100 % - error rate,
where errors include insertions, deletions, and substitutions) for the 100 test sentences of the 11 speakers which were left out in the development, respectively for
standard Maximum Likelihood (ML) and MLP with 9 frames of contextual input
(MLP(9)). These results (all obtained with no language model, i.e., with a perplexity of 1000 for a 1000-word vocabulary) show the significant improvements that can
be achieved using MLPs for continuous speech recognition (over simpler probability estimators) and that the incorporation of context has a major effect. However,
it was also particularly interesting to note that the improvement was already significant with no contextual information at the input [11]. This can be explained
by the fact that in standard HMM (denoted ML in Table 1) we must assume the
independence of the four features so that we can estimate the joint density by their
product, which is not the case with the MLP. This observation was also valid at the
frame level [1,11].
However, these results are not the best ones we can expect from such an approach.
A way to improve further the performance is to add function word models for
small words as it is often done in standard HMMs. This idea was tested by using
28 additional output units (representing 12 word models) to the initial scheme.
Results for the best and the worse speaker are reported under the column denoted
"+ FWM" in Table 1. In view of the improvements, it can be concluded that many
of the tricks valid for standard HMMs are also useful in our approach and can
improve significantly the initial results.
4 MLP AS AUTOREGRESSIVE MODEL
As shown in the previous Section, it is clear that the proposed HMM/MLP hybrid
approach can achieve significant improvements over standard HMMs. However, it
has to be observed that these improvements are obtained despite some theoretical
weaknesses. Indeed, it can be shown that the Dynamic Programming (DP) recurrences of the Viterbi algorithm (used for training and recognition) are no longer
strictly valid when the local probabilities are generated by MLPs with contextual inputs. For a sequence of acoustic vectors $X = \{x_1, \ldots, x_N\}$ and a Markov model M, $P(X|M)$ cannot simply be obtained by DP recurrences (which are only valid for first-order Markov models) using the contextual MLP outputs (divided by the priors). Thus, neither feedback nor contextual input to the MLP (followed by Bayes' rule to estimate $P(X|M)$) is strictly correct to use for the Viterbi algorithm, since both
violate the restriction to instantaneous features on the left side of the conditional
in local probabilities (in our case, the system is not even causal any more). This problem does not appear in standard HMMs, where contextual information is usually
provided via dynamic features such as the first and second derivatives (which are,
in theory, estimates of instantaneous features) of the time-varying acoustic vectors.
In [9] and [10], another approach, related to autoregressive (AR) HMMs [8], is
proposed in which the MLP is used as a nonlinear predictor. The basic idea is
to assume that the observed vectors associated with each HMM state are drawn
from a particular AR process described by an AR function that can be linear [8]
or nonlinear and associated with the transfer function of an MLP. If $x_n$ is the acoustic vector at time n and if $X_{n-p}^{n-1} = \{x_{n-p}, \ldots, x_{n-1}\}$ denotes the input of the MLP (which attempts to predict $x_n$, the desired output of the MLP associated with $X_{n-p}^{n-1}$), it can be shown [8,9,12] that, if the prediction error is assumed to be Gaussian with zero mean and unity variance, minimization of the prediction error is equivalent to estimation of $p(x_n \mid q_k, X_{n-p}^{n-1})$ (where $q_k$ is the HMM state associated with $x_n$), which can be expressed as a Gaussian (with unity variance)
where the exponent is the prediction error. Consequently, the prediction errors can
be used as local distances in DP and are fully compatible with the recurrences of
the Viterbi algorithm. However, although the MLP/HMM interface problem seems
to be solved, we are now limited to Gaussian AR processes. Furthermore, each state
must be associated with its own MLP [10]. An alternative approach, as proposed
in [9], is to have a single MLP with additional "control" inputs coding the state
being considered. However, in both cases, the discriminant character of the MLP is
lost since it is only used as a nonlinear predictor. On preliminary experiments on
SPICOS we were unable to get significant results from these approaches compared
with the method presented in the previous Section [1,2].
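Under the Gaussian assumption above, the negative log-likelihood used as the local distance in DP reduces (up to an additive constant) to the squared prediction error; a minimal sketch, where `predictor` stands for the per-state MLP and is our naming:

```python
import numpy as np

def local_distance(x_window, x_n, predictor):
    """-log p(x_n | q_k, x_{n-p}..x_{n-1}) up to an additive constant,
    for a unit-variance Gaussian prediction error."""
    residual = np.asarray(x_n) - predictor(x_window)  # MLP predicts x_n
    return 0.5 * float(residual @ residual)           # squared error / 2
```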
However, it is possible to generalize the former approach and to avoid the Gaussian hypothesis. It is indeed easy to prove (by using Bayes' rule with an additional conditional $X_{n-p}^{n-1}$ everywhere) that:
$$p(x_n \mid q_k, X_{n-p}^{n-1}) = \frac{p(q_k \mid X_{n-p}^{n})\; p(x_n \mid X_{n-p}^{n-1})}{p(q_k \mid X_{n-p}^{n-1})} \quad (1)$$
As $p(x_n \mid X_{n-p}^{n-1})$ in (1) is independent of the classes $q_k$, it can be overlooked in the DP recurrences. In this case, without any assumption about mean and covariance of the driving noise, $p(x_n \mid q_k, X_{n-p}^{n-1})$ can be expressed as the ratio of the output values of two "standard" MLPs (as used in the previous Section and in [1,2]), respectively with $X_{n-p}^{n}$ and $X_{n-p}^{n-1}$ as input. In preliminary experiments, this approach led to better results than the former AR models, without however bearing comparison with the method used in the previous Section and in [1,2]. For example, on SPICOS and after tuning, we got 46 % recognition rate instead of 65 % with our best method [2].
CONCLUSION
Despite some theoretical nonidealities, the HMM/MLP hybrid approach can achieve
significant improvement over comparable standard HMMs. This was observed using a simplified HMM system with single-state monophone models, and no langauge
model. However, the reported results also show that many of the tricks used to improve standard HMMs are also valid for our hybrid approach, which leaves the
way open to all sort of further developments. Now that we have confirmed the
principle, we are beginning to develop a complete system, which will incorporate
context-dependent sound units. In this framework, we are studying the possibility of modeling multi-states HMMs and triphones. On the other hand, in spite of
preliminary disappointing performance (which seems to corroborate previous experiments done by others [13,14] with AR processes for speech recognition), MLPs
as AR models are still worth considering further given their attractive theoretical
basis and better interface with the HMM formalism.
Connectionist Approaches to the Use of Markov Models for Speech Recognition
References
[1] Bourlard, H., Morgan, N., & Wellekens, C.J., "Statistical Inference in Multilayer
Perceptrons and Hidden Markov Models with Applications in Continuous Speech
Recognition", Neurocomputing, Ed. F. Fogelman & J. Herault, NATO ASI Series,
vol. F68, pp. 217-226, 1990.
[2] Morgan, N., & Bourlard, H., "Continuous Speech Recognition using Multilayer Perceptrons with Hidden Markov Models", IEEE Proc. of the 1990 Inti. Conf. on
ASSP, pp. 413-416, Albuquerque, NM, April 1990.
[3] Morgan, N. , Hermansky, H., Bourlard, H., Kohn, P., Wooters, C., & Kohn, P., "Continuous Speech Recognition Using PLP Analysis with Multilayer Perceptrons" accepted for IEEE Proc. of the 1991 Inti. Conf. on ASSP, Toronto, 1991.
[4] Price, P., Fisher, W., Bernstein, J., & Pallet, D., "The DARPA 1000-Word Resource
Management Database for Continuous Speech Recognition", Proc. IEEE Inti. Conf.
on ASSP, pp. 651-654, New-York, 1988.
[5] Bourlard, H., & Wellekens, C.J., "Links between Markov Models and Multilayer
Perceptrons", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 12,
No. 12, pp. 1167-1178, December 1990.
[6] Murveit, H., & Weintraub, M., "1000-Word Speaker-Independent Continuous Speech
Recognition Using Hidden Markov Models", Proc. IEEE Inti. Conf. on ASSP, pp.
115-118, New-York, 1988.
[7] Morgan, N., & Bourlard, H., "Generalization and Parameter Estimation in Feedforward Nets: Some Experiments", Advances in Neural Information Processing Systems
2, Ed . D.S Touretzky, San Mateo, CA: Morgan-Kaufmann, pp. 630-637, 1990.
[8] Juang, RH. & Rabiner, L.R., "Mixture Autoregressive Hidden Markov Models for
Speech Signals", IEEE Trans. on ASSP, vol. 33, no. 6, pp. 1404-1412, 1985.
[9] Levin, E., "Speech Recognition Using Hidden Control Neural Network Architecture" ,
Proc. of IEEE Inti. Conf. on ASSP, Albuquerque, New Mexico, 1990.
[10] Tebelskis, J ., & Waibel A., "Large Vocabulary Recognition Using Linked Predictive
Neural Networks", Proc. of IEEE Inti. Conf. on ASSP, Albuquerque, New Mexico,
1990.
[11] Morgan, N., Wooters, C., Bourlard, H ., & Cohen, M., "Continuous Speech Recognition on the Resource Management Database Using Connectionist Probability Estimation", Proc. of Inti. Conf. on Spoken Language Processing, Kobe, Japan, 1990.
[12] Bourlard, H., "How Connectionist Models Could Improve Markov Models for Speech
Recognition", Advanced Neural Computers, Ed. R. Eckmiller, North-Holland, pp.
247-254, 1990.
[13] de La Noue, P., Levinson, S., & Sondhi M., "Incorporating the Time Correlation
Between Successive Observations in an Acoustic-Phonetic Hidden Markov Model
for Continuous Speech Recognition", AT&T Technical Memorandum No. 11226,
1989.
[14] Wellekens, C.J., "Explicit Time Correlation in Hidden Markov Models", Proc. of the
IEEE Inti. Conf. on ASSP, Dallas, Texas, 1987.
2,857 | 3,590 | Continuously-adaptive discretization for
message-passing algorithms
Kannan Achan
Microsoft Research Silicon Valley
Mountain View, California, USA
Michael Isard
Microsoft Research Silicon Valley
Mountain View, California, USA
John MacCormick
Dickinson College
Carlisle, Pennsylvania, USA
Abstract
Continuously-Adaptive Discretization for Message-Passing (CAD-MP) is a new
message-passing algorithm for approximate inference. Most message-passing algorithms approximate continuous probability distributions using either: a family
of continuous distributions such as the exponential family; a particle-set of discrete samples; or a fixed, uniform discretization. In contrast, CAD-MP uses a discretization that is (i) non-uniform, and (ii) adaptive to the structure of the marginal
distributions. Non-uniformity allows CAD-MP to localize interesting features
(such as sharp peaks) in the marginal belief distributions with time complexity that
scales logarithmically with precision, as opposed to uniform discretization which
scales at best linearly. We give a principled method for altering the non-uniform
discretization according to information-based measures. CAD-MP is shown in
experiments to estimate marginal beliefs much more precisely than competing approaches for the same computational expense.
1 Introduction
Message passing algorithms such as Belief Propagation (BP) [1] exploit factorization to perform
inference. Exact inference is only possible when the distribution to be inferred can be represented
by a tree and the model is either linear-Gaussian or fully discrete [2, 3]. One attraction of BP is
that algorithms developed for tree-structured models can be applied analogously [4] to models with
loops, such as Markov Random Fields.
There is at present no general-purpose approximate algorithm that is suitable for all problems, so
the choice of algorithm is governed by the form of the model. Much of the literature concentrates on
problems from statistics or control where point measurements are made (e.g. of an animal population
or a chemical plant temperature), and where the state evolution is non-linear or the process noise
is non-Gaussian [5, 6]. Some problems, notably those from computer vision, have more complex
observation distributions that naturally occur as piecewise-constant functions on a grid (i.e. images),
and so it is common to discretize the underlying continuous model to match the structure of the
observations [7, 8]. As the dimensionality of the state-space increases, a naïve uniform discretization
rapidly becomes intractable [8]. When models are complex functions of the observations, sampling
methods such as non-parametric belief propagation (NBP) [9, 10], have been successful.
Distributions of interest can often be represented by a factor graph [11]. "Message passing" is a
class of algorithms for approximating these distributions, in which messages are iteratively updated
between factors and variables. When a given message is to be updated, all other messages in the
graph are fixed and treated as though they were exact. The algorithm proceeds by picking, from
a family of approximate functions, the message that minimizes a divergence to the local "exact"
message. In some forms of the approach [12] this minimization takes place over approximate belief
distributions rather than approximate messages.
A general recipe for producing message passing algorithms, summarized by Minka [13], is as follows: (i) pick a family of approximating distributions; (ii) pick a divergence measure to minimize;
(iii) construct an optimization algorithm to perform this minimization within the approximating
family. This paper makes contributions in all three steps of this recipe, resulting in a new algorithm
termed Continuously-Adaptive Discretization for Message-Passing (CAD-MP).
For step (i), we advocate an approximating family that has received little attention in recent years:
piecewise-constant probability densities with a bounded number of piecewise-constant regions. Although others have used this family in the past [14], it has not to our knowledge been employed in a
modern message-passing framework. We believe piecewise-constant probability densities are very
well suited to some problem domains, and this constitutes the chief contribution of the paper. For
step (ii), we have chosen for our initial investigation the "inclusive" KL-divergence [13], a standard choice which leads to the well known Belief Propagation message update equations. We show
that for a special class of piecewise-constant probability densities (the so-called naturally-weighted
densities), the minimal divergence is achieved by a distribution of minimum entropy, leading to
an intuitive and easily-implemented algorithm. For step (iii), we employ a greedy optimization
by traversing axis-aligned binary-split kd-trees (explained in Section 3). The contribution here is an
efficient algorithm called "informed splitting" for performing the necessary optimization in practice.
As we show in Section 4, CAD-MP computes much more accurate approximations than competing
approaches for a given computational budget.
2 Discretizing a factor graph
Let us consider what it means to discretize an inference problem represented by a factor graph with factors $f_i$ and continuous variables $x_\alpha$ taking values in some subset of $\mathbb{R}^N$. One constructs a non-uniform discretization of the factor graph by partitioning the state space of each variable $x_\alpha$ into $K$ regions $H_\alpha^k$ for $k = 1, \ldots, K$. This discretization induces a discrete approximation $f_i'$ of the factors, which are now regarded as functions of discrete variables $x'_\alpha$ taking integer values in the set $\{1, 2, \ldots, K\}$:
$$f_i'(k, l, \ldots) = \int_{x_\alpha \in H_\alpha^k,\, x_\beta \in H_\beta^l, \ldots} f_i(x_\alpha, x_\beta, \ldots), \qquad (1)$$
for $k, l, \ldots = 1, \ldots, K$. A slight variant of BP [4] could then be used to infer the marginals on $x'_\alpha$ according to the update equations for messages $m$ and beliefs $b$:
$$m_{\alpha,i}(k) = \prod_{f'_j \in x'_\alpha \setminus f'_i} m_{j,\alpha}(k) \qquad (2)$$
$$m_{i,\alpha}(k) = \frac{1}{|H_\alpha^k|} \sum_{x' \,:\, x'_\alpha = k} f_i'(x') \prod_{x'_\beta \in f'_i \setminus x'_\alpha} m_{\beta,i}(x'_\beta) \qquad (3)$$
$$b_\alpha(k) = |H_\alpha^k| \prod_{f'_j \in x'_\alpha} m_{j,\alpha}(k), \qquad (4)$$
where $a \in b \setminus c$ means "all neighbors $a$ of $b$ except $c$", $x'$ is an assignment of values to all variables, and $|H_\alpha^k| = \int_{H_\alpha^k} 1$. Thus, given a factor graph of continuous variables and a particular choice of discretization $\{H_\alpha^k\}$, one gets a piecewise-constant approximation to the marginals by first discretizing the variables according to (1), then using BP according to (2)-(4). The error in the approximation to the true marginals arises from (3) when $f_i'(x)$ is not constant over $x$ in the given partition.
to the true marginals arises from (3) when fi0 (x) is not constant over x in the given partition.
Consider the task of selecting between discretizations of a continuous probability distribution p(x)
over some subset U of Euclidean space. A discretization of p consists in
Ppartitioning U into K
disjoint subsets V1 , . . . , VK and assigning a weight wk to each Vk , with k wk = 1. The corresponding discretized probability distribution q(x) assigns density wk /|Vk | to Vk . We are interested
in finding a discretization for which the KL divergence KL(p||q) is as small
R as possible. The optimal choice of the wk for any fixed partitioning V1 , . . . , VK is to take wk = x?Vk p(x) [14]; we call
[Figure 1 panels (a)-(f); see caption.]
Figure 1: Expanding a hypercube in two dimensions. Hypercube $H$ (b), a subset of the full state space (a), is first "expanded" into the sub-cubes $\{H^{--}, H^{+-}, H^{-+}, H^{++}\}$ (c) by splitting along each possible dimension. These sub-cubes are then re-combined to form two possible split candidates $\{H^{1-}, H^{1+}\}$ (d) and $\{H^{2-}, H^{2+}\}$ (e). Informed belief values are computed for the re-combined hypercubes, including a new estimate for $\tilde{b}(H)$ (f), by summing the beliefs in the finer-scale partitioning. The new estimates are more accurate since the error introduced by the discretization decreases as the partitions become smaller.
these the natural weights for $p(x)$, given the $V_k$. There is a simple relationship between the quality of a naturally-weighted discretization and its entropy $H(\cdot)$:
Theorem 1. Among any collection of naturally-weighted discretizations of p(x), the minimum KL
divergence to p(x) is achieved by a discretization of minimal entropy.
Proof. For a naturally-weighted discretization $q$, $KL(p\|q) = -\sum_{k=1}^{K} w_k \log \frac{w_k}{|V_k|} + \int_U p \log p = H(q) - H(p)$. $H(p)$ is constant, so $KL(p\|q)$ is minimized by minimizing $H(q)$.
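The identity in the proof is easy to check numerically. The snippet below is an illustration only: it uses a dense grid as a stand-in for the continuous space and an arbitrary test density.

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 10_000)
dx = grid[1] - grid[0]
p = np.exp(-0.5 * ((grid - 0.3) / 0.05) ** 2)   # an arbitrary density on [0,1]
p /= p.sum() * dx

edges = np.array([0.0, 0.25, 0.5, 0.75, 1.0])    # partition V_1..V_4
kl, h_q = 0.0, 0.0
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (grid >= lo) & (grid < hi)
    w = p[mask].sum() * dx                        # natural weight of V_k
    q_dens = w / (hi - lo)                        # piecewise-constant q
    kl += np.sum(p[mask] * np.log(p[mask] / q_dens)) * dx
    h_q += -w * np.log(q_dens)                    # entropy contribution of V_k
h_p = -np.sum(p * np.log(p)) * dx                 # (differential) entropy of p
print(kl, h_q - h_p)                              # agree to grid precision
```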
Suppose we are given a discretization $\{H_\alpha^k\}$ and have computed messages and beliefs for every node using (2)-(4). The messages have not necessarily reached a fixed point, but we nevertheless have some current estimate for them. For any arbitrary hypercube $H$ at $x_\alpha$ (not necessarily in its current discretization) we can define the informed belief, denoted $\tilde{b}(H)$, to be the belief $H$ would receive if all other nodes and their incoming messages were left unaltered. To compute the informed belief, one first computes new discrete factor function values involving $H$ using integrals like (1). These values are fed into (2), (3) to produce "informed" messages $m_{i,\alpha}(H)$ arriving at $x_\alpha$ from each neighbor $f_i$. Finally, the informed messages are fed into (4) to obtain the informed belief $\tilde{b}(H)$.
3 Continuously-adaptive discretization
The core of the CAD-MP algorithm is the procedure for passing a message to a variable $x_\alpha$. Given fixed approximations at every other node, any discretization of $\alpha$ induces an approximate belief distribution $q_\alpha(x_\alpha)$. The task of the algorithm is to select the best discretization, and as Theorem 1 shows, a good strategy for this selection is to look for a naturally-weighted discretization that minimizes the entropy of $q_\alpha$. We achieve this using a new algorithm called "informed splitting" which is described next.
CAD-MP employs an axis-aligned binary-split kd-tree [15] to represent the discrete partitioning of
a D-dimensional continuous state space at each variable (the same representation was used in [14]
where it was called a Binary Split Partitioning). For our purposes, a kd-tree is a binary tree in which
each vertex is assigned a subset (actually a hypercube) of the state space. The root is assigned the
whole space, and any internal vertex splits its hypercube equally between its two children using an
axis-aligned plane. The subsets assigned to all leaves partition the state space into hypercubes.
We build the kd-tree greedily by recursively splitting leaf vertices: at each step we must choose a hypercube $H_\alpha^k$ in the current partitioning to split, and a dimension $d$ to split it. According to Theorem 1, we should choose $k$ and $d$ to minimize the entropy of the resulting discretization, provided that this discretization has "natural" weights. In practice, the natural weights are estimated using informed beliefs; we nevertheless proceed as though they were exact and choose the $k$- and
$d$-values leading to lowest entropy. A subroutine of the algorithm involves "expanding" a hypercube into sub-cubes as illustrated in the two-dimensional case in Figure 1. The expansion procedure generalizes to $D$ dimensions by first expanding to $2^D$ sub-cubes and then re-combining these into $2D$ candidate splits. Note that for all $d \in \{1, \ldots, D\}$
$$\tilde{b}(H) \approx \tilde{b}(H^{d-}) + \tilde{b}(H^{d+}). \qquad (5)$$
Once we have expanded each hypercube in the current partitioning and thereby computed values for $\tilde{b}(H^k)$, $\tilde{b}(H^{k,d-})$ and $\tilde{b}(H^{k,d+})$ for all $k$ and $d$, we choose $k$ and $d$ to minimize the "split entropy"
$$\tilde{H}(k, d) = -\sum_{i \neq k} \tilde{b}(H_\alpha^i) \log \frac{\tilde{b}(H_\alpha^i)}{|H_\alpha^i|} - \tilde{b}(H_\alpha^{k,d-}) \log \frac{\tilde{b}(H_\alpha^{k,d-})}{|H_\alpha^{k,d-}|} - \tilde{b}(H_\alpha^{k,d+}) \log \frac{\tilde{b}(H_\alpha^{k,d+})}{|H_\alpha^{k,d+}|}. \qquad (6)$$
Note that from (5) we can perform this minimization without normalizing the $\tilde{b}(\cdot)$.
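A direct transcription of (6) follows; the function and argument names are choices of this sketch, and the belief values are assumed to be the unnormalized informed beliefs discussed above.

```python
import numpy as np

def split_entropy(beliefs, vols, k, split_pair, split_vols):
    """Split entropy (6): `beliefs`/`vols` describe the current leaves,
    `split_pair`/`split_vols` the two halves of candidate split k."""
    total = 0.0
    for i, (b, v) in enumerate(zip(beliefs, vols)):
        if i != k:                       # terms for the untouched leaves
            total -= b * np.log(b / v)
    for b, v in zip(split_pair, split_vols):
        total -= b * np.log(b / v)       # terms for the two new halves
    return total
```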
We can now describe the CAD-MP algorithm using informed splitting, which re-partitions a variable of the factor graph by producing a new kd-tree whose leaves are the hypercubes in the new
partitioning:
1. Initialize the root vertex of the kd-tree with its associated hypercube being the whole state space, with belief 1. Add this root to a leaf set $L$ and "expand" it as shown in Figure 1.
2. While the number of leaves $|L|$ is less than the desired number of partitions in the discretized model:
(a) Pick the leaf $H$ and split dimension $d$ that minimize the split-entropy (6).
(b) Create two new vertices $H^-$ and $H^+$ by splitting $H$ along dimension $d$, and "expand" these new vertices.
(c) Remove $H$ from $L$, and add $H^-$ and $H^+$ to $L$.
All variables in the factor graph are initialized with the trivial discretization (a single partition). Variables can be visited according to any standard message-passing schedule, where a ?visit? consists
of repartitioning according to the above algorithm. A simple example showing the evolution of the
belief at one variable is shown in Figure 2.
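A minimal one-dimensional sketch of this repartitioning loop is given below. The `informed_belief` stand-in simply integrates a fixed density; in the real algorithm it would be computed from incoming messages as described in Section 2. All names and the target density are assumptions of this sketch.

```python
import numpy as np

def informed_belief(lo, hi):
    # stand-in: unnormalized mass of a fixed density over the interval
    x = np.linspace(lo, hi, 64)
    dens = np.exp(-0.5 * ((x - 0.3) / 0.02) ** 2) + 0.1
    return dens.mean() * (hi - lo)

def expand(lo, hi):
    mid = 0.5 * (lo + hi)
    return [(lo, mid), (mid, hi)]

leaves = [((0.0, 1.0), informed_belief(0.0, 1.0))]
for _ in range(15):                       # grow to 16 partitions
    best = None
    for idx, ((lo, hi), _) in enumerate(leaves):
        halves = expand(lo, hi)
        bs = [informed_belief(a, b) for a, b in halves]
        # split entropy (6): untouched-leaf terms plus the two halves
        ent = -sum(b * np.log(b / (h - l))
                   for j, ((l, h), b) in enumerate(leaves) if j != idx)
        ent -= sum(b * np.log(b / (b2 - a2))
                   for (a2, b2), b in zip(halves, bs))
        if best is None or ent < best[0]:
            best = (ent, idx, halves, bs)
    _, idx, halves, bs = best
    leaves.pop(idx)
    leaves += list(zip(halves, bs))

leaves.sort()
print([(round(l, 3), round(h, 3)) for (l, h), _ in leaves])
# intervals cluster around the density's peak at 0.3
```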
If the variable being repartitioned has $T$ neighbors and we require a partitioning of $K$ hypercubes, then a straightforward implementation of this algorithm requires the computation of $2K \times 2^D \times KT$ message components. Roughly speaking, then, informed splitting pays a factor of $2^{D+1}$ over BP which must compute $K^2 T$ message components. But CAD-MP trades this for an exponential factor in $K$ since it can home in on interesting areas of the state space using binary search, so if BP requires $K$ partitions for a given level of accuracy, CAD-MP (empirically) achieves the same accuracy with only $O(\log K)$ partitions. Note that in special cases, including some low-level vision applications [16], classical BP can be performed in $O(KT)$ time and space; however this is still prohibitive for large $K$.
4 Experiments
We would like to compare our candidate algorithms against the marginal belief distributions that would be computed by exact inference, however no exact inference algorithm is known for our models. Instead, for each experiment we construct a fine-scale uniform discretization $D_f$ of the model and input data, and compute the marginal belief distributions $p(x_\alpha; D_f)$ at each variable $x_\alpha$ using the standard forward-backward BP algorithm. Given a candidate approximation $C$ we can then compare the marginals $p(x_\alpha; C)$ under that approximation to the fine-scale discretization by computing the KL-divergence $KL(p(x_\alpha; D_f) \| p(x_\alpha; C))$ at each variable. In results below, we report the mean of this divergence across all variables in the graph, and refer to it in the text as $\epsilon(C)$. While a "fine-enough" uniform discretization will tend to the true marginals, we do not a priori know how fine that is. We therefore construct a sequence of coarser uniform discretizations $D_c^i$ of the same model and data, and compute $\epsilon(D_c^i)$ for each of them. If $\epsilon(D_c^i)$ is converging rapidly enough to zero, as is the case in the experiments below, we have confidence that the fine-scale discretization is a good approximation to the exact marginals.
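As a toy illustration of this protocol (not the paper's data), the snippet below coarsens a fine-scale marginal into $k$ uniform bins with natural weights and shows the divergence shrinking as $k$ grows:

```python
import numpy as np

fine = np.exp(-0.5 * ((np.linspace(0, 1, 1024) - 0.6) / 0.03) ** 2)
fine /= fine.sum()                                # stand-in fine-scale marginal

for k in (8, 32, 128, 512):
    per_bin = fine.size // k
    w = fine.reshape(k, per_bin).sum(axis=1)      # natural bin weights
    q = np.repeat(w / per_bin, per_bin)           # uniform density within bins
    print(k, np.sum(fine * np.log(fine / q)))     # KL(p || q) shrinks with k
```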
[Figure 2 panels: the observation (local factor), and (a)-(c) the evolving discretization.]
Figure 2: Evolution of discretization at a single variable. The left image is the local (single-variable) factor at the first node in a simple chain MRF whose nodes have 2-D state spaces. The
next three images, from left to right, show the evolution of the informed belief. Initially (a) the partitioning is informed simply by the local factor, but after messages have been passed once along the
chain and back (b), the posterior marginal estimate has shifted and the discretization has adapted accordingly. Subsequent iterations over the chain (c) do not substantially alter the estimated marginal
belief. For this toy example only 16 partitions are used, and the normalized log of the belief is
displayed to make the structure of the distribution more apparent.
We compare our adaptive discretization algorithm against non-parametric belief propagation
(NBP) [9, 10] which represents the marginal distribution at a variable by a particle set. We generate
some importance samples directly from the observation distribution, both to initialize the algorithm
and to "re-seed" the particle set when it gets lost. Particle sets typically do not approximate the tails
of a distribution well, leading to zeros in the approximate marginals and divergences that tend to
infinity. We therefore regularize all divergence computations as follows:
$$KL_\epsilon(p\|q) = \sum_k \hat{p}_k \log\frac{\hat{p}_k}{\hat{q}_k}, \qquad \hat{p}_k = \frac{\epsilon + \int_{H^k} p(x)}{\sum_n \left(\epsilon + \int_{H^n} p(x)\right)}, \qquad \hat{q}_k = \frac{\epsilon + \int_{H^k} q(x)}{\sum_n \left(\epsilon + \int_{H^n} q(x)\right)} \qquad (7)$$
where $\{H^k\}$ are the partitions in the fine-scale discretization $D_f$. All experiments use $\epsilon = 10^{-4}$, which was found empirically to show good results for NBP.
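A minimal transcription of (7), assuming `p_mass` and `q_mass` hold the probability mass of each fine-scale partition $H^k$:

```python
import numpy as np

def regularized_kl(p_mass, q_mass, eps=1e-4):
    # Regularized divergence (7): add eps mass to every partition, renormalize.
    p = (eps + np.asarray(p_mass)) / (eps + np.asarray(p_mass)).sum()
    q = (eps + np.asarray(q_mass)) / (eps + np.asarray(q_mass)).sum()
    return float(np.sum(p * np.log(p / q)))
```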
We begin with a set of experiments over ten randomly generated input sequences of a one-dimensional target moving through structured clutter of similar-looking distractors. One of the sequences is shown in Figure 3a, where time goes from bottom to top. The measurement at a time-step consists in 240 "pixels" (piecewise-constant regions of uniform width) generated by simulating a small one-dimensional target in clutter, with additive Gaussian shot-noise. There are stationary clutter distractors, and also periodic "forkings" where a moving clutter distractor emerges from the target and proceeds for a few time-steps before disappearing. Each sequence contains 256 time-steps, and the "exact" marginals (Figure 3b) are computed using standard discrete BP with 15360
states per time-step. The modes of the marginals generated by all the experiments are similar to
those in Figure 3b, except for one run of NBP shown in Figure 3c that failed entirely to find the
mode (red line) due to an unlucky random seed. However, the distributions differ in fine structure,
where CAD-MP approximates the tails of the distribution much better than NBP.
Figure 4a shows the divergences $\epsilon(\cdot)$ for the various discrete algorithms: both uniform discretization at various degrees of coarseness, and adaptive discretization using CAD-MP with varying numbers of partitions. Each data point shows the mean divergence $\epsilon(\cdot)$ for one of the ten simulated one-dimensional datasets. As the number of adaptive partitions increases, the variance of $\epsilon(\cdot)$ across trials increases, but the divergence stays small. Higher divergences in CAD-MP trials correspond to a mis-estimation of the tails of the marginal belief at a few time-steps. The straight line on the log/log plot for the uniform discretizations gives us confidence that the fine-scale discretization is a close approximation to the exact beliefs. The adaptive discretization provides a very faithful approximation to this "exact" distribution with vastly fewer partitions.
marginals are computed using NBP with varying numbers of particles. The NBP algorithm was
run five times on each of the ten simulated one-dimensional datasets with different random seeds
each time, and the particle-set sizes were chosen to approximately match the computation time of
the CAD-MP algorithm. The NBP algorithm does worse absolutely (the divergences are much larger
even after regularization, indicating that areas of high belief are sometimes mis-estimated), and also
[Figure 3 panels: (a) observations; (b) "exact" beliefs; (c) an NBP "failure"; (d)-(g) exact beliefs (d) are represented more faithfully by CAD-MP (e), (f) than by NBP (g).]
Figure 3: One of the one-dimensional test sequences. The region of the white rectangle in (b) is expanded in (d)-(g), with beliefs now plotted on a log intensity scale to expand their dynamic range. CAD-MP using only 16 partitions per time-step (e) already produces a faithful approximation to the exact belief (d), and increasing to 128 partitions (f) fills in more details. The NBP algorithm using 800 particles (g) does not approximate the tails of the distribution well.
[Figure 4 plots: (a) 1D test, discrete algorithms; (b) 1D test, NBP; (c) 2D test, discrete algorithms; (d) 2D test, NBP.]
Figure 4: Adaptive discretization achieves the same accuracy as uniform discretization using
many fewer partitions, but non-parametric belief propagation is less effective. See Section 4
for details.
varies greatly across different trial sequences, and when re-run with different random seeds on the same trial sequence. Note also that the $\epsilon(\cdot)$ are bi-modal: values of $\epsilon(\cdot)$ above around 0.5 signify runs on which NBP incorrectly located the mode of the marginal belief distribution at some or all time-steps, as in Figure 3c.
We performed a similar set of experiments using a simulated two-dimensional data-set. This time the input data is a $64 \times 64$ image grid, and the "exact" fine-scale discretization is at a resolution of $512 \times 512$, giving 262144 discrete states in total. Figures 4c and 4d show that adaptive discretization still greatly outperforms NBP for an equivalent computational cost. Again there is a straight-line trend in the log/log plots for both CAD-MP and uniform discretization, though as in the one-dimensional case the variance of the divergences increases with more partitions. NBP again performs less accurately, and frequently fails to find the high-weight regions of the belief at all at some time-steps, even with 3200 particles.
Adaptive discretization seems to correct some of the well-known limitations of particle-based methods. The discrete distribution is able to represent probability mass well into the tails of the distribution, which leads to a more faithful approximation to the exact beliefs. This also prevents the
catastrophic failure case for NBP shown in Figure 3c, where the mode of the distribution is lost
entirely because no particles were placed nearby. Moreover, CAD-MP's computational complexity
scales linearly with the number of incoming messages at a factor. NBP has to resort to heuristics to
sample from the product of incoming messages once the number of messages is greater than two.
5 Related work
The work most closely related to CAD-MP is the 1997 algorithm of Kozlov and Koller [14]. We
refer to this algorithm as "KK97"; its main differences to CAD-MP are: (i) KK97 is described in a
junction tree setting and computes the marginal posterior of just the root node, whereas CAD-MP
computes beliefs everywhere in the graph; (ii) KK97 discretizes messages (on junction tree edges)
rather than variables (in a factor graph), so multiplying incoming messages together requires the
substantial additional complexity of merging disparate discretizations, compared to CAD-MP in
which the incoming messages share the same discretization. Difference (i) is the more serious, since
it renders KK97 inapplicable to the type of early-vision problem we are motivated by, where the
marginal at every variable must be estimated.
Coarse-to-fine techniques can speed up the convergence of loopy BP [16] but this does not address
the discrete state-space explosion. One can also prune the state space based on local evidence [17,
18]. However, this approach is unsuitable when the data function has high entropy; moreover, it is
very difficult to bring a state back into the model once it has been pruned.
Another interesting approach is to retain the uniform discretization, but enforce sparsity on messages
to reduce computational cost. This was done in both [19] (in which messages are approximated using a mixture of delta functions, which in practice results in retaining the K largest message
components) and [20] (which uses an additional uniform distribution in the approximating distribution to ensure non-zero weights for all states in the discretization). However, these approaches
appear to suffer when multiplying messages with disjoint peaks whose tails have been truncated to
enforce sparsity: such peaks are unable to fuse their evidence correctly. Also, [20] is not directly
applicable when the state-space is multi-dimensional.
Expectation Propagation [5] is a highly effective algorithm for inference in continuous-valued networks, but is not valid for densities that are multimodal mixtures.
6 Discussion
We have demonstrated that our new algorithm, CAD-MP, performs accurate approximate inference with complex, multi-modal observation distributions and corresponding multi-modal posterior
distributions. It substantially outperforms the two standard methods for inference in this setting:
uniform-discretization and non-parametric belief propagation. While we only report results here on
simulated data, we have successfully used the method on low-level vision problems and are preparing a companion publication to describe these results. We believe CAD-MP and variants on it may
be applicable to other domains where complex distributions must be estimated in spaces of low to
moderate dimension. The main challenge in applying the technique to an arbitrary factor graph is
the tractability of the definite integrals (1).
This paper describes a particular set of engineering choices motivated by our problem domain. We
use kd-trees to describe partitionings: other data structures could certainly be used. Also, we employ
a greedy heuristic to select a partitioning with low entropy rather than exhaustively computing a
minimimum entropy over some family of discretizations. We have experimented with a Metropolis
algorithm to augment this greedy search: a Metropolis move consists in ?collapsing? some sub-tree
of the current partitioning and then re-expanding using a randomized form of the minimum-entropy
criterion. We have also tried tree-search heuristics that do not need the O(2D ) ?expansion? step,
and thus may be more effective when D is large. The choices reported here seem to give the best
accuracy on our problems for a given computational budget, however many others are possible and
we hope this work will serve as a starting point for a renewed interest in adaptive discretization in a
variety of inference settings.
References
[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[2] P. Dagum and M. Luby. Approximating probabilistic inference in Bayesian belief networks is NP-hard. Artificial Intelligence, 60(1):141-153, 1993.
[3] Robert G. Cowell, A. Philip Dawid, Steffen L. Lauritzen, and David J. Spiegelhalter. Probabilistic Networks and Expert Systems. Springer, 1999.
[4] Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Generalized belief propagation. In NIPS, pages 689-695, 2000.
[5] T. Minka. Expectation propagation for approximate Bayesian inference. In Proc. UAI, pages 362-369, 2001.
[6] G. Kitagawa. The two-filter formula for smoothing and an implementation of the Gaussian-sum smoother. Ann. Inst. Statist. Math., 46(4):605-623, 1994.
[7] P.F. Felzenszwalb and D.P. Huttenlocher. Efficient belief propagation for early vision. In Proc. CVPR, 2004.
[8] M. Isard and J. MacCormick. Dense motion and disparity estimation via loopy belief propagation. In ACCV, pages 32-41, 2006.
[9] E. Sudderth, A. Ihler, W. Freeman, and A. Willsky. Nonparametric belief propagation. In Proc. CVPR, volume 1, pages 605-612, 2003.
[10] M. Isard. PAMPAS: Real-valued graphical models for computer vision. In Proc. CVPR, volume 1, pages 613-620, 2003.
[11] F.R. Kschischang, B.J. Frey, and H.A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498-519, 2001.
[12] O. Zoeter and H. Heskes. Deterministic approximate inference techniques for conditionally Gaussian state space models. Statistics and Computing, 16(3):279-292, 2006.
[13] T. Minka. Divergence measures and message passing. Technical Report MSR-TR-2005-173, Microsoft Research, 2005.
[14] Alexander V. Kozlov and Daphne Koller. Nonuniform dynamic discretization in hybrid networks. In Proc. UAI, pages 314-325, 1997.
[15] Jon Louis Bentley. Multidimensional binary search trees used for associative searching. Commun. ACM, 18(9):509-517, 1975.
[16] P.F. Felzenszwalb and D.P. Huttenlocher. Pictorial structures for object recognition. Int. J. Computer Vision, 61(1):55-79, 2005.
[17] J. Coughlan and S. Ferreira. Finding deformable shapes using loopy belief propagation. In Proc. ECCV, pages 453-468, 2002.
[18] J. Coughlan and H. Shen. Shape matching with belief propagation: Using dynamic quantization to accommodate occlusion and clutter. In Proc. Workshop on Generative-Model Based Vision, 2004.
[19] C. Pal, C. Sutton, and A. McCallum. Sparse forward-backward using minimum divergence beams for fast training of conditional random fields. In International Conference on Acoustics, Speech, and Signal Processing, 2006.
[20] J. Lasserre, A. Kannan, and J. Winn. Hybrid learning of large jigsaws. In Proc. CVPR, 2007.
On the Design of Loss Functions for Classification:
theory, robustness to outliers, and SavageBoost
Hamed Masnadi-Shirazi
Statistical Visual Computing Laboratory,
University of California, San Diego
La Jolla, CA 92039
[email protected]
Nuno Vasconcelos
Statistical Visual Computing Laboratory,
University of California, San Diego
La Jolla, CA 92039
[email protected]
Abstract
The machine learning problem of classifier design is studied from the perspective
of probability elicitation, in statistics. This shows that the standard approach of
proceeding from the specification of a loss, to the minimization of conditional
risk is overly restrictive. It is shown that a better alternative is to start from the
specification of a functional form for the minimum conditional risk, and derive
the loss function. This has various consequences of practical interest, such as
showing that 1) the widely adopted practice of relying on convex loss functions is
unnecessary, and 2) many new losses can be derived for classification problems.
These points are illustrated by the derivation of a new loss which is not convex,
but does not compromise the computational tractability of classifier design, and
is robust to the contamination of data with outliers. A new boosting algorithm,
SavageBoost, is derived for the minimization of this loss. Experimental results
show that it is indeed less sensitive to outliers than conventional methods, such as
Ada, Real, or LogitBoost, and converges in fewer iterations.
1 Introduction
The binary classification of examples $x$ is usually performed with recourse to the mapping $\hat{y} = \mathrm{sign}[f(x)]$, where $f$ is a function from a pre-defined class $\mathcal{F}$, and $\hat{y}$ the predicted class label. Most state-of-the-art classifier design algorithms, including SVMs, boosting, and logistic regression, determine the optimal function $f^*$ by a three step procedure: 1) define a loss function $\phi(yf(x))$, where $y$ is the class label of $x$, 2) select a function class $\mathcal{F}$, and 3) search within $\mathcal{F}$ for the function $f^*$ which minimizes the expected value of the loss, known as minimum conditional risk. Although tremendously successful, these methods have been known to suffer from some limitations, such as slow convergence, or too much sensitivity to the presence of outliers in the data [1, 2]. Such limitations can be attributed to the loss functions $\phi(\cdot)$ on which the algorithms are based. These are convex bounds on the so-called 0-1 loss, which produces classifiers of minimum probability of error, but is too difficult to handle from a computational point of view.
In this work, we analyze the problem of classifier design from a different perspective, that has long been used to study the problem of probability elicitation, in the statistics literature. We show that the two problems are identical, and probability elicitation can be seen as a reverse procedure for solving the classification problem: 1) define the functional form of expected elicitation loss, 2) select a function class $\mathcal{F}$, and 3) derive a loss function $\phi$. Both probability elicitation and classifier design reduce to the problem of minimizing a Bregman divergence. We derive equivalence results, which allow the representation of the classifier design procedures in "probability elicitation form", and the representation of the probability elicitation procedures in "machine learning form". This equivalence is useful in two ways. From the elicitation point of view, the risk functions used in machine learning can be used as new elicitation losses. From the machine learning point of view, new insights on the relationship between loss $\phi$, optimal function $f^*$, and minimum risk are obtained. In particular, it is shown that the classical progression from loss to risk is overly restrictive: once a loss $\phi$ is specified, both the optimal $f^*$, and the functional form of the minimum risk are immediately pinned down. This is, however, not the case for the reverse progression: it is shown that any functional form of the minimum conditional risk, which satisfies some mild constraints, supports many $(\phi, f^*)$ pairs. Hence, once the risk is selected, one degree of freedom remains: by selecting a class of $f^*$, it is possible to tailor the loss $\phi$, so as to guarantee classifiers with desirable traits. In addition to this, the elicitation view reveals that the machine learning emphasis on convex losses $\phi$ is misguided. In particular, it is shown that what matters is the convexity of the minimum conditional risk. Once a functional form is selected for this quantity, the convexity of the loss $\phi$ does not affect the convexity of the Bregman divergence to be optimized.
These results suggest that many new loss functions can be derived for classifier design. We illustrate this, by deriving a new loss that trades convexity for boundedness. Unlike all previous $\phi$, the one now proposed remains constant for strongly negative values of its argument. This is akin to robust loss functions proposed in the statistics literature to reduce the impact of outliers. We derive a new boosting algorithm, denoted SavageBoost, by combination of the new loss and the procedure used by Friedman to derive RealBoost [3]. Experimental results show that the new boosting algorithm is indeed more outlier resistant than classical methods, such as AdaBoost, RealBoost, and LogitBoost.
2 Classification and risk minimization
A classifier is a mapping $g: \mathcal{X} \to \{-1, 1\}$ that assigns a class label $y \in \{-1, 1\}$ to a feature vector $x \in \mathcal{X}$, where $\mathcal{X}$ is some feature space. If feature vectors are drawn with probability density $P_X(x)$, $P_Y(y)$ is the probability distribution of the labels $y \in \{-1, 1\}$, and $L(x, y)$ a loss function, the classification risk is $R(f) = E_{X,Y}[L(g(x), y)]$. Under the 0-1 loss, $L_{0/1}(x, y) = 1$ if $g(x) \neq y$ and 0 otherwise, this risk is the expected probability of classification error, and is well known to be minimized by the Bayes decision rule. Denoting by $\eta(x) = P_{Y|X}(1|x)$ this can be written as
$$g^*(x) = \mathrm{sign}[2\eta(x) - 1]. \qquad (1)$$
Classifiers are usually implemented with mappings of the form $g(x) = \mathrm{sign}[f(x)]$, where $f$ is some mapping from $\mathcal{X}$ to $\mathbb{R}$. The minimization of the 0-1 loss requires that
$$\mathrm{sign}[f^*(x)] = \mathrm{sign}[2\eta(x) - 1], \quad \forall x. \qquad (2)$$
When the classes are separable, any $f(x)$ such that $yf(x) > 0, \forall x$ has zero classification error. The 0-1 loss can be written as a function of this quantity
$$L_{0/1}(x, y) = \phi_{0/1}[yf(x)] = \mathrm{sign}[-yf(x)].$$
This motivates the minimization of the expected value of this loss as a goal for machine learning. However, this minimization is usually difficult. Many algorithms have been proposed to minimize alternative risks, based on convex upper-bounds of the 0-1 loss. These risks are of the form
$$R_\phi(f) = E_{X,Y}[\phi(yf(x))] \qquad (3)$$
where $\phi(\cdot)$ is a convex upper bound of $\phi_{0/1}(\cdot)$. Some examples of $\phi(\cdot)$ functions in the literature are given in Table 1. Since these functions are non-negative, the risk is minimized by minimizing the conditional risk $E_{Y|X}[\phi(yf(x))|X = x]$ for every $x \in \mathcal{X}$. This conditional risk can be written as
$$C_\phi(\eta, f) = \eta\phi(f) + (1 - \eta)\phi(-f), \qquad (4)$$
where we have omitted the dependence of $\eta$ and $f$ on $x$ for notational convenience.
Various authors have shown that, for the $\phi(\cdot)$ of Table 1, the function $f^*_\phi$ which minimizes (4)
$$f^*_\phi(\eta) = \arg\min_f C_\phi(\eta, f) \qquad (5)$$
satisfies (2) [3, 4, 5]. These functions are also presented in Table 1. It can, in fact, be shown that (2) holds for any $f^*_\phi(\eta)$ which minimizes (4) whenever $\phi(\cdot)$ is convex, differentiable at the origin, and has derivative $\phi'(0) < 0$ [5].
While learning algorithms based on the minimization of (4), such as SVMs, boosting, or logistic regression, can perform quite well, they are known to be overly sensitive to outliers [1, 2]. These are points for which $yf(x) < 0$. As can be seen from Figure 1, the sensitivity stems from the large
Table 1: Machine learning algorithms progress from loss $\phi$, to inverse link function $f^*_\phi(\eta)$, and minimum conditional risk $C^*_\phi(\eta)$.

Algorithm           | φ(v)             | f*_φ(η)               | C*_φ(η)
Least squares       | (1 - v)^2        | 2η - 1                | 4η(1 - η)
Modified LS         | max(1 - v, 0)^2  | 2η - 1                | 4η(1 - η)
SVM                 | max(1 - v, 0)    | sign(2η - 1)          | 1 - |2η - 1|
Boosting            | exp(-v)          | (1/2) log(η/(1 - η))  | 2√(η(1 - η))
Logistic Regression | log(1 + e^{-v})  | log(η/(1 - η))        | -η log η - (1 - η) log(1 - η)
(infinite) weight given to these points by the $\phi(\cdot)$ functions when $yf(x) \to -\infty$. In this work, we show that this problem can be eliminated by allowing non-convex $\phi(\cdot)$. This may, at first thought, seem like a bad idea, given the widely held belief that the success of the aforementioned algorithms is precisely due to the convexity of these functions. We will see, however, that the convexity of $\phi(\cdot)$ is not important. What really matters is the fact, noted by [4], that the minimum conditional risk
$$C^*_\phi(\eta) = \inf_f C_\phi(\eta, f) = C_\phi(\eta, f^*_\phi) \qquad (6)$$
satisfies two properties. First, it is a concave function of $\eta$ ($\eta \in [0, 1]$)$^1$. Second, if $f^*_\phi$ is differentiable, then $C^*_\phi(\eta)$ is differentiable and, for any pair $(v, \hat{\eta})$ such that $v = f^*_\phi(\hat{\eta})$,
$$C_\phi(\eta, v) - C^*_\phi(\eta) = B_{-C^*_\phi}(\eta, \hat{\eta}), \qquad (7)$$
where
$$B_F(\eta, \hat{\eta}) = F(\eta) - F(\hat{\eta}) - (\eta - \hat{\eta})F'(\hat{\eta}) \qquad (8)$$
is the Bregman divergence of the convex function $F$. The second property provides an interesting interpretation of the learning algorithms as methods for the estimation of the class posterior probability $\eta(x)$: the search for the $f(x)$ which minimizes (4) is equivalent to a search for the probability estimate $\hat{\eta}(x)$ which minimizes (7). This raises the question of whether minimizing a cost of the form of (4) is the best way to elicit the posterior probability $\eta(x)$.
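The identity (7) is straightforward to verify numerically. The sketch below does so for the exponential loss of boosting, using the $f^*_\phi$ and $C^*_\phi$ of Table 1; the specific test values of $\eta$ and $\hat{\eta}$ are arbitrary choices of this sketch.

```python
import numpy as np

phi = lambda v: np.exp(-v)                        # boosting loss
f_star = lambda e: 0.5 * np.log(e / (1 - e))      # its inverse link
c_star = lambda e: 2 * np.sqrt(e * (1 - e))       # its minimum risk
neg_c = lambda e: -c_star(e)                      # convex function in (7)
neg_c_prime = lambda e: -(1 - 2 * e) / np.sqrt(e * (1 - e))

eta, eta_hat = 0.7, 0.2
v = f_star(eta_hat)
lhs = eta * phi(v) + (1 - eta) * phi(-v) - c_star(eta)
rhs = neg_c(eta) - neg_c(eta_hat) - (eta - eta_hat) * neg_c_prime(eta_hat)
print(lhs, rhs)   # both ~0.6335: the two sides of (7) agree
```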
3 Probability elicitation
This question has been extensively studied in statistics. In particular, Savage studied the problem of
designing reward functions that encourage probability forecasters to make accurate predictions [6].
The problem is formulated as follows.
- let $I_1(\hat{\eta})$ be the reward for the prediction $\hat{\eta}$ when the event $y = 1$ holds.
- let $I_{-1}(\hat{\eta})$ be the reward for the prediction $\hat{\eta}$ when the event $y = -1$ holds.
The expected reward is
$$I(\eta, \hat{\eta}) = \eta I_1(\hat{\eta}) + (1 - \eta)I_{-1}(\hat{\eta}). \qquad (9)$$
Savage asked the question of which functions $I_1(\cdot)$, $I_{-1}(\cdot)$ make the expected reward maximal when $\hat{\eta} = \eta$, $\forall \eta$. These are the functions such that
$$I(\eta, \hat{\eta}) \leq I(\eta, \eta) = J(\eta), \quad \forall \hat{\eta} \qquad (10)$$
with equality if and only if $\hat{\eta} = \eta$. Using the linearity of $I(\eta, \hat{\eta})$ on $\eta$, and the fact that $J(\eta)$ is supported by $I(\eta, \hat{\eta})$ at, and only at, $\eta = \hat{\eta}$, this implies that $J(\eta)$ is strictly convex [6, 7]. Savage then showed that (10) holds if and only if
$$I_1(\eta) = J(\eta) + (1 - \eta)J'(\eta) \qquad (11)$$
$$I_{-1}(\eta) = J(\eta) - \eta J'(\eta). \qquad (12)$$
Defining the loss of the prediction of $\eta$ by $\hat{\eta}$ as the difference to the maximum reward
$$L(\eta, \hat{\eta}) = I(\eta, \eta) - I(\eta, \hat{\eta})$$
$^1$Here, and throughout the paper, we omit the dependence of $\eta$ on $x$, whenever we are referring to functions of $\eta$, i.e. mappings whose range is $[0, 1]$.
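Savage's construction (11)-(12) can be checked numerically: given a strictly convex $J$, the expected reward (9) should peak at $\hat{\eta} = \eta$. The sketch below uses the boosting row of Table 2; the grid resolution and test value of $\eta$ are choices of this sketch.

```python
import numpy as np

J = lambda e: -2 * np.sqrt(e * (1 - e))                 # boosting J (Table 2)
Jp = lambda e: -(1 - 2 * e) / np.sqrt(e * (1 - e))      # its derivative
I1 = lambda e: J(e) + (1 - e) * Jp(e)                   # (11)
Im1 = lambda e: J(e) - e * Jp(e)                        # (12)

eta = 0.7
grid = np.linspace(0.01, 0.99, 1000)                    # candidate eta_hat
reward = eta * I1(grid) + (1 - eta) * Im1(grid)         # (9)
print(grid[np.argmax(reward)])                          # ~0.7, as (10) requires
```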
Table 2: Probability elicitation form for various machine learning algorithms, and Savage's procedure. In Savage 1 and 2, m' = m + k.

Algorithm       | I_1(η)                   | I_{-1}(η)             | J(η)
Least squares   | -4(1 - η)^2              | -4η^2                 | -4η(1 - η)
Modified LS     | -4(1 - η)^2              | -4η^2                 | -4η(1 - η)
SVM             | sign[2η - 1] - 1         | -(sign[2η - 1] + 1)   | |2η - 1| - 1
Boosting        | -√((1 - η)/η)            | -√(η/(1 - η))         | -2√(η(1 - η))
Log. Regression | log η                    | log(1 - η)            | η log η + (1 - η) log(1 - η)
Savage 1        | -k(1 - η)^2 + m' + l     | -kη^2 + m             | kη^2 + lη + m
Savage 2        | -k(1/η + log η) + m' + l | -k log η + m'         | m + lη - k log η
it follows that
$$L(\eta, \hat{\eta}) = B_J(\eta, \hat{\eta}), \qquad (13)$$
i.e. the loss is the Bregman divergence of $J$. Hence, for any probability $\eta$, the best prediction $\hat{\eta}$ is the one of minimum Bregman divergence with $\eta$. Savage went on to investigate which functions $J(\eta)$ are admissible. He showed that for losses of the form $L(\eta, \hat{\eta}) = H(h(\eta) - h(\hat{\eta}))$, with $H(0) = 0$ and $H(v) > 0, v \neq 0$, and $h(v)$ any function, only two cases are possible. In the first $h(v) = v$, i.e. the loss only depends on the difference $\eta - \hat{\eta}$, and the admissible $J$ are
$$J_1(\eta) = k\eta^2 + l\eta + m, \qquad (14)$$
for some integers $(k, l, m)$. In the second $h(v) = \log(v)$, i.e. the loss only depends on the ratio $\eta/\hat{\eta}$, and the admissible $J$ are of the form
$$J_2(\eta) = m + l\eta - k\log\eta. \qquad (15)$$

4 Classification vs. probability elicitation
The discussion above shows that the optimization carried out by the learning algorithms is identical to Savage's procedure for probability elicitation. Both procedures reduce to the search for
$$\hat{\eta}^* = \arg\min_{\hat{\eta}} B_F(\eta, \hat{\eta}), \qquad (16)$$
where $F(\eta)$ is a convex function. In both cases, this is done indirectly. Savage starts from the specification of $F(\eta) = J(\eta)$, from which the conditional rewards $I_1(\eta)$ and $I_{-1}(\eta)$ are derived, using (11) and (12). $\hat{\eta}^*$ is then found by maximizing the expected reward $I(\eta, \hat{\eta})$ of (9) with respect to $\hat{\eta}$. The learning algorithms start from the loss $\phi(\cdot)$. The conditional risk $C_\phi(\eta, f)$ is then minimized with respect to $f$, so as to obtain the minimum conditional risk $C^*_\phi(\eta)$ and the corresponding $f^*_\phi(\hat{\eta})$. This is identical to solving (16) with $F(\eta) = -C^*_\phi(\eta)$. Using the relation $J(\eta) = -C^*_\phi(\eta)$ it is possible to express the learning algorithms in "Savage form", i.e. as procedures for the maximization of (9), by deriving the conditional reward functions associated with each of the $C^*_\phi(\eta)$ in Table 1. This is done with (11) and (12) and the results are shown in Table 2. In all cases $I_1(\eta) = -\phi(f^*_\phi(\eta))$ and $I_{-1}(\eta) = -\phi(-f^*_\phi(\eta))$.
The opposite question of whether Savage's algorithms can be expressed in "machine learning form", i.e. as the minimization of (4), is more difficult. It requires that the $I_i(\eta)$ satisfy
$$I_1(\eta) = -\phi(f(\eta)) \qquad (17)$$
$$I_{-1}(\eta) = -\phi(-f(\eta)) \qquad (18)$$
for some $f(\eta)$, and therefore constrains $J(\eta)$. To understand the relationship between $J$, $\phi$, and $f^*_\phi$ it helps to think of the latter as an inverse link function. Or, assuming that $f^*_\phi$ is invertible, to think of $\eta = (f^*_\phi)^{-1}(v)$ as a link function, which maps a real $v$ into a probability $\eta$. Under this interpretation, it is natural to consider link functions which exhibit the following symmetry
$$f^{-1}(-v) = 1 - f^{-1}(v). \qquad (19)$$
Note that this implies that $f^{-1}(0) = 1/2$, i.e. $f$ maps $v = 0$ to $\eta = 1/2$. We refer to such link functions as symmetric, and show that they impose a special symmetry on $J(\eta)$.
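A quick numerical check of (19), using as an example the boosting link of Table 3 (an illustrative choice for this sketch):

```python
import numpy as np

f_inv = lambda v: np.exp(2 * v) / (1 + np.exp(2 * v))   # boosting link, Table 3
v = np.linspace(-3, 3, 7)
print(np.allclose(f_inv(-v), 1 - f_inv(v)))             # True: symmetry (19)
```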
Table 3: Probability elicitation form progresses from minimum conditional risk, and link function $(f^*_\phi)^{-1}(v)$, to loss $\phi$. $f^*_\phi(\eta)$ is not invertible for the SVM and modified LS methods.

Algorithm           | J(η)                          | (f*_φ)^{-1}(v)        | φ(v)
Least squares       | -4η(1 - η)                    | (v + 1)/2             | (1 - v)^2
Modified LS         | -4η(1 - η)                    | N/A                   | max(1 - v, 0)^2
SVM                 | |2η - 1| - 1                  | N/A                   | max(1 - v, 0)
Boosting            | -2√(η(1 - η))                 | e^{2v}/(1 + e^{2v})   | exp(-v)
Logistic Regression | η log η + (1 - η) log(1 - η)  | e^v/(1 + e^v)         | log(1 + e^{-v})
Theorem 1. Let $I_1(\eta)$ and $I_{-1}(\eta)$ be two functions derived from a continuously differentiable function $J(\eta)$ according to (11) and (12), and $f(\eta)$ be an invertible function which satisfies (19). Then (17) and (18) hold if and only if
$$J(\eta) = J(1 - \eta). \qquad (20)$$
In this case,
$$\phi(v) = -J[f^{-1}(v)] - (1 - f^{-1}(v))J'[f^{-1}(v)]. \qquad (21)$$
The theorem shows that for any pair $J(\eta)$, $f(\eta)$, such that $J(\eta)$ has the symmetry of (20) and $f(\eta)$ the symmetry of (19), the expected reward of (9) can be written in the "machine learning form" of (4), using (17) and (18) with the $\phi(v)$ given by (21). The following corollary specializes this result to the case where $J(\eta) = -C^*_\phi(\eta)$.
Corollary 2. Let $I_1(\eta)$ and $I_{-1}(\eta)$ be two functions derived with (11) and (12) from any continuously differentiable $J(\eta) = -C^*_\phi(\eta)$, such that
$$C^*_\phi(\eta) = C^*_\phi(1 - \eta), \qquad (22)$$
and $f_\phi(\eta)$ be any invertible function which satisfies (19). Then
$$I_1(\eta) = -\phi(f_\phi(\eta)) \qquad (23)$$
$$I_{-1}(\eta) = -\phi(-f_\phi(\eta)) \qquad (24)$$
with
$$\phi(v) = C^*_\phi[f_\phi^{-1}(v)] + (1 - f_\phi^{-1}(v))(C^*_\phi)'[f_\phi^{-1}(v)]. \qquad (25)$$
Note that there could be many pairs $\phi, f_\phi$ for which the corollary holds$^2$. Selecting a particular $f_\phi$ "pins down" $\phi$, according to (25). This is the case of the algorithms in Table 1, for which $C^*_\phi(\eta)$ and $f^*_\phi$ have the symmetries required by the corollary. The link functions associated with these algorithms are presented in Table 3. From these and (25) it is possible to recover $\phi(v)$, also shown in the table.
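As an illustration, the recipe (25) can be applied numerically to recover a known loss. With the boosting risk and link of Tables 1 and 3, it should return $\exp(-v)$; the grid of test points is an arbitrary choice of this sketch.

```python
import numpy as np

c_star = lambda e: 2 * np.sqrt(e * (1 - e))               # C*_phi for boosting
c_star_prime = lambda e: (1 - 2 * e) / np.sqrt(e * (1 - e))
f_inv = lambda v: np.exp(2 * v) / (1 + np.exp(2 * v))     # boosting link

v = np.linspace(-2, 2, 5)
eta = f_inv(v)
phi = c_star(eta) + (1 - eta) * c_star_prime(eta)         # (25)
print(np.allclose(phi, np.exp(-v)))                       # True: exp loss recovered
```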
5 New loss functions
The discussion above provides an integrated picture of the "machine learning" and "probability elicitation" views of the classification problem. Table 1 summarizes the steps of the "machine learning view": start from the loss $\phi(v)$, and find 1) the inverse link function $f^*_\phi(\eta)$ of minimum conditional risk, and 2) the value of this risk $C^*_\phi(\eta)$. Table 3 summarizes the steps of the "probability elicitation view": start from 1) the expected maximum reward function $J(\eta)$ and 2) the link function $(f^*_\phi)^{-1}(v)$, and determine the loss function $\phi(v)$. If $J(\eta) = -C^*_\phi(\eta)$, the two procedures are equivalent, since they both reduce to the search for the probability estimate $\hat{\eta}^*$ of (16).
Comparing to Table 2, it is clear that the least squares procedures are special cases of Savage 1, with $k = -l = 4$ and $m = 0$, and the link function $\eta = (v + 1)/2$. The constraint $k = -l$ is necessary
$^2$This makes the notation $f_\phi$ and $C^*_\phi$ technically inaccurate. $C^*_{f,\phi}$ would be more suitable. We, nevertheless, retain the $C^*_\phi$ notation for the sake of consistency with the literature.
[Figure 1: two plots comparing the methods of the text; legend: Least squares, Modified LS, SVM, Boosting, Logistic Reg., Savage Loss, Zero-One.]
Figure 1: Loss function $\phi(v)$ (left) and minimum conditional risk $C^*_\phi(\eta)$ (right) associated with the different methods discussed in the text.
for (22) to hold, but not the others. For Savage 2, a "machine learning form" is not possible (at this point), because $J(\eta) \neq J(1 - \eta)$. We currently do not know if such a form can be derived in cases like this, i.e. where the symmetries of (19) and/or (22) are absent. From the probability elicitation point of view, an important contribution of the machine learning research (in addition to the algorithms themselves) has been to identify new $J$ functions, namely those associated with the techniques other than least squares. From the machine learning point of view, the elicitation perspective is interesting because it enables the derivation of new $\phi$ functions.
The main observation is that, under the customary specification of $\phi$, both $C^*_\phi(\eta)$ and $f^*_\phi(\eta)$ are immediately set, leaving no open degrees of freedom. In fact, the selection of $\phi$ can be seen as the indirect selection of a link function $(f^*_\phi)^{-1}$ and a minimum conditional risk $C^*_\phi(\eta)$. The latter is an approximation to the minimum conditional risk of the 0-1 loss, $C^*_{\phi_{0/1}}(\eta) = 1 - \max(\eta, 1 - \eta)$. The approximations associated with the existing algorithms are shown in Figure 1. The approximation error is smallest for the SVM, followed by least squares, logistic regression, and boosting, but all approximations are comparable. The alternative, suggested by the probability elicitation view, is to start with the selection of the approximation directly. In addition to allowing direct control over the quantity that is usually of interest (the minimum expected risk of the classifier), the selection of $C^*_\phi(\eta)$ (which is equivalent to the selection of $J(\eta)$) has the added advantage of leaving one degree of freedom open. As stated by Corollary 2 it is further possible to select across $\phi$ functions, by controlling the link function $f_\phi$. This allows tailoring properties of detail of the classifier, while maintaining its performance constant, in terms of the expected risk.
We demonstrate this point, by proposing a new loss function $\phi$. We start by selecting the minimum conditional risk of least squares (using Savage's version with $k = -l = 1$, $m = 0$), $C^*_\phi(\eta) = \eta(1 - \eta)$, because it provides the best approximation to the Bayes error, while avoiding the lack of differentiability of the SVM. We next replace the traditional link function of least squares by the logistic link function (classically used with logistic regression) $f^*_\phi = \frac{1}{2}\log\frac{\eta}{1-\eta}$. When used in the context of boosting (LogitBoost [3]), this link function has been found less sensitive to outliers than other variants [8]. We then resort to (25) to find the $\phi$ function, which we denote by the Savage loss,
$$\phi(v) = \frac{1}{(1 + e^{2v})^2}. \qquad (26)$$
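The same numerical recipe used above for the exponential loss confirms this derivation: with $C^*_\phi(\eta) = \eta(1-\eta)$ and the logistic link, (25) yields (26), and the resulting loss visibly saturates for strongly negative $v$. The test grid is a choice of this sketch.

```python
import numpy as np

c_star = lambda e: e * (1 - e)                            # least-squares risk
c_star_prime = lambda e: 1 - 2 * e
f_inv = lambda v: np.exp(2 * v) / (1 + np.exp(2 * v))     # logistic link inverse

v = np.linspace(-6, 2, 9)
eta = f_inv(v)
phi = c_star(eta) + (1 - eta) * c_star_prime(eta)         # (25)
print(np.allclose(phi, 1 / (1 + np.exp(2 * v)) ** 2))     # True: matches (26)
print(phi[:3])                                            # ~1 for v << 0: bounded
```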
A plot of this function is presented in Figure 1, along with those associated with all the algorithms of Table 1. Note that the proposed loss is very similar to that of least squares in the region where $|v|$ is small (the margin), but quickly becomes constant as $v \to -\infty$. This is unlike all previous $\phi$ functions, and suggests that classifiers designed with the new loss should be more robust to outliers. It is also interesting to note that the new loss function is not convex, violating what has been a hallmark of the $\phi$ functions used in the literature. The convexity of $\phi$ is, however, not important, a fact that is made clear by the elicitation view. Note that the convexity of the expected reward of (9) only depends on the convexity of the functions $I_1(\eta)$ and $I_{-1}(\eta)$. These, in turn, only depend on the choice of $J(\eta)$, as shown by (11) and (12). From Corollary 2 it follows that, as long as the symmetries of (22) and (19) hold, and $\phi$ is selected according to (25), the selection of $C^*_\phi(\eta)$
Algorithm 1 SavageBoost
Input: Training set $D = \{(x_1, y_1), \ldots, (x_n, y_n)\}$, where $y \in \{1, -1\}$ is the class label of example $x$, and number $M$ of weak learners in the final decision rule.
Initialization: Select uniform weights $w_i^{(1)} = \frac{1}{|D|}, \forall i$.
for $m = \{1, \ldots, M\}$ do
  compute the gradient step $G_m(x)$ with (30).
  update weights $w_i$ according to $w_i^{(m+1)} = w_i^{(m)} \cdot e^{y_i G_m(x_i)}$.
end for
Output: decision rule $h(x) = \mathrm{sgn}[\sum_{m=1}^{M} G_m(x)]$.
completely determines the convexity of the conditional risk of (4). Whether ? is itself convex does
not matter.
6
SavageBoost
We have hypothesized that classifiers designed with (26) should be more robust than those derived from the previous φ functions. To test this we designed a boosting algorithm based on the new loss, using the procedure proposed by Friedman to derive RealBoost [3]. At each iteration the algorithm searches for the weak learner G(x) which further reduces the conditional risk E_{Y|X}[φ(y(f(x) + G(x)))|X = x] of the current f(x), for every x ∈ X. The optimal weak learner is

G*(x) = argmin_{G(x)} η(x) φ_w(G(x)) + (1 − η(x)) φ_w(−G(x))   (27)
where

φ_w(yG(x)) = 1 / (1 + w(x, y)^2 e^{2yG(x)})^2   (28)

and

w(x, y) = e^{y f(x)}.   (29)
The minimization is by gradient descent. Setting the gradient with respect to G(x) to zero results in

G*(x) = (1/2) log [ Pw(y = 1|x) / Pw(y = −1|x) ]   (30)
where Pw(y = i|x) are probability estimates obtained from the re-weighted training set. At each iteration the optimal weak learner is found from (30) and reweighting is performed according to (29). We refer to the algorithm as SavageBoost, and summarize it in the inset.
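A compact Python sketch of this procedure is given below; the single-feature histogram weak learner and the bin-wise weighted estimate of Pw(y = i|x) are simplifying assumptions of ours, not choices specified by the paper:

import numpy as np

def savageboost(X, y, M=50, n_bins=10):
    # SavageBoost sketch: X is (n, d), y in {-1, +1}.
    # Weak learner: per-bin log-odds on feature 0 only (our simplification).
    n = len(y)
    w = np.full(n, 1.0 / n)                     # w_i^(1) = 1/|D|
    edges = np.quantile(X[:, 0], np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(X[:, 0], edges[1:-1]), 0, n_bins - 1)
    learners = []
    for _ in range(M):
        G = np.zeros(n_bins)
        for b in range(n_bins):
            in_bin = bins == b
            p_pos = w[in_bin & (y == 1)].sum() + 1e-12
            p_neg = w[in_bin & (y == -1)].sum() + 1e-12
            G[b] = 0.5 * np.log(p_pos / p_neg)  # Eq. (30)
        w = w * np.exp(y * G[bins])             # weight update of Algorithm 1
        w = w / w.sum()
        learners.append(G)
    return edges, learners                      # h(x) = sgn(sum_m G_m(x))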
7 Experimental results
We compared SavageBoost to AdaBoost [9], RealBoost [3], and LogitBoost [3]. The latter is generally considered more robust to outliers [8] and thus a good candidate for comparison. Ten binary UCI data sets were used: Pima-diabetes, breast cancer diagnostic, breast cancer prognostic, original Wisconsin breast cancer, liver disorder, sonar, echo-cardiogram, Cleveland heart disease, tic-tac-toe, and Haberman's survival. We followed the training/testing procedure outlined in [2] to explore the robustness of the algorithms to outliers. In all cases, five-fold validation was used with varying levels of outlier contamination. Figure 2 shows the average error of the four methods on the Liver-Disorder set. Table 4 shows the number of times each method produced the smallest error (#wins) over the ten data sets at a given contamination level, as well as the average error% over all data sets (at that contamination level). Our results confirm previous studies that have noted AdaBoost's sensitivity to outliers [1]. Among the previous methods AdaBoost indeed performed the worst, followed by RealBoost, with LogitBoost producing the best results. This confirms previous reports that LogitBoost is less sensitive to outliers [8]. SavageBoost produced generally better results than Ada and RealBoost at all contamination levels, including 0% contamination. LogitBoost achieves
[Figure 2 (plot): %Error (y-axis, roughly 28 to 48) vs. Outlier Percentage (x-axis, 0 to 40) for Sav. Loss (SavageBoost), Exp Loss (RealBoost), Log Loss (LogitBoost), and Exp Loss (AdaBoost).]
Figure 2: Average error for four boosting methods at different contamination levels.
Table 4: (number of wins, average error%) for each method and outlier percentage.

Method                      | 0% outliers  | 5% outliers  | 40% outliers
Savage Loss (SavageBoost)   | (4, 19.22%)  | (4, 19.91%)  | (6, 25.9%)
Log Loss (LogitBoost)       | (4, 20.96%)  | (4, 22.04%)  | (3, 31.73%)
Exp Loss (RealBoost)        | (2, 23.99%)  | (2, 25.34%)  | (0, 33.18%)
Exp Loss (AdaBoost)         | (0, 24.58%)  | (0, 26.45%)  | (1, 38.22%)
comparable results at low contamination levels (0%, 5%) but has higher error when contamination
is significant. With 40% contamination SavageBoost has 6 wins, compared to 3 for LogitBoost
and, on average, about 6% less error. Although in all experiments each algorithm was allowed 50 iterations, SavageBoost converged much faster than the others, requiring an average of 25 iterations at 0% contamination. This is in contrast to 50 iterations for LogitBoost and 45 iterations for RealBoost. We attribute the fast convergence to the bounded nature of the new loss, which prevents so-called "early stopping" problems [10]. Fast convergence is, of course, a great benefit in terms of the
computational efficiency of training and testing. This issue will be studied in greater detail in the
future.
References
[1] T. G. Dietterich, "An experimental comparison of three methods for constructing ensembles of decision trees: Bagging, boosting, and randomization," Machine Learning, 2000.
[2] Y. Wu and Y. Liu, "Robust truncated-hinge-loss support vector machines," JASA, 2007.
[3] J. Friedman, T. Hastie, and R. Tibshirani, "Additive logistic regression: A statistical view of boosting," Annals of Statistics, 2000.
[4] T. Zhang, "Statistical behavior and consistency of classification methods based on convex risk minimization," Annals of Statistics, 2004.
[5] P. Bartlett, M. Jordan, and J. D. McAuliffe, "Convexity, classification, and risk bounds," JASA, 2006.
[6] L. J. Savage, "The elicitation of personal probabilities and expectations," JASA, vol. 66, pp. 783–801, 1971.
[7] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge: Cambridge University Press, 2004.
[8] R. McDonald, D. Hand, and I. Eckley, "An empirical comparison of three boosting algorithms on real data sets with artificial class noise," in International Workshop on Multiple Classifier Systems, 2003.
[9] Y. Freund and R. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, 1997.
[10] T. Zhang and B. Yu, "Boosting with early stopping: Convergence and consistency," Annals of Statistics, 2005.
2,859 | 3,592 | Learning Taxonomies by Dependence Maximization
Matthew B. Blaschko
Arthur Gretton
Max Planck Institute for Biological Cybernetics
Spemannstr. 38
72076 Tübingen, Germany
{blaschko,arthur}@tuebingen.mpg.de
Abstract
We introduce a family of unsupervised algorithms, numerical taxonomy clustering, to simultaneously cluster data, and to learn a taxonomy that encodes the relationship between the clusters. The algorithms work by maximizing the dependence between the taxonomy and the original data. The resulting taxonomy is
a more informative visualization of complex data than simple clustering; in addition, taking into account the relations between different clusters is shown to
substantially improve the quality of the clustering, when compared with state-of-the-art algorithms in the literature (both spectral clustering and a previous dependence maximization approach). We demonstrate our algorithm on image and text
data.
1 Introduction
We address the problem of finding taxonomies in data: that is, to cluster the data, and to specify in a
systematic way how the clusters relate. This problem is widely encountered in biology, when grouping different species; and in computer science, when summarizing and searching over documents
and images. One of the simpler methods that has been used extensively is agglomerative clustering
[18]. One specifies a distance metric and a linkage function that encodes the cost of merging two
clusters, and the algorithm greedily agglomerates clusters, forming a hierarchy until at last the final
two clusters are merged into the tree root. A related alternate approach is divisive clustering, in
which clusters are split at each level, beginning with a partition of all the data, e.g. [19]. Unfortunately, this is also a greedy technique and we generally have no approximation guarantees. More
recently, hierarchical topic models [7, 23] have been proposed to model the hierarchical cluster structure of data. These models often rely on the data being representable by multinomial distributions
over bags of words, making them suitable for many problems, but their application to arbitrarily
structured data is in no way straightforward. Inference in these models often relies on sampling
techniques that can affect their practical computational efficiency.
On the other hand, many kinds of data can be easily compared using a kernel function, which
encodes the measure of similarity between objects based on their features. Spectral clustering algorithms represent one important subset of clustering techniques based on kernels [24, 21]: the
spectrum of an appropriately normalized similarity matrix is used as a relaxed solution to a partition
problem. Spectral techniques have the advantage of capturing global cluster structure of the data,
but generally do not give a global solution to the problem of discovering taxonomic structure.
In the present work, we propose a novel unsupervised clustering algorithm, numerical taxonomy
clustering, which both clusters the data and learns a taxonomy relating the clusters. Our method
works by maximizing a kernel measure of dependence between the observed data, and a product
of the partition matrix that defines the clusters with a structure matrix that defines the relationship
between individual clusters. This leads to a constrained maximization problem that is in general NP
hard, but that can be approximated very efficiently using results in spectral clustering and numerical
1
taxonomy (the latter field addresses the problem fitting taxonomies to pairwise distance data [1, 2,
4, 8, 11, 15, 25], and contains techniques that allow us to efficiently fit a tree structure to our data
with tight approximation guarantees). Aside from its simplicity and computational efficiency, our
method has two important advantages over previous clustering approaches. First, it represents a
more informative visualization of the data than simple clustering, since the relationship between the
clusters is also represented. Second, we find the clustering performance is improved over methods
that do not take cluster structure into account, and over methods that impose a cluster distance
structure rather than learning it.
Several objectives that have been used for clustering are related to the objective employed here. Bach
and Jordan [3] proposed a modified spectral clustering objective that they then maximize either with
respect to the kernel parameters or the data partition. Cristianini et al. [10] proposed a normalized
inner product between a kernel matrix and a matrix constructed from the labels, which can be used
to learn kernel parameters. The objective we use here is also a normalized inner product between a
similarity matrix and a matrix constructed from the partition, but importantly, we include a structure
matrix that represents the relationship between clusters. Our work is most closely related to that of
Song et al. [22], who used an objective that includes a fixed structure matrix and an objective based
on the Hilbert-Schmidt Independence Criterion. Their objective is not normalized, however, and
they do not maximize with respect to the structure matrix.
The paper is organized as follows. In Section 2, we introduce a family of dependence measures
with which one can interpret the objective function of the clustering approach. The dependence
maximization objective is presented in Section 3, and its relation to classical spectral clustering
algorithms is explained in Section 3.1. Important results for the optimization of the objective are
presented in Sections 3.2 and 3.3. The problem of numerical taxonomy and its relation to the proposed objective function is presented in Section 4, as well as the numerical taxonomy clustering
algorithm. Experimental results are given in Section 5.
2 Hilbert-Schmidt Independence Criterion
In this section, we give a brief introduction to the Hilbert-Schmidt Independence Criterion (HSIC),
which is a measure of the strength of dependence between two variables (in our case, following
[22], these are the data before and after clustering). We begin with some basic terminology in kernel
methods. Let F be a reproducing kernel Hilbert space of functions from X to R, where X is a
separable metric space (our input domain). To each point x ∈ X, there corresponds an element φ(x) ∈ F (we call φ the feature map) such that ⟨φ(x), φ(x′)⟩_F = k(x, x′), where k : X × X → R is a unique positive definite kernel. We also define a second RKHS G with respect to the separable metric space Y, with feature map ψ and kernel ⟨ψ(y), ψ(y′)⟩_G = l(y, y′).
Let (X, Y) be random variables on X × Y with joint distribution Pr_{X,Y}, and associated marginals Pr_X and Pr_Y. Then following [5, 12], the covariance operator C_xy : G → F is defined such that for all f ∈ F and g ∈ G,

⟨f, C_xy g⟩_F = E_{x,y}( [f(x) − E_x(f(x))] [g(y) − E_y(g(y))] ).

A measure of dependence is then the Hilbert-Schmidt norm of this operator (the sum of the squared singular values), ||C_xy||²_HS. For characteristic kernels [13], this is zero if and only if X and Y are independent. It is shown in [13] that the Gaussian and Laplace kernels are characteristic on R^d. Given a sample of size n from Pr_{X,Y}, the Hilbert-Schmidt Independence Criterion (HSIC) is defined by [14] to be a (slightly biased) empirical estimate of ||C_xy||²_HS,

HSIC := Tr[Hn K Hn L],

where Hn = I − (1/n) 1_n 1_n^T, 1_n is the n × 1 vector of ones, K is the Gram matrix for samples from Pr_X with (i, j)th entry k(x_i, x_j), and L is the Gram matrix with kernel l(y_i, y_j).
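This estimator is a one-line computation given the two Gram matrices; the sketch below adds an RBF kernel purely as an example choice:

import numpy as np

def rbf_gram(X, sigma=1.0):
    # Gram matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def hsic(K, L):
    # Biased empirical HSIC: Tr[Hn K Hn L], with Hn = I - (1/n) 1 1^T
    n = K.shape[0]
    Hn = np.eye(n) - np.ones((n, n)) / n
    return np.trace(Hn @ K @ Hn @ L)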
3 Dependence Maximization
We now specify how the dependence criteria introduced in the previous section can be used in
clustering. We represent our data via an n × n Gram matrix M ⪰ 0: in the simplest case, this is the centered kernel matrix (M = Hn K Hn), but we also consider a Gram matrix corresponding to normalized cuts clustering (see Section 3.1). Following [22], we define our output Gram matrix to be L = Π Y Π^T, where Π is an n × k partition matrix, k is the number of clusters, and Y is a positive definite matrix that encodes the relationship between clusters (e.g. a taxonomic structure). Our clustering quality is measured according to

Tr[M Hn Π Y Π^T Hn] / √( Tr[Π Y Π^T Hn Π Y Π^T Hn] ).   (1)
In terms of the covariance operators introduced earlier, we are optimizing HSIC, this being an empirical estimate of ||C_xy||²_HS, while normalizing by the empirical estimate of ||C_yy||²_HS (we need not normalize by ||C_xx||²_HS, since it is constant). This criterion is very similar to the criterion introduced for use in kernel target alignment [10], the difference being the addition of centering matrices, Hn, as required by the definition of the covariance. We remark that the normalizing term ||Hn Π Y Π^T Hn||_HS
was not needed in the structured clustering objective of [22]. This is because Song et al. were interested only in solving for the partition matrix, Π, whereas we also solve for Y: without normalization, the objective can always be improved by scaling Y arbitrarily. In the remainder of this section, we address the maximization of Equation (1) under various simplifying assumptions: these results will then be used in our main algorithm in Section 4.
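The objective of Equation (1) is straightforward to evaluate; for instance (a minimal NumPy sketch):

import numpy as np

def clustering_quality(M, Pi, Y):
    # Eq. (1): Tr[M Hn Pi Y Pi^T Hn] / sqrt(Tr[Pi Y Pi^T Hn Pi Y Pi^T Hn])
    n = M.shape[0]
    Hn = np.eye(n) - np.ones((n, n)) / n
    L = Pi @ Y @ Pi.T
    return np.trace(M @ Hn @ L @ Hn) / np.sqrt(np.trace(L @ Hn @ L @ Hn))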
3.1 Relation to Spectral Clustering
Maximizing Equation (1) is quite difficult given that the entries of Π can only take on values in {0, 1}, and that the row sums have to be equal to 1. In order to more efficiently solve this difficult combinatorial problem, we make use of a spectral relaxation. Consider the case that Π is a column vector π and Y is the identity matrix. Equation (1) becomes

max_π Tr[M Hn π π^T Hn] / √( Tr[π π^T Hn π π^T Hn] ) = max_π (π^T Hn M Hn π) / (π^T Hn π)   (2)

Setting the derivative with respect to π to zero and rearranging, we obtain

Hn M Hn π = [ (π^T Hn M Hn π) / (π^T Hn π) ] Hn π.   (3)

Using the normalization π^T Hn π = 1, we obtain the generalized eigenvalue problem

Hn M Hn π_i = λ_i Hn π_i,   or equivalently   Hn M Hn v_i = λ_i v_i   (with v_i = Hn π_i).   (4)
For Π ∈ {0, 1}^{n×k} where k > 1, we can recover Π by extracting the k eigenvectors associated with the largest eigenvalues. As discussed in [24, 21], the relaxed solution will contain an arbitrary rotation which can be recovered using a reclustering step.
If we choose M = D^{−1/2} A D^{−1/2} where A is a similarity matrix, and D is the diagonal matrix such that D_ii = Σ_j A_ij, we can recover a centered version of the spectral clustering of [21]. In fact, we wish to ignore the eigenvector with constant entries [24], so the centering matrix Hn does not alter the clustering solution.
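In code, the relaxation amounts to an eigendecomposition of the centered Gram matrix (a sketch; the final reclustering step, e.g. k-means on the rows, is omitted):

import numpy as np

def relaxed_partition(M, k):
    # Eq. (4): top-k eigenvectors of Hn M Hn give the relaxed partition,
    # determined only up to rotation (see text).
    n = M.shape[0]
    Hn = np.eye(n) - np.ones((n, n)) / n
    w, V = np.linalg.eigh(Hn @ M @ Hn)   # ascending eigenvalues
    return V[:, np.argsort(w)[-k:]]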
3.2 Solving for the Optimal Y ⪰ 0 Given Π
We now address the subproblem of solving for the optimal structure matrix, Y, subject only to positive semi-definiteness, for any Π. We note that the maximization of Equation (1) is equivalent to the constrained optimization problem

max_Y Tr[M Hn Π Y Π^T Hn],   s.t.   Tr[Π Y Π^T Hn Π Y Π^T Hn] = 1   (5)

We write the Lagrangian

L(Y, λ) = Tr[M Hn Π Y Π^T Hn] + λ (1 − Tr[Π Y Π^T Hn Π Y Π^T Hn]),   (6)

take the derivative with respect to Y, and set to zero, to obtain

∂L/∂Y = Π^T Hn M Hn Π − 2λ (Π^T Hn Π) Y (Π^T Hn Π) = 0   (7)
which together with the constraint in Equation (5) yields

Y* = (Π^T Hn Π)† (Π^T Hn M Hn Π) (Π^T Hn Π)† / √( Tr[ Π^T Hn M Hn Π (Π^T Hn Π)† Π^T Hn M Hn Π (Π^T Hn Π)† ] ),   (8)

where † indicates the Moore-Penrose generalized inverse [17, p. 421].

Because (Π^T Hn Π)† Π^T Hn = Hk (Π^T Π)^{−1} Π^T Hn (see [6, 20]), we note that Equation (8) computes a normalized set kernel between the elements in each cluster. Up to a constant normalization factor, Y* is equivalent to Hk Ŷ Hk where

Ŷ_ij = (1 / (N_i N_j)) Σ_{α∈C_i} Σ_{β∈C_j} M̂_{αβ},   (9)

N_i is the number of elements in cluster i, C_i is the set of indices of samples assigned to cluster i, and M̂ = Hn M Hn. This is a standard set kernel as defined in [16].
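Given hard cluster assignments, Equation (9) is simple to compute directly (a sketch; labels is assumed to be an integer array of cluster ids):

import numpy as np

def optimal_Y(M, labels, k):
    # Eq. (9): Y_hat[i, j] = mean of (Hn M Hn) over pairs from clusters i and j;
    # Y* of Eq. (8) equals Hk Y_hat Hk up to a constant factor.
    n = M.shape[0]
    Hn = np.eye(n) - np.ones((n, n)) / n
    Mc = Hn @ M @ Hn
    Yhat = np.array([[Mc[np.ix_(labels == i, labels == j)].mean()
                      for j in range(k)] for i in range(k)])
    Hk = np.eye(k) - np.ones((k, k)) / k
    return Hk @ Yhat @ Hk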
3.3 Solving for Π with the Optimal Y ⪰ 0
As we have solved for Y* in closed form in Equation (8), we can plug this result into Equation (1) to obtain a formulation of the problem of optimizing Π* that does not require a simultaneous optimization over Y. Under these conditions, Equation (1) is equivalent to

max_Π √( Tr[ Π^T Hn M Hn Π (Π^T Π)^{−1} Π^T Hn M Hn Π (Π^T Π)^{−1} ] ).   (10)

By evaluating the first order conditions on Equation (10), we can see that the relaxed solution, Π*, to Equation (10) must lie in the principal subspace of Hn M Hn.¹ Therefore, for the problem of simultaneously optimizing the structure matrix, Y ⪰ 0, and the partition matrix, one can use the same spectral relaxation as in Equation (4), and use the resulting partition matrix to solve for the optimal assignment for Y using Equation (8). This indicates that the optimal partition of the data is the same for Y given by Equation (8) and for Y = I. We show in the next section how we can add additional constraints on Y to not only aid in interpretation, but to actually improve the optimal clustering.
4 Numerical Taxonomy
In this section, we consolidate the results developed in Section 3 and introduce the numerical taxonomy clustering algorithm. The algorithm allows us to simultaneously cluster data and learn a tree
structure that relates the clusters. The tree structure imposes constraints on the solution, which in
turn affect the data partition selected by the clustering algorithm. The data are only assumed to be
well represented by some taxonomy, but not any particular topology or structure.
In Section 3 we introduced techniques for solving for Y and Π that depend only on Y being constrained to be positive semi-definite. In the interests of interpretability, as well as the ability to
influence clustering solutions by prior knowledge, we wish to explore the problem where additional
constraints are imposed on the structure of Y . In particular, we consider the case that Y is constrained to be generated by a tree metric. By this, we mean that the distance between any two
clusters is consistent with the path length along some fixed tree whose leaves are identified with the
clusters. For any positive semi-definite matrix Y, we can compute the distance matrix, D, given by the norm implied by the inner product that computes Y, by assigning D_ij = √(Y_ii + Y_jj − 2 Y_ij). It is sufficient, then, to reformulate the optimization problem given in Equation (1) to add the following constraints that characterize distances generated by a tree metric

D_ab + D_cd ≤ max(D_ac + D_bd, D_ad + D_bc)   ∀ a, b, c, d,   (11)

where D is the distance matrix generated from Y. The constraints in Equation (11) are known as the 4-point condition, and were proven in [8] to be necessary and sufficient for D to be a tree metric.
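The condition is directly checkable on small distance matrices (a sketch; equivalently, for each quadruple the two largest of the three pairwise sums must coincide):

from itertools import combinations

def satisfies_four_point(D, tol=1e-9):
    # Eq. (11): D_ab + D_cd <= max(D_ac + D_bd, D_ad + D_bc) for all a, b, c, d
    for a, b, c, d in combinations(range(len(D)), 4):
        s = sorted((D[a][b] + D[c][d], D[a][c] + D[b][d], D[a][d] + D[b][c]))
        if s[2] - s[1] > tol:   # two largest sums must be (numerically) equal
            return False
    return True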
¹For a detailed derivation, see the extended technical report [6].
Optimization problems incorporating these constraints are combinatorial and generally difficult to
solve. The problem of numerical taxonomy, or fitting additive trees, is as follows: given a fixed
distance matrix, D, that fulfills metric constraints, find the solution to
min_{D_T} ||D − D_T||   (12)

with respect to some norm (e.g. L1, L2, or L∞), where D_T is subject to the 4-point condition. While
numerical taxonomy is in general NP hard, a great variety of approximation algorithms with feasible
computational complexity have been developed [1, 2, 11, 15]. Given a distance matrix that satisfies
the 4-point condition, the associated unrooted tree that generated the matrix can be found in O(k 2 )
time, where k is equal to the number of clusters [25].
We propose the following iterative algorithm to incorporate the 4-point condition into the optimization of Equation (1):
Require: M ⪰ 0
Ensure: (Π, Y) → (Π*, Y*) that solve Equation (1) with the constraints given in Equation (11)
Initialize Y = I
Initialize Π using the relaxation in Section 3.1
while convergence has not been reached do
   Solve for Y given Π using Equation (8)
   Construct D such that D_ij = √(Y_ii + Y_jj − 2 Y_ij)
   Solve for min_{D_T} ||D − D_T||
   Assign Y = −(1/2) Hk (D_T ∘ D_T) Hk, where ∘ represents the Hadamard product
   Update Π using a normalized version of the algorithm described in [22]
end while
One can view this optimization as solving the relaxed version of the problem such that Y is only constrained to be positive definite, and then projecting the solution onto the feasible set by requiring Y to be constructed from a tree metric. By iterating the procedure, we can allow Π to reflect the fact that it should best fit the current estimate of the tree metric.
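The loop can be sketched as follows; fit_tree_metric (any numerical-taxonomy solver for min_{D_T} ||D − D_T||) and update_partition (a normalized variant of [22]) are hypothetical placeholders for routines not specified here, and the crude argmax initialization stands in for the reclustering step of Section 3.1:

import numpy as np

def taxonomy_clustering(M, k, fit_tree_metric, update_partition, n_iter=10):
    # Alternate between the closed-form Y and projection onto tree metrics
    n = M.shape[0]
    Hn = np.eye(n) - np.ones((n, n)) / n
    Hk = np.eye(k) - np.ones((k, k)) / k
    w, V = np.linalg.eigh(Hn @ M @ Hn)
    top = V[:, np.argsort(w)[-k:]]
    Pi = np.zeros((n, k))
    Pi[np.arange(n), np.abs(top).argmax(axis=1)] = 1   # crude initial partition
    for _ in range(n_iter):
        counts = np.maximum(Pi.sum(axis=0), 1.0)
        Yhat = (Pi.T @ Hn @ M @ Hn @ Pi) / np.outer(counts, counts)
        Y = Hk @ Yhat @ Hk                             # Eq. (8)/(9)
        d = np.diag(Y)
        D = np.sqrt(np.maximum(d[:, None] + d[None, :] - 2 * Y, 0.0))
        DT = fit_tree_metric(D)                        # min ||D - D_T||
        Y = -0.5 * Hk @ (DT * DT) @ Hk                 # Hadamard square of D_T
        Pi = update_partition(M, Y)
    return Pi, Y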
5 Experimental Results
To illustrate the effectiveness of the proposed algorithm, we have performed clustering on two
benchmark datasets. The face dataset presented in [22] consists of 185 images of three different
people, each with three different facial expressions. The authors posited that this would be best
represented by a ternary tree structure, where the first level would decide which subject was represented, and the second level would be based on facial expression. In fact, their clustering algorithm
roughly partitioned the data in this way when the appropriate structure matrix was imposed. We
will show that our algorithm is able to find a similar structure without supervision, which better
represents the empirical structure of the data.
We have also included results for the NIPS 1-12 dataset,2 which consists of binarized histograms
of the first 12 years of NIPS papers, with a vocabulary size of 13649 and a corpus size of 1740.
A Gaussian kernel was used with the normalization parameter set to the median squared distance
between points in input space.
5.1
Performance Evaluation on the Face Dataset
We first describe a numerical comparison on the face dataset [22] of the approach presented in
Section 4 (where M = Hn KHn is assigned as in a HSIC objective). We considered two alternative
approaches: a classic spectral clustering algorithm [21], and the dependence maximization approach
of Song et al. [22]. Because the approach in [22] is not able to learn the structure of Y from the data,
we have optimized the partition matrix for 8 different plausible hierarchical structures (Figure 1).
These have been constructed by truncating n-ary trees to the appropriate number of leaf nodes. For
the evaluation, we have made use of the fact that the desired partition of the data is known for the
face dataset, which allows us to compare the predicted clusters to the ground truth labels. For each
²The NIPS 1-12 dataset is available at http://www.cs.toronto.edu/~roweis/data.html
partition matrix, we compute the conditional entropy of the true labels, l, given the cluster ids, c, H(l|c), which is related to mutual information by I(l; c) = H(l) − H(l|c). As H(l) is fixed for a given dataset, argmax_c I(l; c) = argmin_c H(l|c), and H(l|c) ≥ 0, with equality only in the case that the clusters are pure [9]. Table 1 shows that the learned structure and proper normalization of our algorithm result in a partition of the images that much more closely matches the true identities and expressions of the faces, as evidenced by a much lower conditional entropy score than either the spectral clustering approach of [21] or the dependence maximization approach of [22].
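Conditional entropy is computed from the empirical joint distribution of labels and cluster ids (a small sketch; inputs are NumPy arrays):

import numpy as np

def conditional_entropy(labels, clusters):
    # H(l | c) = sum_c p(c) * H(l | C = c), in nats
    H = 0.0
    for c in np.unique(clusters):
        mask = clusters == c
        p_c = mask.mean()
        _, counts = np.unique(labels[mask], return_counts=True)
        p = counts / counts.sum()
        H -= p_c * np.sum(p * np.log(p))
    return H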
Figure 2 shows the discovered taxonomy for the face dataset, where the length of the edges is
proportional to the distance in the tree metric (thus, in interpreting the graph, it is important to take
into account both the nodes at which particular clusters are connected, and the distance between
these nodes; this is by contrast with Figure 1, which only gives the hierarchical cluster structure
and does not represent distance). Our results show we have indeed recovered an appropriate tree
structure without having to pre-specify the cluster similarity relations.
[Figure 1 (diagram): eight tree structures, panels (a)-(h).]
Figure 1: Structures used in the optimization of [22]. The clusters are identified with leaf nodes, and distances between the clusters are given by the minimum path length from one leaf to another. Each edge in the graph has equal cost.
          | spectral | a      | b      | c      | d      | e      | f      | g      | h      | taxonomy
H(l|c)    | 0.5443   | 0.7936 | 0.4970 | 0.6336 | 0.8652 | 1.2246 | 1.1396 | 1.1325 | 0.5180 | 0.2807

Table 1: Conditional entropy scores for spectral clustering [21], the clustering algorithm of [22], and the method presented here (last column). The structures for columns a-h are shown in Figure 1, while the learned structure is shown in Figure 2. The structure for spectral clustering is implicitly equivalent to that in Figure 1(h), as is apparent from the analysis in Section 3.1. Our method exceeds the performance of [21] and [22] for all the structures.
5.2 NIPS Paper Dataset
For the NIPS dataset, we partitioned the documents into k = 8 clusters using the numerical taxonomy clustering algorithm. Results are given in Figure 3. To allow us to verify the clustering
performance, we labeled each cluster using twenty informative words, as listed in Table 2. The most
representative words were selected for a given cluster according to a heuristic score αδ − βγ, where α is the number of times the word occurs in the cluster, β is the number of times the word occurs outside the cluster, γ is the number of documents in the cluster, and δ is the number of documents outside the cluster. We observe that not only are the clusters themselves well defined (e.g. cluster a contains
neuroscience papers, cluster g covers discriminative learning, and cluster h Bayesian learning), but
the similarity structure is also reasonable: clusters d and e, which respectively cover training and
applications of neural networks, are considered close, but distant from g and h; these are themselves
distant from the neuroscience cluster at a and the hardware papers in b; reinforcement learning gets
a cluster at f distant from the remaining topics. Only cluster c appears to be indistinct, and shows no
clear theme. Given its placement, we anticipate that it would merge with the remaining clusters for
smaller k.
6 Conclusions and Future Work
[Figure 2: Face dataset and the resulting taxonomy that was discovered by the algorithm.]
[Figure 3: The taxonomy discovered for the NIPS dataset, with clusters a-h arranged on the learned tree. Words that represent the clusters are given in Table 2.]

Table 2: Representative words for the NIPS dataset clusters.
a: neurons, cells, model, cell, visual, neuron, activity, synaptic, response, firing, cortex, stimulus, spike, cortical, frequency, orientation, motion, direction, spatial, excitatory
b: chip, circuit, analog, voltage, current, figure, vlsi, neuron, output, circuits, synapse, motion, pulse, neural, input, digital, gate, cmos, silicon, implementation
c: memory, dynamics, image, neural, hopfield, control, system, inverse, energy, capacity, object, field, motor, computational, network, images, subjects, model, associative, attractor
d: network, units, learning, hidden, networks, input, training, output, unit, weights, error, weight, neural, layer, recurrent, net, time, back, propagation, number
e: training, recognition, network, speech, set, word, performance, neural, networks, trained, classification, layer, input, system, features, test, classifier, classifiers, feature, image
f: state, learning, policy, action, reinforcement, optimal, control, function, time, states, actions, agent, algorithm, reward, sutton, goal, dynamic, step, programming, rl
g: function, error, algorithm, functions, learning, theorem, class, linear, examples, case, training, vector, bound, generalization, set, approximation, bounds, loss, algorithms, dimension
h: data, model, models, distribution, gaussian, likelihood, parameters, algorithm, mixture, em, bayesian, posterior, probability, density, variables, prior, log, approach, matrix, estimation

We have introduced a new algorithm, numerical taxonomy clustering, for simultaneously clustering data and discovering a taxonomy that relates the clusters. The algorithm is based on a dependence
maximization approach, with the Hilbert-Schmidt Independence Criterion as our measure of dependence. We have shown several interesting theoretical results regarding dependence maximization
clustering. First, we established the relationship between dependence maximization and spectral
clustering. Second, we showed the optimal positive definite structure matrix takes the form of a set
kernel, where sets are defined by cluster membership. This result applied to the original dependence
maximization objective indicates that the inclusion of an unconstrained structure matrix does not
affect the optimal partition matrix. In order to remedy this, we proposed to include constraints that
guarantee Y to be generated from an additive metric. Numerical taxonomy clustering allows us to
optimize the constrained problem efficiently.
In our experiments on grouping facial expressions, numerical taxonomy clustering is more accurate
than the existing approaches of spectral clustering and clustering with a fixed predefined structure.
We were also able to fit a taxonomy to NIPS papers that resulted in a reasonable and interpretable
clustering by subject matter. In both the facial expression and NIPS datasets, similar clusters are
close together on the resulting tree. We conclude that numerical taxonomy clustering is a useful tool
both for improving the accuracy of clusterings and for the visualization of complex data.
Our approach presently relies on the combinatorial optimization introduced in [22] in order to optimize Π given a fixed estimate of Y. We believe that this step may be improved by relaxing the problem, similarly to Section 3.1. Likewise, automatic selection of the number of clusters is an interesting area of future work. We cannot expect to use the criterion in Equation (1) to select the number of clusters because increasing the size of Π and Y can never decrease the objective. However, the elbow heuristic can be applied to the optimal value of Equation (1), which is closely related to the
eigengap approach. Another interesting line of work is to consider optimizing a clustering objective
derived from the Hilbert-Schmidt Normalized Independence Criterion (HSNIC) [13].
Acknowledgments
This work is funded by the EC projects CLASS, IST 027978, PerAct, EST 504321, and by the
Pascal Network, IST 2002-506778. We would also like to thank Christoph Lampert for simplifying
the Moore-Penrose generalized inverse.
References
[1] R. Agarwala, V. Bafna, M. Farach, B. Narayanan, M. Paterson, and M. Thorup. On the approximability of numerical taxonomy (fitting distances by tree metrics). In SODA, pages 365–372, 1996.
[2] N. Ailon and M. Charikar. Fitting tree metrics: Hierarchical clustering and phylogeny. In Foundations of Computer Science, pages 73–82, 2005.
[3] F. R. Bach and M. I. Jordan. Learning spectral clustering, with application to speech separation. JMLR, 7:1963–2001, 2006.
[4] R. Baire. Leçons sur les Fonctions Discontinues. Gauthier Villars, 1905.
[5] C. Baker. Joint measures and cross-covariance operators. Transactions of the American Mathematical Society, 186:273–289, 1973.
[6] M. B. Blaschko and A. Gretton. Taxonomy inference using kernel dependence measures. Technical report, Max Planck Institute for Biological Cybernetics, 2008.
[7] D. Blei, T. Griffiths, M. Jordan, and J. Tenenbaum. Hierarchical topic models and the nested Chinese restaurant process. In NIPS 16, 2004.
[8] P. Buneman. The recovery of trees from measures of dissimilarity. In D. Kendall and P. Tautu, editors, Mathematics in the Archeological and Historical Sciences, pages 387–395. Edinburgh U.P., 1971.
[9] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 1991.
[10] N. Cristianini, J. Shawe-Taylor, A. Elisseeff, and J. Kandola. On kernel-target alignment. In NIPS 14, 2002.
[11] M. Farach, S. Kannan, and T. Warnow. A robust model for finding optimal evolutionary trees. In STOC, pages 137–145, 1993.
[12] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. JMLR, 5:73–99, 2004.
[13] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In NIPS 20, 2008.
[14] A. Gretton, O. Bousquet, A. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In Algorithmic Learning Theory, pages 63–78, 2005.
[15] B. Harb, S. Kannan, and A. McGregor. Approximating the best-fit tree under Lp norms. In APPROX-RANDOM, pages 123–133, 2005.
[16] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, University of California at Santa Cruz, 1999.
[17] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge, 1985.
[18] A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
[19] P. Macnaughton Smith, W. Williams, M. Dale, and L. Mockett. Dissimilarity analysis: a new technique of hierarchical subdivision. Nature, 202:1034–1035, 1965.
[20] C. D. Meyer, Jr. Generalized inversion of modified matrices. SIAM Journal on Applied Mathematics, 24(3):315–323, 1973.
[21] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, pages 849–856, 2001.
[22] L. Song, A. Smola, A. Gretton, and K. M. Borgwardt. A dependence maximization view of clustering. In ICML, pages 815–822, 2007.
[23] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. JASA, 101(476):1566–1581, 2006.
[24] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[25] M. S. Waterman, T. F. Smith, M. Singh, and W. A. Beyer. Additive evolutionary trees. Journal of Theoretical Biology, 64:199–213, 1977.
2,860 | 3,593 | Characterizing neural dependencies
with copula models
Pietro Berkes
Volen Center for Complex Systems
Brandeis University, Waltham, MA 02454
[email protected]
Frank Wood and Jonathan Pillow
Gatsby Computational Neuroscience Unit, UCL
London WC1N 3AR, UK
{fwood,pillow}@gatsby.ucl.ac.uk
Abstract
The coding of information by neural populations depends critically on the statistical dependencies between neuronal responses. However, there is no simple model
that can simultaneously account for (1) marginal distributions over single-neuron
spike counts that are discrete and non-negative; and (2) joint distributions over the
responses of multiple neurons that are often strongly dependent. Here, we show
that both marginal and joint properties of neural responses can be captured using
copula models. Copulas are joint distributions that allow random variables with
arbitrary marginals to be combined while incorporating arbitrary dependencies between them. Different copulas capture different kinds of dependencies, allowing
for a richer and more detailed description of dependencies than traditional summary statistics, such as correlation coefficients. We explore a variety of copula
models for joint neural response distributions, and derive an efficient maximum
likelihood procedure for estimating them. We apply these models to neuronal data
collected in macaque pre-motor cortex, and quantify the improvement in coding accuracy afforded by incorporating the dependency structure between pairs
of neurons. We find that more than one third of neuron pairs shows dependency
concentrated in the lower or upper tails for their firing rate distribution.
1 Introduction
An important problem in systems neuroscience is to develop flexible, statistically accurate models
of neural responses. The stochastic spiking activity of individual neurons in cortex is often well
described by a Poisson distribution. Responses from multiple neurons also exhibit strong dependencies (i.e., correlations) due to shared input noise and lateral network interactions. However, there is
no natural multivariate generalization of the Poisson distribution. For this reason, much of the literature on population coding has tended either to ignore correlations entirely, treating neural responses
as independent Poisson random variables [1, 2], or to adopt a Gaussian model of joint responses
[3, 4], assuming a parametric form for dependencies but ignoring key features (e.g., discreteness,
non-negativity) of the marginal distribution. Recent work has focused on the construction of large
parametric models that capture inter-neuronal dependencies using generalized linear point-process
models [5, 6, 7, 8, 9] and binary second-order maximum-entropy models [10, 11, 12]. Although
these approaches are quite powerful, they model spike trains only in very fine time bins, and thus
describe the dependencies in neural spike count distributions only implicitly.
Modeling the joint distribution of neural activities is therefore an important open problem. Here
we show how to construct non-independent joint distributions over firing rates using copulas. In
particular, this approach can be used to combine arbitrary marginal firing rate distributions. The
development of the paper is as follows: in Section 2, we provide a basic introduction to copulas;
in Section 3, we derive a maximum likelihood estimation procedure for neural copula models, in
Sections 4 and 5, we apply these models to physiological data collected in macaque pre-motor
1
Figure 1: Samples drawn from a joint distribution defined using the dependency structure of a bivariate Gaussian distribution and changing the marginal distributions. Top row: The marginal distributions (the leftmost
marginal is uniform, by definition of copula). Bottom row: The log-density function of a Gaussian copula, and
samples from the joint distribution defined as in Eq. 2.
cortex; finally, in Section 6 we review the insights provided by neural copula models and discuss
several extensions and future directions.
2 Copulas
A copula C(u1, ..., un) : [0, 1]^n → [0, 1] is a multivariate distribution function on the unit cube with uniform marginals [13, 14]. The basic idea behind copulas is quite simple, and is closely related to that of histogram equalization: for a random variable yi with continuous cumulative distribution function (cdf) Fi, the random variable ui := Fi(yi) is uniformly distributed on the interval [0, 1]. One can use this basic property to separate the marginals from the dependency structure in a multivariate distribution: the full multivariate distribution is standardized by projecting each marginal onto one axis of the unit hyper-cube, leaving one with a distribution on the hyper-cube (the copula, by definition) that represents dependencies in the marginals' quantiles. This intuition has been formalized in Sklar's Theorem [15]:
Theorem 1 (Sklar, 1959) Given y1, ..., yn random variables with continuous distribution functions F1, ..., Fn and joint distribution F, there exists a unique copula C such that for all ui:

C(u1, ..., un) = F(F1^{−1}(u1), ..., Fn^{−1}(un))   (1)

Conversely, given any distribution functions F1, ..., Fn and copula C,

F(y1, ..., yn) = C(F1(y1), ..., Fn(yn))   (2)

is an n-variate distribution function with marginal distribution functions F1, ..., Fn.
This result gives a way to derive a copula given the joint and marginal distributions (using Eq. 1), and
also, more importantly here, to construct a joint distribution by specifying the marginal distributions
and the dependency structure separately (Eq. 2). For example, one can keep the dependency structure
fixed and vary the marginals (Fig. 1), or, vice versa, keep the marginal distributions fixed and define new joint distributions using parametrized copula families (Fig. 2). For illustration, in this paper we are going
to consider only the bivariate case. All the methods, however, generalize straightforwardly to the
multivariate case.
Since copulas do not depend on the marginals, one can define in this way dependency measures that
are insensitive to non-linear transformations of the individual variables [14] and generalize correlation coefficients, which are only appropriate for elliptic distributions. The copula representation has
also been used to estimate the conditional entropy of neural latencies by separating the contribution
of the individual latencies from that coming from their correlations [16].
Dependency structures are specified by parametric copula families. One notable example is the Gaussian copula, which generalizes the dependency structure of the multivariate Gaussian distribution to arbitrary marginal distributions (Fig. 1), and is defined as

C(u1, u2; Σ) = Φ_Σ(Φ^{−1}(u1), Φ^{−1}(u2)),   (3)
Figure 2: Samples drawn from a joint distribution with fixed Gaussian marginals and dependency structure
defined by parametric copula families, as indicated by the labels. Top row: log-density function for three
copula families. Bottom row: Samples from the joint distribution (Eq. 2).
Gaussian:          C^N_Σ(u1, u2) = Φ_Σ(Φ^{−1}(u1), Φ^{−1}(u2))

Frank:             C^Fr_θ(u1, u2) = −θ^{−1} log( 1 + (e^{−θu1} − 1)(e^{−θu2} − 1) / (e^{−θ} − 1) )

Clayton:           C^Cl_θ(u1, u2) = (u1^{−θ} + u2^{−θ} − 1)^{−1/θ},   θ > 0

Clayton negative:  C^Neg_θ(u1, u2) = max(u1^{−θ} + u2^{−θ} − 1, 0)^{−1/θ},   −1 ≤ θ < 0

Gumbel:            C^Gu_θ(u1, u2) = exp( −(ũ1^θ + ũ2^θ)^{1/θ} ),   ũj = −log uj,   θ ≥ 1

Table 1: Definition of families of copula distribution functions.
where Φ(u) is the cdf of the univariate Gaussian with mean 0 and variance 1, and Φ_Σ is the cdf of a standard multivariate Gaussian with mean 0 and covariance matrix Σ. Other families derive from the economics literature, and are typically one-parameter families that capture various possible dependencies, for example dependencies only in one of the tails of the distribution. Table 1 shows the definition of the copula distributions used in this paper (see [14] for an overview of known copulas and copula construction methods).
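Sampling from a joint built via Eq. 2 only requires uniform marginals with the chosen dependence plus the inverse marginal cdfs; below is a sketch for the Gaussian copula, where the Poisson marginals in the usage line are an arbitrary example choice:

import numpy as np
from scipy import stats

def sample_gaussian_copula(n, rho, ppf1, ppf2, seed=0):
    # Eq. 2 with a Gaussian copula: correlated normals -> uniforms -> marginals
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)
    u = stats.norm.cdf(z)   # uniform marginals carrying the Eq. 3 dependence
    return ppf1(u[:, 0]), ppf2(u[:, 1])

# e.g. Poisson(2) and Poisson(3) firing-rate marginals coupled with rho = 0.5:
y1, y2 = sample_gaussian_copula(1000, 0.5, stats.poisson(2).ppf, stats.poisson(3).ppf)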
3 Maximum Likelihood estimation for discrete marginal distributions
In the case where the random variables have discrete distribution functions, as in the case of neural
firing rates, only a weaker version of Theorem 1 is valid: there always exists a copula that satisfies
Eq. 2, but it is no longer guaranteed to be unique [17]. With discrete data, the probability of a
particular outcome is determined by an integral over the region of [0, 1]^n corresponding to that
outcome; any two copulas that integrate to the same values on all such regions produce the same
joint distribution.
We can derive a Maximum Likelihood (ML) estimation of the parameters θ by considering a generative model where uniform marginals are generated from the copula density, and in turn use these to
generate the discrete variables deterministically using the inverse (marginal) distribution functions,
as in Fig. 3. These marginals can be given by the empirical cumulative distribution of firing rates (as
in this paper) or by any parametrized family of univariate distributions (such as Poisson).
The ML equation then becomes

\arg\max_\theta \; p(y|\theta) = \arg\max_\theta \int p(y|u) \, p(u|\theta) \, du    (4)

= \arg\max_\theta \int_{F_1(y_1 - 1)}^{F_1(y_1)} \cdots \int_{F_n(y_n - 1)}^{F_n(y_n)} c_\theta(u_1, \ldots, u_n) \, du ,    (5)
[Figure 3, node equations:  p(u|\theta) = c_\theta(u_1, \ldots, u_n) ;   p(y_i | u, \lambda) = 1 if y_i = F_i^{-1}(u_i; \lambda_i), and 0 otherwise.]
Figure 3: Graphical representation of the copula model with discrete marginals. Uniform marginals u are
drawn from the copula density function c_\theta(u_1, \ldots, u_n), parametrized by \theta. The discrete marginals are then
generated deterministically using the inverse cdf of the marginals, which are parametrized by \lambda.
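As an illustration of the generative model in Fig. 3, the following hedged sketch samples correlated uniforms from a bivariate Gaussian copula and pushes them through inverse Poisson cdfs; the function and parameter names are ours, not from the paper:

```python
import numpy as np
from scipy.stats import norm, poisson

def sample_discrete_gaussian_copula(n, rho, lambdas, seed=0):
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, rho], [rho, 1.0]])
    z = rng.multivariate_normal(np.zeros(2), cov, size=n)  # correlated Gaussians
    u = norm.cdf(z)                                        # uniform marginals from the copula
    # y_i = F_i^{-1}(u_i; lambda_i), here with Poisson marginals
    y = np.column_stack([poisson.ppf(u[:, j], lambdas[j]) for j in range(2)])
    return y.astype(int)

y = sample_discrete_gaussian_copula(3500, rho=0.5, lambdas=(2.0, 3.0))
```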
Figure 4: Distribution of the maximum likelihood estimates of the parameters of four copula families (top to
bottom: Gaussian, Clayton, Gumbel, Frank), for various settings of their parameter (x-axis). On the y-axis,
estimates are centered such that 0 corresponds to an unbiased estimate. Error bars are one standard deviation of the estimate.
where F_i can depend on additional parameters \lambda_i. The last equation is the copula probability mass
inside the volume defined by the vertices F_i(y_i) and F_i(y_i - 1), and can be readily computed using
the copula distribution C_\theta(u_1, \ldots, u_n). For example, in the bivariate case one obtains
\arg\max_\theta \; p(y_1, y_2 | \theta) = \arg\max_\theta \big[ C_\theta(u_1, u_2) + C_\theta(u_1^-, u_2^-) - C_\theta(u_1^-, u_2) - C_\theta(u_1, u_2^-) \big] ,    (6)

where u_i = F_i(y_i) and u_i^- = F_i(y_i - 1).
ML optimization can be performed using standard methods, like gradient descent. In the bivariate
case, we find that optimization using the standard MATLAB optimization routines is relatively efficient. Given neural data in the form of firing rates y_1, y_2 from a pair of neurons, we collect the
empirical cumulative histogram of responses, F_i(k) = P(y_i \le k). The data is then transformed
through the cdfs u_i = F_i(y_i), and the copula model is fit according to Eq. 6. If a parametric distribution family is used for the marginals, the parameters of the copula \theta and those of the marginals \lambda can
be estimated simultaneously, or alternatively \theta can be fitted first, followed by \lambda. In our experience,
the second method is much faster and the quality of the fit is typically unchanged.
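A minimal sketch of this fitting procedure for the Frank family, under our own naming and with toy Poisson data in place of recorded spike counts (SciPy's bounded scalar optimizer stands in for the MATLAB routines mentioned above):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def frank_cdf(u1, u2, theta):
    # Frank copula cdf; theta != 0
    return -np.log1p(np.expm1(-theta * u1) * np.expm1(-theta * u2)
                     / np.expm1(-theta)) / theta

def empirical_cdf(y):
    ys = np.sort(y)
    F = np.searchsorted(ys, np.arange(y.max() + 1), side='right') / len(y)
    return lambda k: np.where(k < 0, 0.0, F[np.clip(k, 0, len(F) - 1)])

def neg_log_lik(theta, y1, y2, F1, F2):
    u1, u1m = F1(y1), F1(y1 - 1)          # u_i = F_i(y_i), u_i^- = F_i(y_i - 1)
    u2, u2m = F2(y2), F2(y2 - 1)
    p = (frank_cdf(u1, u2, theta) + frank_cdf(u1m, u2m, theta)   # Eq. (6)
         - frank_cdf(u1m, u2, theta) - frank_cdf(u1, u2m, theta))
    return -np.sum(np.log(np.maximum(p, 1e-12)))

rng = np.random.default_rng(0)            # toy data in place of spike counts
y1, y2 = rng.poisson(2.0, 4000), rng.poisson(3.0, 4000)
F1, F2 = empirical_cdf(y1), empirical_cdf(y2)
res = minimize_scalar(neg_log_lik, bounds=(0.01, 30.0), args=(y1, y2, F1, F2),
                      method='bounded')   # positive-theta branch only, for brevity
theta_hat = res.x
```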
We checked for biases in ML estimation due to a limited amount of data and low firing rates by
generating data from the discrete copula model (Fig. 3), for a number of copula families and Poisson
marginals with parameters \lambda_1 = 2, \lambda_2 = 3. The estimate is based on 3500 observations generated
from the models (1000 for the Gaussian copula). The estimation is repeated 200 times (100 for the
Gaussian copula) in order to compute the mean and standard deviation of the ML estimate. Figure 4
shows that the estimate is unbiased and accurate for a wide range of parameters. Inaccuracy in the
estimation becomes larger as the copulas approach functional dependency (i.e., u_2 = f(u_1) for a
deterministic function f), as is the case for the Gaussian copula when \rho tends to 1, and for the
Gumbel copula as \theta goes to infinity. This is an effect due to low firing rates. Given our formulation of
the estimation problem as a generative model, one could use more sophisticated Bayesian methods
in place of the ML estimation, in order to infer a whole distribution over parameters given the data.
This would allow one to put error bars on the estimated parameters, and would avoid overfitting at the
cost of computational time.
Figure 5: Empirical joint distribution and copula fit for two neuron pairs. The top row shows two neurons
that have dependencies mainly in the upper tails of their marginal distribution. The pair in the bottom row has
negative dependency. a,d) Histogram of the firing rate of the two neurons. Colors correspond to the logarithm
of the normalized frequency. b,e) Empirical copula. The color intensity has been cut off at 2.0 to improve
visibility. c,f) Density of the copula fit.
4 Results
To demonstrate the ability of copula models to fit joint firing rate distributions, we model neural data
recorded using a multi-electrode array implanted in the pre-motor cortex (PMd) area of a macaque
monkey [18, 19]. The array consisted of 10 × 10 electrodes separated by 400 µm. Firing times were
recorded while the monkey executed a center-out reaching task. See [19] for a description of the
task and general experimental setup. We fit the copula model using the marginal distribution of
neural activity over the entire recording session, including data recorded between trials (i.e., while
the monkey was freely behaving). Although one might also like to consider data collected during
a single task condition (i.e., the stimulus-conditional response distribution), the marginal response
distribution is an important statistical object in its own right, and has been the focus of much recent
literature [10, 11]. For example, the joint activity across neurons, averaged over stimuli, is the only
distribution the brain has access to, and must be sufficient for learning to construct representations
of the external world.
We collected spike responses in 100ms bins, and selected at random, without repetition, a training
set of 4000 bins and a test set of 2000 bins. Out of a total of 194 neurons, we selected a subset of 33
neurons that fired a minimum of 2500 spikes over the whole data set. For every pair of neurons in
this subset (528 pairs), we fit the parameters of several copula families to the joint firing rate.
Figure 5 shows two examples of the kind of the dependencies present in the data set and how they
are fit by different copula families. The neuron pair in the top row shows dependency in the upper
tails of their distribution, as can be seen in the histogram of joint firing rates (colors represent the
logarithm of the frequency): The two neurons have the tendency to fire strongly together, but are relatively independent at low firing rates. This is confirmed by the empirical copula, which shows the
probability mass in the regions defined by the cdfs of the marginal distribution. Since the marginal
cdfs are discrete, the data is projected on a discrete set of points on the unit cube; the colors in
the empirical copula plots represent the probability mass in the region where the marginal cdfs are
constant. The axis in the empirical copula should be interpreted as the quantiles of the marginal
distributions ? for example, 0.5 on the x-axis corresponds to the median of the distribution of y1 .
The higher probability mass in the upper right corner of the plot thus means that the two neurons
tend to be in the upper tails of the distributions simultaneously, and thus to have higher firing rates
together. On the right, one can see that this dependency structure is well captured by the Gumbel
copula fit. The second pair of neurons in the bottom row has negative dependency, in the sense that
when one of them has high firing rate the other tends to be silent. Although this is not readily visible
in the joint histogram, the dependency becomes clear in the empirical copula plot. This structure is
captured by the Frank copula fit.
[Figure 6 axes: left, Frank parameter vs. Gauss parameter; right, Frank gain vs. Gauss gain (bits/sec).]
Figure 6: For the pairs where the fit improves over the independence model, the parameters (left) and the scores
(right) of the Gaussian and Frank models are highly correlated.
The goodness-of-fit of the copula families is evaluated by cross-validation: We fit different models
on training data, and compute the log-likelihood of test data under the fitted model. The models are
scored according to the difference between the log-likelihood of a model that assumes independent
neurons and the log-likelihood of the copula model. This measure (appropriately renormalized) can
be interpreted as the number of bits per second that can be saved when coding the firing rate by taking
into account the dependencies encoded by the copula family. This is because this quantity can be
expressed as an estimation of the difference in the Kullback-Leibler divergence of the independent
(p_{indep}) and copula model (p_\theta) to the real distribution p^*:

\langle \log p_\theta(y) \rangle_{y \sim p^*} - \langle \log p_{indep}(y) \rangle_{y \sim p^*}    (7)

\approx \int p^*(y) \log p_\theta(y) \, dy - \int p^*(y) \log p_{indep}(y) \, dy    (8)

= KL(p^* \| p_{indep}) - KL(p^* \| p_\theta) .    (9)
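Concretely, the score can be computed from held-out per-bin log-likelihoods; the conversion below (natural log to bits, 100 ms bins to seconds) is our reading of the "appropriately renormalized" normalization:

```python
import numpy as np

def gain_bits_per_sec(loglik_copula, loglik_indep, bin_sec=0.1):
    # per-bin gain in nats, converted to bits and rescaled to 1 second of data
    nats = np.mean(loglik_copula) - np.mean(loglik_indep)
    return nats / np.log(2.0) / bin_sec
```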
We took particular care in selecting a small set of copula families that would be able to capture the
dependencies occurring in the data. Some of the families that we considered at first capture similar
kinds of dependencies, and their scores are highly correlated. For example, the Frank and Gaussian
copulas are able to represent both positive and negative dependencies in the data, and simultaneously
in lower and upper tails, although the dependencies in the tails are less strong for the Frank family
(compare the copula densities in Figs. 1 and 5f). Fig. 6 (left) shows that both the parameter fits and
their performance are highly correlated. An advantage of the Frank copula is that it is much more
efficient to fit, since the Gaussian copula requires multiple evaluations of the bivariate Gaussian cdf,
which requires expensive numerical calculations. In addition, the Gaussian copula was also found
to be more prone to overfitting on this data set (Fig. 6, right). For these reasons, we decided to use
the Frank family only for the rest of the analysis.
With similar procedures we shortlisted a total of 3 families that cover the vast majority of dependencies
in our data set: Frank, Clayton, and Gumbel copulas. Examples of the copula density of these
families can be found in Figs. 2 and 5. The Clayton and Gumbel copulas describe dependencies in
the lower and upper tails of the distributions, respectively. We didn't find any example of neuron
pairs where the dependency would be in the upper tail of the distribution for one and in the lower
tail for the other distribution, or more complicated dependencies.
Out of all 528 neuron pairs, 393 had a significant improvement (P < 0.05 on test data) over a model
with independent neurons¹, and for 102 pairs the improvement was larger than 1 bit/sec. Dependencies thus seem to be widespread in the data set, despite the fact that individual neurons are recorded
from electrodes that are up to 4.4 mm apart. Fig. 7 shows the histogram of improvement in bits/sec.
The most common dependency structures over all neuron pairs are given by the Gaussian-like dependencies of the Frank copula (54% of the pairs). Interestingly, a large proportion of the neurons
showed dependencies concentrated in the upper tails (Gumbel copula, 22%) or lower tails (Clayton
copula, 16%) of the distributions (Fig. 7).
¹ We computed the significance level by generating an artificial data set using independent neurons with the
same empirical pdf as the monkey data. We analyzed the generated data and computed the maximal improvement over an independent model (due to the limited number of samples) on artificial test data. The resulting
distribution is very narrowly distributed around zero. We took the 95th percentile of the distribution (0.02
bits/sec) as the threshold for significance.
[Figure 7, plot content: left panel, histogram of the gain over the independent model (x-axis: bits/sec, 0-15; y-axis: number of pairs, 0-100); right panel, pie chart: Frank 54%, Gumbel 23%, Clayton 15%, Independent 8%.]
Figure 7: For every pair of neurons, we select the copula family that shows the largest improvement over a
model with independent neurons, in bits/sec. Left: histogram of the gain in bits/sec over the independent model.
Right: Pie chart of the copula families that best fit the neuron pairs.
5 Discussion
The results presented here show that it is possible to represent neuronal spike responses using a
model that preserves discrete, non-negative marginals while incorporating various types of dependencies between neurons. Mathematically, it is straightforward to generalize these methods to the
n-variate case (i.e., distributions over the responses of n neurons). However, many copula families
have only one or two parameters, regardless of the copula dimensionality. If the dependency structure across a neural population is relatively homogeneous, then these copulas may be useful in that
they can be estimated using far less data than required, e.g., for a full covariance matrix (which has
O(n^2) parameters). On the other hand, if the dependencies within a population vary markedly for
different pairs of neurons (as in the data set examined here), such copulas will lack the flexibility
to capture the complicated dependencies within a full population. In such cases, we can still apply
the Gaussian copula (and other copulas derived from elliptically symmetric distributions), since it is
parametrized by the same covariance matrix as an n-dimensional Gaussian. However, the Gaussian
copula becomes prohibitively expensive to fit in high dimensions, since evaluating the likelihood
requires an exponential number of evaluations of the multivariate Gaussian cdf, which itself must be
computed numerically.
One challenge for future work will therefore be to design new parametric families of copulas whose
parameters grow with the number of neurons, but remain tractable enough for maximum-likelihood
estimation. Recently, Kirshner [20] proposed a copula-based representation for multivariate distributions using a model that averages over tree-structured copula distributions. The basic idea is that
pairwise copulas can be easily combined to produce a tree-structured representation of a multivariate distribution, and that averaging over such trees gives an even more flexible class of multivariate
distributions. We plan to examine this approach using neural population data in future work.
Another future challenge is to combine explicit models of the stimulus-dependence underlying neural responses with models capable of capturing their joint response dependencies. The data set
analyzed here concerned the distribution over spike responses during all stimulus conditions
(i.e., the marginal distribution over responses, as opposed to the conditional response distribution given a stimulus). Although this marginal response distribution is interesting in its own right,
for many applications one is interested in separating correlations that are induced by external stimuli
from internal correlations due to the network interactions. One obvious approach is to consider a
hybrid model with a Linear-Nonlinear-Poisson model [21] capturing stimulus-induced correlation,
adjoined to a copula distribution that models the residual dependencies between neurons (Fig. 8).
This is an important avenue for future exploration.
Acknowledgments
We'd like to thank Matthew Fellows for providing the data used in this study. This work was supported by the Gatsby Charitable Foundation.
Figure 8: Hybrid LNP-copula model. The LNP part of the model removes stimulus-induced correlations from
the neural data, so that the copula model can take into account residual network-related dependencies.
References
[1] R. Zemel, P. Dayan, and A. Pouget. Probabilistic interpretation of population codes. Neural Computation, 10:403–430, 1998.
[2] A. Pouget, K. Zhang, S. Deneve, and P. E. Latham. Statistically efficient estimation using population coding. Neural Computation, 10(2):373–401, 1998.
[3] L. Abbott and P. Dayan. The effect of correlated variability on the accuracy of a population code. Neural Computation, 11:91–101, 1999.
[4] E. Maynard, N. Hatsopoulos, C. Ojakangas, B. Acuna, J. Sanes, R. Normann, and J. Donoghue. Neuronal interactions improve cortical population coding of movement direction. Journal of Neuroscience, 19:8083–8093, 1999.
[5] E. Chornoboy, L. Schramm, and A. Karr. Maximum likelihood identification of neural point process systems. Biological Cybernetics, 59:265–275, 1988.
[6] W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. J. Neurophysiol, 93(2):1074–1089, 2004.
[7] M. Okatan, M. Wilson, and E. Brown. Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Computation, 17:1927–1961, 2005.
[8] S. Gerwinn, J. H. Macke, M. Seeger, and M. Bethge. Bayesian inference for spiking neuron models with a sparsity prior. Advances in Neural Information Processing Systems, 2008.
[9] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Spatiotemporal correlations and visual signaling in a complete neuronal population. Nature, 454(7206):995–999, 2008.
[10] E. Schneidman, M. Berry, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440:1007–1012, 2006.
[11] J. Shlens, G. Field, J. Gauthier, M. Grivich, D. Petrusca, A. Sher, A. M. Litke, and E. J. Chichilnisky. The structure of multi-neuron firing patterns in primate retina. J Neurosci, 26:8254–8266, 2006.
[12] J. H. Macke, P. Berens, A. S. Ecker, A. S. Tolias, and M. Bethge. Generating spike trains with specified correlation coefficients. Neural Computation, 21(2), 2009.
[13] H. Joe. Multivariate models and dependence concepts. Chapman & Hall, London, 1997.
[14] R. B. Nelsen. An introduction to copulas. Springer, New York, 2nd edition, 2006.
[15] A. Sklar. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris, 8:229–231, 1959.
[16] R. L. Jenison and R. A. Reale. The shape of neural dependence. Neural Computation, 16(4):665–672, 2004.
[17] C. Genest and J. Neslehova. A primer on copulas for count data. Astin Bulletin, 37(2):475–515, 2007.
[18] M. Serruya, N. Hatsopoulos, L. Paninski, M. Fellows, and J. Donoghue. Instant neural control of a movement signal. Nature, 416:141–142, 2002.
[19] S. Suner, M. R. Fellows, C. Vargas-Irwin, G. K. Nakata, and J. P. Donoghue. Reliability of signals from a chronically implanted, silicon-based electrode array in non-human primate primary motor cortex. Neural Systems and Rehabilitation Engineering, IEEE Transactions on, 13(4):524–541, 2005.
[20] S. Kirshner. Learning with tree-averaged densities and distributions. NIPS, 20, 2008.
[21] E. P. Simoncelli, L. Paninski, J. Pillow, and O. Schwartz. Characterization of neural responses with stochastic stimuli. In M. Gazzaniga, editor, The Cognitive Neurosciences, pages 327–338. MIT Press, 3rd edition, 2004.
2,861 | 3,594 | Support Vector Machines with a Reject Option
Yves Grandvalet 1,2, Alain Rakotomamonjy 3, Joseph Keshet 2 and Stéphane Canu 3
1 Heudiasyc, UMR CNRS 6599, Université de Technologie de Compiègne, BP 20529, 60205 Compiègne Cedex, France
2 Idiap Research Institute, Centre du Parc, CP 592, CH-1920 Martigny, Switzerland
3 LITIS, EA 4108, Université de Rouen & INSA de Rouen, 76801 Saint Etienne du Rouvray, France
Abstract
We consider the problem of binary classification where the classifier may abstain
instead of classifying each observation. The Bayes decision rule for this setup,
known as Chow's rule, is defined by two thresholds on posterior probabilities.
From simple desiderata, namely the consistency and the sparsity of the classifier,
we derive the double hinge loss function that focuses on estimating conditional
probabilities only in the vicinity of the threshold points of the optimal decision
rule. We show that, for suitable kernel machines, our approach is universally
consistent. We cast the problem of minimizing the double hinge loss as a quadratic
program akin to the standard SVM optimization problem and propose an active set
method to solve it efficiently. We finally provide preliminary experimental results
illustrating the interest of our constructive approach to devising loss functions.
1 Introduction
In decision problems where errors incur a severe loss, one may have to build classifiers that abstain
from classifying ambiguous examples. Rejecting these examples has been investigated since the
early days of pattern recognition. In particular, Chow (1970) analyses how the error rate may be
decreased thanks to the reject option.
There have been several attempts to integrate a reject option in Support Vector Machines (SVMs),
using strategies based on the thresholding of SVM scores (Kwok, 1999) or on a new training criterion (Fumera & Roli, 2002). These approaches, however, have critical drawbacks: the former is
not consistent and the latter leads to considerable computational overheads to the original SVM
algorithm and lacks some of its most appealing features like convexity and sparsity.
We introduce a piecewise linear and convex training criterion dedicated to the problem of classification with the reject option. Our proposal, inspired by the probabilistic interpretation of SVM
fitting (Grandvalet et al., 2006), is a double hinge loss, reflecting the two thresholds in Chow's rule.
Hence, we generalize the loss suggested by Bartlett and Wegkamp (2008) to arbitrary asymmetric
misclassification and rejection costs. For the symmetric case, our probabilistic viewpoint motivates
another decision rule. We then propose the first algorithm specifically dedicated to train SVMs with
a double hinge loss. Its implementation shows that our decision rule is at least at par with the one of
Bartlett and Wegkamp (2008).
The paper is organized as follows. Section 2 defines the problem and recalls Bayes rule for binary
classification with a reject option. The proposed double hinge loss is derived in Section 3, together
with the decision rule associated with SVM scores. Section 4 addresses implementation issues: it
formalizes the SVM training problem and details an active set algorithm specifically designed for
training with the double hinge loss. This implementation is tested empirically in Section 5. Finally,
Section 6 concludes the paper.
2 Problem Setting and the Bayes Classifier
Classification aims at predicting a class label y \in Y from an observed pattern x \in X. For this
purpose, we construct a decision rule d : X \to A, where A is a set of actions that typically consists
in assigning a label to x \in X. In binary problems, where the class is tagged either as +1 or -1, the
two types of errors are: (i) false positives, where an example labeled -1 is predicted as +1, incurring
a cost c_-; (ii) false negatives, where an example labeled +1 is predicted as -1, incurring a cost c_+.
In general, the goal of classification is to predict the true label for an observed pattern. However,
patterns close to the decision boundary are misclassified with high probability. This problem becomes especially prominent in cases where the costs, c_- or c_+, are high, such as in medical decision
making. In these processes, it might be better to alert the user and abstain from prediction. This
motivates the introduction of a reject option for classifiers that cannot predict a pattern with enough
confidence. This decision to abstain, which is denoted by 0, incurs a cost, r_- and r_+ for examples
labeled -1 and +1, respectively.
The costs pertaining to each possible decision are recapped in the table below. In what follows, we assume that all costs are strictly positive:

c_- > 0 , \; c_+ > 0 , \; r_- > 0 , \; r_+ > 0 .    (1)

                 y = +1    y = -1
    d(x) = +1       0        c_-
    d(x) =  0      r_+       r_-
    d(x) = -1      c_+        0

Furthermore, it should be possible to incur a lower expected loss by choosing the reject option instead of any prediction, that is

c_- r_+ + c_+ r_- < c_- c_+ .    (2)
Bayes' decision theory is the paramount framework in statistical decision theory, where decisions
are taken to minimize expected losses. For classification with a reject option, the overall risk is

R(d) = c_+ E_{XY}[Y = 1, d(X) = -1] + c_- E_{XY}[Y = -1, d(X) = 1] + r_+ E_{XY}[Y = 1, d(X) = 0] + r_- E_{XY}[Y = -1, d(X) = 0] ,    (3)

where X and Y denote the random variables describing patterns and labels.
The Bayes classifier d^* is defined as the minimizer of the risk R(d). Since the seminal paper of
Chow (1970), this rule is sometimes referred to as Chow's rule:

d^*(x) = \begin{cases} +1 & \text{if } P(Y = 1|X = x) > p_+ \\ -1 & \text{if } P(Y = 1|X = x) < p_- \\ 0 & \text{otherwise} , \end{cases}    (4)

where p_+ = \frac{c_- - r_-}{c_- - r_- + r_+} and p_- = \frac{r_-}{c_+ - r_+ + r_-} .

Note that, assuming that (1) and (2) hold, we have 0 < p_- < p_+ < 1.
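As a worked example of these thresholds (our own illustration): with symmetric costs c_+ = c_- = 1 and r_+ = r_- = r, the expressions reduce to p_- = r and p_+ = 1 - r:

```python
def chow_thresholds(c_pos, c_neg, r_pos, r_neg):
    # thresholds of Chow's rule (4)
    p_plus = (c_neg - r_neg) / (c_neg - r_neg + r_pos)
    p_minus = r_neg / (c_pos - r_pos + r_neg)
    return p_minus, p_plus

print(chow_thresholds(1.0, 1.0, 0.45, 0.45))  # -> (0.45, 0.55)
```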
One of the major inductive principles is empirical risk minimization, where one minimizes the
empirical counterpart of the risk (3). In classification, this principle usually leads to an NP-hard
problem, which can be circumvented by using a smooth proxy of the misclassification loss. For
example, Vapnik (1995) motivated the hinge loss as a "computationally simple" (i.e., convex) surrogate of classification error. The following section is dedicated to the construction of such a surrogate
for classification with a reject option.
3 Training Criterion
One method to get around the hardness of learning decision functions is to replace the conditional
probability P(Y = 1|X = x) with its estimation \hat{P}(Y = 1|X = x), and then plug this estimation
back in (4) to build a classification rule (Herbei & Wegkamp, 2006).
Figure 1: Double hinge loss function \ell_{p_-,p_+} for positive (left) and negative examples (right), with
p_- = 0.4 and p_+ = 0.8 (solid: double hinge, dashed: likelihood). Note that the decision thresholds
f_+ and f_- are not symmetric around zero.
One of the most widespread representatives of this line of attack is the logistic regression model, which estimates the conditional
probability using the maximum (penalized) likelihood framework.
As a starting point, we consider the generalized logistic regression model for binary classification,
where

\hat{P}(Y = y|X = x) = \frac{1}{1 + \exp(-y f(x))} ,    (5)
and the function f : X \to R is estimated by the minimization of a regularized empirical risk on the
training sample T = \{(x_i, y_i)\}_{i=1}^n

\sum_{i=1}^n \ell(y_i, f(x_i)) + \lambda \Omega(f) ,    (6)
where \ell is a loss function and \Omega(\cdot) is a regularization functional, such as the (squared) norm of f in
a suitable Hilbert space \Omega(f) = \|f\|_H^2, and \lambda is a regularization parameter. In the standard logistic
regression procedure, \ell is the negative log-likelihood loss
\ell(y, f(x)) = \log(1 + \exp(-y f(x))) .
This loss function is convex and decision-calibrated (Bartlett & Tewari, 2007), but it lacks an appealing feature of the hinge loss used in SVMs, that is, it does not lead to sparse solutions. This
drawback is the price to pay for the ability to estimate the posterior probability P(Y = 1|X = x)
on the whole range (0, 1) (Bartlett & Tewari, 2007).
However, the definition of the Bayes rule (4) clearly shows that the estimation of P(Y = 1|X = x)
does not have to be accurate everywhere, but only in the vicinity of p_+ and p_-. This motivates the
construction of a training criterion that focuses on this goal, without estimating P(Y = 1|X = x)
on the whole range as an intermediate step. Our purpose is to derive such a loss function, without
sacrificing sparsity to the consistency of the decision rule.
Though not a proper negative log-likelihood, the hinge loss can be interpreted in a maximum a
posteriori framework: the hinge loss can be derived as a relaxed minimization of the negative log-likelihood (Grandvalet et al., 2006). According to this viewpoint, minimizing the hinge loss aims
at deriving a loose approximation to the logistic regression model (5) that is accurate only at
f(x) = 0, thus allowing one to estimate whether P(Y = 1|X = x) > 1/2 or not. More generally,
one can show that, in order to have a precise estimate of P(Y = 1|X = x) = p, the surrogate loss
should be tangent to the neg-log-likelihood at f = \log(p/(1 - p)).
Following this simple constructive principle, we derive the double hinge loss, which aims at reliably
estimating P(Y = 1|X = x) at the threshold points p_+ and p_-. Furthermore, to encourage sparsity,
we set the loss to zero for all points classified with high confidence. This loss function is displayed in
Figure 1. Formally, for the positive examples, the double hinge loss satisfying the above conditions
can be expressed as

\ell_{p_-,p_+}(+1, f(x)) = \max\big( -(1 - p_-) f(x) + H(p_-), \; -(1 - p_+) f(x) + H(p_+), \; 0 \big) ,    (7)

and for the negative examples it can be expressed as

\ell_{p_-,p_+}(-1, f(x)) = \max\big( p_+ f(x) + H(p_+), \; p_- f(x) + H(p_-), \; 0 \big) ,    (8)

where H(p) = -p \log(p) - (1 - p) \log(1 - p). Note that, unless p_- = 1 - p_+, there is no simple
symmetry with respect to the labels.
After training, the decision rule is defined as the plug-in estimation of (4) using the logistic regression probability estimation. Let f_+ = \log(p_+/(1 - p_+)) and f_- = \log(p_-/(1 - p_-)); the decision
rule can be expressed in terms of the function f as follows

d_{p_-,p_+}(x; f) = \begin{cases} +1 & \text{if } f(x) > f_+ \\ -1 & \text{if } f(x) < f_- \\ 0 & \text{otherwise} . \end{cases}    (9)
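A direct transcription of (7)-(9) into code (a sketch for scalar inputs; vectorization is straightforward):

```python
import numpy as np

def H(p):
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

def double_hinge(y, fx, pm, pp):
    # Eq. (7) for y = +1, Eq. (8) for y = -1; pm = p_-, pp = p_+
    if y == 1:
        return max(-(1 - pm) * fx + H(pm), -(1 - pp) * fx + H(pp), 0.0)
    return max(pp * fx + H(pp), pm * fx + H(pm), 0.0)

def decide(fx, pm, pp):
    # Eq. (9): reject (return 0) inside [f_-, f_+]
    f_minus, f_plus = np.log(pm / (1 - pm)), np.log(pp / (1 - pp))
    return 1 if fx > f_plus else (-1 if fx < f_minus else 0)
```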
The following result shows that the rule d_{p_-,p_+}(\cdot; f) is universally consistent when f is learned by
minimizing empirical risk based on \ell_{p_-,p_+}. Hence, in the limit, learning with the double hinge loss
is optimal in the sense that the risk for the learned decision rule converges to the Bayes risk.
Theorem 1. Let H be a functional space that is dense in the set of continuous functions. Suppose
that we have a positive sequence \{\lambda_n\} with \lambda_n \to 0 and n \lambda_n^2 / \log n \to \infty. We define f_n^* as

\arg\min_{f \in H} \; \frac{1}{n} \sum_{i=1}^n \ell_{p_-,p_+}(y_i, f(x_i)) + \lambda_n \|f\|_H^2 .

Then, \lim_{n \to \infty} R(d_{p_-,p_+}(X; f_n^*)) = R(d^*(X)) holds almost surely, that is, the classifier
d_{p_-,p_+}(\cdot; f_n^*) is strongly universally consistent.
Proof. Our theorem follows directly from (Steinwart, 2005, Corollary 3.15), since \ell_{p_-,p_+} is regular
(Steinwart, 2005, Definition 3.9). Besides mild regularity conditions that hold for \ell_{p_-,p_+}, a loss
function is said regular if, for every \eta \in [0, 1] and every t^* such that

t^* = \arg\min_t \; \eta \, \ell_{p_-,p_+}(+1, t) + (1 - \eta) \, \ell_{p_-,p_+}(-1, t) ,

we have that d_{p_-,p_+}(t^*, x) agrees with d^*(x) almost everywhere.
Let f_1 = -H(p_-)/p_-, f_2 = -(H(p_+) - H(p_-))/(p_+ - p_-) and f_3 = H(p_+)/(1 - p_+) denote
the hinge locations in \ell_{p_-,p_+}(-1, f(x)). Note that we have f_1 < f_- < f_2 < f_+ < f_3, and that

t^* \in \begin{cases} (-\infty, f_1] & \text{if } 0 \le \eta < p_- \\ [f_1, f_2] & \text{if } \eta = p_- \\ \{f_2\} & \text{if } p_- < \eta < p_+ \\ [f_2, f_3] & \text{if } \eta = p_+ \\ [f_3, \infty) & \text{if } p_+ < \eta \le 1 \end{cases}
\;\Rightarrow\;
d_{p_-,p_+}(t^*, x) = \begin{cases} -1 & \text{if } P(Y = 1|x) < p_- \\ -1 \text{ or } 0 & \text{if } P(Y = 1|x) = p_- \\ 0 & \text{if } p_- < P(Y = 1|x) < p_+ \\ 0 \text{ or } +1 & \text{if } P(Y = 1|x) = p_+ \\ +1 & \text{if } P(Y = 1|x) > p_+ \end{cases}

which is the desired result.
Note also that the analysis of Bartlett and Tewari (2007) can be used to show that minimizing \ell_{p_-,p_+}
cannot provide consistent estimates of P(Y = 1|X = x) = p for p \notin \{p_-, p_+\}. This property is
desirable regarding sparsity, since sparseness does not occur when the conditional probabilities can
be unambiguously estimated.
Note on a Close Relative  A double hinge loss function has been proposed recently with a different perspective by Bartlett and Wegkamp (2008). Their formulation is restricted to symmetric
classification, where c_+ = c_- = 1 and r_+ = r_- = r. In this situation, rejection may occur
only if 0 \le r < 1/2, and the thresholds on the conditional probabilities in Bayes rule (4) are
p_- = 1 - p_+ = r.
For symmetric classification, the loss function of Bartlett and Wegkamp (2008) is a scaled version
of our proposal that leads to equivalent solutions for f, but our decision rule differs. While our
probabilistic derivation of the double hinge loss motivates the decision function (9), the decision rule
of Bartlett and Wegkamp (2008) has a free parameter (corresponding to the threshold f_+ = -f_-)
whose value is set by optimizing a generalization bound.
Our decision rule rejects more examples when the loss incurred by rejection is small and fewer
examples otherwise. The two rules are identical for r \approx 0.24. We will see in Section 5 that this
difference has noticeable outcomes.
4 SVMs with Double Hinge
In this section, we show how the standard SVM optimization problem is modified when the hinge
loss is replaced by the double hinge loss. The optimization problem is first written using a compact
notation, and the dual problem is then derived.
4.1 Optimization Problem
Minimizing the regularized empirical risk (6) with the double hinge loss (7-8) is an optimization
problem akin to the standard SVM problem. Let C be an arbitrary constant; we define D = C(p_+ - p_-), C_i = C(1 - p_+) for positive examples, and C_i = C p_- for negative examples. With the
introduction of slack variables \xi and \eta, the optimization problem can be stated as
\min_{f, b, \xi, \eta} \; \frac{1}{2} \|f\|_H^2 + \sum_{i=1}^n C_i \xi_i + \sum_{i=1}^n D \eta_i
\text{s.t.} \; y_i (f(x_i) + b) \ge t_i - \xi_i ,  \quad i = 1, \ldots, n
\qquad y_i (f(x_i) + b) \ge \tau_i - \eta_i ,  \quad i = 1, \ldots, n
\qquad \xi_i \ge 0 , \; \eta_i \ge 0 ,  \quad i = 1, \ldots, n ,    (10)

where, for positive examples, t_i = H(p_+)/(1 - p_+), \tau_i = -(H(p_-) - H(p_+))/(p_- - p_+), while,
for negative examples, t_i = H(p_-)/p_- and \tau_i = (H(p_-) - H(p_+))/(p_- - p_+).
For functions f belonging to a Hilbert space H endowed with a reproducing kernel k(\cdot, \cdot), efficient
optimization algorithms can be drawn from the dual formulation:

\min_{\alpha, \beta} \; \frac{1}{2} \alpha^T G \alpha - \alpha^T \tau - (t - \tau)^T \beta
\text{s.t.} \; y^T \alpha = 0
\qquad 0 \le \beta_i \le C_i ,  \quad i = 1, \ldots, n
\qquad 0 \le \alpha_i - \beta_i \le D ,  \quad i = 1, \ldots, n ,    (11)

where y = (y_1, \ldots, y_n)^T, t = (t_1, \ldots, t_n)^T and \tau = (\tau_1, \ldots, \tau_n)^T are vectors of R^n and G is the
n \times n Gram matrix with general entry G_{ij} = y_i y_j k(x_i, x_j). Note that (11) is a simple quadratic
problem under box constraints. Compared to the standard SVM dual problem, one has an additional
vector to optimize, but, with the active set method we developed, we only have to optimize a single vector of
R^n. The primal variables f and b are then derived from the Karush-Kuhn-Tucker (KKT) conditions.
For f, we have f(\cdot) = \sum_{i=1}^n \alpha_i y_i k(\cdot, x_i), and b is obtained in the optimization process described
below.
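For small problems, the dual (11) can be checked against a generic solver before implementing the active set method; the following sketch (our own, using SciPy's SLSQP, not the paper's code) stacks x = [alpha; beta] and encodes the coupled box constraints as inequalities:

```python
import numpy as np
from scipy.optimize import minimize

def solve_dual_qp(G, y, t, tau, Cs, D):
    n = len(y)
    def obj(x):
        a, b = x[:n], x[n:]
        return 0.5 * a @ G @ a - a @ tau - (t - tau) @ b
    cons = [
        {'type': 'eq',   'fun': lambda x: y @ x[:n]},            # y^T alpha = 0
        {'type': 'ineq', 'fun': lambda x: x[:n] - x[n:]},        # alpha - beta >= 0
        {'type': 'ineq', 'fun': lambda x: D - (x[:n] - x[n:])},  # alpha - beta <= D
    ]
    bounds = [(0.0, None)] * n + [(0.0, c) for c in Cs]          # alpha >= 0, 0 <= beta_i <= C_i
    res = minimize(obj, np.zeros(2 * n), method='SLSQP',
                   bounds=bounds, constraints=cons)
    return res.x[:n], res.x[n:]                                  # (alpha, beta)
```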
4.2 Solving the Problem
To solve (11), we use an active set algorithm, following a strategy that proved to be efficient in
SimpleSVM (Vishwanathan et al., 2003). This algorithm solves the SVM training problem by a
greedy approach, in which one solves a series of small problems. First, the repartition of training
examples in support and non-support vectors is assumed to be known, and the training criterion is
optimized considering this partition fixed. Then, this optimization results in an updated partition
of examples in support and non-support vectors. These two steps are iterated until some level of
accuracy is reached.
Partitioning the Training Set  The training set is partitioned into five subsets defined by the activity of the box constraints of Problem (11). The training examples indexed by:

I_0, defined by I_0 = \{i \,|\, \alpha_i = 0\}, are such that y_i(f(x_i) + b) > t_i ;
I_t, defined by I_t = \{i \,|\, 0 < \alpha_i < C_i\}, are such that y_i(f(x_i) + b) = t_i ;
I_C, defined by I_C = \{i \,|\, \alpha_i = C_i\}, are such that \tau_i < y_i(f(x_i) + b) \le t_i ;
I_\tau, defined by I_\tau = \{i \,|\, C_i < \alpha_i < C_i + D\}, are such that y_i(f(x_i) + b) = \tau_i ;
I_D, defined by I_D = \{i \,|\, \alpha_i = C_i + D\}, are such that y_i(f(x_i) + b) \le \tau_i .
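In code, the partition can be recovered from alpha up to a numerical tolerance, e.g. (a sketch with our own tolerance handling):

```python
import numpy as np

def partition(alpha, Cs, D, tol=1e-8):
    I0   = np.where(alpha <= tol)[0]
    It   = np.where((alpha > tol) & (alpha < Cs - tol))[0]
    IC   = np.where(np.abs(alpha - Cs) <= tol)[0]
    Itau = np.where((alpha > Cs + tol) & (alpha < Cs + D - tol))[0]
    ID   = np.where(np.abs(alpha - (Cs + D)) <= tol)[0]
    return I0, It, IC, Itau, ID
```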
When example i belongs to one of the subsets described above, the KKT conditions yield that \beta_i
is either equal to \alpha_i or constant. Hence, provided that the repartition of examples in the subsets I_0,
I_t, I_C, I_\tau and I_D is known, we only have to consider a problem in \alpha. Furthermore, \alpha_i has to be
computed only for i \in I_t \cup I_\tau.
Updating Dual Variables  Assuming a correct partition, Problem (11) reduces to the considerably
smaller problem of computing \alpha_i for i \in I_T = I_t \cup I_\tau:

\min_{\{\alpha_i | i \in I_T\}} \; \frac{1}{2} \sum_{i \in I_T, j \in I_T} \alpha_i \alpha_j G_{ij} - \sum_{i \in I_T} \alpha_i s_i
\text{s.t.} \; \sum_{i \in I_T} y_i \alpha_i + \sum_{i \in I_C} C_i y_i + \sum_{i \in I_D} (C_i + D) y_i = 0 ,    (12)

where s_i = t_i - \sum_{j \in I_C} C_j G_{ji} - \sum_{j \in I_D} (C_j + D) G_{ji} for i \in I_t and s_i = \tau_i - \sum_{j \in I_C} C_j G_{ji} - \sum_{j \in I_D} (C_j + D) G_{ji} for i \in I_\tau. Note that the box constraints of Problem (11) do not appear here,
because we assumed the partition to be correct.
The solution of Problem (12) is simply obtained by solving the following linear system resulting
from the first-order optimality conditions:

\sum_{j \in I_T} G_{ij} \alpha_j + y_i \mu = s_i   \quad \text{for } i \in I_T
\sum_{i \in I_T} y_i \alpha_i = - \sum_{i \in I_C} C_i y_i - \sum_{i \in I_D} (C_i + D) y_i ,    (13)

where \mu, which is the (unknown) Lagrange parameter associated to the equality constraint in (12),
is computed along with \alpha. Note that the |I_T| equations of the linear system given on the first line
of (13) express that, for i \in I_t, y_i(f(x_i) + \mu) = t_i and, for i \in I_\tau, y_i(f(x_i) + \mu) = \tau_i. Hence, the
primal variable b is equal to \mu.
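The system (13) is a standard saddle-point system; a dense-algebra sketch (our own variable names; s is computed as defined after (12)):

```python
import numpy as np

def solve_kkt(G, y, s, IT, IC, ID, Cs, D):
    # right-hand side of the second line of (13)
    rhs = -(Cs[IC] @ y[IC] + (Cs[ID] + D) @ y[ID])
    A = np.block([[G[np.ix_(IT, IT)], y[IT][:, None]],
                  [y[IT][None, :],    np.zeros((1, 1))]])
    b = np.concatenate([s, [rhs]])
    sol = np.linalg.solve(A, b)
    return sol[:-1], sol[-1]   # alpha on I_T, and mu (which equals the bias b)
```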
Algorithm  The algorithm, described in Algorithm 1, simply alternates updates of the partition of
examples in \{I_0, I_t, I_C, I_\tau, I_D\} and updates of the coefficients \alpha_i for the current active set I_T. As for
standard SVMs, the initialization step consists in either using the solution obtained for a different
hyper-parameter, such as a higher value of C, or in picking one or several examples of each class to
arbitrarily initialize I_t to a non-empty set, and putting all the other ones in I_0 = \{1, \ldots, n\} \setminus I_t.
Algorithm 1 SVM Training with a Reject Option
input: \{x_i, y_i\}_{1 \le i \le n} and hyper-parameters C, p_+, p_-
initialize: \alpha^{old}, I_T = I_t \cup I_\tau, \bar{I}_T = I_0 \cup I_C \cup I_D
repeat
  solve linear system (13) \to (\alpha_i)_{i \in I_T}, b = \mu
  if any (\alpha_i)_{i \in I_T} violates the box constraints (11) then
    compute the largest \rho such that, for all i \in I_T, \alpha_i^{new} = \alpha_i^{old} + \rho(\alpha_i - \alpha_i^{old}) obeys the box constraints
    let j denote the index of (\alpha_i^{new})_{i \in I_T} at bound
    I_T = I_T \setminus \{j\}, \bar{I}_T = \bar{I}_T \cup \{j\}
    \alpha_j^{old} = \alpha_j^{new}
  else
    for all i \in I_T do \alpha_i^{new} = \alpha_i
    if any (y_i(f(x_i) + b))_{i \in \bar{I}_T} violates the primal constraints (10) then
      select i with a violated constraint
      \bar{I}_T = \bar{I}_T \setminus \{i\}, I_T = I_T \cup \{i\}
    else
      exact convergence
    end if
    for all i \in I_T do \alpha_i^{old} = \alpha_i^{new}
  end if
until convergence
output: f, b
The exact convergence is obtained when all constraints are fulfilled, that is, when all examples belong to the same subset at the beginning and the end of the main loop. However, it is possible to relax
the convergence criteria while keeping good control on the precision of the solution by monitoring the duality gap, that is, the difference between the primal and the dual objectives, respectively
provided in the definitions of Problems (10) and (11).
Table 1: Performances in terms of average test loss, rejection rate and misclassification rate (rejection is not an error) with r_+ = r_- = 0.45, for the three rejection methods over four different
datasets.

            Method    Average Test Loss   Rejection rate (%)   Error rate (%)
  Wdbc      Naive        2.9 ± 1.6              0.7                 2.6
            B&W's        3.5 ± 1.8              3.9                 1.8
            Our's        2.9 ± 1.7              1.2                 2.4
  Liver     Naive       28.9 ± 5.4              3.3                27.4
            B&W's       30.9 ± 4.0             34.5                15.4
            Our's       28.8 ± 5.1              7.9                25.2
  Thyroid   Naive        4.1 ± 2.9              0.9                 3.7
            B&W's        4.4 ± 2.7              6.1                 1.6
            Our's        3.7 ± 2.7              2.1                 2.8
  Pima      Naive       23.7 ± 1.9              7.5                20.3
            B&W's       24.7 ± 2.1             24.3                13.8
            Our's       23.1 ± 1.3              6.9                20.0
Theorem 2. Algorithm 1 converges in a finite number of steps to the exact solution of (11).
Proof. The proof follows the ones used to prove the convergence of active set methods in general,
and of SimpleSVM in particular; see Proposition 1 in (Vishwanathan et al., 2003).
5 Experiments
We compare the performances of three different rejection schemes based on SVMs. For this purpose,
we selected the datasets from the UCI repository related to medical problems, as medical decision
making is an application domain for which rejection is of primary importance. Since these datasets
are small, we repeated 10 trials for each problem. Each trial consists in splitting randomly the
examples into a training set with 80 % of examples and an independent test set. Note that the
training examples were normalized to zero-mean and unit variance before cross-validation (test sets
were of course rescaled accordingly).
In a first series of experiments, to compare our decision rule with the one proposed by Bartlett and
Wegkamp (2008) (B&W's), we used symmetric costs: c_+ = c_- = 1 and r_+ = r_- = r. We
also chose r = 0.45, which corresponds to rather low rejection rates, in order to favour different
behaviours between these two decision rules (recall that they are identical for r \approx 0.24). Besides
the double hinge loss, we also implemented a "naive" method that consists in running the standard
SVM algorithm (using the hinge loss) and selecting a symmetric rejection region around zero by
cross-validation.
For all methods, we used Gaussian kernels. Model selection is performed by cross-validation. This
includes the selection of the kernel widths, the regularization parameter C for all methods, and
additionally the rejection thresholds for the naive method. Note that B&W's and our decision
rules are based on learning with the double hinge loss. Hence, the results displayed in Table 1 only
differ due to the size of the rejection region, and to the disparities in the choice of
hyper-parameters that may arise in the cross-validation process (since the decision rules differ, the
cross-validation scores differ also).
Table 1 summarizes the averaged performances over the 10 trials. Overall, all methods lead to
equivalent average test losses, with an insignificant but consistent advantage for our decision rule.
We also see that the naive method tends to reject fewer test examples than the consistent methods.
This means that, for comparable average losses, the decision rules based on the scores learned by
minimizing the double hinge loss tend to classify more accurately the examples that are not rejected,
as seen in the last column of the table.
For noisy problems such as Liver and Pima, we observed that reducing rejection costs considerably
decreases the error rate on classified examples (not shown in the table). The performances of the
two learning methods based on the double hinge loss get closer, and there is still no significant gain
compared to the naive approach. Note however that the symmetric setting is favourable to the naive
approach, since we only have to estimate a single decision threshold. We are experimenting to see
whether the double hinge loss shows more substantial improvements for asymmetric losses and for
larger training sets.
6 Conclusion
In this paper we proposed a new solution to the general problem of classification with a reject
option. The double hinge loss was derived from the simple desiderata to obtain accurate estimates
of posterior probabilities only in the vicinity of the decision boundaries. Our formulation handles
asymmetric misclassification and rejection costs and compares favorably to the one of Bartlett and
Wegkamp (2008).
We showed that for suitable kernels, including usual ones such as the Gaussian kernel, training a
kernel machine with the double hinge loss provides a universally consistent classifier with reject
option. Furthermore, the loss provides sparse solutions, with a limited number of support vectors,
similarly to the standard L1-SVM classifier.
We presented what we believe to be the first principled and efficient implementation of SVMs for
classification with a reject option. Our optimization scheme is based on an active set method, whose
complexity compares to standard SVMs. The dimension of our quadratic program is bounded by
the number of examples, and is effectively limited to the number of support vectors. The only
computational overhead is brought by monitoring five categories of examples, instead of the three
ones considered in standard SVMs (support vector, support at bound, inactive example).
Our approach for deriving the double hinge loss can be used for other decision problems relying
on conditional probabilities at specific values or in a limited range of values. As a first example,
one may target the estimation of discretized confidence ratings, such as the ones reported in weather
forecasts. Multi-category classification also belongs to this class of problems, since there, decisions
rely on having precise conditional probabilities within a predefined interval.
Acknowledgements
This work was supported in part by the French national research agency (ANR) through project
GD2GS, and by the IST Programme of the European Community through project DIRAC.
References
Bartlett, P. L., & Tewari, A. (2007). Sparseness vs estimating conditional probabilities: Some asymptotic results. Journal of Machine Learning Research, 8, 775–790.
Bartlett, P. L., & Wegkamp, M. H. (2008). Classification with a reject option using a hinge loss. Journal of Machine Learning Research, 9, 1823–1840.
Chow, C. K. (1970). On optimum recognition error and reject tradeoff. IEEE Trans. on Info. Theory, 16, 41–46.
Fumera, G., & Roli, F. (2002). Support vector machines with embedded reject option. Pattern Recognition with Support Vector Machines: First International Workshop (pp. 68–82). Springer.
Grandvalet, Y., Mariéthoz, J., & Bengio, S. (2006). A probabilistic interpretation of SVMs with an application to unbalanced classification. NIPS 18 (pp. 467–474). MIT Press.
Herbei, R., & Wegkamp, M. H. (2006). Classification with reject option. The Canadian Journal of Statistics, 34, 709–721.
Kwok, J. T. (1999). Moderating the outputs of support vector machine classifiers. IEEE Trans. on Neural Networks, 10, 1018–1031.
Steinwart, I. (2005). Consistency of support vector machine and other regularized kernel classifiers. IEEE Trans. on Info. Theory, 51, 128–142.
Vapnik, V. N. (1995). The nature of statistical learning theory. Springer Series in Statistics. Springer.
Vishwanathan, S. V. N., Smola, A., & Murty, N. (2003). SimpleSVM. Proceedings of the Twentieth International Conference on Machine Learning (pp. 68–82). AAAI.
2,862 | 3,595 | Localized Sliced Inverse Regression
Qiang Wu, Sayan Mukherjee
Department of Statistical Science
Institute for Genome Sciences & Policy
Department of Computer Science
Duke University, Durham
NC 27708-0251, U.S.A
{qiang, sayan}@stat.duke.edu
Feng Liang
Department of Statistics
University of Illinois at Urbana-Champaign
IL 61820, U.S.A.
[email protected]
Abstract
We developed localized sliced inverse regression for supervised dimension reduction. It has the advantages of preventing degeneracy, increasing estimation
accuracy, and automatic subclass discovery in classification problems. A semisupervised version is proposed for the use of unlabeled data. The utility is illustrated on simulated as well as real data sets.
1 Introduction
Dimension reduction for predictive modeling and visualization has long played a central role in statistical graphics and computation. In the modern context of high-dimensional data analysis, this perspective posits that the functional dependence between a response variable $y$ and a large set of explanatory variables $x \in \mathbb{R}^p$ is driven by a low-dimensional subspace of the $p$ variables.
Characterizing this predictive subspace, supervised dimension reduction, requires both the response
and explanatory variables. This problem in the context of linear subspaces or Euclidean geometry
has been explored by a variety of statistical models such as sliced inverse regression (SIR, [10]),
sliced average variance estimation (SAVE, [3]), principal Hessian directions (pHd, [11]), (conditional) minimum average variance estimation (MAVE, [18]), and extensions to these approaches. To
extract nonlinear subspaces, one can apply the aforementioned linear algorithms to the data mapped
into a feature space induced by a kernel function [13, 6, 17].
In the machine learning community, research on nonlinear dimension reduction in the spirit of [19] has been developed of late. This has led to a variety of manifold learning algorithms such as isometric
mapping (ISOMAP, [16]), local linear embedding (LLE, [14]), Hessian Eigenmaps [5], and Laplacian Eigenmaps [1]. Two key differences exist between the paradigm explored in this approach
and that of supervised dimension reduction. The first difference is that the above methods are unsupervised in that the algorithms take into account only the explanatory variables. This issue can
be addressed by extending the unsupervised algorithms to use the label or response data [7]. The
bigger problem is that these manifold learning algorithms do not operate on the space of the explanatory variables and hence do not provide a predictive submanifold onto which the data should
be projected. These methods are based on embedding the observed data onto a graph and then using spectral properties of the embedded graph for dimension reduction. The key observation in all
of these manifold algorithms is that metrics must be local and properties that hold in an ambient
Euclidean space are true locally on smooth manifolds.
This suggests that the use of local information in supervised dimension reduction methods may be
of use to extend methods for dimension reduction to the setting of nonlinear subspaces and submanifolds of the ambient space. In the context of mixture modeling for classification two approaches
have been developed [9, 15].
In this paper we extend SIR by taking into account the local structure of the explanatory variables.
This localized variant of SIR, LSIR, can be used for classification as well as regression applications.
Though the predictive directions obtained by LSIR are linear ones, they encode nonlinear information. Another advantage of our approach is that ancillary unlabeled data can be easily added to the dimension reduction analysis, i.e., semi-supervised learning.
The paper is arranged as follows. LSIR is introduced in Section 2 for continuous and categorical
response variables. Extensions are discussed in Section 3. The utility with respect to predictive accuracy as well as exploratory data analysis via visualization is demonstrated on a variety of simulated
and real data in Sections 4 and 5. We close with discussions in Section 6.
2 Localized SIR
We start with a brief review of the SIR method and remark that the failure of SIR in some situations is caused by ignoring local structures. Then we propose a generalization of SIR, called localized SIR, by incorporating a localization idea from manifold learning. Connections to some existing work are addressed at the end.
2.1 Sliced inverse regression
Assume the functional dependence between a response variable $Y$ and an explanatory variable $X \in \mathbb{R}^p$ is given by
$$Y = f(\beta_1^T X, \ldots, \beta_L^T X, \epsilon), \qquad (1)$$
where the $\beta_l$'s are unknown orthogonal vectors in $\mathbb{R}^p$ and $\epsilon$ is noise independent of $X$. Let $B$ denote the $L$-dimensional subspace spanned by the $\beta_l$'s. Then $P_B X$, where $P_B$ denotes the projection operator onto the space $B$, provides a sufficient summary of the information in $X$ relevant to $Y$. Estimating $B$, or the $\beta_l$'s, becomes the central problem in supervised dimension reduction. Though we define $B$ here via a heuristic model assumption (1), a general definition based on conditional independence between $Y$ and $X$ given $P_B X$ can be found in [4]. Following [4], we refer to $B$ as the dimension reduction (d.r.) subspace and the $\beta_l$'s the d.r. directions.
The sliced inverse regression (SIR) model was introduced in [10] to estimate the d.r. directions. The idea underlying this approach is that, if $X$ has an identity covariance matrix, the centered inverse regression curve $E(X|Y) - EX$ is contained in the d.r. space $B$ under some design conditions; see [10] for details. According to this result, the d.r. directions $\beta_l$ are given by the top eigenvectors of the covariance matrix $\Gamma = \mathrm{Cov}(E(X|Y))$. In general, when the covariance matrix of $X$ is $\Sigma$, the $\beta_l$'s can be obtained by solving a generalized eigen decomposition problem
$$\Gamma\beta = \lambda\Sigma\beta.$$
A simple SIR algorithm operates as follows on a set of samples $\{(x_i, y_i)\}_{i=1}^n$:

1. Compute an empirical estimate of $\Sigma$,
$$\hat{\Sigma} = \frac{1}{n}\sum_{i=1}^n (x_i - m)(x_i - m)^T,$$
where $m = \frac{1}{n}\sum_{i=1}^n x_i$ is the sample mean.

2. Divide the samples into $H$ groups (or slices) $G_1, \ldots, G_H$ according to the value of $y$. Compute an empirical estimate of $\Gamma$,
$$\hat{\Gamma} = \frac{1}{n}\sum_{h=1}^H n_h (m_h - m)(m_h - m)^T,$$
where $m_h = \frac{1}{n_h}\sum_{j \in G_h} x_j$ is the sample mean for group $G_h$, with $n_h$ being the group size.

3. Estimate the d.r. directions $\beta$ by solving a generalized eigen problem
$$\hat{\Gamma}\beta = \lambda\hat{\Sigma}\beta. \qquad (2)$$
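To make the three steps concrete, the following is a minimal NumPy/SciPy sketch of the procedure (our own illustration; the equal-frequency slicing rule is an assumption, since the text does not fix one):

```python
import numpy as np
from scipy.linalg import eigh

def sir_directions(X, y, n_slices=10, n_dirs=2):
    """Minimal SIR sketch: estimate Sigma, slice on y, average within
    slices to estimate Gamma, then solve Gamma b = lambda Sigma b."""
    n, p = X.shape
    m = X.mean(axis=0)
    Sigma = (X - m).T @ (X - m) / n                    # empirical Cov(X)
    slices = np.array_split(np.argsort(y), n_slices)   # equal-frequency slices
    Gamma = np.zeros((p, p))
    for idx in slices:
        mh = X[idx].mean(axis=0) - m                   # centered slice mean m_h - m
        Gamma += len(idx) * np.outer(mh, mh) / n
    vals, vecs = eigh(Gamma, Sigma)                    # generalized eigenproblem (2)
    return vecs[:, np.argsort(vals)[::-1][:n_dirs]]    # top n_dirs directions
```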
When Y takes categorical values as in classification problems, it is natural to divide the data into different groups by their group labels. Then SIR is equivalent to Fisher discriminant analysis (FDA).1
Though SIR has been widely used for dimension reduction and has yielded many useful results in practice, it has some known problems. For example, it is easy to construct a function $f$ such that $E(X|Y = y) = 0$, in which case SIR fails to retrieve any useful directions [3]. The degeneracy of SIR has also restricted its use in binary classification problems, where only one direction can be obtained. The failure of SIR in these scenarios is partly because the algorithm uses just the mean, $E(X|Y = y)$, as
a summary of the information in each slice, which apparently is not enough. Generalizations of SIR
include SAVE [3], SIR-II [12] and covariance inverse regression estimation (CIRE, [2]) that exploit
the information from the second moment of the conditional distribution of X|Y . However in some
scenario the information in each slice can not be well described by a global statistics. For example,
similar to the multimodal situation considered by [15], the data in a slice may form two clusters,
then a good description of the data would not be a single number such as any moments, but the two
cluster centers. Next we will propose a new algorithm that is a generalization of SIR based on local
structures of X in each slice.
2.2 Localization
A key principle in manifold learning is that the Euclidean representation of a data point in $\mathbb{R}^p$ is only meaningful locally. Under this principle, it is dangerous to calculate the slice average $m_h$ whenever the slice contains data points that are far apart. Instead some kind of local average should be considered. Motivated by this idea we introduce a localized SIR (LSIR) method for dimension reduction.
Here is the intuition for LSIR. Let us start with the transformed data set where the empirical covariance is the identity, for example, the data set after PCA. In the original SIR method, we shift every data point $x_i$ to the corresponding group average, then apply PCA on the new data set to identify the SIR directions. The underlying rationale for this approach is that if a direction does not differentiate the groups well, the group means projected onto that direction will be very close, and therefore the variance of the new data set in that direction will be small. A natural way to incorporate the localization idea into this approach is to shift each data point $x_i$ to the average of a local neighborhood instead of the average of its global neighborhood (i.e., the whole group). In manifold learning, a local neighborhood is often chosen by k nearest neighbors (k-NN). Different from manifold learning, which is designed for unsupervised learning, the neighborhood selection for LSIR, which is designed for supervised learning, will also incorporate information from the response variable $y$.
Here is the mathematical description of LSIR. Recall that the group average $m_h$ is used in estimating $\Gamma = \mathrm{Cov}(E(X|Y))$. The estimate $\hat{\Gamma}$ is equivalent to the sample covariance of a data set $\{m_i\}_{i=1}^n$ where $m_i = m_h$, the average of the group $G_h$ to which $x_i$ belongs. In our LSIR algorithm, we set $m_i$ equal to some local average, and then use the corresponding sample covariance matrix to replace $\hat{\Gamma}$ in equation (2). Below we give the details of our LSIR algorithm:
1. Compute $\hat{\Sigma}$ as in SIR.

2. Divide the samples into $H$ groups as in SIR. For each sample $(x_i, y_i)$ we compute
$$m_{i,loc} = \frac{1}{k}\sum_{j \in s_i} x_j,$$
where, with $h$ being the group such that $i \in G_h$,
$$s_i = \{j : x_j \text{ belongs to the } k \text{ nearest neighbors of } x_i \text{ in } G_h\}.$$
Then we compute a localized version of $\Gamma$ by
$$\hat{\Gamma}_{loc} = \frac{1}{n}\sum_{i=1}^n (m_{i,loc} - m)(m_{i,loc} - m)^T.$$

3. Solve the generalized eigen decomposition problem
$$\hat{\Gamma}_{loc}\,\beta = \lambda\,\hat{\Sigma}\,\beta. \qquad (3)$$

¹FDA is referred to as linear discriminant analysis (LDA) in some of the literature.
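A corresponding sketch of steps 2–3 (ours, for illustration): the slice labels `groups` would come from the class labels in classification, or from slicing $y$ in regression; the brute-force neighbor search and the optional ridge term `reg` (anticipating the regularization discussed in Section 3) are implementation choices, not part of the paper's specification:

```python
import numpy as np
from scipy.linalg import eigh

def lsir_directions(X, groups, k=10, n_dirs=2, reg=0.0):
    """LSIR sketch: replace each point by the mean of its k nearest
    neighbors within its own slice (m_{i,loc}), then solve eqn (3)."""
    n, p = X.shape
    m = X.mean(axis=0)
    Sigma = (X - m).T @ (X - m) / n
    M = np.empty_like(X, dtype=float)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        # brute-force pairwise distances within the slice
        D = np.linalg.norm(X[idx, None, :] - X[None, idx, :], axis=2)
        for row, i in enumerate(idx):
            nn = idx[np.argsort(D[row])[:k]]        # k-NN of x_i in its slice
            M[i] = X[nn].mean(axis=0)               # local mean m_{i,loc}
    Gamma_loc = (M - m).T @ (M - m) / n
    # reg > 0 gives the regularized variant of Section 3, eqn (4)
    vals, vecs = eigh(Gamma_loc, Sigma + reg * np.eye(p))
    return vecs[:, np.argsort(vals)[::-1][:n_dirs]]
```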
The neighborhood size $k$ in LSIR is a tuning parameter specified by the user. When $k$ is large enough, say, larger than the size of any group, then $\hat{\Gamma}_{loc}$ is the same as $\hat{\Gamma}$ and LSIR recovers all SIR directions. With a moderate choice of $k$, LSIR uses the local information within each slice and is expected to retrieve directions lost by SIR in cases where SIR fails due to degeneracy.
For classification problems LSIR becomes a localized version of FDA. Suppose the number of classes is $C$; then the estimate $\hat{\Gamma}$ from the original FDA has rank at most $C - 1$, which means FDA can only estimate at most $C - 1$ directions. This is why FDA is seldom used for binary classification problems, where $C = 2$. In LSIR we use more points to describe the data in each class. Mathematically this is reflected by the increased rank of $\hat{\Gamma}_{loc}$, which is no longer bounded by $C$ and hence produces more directions. Moreover, if for some classes the data is composed of several sub-clusters, LSIR can automatically identify these sub-cluster structures. As shown in one of our examples, this property of LSIR is very useful in data analysis tasks such as cancer subtype discovery using genomic data.
2.3 Connection to Existing Work
The idea of localization has been introduced to dimension reduction for classification problems
before. For example, the local discriminant information (LDI) introduced by [9] is one of the early works in this area. In LDI, the local information is used to compute a between-group covariance matrix $\Sigma_i$ over a nearest neighborhood at every data point $x_i$, and the d.r. directions are then estimated by the top eigenvectors of the averaged between-group matrix $\frac{1}{n}\sum_{i=1}^n \Sigma_i$. The local Fisher discriminant analysis (LFDA) introduced by [15] can be regarded as an improvement of LDI with the within-class covariance matrix also being localized.
Compared to these two approaches, LSIR utilizes the local information directly at the point level.
One advantage of this simple localization is computation. For example, for a problem of C classes,
LDI needs to compute nC local mean points and n between-group covariance matrices, while LSIR
computes only n local mean points and one covariance matrix. Another advantage is LSIR can
be easily extended to handle unlabeled data in semi-supervised learning as explained in the next
section. Such an extension is less straightforward for the other two approaches that operate on the
covariance matrices instead of data points.
3 Extensions
Regularization. When the matrix $\hat{\Sigma}$ is singular or has a very large condition number, which is common in high-dimensional problems, the generalized eigen-decomposition problem (3) is unstable. Regularization techniques are often introduced to address this issue [20]. For LSIR we adopt the following regularization:
$$\hat{\Gamma}_{loc}\,\beta = \lambda\,(\hat{\Sigma} + sI)\,\beta, \qquad (4)$$
where the regularization parameter $s$ can be chosen by cross validation or other criteria (e.g., [20]).
Semi-supervised learning. In semi-supervised learning some data have $y$'s (labeled data) and some do not (unlabeled data). How to incorporate the information from unlabeled data has been the main focus of research in semi-supervised learning. Our LSIR algorithm can be easily modified to take unlabeled data into consideration. Since the $y$ of an unlabeled sample can take any possible value, we put the unlabeled data into every slice. So the neighborhood $s_i$ is defined as follows: for any point in the $k$-NN of $x_i$, it belongs to $s_i$ if it is unlabeled, or if it is labeled and belongs to the same slice as $x_i$.
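A small sketch of this modified neighborhood rule; encoding unlabeled points with the label -1 is our own convention, not the paper's:

```python
import numpy as np

def semi_neighborhood(X, labels, i, k):
    """s_i for semi-supervised LSIR: among the k nearest neighbors of
    x_i, keep a point if it is unlabeled (encoded -1) or if it carries
    the same slice label as x_i."""
    d = np.linalg.norm(X - X[i], axis=1)
    nn = np.argsort(d)[1:k + 1]          # k-NN, excluding x_i itself
    keep = (labels[nn] == -1) | (labels[nn] == labels[i])
    return nn[keep]
```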
4 Simulations
In this section we apply LSIR to several synthetic data sets to illustrate the power of LSIR. The
performance of LSIR is compared with other dimension reduction methods including SIR, SAVE,
pHd, and LFDA.
Method          Accuracy
SAVE            0.3451 (±0.1970)
pHd             0.3454 (±0.1970)
LSIR (k = 20)   0.9534 (±0.0004)
LSIR (k = 40)   0.9011 (±0.0008)

Table 1: Estimation accuracy (and standard deviation) of various dimension reduction methods for semisupervised learning in Example 1.
[Figure 1 appears here: four scatter-plot panels (a)-(d), plotted over the first two data dimensions, the first two principal components, the first two LSIR directions, and the first two semi-supervised LSIR directions, respectively.]
Figure 1: Result for Example 1. (a) Plot of the data in the first two dimensions, where '+' corresponds to $y = 1$ and 'o' corresponds to $y = -1$. The data points in red and blue are labeled and the ones in green are unlabeled when the semisupervised setting is considered. (b) Projection of the data onto the first two PCA directions. (c) Projection of the data onto the first two LSIR directions when all $n = 400$ data points are labeled. (d) Projection of the data onto the first two LSIR directions when only the 20 points indicated in (a) are labeled.
Let $\hat{B} = (\hat{\beta}_1, \cdots, \hat{\beta}_L)$ denote an estimate of the d.r. subspace $B$, where its columns $\hat{\beta}_l$ are the estimated d.r. directions. We introduce the following metric to measure the accuracy:
$$\mathrm{Accuracy}(\hat{B}, B) = \frac{1}{L}\sum_{i=1}^L \|P_B \hat{\beta}_i\|^2 = \frac{1}{L}\sum_{i=1}^L \|(BB^T)\hat{\beta}_i\|^2.$$
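In code, assuming both $B$ and $\hat{B}$ have orthonormal columns (so that $P_B = BB^T$ is the projection), the metric is a one-liner:

```python
import numpy as np

def subspace_accuracy(B_hat, B):
    """Accuracy(B_hat, B): average squared norm of each estimated
    direction projected onto span(B); both inputs are p x L arrays
    with orthonormal columns."""
    P = B @ B.T                                   # projection P_B = B B^T
    return np.mean(np.sum((P @ B_hat) ** 2, axis=0))
```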
In LSIR the influence of the parameter $k$, the size of the local neighborhoods, is subtle. In our simulation study, we found it usually good enough to choose $k$ between 10 and 20, except for the semi-supervised setting (e.g., Example 1 below). But further study and a theoretical justification are necessary.
Example 1. Consider a binary classification problem on $\mathbb{R}^{10}$ where the d.r. directions are the first
two dimensions and the remaining eight dimensions are Gaussian noise. The data in the first two
relevant dimensions are plotted in Figure 1(a) with sample size n = 400. For this example SIR
cannot identify the two d.r. directions because the group averages of the two groups are roughly the
same for the first two dimensions, due to the symmetry in the data. Using local average instead of
group average, LSIR can find both directions, see Figure 1(c). But so do SAVE and pHd since the
high-order moments also behave differently in the two groups.
Next we create a data set for semi-supervised learning by randomly selecting 20 samples, 10 from
each group, to be labeled and setting others to be unlabeled. The directions from PCA where one
ignores the labels do not agree with the discriminant directions as shown in Figure 1(b). So to
retrieve the relevant directions, the information from the labeled points has to be taken into consideration.
We evaluate the accuracy of LSIR (the semi-supervised version), SAVE and pHd where the latter
two are operated on just the labeled set. We repeat this experiment 20 times and each time select a
different random set to be labeled. The averaged accuracy is reported in Table 1. The result for one
iteration is displayed in Figure 1 where the labeled points are indicated in (a) and the projection to
the top two directions from LSIR (with k = 40) is in (d). All the results clearly indicate that LSIR
out-performs the other two supervised dimension reduction methods.
Example 2. We first generate a 10-dimensional data set where the first three dimensions are the Swiss roll data [14]:
$$X_1 = t\cos t, \quad X_2 = 21h, \quad X_3 = t\sin t,$$
where $t = \frac{3\pi}{2}(1 + 2\theta)$, $\theta \sim \mathrm{Uniform}(0,1)$ and $h \sim \mathrm{Uniform}([0,1])$. The remaining 7 dimensions are independent Gaussian noise. Then all dimensions are normalized to have unit variance. Consider the following function:
$$Y = \sin(5\pi\theta) + h^2 + \epsilon, \qquad \epsilon \sim N(0, 0.1^2). \qquad (5)$$
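A sketch of this generator (our own); we assume the normalization is an empirical per-dimension standardization, which the text does not spell out:

```python
import numpy as np

def swiss_roll_regression(n, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 1.0, n)
    h = rng.uniform(0.0, 1.0, n)
    t = 1.5 * np.pi * (1.0 + 2.0 * theta)
    X = np.column_stack([t * np.cos(t), 21.0 * h, t * np.sin(t),
                         rng.standard_normal((n, 7))])   # 7 noise dims
    X /= X.std(axis=0)                   # unit variance per dimension
    y = np.sin(5 * np.pi * theta) + h ** 2 + 0.1 * rng.standard_normal(n)
    return X, y
```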
[Figure 2 appears here: estimation accuracy (0.4-1.0) versus sample size (200-1000) for SIR, LSIR, SAVE, and pHd.]
Figure 2: Estimation accuracy of various dimension reduction methods for Example 2.
We randomly choose $n$ samples as a training set, let $n$ range from 200 to 1000, and compare the estimation accuracy of LSIR with SIR, SAVE and pHd. The result is shown in Figure 2. SAVE and pHd outperform SIR, but are still much worse compared to LSIR.
Note that the Swiss roll (the first three dimensions) is a benchmark data set in manifold learning, where the goal is to "unroll" the data into the intrinsic two-dimensional space. Since LSIR is a linear dimension reduction method we do not expect LSIR to unroll the data, but we do expect it to retrieve the dimensions relevant to the prediction of $Y$. Meanwhile, with the noise, manifold learning algorithms will not unroll the data either, since the dominant directions are now the noise dimensions.
Example 3. (Tai Chi) The Tai Chi figure is well known in Asian culture where the concepts of
Yin-Yang provide the intellectual framework for much of ancient Chinese scientific development.
A 6-dimensional data set for this example is generated as follows: $X_1$ and $X_2$ are from the Tai Chi structure as shown in Figure 3(a), where the Yin and Yang regions are assigned class labels $Y = -1$ and $Y = 1$ respectively. $X_3, \ldots, X_6$ are independent random noise generated by $N(0, 1)$.
The Tai Chi data set was first used as a dimension reduction example in [12, Chapter 14]. The
correct d.r. subspace B is span(e1 , e2 ). SIR, SAVE and pHd are all known to fail for this example.
By taking the local structure into account, LSIR can easily retrieve the relevant directions. Following
[12], we generate n = 1000 samples as the training data, then run LSIR with k = 10 and repeat
100 times. The average accuracy is 98.6% and the result from one run is shown in Figure 3. For
comparison we also applied LFDA for this example. The average accuracy is 82% which is much
better than SIR, SAVE and pHd but worse than LSIR.
Figure 3: Result for Tai Chi example. (a) The training data in first two dimensions; (b) The training
data projected onto the first two LSIR directions; (c) An independent test data projected onto the
first two LSIR directions.
5 Applications
In this section we apply our LSIR methods to two real data sets.
5.1 Digits recognition
The MNIST data set (Y. LeCun, http://yann.lecun.com/exdb/mnist/) is a well known benchmark data
set for classification learning. It contains 60, 000 images of handwritten digits as training data and
10, 000 images as test data. This data set is commonly believed to have strong nonlinear structures.
Figure 4: Result for the leukemia data by LSIR. Red points are ALL and blue ones are AML.
In our simulations, we randomly sampled 1000 images (100 samples for each digit) as the training set. We applied LSIR and computed $d = 20$ e.d.r. directions. Then we projected the training data and the 10,000 test data onto these directions. Using a $k$-nearest-neighbor classifier with $k = 5$ to classify the test data, we report the classification error over 100 iterations in Table 2. Compared with the SIR method, the classification accuracy is increased for almost all digits. The improvement for digits 2, 3 and 5 is most significant.
digits    0       1       2       3       4       5       6       7       8       9       average
LSIR      0.0350  0.0098  0.1363  0.1055  0.1309  0.1175  0.0445  0.1106  0.1417  0.1061  0.0927
SIR       0.0487  0.0292  0.1921  0.1723  0.1327  0.2146  0.0816  0.1354  0.1981  0.1533  0.1358

Table 2: Classification error rate for digit classification by SIR and LSIR.
5.2 Gene expression data
Cancer classification and discovery using gene expression data has become an important technique in modern biology and medical science. In gene expression data the number of genes is huge (often in the thousands) while the number of samples is quite limited. As a typical large-$p$-small-$n$ problem, dimension reduction plays an essential role in understanding the data structure and making inference.
Leukemia classification. We consider the leukemia classification problem of [8]. This data set has 38 training samples and 34 test samples. The training set has two classes, AML and ALL, and the class ALL has two subtypes. We apply SIR and LSIR to this data. The classification accuracy is similar, with 0 or 1 errors on the test data. An interesting point is that LSIR automatically achieves subtype discovery while SIR cannot. By projecting the training data onto the first two directions (Figure 4), we immediately notice that ALL has two subtypes. It turns out that the 6-sample cluster consists of T-cell ALL and the 19-sample cluster of B-cell ALL samples. Note that two samples (which are T-cell ALL) cannot be assigned to a subtype by visualization alone. This means LSIR provides useful subclass knowledge for future research, but is itself not a perfect clustering method.
6 Discussion
We developed the LSIR method for dimension reduction by incorporating local information into the original SIR. It can prevent degeneracy, increase estimation accuracy, and automatically identify subcluster structures. A regularization technique is introduced for computational stability. A semi-supervised version is developed for the use of unlabeled data. The utility is illustrated on synthetic as well as real data sets.
Since LSIR involves only linear operations on the data points, it is straightforward to extend it
to kernel models [17] via the so-called kernel trick. An extension of LSIR along this direction
can be helpful to realize nonlinear dimension reduction directions and to reduce the computational
complexity in case of p ? n.
Further research on LSIR and its kernelized version includes their asymptotic properties such as
consistency and statistically more rigorous approaches for the choice of k, the size of local neighborhoods, and L, the dimensionality of the reduced space.
References
[1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[2] R. Cook and L. Ni. Using intra-slice covariances for improved estimation of the central subspace in regression. Biometrika, 93(1):65–74, 2006.
[3] R. Cook and S. Weisberg. Discussion of Li (1991). J. Amer. Statist. Assoc., 86:328–332, 1991.
[4] R. Cook and X. Yin. Dimension reduction and visualization in discriminant analysis (with discussion). Aust. N. Z. J. Stat., 43(2):147–199, 2001.
[5] D. Donoho and C. Grimes. Hessian eigenmaps: new locally linear embedding techniques for high-dimensional data. PNAS, 100:5591–5596, 2003.
[6] K. Fukumizu, F. R. Bach, and M. I. Jordan. Kernel dimension reduction in regression. Annals of Statistics, to appear, 2008.
[7] A. Globerson and S. Roweis. Metric learning by collapsing classes. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 451–458. MIT Press, Cambridge, MA, 2006.
[8] T. Golub, D. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. Mesirov, H. Coller, M. Loh, J. Downing, M. Caligiuri, C. Bloomfield, and E. Lander. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science, 286:531–537, 1999.
[9] T. Hastie and R. Tibshirani. Discriminant adaptive nearest neighbor classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(6):607–616, 1996.
[10] K. Li. Sliced inverse regression for dimension reduction (with discussion). J. Amer. Statist. Assoc., 86:316–342, 1991.
[11] K. C. Li. On principal Hessian directions for data visualization and dimension reduction: another application of Stein's lemma. J. Amer. Statist. Assoc., 87:1025–1039, 1992.
[12] K. C. Li. High dimensional data analysis via the SIR/PHD approach, 2000.
[13] J. Nilsson, F. Sha, and M. I. Jordan. Regression on manifolds using kernel dimension reduction. In Proc. of ICML 2007, 2007.
[14] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.
[15] M. Sugiyama. Dimension reduction of multimodal labeled data by local Fisher discriminant analysis. Journal of Machine Learning Research, 8:1027–1061, 2007.
[16] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.
[17] Q. Wu, F. Liang, and S. Mukherjee. Regularized sliced inverse regression for kernel models. Technical report, ISDS Discussion Paper, Duke University, 2007.
[18] Y. Xia, H. Tong, W. Li, and L.-X. Zhu. An adaptive estimation of dimension reduction space. J. R. Statist. Soc. B, 64(3):363–410, 2002.
[19] G. Young. Maximum likelihood estimation and factor analysis. Psychometrika, 6:49–53, 1941.
[20] W. Zhong, P. Zeng, P. Ma, J. S. Liu, and Y. Zhu. RSIR: regularized sliced inverse regression for motif discovery. Bioinformatics, 21(22):4169–4175, 2005.
2,863 | 3,596 | Robust Regression and Lasso
Huan Xu
Department of Electrical and Computer Engineering
McGill University
Montreal, QC Canada
[email protected]
Constantine Caramanis
Department of Electrical and Computer Engineering
The University of Texas at Austin
Austin, Texas
[email protected]
Shie Mannor
Department of Electrical and Computer Engineering
McGill University
Montreal, QC Canada
[email protected]
Abstract
We consider robust least-squares regression with feature-wise disturbance. We
show that this formulation leads to tractable convex optimization problems, and
we exhibit a particular uncertainty set for which the robust problem is equivalent
to `1 regularized regression (Lasso). This provides an interpretation of Lasso from
a robust optimization perspective. We generalize this robust formulation to consider more general uncertainty sets, which all lead to tractable convex optimization
problems. Therefore, we provide a new methodology for designing regression algorithms, which generalize known formulations. The advantage is that robustness
to disturbance is a physical property that can be exploited: in addition to obtaining
new formulations, we use it directly to show sparsity properties of Lasso, as well
as to prove a general consistency result for robust regression problems, including
Lasso, from a unified robustness perspective.
1
Introduction
In this paper we consider linear regression problems with least-squares error. The problem is to find a vector $x$ so that the $\ell_2$ norm of the residual $b - Ax$ is minimized, for a given matrix $A \in \mathbb{R}^{n \times m}$ and vector $b \in \mathbb{R}^n$. From a learning/regression perspective, each row of $A$ can be regarded as a training sample, and the corresponding element of $b$ as the target value of this observed sample. Each column of $A$ corresponds to a feature, and the objective is to find a set of weights so that the weighted sum of the feature values approximates the target value.
It is well known that minimizing the least-squares error can lead to sensitive solutions [1, 2]. Many regularization methods have been proposed to decrease this sensitivity. Among them, Tikhonov regularization [3] and Lasso [4, 5] are two widely known and cited algorithms. These methods minimize a weighted sum of the residual norm and a certain regularization term, $\|x\|_2$ for Tikhonov regularization and $\|x\|_1$ for Lasso. In addition to providing regularity, Lasso is also known for
the tendency to select sparse solutions. Recently this has attracted much attention for its ability
to reconstruct sparse solutions when sampling occurs far below the Nyquist rate, and also for its
ability to recover the sparsity pattern exactly with probability one, asymptotically as the number of
observations increases (there is an extensive literature on this subject, and we refer the reader to
[6, 7, 8, 9, 10] and references therein). In many of these approaches, the choice of regularization
parameters often has no fundamental connection to an underlying noise model [2].
In [11], the authors propose an alternative approach to reducing sensitivity of linear regression, by
considering a robust version of the regression problem: they minimize the worst-case residual for
the observations under some unknown but bounded disturbances. They show that their robust least
squares formulation is equivalent to `2 -regularized least squares, and they explore computational
aspects of the problem. In that paper, and in most of the subsequent research in this area and the
more general area of Robust Optimization (see [12, 13] and references therein) the disturbance is
taken to be either row-wise and uncorrelated [14], or given by bounding the Frobenius norm of the
disturbance matrix [11].
In this paper we investigate the robust regression problem under more general uncertainty sets,
focusing in particular on the case where the uncertainty set is defined by feature-wise constraints,
and also the case where features are meaningfully correlated. This is of interest when values of
features are obtained with some noisy pre-processing steps, and the magnitudes of such noises are
known or bounded. We prove that all our formulations are computationally tractable. Unlike much
of the previous literature, we provide a focus on structural properties of the robust solution. In
addition to giving new formulations, and new properties of the solutions to these robust problems,
we focus on the inherent importance of robustness, and its ability to prove from scratch important
properties such as sparseness, and asymptotic consistency of Lasso in the statistical learning context.
In particular, our main contributions in this paper are as follows.
• We formulate the robust regression problem with feature-wise independent disturbances, and show that this formulation is equivalent to a least-squares problem with a weighted $\ell_1$ norm regularization term. Hence, we provide an interpretation for Lasso from a robustness perspective. This can be helpful in choosing the regularization parameter. We generalize the robust regression formulation to loss functions given by an arbitrary norm, and uncertainty sets that allow correlation between disturbances of different features.
• We investigate the sparsity properties of the robust regression problem with feature-wise independent disturbances, showing that such formulations encourage sparsity. We thus easily recover standard sparsity results for Lasso using a robustness argument. This also implies a fundamental connection between the feature-wise independence of the disturbance and the sparsity.
• Next, we relate Lasso to kernel density estimation. This allows us to re-prove consistency in a statistical learning setup, using the new robustness tools and formulation we introduce.
Notation. We use capital letters to represent matrices, and boldface letters to represent column vectors. For a vector $z$, we let $z_i$ denote its $i$th element. Throughout the paper, $a_i$ and $r_j^{\top}$ denote the $i$th column and the $j$th row of the observation matrix $A$, respectively; $a_{ij}$ is the $ij$th element of $A$, hence it is the $j$th element of $r_i$, and the $i$th element of $a_j$. For a convex function $f(\cdot)$, $\partial f(z)$ represents any of its sub-gradients evaluated at $z$.
any of its sub-gradients evaluated at z.
2
Robust Regression with Feature-wise Disturbance
We show that our robust regression formulation recovers Lasso as a special case. The regression
formulation we consider differs from the standard Lasso formulation, as we minimize the norm of
the error, rather than the squared norm. It is known that these two coincide up to a change of the regularization coefficient. Yet our results amount to more than a representation or equivalence theorem.
In addition to more flexible and potentially powerful robust formulations, we prove new results, and
give new insight into known results. In Section 3, we show the robust formulation gives rise to new
sparsity results. Some of our results there (e.g. Theorem 4) fundamentally depend on (and follow
from) the robustness argument, which is not found elsewhere in the literature. Then in Section 4,
we establish consistency of Lasso directly from the robustness properties of our formulation, thus
explaining consistency from a more physically motivated and perhaps more general perspective.
2.1
Formulation
Robust linear regression considers the case where the observed matrix $A$ is corrupted by some disturbance. We seek the optimal weight vector for the uncorrupted (yet unknown) sample matrix. We consider the following min-max formulation:
$$\text{Robust Linear Regression:} \quad \min_{x \in \mathbb{R}^m} \max_{\Delta A \in \mathcal{U}} \|b - (A + \Delta A)x\|_2. \qquad (1)$$
Here, $\mathcal{U}$ is the set of admissible disturbances of the matrix $A$. In this section, we consider the specific setup where the disturbance is feature-wise uncorrelated and norm-bounded for each feature:
$$\mathcal{U} \triangleq \left\{(\delta_1, \cdots, \delta_m) \,\middle|\, \|\delta_i\|_2 \le c_i,\; i = 1, \cdots, m\right\}, \qquad (2)$$
for given $c_i \ge 0$. This formulation recovers the well-known Lasso:
Theorem 1. The robust regression problem (1) with the uncertainty set (2) is equivalent to the following $\ell_1$-regularized regression problem:
$$\min_{x \in \mathbb{R}^m} \left\{\|b - Ax\|_2 + \sum_{i=1}^m c_i |x_i|\right\}. \qquad (3)$$
Proof. We defer the full details to [15], and give only an outline of the proof here. Showing that the robust regression objective is a lower bound for the regularized one follows from the standard triangle inequality. Conversely, one can take the worst-case noise to be $\delta_i^* \triangleq -c_i\,\mathrm{sgn}(x_i^*)\,u$, where $u$ is given by
$$u \triangleq \begin{cases} \dfrac{b - Ax^*}{\|b - Ax^*\|_2} & \text{if } Ax^* \ne b, \\ \text{any vector with unit } \ell_2 \text{ norm} & \text{otherwise;} \end{cases}$$
from which the result follows after some algebra.
If we take $c_i = c$ and normalize the $a_i$ for all $i$, Problem (3) is the well-known Lasso [4, 5].
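As a quick numerical sanity check of Theorem 1 (our own illustration, not from the paper), one can solve (3) with cvxpy and verify that plugging the worst-case disturbance from the proof back into the residual reproduces the regularized objective:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, c = 20, 5, 0.3
A, b = rng.standard_normal((n, m)), rng.standard_normal(n)

x = cp.Variable(m)
obj = cp.norm(b - A @ x, 2) + c * cp.norm(x, 1)      # problem (3)
cp.Problem(cp.Minimize(obj)).solve()
xs = x.value

u = (b - A @ xs) / np.linalg.norm(b - A @ xs)        # worst-case direction
dA = np.column_stack([-c * np.sign(xs[i]) * u for i in range(m)])
worst = np.linalg.norm(b - (A + dA) @ xs)            # worst-case residual
print(np.isclose(worst, obj.value, atol=1e-4))       # True up to solver tolerance
```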
2.2
Arbitrary norm and correlated disturbance
It is possible to generalize this result to the case where the $\ell_2$-norm is replaced by an arbitrary norm, and where the uncertainty is correlated from feature to feature. For space considerations, we refer to the full version [15], and simply state the main results here.
Theorem 2. Let $\|\cdot\|_a$ denote an arbitrary norm. Then the robust regression problem
$$\min_{x \in \mathbb{R}^m} \max_{\Delta A \in \mathcal{U}_a} \|b - (A + \Delta A)x\|_a; \qquad \mathcal{U}_a \triangleq \left\{(\delta_1, \cdots, \delta_m) \,\middle|\, \|\delta_i\|_a \le c_i,\; i = 1, \cdots, m\right\};$$
is equivalent to the regularized regression problem $\min_{x \in \mathbb{R}^m} \left\{\|b - Ax\|_a + \sum_{i=1}^m c_i |x_i|\right\}$.
Using a feature-wise uncorrelated disturbance may lead to overly conservative results. We relax this, allowing the disturbances of different features to be correlated. Consider the following uncertainty set:
$$\mathcal{U}' \triangleq \left\{(\delta_1, \cdots, \delta_m) \,\middle|\, f_j(\|\delta_1\|_a, \cdots, \|\delta_m\|_a) \le 0;\; j = 1, \cdots, k\right\},$$
where the $f_j(\cdot)$ are convex functions. Notice that both $k$ and the $f_j$ can be arbitrary, hence this is a very general formulation and provides us with significant flexibility in designing uncertainty sets and, equivalently, new regression algorithms. The following theorem converts this formulation to a convex and tractable optimization problem.
Theorem 3. Assume that the set $Z \triangleq \{z \in \mathbb{R}^m \mid f_j(z) \le 0,\; j = 1, \cdots, k;\; z \ge 0\}$ has non-empty relative interior. The robust regression problem
$$\min_{x \in \mathbb{R}^m} \max_{\Delta A \in \mathcal{U}'} \|b - (A + \Delta A)x\|_a$$
is equivalent to the following regularized regression problem
$$\min_{\lambda \in \mathbb{R}^k_+,\, \kappa \in \mathbb{R}^m_+,\, x \in \mathbb{R}^m} \Big\{\|b - Ax\|_a + v(\lambda, \kappa, x)\Big\}; \qquad (4)$$
$$\text{where: } v(\lambda, \kappa, x) \triangleq \max_{c \in \mathbb{R}^m} \Big[(\kappa + |x|)^{\top} c - \sum_{j=1}^k \lambda_j f_j(c)\Big].$$
n
o
Example 1. Suppose U 0 = (? 1 , ? ? ? , ? m )
k? 1 ka , ? ? ? , k? m ka
s ? l; for a symmetric norm
k ? ks , then the resulting regularized regression problem is
n
o
minm kb ? Axka + lkxk?s ; where k ? k?s is the dual norm of k ? ks .
x?R
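For instance, taking $\|\cdot\|_a = \ell_2$ and $\|\cdot\|_s = \ell_2$ in this example yields an $\ell_2$-regularized problem, since the $\ell_2$ norm is self-dual; a small cvxpy sketch (the norm choices here are our own illustration):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
A, b, l = rng.standard_normal((30, 8)), rng.standard_normal(30), 0.5

x = cp.Variable(8)
# ||.||_s = l2 on the feature-wise disturbance norms; l2 is self-dual,
# so the regularizer is l * ||x||_2 (the l2-regularized form of [11]).
cp.Problem(cp.Minimize(cp.norm(b - A @ x, 2) + l * cp.norm(x, 2))).solve()
```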
The robust regression formulation (1) considers disturbances that are bounded in a set, while in
practice, often the disturbance is a random variable with unbounded support. In such cases, it is not
possible to simply use an uncertainty set that includes all admissible disturbances, and we need to
construct a meaningful U based on probabilistic information. In the full version [15] we consider
computationally efficient ways to use chance constraints to construct uncertainty sets.
3
Sparsity
In this section, we investigate the sparsity properties of robust regression (1), and equivalently Lasso.
Lasso's ability to recover sparse solutions has been extensively discussed (cf. [6, 7, 8, 9]), and takes
one of two approaches. The first approach investigates the problem from a statistical perspective.
That is, it assumes that the observations are generated by a (sparse) linear combination of the features, and investigates the asymptotic or probabilistic conditions required for Lasso to correctly
recover the generative model. The second approach treats the problem from an optimization perspective, and studies under what conditions a pair (A, b) defines a problem with sparse solutions
(e.g., [16]).
We follow the second approach and do not assume a generative model. Instead, we consider the
conditions that lead to a feature receiving zero weight. In particular, we show that (i) as a direct
result of feature-wise independence of the uncertainty set, a slight change of a feature that was
originally assigned zero weight still gets zero weight (Theorem 4); (ii) using Theorem 4, we show
that "nearly" orthogonal features get zero weight (Corollary 1); and (iii) "nearly" linearly dependent
features get zero weight (Theorem 5). Substantial research regarding sparsity properties of Lasso
can be found in the literature (cf [6, 7, 8, 9, 17, 18, 19, 20] and many others). In particular, similar
results as in point (ii), that rely on an incoherence property, have been established in, e.g., [16], and
are used as standard tools in investigating sparsity of Lasso from a statistical perspective. However,
a proof exploiting robustness and properties of the uncertainty is novel. Indeed, such a proof shows
a fundamental connection between robustness and sparsity, and implies that robustifying w.r.t. a
feature-wise independent uncertainty set might be a plausible way to achieve sparsity for other
problems.
Theorem 4. Given $(\bar{A}, b)$, let $x^*$ be an optimal solution of the robust regression problem:
$$\min_{x \in \mathbb{R}^m} \max_{\Delta A \in \mathcal{U}} \|b - (\bar{A} + \Delta A)x\|_2.$$
Let $I \subseteq \{1, \cdots, m\}$ be such that for all $i \in I$, $x^*_i = 0$. Now let
$$\tilde{\mathcal{U}} \triangleq \left\{(\delta_1, \cdots, \delta_m) \,\middle|\, \|\delta_j\|_2 \le c_j,\; j \notin I;\; \|\delta_i\|_2 \le c_i + \ell_i,\; i \in I\right\}.$$
Then, $x^*$ is an optimal solution of
$$\min_{x \in \mathbb{R}^m} \max_{\Delta A \in \tilde{\mathcal{U}}} \|b - (A + \Delta A)x\|_2,$$
for any $A$ that satisfies $\|a_i - \bar{a}_i\| \le \ell_i$ for $i \in I$, and $a_j = \bar{a}_j$ for $j \notin I$.
Proof. Notice that for $i \in I$, $x^*_i = 0$, hence the $i$th column of both $A$ and $\Delta A$ has no effect on the residual. We have
$$\max_{\Delta A \in \tilde{\mathcal{U}}} \big\|b - (A + \Delta A)x^*\big\|_2 = \max_{\Delta A \in \mathcal{U}} \big\|b - (A + \Delta A)x^*\big\|_2 = \max_{\Delta A \in \mathcal{U}} \big\|b - (\bar{A} + \Delta A)x^*\big\|_2.$$
For $i \in I$, $\|a_i - \bar{a}_i\| \le \ell_i$, and $a_j = \bar{a}_j$ for $j \notin I$. Thus $\Delta A \in \mathcal{U} \Rightarrow A + \Delta A - \bar{A} \in \tilde{\mathcal{U}}$. Therefore, for any fixed $x'$, the following holds:
$$\max_{\Delta A \in \mathcal{U}} \big\|b - (\bar{A} + \Delta A)x'\big\|_2 \le \max_{\Delta A \in \tilde{\mathcal{U}}} \big\|b - (A + \Delta A)x'\big\|_2.$$
By definition of $x^*$,
$$\max_{\Delta A \in \mathcal{U}} \big\|b - (\bar{A} + \Delta A)x^*\big\|_2 \le \max_{\Delta A \in \mathcal{U}} \big\|b - (\bar{A} + \Delta A)x'\big\|_2.$$
Therefore we have
$$\max_{\Delta A \in \tilde{\mathcal{U}}} \big\|b - (A + \Delta A)x^*\big\|_2 \le \max_{\Delta A \in \tilde{\mathcal{U}}} \big\|b - (A + \Delta A)x'\big\|_2.$$
Since this holds for arbitrary $x'$, we establish the theorem.
Theorem 4 is established using the robustness argument, and is a direct result of the feature-wise independence of the uncertainty set. It explains why Lasso tends to assign zero weight to non-relevant features. Consider a generative model¹ $b = \sum_{i \in I} w_i a_i + \xi$, where $I \subseteq \{1, \cdots, m\}$ and $\xi$ is a random variable, i.e., $b$ is generated by the features belonging to $I$. In this case, for a feature $i_0 \notin I$, Lasso would assign zero weight as long as there exists a perturbed value of this feature such that the optimal regression assigns it zero weight. This is also shown in the next corollary, in which we apply Theorem 4 to show that the problem has a sparse solution as long as an incoherence-type property is satisfied (this result is more in line with the traditional sparsity results).
Corollary 1. Suppose that for all $i$, $c_i = c$. If there exists $I \subseteq \{1, \cdots, m\}$ such that for all $v \in \mathrm{span}\big(\{a_i, i \in I\} \cup \{b\}\big)$ with $\|v\| = 1$, we have $v^{\top} a_j \le c$, $\forall j \notin I$, then any optimal solution $x^*$ satisfies $x^*_j = 0$, $\forall j \notin I$.
Proof. For $j \notin I$, let $a^{=}_j$ denote the projection of $a_j$ onto the span of $\{a_i, i \in I\} \cup \{b\}$, and let $a^{+}_j \triangleq a_j - a^{=}_j$. Thus, we have $\|a^{=}_j\| \le c$. Let $\bar{A}$ be such that
$$\bar{a}_i = \begin{cases} a_i & i \in I; \\ a^{+}_i & i \notin I. \end{cases}$$
Now let
$$\tilde{\mathcal{U}} \triangleq \{(\delta_1, \cdots, \delta_m) \mid \|\delta_i\|_2 \le c,\; i \in I;\; \|\delta_j\|_2 = 0,\; j \notin I\}.$$
Consider the robust regression problem $\min_{\tilde{x}} \max_{\Delta A \in \tilde{\mathcal{U}}} \big\{\|b - (\bar{A} + \Delta A)\tilde{x}\|_2\big\}$, which is equivalent to $\min_{\tilde{x}} \big\{\|b - \bar{A}\tilde{x}\|_2 + \sum_{i \in I} c|\tilde{x}_i|\big\}$. Now we show that there exists an optimal solution $\tilde{x}^*$ such that $\tilde{x}^*_j = 0$ for all $j \notin I$. This is because the $\bar{a}_j$ are orthogonal to the span of $\{\bar{a}_i, i \in I\} \cup \{b\}$. Hence for any given $\tilde{x}$, changing $\tilde{x}_j$ to zero for all $j \notin I$ does not increase the minimizing objective.
Since $\|a_j - \bar{a}_j\| = \|a^{=}_j\| \le c$, $\forall j \notin I$ (and recall that $\mathcal{U} = \{(\delta_1, \cdots, \delta_m) \mid \|\delta_i\|_2 \le c,\; \forall i\}$), applying Theorem 4 we establish the corollary.
The next corollary follows easily from Corollary 1.
Corollary 2. Suppose there exists $I \subseteq \{1, \cdots, m\}$ such that for all $i \in I$, $\|a_i\| < c_i$. Then any optimal solution $x^*$ satisfies $x^*_i = 0$ for $i \in I$.
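Corollary 2 is easy to see empirically; in the sketch below (our own illustration), a feature whose column norm sits below its disturbance budget $c_i$ receives exactly zero weight:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 4))
A[:, 3] *= 0.01                          # ||a_4||_2 is far below c_4
b = A[:, :3] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(50)

c = 0.2 * np.ones(4)
x = cp.Variable(4)
reg = cp.sum(cp.multiply(c, cp.abs(x)))  # sum_i c_i |x_i|
cp.Problem(cp.Minimize(cp.norm(b - A @ x, 2) + reg)).solve()
print(np.round(x.value, 4))              # last coefficient comes out 0
```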
¹While we are not assuming generative models to establish the results, it is still interesting to see how these results can help in a generative model setup.
The next theorem shows that sparsity is achieved when a set of features are "almost" linearly dependent. Again we refer to [15] for the proof.
Theorem 5. Given $I \subseteq \{1, \cdots, m\}$ such that there exists a non-zero vector $(w_i)_{i \in I}$ satisfying
$$\Big\|\sum_{i \in I} w_i a_i\Big\|_2 \le \min_{\alpha_i \in \{-1, +1\}} \Big|\sum_{i \in I} \alpha_i c_i w_i\Big|,$$
then there exists an optimal solution $x^*$ such that $\forall i \in I : x^*_i = 0$.
Notice that for linearly dependent features, there exists a non-zero $(w_i)_{i \in I}$ such that $\big\|\sum_{i \in I} w_i a_i\big\|_2 = 0$, which leads to the following corollary.
Corollary 3. Given $I \subseteq \{1, \cdots, m\}$, let $A_I \triangleq (a_i)_{i \in I}$, and $t \triangleq \mathrm{rank}(A_I)$. There exists an optimal solution $x^*$ such that $x^*_I \triangleq (x_i)^{\top}_{i \in I}$ has at most $t$ non-zero coefficients.
Setting $I = \{1, \cdots, m\}$, we immediately get the following corollary.
Corollary 4. If $n < m$, then there exists an optimal solution with no more than $n$ non-zero coefficients.
4
Density Estimation and Consistency
In this section, we investigate the robust linear regression formulation from a statistical perspective
and rederive using only robustness properties that Lasso is asymptotically consistent. We note that
our result applies to a considerably more general framework than Lasso. In the full version ([15])
we use some intermediate results used to prove consistency, to show that regularization can be
identified with the so-called maxmin expected utility (MMEU) framework, thus tying regularization
to a fundamental tenet of decision-theory.
We restrict our discussion to the case where the magnitude of the allowable uncertainty for all features equals $c$ (i.e., the standard Lasso), and establish the statistical consistency of Lasso from a distributional robustness argument. Generalization to the non-uniform case is straightforward. Throughout, we use $c_n$ to represent $c$ when there are $n$ samples (we take $c_n$ to zero).
Recall the standard generative model in statistical learning: let $\mathbb{P}$ be a probability measure with bounded support that generates i.i.d. samples $(b_i, r_i)$, and has a density $f^*(\cdot)$. Denote the set of the first $n$ samples by $S_n$. Define
$$x(c_n, S_n) \triangleq \arg\min_x \left\{\sqrt{\frac{1}{n}\sum_{i=1}^n (b_i - r_i^{\top}x)^2} + c_n\|x\|_1\right\} = \arg\min_x \left\{\sqrt{\sum_{i=1}^n (b_i - r_i^{\top}x)^2} + \sqrt{n}\,c_n\|x\|_1\right\};$$
$$x(\mathbb{P}) \triangleq \arg\min_x \left\{\sqrt{\int_{b,r} (b - r^{\top}x)^2\, d\mathbb{P}(b, r)}\right\}.$$
In words, $x(c_n, S_n)$ is the solution to Lasso with the tradeoff parameter set to $c_n\sqrt{n}$, and $x(\mathbb{P})$ is the "true" optimal solution. We have the following consistency result. The theorem itself is a
well-known result. However, the proof technique is novel. This technique is of interest because
the standard techniques to establish consistency in statistical learning including VC dimension and
algorithm stability often work for a limited range of algorithms, e.g., SVMs are known to have
infinite VC dimension, and we show in the full version ([15]) that Lasso is not stable. In contrast,
a much wider range of algorithms have robustness interpretations, allowing a unified approach to
prove their consistency.
Theorem 6. Let $\{c_n\}$ be such that $c_n \downarrow 0$ and $\lim_{n \to \infty} n(c_n)^{m+1} = \infty$. Suppose there exists a constant $H$ such that $\|x(c_n, S_n)\|_2 \le H$ almost surely. Then,
$$\lim_{n \to \infty} \sqrt{\int_{b,r} \big(b - r^{\top}x(c_n, S_n)\big)^2\, d\mathbb{P}(b, r)} = \sqrt{\int_{b,r} \big(b - r^{\top}x(\mathbb{P})\big)^2\, d\mathbb{P}(b, r)},$$
almost surely.
The full proof and results we develop along the way are deferred to [15], but we provide the main
ideas and outline here. The key to the proof is establishing a connection between robustness and
kernel density estimation.
Step 1: For a given x, we show that the robust regression loss over the training data is equal to the
worst-case expected generalization error. To show this we establish a more general result:
Proposition 1. Given a function $h : \mathbb{R}^{m+1} \to \mathbb{R}$ and Borel sets $Z_1, \cdots, Z_n \subseteq \mathbb{R}^{m+1}$, let
$$\mathcal{P}_n \triangleq \Big\{\mu \in \mathcal{P} \,\Big|\, \forall S \subseteq \{1, \cdots, n\} : \mu\big(\textstyle\bigcup_{i \in S} Z_i\big) \ge |S|/n\Big\}.$$
The following holds:
$$\frac{1}{n}\sum_{i=1}^n \sup_{(r_i, b_i) \in Z_i} h(r_i, b_i) = \sup_{\mu \in \mathcal{P}_n} \int_{\mathbb{R}^{m+1}} h(r, b)\, d\mu(r, b).$$
Step 2: Next we show that robust regression has a form like that on the left hand side above. Also, the set of distributions we supremize over, on the right hand side above, includes a kernel density estimator for the true (unknown) distribution. Indeed, consider the following kernel estimator: given samples $(b_i, r_i)_{i=1}^n$,
$$h_n(b, r) \triangleq (nc^{m+1})^{-1} \sum_{i=1}^n K\Big(\frac{b - b_i}{c}, \frac{r - r_i}{c}\Big), \qquad (5)$$
$$\text{where: } K(x) \triangleq \mathbb{I}_{[-1,+1]^{m+1}}(x)/2^{m+1}.$$
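A direct NumPy transcription of the box-kernel estimator (5); the vectorization below is our own choice:

```python
import numpy as np

def box_kde(points, query, c):
    """h_n(query) from eqn (5): uniform kernel on [-1,1]^(m+1);
    `points` is an n x (m+1) array of samples (b_i, r_i)."""
    n, d = points.shape
    inside = np.all(np.abs((query - points) / c) <= 1.0, axis=1)
    return inside.sum() / (n * c ** d * 2 ** d)
```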
Observe that the estimated distribution given by Equation (5) belongs to the set of distributions
$$\mathcal{P}_n(A, \delta, b, c) \triangleq \Big\{\mu \in \mathcal{P} \,\Big|\, Z_i = [b_i - c,\, b_i + c] \times \prod_{j=1}^m [a_{ij} - \delta_{ij},\, a_{ij} + \delta_{ij}];\; \forall S \subseteq \{1, \cdots, n\} : \mu\big(\textstyle\bigcup_{i \in S} Z_i\big) \ge |S|/n\Big\},$$
and hence belongs to
$$\bar{\mathcal{P}}(n) \triangleq \bigcup_{\delta:\; \sum_i \delta_{ij}^2 = nc^2\; \forall j} \mathcal{P}_n(A, \delta, b, c),$$
which is precisely the set of distributions used in the representation from Proposition 1.
Step 3: Combining the last two steps, and using the fact that $\int_{b,r} |h_n(b, r) - f^*(b, r)|\, d(b, r)$ goes to zero almost surely when $c_n \to 0$ and $nc_n^{m+1} \to \infty$, since $h_n(\cdot)$ is a kernel density estimate of $f^*(\cdot)$ (see, e.g., Theorem 3.1 of [21]), we prove consistency of robust regression.
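The flavor of Theorem 6 can be checked by simulation; the sketch below is illustrative only — the rate $c_n = n^{-1/(m+2)}$ is one admissible choice satisfying $c_n \to 0$ and $nc_n^{m+1} \to \infty$, and the data-generating model is our own:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
m, x_true = 3, np.array([1.0, -1.0, 0.0])

def gen_error(x, n_test=100000):
    R = rng.uniform(-1, 1, (n_test, m))
    b = R @ x_true + 0.1 * rng.standard_normal(n_test)
    return np.sqrt(np.mean((b - R @ x) ** 2))

for n in [100, 1000, 10000]:
    R = rng.uniform(-1, 1, (n, m))
    b = R @ x_true + 0.1 * rng.standard_normal(n)
    cn = n ** (-1.0 / (m + 2))           # c_n -> 0 while n * c_n^(m+1) -> infinity
    x = cp.Variable(m)
    cp.Problem(cp.Minimize(cp.norm(b - R @ x, 2) / np.sqrt(n)
                           + cn * cp.norm(x, 1))).solve()
    print(n, gen_error(x.value))         # approaches the population optimum
```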
We can remove the assumption that $\|x(c_n, S_n)\|_2 \le H$; as in Theorem 6, the proof technique rather than the result itself is of interest. We postpone the proof to [15].
Theorem 7. Let $\{c_n\}$ converge to zero sufficiently slowly. Then
$$\lim_{n \to \infty} \sqrt{\int_{b,r} \big(b - r^{\top}x(c_n, S_n)\big)^2\, d\mathbb{P}(b, r)} = \sqrt{\int_{b,r} \big(b - r^{\top}x(\mathbb{P})\big)^2\, d\mathbb{P}(b, r)},$$
almost surely.
5
Conclusion
In this paper, we consider robust regression with a least-square-error loss, and extend the results of
[11] (i.e., Tikhonov regularization is equivalent to a robust formulation for Frobenius norm-bounded
disturbance set) to a broader range of disturbance sets and hence regularization schemes. A special
case of our formulation recovers the well-known Lasso algorithm, and we obtain an interpretation
of Lasso from a robustness perspective. We consider more general robust regression formulations,
allowing correlation between the feature-wise noise, and we show that this too leads to tractable
convex optimization problems.
We exploit the new robustness formulation to give direct proofs of sparseness and consistency for
Lasso. As our results follow from robustness properties, it suggests that they may be far more
general than Lasso, and that in particular, consistency and sparseness may be properties one can
obtain more generally from robustified algorithms.
References
[1] L. Eldén. Perturbation theory for the least-squares problem with linear equality constraints. BIT, 24:472–476, 1985.
[2] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 1989.
[3] A. Tikhonov and V. Arsenin. Solutions of Ill-Posed Problems. Wiley, New York, 1977.
[4] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[5] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.
[6] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1998.
[7] A. Feuer and A. Nemirovski. On sparse representation in pairs of bases. IEEE Transactions on Information Theory, 49(6):1579–1581, 2003.
[8] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
[9] J. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50(10):2231–2242, 2004.
[10] M. Wainwright. Sharp thresholds for noisy and high-dimensional recovery of sparsity using $\ell_1$-constrained quadratic programming. Technical report, available from http://www.stat.berkeley.edu/tech-reports/709.pdf, Department of Statistics, UC Berkeley, 2006.
[11] L. El Ghaoui and H. Lebret. Robust solutions to least-squares problems with uncertain data. SIAM Journal on Matrix Analysis and Applications, 18:1035–1064, 1997.
[12] A. Ben-Tal and A. Nemirovski. Robust solutions of uncertain linear programs. Operations Research Letters, 25(1):1–13, August 1999.
[13] D. Bertsimas and M. Sim. The price of robustness. Operations Research, 52(1):35–53, January 2004.
[14] P. Shivaswamy, C. Bhattacharyya, and A. Smola. Second order cone programming approaches for handling missing and uncertain data. Journal of Machine Learning Research, 7:1283–1314, July 2006.
[15] H. Xu, C. Caramanis, and S. Mannor. Robust regression and Lasso. Submitted; available from http://arxiv.org/abs/0811.1790v1, 2008.
[16] J. Tropp. Just relax: Convex programming methods for identifying sparse signals. IEEE Transactions on Information Theory, 51(3):1030–1051, 2006.
[17] F. Girosi. An equivalence between sparse approximation and support vector machines. Neural Computation, 10(6):1445–1480, 1998.
[18] R. R. Coifman and M. V. Wickerhauser. Entropy-based algorithms for best-basis selection. IEEE Transactions on Information Theory, 38(2):713–718, 1992.
[19] S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993.
[20] D. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
[21] L. Devroye and L. Györfi. Nonparametric Density Estimation: The L1 View. John Wiley & Sons, 1985.
Large Margin Taxonomy Embedding with an
Application to Document Categorization
Olivier Chapelle
Yahoo! Research
[email protected]
Kilian Weinberger
Yahoo! Research
[email protected]
Abstract
Applications of multi-class classification, such as document categorization, often
appear in cost-sensitive settings. Recent work has significantly improved the state
of the art by moving beyond "flat" classification through incorporation of class
hierarchies [4]. We present a novel algorithm that goes beyond hierarchical classification and estimates the latent semantic space that underlies the class hierarchy.
In this space, each class is represented by a prototype and classification is done
with the simple nearest neighbor rule. The optimization of the semantic space
incorporates large margin constraints that ensure that for each instance the correct
class prototype is closer than any other. We show that our optimization is convex
and can be solved efficiently for large data sets. Experiments on the OHSUMED
medical journal data base yield state-of-the-art results on topic categorization.
1 Introduction
Multi-class classification is a problem that arises in many applications of machine learning. In many
cases the cost of misclassification varies strongly between classes. For example, in the context of
object recognition it may be significantly worse to misclassify a male pedestrian as a traffic light
than as a female pedestrian. Similarly, in the context of document categorization it seems more
severe to misclassify a medical journal on heart attack as a publication on athlete?s foot than on
Coronary artery disease. Although the scope of the proposed method is by no means limited to
text data and topic hierarchies, for improved clarity we will restrict ourselves to terminology from
document categorization throughout this paper.
The most common approach to document categorization is to reduce the problem to a "flat" classification problem [13]. However, it is often the case that the topics are not just discrete classes, but
are nodes in a complex taxonomy with rich inter-topic relationships. For example, web pages can be
categorized into the Yahoo! web taxonomy or medical journals can be categorized into the Medical
Subject Headings (MeSH) taxonomy. Moving beyond flat classification to settings that utilize these
hierarchical representations of topics has significantly pushed the state-of-the art [4, 15]. Additional
information about inter-topic relationships can for example be leveraged through cost-sensitive decision boundaries or knowledge sharing between documents from closely related classes.
In reality, however, the topic taxonomy is a crude approximation of topic relations, created by an
editor with knowledge of the true underlying semantic space of topics. In this paper we propose a
method that moves beyond hierarchical representations and aims to re-discover the continuous latent
semantic space underlying the topic taxonomy. Instead of regarding document categorization as
classification, we will think of it as a regression problem where new documents are mapped into this
latent semantic topic space. Very different from approaches like LSI or LDA [1, 7], our algorithm is
entirely supervised and explicitly embeds the topic taxonomy and the documents into a single latent
semantic space with "semantically meaningful" Euclidean distances.
[Figure 1: schematic illustration omitted.] Figure 1: A schematic layout of our taxem method (for Taxonomy Embedding). The classes are embedded as prototypes inside the semantic space. The input documents are mapped into the same space, placed closest to their topic prototypes.
In this paper we derive a method to embed the taxonomy of topics into a latent semantic space in
form of topic prototypes. A new document can be classified by first mapping it into this space and
then assigning the label of the closest prototype. A key contribution of our paper is the derivation
of a convex problem that learns the regressor for the documents and the placement of the prototypes
in a single optimization. In particular, it places the topic prototypes such that for each document
the prototype of the correct topic is much closer than any other prototype by a large margin. We
show that this optimization is a special instance of semi-definite programs [2], that can be solved
efficiently [16] for large data sets.
Our paper is structured as follows: In section 2 we introduce necessary notation and a first version
of the algorithm based on a two-step approach of first embedding the hierarchical taxonomy into a
semantic space and then regressing the input documents close to their respective topic prototypes.
In section 3 we extend our model to a single optimization that learns both steps in one convex optimization with large margin constraints. We evaluate our method in section 4 and demonstrate
state-of-the-art results on eight different document categorization tasks from the OHSUMED medical journal data set. Finally, we relate our method to previous work in section 5 and conclude in
section 6.
2 Method
We assume that our input consists of documents, represented as a set of high dimensional sparse vectors $\vec{x}_1, \dots, \vec{x}_n \in \mathcal{X}$ of dimensionality $d$. Typically, these could be binary bag of words indicators or tfidf scores. In addition, the documents are accompanied by single topic labels $y_1, \dots, y_n \in \{1, \dots, c\}$ that lie in some taxonomy $\mathcal{T}$ with $c$ total topics. This taxonomy $\mathcal{T}$ gives rise to some cost matrix $\mathbf{C} \in \mathbb{R}^{c \times c}$, where $C_{\alpha\beta} \ge 0$ defines the cost of misclassifying an element of topic $\beta$ as $\alpha$, and $C_{\alpha\alpha} = 0$. Technically, we only require knowledge of the cost matrix $\mathbf{C}$, which could also be obtained from side-information independent of a topic taxonomy. In this paper we will not focus on how $\mathbf{C}$ is obtained. However, we would like to point out that a common way to infer a cost matrix from a taxonomy is to set $C_{\alpha\beta}$ to the length of the shortest path between nodes $\alpha$ and $\beta$, but other approaches have also been studied [3].
Throughout this paper we denote document indices as $i, j \in \{1, \dots, n\}$ and topic indices as $\alpha, \beta \in \{1, \dots, c\}$. Matrices are written in bold (e.g. $\mathbf{C}$) and vectors carry top arrows (e.g. $\vec{x}_i$).
Figure 1 illustrates our setup schematically. We would like to create a low dimensional semantic feature space $\mathcal{F}$ in which we represent each topic $\alpha$ as a topic prototype $\vec{p}_\alpha \in \mathcal{F}$ and each document $\vec{x}_i \in \mathcal{X}$ as a low dimensional vector $\vec{z}_i \in \mathcal{F}$. Our goal is to discover a representation of the data where distances reflect true underlying dissimilarities and proximity to prototypes indicates topic membership. In other words, documents on the same or related topics should be close to the respective topic prototypes, and documents on highly different topics should be well separated.
Throughout this paper we will assume that $\mathcal{F} = \mathbb{R}^c$; however, our method can easily be adapted to even lower dimensional settings $\mathcal{F} = \mathbb{R}^r$ where $r < c$. As an essential part of our method is to embed the classes that are typically found in a taxonomy, we refer to our algorithm as taxem (short for "taxonomy embedding").
Embedding topic prototypes
The first step of our algorithm is to embed the document taxonomy into a Euclidean vector space. More formally, we derive topic prototypes $\vec{p}_1, \dots, \vec{p}_c \in \mathcal{F}$ based on the cost matrix $\mathbf{C}$, where $\vec{p}_\alpha$ is the prototype that represents topic $\alpha$. To simplify notation, we define the matrix $\mathbf{P} = [\vec{p}_1, \dots, \vec{p}_c] \in \mathbb{R}^{c \times c}$ whose columns consist of the topic prototypes.
There are many ways to derive the prototypes from the cost matrix $\mathbf{C}$. By far the simplest method is to ignore the cost matrix $\mathbf{C}$ entirely and let $\mathbf{P}_I = \mathbf{I}$, where $\mathbf{I} \in \mathbb{R}^{c \times c}$ denotes the identity matrix. This results in a $c$ dimensional feature space, where the class-prototypes all lie at distance $\sqrt{2}$ from each other at the corners of a $(c-1)$-dimensional simplex. We will refer to $\mathbf{P}_I$ as the simplex prototypes.
Better results can be expected when the prototypes of similar topics are closer than those of dissimilar topics. We use the cost matrix $\mathbf{C}$ as an estimate of dissimilarity and aim to place the prototypes such that the distance $\|\vec{p}_\alpha - \vec{p}_\beta\|_2^2$ reflects the cost specified in $C_{\alpha\beta}$. More formally, we set
$$\mathbf{P}_{mds} = \operatorname{argmin}_{\mathbf{P}} \sum_{\alpha,\beta=1}^{c} \left( \|\vec{p}_\alpha - \vec{p}_\beta\|_2^2 - C_{\alpha\beta} \right)^2. \tag{1}$$
If the cost matrix $\mathbf{C}$ defines squared Euclidean distances (e.g. when the cost is obtained through the shortest path between nodes and then squared), we can solve eq. (1) with metric multi-dimensional scaling [5]. Let us denote $\tilde{\mathbf{C}} = -\frac{1}{2}\mathbf{H}\mathbf{C}\mathbf{H}$, where the centering matrix $\mathbf{H}$ is defined as $\mathbf{H} = \mathbf{I} - \frac{1}{c}\mathbf{1}\mathbf{1}^\top$, and let its eigenvector decomposition be $\tilde{\mathbf{C}} = \mathbf{V}\mathbf{\Lambda}\mathbf{V}^\top$. We obtain the solution by setting $\mathbf{P}_{mds} = \sqrt{\mathbf{\Lambda}}\,\mathbf{V}^\top$. We will refer to $\mathbf{P}_{mds}$ as the mds prototypes.¹
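As a concrete illustration, the following sketch (our own, assuming numpy is available and that the cost matrix holds squared Euclidean distances) computes the mds prototypes exactly as described above:

```python
# A minimal sketch of the mds prototype construction (eq. (1) via classical MDS).
import numpy as np

def mds_prototypes(C):
    """Return P_mds whose columns are the c topic prototypes."""
    c = C.shape[0]
    H = np.eye(c) - np.ones((c, c)) / c      # centering matrix H = I - (1/c) 11^T
    C_tilde = -0.5 * H @ C @ H               # double-centered Gram matrix
    lam, V = np.linalg.eigh(C_tilde)         # eigendecomposition C_tilde = V diag(lam) V^T
    lam = np.clip(lam, 0.0, None)            # project out negative eigenvalues (footnote 1)
    return np.diag(np.sqrt(lam)) @ V.T       # P_mds = sqrt(Lambda) V^T

# Toy usage with a hypothetical 3-topic cost matrix of squared hop distances:
C = np.array([[0., 1., 4.],
              [1., 0., 1.],
              [4., 1., 0.]])
P = mds_prototypes(C)
# Squared inter-prototype distances approximately reproduce C:
D = ((P[:, :, None] - P[:, None, :]) ** 2).sum(axis=0)
```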
Both prototype embeddings $\mathbf{P}_I$ and $\mathbf{P}_{mds}$ are still independent of the input data $\{\vec{x}_i\}$. Before we can derive a more sophisticated method to place the prototypes with large margin constraints on the document vectors, we will briefly describe the mapping $\mathbf{W}: \mathcal{X} \to \mathcal{F}$ of the input documents into the low dimensional feature space $\mathcal{F}$.
Document regression
Assume for now that we have found a suitable embedding $\mathbf{P}$ of the class-prototypes. We need to find an appropriate mapping $\mathbf{W}: \mathcal{X} \to \mathcal{F}$ that maps each input $\vec{x}_i$ with label $y_i$ as close as possible to its topic prototype $\vec{p}_{y_i}$. We can find such a linear transformation $\vec{z}_i = \mathbf{W}\vec{x}_i$ by setting
$$\mathbf{W} = \operatorname{argmin}_{\mathbf{W}} \sum_i \|\vec{p}_{y_i} - \mathbf{W}\vec{x}_i\|^2 + \lambda\|\mathbf{W}\|_F^2. \tag{2}$$
Here, $\lambda$ is the weight of the regularization of $\mathbf{W}$, which is necessary to prevent potential overfitting due to the high number of parameters in $\mathbf{W}$. The minimization in eq. (2) is an instance of linear ridge regression and has the closed form solution
$$\mathbf{W} = \mathbf{P}\mathbf{J}\mathbf{X}^\top(\mathbf{X}\mathbf{X}^\top + \lambda\mathbf{I})^{-1}, \tag{3}$$
where $\mathbf{X} = [\vec{x}_1, \dots, \vec{x}_n]$ and $\mathbf{J} \in \{0,1\}^{c \times n}$, with $J_{\alpha i} = 1$ if and only if $y_i = \alpha$. Please note that eq. (3) can be solved very accurately without the need to ever compute the $d \times d$ matrix inverse $(\mathbf{X}\mathbf{X}^\top + \lambda\mathbf{I})^{-1}$ explicitly, by solving with linear conjugate gradient for each row of $\mathbf{W}$ independently.
Inference
Given an input vector $\vec{x}_t$ we first map it into $\mathcal{F}$ and estimate its label as the topic with the closest prototype $\vec{p}_\alpha$:
$$\hat{y}_t = \operatorname{argmin}_\alpha \|\vec{p}_\alpha - \mathbf{W}\vec{x}_t\|^2. \tag{4}$$
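A minimal sketch of eqs. (3) and (4), in our own notation and assuming dense numpy arrays; for the sparse, high-dimensional case one would instead solve each row of $\mathbf{W}$ by linear conjugate gradient as noted above, never forming the $d \times d$ inverse:

```python
import numpy as np

def learn_mapping(X, y, P, lam=1.0):
    """X: d x n documents, y: length-n integer labels, P: c x c prototypes."""
    d, n = X.shape
    c = P.shape[1]
    J = np.zeros((c, n))
    J[y, np.arange(n)] = 1.0              # J_{alpha i} = 1 iff y_i = alpha
    W = P @ J @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))   # eq. (3)
    return W

def predict(W, P, x_t):
    z = W @ x_t                           # map the document into F
    dists = ((P - z[:, None]) ** 2).sum(axis=0)
    return int(np.argmin(dists))          # nearest prototype, eq. (4)
```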
¹ If $\tilde{\mathbf{C}}$ does not contain Euclidean distances one can use the common approximation of forcing negative eigenvalues in $\mathbf{\Lambda}$ to zero and thereby fall back onto the projection of $\mathbf{C}$ onto the cone of positive semi-definite matrices.
[Figure 2: schematic illustration omitted.] Figure 2: The schematic layout of the large-margin embedding of the taxonomy and the documents. As a first step, we represent topic $\alpha$ as the vector $\vec{e}_\alpha$ and document $\vec{x}_i$ as $\vec{x}'_i = \mathbf{A}\vec{x}_i$. We then learn the matrix $\mathbf{P}$ whose columns are the prototypes $\vec{p}_\alpha = \mathbf{P}\vec{e}_\alpha$ and which defines the final transformation of the documents $\vec{z}_i = \mathbf{P}\vec{x}'_i$. This final transformation is learned such that the correct prototype $\vec{p}_{y_i}$ is closer to $\vec{z}_i$ than any other prototype $\vec{p}_\alpha$ by a large margin.
For a given set of labeled documents $(\vec{x}_1, y_1), \dots, (\vec{x}_n, y_n)$ we measure the quality of our semantic space with the averaged cost-sensitive misclassification loss,
$$E = \frac{1}{n}\sum_{i=1}^{n} C_{y_i \hat{y}_i}. \tag{5}$$
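Evaluating this loss is a one-liner; the helper below is our own illustration, assuming integer topic indices and a numpy cost matrix:

```python
import numpy as np

def cost_sensitive_loss(C, y_true, y_pred):
    """Averaged cost-sensitive misclassification loss of eq. (5)."""
    return C[np.asarray(y_true), np.asarray(y_pred)].mean()
```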
3 Large Margin Prototypes
So far we have introduced a two step approach: First, we find the prototypes $\mathbf{P}$ based on the cost matrix $\mathbf{C}$; then we learn the mapping $\vec{x} \mapsto \mathbf{W}\vec{x}$ that maps each input closest to the prototype of its class. However, learning the prototypes independent of the data $\{\vec{x}_i\}$ is far from optimal in order to reduce the loss in (5). In this section we will create a joint optimization problem that places the prototypes $\mathbf{P}$ and learns the mapping $\mathbf{W}$ while minimizing an upper bound on (5).
Combined learning
In our attempt to learn both mappings jointly, we are faced with a "chicken and egg" problem. We want to map the input documents closest to their prototypes and at the same time place the prototypes where the documents of the respective topic are mapped to. Therefore our first task is to de-tangle this mutual dependency of $\mathbf{W}$ and $\mathbf{P}$. Let us define $\mathbf{A}$ as the following matrix product:
$$\mathbf{A} = \mathbf{J}\mathbf{X}^\top(\mathbf{X}\mathbf{X}^\top + \lambda\mathbf{I})^{-1}. \tag{6}$$
It follows immediately from eqs. (3) and (6) that $\mathbf{W} = \mathbf{P}\mathbf{A}$. Note that eq. (6) is entirely independent of $\mathbf{P}$ and can be pre-computed before the prototypes have been positioned. With this relation we have reduced the problem of determining $\mathbf{W}$ and $\mathbf{P}$ to the single problem of determining $\mathbf{P}$. Figure 2 illustrates the new schematic layout of the algorithm.
Let $\vec{x}'_i = \mathbf{A}\vec{x}_i$ and let $\vec{e}_\alpha = [0, \dots, 1, \dots, 0]^\top$ be the vector with all zeros and a single 1 in the $\alpha$-th position. We can then rewrite both the topic prototypes $\vec{p}_\alpha$ and the low dimensional documents $\vec{z}_i$ as vectors within the range of $\mathbf{P}: \mathbb{R}^c \to \mathbb{R}^c$:
$$\vec{p}_\alpha = \mathbf{P}\vec{e}_\alpha, \qquad \vec{z}_i = \mathbf{P}\vec{x}'_i. \tag{7}$$
Optimization
Ideally we would like to learn $\mathbf{P}$ to minimize (5) directly. However, this function is non-continuous and non-differentiable. For this reason we will derive a surrogate loss function that strictly bounds (5) from above.
The loss for a specific document $\vec{x}_i$ is zero if its corresponding vector $\vec{z}_i$ is closer to the correct prototype $\vec{p}_{y_i}$ than to any other prototype $\vec{p}_\alpha$. For better generalization it would be preferable if prototype $\vec{p}_{y_i}$ was in fact much closer by a large margin. We can go even further and demand that prototypes that would incur a larger misclassification loss should be further separated than those with a small cost. More explicitly, we will try to enforce a margin of $C_{y_i\alpha}$. We can express this condition as a set of "soft" inequality constraints, in terms of squared distances,
$$\forall i, \alpha \neq y_i: \quad \|\mathbf{P}(\vec{e}_{y_i} - \vec{x}'_i)\|_2^2 + C_{y_i\alpha} \le \|\mathbf{P}(\vec{e}_\alpha - \vec{x}'_i)\|_2^2 + \xi_{i\alpha}, \tag{8}$$
where the slack-variable $\xi_{i\alpha} \ge 0$ absorbs the amount of violation of prototype $\vec{p}_\alpha$ into the margin of $\vec{x}'_i$. Given this formulation, we create an upper bound on the loss function (5):
Theorem 1. Given a prototype matrix $\mathbf{P}$, the training error (5) is bounded above by $\frac{1}{n}\sum_{i,\alpha} \xi_{i\alpha}$.
Proof: First, note that we can rewrite the assignment of the closest prototype (4) as $\hat{y}_i = \operatorname{argmin}_\alpha \|\mathbf{P}(\vec{e}_\alpha - \vec{x}'_i)\|^2$. It follows that $\|\mathbf{P}(\vec{e}_{y_i} - \vec{x}'_i)\|_2^2 - \|\mathbf{P}(\vec{e}_{\hat{y}_i} - \vec{x}'_i)\|_2^2 \ge 0$ for all $i$ (with equality when $\hat{y}_i = y_i$). We therefore obtain:
$$\xi_{i\hat{y}_i} = \|\mathbf{P}(\vec{e}_{y_i} - \vec{x}'_i)\|_2^2 + C_{y_i\hat{y}_i} - \|\mathbf{P}(\vec{e}_{\hat{y}_i} - \vec{x}'_i)\|_2^2 \ge C_{y_i\hat{y}_i}. \tag{9}$$
The result follows immediately from (9) and the fact that $\xi_{i\alpha} \ge 0$:
$$\sum_{i,\alpha} \xi_{i\alpha} \;\ge\; \sum_i \xi_{i\hat{y}_i} \;\ge\; \sum_i C_{y_i\hat{y}_i}. \tag{10}$$
Theorem 1, together with the constraints in eq. (8), allows us to create an optimization problem that minimizes an upper bound on the average loss in eq. (5) with maximum-margin constraints:
$$\min_{\mathbf{P}} \sum_{i,\alpha} \xi_{i\alpha} \quad \text{subject to:} \quad (1)\; \|\mathbf{P}(\vec{e}_{y_i} - \vec{x}'_i)\|_2^2 + C_{y_i\alpha} \le \|\mathbf{P}(\vec{e}_\alpha - \vec{x}'_i)\|_2^2 + \xi_{i\alpha}, \quad (2)\; \xi_{i\alpha} \ge 0 \tag{11}$$
Note that if we have a very large number of classes, it might be beneficial to choose $\mathbf{P} \in \mathbb{R}^{r \times c}$ with $r < c$. However, the convex formulation described in the next paragraph requires $\mathbf{P}$ to be square.
Convex formulation
The optimization in eq. (11) is not convex. The constraints of type (8) are quadratic with respect to $\mathbf{P}$. Intuitively, any solution $\mathbf{P}$ gives rise to infinitely many solutions, as any rotation of $\mathbf{P}$ results in the same objective value and also satisfies all constraints. We can make (11) invariant to rotation by defining $\mathbf{Q} = \mathbf{P}^\top\mathbf{P}$, and rewriting all distances in terms of $\mathbf{Q}$:
$$\|\mathbf{P}(\vec{e}_\alpha - \vec{x}'_i)\|_2^2 = (\vec{e}_\alpha - \vec{x}'_i)^\top \mathbf{Q} (\vec{e}_\alpha - \vec{x}'_i) = \|\vec{e}_\alpha - \vec{x}'_i\|_{\mathbf{Q}}^2. \tag{12}$$
Note that the distance formulation in eq. (12) is linear with respect to $\mathbf{Q}$. As long as the matrix $\mathbf{Q}$ is positive semi-definite, we can re-decompose it into $\mathbf{Q} = \mathbf{P}^\top\mathbf{P}$. Hence, we enforce positive semi-definiteness of $\mathbf{Q}$ by adding the constraint $\mathbf{Q} \succeq 0$. We can now solve (11) in terms of $\mathbf{Q}$ instead of $\mathbf{P}$ with the large-margin constraints
$$\forall i, \alpha \neq y_i: \quad \|\vec{e}_{y_i} - \vec{x}'_i\|_{\mathbf{Q}}^2 + C_{y_i\alpha} \le \|\vec{e}_\alpha - \vec{x}'_i\|_{\mathbf{Q}}^2 + \xi_{i\alpha}. \tag{13}$$
Regularization
If the size of the training data $n$ is small compared to the number of parameters $c^2$, we might run into problems of overfitting to the training data set. To counter those effects, we add a regularization term to the objective function.
Even if the training data might differ from the test data, we know that the taxonomy does not change. It is straightforward to verify that if the mapping $\mathbf{A}$ were perfect, i.e. for all $i$ we had $\vec{x}'_i = \vec{e}_{y_i}$, then $\mathbf{P}_{mds}$ satisfies all constraints (8) as equalities with zero slack. This gives us confidence that the optimal solution $\mathbf{P}$ for the test data should not deviate too much from $\mathbf{P}_{mds}$. We will therefore penalize $\|\mathbf{Q} - \tilde{\mathbf{C}}\|_F^2$, where $\tilde{\mathbf{C}} = \mathbf{P}_{mds}^\top \mathbf{P}_{mds}$ (as defined in section 2).
Top category    A      B      C      D      E      F      G      H
# samples n     7544   4772   4858   2701   7300   1961   8694   8155
# topics c      424    160    453    339    457    151    425    150
# nodes         519    312    610    608    559    218    533    170

Table 1: Statistics of the different OHSUMED problems. Note that not all nodes are populated and that we pruned all strictly un-populated subtrees.
The final convex optimization of taxem with regularized objective becomes:
$$\begin{aligned} \min_{\mathbf{Q}} \quad & (1-\mu)\sum_{i,\alpha} \xi_{i\alpha} + \mu\,\|\mathbf{Q} - \tilde{\mathbf{C}}\|_F^2 \quad \text{subject to:} \\ (1)\quad & \|\vec{e}_{y_i} - \vec{x}'_i\|_{\mathbf{Q}}^2 + C_{y_i\alpha} \le \|\vec{e}_\alpha - \vec{x}'_i\|_{\mathbf{Q}}^2 + \xi_{i\alpha} \\ (2)\quad & \xi_{i\alpha} \ge 0 \qquad (3)\quad \mathbf{Q} \succeq 0 \end{aligned} \tag{14}$$
The constant $\mu \in [0, 1]$ regulates the impact of the regularization term. The optimization in (14) is an instance of a semidefinite program (SDP) [2]. Although SDPs can often be expensive to solve, the optimization (14) falls into a special category² and can be solved very efficiently with special purpose sub-gradient solvers even with millions of constraints [16]. Once the optimal solution $\mathbf{Q}^\ast$ is found, one can obtain the position of the prototypes with a simple SVD or Cholesky decomposition $\mathbf{Q}^\ast = \mathbf{P}^\top\mathbf{P}$, and consequently one also obtains the mapping $\mathbf{W}$ from $\mathbf{W} = \mathbf{P}\mathbf{A}$.
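To make the structure of such a solver concrete, here is a simplified projected sub-gradient sketch of our own; it is not the specialized solver of [16], and the step size, iteration count, and warm start are illustrative assumptions:

```python
import numpy as np

def solve_taxem(Xp, y, C_tilde, Cost, mu=0.1, lr=1e-3, iters=200):
    """Xp: c x n matrix of mapped documents x'_i, y: labels, Cost: c x c cost matrix."""
    c, n = Xp.shape
    E = np.eye(c)
    Q = C_tilde.copy()                        # warm-start at the mds solution
    for _ in range(iters):
        G = 2 * mu * (Q - C_tilde)            # gradient of the regularizer
        for i in range(n):
            u = E[:, y[i]] - Xp[:, i]
            du = u @ Q @ u                    # ||e_{y_i} - x'_i||_Q^2
            for a in range(c):
                if a == y[i]:
                    continue
                v = E[:, a] - Xp[:, i]
                if du + Cost[y[i], a] > v @ Q @ v:           # violated margin (13)
                    G += (1 - mu) * (np.outer(u, u) - np.outer(v, v))
        Q -= lr * G
        lam, V = np.linalg.eigh(Q)            # project back onto the PSD cone
        Q = (V * np.clip(lam, 0, None)) @ V.T
    return Q
```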
4 Results
We evaluated our algorithm taxem on several classification problems derived from categorizing publications in the public OHSUMED medical journal data base into the Medical Subject Headings
(MeSH) taxonomy.
Setup and data set description
We used the OHSUMED 87 corpus [9], which consists of abstracts and titles of medical publications. Each of these entries has been assigned one or more categories in the MeSH taxonomy³. We
used the 2001 version of these MeSH headings resulting in about 20k categories organized in a taxonomy. To preprocess the data we proceeded as follows: First, we discarded all database entries with
empty abstracts, which left us with 36890 documents. We tokenized (after stop word removal and
stemming) each abstract, and represented the corresponding bag of words as its d = 60727 dimensional tfidf scores (normalized to unit length). We removed all topic categories that did not appear
in the MeSH taxonomy (due to out-dated topic names). We further removed all subtrees of nodes that were populated with one or fewer documents. The top categories in the OHSUMED data base are "orthogonal": for instance the B top level category is about organisms while C is about diseases.
We thus created 8 independent classification problems out of the top categories A,B,C,D,E,F,G,H.
For each problem, we kept only the abstracts that were assigned exactly one category in that tree,
making each problem single-label. The statistics of the different problems are summarized in Table 1. For each problem, we created a 70%/30% random split in training and test samples, ensuring
however that each topic had at least one document in the training corpus.
Document Categorization
The classification results on the OHSUMED data set are summarized in Table 2. We set the regularization constants to be $\lambda = 1$ for the regression and $\mu = 0.1$ for the SDP. Preliminary experiments on data set B showed that regularization was important but the exact settings of $\lambda$ and $\mu$ had no crucial impact. We derived the cost-matrix $\mathbf{C}$ from the tree hop-distance in all experiments.
² The solver described in [16] utilizes the fact that many constraints are inactive and that the sub-gradient can be efficiently derived from previous gradient steps.
³ See http://en.wikipedia.org/wiki/Medical_Subject_Headings
data    SVM 1/all   MCSVM   SVM cost   SVM tax   P_I-taxem   P_mds-taxem   LM-taxem
A       2.17        2.13    2.11       1.96      2.11        2.33          1.95
B       1.50        1.38    1.64       1.52      1.57        1.99          1.39
C       2.41        2.32    2.25       2.25      2.30        2.61          2.16
D       3.10        2.76    2.92       2.82      2.82        3.05          2.66
E       3.44        3.42    3.26       3.25      3.45        3.05          3.05
F       2.59        2.65    2.66       2.69      2.63        2.77          2.51
G       3.98        4.12    3.89       3.82      4.10        3.63          3.59
H       2.42        2.48    2.40       2.32      2.45        2.24          2.17
all     2.78        2.77    2.77       2.65      2.79        2.73          2.50

Table 2: The cost-sensitive test error results on various OHSUMED classification data sets. The algorithms are from left to right: one vs. all SVM, MCSVM [6], cost-sensitive MCSVM, hierarchical SVM [4], simplex regression, mds regression, large-margin taxem. The best results (up to statistical significance) are highlighted in bold. The taxem algorithm obtains the lowest overall loss and the lowest individual loss on each data set except B.
We compared taxem against four commonly used algorithms for document categorization: 1. a linear support vector machine (SVM) trained in one-vs-all mode (SVM 1/all) [12]; 2. the Crammer and Singer multi-class SVM formulation (MCSVM) [6]; 3. the Cai and Hofmann SVM classifier with a cost-sensitive loss function (SVM cost) [4]; 4. the Cai and Hofmann SVM formulation with a cost-sensitive hierarchical loss function (SVM tax) [4]. All SVM classifiers were trained with regularization constant C = 1 (which worked best on problem B; this value is also commonly used in text classification when the documents have unit length). Further, we also evaluated the difference between our large margin formulation (taxem) and the results with the simplex ($\mathbf{P}_I$-taxem) and mds ($\mathbf{P}_{mds}$-taxem) prototypes. To check the significance of our results we applied a standard t-test with a 5% confidence interval. The best results up to statistical significance are highlighted in bold font. The final entry in Table 2 shows the average error over all test points in all data sets. Up to statistical significance, taxem obtains the lowest loss on all data sets and the lowest overall loss. Ignoring statistical significance, taxem has the lowest loss on all data sets except B. All algorithms had comparable speed during test-time. The computation time required for solving eq. (6) and the optimization (14) was on the order of several minutes with our MATLAB implementation on a standard Intel 1.8GHz Core 2 Duo processor (without parallelization efforts).
5 Related Work
In recent years, several algorithms for document categorization have been proposed. Several authors
proposed adaptations of support vector machines that incorporate the topic taxonomy through costsensitive loss re-weighting and classification at multiple nodes in the hierarchy [4, 8, 11]. Our
algorithm is based on a very different intuition. It differs from all these methods in that it learns
a low dimensional semantic representation of the documents and classifies by finding the nearest
prototype.
Most related to our work is probably the work by Karypis and Han [10]. Although their algorithm
also reduces the dimensionality with a linear projection, their low dimensional space is obtained
through supervised clustering on the document data. In contrast, the semantic space obtained with
taxem is obtained through a convex optimization with maximum margin constraints. Further, the
low dimensional representation of our method is explicitly constructed to give rise to meaningful
Euclidean distances.
The optimization with large-margin constraints was partially inspired by recent work on large margin
distance metric learning for nearest neighbor classification [16]. However our formulation is a much
more light-weight optimization problem with O(cn) constraints instead of O(n2 ) as in [16]. The
optimization problem in section 3 is also related to recent work on automated speech recognition
through discriminative training of Gaussian mixture models [14].
7
6
Conclusion
In this paper, we have presented a novel framework for classification with inter-class relationships
based on taxonomy embedding and supervised dimensionality reduction. We derived a single convex
optimization problem that learns an embedding of the topic taxonomy as well as a linear mapping
from the document space to the resulting low dimensional semantic space.
As future work we are planning to extend our algorithm to the more general setting of document
categorization with multiple topic memberships and multi-modal topic distributions. Further, we
are keen to explore the implications of our proposed conversion of discrete topic taxonomies into
continuous semantic spaces. This framework opens new interesting directions of research that go
beyond mere classification. A natural step is to consider the document matching problem (e.g.
of web pages and advertisements) in the semantic space: a fast nearest neighbor search can be
performed in a joint low dimensional space without having to resort to classification all together.
Although this paper is presented in the context of document categorization, it is important to emphasize that our method is by no means limited to text data or class hierarchies. In fact, the proposed
algorithm can be applied in almost all multi-class settings with cost-sensitive loss functions (e.g.
object recognition in computer vision).
References
[1] D. Blei, A. Ng, M. Jordan, and J. Lafferty. Latent Dirichlet Allocation. Journal of Machine Learning
Research, 3(4-5):993?1022, 2003.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[3] A. Budanitsky and G. Hirst. Semantic distance in wordnet: An experimental, application-oriented evaluation of five measures. In Workshop on WordNet and Other Lexical Resources, in the North American
Chapter of the Association for Co mputational Linguistics (NAACL), 2001.
[4] L. Cai and T. Hofmann. Hierarchical document categorization with support vector machines. In ACM
13th Conference on Information and Knowledge Management, 2004.
[5] T. Cox and M. Cox. Multidimensional Scaling. Chapman & Hall, London, 1994.
[6] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265?292, 2001.
[7] S. Deerwester, S. Dumais, G. Furnas, T. Landauer, and R. Harshman. Indexing by latent semantic analysis.
Journal of the American Society for Information Science, 41(6):391?407, 1990.
[8] S. Dumais and H. Chen. Hierarchical classification of Web content. In Proceedings of SIGIR-00, 23rd
ACM International Conference on Research and Development in Information Retrieval, pages 256?263.
ACM Press, New York, US, 2000.
[9] W. Hersh, C. Buckley, T. J. Leone, and D. Hickam. OHSUMED: an interactive retrieval evaluation and
new large test collection for research. In SIGIR ?94: Proceedings of the 17th annual international ACM
conference on Research and development in information retrieval, pages 192?201. Springer-Verlag New
York, Inc., 1994.
[10] G. Karypis, E. Hong, and S. Han. Concept indexing a fast dimensionality reduction algorithm with applications to document retrieval & categorization, 2000. Technical Report: 00-016 karypis, [email protected]
Last updated on.
[11] T.-Y. Liu, Y. Yang, H. Wan, H.-J. Zeng, Z. Chen, and W.-Y. Ma. Support vector machines classification
with a very large-scale taxonomy. SIGKDD Explorations Newsletter, 7(1):36?43, 2005.
[12] R. Rifkin and A. Klautau. In Defense of One-Vs-All Classification. The Journal of Machine Learning
Research, 5:101?141, 2004.
[13] F. Sebastiani. Machine learning in automated text categorization. ACM Computing Surveys, 34(1):1?47,
2002.
[14] F. Sha and L. K. Saul. Large margin hidden markov models for automatic speech recognition. In Advances
in Neural Information Processing Systems 19, Cambridge, MA, 2007. MIT Press.
[15] A. Weigend, E. Wiener, and J. Pedersen. Exploiting Hierarchy in Text Categorization. Information
Retrieval, 1(3):193?216, 1999.
[16] K. Q. Weinberger and L. K. Saul. Fast solvers and efficient implementations for distance metric learning.
pages 1160?1167, 2008.
Multi-Level Active Prediction of Useful Image
Annotations for Recognition
Sudheendra Vijayanarasimhan and Kristen Grauman
Department of Computer Sciences
University of Texas at Austin
{svnaras,grauman}@cs.utexas.edu
Abstract
We introduce a framework for actively learning visual categories from a mixture of
weakly and strongly labeled image examples. We propose to allow the category-learner to strategically choose what annotations it receives, based on both the
expected reduction in uncertainty as well as the relative costs of obtaining each
annotation. We construct a multiple-instance discriminative classifier based on the
initial training data. Then all remaining unlabeled and weakly labeled examples
are surveyed to actively determine which annotation ought to be requested next.
After each request, the current classifier is incrementally updated. Unlike previous
work, our approach accounts for the fact that the optimal use of manual annotation
may call for a combination of labels at multiple levels of granularity (e.g., a full
segmentation on some images and a present/absent flag on others). As a result, it
is possible to learn more accurate category models with a lower total expenditure
of manual annotation effort.
1 Introduction
Visual category recognition is a vital thread in computer vision research. The recognition problem
remains challenging because of the wide variation in appearance a single class typically exhibits, as
well as differences in viewpoint, illumination, and clutter. Methods are usually most reliable when
good training sets are available, i.e., when labeled image examples are provided for each class, and
where those training examples are adequately representative of the distribution to be encountered at
test time. The extent of an image labeling can range from a flag telling whether the object of interest
is present or absent, to a full segmentation specifying the object boundary. In practice, accuracy
often improves with larger quantities of training examples and/or more elaborate annotations.
Unfortunately, substantial human effort is required to gather such training sets, making it unclear
how the traditional protocol for visual category learning can truly scale. Recent work has begun to
explore ways to mitigate the burden of supervision [1–8]. While the results are encouraging, existing techniques fail to address two key insights about low-supervision recognition: 1) the division
of labor between the machine learner and the human labelers ought to respect any cues regarding
which annotations would be easy (or hard) for either party to provide, and 2) to use a fixed amount
of manual effort most effectively may call for a combination of annotations at multiple levels (e.g.,
a full segmentation on some images and a present/absent flag on others). Humans ought to be responsible for answering the hardest questions, while pattern recognition techniques ought to absorb
and propagate that information and answer the easier ones. Meanwhile, the learning algorithm must
be able to accommodate the multiple levels of granularity that may occur in provided image annotations, and to compute which item at which of those levels appears to be most fruitful to have labeled
next (see Figure 1).
[Figure 1: image panels omitted; in each panel, annotation granularity ranges from "Coarser labels, less expensive" to "Finer labels, more expensive".]
Fig. 1. Useful image annotations can occur at multiple levels of granularity. Left: For example, a learner may
only know whether the image contains a particular object or not (top row, dotted boxes denote object is present),
or it may also have segmented foregrounds (middle row), or it may have detailed outlines of object parts (bottom
row). Right: In another scenario, groups of images for a given class are collected with keyword-based Web
search. The learner may only be given the noisy groups and told that each includes at least one instance of the
specified class (top), or, for some groups, the individual example images may be labeled as positive or negative
(bottom). We propose an active learning paradigm that directs manual annotation effort to the most informative
examples and levels.
To address this challenge, we propose a method that actively targets the learner?s requests for supervision so as to maximize the expected benefit to the category models. Our method constructs an
initial classifier from limited labeled data, and then considers all remaining unlabeled and weakly
labeled examples to determine what annotation seems most informative to obtain. Since the varying
levels of annotation demand varying degrees of manual effort, our active selection process weighs
the value of the information gain against the cost of actually obtaining any given annotation. After
each request, the current classifier is incrementally updated, and the process repeats.
Our approach accounts for the fact that image annotations can exist at multiple levels of granularity:
both the classifier and active selection objectives are formulated to accommodate dual-layer labels.
To achieve this duality for the classifier, we express the problem in the multiple instance learning
(MIL) setting [9], where training examples are specified as bags of the finer granularity instances,
and positive bags may contain an arbitrary number of negatives. To achieve the duality for the active
selection, we design a decision-theoretic criterion that balances the variable costs associated with
each type of annotation with the expected gain in information. Essentially this allows the learner to
automatically predict when the extra effort of a more precise annotation is warranted.
The main contribution of this work is a unified framework to actively learn categories from a mixture
of weakly and strongly labeled examples. We are the first to identify and address the problem of
active visual category learning with multi-level annotations. In our experiments we demonstrate
two applications of the framework for visual learning (as highlighted in Figure 1). Not only does our
active strategy learn more quickly than a random selection baseline, but for a fixed amount of manual
resources, it yields more accurate models than conventional single-layer active selection strategies.
2 Related Work
The recognition community is well-aware of the expense of requiring well-annotated image datasets.
Recent methods have shown the possibility of learning visual patterns from unlabeled [3, 2] image
collections, while other techniques aim to share or re-use knowledge across categories [10, 4]. Several authors have successfully leveraged the free but noisy images on the Web [5, 6, 11]. Using
weakly labeled images to learn categories was proposed in [1], and several researchers have shown
that MIL can accommodate the weak or noisy supervision often available for image data [11–14].
Working in the other direction, some research seeks to facilitate the manual labor of image annotation, tempting users with games or nice datasets [7, 8].
However, when faced with a distribution of unlabeled images, almost all existing methods for visual category learning are essentially passive, selecting points at random to label. Active learning
strategies introduced in the machine learning literature generally select points so as to minimize the
model entropy or reduce classification error (e.g., [15, 16]). Decision-theoretic measures for traditional (single-instance) learning have been explored in [17, 18], where they were applied to classify
synthetic data and voicemail. Our active selection procedure is in part inspired by this work, as it
also seeks to balance the cost and utility tradeoff. Recent work has considered active learning with
Gaussian Process classifiers [19], and relevance feedback for video annotations [20].
In contrast, we show how to form active multiple-instance learners, where constraints or labels must
be sought at multiple levels of granularity. Further, we introduce the notion of predicting when to
"invest" the labor of more expensive image annotations so as to ultimately yield bigger benefits to
the classifier. Unlike any previous work, our method continually guides the annotation process to
the appropriate level of supervision. While an active criterion for instance-level queries is suggested
in [21] and applied within an MI learner, it cannot actively select positive bags or unlabeled bags,
and does not consider the cost of obtaining the labels requested. In contrast, we formulate a general selection function that handles the full MIL paradigm and adapts according to the label costs.
Experiments show this functionality to be critical for efficient learning from few images.
3 Approach
The goal of this work is to learn to recognize an object or category with minimal human intervention.
The key idea is to actively determine which annotations a user should be asked to provide, and in
what order. We consider image collections consisting of a variety of supervisory information: some
images are labeled as containing the category of interest (or not), some have both a class label
and a foreground segmentation, while others have no annotations at all. We derive an active learning
criterion function that predicts how informative further annotation on any particular unlabeled image
or region would be, while accounting for the variable expense associated with different annotation
types. As long as the information expected from further annotations outweighs the cost of obtaining
them, our algorithm will request the next valuable label, re-train the classifier, and repeat.
In the following we outline the MIL paradigm and discuss its applicability for two important image
classification scenarios. Then, we describe our decision-theoretic approach to actively request useful
annotations. Finally, we discuss how to attribute costs and risks for multi-level annotations.
3.1 Multiple-Instance Visual Category Learning
Traditional binary supervised classification assumes the learner is provided a collection of labeled
data patterns, and must learn a function to predict labels on new instances. However, the fact that
image annotations can exist at multiple levels of granularity demands a learning algorithm that can
encode any known labels at the levels they occur, and so MIL [9] is more applicable. In MIL, the
learner is instead provided with sets (bags) of patterns rather than individual patterns, and is only told
that at least one member of any positive bag is truly positive, while every member of any negative
bag is guaranteed to be negative. The goal of MIL is to induce the function that will accurately label
individual instances such as the ones within the training bags.
MIL is well-suited for the following two image classification scenarios:
• Training images are labeled as to whether they contain the category of interest, but they also contain other objects and background clutter. Every image is represented by a bag of regions, each of which is characterized by its color, texture, shape, etc. [12, 13]. For positive bags, at least one of the regions contains the object of interest. The goal is to predict when new image regions contain the object, that is, to learn to label regions as foreground or background.
• The keyword associated with a category is used to download groups of images from multiple search engines in multiple languages. Each downloaded group is a bag, and the images within it are instances [11]. For each positive bag, at least one image actually contains the object of interest, while many others may be irrelevant. The goal is to predict the presence or absence of the category in new images.
In both cases, an instance-level decision is desirable, but bag-level labels are easier to obtain. While
it has been established that MIL is valuable in such cases, previous methods do not consider how to
determine what labels would be most beneficial to obtain.
We integrate our active selection method with the SVM-based MIL approach given in [22], which
uses a Normalized Set Kernel (NSK) to describe bags based on the average representation of instances within them. Following [23], we use the NSK mapping for positive bags only; all instances
in a negative bag are treated individually as negative. We chose this classifier since it performs
well in practice [24] and allows incremental updates [25]; further, by virtue of being a kernel-based
algorithm, it gives us flexibility in our choices of features and kernels. However, alternative MIL
techniques that provide probabilistic outputs could easily be swapped in (e.g. [26, 24, 23]).
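As a rough illustration of this setup, the sketch below is our own, assuming scikit-learn is available; the NSK reduces here to averaging the instance descriptors within each positive bag, while negative-bag instances are used individually:

```python
import numpy as np
from sklearn.svm import SVC

def nsk_features(bag):
    """bag: m x d array of instance descriptors -> one averaged descriptor."""
    return bag.mean(axis=0)

def train_mil_classifier(pos_bags, neg_instances):
    X = [nsk_features(B) for B in pos_bags] + list(neg_instances)
    y = [1] * len(pos_bags) + [-1] * len(neg_instances)
    clf = SVC(kernel='rbf', probability=True)   # Platt scaling yields p(x) [27]
    clf.fit(np.vstack(X), np.array(y))
    return clf
```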
3.2 Multi-Level Active Selection of Image Annotations
Given the current MIL classifier, our objective is to select what annotation should be requested next.
Whereas active selection criteria for traditional supervised classifiers need only identify the best
instance to label next, in the MIL domain we have a more complex choice. There are three possible
types of request: the system can ask for a label on an instance, a label on an unlabeled bag, or for
a joint labeling of all instances within a positive bag. So, we must design a selection criterion that
simultaneously determines which type of annotation to request, and for which example to request
it. Adding to the challenge, the selection process must also account for the variable costs associated
with each level of annotation (e.g., it will take the annotator less time to detect whether the class of
interest is present or not, while a full segmentation will be more expensive).
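For illustration, an annotation cost function might look like the following sketch; the request types mirror the three levels above, but the numeric values and the `kind`/`num_instances` attributes are purely hypothetical placeholders for measured (e.g., timed) manual effort:

```python
def annotation_cost(z):
    """Hypothetical C(z): cost of obtaining the requested annotation."""
    if z.kind == 'instance':        # label one region / one image
        return 1.0
    if z.kind == 'unlabeled_bag':   # flag whether the class is present at all
        return 1.5
    if z.kind == 'positive_bag':    # label every instance in a positive bag
        return 1.0 * z.num_instances
    raise ValueError(z.kind)
```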
We extend the value of information (VOI) strategy proposed in [18] to enable active MIL selection,
and derive a generalized value function that can accept both instances and bags. This allows us to
predict the information gain in a joint labeling of multiple instances at once, and thereby actively
choose when it is worthwhile to expend more or less manual effort in the training process. Our
method continually re-evaluates the expected significance of knowing more about any unlabeled or
partially labeled example, as quantified by the predicted reduction in misclassification risk plus the
cost of obtaining the label.
We consider a collection of unlabeled data $X_U$, and labeled data $X_L$ composed of a set of positive bags $X_p$ and a set of negative instances $X_n$. Recall that positively labeled bags contain instances whose labels are unknown, since they contain an unknown mix of positive and negative instances. Let $r_p$ denote the user-specified risk associated with misclassifying a positive example as negative, and $r_n$ denote the risk of misclassifying a negative. The risk associated with the labeled data is:
$$\text{Risk}(X_L) = \sum_{X_i \in X_p} r_p\,(1 - p(X_i)) + \sum_{x_i \in X_n} r_n\,p(x_i), \tag{1}$$
where $x_i$ denotes an instance and $X_i$ denotes a bag. Here $p(x)$ denotes the probability that a given input is classified as positive: $p(x) = \Pr(\operatorname{sgn}(\mathbf{w}\phi(x) + b) = +1 \mid x)$ for the SVM hyperplane parameters $\mathbf{w}$ and $b$. We compute these values using the mapping suggested in [27], which essentially fits a sigmoid to map the SVM outputs to posterior probabilities. Note that here a positive bag $X_i$ is first transformed according to the NSK before computing its probability. The corresponding risk for unlabeled data is:
$$\text{Risk}(X_U) = \sum_{x_i \in X_U} r_p\,(1 - p(x_i))\Pr(y_i = +1 \mid x_i) + r_n\,p(x_i)\,(1 - \Pr(y_i = +1 \mid x_i)), \tag{2}$$
where $y_i$ is the true label for unlabeled example $x_i$. The value of $\Pr(y = +1 \mid x)$ is not directly computable for unlabeled data; following [18], we approximate it as $\Pr(y = +1 \mid x) \approx p(x)$. This simplifies the risk for the unlabeled data to $\text{Risk}(X_U) = \sum_{x_i \in X_U} (r_p + r_n)(1 - p(x_i))\,p(x_i)$, where again we transform unlabeled bags according to the NSK before computing the posterior.
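The empirical risk terms above translate directly into code; the following sketch is our own, assuming posterior estimates $p(\cdot)$ have already been computed (e.g., via Platt-scaled SVM outputs) for each bag and instance:

```python
import numpy as np

def risk_labeled(p_pos_bags, p_neg_instances, rp=1.0, rn=1.0):
    """Eq. (1): risk over positive bags and negative instances."""
    return rp * np.sum(1 - np.asarray(p_pos_bags)) + rn * np.sum(p_neg_instances)

def risk_unlabeled(p_unlabeled, rp=1.0, rn=1.0):
    """Simplified eq. (2) with Pr(y=+1|x) approximated by p(x)."""
    p = np.asarray(p_unlabeled)
    return np.sum((rp + rn) * (1 - p) * p)
```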
The total cost T (XL , XU ) associated with the data is the total misclassification risk, plus the cost of
obtaining all labeled data thus far:
$$T(X_L, X_U) = \mathrm{Risk}(X_L) + \mathrm{Risk}(X_U) + \sum_{X_i \in X_p} C(X_i) + \sum_{x_i \in X_n^-} C(x_i), \qquad (3)$$
where the function C(·) returns the cost of obtaining an annotation for its input, and will be defined
in more detail below.
To measure the expected utility of obtaining any particular new annotation, we want to predict
the change in total cost that would result from its addition to XL . Thus, the value of obtaining an
annotation for input z is:
$$\mathrm{VOI}(z) = T(X_L, X_U) - T\big(X_L \cup z^{(t)},\; X_U \setminus z\big) \qquad (4)$$
$$\phantom{\mathrm{VOI}(z)} = \mathrm{Risk}(X_L) + \mathrm{Risk}(X_U) - \big[\mathrm{Risk}\big(X_L \cup z^{(t)}\big) + \mathrm{Risk}\big(X_U \setminus z\big)\big] - C(z),$$
where z^{(t)} denotes that the input z has been merged into the labeled set with its true label t, and
X_U \ z denotes that it has been removed from the set of unlabeled data. If the VOI is high for a
given input, then the total cost would be decreased by adding its annotation; similarly, low values
indicate minor gains, and negative values indicate an annotation that costs more to obtain than it is
worth. Thus at each iteration, the active learner surveys all remaining unlabeled and weakly labeled
examples, computes their VOI, and requests the label for the example with the maximal value.
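A sketch of the resulting selection loop follows; `expected_risk_after` and `cost` are assumed callables standing in for the quantities derived in the rest of this section, not an interface defined by the paper.

```python
def select_next_annotation(candidates, current_risk, expected_risk_after, cost):
    # Eq. (4): VOI(z) = current total risk - expected risk after adding z - C(z).
    # `candidates` mixes unlabeled instances, unlabeled bags, and positive bags.
    best, best_voi = None, float("-inf")
    for z in candidates:
        voi = current_risk - expected_risk_after(z) - cost(z)
        if voi > best_voi:
            best, best_voi = z, voi
    return best, best_voi  # stop annotating once best_voi drops below zero
```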
However, there are two important remaining technical issues. First, for this to be useful we must
be able to estimate the empirical risk for inputs before their labels are known. Secondly, for active
selection to proceed at multiple levels, the VOI must act as an overloaded function: we need to be
able to evaluate the VOI when z is an unlabeled instance or an unlabeled bag or a weakly labeled
example, i.e., a positive bag containing an unknown number of negative instances.
To estimate the total risk induced by incorporating a newly annotated example z into X_L before actually obtaining its true label t, we estimate the updated risk term with its expected value:
Risk(X_L ∪ z^{(t)}) + Risk(X_U \ z) ≈ E[Risk(X_L ∪ z^{(t)}) + Risk(X_U \ z)] = E, where E is shorthand for the expected value expression preceding it. If z is an unlabeled instance, then computing
the expectation is straightforward:
$$E = \sum_{l \in L} \big[\mathrm{Risk}\big(X_L \cup z^{(l)}\big) + \mathrm{Risk}\big(X_U \setminus z\big)\big] \Pr\big(\mathrm{sgn}(w\,\phi(z) + b) = l \mid z\big), \qquad (5)$$
where L = {+1, -1} is the set of all possible label assignments for z. The value Pr(sgn(w φ(z) +
b) = l | z) is obtained by evaluating the current classifier on z and mapping the output to the associated posterior, and risk is computed based on the (temporarily) modified classifier with z^{(l)} inserted
into the labeled set. Similarly, if z is an unlabeled bag, the label assignment can only be positive or
negative, and we compute the probability of either label via the NSK mapping.
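For illustration, a minimal sketch of this expectation for a single unlabeled instance, assuming a `risk_with_label` callable that retrains the classifier with the hypothesized label and returns the updated total risk:

```python
def expected_risk_instance(z, p_pos, risk_with_label):
    # Eq. (5): expectation of the updated risk over the two possible labels
    # of unlabeled instance z. p_pos is Pr(+1 | z) from the current classifier;
    # risk_with_label(z, l) retrains with z labeled l and returns
    # Risk(X_L with z^(l) added) + Risk(X_U with z removed).
    prob = {+1: p_pos, -1: 1.0 - p_pos}
    return sum(risk_with_label(z, l) * prob[l] for l in (+1, -1))
```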
If z is a positive bag containing M = |z| instances, however, there are 2^M possible labelings: L =
{+1, -1}^M. For even moderately sized bags, this makes a direct computation of the expectation
impractical. Instead, we use Gibbs sampling to draw samples of the label assignment from the joint
distribution over the M instances' descriptors. Let z = {z_1, . . . , z_M} be the positive bag's instances,
and let z^{(a)} = {z_1^{(a_1)}, . . . , z_M^{(a_M)}} denote the label assignment we wish to sample, with a_j ∈
{+1, -1}. To sample from the conditional distribution of one instance's label given the rest (the
basic procedure required by Gibbs sampling) we re-train the MIL classifier with the given labels
added, and then draw the remaining label according to a_j ∼ Pr(sgn(w φ(z_j) + b) = +1 | z_j), where
z_j denotes the one instance currently under consideration. For positive bag z, the expected total risk
is then the average risk computed over all S generated samples:
$$E = \frac{1}{S} \sum_{k=1}^{S} \Big[ \mathrm{Risk}\big(\{X_L \setminus z\} \cup \{z_1^{(a_1^k)}, \ldots, z_M^{(a_M^k)}\}\big) + \mathrm{Risk}\big(X_U \setminus \{z_1, z_2, \ldots, z_M\}\big) \Big], \qquad (6)$$
where k indexes the S samples. To compute the risk on X_L for each fixed sample we simply remove the weakly labeled positive bag z, and insert its instances as labeled positives and negatives,
as dictated by the sample's label assignment. Computing the VOI values for all unlabeled data, especially for the positive bags, requires repeatedly solving the classifier objective function with slightly
different inputs; to make this manageable we employ incremental SVM updates [25].
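The following sketch shows one Gibbs sweep over a positive bag's labels as described above; `retrain_posterior` abstracts the (incrementally updated) MIL classifier and is an assumed interface, not the paper's code.

```python
import random

def gibbs_bag_labelings(instances, retrain_posterior, num_samples, burn_in=5):
    # Draw S joint label assignments for a positive bag, to estimate Eq. (6).
    # retrain_posterior(pairs) retrains the MIL classifier with the given
    # (instance, label) pairs added and returns a function z -> Pr(+1 | z).
    labels = {id(z): random.choice((+1, -1)) for z in instances}
    samples = []
    for sweep in range(burn_in + num_samples):
        for z in instances:
            rest = [(x, labels[id(x)]) for x in instances if x is not z]
            p_pos = retrain_posterior(rest)(z)  # conditional given the rest
            labels[id(z)] = +1 if random.random() < p_pos else -1
        if sweep >= burn_in:
            samples.append([labels[id(z)] for z in instances])
    return samples  # average the risk over these to estimate E in Eq. (6)
```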
To complete our active selection function, we must define the cost function C(z), which maps an
input to the amount of effort required to annotate it. This function is problem-dependent. In the
visual categorization scenarios we have set forth, we define the cost function in terms of the type of
annotation required for the input z; we charge equal cost to label an instance or an unlabeled bag,
and proportionally greater cost to label all instances in a positive bag, as determined empirically
with labeling experiments with human users. This reflects that outlining an object contour is more
expensive than naming an object, or sorting through an entire page of Web search returns is more
work than labeling just one.
We can now actively select which examples and what type of annotation to request, so as to maximize the expected benefit to the category model relative to the manual effort expended. After each
annotation is added and the classifier is revised accordingly, the VOI is evaluated on the remaining
unlabeled and weakly labeled data in order to choose the next annotation. This process repeats either until the available amount of manual resources is exhausted, or, alternatively, until the maximum
VOI is negative, indicating further annotations are not worth the effort.
4 Results
In this section we demonstrate our approach to actively learn visual categories. We test with two
distinct publicly available datasets that illustrate the two learning scenarios above: (1) the SIVAL
dataset^1 of 25 objects in cluttered backgrounds, and (2) a Google dataset ([5]) of seven categories
downloaded from the Web. In both, the classification task is to say whether each unseen image
contains the object of interest or not. We provide comparisons with single-level active learning (with
both the method of [21], and where the same VOI function is used but is restricted to actively label
only instances), as well as passive learning. For the passive baseline, we consider random selections
from amongst both single-level and multi-level annotations, in order to verify that our approach does
not simply benefit from having access to more informative possible labels.^2
To determine how much more labeling a positive bag costs relative to labeling an instance, we
performed user studies for both of the scenarios evaluated. For the first scenario, users were shown
oversegmented images and had to click on all the segments belonging to the object of interest. In the
second, users were shown a page of downloaded Web images and had to click on only those images
containing the object of interest. For both datasets, their baseline task was to provide a present/absent
flag on the images. For segmentation, obtaining labels on all positive segments took users on average
four times as much time as setting a flag. For the Web images, it took 6.3 times as long to identify
all positives within bags of 25 noisy images. Thus we set the cost of labeling a positive bag to 4 and
6.3 for the SIVAL and Google data, respectively. These values agree with the average sparsity of the
two datasets: the Google set contains about 30% true positive images while the SIVAL set contains
10% positive segments per image. The users who took part in the experiment were untrained but still
produced consistent results.
4.1 Actively Learning Visual Objects and their Foreground Regions from Cluttered Images
The SIVAL dataset [21] contains 1500 images, each labeled with one of 25 class labels. The cluttered images contain objects in a variety of positions, orientations, locations, and lighting conditions.
The images have been oversegmented into about 30 regions (instances) each, each of which is represented by a 30-d feature describing its color and texture. Thus each image is a bag containing both
positive and negative instances (segments). Labels on the training data specify whether the object of
interest is present or not, but the segments themselves are unlabeled (though the dataset does provide
ground truth segment labels for evaluation purposes).
The initial training set is comprised of 10 positive and 10 negative images per class, selected at
random. Our active learning method must choose its queries from among 10 positive bags (complete segmentations), 300 unlabeled instances (individual segments), and about 150 unlabeled bags
(present/absent flag on the image). We use a quadratic kernel with a coefficient of 10^-6, and average
results over five random training partitions.
Figure 2(a) shows representative (best and worst) learning curves for our method and the three
baselines, all of which use the same MIL classifier (NSK-SVM). Note that the curves are plotted
against the cumulative cost of obtaining labels (as opposed to the number of queried instances),
since our algorithm may choose a sequence of queries with non-uniform cost. All methods are given
a fixed amount of manual effort (40 cost units) and are allowed to make a sequence of choices until
that cost is used up. Recall that a cost of 40 could correspond, for example, to obtaining labels on
40/1 = 40 instances or 40/4 = 10 positive bags, or some mixture thereof. Figure 2(b) summarizes
the learning curves for all categories, in terms of the average improvement at a fixed point midway
through the active learning phase.
All four methods steadily improve upon the initial classifier, but at different rates with respect to the
cost. (All methods fail to do better than chance on the "dirty glove" class, which we attribute to the
lack of distinctive texture or color on that object.) In general, a steeper learning curve indicates that
a method is learning most effectively from the supplied labels. Our multi-level approach shows the
most significant gains at a lower cost, meaning that it is best suited for building accurate classifiers
with minimal manual effort on this dataset. As we would expect, single-level active selections are
better than random, but still fall short of our multi-level approach. This is because single-level active
selection can only make a sequence of greedy choices while our approach can jointly select bags of
instances to query. Interestingly, multi- and single-level random selections perform quite similarly
^1 http://www.cs.wustl.edu/accio/
^2 See [28] for further implementation details, image examples, and learning curves on all classes.
[Figure 2 plots omitted: per-class learning curves (area under ROC vs. cost) for the categories ajaxorange, apple, and dirtyworkgloves, each comparing multi-level active, single-level active, multi-level random, and single-level random selection, plus a boxplot of the improvement in AUROC at cost = 20 for all four methods. (a) Example learning curves per class; (b) Summary: all classes.]
Fig. 2. Results on the SIVAL dataset. (a) Sample learning curves per class, each averaged over five trials. First
two are best examples, last is worst. (b) Summary of the average improvement over all categories after half
of the annotation cost is used. For the same amount of annotation cost, our multi-level approach learns more
quickly than both traditional single-level active selection as well as both forms of random selection.
SIVAL dataset
                 Our Approach                              MI Logistic Regression [21]
Cost    Random    Multi-level Active   Gain over          Random    MIU Active   Gain over
                                       Random %                                  Random %
10      +0.0051   +0.0241              372                +0.023    +0.050       117
20      +0.0130   +0.0360              176                +0.033    +0.070       112
50      +0.0274   +0.0495              81                 +0.057    +0.087       52

[Figure 3, right plot omitted: cumulative number of labels acquired per type (unlabeled instances, unlabeled bags, positive bags with all instances labeled) over the query timeline.]
Fig. 3. Left: Comparison with [21] on the SIVAL data, as measured by the average improvement in the AUROC
over the initial model for increasing labeling cost values. Right: The cumulative number of labels acquired for
each type with increasing number of queries. Our method tends to request complete segmentations or image
labels early on, followed by queries on unlabeled segments later on.
on this dataset (see boxplots in (b)), which indicates that having more informative labels alone does
not directly lead to better classifiers unless the right instances are queried.
The table in Figure 3 compares our results to those reported in [21], in which the authors train an
initial classifier with multiple-instance logistic regression, and then use the MI Uncertainty (MIU) to
actively choose instances to label. Following [21], we report the average gains in the AUROC over
all categories at fixed points on the learning curve, averaging results over 20 trials and with the same
initial training set of 20 positive and negative images. Since the accuracy of the base classifiers used
by the two methods varies, it is difficult to directly compare the gains in the AUROC. The NSK-SVM we use consistently outperforms the logistic regression approach using only the initial training
set; even before active learning our average accuracy is 68.84, compared to 52.21 in [21]. Therefore, to aid in comparison, we also report the percentage gain relative to random selection, for both
classifiers. The results show that our approach yields much stronger relative improvements, again
illustrating the value of allowing active choices at multiple levels. For both methods, the percent
gains decrease with increasing cost; this makes sense, since eventually (for enough manual effort) a
passive learner can begin to catch up to an active learner.
4.2 Actively Learning Visual Categories from Web Images
Next we evaluate the scenario where each positive bag is a collection of images, among which only
a portion are actually positive instances for the class of interest. Bags are formed from the Googledownloaded images provided in [5]. This set contains on average 600 examples for each of the seven
categories. Naturally, the number of true positives for each class are sparse: on average 30% contain
a "good" view of the class of interest, 20% are of "ok" quality (occlusions, noise, cartoons, etc.), and
50% are "junk". Previous methods have shown how to learn from noisy Web images, with results
rivaling state-of-the-art supervised techniques [11, 5, 6]. We show how to boost accuracy with these
types of learners while leveraging minimal manual annotation effort.
To re-use the publicly available dataset from [5], we randomly group Google images into bags of
size 25 to simulate multiple searches as in [11], yielding about 30 bags per category. We randomly
select 10 positive and 10 negative bags (from all other categories) to serve as the initial training data
for each class. The rest of the positive bags of a class are used to construct the test sets. All results
are averaged over five random partitions. We represent each image as a bag of "visual words", and
compare examples with a linear kernel. Our method makes active queries among 10 positive bags
(complete labels) and about 250 unlabeled instances (images). There are no unlabeled bags in this
scenario, since every downloaded batch is associated with a keyword.
[Figure 4 plots omitted: per-class learning curves (area under ROC vs. cost) for the categories cars rear, guitar, and motorbike, each comparing multi-level active, single-level active, multi-level random, and single-level random selection, plus a boxplot of the improvement in AUROC at cost = 20 for all four methods. (a) Example learning curves per class; (b) Summary: all classes.]
Fig. 4. Results on the Google dataset, in the same format as Figure 2. Our multi-level active approach outperforms both random selection strategies and traditional single-level active selection.
Figure 4 shows the learning curves and a summary of our active learner's performance. Our multilevel approach again shows more significant gains at a lower cost relative to all baselines, improving
accuracy with as few as ten labeled instances. On this dataset, random selection with multi-level
annotations actually outperforms random selection on single-level annotations (see the boxplots).
We attribute this to the distribution of bags/instances: on average more positive bags were randomly
chosen, and each addition led to a larger increase in the AUROC.
5 Conclusions and Future Work
Our approach addresses a new problem: how to actively choose not only which instance to label, but
also what type of image annotation to acquire in a cost-effective way. Our method is general enough
to accept other types of annotations or classifiers, as long as the cost and risk functions can be appropriately defined. Comparisons with passive learning methods and single-level active learning show
that our multi-level method is better-suited for building classifiers with minimal human intervention.
In future work, we will consider look-ahead scenarios with more far-sighted choices. We are also
pursuing ways to alleviate the VOI computation cost, which as implemented involves processing all
unlabeled data prior to making a decision. Finally, we hope to incorporate our approach within an
existing system with many real users, like Labelme [8].
References
[1] Weber, M., Welling, M., Perona, P.: Unsupervised Learning of Models for Recognition. In: ECCV. (2000)
[2] Sivic, J., Russell, B., Efros, A., Zisserman, A., Freeman, W.: Discovering Object Categories in Image Collections. In: ICCV. (2005)
[3] Quelhas, P., Monay, F., Odobez, J.M., Gatica-Perez, D., Tuytelaars, T., VanGool, L.: Modeling Scenes with Local Descriptors and Latent
Aspects. In: ICCV. (2005)
[4] Bart, E., Ullman, S.: Cross-Generalization: Learning Novel Classes from a Single Example by Feature Replacement. In: CVPR. (2005)
[5] Fergus, R., Fei-Fei, L., Perona, P., Zisserman, A.: Learning Object Categories from Google?s Image Search. In: ICCV. (2005)
[6] Li, L., Wang, G., Fei-Fei, L.: Optimol: Automatic Online Picture Collection via Incremental Model Learning. In: CVPR. (2007)
[7] von Ahn, L., Dabbish, L.: Labeling Images with a Computer Game. In: CHI. (2004)
[8] Russell, B., Torralba, A., Murphy, K., Freeman, W.: Labelme: a Database and Web-Based Tool for Image Annotation. TR, MIT (2005)
[9] Dietterich, T., Lathrop, R., Lozano-Perez, T.: Solving the Multiple Instance Problem with Axis-Parallel Rectangles. Artificial Intelligence
89 (1997) 31?71
[10] Murphy, K., Torralba, A., Freeman, W.: Using the Forest to See the Trees:a Graphical Model Relating Features, Objects and Scenes. In:
NIPS. (2003)
[11] Vijayanarasimhan, S., Grauman, K.: Keywords to Visual Categories: Multiple-Instance Learning for Weakly Supervised Object Categorization. In: CVPR. (2008)
[12] Maron, O., Ratan, A.: Multiple-Instance Learning for Natural Scene Classification. In: ICML. (1998)
[13] Yang, C., Lozano-Perez, T.: Image Database Retrieval with Multiple-Instance Learning Techniques. In: ICDE. (2000)
[14] Viola, P., Platt, J., Zhang, C.: Multiple Instance Boosting for Object Detection. In: NIPS. (2005)
[15] Freund, Y., Seung, H., Shamir, E., Tishby, N.: Selective Sampling Using the Query by Committee Algorithm. Machine Learning 28 (1997)
[16] Tong, S., Koller, D.: Support Vector Machine Active Learning with Applications to Text Classification. In: ICML. (2000)
[17] Lindenbaum, M., Markovitch, S., Rusakov, D.: Selective Sampling for Nearest Neighbor Classifiers. Machine Learning 54 (2004)
[18] Kapoor, A., Horvitz, E., Basu, S.: Selective Supervision: Guiding Supervised Learning with Decision-Theoretic Active Learning. In:
IJCAI. (2007)
[19] Kapoor, A., Grauman, K., Urtasun, R., Darrell, T.: Active Learning with Gaussian Processes for Object Categorization. In: ICCV. (2007)
[20] Yan, R., Yang, J., Hauptmann, A.: Automatically Labeling Video Data using Multi-Class Active Learning. In: ICCV. (2003)
[21] Settles, B., Craven, M., Ray, S.: Multiple-Instance Active Learning. In: NIPS. (2008)
[22] Gartner, T., Flach, P., Kowalczyk, A., Smola, A.: Multi-Instance Kernels. In: ICML. (2002)
[23] Bunescu, R., Mooney, R.: Multiple Instance Learning for Sparse Positive Bags. In: ICML. (2007)
[24] Ray, S., Craven, M.: Supervised v. Multiple Instance Learning: An Empirical Comparison. In: ICML. (2005)
[25] Cauwenberghs, G., Poggio, T.: Incremental and Decremental Support Vector Machine Learning. In: NIPS. (2000)
[26] Andrews, S., Tsochantaridis, I., Hofmann, T.: Support Vector Machines for Multiple-Instance Learning. In: NIPS. (2002)
[27] Platt, J.: Probabilistic Outputs for Support Vector Machines and Comparisons to Regularized Likelihood Methods. In: Advances in
Large Margin Classifiers. MIT Press (1999)
[28] Vijayanarasimhan, S., Grauman, K.: Multi-level Active Prediction of Useful Image Annotations for Recognition. Technical Report
UT-AI-TR-08-2, University of Texas at Austin (2008)
2,866 | 3,599 | DiscLDA: Discriminative Learning for
Dimensionality Reduction and Classification
Simon Lacoste-Julien
Computer Science Division
UC Berkeley
Berkeley, CA 94720
Fei Sha
Dept. of Computer Science
University of Southern California
Los Angeles, CA 90089
Michael I. Jordan
Dept. of EECS and Statistics
UC Berkeley
Berkeley, CA 94720
Abstract
Probabilistic topic models have become popular as methods for dimensionality
reduction in collections of text documents or images. These models are usually
treated as generative models and trained using maximum likelihood or Bayesian
methods. In this paper, we discuss an alternative: a discriminative framework in
which we assume that supervised side information is present, and in which we
wish to take that side information into account in finding a reduced dimensionality representation. Specifically, we present DiscLDA, a discriminative variation on
Latent Dirichlet Allocation (LDA) in which a class-dependent linear transformation is introduced on the topic mixture proportions. This parameter is estimated
by maximizing the conditional likelihood. By using the transformed topic mixture proportions as a new representation of documents, we obtain a supervised
dimensionality reduction algorithm that uncovers the latent structure in a document collection while preserving predictive power for the task of classification.
We compare the predictive power of the latent structure of DiscLDA with unsupervised LDA on the 20 Newsgroups document classification task and show how
our model can identify shared topics across classes as well as class-dependent
topics.
1 Introduction
Dimensionality reduction is a common and often necessary step in most machine learning applications and high-dimensional data analyses. There is a rich history and literature on the subject,
ranging from classical linear methods such as principal component analysis (PCA) and Fisher discriminant analysis (FDA) to a variety of nonlinear procedures such as kernelized versions of PCA
and FDA as well as manifold learning algorithms.
A recent trend in dimensionality reduction is to focus on probabilistic models. These models, which
include generative topological mapping, factor analysis, independent component analysis and probabilistic latent semantic analysis (pLSA), are generally specified in terms of an underlying independence assumption or low-rank assumption. The models are generally fit with maximum likelihood,
although Bayesian methods are sometimes used. In particular, Latent Dirichlet Allocation (LDA) is
a Bayesian model in the spirit of pLSA that models each data point (e.g., a document) as a collection of draws from a mixture model in which each mixture component is known as a topic [3]. The
mixing proportions across topics are document-specific, and the posterior distribution across these
mixing proportions provides a reduced representation of the document. This model has been used
successfully in a number of applied domains, including information retrieval, vision and bioinformatics [8, 1].
The dimensionality reduction methods that we have discussed thus far are entirely unsupervised.
Another branch of research, known as sufficient dimension reduction (SDR), aims at making use of
supervisory data in dimension reduction [4, 7]. For example, we may have class labels or regression
responses at our disposal. The goal of SDR is then to identify a subspace or other low-dimensional
object that retains as much information as possible about the supervisory signal. Having reduced dimensionality in this way, one may wish to subsequently build a classifier or regressor in the reduced
representation. But there are other goals for the dimension reduction as well, including visualization,
domain understanding, and domain transfer (i.e., predicting a different set of labels or responses).
In this paper, we aim to combine these two lines of research and consider a supervised form of LDA.
In particular, we wish to incorporate side information such as class labels into LDA, while retaining its favorable unsupervised dimensionality reduction abilities. The goal is to develop parameter
estimation procedures that yield LDA topics that characterize the corpus and maximally exploit the
predictive power of the side information.
As a parametric generative model, parameters in LDA are typically estimated with maximum likelihood estimation or Bayesian posterior inference. Such estimates are not necessarily optimal for
yielding representations for prediction and regression. In this paper, we use a discriminative learning criterion (conditional likelihood) to train a variant of the LDA model. Moreover, we augment
the LDA parameterization by introducing class-label-dependent auxiliary parameters that can be
tuned by the discriminative criterion. By retaining the original LDA parameters and introducing
these auxiliary parameters, we are able to retain the advantages of the likelihood-based training
procedure and provide additional freedom for tracking the side information.
The paper is organized as follows. In Section 2, we introduce the discriminatively trained LDA (DiscLDA) model and contrast it to other related variants of LDA models. In Section 3, we describe our
approach to parameter estimation for the DiscLDA model. In Section 4, we report empirical results
on applying DiscLDA to model text documents. Finally, in Section 5 we present our conclusions.
2 Model
We start by reviewing the LDA model [3] for topic modeling. We then describe our extension to
LDA that incorporates class-dependent auxiliary parameters. These parameters are to be estimated
based on supervised information provided in the training data set.
2.1 LDA
The LDA model is a generative process where each document in the text corpus is modeled as a set
of draws from a mixture distribution over a set of hidden topics. A topic is modeled as a probability
distribution over words. Let the vector wd be the bag-of-words representation of document d. The
generative process for this vector is illustrated in Fig. 1 and has three steps: 1) the document
is first associated with a K-dimensional topic mixing vector θ_d which is drawn from a Dirichlet
distribution, θ_d ∼ Dir(α); 2) each word w_dn in the document is then assigned to a single topic z_dn
drawn from the multinomial variable, z_dn ∼ Multi(θ_d); 3) finally, the word w_dn is drawn from a
V-dimensional multinomial variable, w_dn ∼ Multi(φ_{z_dn}), where V is the size of the vocabulary.
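As a sketch, the three-step generative process can be simulated directly; the dimensions and random seeds below are arbitrary choices for illustration.

```python
import numpy as np

def generate_lda_document(alpha, Phi, n_words, seed=0):
    # Simulate the three-step LDA generative process:
    # theta_d ~ Dir(alpha); z_dn ~ Multi(theta_d); w_dn ~ Multi(phi_{z_dn}).
    # Phi is a V x K matrix whose columns are topic-word distributions.
    rng = np.random.default_rng(seed)
    V, K = Phi.shape
    theta = rng.dirichlet(alpha)                # topic proportions, length K
    z = rng.choice(K, size=n_words, p=theta)    # one topic index per word
    words = [rng.choice(V, p=Phi[:, k]) for k in z]
    return words, theta

# Toy usage: K = 3 topics over a V = 5 word vocabulary.
Phi = np.random.default_rng(1).dirichlet(np.ones(5), size=3).T
words, theta = generate_lda_document(alpha=np.ones(3), Phi=Phi, n_words=10)
```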
Given a set of documents, {w_d}_{d=1}^D, the principal task is to estimate the parameters {φ_k}_{k=1}^K. This
can be done by maximum likelihood, Φ* = arg max_Φ p({w_d}; Φ), where Φ ∈ Δ^{V×K} is a matrix
parameter whose columns {φ_k}_{k=1}^K are constrained to be members of a probability simplex. It is
also possible to place a prior probability distribution on the word probability vectors {φ_k}_{k=1}^K, e.g.,
a Dirichlet prior, φ_k ∼ Dir(β), and treat the parameter Φ as well as the hyperparameters α and β
via Bayesian methods. In both the maximum likelihood and Bayesian framework it is necessary to
integrate over θ_d to obtain the marginal likelihood, and this is accomplished either using variational
inference or Gibbs sampling [3, 8].
2.2 DiscLDA
In our setting, each document is additionally associated with a categorical variable or class label y_d ∈ {1, 2, . . . , C} (encoding, for example, whether a message was posted in the newsgroup
alt.atheism vs. talk.religion.misc). To model this labeling information, we introduce
a simple extension to the standard LDA model. Specifically, for each class label y, we introduce a
linear transformation T^y : Δ^K → Δ^L, which transforms a K-dimensional Dirichlet variable θ_d to
a mixture of Dirichlet distributions: T^y θ_d ∈ Δ^L. To generate a word w_dn, we draw its topic z_dn
from T^{y_d} θ_d. Note that T^y is constrained to have its columns sum to one to ensure the normalization
of the transformed variable T^y θ_d and is thus a stochastic matrix.

[Figure 1: LDA model. Figure 2: DiscLDA. Figure 3: DiscLDA with auxiliary variable u. (Graphical models omitted.)]

Intuitively, every document in the
text corpus is represented through θ_d as a point in the topic simplex {θ | Σ_k θ_k = 1}, and we hope
that the linear transformation {T y } will be able to reposition these points such that documents with
the same class labels are represented by points nearby to each other. Note that these points can not
be placed arbitrarily, as all documents (whether they have the same class labels or they do not)
share the parameter Φ ∈ Δ^{V×L}. The graphical model in Figure 2 shows the new generative process.
Compared to standard LDA, we have added the nodes for the variable y_d (and its prior distribution
π), the transformation matrices T^y and the corresponding edges.
An alternative to DiscLDA would be a model in which there are class-dependent topic parameters
φ^y_k which determine the conditional distribution of the words:
$$w_{dn} \mid z_{dn}, y_d, \Phi \sim \mathrm{Multi}\big(\phi^{y_d}_{z_{dn}}\big).$$
The problem with this approach is that the posterior p(y | w, Φ) is a highly non-convex function of Φ,
which makes its optimization very challenging given the high dimensionality of the parameter space
in typical applications. Our approach circumvents this difficulty by learning a low-dimensional
transformation of the φ_k's in a discriminative manner instead. Indeed, transforming the topic mixture vector θ is actually equivalent to transforming the Φ matrix. To see this, note that by marginalizing out the hidden topic vector z, we get the following distribution for the word w_dn given θ:
$$w_{dn} \mid y_d, \theta_d, T \sim \mathrm{Mult}\big(\Phi\, T^y \theta_d\big).$$
By the associativity of the matrix product, we see that we obtain an equivalent probabilistic model
by applying the linear transformation to Φ instead, and, in effect, defining the class-dependent topic
parameters as follows:
$$\phi^y_k = \sum_l \phi_l\, T^y_{lk}.$$
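This associativity is easy to verify numerically; the dimensions below are arbitrary and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
V, L, K = 6, 4, 3                              # vocabulary, u-topics, z-topics
Phi = rng.dirichlet(np.ones(V), size=L).T      # V x L word-topic matrix
T_y = rng.dirichlet(np.ones(L), size=K).T      # L x K, columns sum to one
theta = rng.dirichlet(np.ones(K))              # document topic vector

# Transforming theta first or Phi first yields the same word distribution.
assert np.allclose(Phi @ (T_y @ theta), (Phi @ T_y) @ theta)
```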
Another motivation for our approach is that it gives the model the ability to distinguish topics which
are shared across different classes versus topics which are class-specific. For example, this separation can be accomplished by using the following transformations (for binary classification):
$$T^1 = \begin{pmatrix} I_K & 0 \\ 0 & 0 \\ 0 & I_K \end{pmatrix}, \qquad T^2 = \begin{pmatrix} 0 & 0 \\ I_K & 0 \\ 0 & I_K \end{pmatrix} \qquad (1)$$
where I_K stands for the identity matrix with K rows and columns. In this case, the last K topics
are shared by both classes, whereas the two first groups of K topics are exclusive to one class or the
other. We will explore this parametric structure later in our experiments.
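A small sketch, assuming numpy, that builds these two block matrices and checks that their columns are normalized:

```python
import numpy as np

def binary_shared_transforms(K):
    # Build the two matrices of Eq. (1): each class gets K private topics
    # and both share the last K topics, so each T^y is 3K x 2K.
    I, Z = np.eye(K), np.zeros((K, K))
    T1 = np.block([[I, Z], [Z, Z], [Z, I]])
    T2 = np.block([[Z, Z], [I, Z], [Z, I]])
    return T1, T2

T1, T2 = binary_shared_transforms(K=2)
assert np.allclose(T1.sum(axis=0), 1) and np.allclose(T2.sum(axis=0), 1)
```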
Note that we can give a generative interpretation to the transformation by augmenting the model
with a hidden topic vector variable u, as shown in Fig. 3, where
$$p(u = k \mid z = l, T, y) = T^y_{kl}.$$
In this augmented model T can be interpreted as the probability transition matrix from z-topics to
u-topics.
By including a Dirichlet prior on the T parameters, the DiscLDA model can be related to the author-topic model [10], if we restrict to the special case in which there is only one author per document.
In the author-topic model, the bag-of-words representation of a document is augmented by a list
of the authors of the document. To generate a word in a document, one first picks at random the
author associated with this document. Given the author (y in our notation), a topic is chosen according to corpus-wide author-specific topic-mixture proportions (which is a column vector T^y in our
notation). The word is then generated from the corresponding topic distribution as usual. According to this analogy, we see that our model not only enables us to predict the author of a document
(assuming a small set of possible authors), but we also capture the content of documents (using Φ)
as well as the corpus-wide class properties (using T). The focus of the author-topic model was to
model the interests of authors, not the content of documents, explaining why there was no need to
add document-specific topic-mixture proportions. Because we want to predict the class for a specific
document, it is crucial that we also model the content of a document.
Recently, there has been growing interest in topic modeling with supervised information. Blei and
McAuliffe [2] proposed a supervised LDA model where the empirical topic vector z (sampled from
θ) is used as a covariate for a regression on y (see also [6]). Mimno and McCallum [9] proposed a
Dirichlet-multinomial regression which can handle various types of side information, including the
case in which this side information is an indicator variable of the class (y).^1 Our work differs from
theirs, however, in that we train the transformation parameter by maximum conditional likelihood
instead of a generative criterion.
3 Inference and learning
Given a corpus of documents and their labels, we estimate the parameters {T^y} by maximizing
the conditional likelihood Σ_d log p(y_d | w_d; {T^y}, Φ) while holding Φ fixed. To estimate the parameters Φ, we hold the transformation matrices fixed and maximize the posterior of the model, in
much the same way as in standard LDA models. Intuitively, the two different training objectives
have two effects on the model: the optimization of the posterior with respect to Φ captures the topic
structure that is shared in documents throughout a corpus, while the optimization of the conditional
likelihood with respect to {T^y} finds a transformation of the topics that discriminates between the
different classes within the corpus.
We use the Rao-Blackwellized version of Gibbs sampling presented in [8] to obtain samples of z
and u with θ and Φ marginalized out. Those samples can be used to estimate the likelihood of
p(w|y, T ), and thus the posterior p(y|w, T ) for prediction, by using the harmonic mean estimator [8]. Even though this estimator can be unstable in general model selection problems, we found
that it gave reasonably stable estimates for our purposes.
We maximize the conditional likelihood objective with respect to T by using gradient ascent, for a
fixed Φ. The gradient can be estimated by Monte Carlo EM, with samples from the Gibbs sampler.
More specifically, we use the matching property of gradients in EM to write the gradient as:
$$\frac{\partial}{\partial T} \log p(y \mid w, T, \Phi) = \mathbb{E}_{q_t^y(z)}\!\left[\frac{\partial}{\partial T} \log p(w, z \mid y, T, \Phi)\right] - \mathbb{E}_{r_t(z)}\!\left[\frac{\partial}{\partial T} \log p(w, z \mid T, \Phi)\right], \qquad (2)$$
where q_t^y(z) = p(z | w, y, T^t, Φ), r_t(z) = p(z | w, T^t, Φ), and the derivatives are evaluated at T =
T^t. We can approximate those expectations using the relevant Gibbs samples. After a few gradient
updates, we refit Φ by its MAP estimate from Gibbs samples.
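A schematic of one such update is sketched below; the paper only states that the gradient step is taken in the log domain so that T remains normalized, so the exact exponentiated-gradient form here is an assumption, as are the function and argument names.

```python
import numpy as np

def gradient_step_T(T, grads_q, grads_r, step):
    # One Monte Carlo EM gradient-ascent update on T, per Eq. (2).
    # grads_q / grads_r: arrays of per-sample gradients of
    # log p(w, z | y, T, Phi) and log p(w, z | T, Phi) with respect to T,
    # computed from Gibbs samples of z drawn from q_t^y (label clamped)
    # and r_t (label marginalized), respectively.
    grad = grads_q.mean(axis=0) - grads_r.mean(axis=0)
    T_new = T * np.exp(step * grad)  # multiplicative (log-domain) step
    return T_new / T_new.sum(axis=0, keepdims=True)  # keep columns normalized
```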
3.1 Dimensionality reduction
We can obtain a supervised dimensionality reduction method by using the average transformed topic
vector as the reduced representation of a test document. We estimate it using
$$\mathbb{E}[T^y \theta \mid \Phi, w, T] = \sum_y p(y \mid \Phi, w, T)\, \mathbb{E}[T^y \theta \mid y, \Phi, w, T].$$
The first term on the right-hand side of this equation can be estimated using the harmonic mean estimator and the second term can be approximated from
MCMC samples of z. This new representation can be used as a feature vector for another classifier
or for visualization purposes.

^1 In this case, their model is actually the same as Model 1 in [5] with an additional prior on the class-dependent parameters for the Dirichlet distribution on the topics.

[Figure 4: t-SNE 2D embedding of the E[T^y θ | Φ, w, T] representation of Newsgroups documents, after fitting to the DiscLDA model (T was fixed). Figure 5: t-SNE 2D embedding of the E[θ | Φ, w, T] representation of Newsgroups documents, after fitting to the standard unsupervised LDA model.]
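As a sketch, the reduced representation is then a posterior-weighted sum; `class_posterior` and `theta_mean_per_class` stand in for the harmonic-mean and Gibbs estimates and are assumed inputs, not the paper's interface.

```python
import numpy as np

def reduced_representation(T_list, class_posterior, theta_mean_per_class):
    # E[T^y theta | Phi, w, T] = sum_y p(y | w) E[T^y theta | y, w]:
    # class_posterior[y] approximates p(y | Phi, w, T) (harmonic mean
    # estimator), theta_mean_per_class[y] approximates E[theta | y, Phi, w, T]
    # from the Gibbs samples of z.
    return sum(p * (T_list[y] @ theta_mean_per_class[y])
               for y, p in enumerate(class_posterior))
```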
4 Experimental results
We evaluated the DiscLDA model empirically on text modeling and classification tasks. Our experiments aimed to demonstrate the benefits of discriminative training of LDA for discovering a
compact latent representation that contains both predictive and shared components across different
types of data. We evaluated the performance of our model by contrasting it to standard LDA models
that were not trained discriminatively.
4.1 Text modeling
The 20 Newsgroups dataset contains postings to Usenet newsgroups. The postings are organized by
content into 20 related categories and are therefore well suited for topic modeling. In this section,
we investigate how DiscLDA can exploit the labeling information?the category?in discovering
meaningful hidden structures that differ from those found using unsupervised techniques.
We fit the dataset to both a standard 110-topic LDA model and a DiscLDA model with restricted
forms of the transformation matrices {T^y}_{y=1}^{20}. Specifically, the transformation matrix T^y for class
label y is fixed and given by the following block matrix
$$T^y = \begin{pmatrix} 0 & 0 \\ \vdots & \vdots \\ I_{K_0} & 0 \\ \vdots & \vdots \\ 0 & I_{K_1} \end{pmatrix} \qquad (3)$$
This matrix has (C + 1) rows and two columns of block matrices. All but two block matrices
are zero matrices. At the first column and the row y, the block matrix is an identity matrix with
dimensionality K_0 × K_0. The last element of T^y is another identity matrix with dimensionality
K_1. When applying the transformation to a topic vector θ ∈ Δ^{K_0+K_1}, we obtain a transformed
topic vector θ_tr = T^y θ whose nonzero elements partition the components of θ_tr into (C + 1) disjoint
sets: one set of K0 elements for each class label that does not overlap with the others, and a set
of K1 components that is shared by all class labels. Intuitively, the shared components should use
all class labels to model common latent structures, while nonoverlapping components should model
specific characteristics of data from each class.
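A minimal sketch of this construction (zero-indexed class labels and numpy are assumed):

```python
import numpy as np

def fixed_class_transform(y, C, K0, K1):
    # Eq. (3): T^y has (C + 1) block rows and 2 block columns; block row y
    # holds I_{K0} in the first column and the last block row holds I_{K1}
    # in the second, so class-specific and shared topics stay disjoint.
    T = np.zeros((C * K0 + K1, K0 + K1))
    T[y * K0:(y + 1) * K0, :K0] = np.eye(K0)  # class-y specific topics
    T[C * K0:, K0:] = np.eye(K1)              # topics shared by all classes
    return T

T0 = fixed_class_transform(y=0, C=20, K0=5, K1=10)  # 110 x 15, as in Sec. 4.1
assert np.allclose(T0.sum(axis=0), 1.0)             # stochastic columns
```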
Class                      Most popular words
alt.atheism                atheism, religion, bible, god, system, moral, atheists, keith, jesus, islam
comp.graphics              files, color, images, file, image, format, software, graphics, jpeg, gif
comp.os.ms-windows.misc    card, files, mouse, file, dos, drivers, win, ms, windows, driver
comp.sys.ibm.pc.hardware   drive, card, drives, bus, mb, os, disk, scsi, controller, ide
comp.sys.mac.hardware      drive, apple, mac, speed, monitor, mb, quadra, mhz, lc, scsi
comp.windows.x             server, entry, display, file, program, output, window, motif, widget, lib
misc.forsale               price, mail, interested, offer, cover, condition, dos, sale, cd, shipping
rec.autos                  cars, price, drive, car, driving, speed, engine, oil, ford, dealer
rec.motorcycles            ca, ride, riding, dog, bmw, helmet, dod, bike, motorcycle, bikes
rec.sport.baseball         games, baseball, year, game, runs, team, hit, players, season, braves
rec.sport.hockey           ca, period, play, games, game, team, win, players, season, hockey
sci.crypt                  government, key, public, security, chip, clipper, keys, db, privacy, encryption
sci.electronics            current, power, ground, wire, output, circuit, audio, wiring, voltage, amp
sci.med                    gordon, food, disease, pitt, doctor, medical, pain, health, msg, patients
sci.space                  earth, space, moon, nasa, orbit, henry, launch, shuttle, satellite, lunar
soc.religion.christian     christians, bible, church, truth, god, faith, christian, christ, jesus, rutgers
talk.politics.guns         people, gun, guns, government, file, fire, fbi, weapons, militia, firearms
talk.politics.mideast      people, turkish, government, jews, israel, israeli, turkey, armenian, armenians, armenia
talk.politics.misc         american, men, war, mr, tax, government, president, health, cramer, stephanopoulos
talk.religion.misc         religion, christians, bible, god, christian, christ, morality, objective, sandvik, jesus
Shared topics              ca, people, post, wrote, group, system, world, work, ll, make, true, university, great, case, number, read, day, mail, information, send, back, article, writes, question, find, things, put, don, cs, didn, good, end, ve, long, point, years, doesn, part, time, state, fact, thing, made, problem, real, david, apr, give, lot, news

Table 1: Most popular words from each group of class-dependent topics or a bucket of "shared" topics learned in the 20 Newsgroups experiment with fixed T matrix.
In a first experiment, we examined whether the DiscLDA model can exploit the structure for T^y
given in (3). In this experiment, we first obtained an estimate of the Φ matrix by setting it to the
MAP estimate from Gibbs samples as explained in Section 3. We then estimated a new representation for test documents by taking the conditional expectation of T^y θ with y marginalized out as
explained in Section 3.1. Finally, we then computed a 2D embedding of this K_1-dimensional representation of documents. To obtain an embedding, we first tried standard multidimensional scaling
(MDS), using the symmetrical KL divergence between pairs of θ_tr topic vectors as a dissimilarity
metric, but the results were hard to visualize. A more interpretable embedding was obtained using
a modified version of the t-SNE stochastic neighborhood embedding presented by van der Maaten
and Hinton [11]. Fig. 4 shows a scatter plot of the 2D embedding of the topic representation of
the 20 Newsgroups test documents, where the colors of the dots, each corresponding to a document,
encode class labels. Clearly, the documents are well separated in this space. In contrast, the embedding computed from standard LDA, shown in Fig. 5, does not show a clear separation. In this
experiment, we have set K0 = 5 and K1 = 10 for DiscLDA, yielding 110 possible topics; hence
we set K = 110 for the standard LDA model for proper comparison.
It is also instructive to examine in detail the topic structures of the fitted DiscLDA model. Given
the specific setup of our transformation matrix T , each component of the topic vector u is either
associated with a class label or shared across all class labels. For each component, we can compute
the most popular words associated from the word-topic distribution Φ. In Table 1, we list these
words and group them under each class label and a special bucket "shared." We see that the words
are highly indicative of their associated class labels. Additionally, the words in the "shared" category
are "neutral," neither positively nor negatively suggesting proper class labels where they are likely
LDA+SVM    DiscLDA+SVM    DiscLDA alone
20%        17%            17%

Table 2: Binary classification error rates for two newsgroups.
to appear. In fact, these words confirm the intuition of the DiscLDA model: they reflect common
English usage underlying different documents. We note that we had already taken out a standard list
of stop words from the documents.
4.2 Document classification
It is also of interest to consider the classification problem more directly and ask whether the features
delivered by DiscLDA are more useful for classification than those delivered by LDA. Of course, we
can also use DiscLDA as a classification method per se, by marginalizing over the latent variables
and computing the probability of the label y given the words in a test document. Our focus in this
section, however, is its featural representation. We thus use a different classification method (the
SVM) to compare the features obtained by DiscLDA to those obtained from LDA.
In a first experiment, we returned to the fixed T setting studied in Section 4.1 and considered the
features obtained by DiscLDA for the 20 Newsgroups problem. Specifically, we constructed multiclass linear SVM classifiers using the expected topic proportion vectors from unsupervised LDA
and DiscLDA models as features as described in Section 3.1. The results were as follows. Using the
topic vectors from standard LDA the error rate of classification was 25%. When the topic vectors
from the DiscLDA model were used we obtained an error rate of 20%. Clearly the DiscLDA features
have retained information useful for classification.
We also computed the MAP estimate of the class label y* = arg max_y p(y|w) from DiscLDA and
used this estimate directly as a classifier. The error rate was again 20%.
In a second experiment, we considered the fully adaptive setting in which the transformation matrix
T^y is learned in a discriminative fashion as described in Section 3. We initialized the matrix T
to a smoothed block diagonal matrix having a pattern similar to (1), with 20 shared topics and 20
class-dependent topics per class. We then sampled u and z for 300 Gibbs steps to obtain an initial
estimate of the Φ matrix. This was followed by the discriminative learning process in which we
iteratively ran batch gradient (in the log domain, so that T remained normalized) using Monte Carlo
EM with a constant step size for 10 epochs. We then re-estimated ? by sampling u conditioned
on (?, T ). This discriminative learning process was repeated until there was no improvement on a
validation data set. The step size was chosen by grid search.
In this experiment, we considered the binary classification problem of distinguishing postings of the
newsgroup alt.atheism from postings of the newsgroup talk.religion.misc, a difficult
task due to the similarity in content between these two groups.
Table 2 summarizes the results of our experiment, where we have used topic vectors from unsupervised LDA and DiscLDA as input features to binary linear SVM classifiers. We also computed the
prediction of the label of a document directly with DiscLDA. As shown in the table, the DiscLDA
model clearly generates topic vectors with better predictive power than unsupervised LDA.
In Table 3 we present the ten most probable words for a subset of topics learned using the discriminative DiscLDA approach. We found that the learned T had a block-diagonal structure similar
to (3), though differing significantly in some ways. In particular, although we started with 20 shared
topics the learned T had only 12 shared topics. We have grouped the topics in Table 3 according to
whether they were class-specific or shared, uncovering an interesting latent structure which appears
more discriminating than the topics presented in Table 1.
Table 3: Ten most popular words from a random selection of different types of topics learned in the
discriminative learning experiment on the binary dataset.

Topics for alt.atheism:
- god, atheism, religion, atheists, religious, atheist, belief, existence, strong
- argument, true, conclusion, fallacy, arguments, valid, form, false, logic, proof
- peace, umd, mangoe, god, thing, language, cs, wingate, contradictory, problem

Topics for talk.religion.misc:
- evil, group, light, read, stop, religions, muslims, understand, excuse
- back, gay, convenient, christianity, homosexuality, long, nazis, love, homosexual, david
- bible, ra, jesus, true, christ, john, issue, church, lds, robert

Shared topics:
- things, bobby, men, makes, bad, mozumder, bill, ultb, isc, rit
- system, don, moral, morality, murder, natural, isn, claim, order, animals
- evidence, truth, statement, simply, accept, claims, explain, science, personal, left

5 Discussion

We have presented DiscLDA, a variation on LDA in which the LDA parametrization is augmented
to include a transformation matrix and in which this matrix is learned via a conditional likelihood
criterion. This approach allows DiscLDA to retain the ability of the LDA approach to find useful
low-dimensional representations of documents, but to also make use of discriminative side information (labels) in forming these representations.
Although we have focused on LDA, we view our strategy as more broadly useful. A virtue of the
probabilistic modeling framework is that it can yield complex models that are modular and can be
trained effectively with unsupervised methods. Given the high dimensionality of such models, it may
be intractable to train all of the parameters via a discriminative criterion such as conditional likelihood. In this case it may be desirable to pursue a mixed strategy in which we retain the unsupervised
criterion for the full parameter space but augment the model with a carefully chosen transformation
so as to obtain an auxiliary low-dimensional optimization problem for which conditional likelihood
may be more effective.
Acknowledgements We thank the anonymous reviewers as well as Percy Liang, Iain Murray,
Guillaume Obozinski and Erik Sudderth for helpful suggestions. Our work was supported by Grant
0509559 from the National Science Foundation and by a grant from Google.
STABILITY RESULTS FOR NEURAL NETWORKS
A. N. Michel1, J. A. Farrell1, and W. Porod2
Department of Electrical and Computer Engineering
University of Notre Dame
Notre Dame, IN 46556
ABSTRACT
In the present paper we survey and utilize results from the qualitative theory of large
scale interconnected dynamical systems in order to develop a qualitative theory for the
Hopfield model of neural networks. In our approach we view such networks as an interconnection of many single neurons. Our results are phrased in terms of the qualitative
properties of the individual neurons and in terms of the properties of the interconnecting
structure of the neural networks. Aspects of neural networks which we address include
asymptotic stability, exponential stability, and instability of an equilibrium; estimates
of trajectory bounds; estimates of the domain of attraction of an asymptotically stable
equilibrium; and stability of neural networks under structural perturbations.
INTRODUCTION
In recent years, neural networks have attracted considerable attention as candidates
for novel computational systems1-3. These types of large-scale dynamical systems, in
analogy to biological structures, take advantage of distributed information processing
and their inherent potential for parallel computation4,5. Clearly, the design of such
neural-network-based computational systems entails a detailed understanding of the
dynamics of large-scale dynamical systems. In particular, the stability and instability
properties of the various equilibrium points in such networks are of interest, as well
as the extent of associated domains of attraction (basins of attraction) and trajectory
bounds.
In the present paper, we apply and survey results from the qualitative theory of large
scale interconnected dynamical systems6-9 in order to develop a qualitative theory for
neural networks. We will concentrate here on the popular Hopfield model3; however,
this type of analysis may also be applied to other models. In particular, we will address
this type of analysis may also be applied to other models. In particular, we will address
the following problems: (i) determine the stability properties of a given equilibrium
point; (ii) given that a specific equilibrium point of a neural network is asymptotically
stable, establish an estimate for its domain of attraction; (iii) given a set of initial conditions and external inputs, establish estimates for corresponding trajectory bounds; (iv)
give conditions for the instability of a given equilibrium point; (v) investigate stability
properties under structural perturbations. The present paper contains local results. A
more detailed treatment of local stability results can be found in Ref. 10, whereas global
results are contained in Ref. 11.
In arriving at the results of the present paper, we make use of the method of analysis advanced in Ref. 6. Specifically, we view a high-dimensional neural network as an
1The work of A. N. Michel and J. A. Farrell was supported by NSF under grant ECS84-19918.
2The work of W. Porod was supported by ONR under grant N00014-86-K-0506.
© American Institute of Physics 1988
interconnection of individual subsystems (neurons). This interconnected systems viewpoint makes our results distinct from others derived in the literature1,12. Our results
are phrased in terms of the qualitative properties of the free subsystems (individual
neurons, disconnected from the network) and in terms of the properties of the interconnecting structure of the neural network. As such, these results may constitute useful
design tools. This approach makes possible the systematic analysis of high dimensional
complex systems and it frequently enables one to circumvent difficulties encountered in
the analysis of such systems by conventional methods.
The structure of this paper is as follows. We start out by defining the Hopfield
model and we then introduce the interconnected systems viewpoint. We then present
representative stability results, including estimates of trajectory bounds and of domains
of attraction, results for instability, and conditions for stability under structural perturbations. Finally, we present concluding remarks.
THE HOPFIELD MODEL FOR NEURAL NETWORKS
In the present paper we consider neural networks of the Hopfield type3. Such systems
can be represented by equations of the form

\dot{u}_i = -b_i u_i + \sum_{j=1}^{N} A_{ij} G_j(u_j) + U_i(t),  for i = 1, \ldots, N,   (1)

where $A_{ij} = T_{ij}/C_i$, $U_i(t) = I_i(t)/C_i$ and $b_i = 1/(C_i R_i)$. As usual, $C_i > 0$, $T_{ij} = \pm 1/R_{ij}$, $R_{ij} \in R = (-\infty, \infty)$, $1/R_i = 1/\rho_i + \sum_{j=1}^{N} |T_{ij}|$, $R_i > 0$, $I_i : R^+ = [0, \infty) \to R$, $I_i$ is continuous, $\dot{u}_i = du_i/dt$, $G_i : R \to (-1, 1)$, $G_i$ is continuously differentiable and strictly monotonically increasing (i.e., $G_i(u_i') > G_i(u_i'')$ if and only if $u_i' > u_i''$), $u_i G_i(u_i) > 0$ for all $u_i \neq 0$, and $G_i(0) = 0$. In (1), $C_i$ denotes capacitance, $R_{ij}$ denotes resistance (possibly including a sign inversion due to an inverter), $G_i(\cdot)$ denotes an amplifier nonlinearity, and $I_i(\cdot)$ denotes an external input.
We are interested in the qualitative behavior of solutions of (1) near equilibrium points (rest positions where $\dot{u}_i \equiv 0$, for $i = 1, \ldots, N$). By setting the external inputs $U_i(t)$, $i = 1, \ldots, N$, equal to zero, we define $u^* = [u_1^*, \ldots, u_N^*]^T \in R^N$ to be an equilibrium for (1) provided that $-b_i u_i^* + \sum_{j=1}^{N} A_{ij} G_j(u_j^*) = 0$, for $i = 1, \ldots, N$. The locations of such equilibria in $R^N$ are determined by the interconnection pattern of the neural network (i.e., by the parameters $A_{ij}$, $i, j = 1, \ldots, N$) as well as by the parameters $b_i$ and the nature of the nonlinearities $G_i(\cdot)$, $i = 1, \ldots, N$.
Throughout, we will assume that a given equilibrium $u^*$ being analyzed is an isolated equilibrium for (1), i.e., there exists an $r > 0$ such that in the neighborhood $B(u^*, r) = \{u \in R^N : |u - u^*| < r\}$ no equilibrium for (1), other than $u = u^*$, exists.
When analyzing the stability properties of a given equilibrium point, we will be able to assume, without loss of generality, that this equilibrium is located at the origin $u = 0$ of $R^N$. If this is not the case, a trivial transformation can be employed which shifts the equilibrium point to the origin and which leaves the structure of (1) the same.
INTERCONNECTED SYSTEMS VIEWPOINT
We will find it convenient to view system (1) as an interconnection of N free subsystems (or isolated subsystems) described by equations of the form

\dot{p}_i = -b_i p_i + A_{ii} G_i(p_i) + U_i(t).   (2)

Under this viewpoint, the interconnecting structure of the system (1) is given by

g_i(x_1, \ldots, x_N) \triangleq \sum_{j=1, j \neq i}^{N} A_{ij} G_j(x_j),   i = 1, \ldots, N.   (3)
Following the method of analysis advanced in Ref. 6, we will establish stability results
which are phrased in terms of the qualitative properties of the free subsystems (2) and
in terms of the properties of the interconnecting structure given in (3). This method
of analysis makes it often possible to circumvent difficulties that arise in the analysis
of complex high-dimensional systems. Furthermore, results obtained in this manner
frequently yield insight into the dynamic behavior of systems in terms of system components and interconnections.
GENERAL STABILITY CONDITIONS
We demonstrate below an example of a result for exponential stability of an equilibrium point. The principal Lyapunov stability results for such systems are presented,
e.g., in Chapter 5 of Ref. 7.
We will utilize the following hypotheses in our first result.
(A-1) For system (1), the external inputs are all zero, i.e., $U_i(t) \equiv 0$, $i = 1, \ldots, N$.
(A-2) For system (1), the interconnections satisfy the estimate

x_i A_{ij} G_j(x_j) \leq a_{ij} |x_i| |x_j|

for all $|x_i| < r_i$, $|x_j| < r_j$, $i, j = 1, \ldots, N$, where the $a_{ij}$ are real constants.
(A-3) There exists an N-vector $a > 0$ (i.e., $a^T = (a_1, \ldots, a_N)$ and $a_i > 0$, for all $i = 1, \ldots, N$) such that the test matrix $S = [s_{ij}]$, with

s_{ii} = a_i(-b_i + a_{ii}),   s_{ij} = (a_i a_{ij} + a_j a_{ji})/2  (i \neq j),

is negative definite, where the $b_i$ are defined in (1) and the $a_{ij}$ are given in (A-2).
We are now in a position to state and prove the following result.
Theorem 1 The equilibrium x = 0 of the neural network (1) is exponentially stable
if hypotheses (A-1), (A-2) and (A-3) are satisfied.
Proof. For (1) we choose the Lyapunov function

v(x) = \frac{1}{2} \sum_{i=1}^{N} a_i x_i^2   (4)

where the $a_i$ are given in (A-3). This function is clearly positive definite. The time derivative of v along the solutions of (1) is given by

Dv_{(1)}(x) = \sum_{i=1}^{N} a_i x_i \Big[ -b_i x_i + \sum_{j=1}^{N} A_{ij} G_j(x_j) \Big]

where (A-1) has been invoked. In view of (A-2) we have

Dv_{(1)}(x) \leq \sum_{i=1}^{N} a_i \Big( -b_i x_i^2 + |x_i| \sum_{j=1}^{N} a_{ij} |x_j| \Big) = w^T R w

for all $|x|_2 < r$, where $r = \min_i(r_i)$, $|x|_2 = (\sum_{i=1}^{N} x_i^2)^{1/2}$, $w^T = (|x_1|, \ldots, |x_N|)$, and the matrix $R = [r_{ij}]$ is given by

r_{ij} = a_i(-b_i + a_{ii})  if i = j,   r_{ij} = a_i a_{ij}  if i \neq j.

But it follows that

w^T R w = w^T \Big( \frac{R + R^T}{2} \Big) w = w^T S w \leq \lambda_M(S) |x|_2^2   (5)

where S is the matrix given in (A-3) and $\lambda_M(S)$ denotes the largest eigenvalue of the real symmetric matrix S. Since S is by assumption negative definite, we have $\lambda_M(S) < 0$. It follows from (4) and (5) that in some neighborhood of the origin x = 0, we have $c_1 |x|_2^2 \leq v(x) \leq c_2 |x|_2^2$ and $Dv_{(1)}(x) \leq -c_3 |x|_2^2$, where $c_1 = \frac{1}{2} \min_i a_i > 0$, $c_2 = \frac{1}{2} \max_i a_i > 0$, and $c_3 = -\lambda_M(S) > 0$. Hence, the equilibrium x = 0 of the neural network (1) is exponentially stable (c.f. Theorem 9.10 in Ref. 7).
Consistent with the philosophy of viewing the neural network (1) as an interconnection of N free subsystems (2), we think of the Lyapunov function (4) as consisting of a weighted sum of Lyapunov functions for each free subsystem (2) (with $U_i(t) \equiv 0$). The weighting vector $a > 0$ provides flexibility to emphasize the relative importance of the qualitative properties of the various individual subsystems. Hypothesis (A-2) provides a measure of interaction between the various subsystems (3). Furthermore, it is emphasized that Theorem 1 does not require that the parameters $A_{ij}$ in (1) form a symmetric matrix.
WEAK COUPLING CONDITIONS
The test matrix S given in hypothesis (A-3) has off-diagonal terms which may be positive or nonpositive. For the special case where the off-diagonal terms of the test matrix $S = [s_{ij}]$ are non-negative, equivalent stability results may be obtained which are much easier to apply than Theorem 1. Such results are called weak-coupling conditions in the literature6,9. The conditions $s_{ij} \geq 0$ for all $i \neq j$ may reflect properties of the system (1) or they may be the consequence of a majorization process.
In the proof of the subsequent result, we will make use of some of the properties of M-matrices (see, for example, Chapter 2 in Ref. 6). In addition we will use the following assumptions.
(A-4) For system (1), the nonlinearity $G_i(x_i)$ satisfies the sector condition

\sigma_{i1} x_i^2 \leq x_i G_i(x_i) \leq \sigma_{i2} x_i^2,   0 < \sigma_{i1} \leq \sigma_{i2},  for all |x_i| < r_i.

(A-5) The successive principal minors of the $N \times N$ test matrix $D = [d_{ij}]$,

d_{ii} = b_i/\sigma_{i2} - A_{ii},   d_{ij} = -|A_{ij}|  (i \neq j),

are all positive, where the $b_i$ and $A_{ij}$ are defined in (1) and $\sigma_{i2}$ is defined in (A-4).
Theorem 2 The equilibrium x = 0 of the neural network (1) is asymptotically stable if hypotheses (A-1), (A-4) and (A-5) are true.
Proof. The proof proceeds10 along lines similar to the one for Theorem 1, this time with the following Lyapunov function

v(x) = \sum_{i=1}^{N} \alpha_i |x_i|.   (6)

The above Lyapunov function again reflects the interconnected nature of the whole system. Note that this Lyapunov function may be viewed as a generalized Hamming distance of the state vector from the origin.
ESTIMATES OF TRAJECTORY BOUNDS
In general, one is not only interested in questions concerning the stability of an
equilibrium of the system (1), but also in performance. One way of assessing the qualitative properties of the neural system (1) is by investigating solution bounds near an
equilibrium of interest. We present here such a result by assuming that the hypotheses
of Theorem 2 are satisfied.
In the following, we will not require that the external inputs Ui(t), i = 1, ... , N be
zero. However, we will need to make the additional assumptions enumerated below.
(A-6) Assume that there exist $\lambda_i > 0$, for $i = 1, \ldots, N$, and an $\epsilon > 0$ such that

\frac{b_i}{\sigma_{i2}} - A_{ii} - \sum_{j=1, j \neq i}^{N} \Big( \frac{\lambda_j}{\lambda_i} \Big) |A_{ji}| \geq \epsilon > 0,   i = 1, \ldots, N,

where $b_i$ and $A_{ij}$ are defined in (1) and $\sigma_{i2}$ is defined in (A-4).
(A-7) Assume that for system (1),

\sum_{i=1}^{N} \lambda_i |U_i(t)| \leq k   for all t \geq 0,

for some constant $k > 0$, where the $\lambda_i$, $i = 1, \ldots, N$, are defined in (A-6).
In the proof of our next theorem, we will make use of a comparison result. We consider a scalar comparison equation of the form $\dot{y} = G(y)$ where $y \in R$, $G : B(r) \to R$ for some $r > 0$, and G is continuous on $B(r) = \{x \in R : |x| < r\}$. We can then prove the following auxiliary theorem: Let $p(t)$ denote the maximal solution of the comparison equation with $p(t_0) = y_0 \in B(r)$, $t \geq t_0 \geq 0$. If $r(t)$, $t \geq t_0 \geq 0$, is a continuous function such that $r(t_0) \leq y_0$, and if $r(t)$ satisfies the differential inequality

Dr(t) = \limsup_{h \to 0^+} \frac{r(t + h) - r(t)}{h} \leq G(r(t))

almost everywhere, then $r(t) \leq p(t)$ for $t \geq t_0 \geq 0$, for as long as both $r(t)$ and $p(t)$ exist. For the proof of this result, as well as other comparison theorems, see e.g., Refs. 6 and 7.
For the next theorem, we adopt the following notation. We let $\delta = \min_i \sigma_{i1}$, where $\sigma_{i1}$ is defined in (A-4), we let $c = \epsilon \delta$, where $\epsilon$ is given in (A-6), and we let $\phi(t, t_0, x_0) = [\phi_1(t, t_0, x_0), \ldots, \phi_N(t, t_0, x_0)]^T$ denote the solution of (1) with $\phi(t_0, t_0, x_0) = x_0 = (x_{10}, \ldots, x_{N0})^T$ for some $t_0 \geq 0$.
We are now in a position to prove the following result, which provides bounds for the solution of (1).
Theorem 3 Assume that hypotheses (A-6) and (A-7) are satisfied. Then

\|\phi(t, t_0, x_0)\| \triangleq \sum_{i=1}^{N} \lambda_i |\phi_i(t, t_0, x_0)| \leq \Big( a - \frac{k}{c} \Big) e^{-c(t - t_0)} + \frac{k}{c},   t \geq t_0 \geq 0,

provided that $a > k/c$ and $\|x_0\| = \sum_{i=1}^{N} \lambda_i |x_{i0}| \leq a$, where the $\lambda_i$, $i = 1, \ldots, N$, are given in (A-6) and k is given in (A-7).
Proof. For (1) we choose the Lyapunov function

v(x) = \sum_{i=1}^{N} \lambda_i |x_i|.   (7)
Along the solutions of (1), we obtain

Dv_{(1)}(x) \leq -\lambda^T D w + \sum_{i=1}^{N} \lambda_i |U_i(t)|   (8)

where $w^T = [\frac{G_1(x_1)}{x_1}|x_1|, \ldots, \frac{G_N(x_N)}{x_N}|x_N|]$, $\lambda = (\lambda_1, \ldots, \lambda_N)^T$, and $D = [d_{ij}]$ is the test matrix given in (A-5). Note that when (A-6) is satisfied, as in the present theorem, then (A-5) is automatically satisfied. Note also that $w \geq 0$ (i.e., $w_i \geq 0$, $i = 1, \ldots, N$) and $w = 0$ if and only if $x = 0$.
Using manipulations involving (A-6), (A-7) and (8), it is easy to show that $Dv_{(1)}(x) \leq -c\,v(x) + k$. This inequality yields now the comparison equation $\dot{y} = -cy + k$, whose unique solution is given by

p(t, t_0, p_0) = \Big( p_0 - \frac{k}{c} \Big) e^{-c(t - t_0)} + \frac{k}{c},   for all t \geq t_0.

If we let $r = v$, then we obtain from the comparison result

p(t) \geq r(t) = v(\phi(t, t_0, x_0)) = \sum_{i=1}^{N} \lambda_i |\phi_i(t, t_0, x_0)| = \|\phi(t, t_0, x_0)\|,

i.e., the desired estimate is true, provided that $r(t_0) = \sum_{i=1}^{N} \lambda_i |x_{i0}| = \|x_0\| \leq a$ and $a > k/c$.
ESTIMATES OF DOMAINS OF ATTRACTION
Neural networks of the type considered herein have many equilibrium points. If
a given equilibrium is asymptotically stable, or exponentially stable, then the extent
of this stability is of interest. As usual, we assume that x = 0 is the equilibrium of
interest. If $\phi(t, t_0, x_0)$ denotes a solution of the network (1) with $\phi(t_0, t_0, x_0) = x_0$, then we would like to know for which points $x_0$ it is true that $\phi(t, t_0, x_0)$ tends to the origin as $t \to \infty$. The set of all such points $x_0$ makes up the domain of attraction (the basin of attraction) of the equilibrium x = 0. In general, one cannot determine such a domain
in its entirety. However, several techniques have been devised to estimate subsets of
a domain of attraction. We apply one such method to neural networks, making use
of Theorem 1. This technique is applicable to our other results as well, by making
appropriate modifications.
We assume that the hypotheses (A-1), (A-2) and (A-3) are satisfied and for the free subsystem (2) we choose the Lyapunov function

v_i(p_i) = \frac{1}{2} p_i^2.   (9)

Then $Dv_{i(2)}(p_i) \leq (-b_i + a_{ii}) p_i^2$, $|p_i| < r_i$ for some $r_i > 0$. If (A-3) is satisfied, we must have $(-b_i + a_{ii}) < 0$ and $Dv_{i(2)}(p_i)$ is negative definite over $B(r_i)$.
Let $C_{v_{0i}} = \{p_i \in R : v_i(p_i) = \frac{1}{2} p_i^2 < \frac{1}{2} r_i^2 \triangleq v_{0i}\}$. Then $C_{v_{0i}}$ is contained in the domain of attraction of the equilibrium $p_i = 0$ for the free subsystem (2).
To obtain an estimate for the domain of attraction of x = 0 for the whole neural
network (1), we use the Lyapunov function
v(x) = \sum_{i=1}^{N} \alpha_i v_i(x_i) = \sum_{i=1}^{N} \frac{1}{2} \alpha_i x_i^2.   (10)

It is now an easy matter to show that the set

C_\lambda = \{x \in R^N : v(x) = \sum_{i=1}^{N} \alpha_i v_i(x_i) < \lambda\}

will be a subset of the domain of attraction of x = 0 for the neural network (1), where

\lambda = \min_{1 \leq i \leq N} (\alpha_i v_{0i}) = \min_{1 \leq i \leq N} \Big( \frac{1}{2} \alpha_i r_i^2 \Big).
In order to obtain the best estimate of the domain of attraction of x = 0 by the present method, we must choose the $\alpha_i$ in an optimal fashion. The reader is referred to the literature9,13,14 where several methods to accomplish this are discussed.
INSTABILITY RESULTS
Some of the equilibrium points in a neural network may be unstable. We present
here a sample instability theorem which may be viewed as a counterpart to Theorem
2. Instability results, formulated as counterparts to other stability results of the type
considered herein may be obtained by making appropriate modifications.
(A-8) For system (1), the interconnections satisfy the estimates

x_i A_{ii} G_i(x_i) \leq \sigma_i A_{ii} x_i^2,
|x_i A_{ij} G_j(x_j)| \leq |x_i| |A_{ij}| \sigma_{j2} |x_j|,   i \neq j,

where $\sigma_i = \sigma_{i1}$ when $A_{ii} < 0$ and $\sigma_i = \sigma_{i2}$ when $A_{ii} > 0$, for all $|x_i| < r_i$ and for all $|x_j| < r_j$, $i, j = 1, \ldots, N$.
(A-9) The successive principal minors of the $N \times N$ test matrix $D = [d_{ij}]$ given by

d_{ii} = \bar{\sigma}_i,   d_{ij} = -|A_{ij}| \sigma_{j2}  (i \neq j),

are positive, where $\bar{\sigma}_i = b_i/\sigma_{i2} - A_{ii}$ when $i \in F_s$ (i.e., stable subsystems) and $\bar{\sigma}_i = -b_i/\sigma_{i2} + A_{ii}$ when $i \in F_u$ (i.e., unstable subsystems), with $F = F_s \cup F_u$, $F = \{1, \ldots, N\}$ and $F_u \neq \emptyset$.
We are now in a position to prove the following result.
Theorem 4 The equilibrium x = 0 of the neural network (1) is unstable if hypotheses (A-1), (A-8) and (A-9) are satisfied. If in addition, $F_s = \emptyset$ ($\emptyset$ denotes the empty set), then the equilibrium x = 0 is completely unstable.
Proof. We choose the Lyapunov function

v(x) = \frac{1}{2} \sum_{i \in F_s} a_i x_i^2 - \frac{1}{2} \sum_{i \in F_u} a_i x_i^2   (11)

where $a_i > 0$, $i = 1, \ldots, N$. Along the solutions of (1) we have (following the proof of Theorem 2), $Dv_{(1)}(x) \leq -a^T D w$ for all $x \in B(r)$, $r = \min_i r_i$, where $a^T = (a_1, \ldots, a_N)$, D is defined in (A-9), and $w^T = [\frac{G_1(x_1)}{x_1}|x_1|, \ldots, \frac{G_N(x_N)}{x_N}|x_N|]$. We conclude that $Dv_{(1)}(x)$ is negative definite over B(r). Since every neighborhood of the origin x = 0 contains at least one point x' where $v(x') < 0$, it follows that the equilibrium x = 0 for (1) is unstable. Moreover, when $F_s = \emptyset$, then the function v(x) is negative definite and the equilibrium x = 0 of (1) is in fact completely unstable (c.f. Chapter 5 in Ref. 7).
STABILITY UNDER STRUCTURAL PERTURBATIONS
In specific applications involving adaptive schemes for learning algorithms in neural
networks, the interconnection patterns (and external inputs) are changed to yield an
evolution of different sets of desired asymptotically stable equilibrium points with appropriate domains of attraction. The present diagonal dominance conditions (see, e.g.,
hypothesis (A-6)) can be used as constraints to guarantee that the desired equilibria
always have the desired stability properties.
To be more specific, we assume that a given neural network has been designed with a
set of interconnections whose strengths can be varied from zero to some specified values.
We express this by writing in place of (1),
\dot{x}_i = -b_i x_i + \sum_{j=1}^{N} \theta_{ij} A_{ij} G_j(x_j) + U_i(t),  for i = 1, \ldots, N,   (12)

where $0 \leq \theta_{ij} \leq 1$. We also assume that in the given neural network things have been arranged in such a manner that for some given desired value $\Delta > 0$, it is true that $\Delta = \min_i (b_i/\sigma_{i2} - \theta_{ii} A_{ii})$. From what has been said previously, it should now be clear that if $U_i(t) \equiv 0$, $i = 1, \ldots, N$, and if the diagonal dominance conditions

\Delta - \sum_{j=1, j \neq i}^{N} \Big( \frac{\lambda_j}{\lambda_i} \Big) \theta_{ji} |A_{ji}| > 0,  for i = 1, \ldots, N,   (13)
are satisfied for some $\lambda_i > 0$, $i = 1, \ldots, N$, then the equilibrium x = 0 for (12) will be asymptotically stable. It is important to recognize that condition (13) constitutes a single stability condition for the neural network under structural perturbations. Thus, the strengths of interconnections of the neural network may be rearranged in any manner to achieve some desired set of equilibrium points. If (13) is satisfied, then these equilibria will be asymptotically stable. (Stability under structural perturbations is nicely surveyed in Ref. 15.)
CONCLUDING REMARKS
In the present paper we surveyed and applied results from the qualitative theory of large scale interconnected dynamical systems in order to develop a qualitative theory for neural networks of the Hopfield type. Our results are local and use as much information as possible in the analysis of a given equilibrium. In doing so, we established criteria for the exponential stability, asymptotic stability, and instability of an equilibrium in such networks. We also devised methods for estimating the domain of attraction of an asymptotically stable equilibrium and for estimating trajectory bounds for such networks. Furthermore, we showed that our stability results are applicable to systems under structural perturbations (e.g., as experienced in neural networks in adaptive learning schemes).
In arriving at the above results, we viewed neural networks as an interconnection of many single neurons, and we phrased our results in terms of the qualitative properties of the free single neurons and in terms of the network interconnecting structure. This viewpoint is particularly well suited for the study of hierarchical structures which naturally lend themselves to implementations16 in VLSI. Furthermore, this type of approach makes it possible to circumvent difficulties which usually arise in the analysis and synthesis of complex high dimensional systems.
REFERENCES
[1] For a review, see, Neural Networks for Computing, J. S. Denker, Editor, American
Institute of Physics Conference Proceedings 151, Snowbird, Utah, 1986.
[2] J. J. Hopfield and D. W. Tank, Science 233, 625 (1986).
[3] J. J. Hopfield, Proc. Natl. Acad. Sci. U.S.A. 79, 2554 (1982), and ibid. 81, 3088
(1984).
[4] G. E. Hinton and J. A. Anderson, Editors, Parallel Models of Associative Memory,
Erlbaum, 1981.
[5] T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, 1984.
[6] A. N. Michel and R. K. Miller, Qualitative Analysis of Large Scale Dynamical
Systems, Academic Press, 1977.
[7] R. K. Miller and A. N. Michel, Ordinary Differential Equations, Academic Press,
1982.
[8] I. W. Sandberg, Bell System Tech. J. 48, 35 (1969).
[9] A. N. Michel, IEEE Trans. on Automatic Control 28, 639 (1983).
[10] A. N. Michel, J. A. Farrell, and W. Porod, submitted for publication.
[11] J.-H. Li, A. N. Michel, and W. Porod, IEEE Trans. Circuits and Systems, in press.
[12] G. A. Carpenter, M. A. Cohen, and S. Grossberg, Science 235, 1226 (1987).
[13] M. A. Pai, Power System Stability, Amsterdam, North Holland, 1981.
[14] A. N. Michel, N. R. Sarabudla, and R. K. Miller, Circuits, Systems and Signal
Processing 1, 171 (1982).
[15] Lj. T. Grujic, A. A. Martynyuk and M. Ribbens-Pavella, Stability of Large-Scale
Systems Under Structural and Singular Perturbations, Nauka Dumka, Kiev, 1984.
[16] D. K. Ferry and W. Porod, Superlattices and Microstructures 2, 41 (1986).
Speech Recognition
Using Demi-Syllable Neural Prediction Model
Ken-ichi Iso and Takao Watanabe
C & C Information Technology Research Laboratories
NEC Corporation
4-1-1 Miyazaki, Miyamae-ku, Kawasaki 213, JAPAN
Abstract
The Neural Prediction Model is the speech recognition model based on
pattern prediction by multilayer perceptrons. Its effectiveness was confirmed by the speaker-independent digit recognition experiments. This
paper presents an improvement in the model and its application to large
vocabulary speech recognition, based on subword units. The improvement
involves an introduction of "backward prediction," which further improves
the prediction accuracy of the original model with only "forward prediction". In application of the model to speaker-dependent large vocabulary
speech recognition, the demi-syllable unit is used as a subword recognition
unit. Experimental results indicated a 95.2% recognition accuracy for a
5000 word test set and the effectiveness was confirmed for the proposed
model improvement and the demi-syllable subword units.
1 INTRODUCTION
The Neural Prediction Model (NPM) is the speech recognition model based on
pattern prediction by multilayer perceptrons (MLPs). Its effectiveness was confirmed by the speaker-independent digit recognition experiments (Iso, 1989; Iso,
1990; Levin, 1990).
Advantages in the NPM approach are as follows. The underlying process of the
speech production can be regarded as the nonlinear dynamical system. Therefore,
it is expected that there is causal relation among the adjacent speech feature vectors.
In the NPM, the causality is represented by the nonlinear prediction mapping $F_w$,

\hat{a}_t = F_w(a_{t-1}),   (1)

where $a_t$ is the speech feature vector at frame t, and subscript w represents mapping parameters. This causality is not explicitly considered in the conventional
speech recognition model, where the adjacent speech feature vectors are treated as
independent variables.
Another important model characteristic is its applicability to continuous speech
recognition. Concatenating the recognition unit models, continuous speech recognition and model training from continuous speech can be implemented without the
need for segmentation.
This paper presents an improvement in the NPM and its application to large vocabulary speech recognition, based on subword units. It is an introduction of "backward
prediction," which further improves the prediction accuracy for the original model
with only "forward prediction". In Section 2, the improved predictor configuration,
NPM recognition and training algorithms are described in detail. Section 3 presents
the definition of demi-syllables used as subword recognition units. Experimental
results obtained from speaker-dependent large vocabulary speech recognition are
described in Section 4.
2 NEURAL PREDICTION MODEL
2.1 MODEL CONFIGURATION
Figure 1 shows the MLP predictor architecture. It is given two groups of feature
vectors as input. One is feature vectors for "forward prediction". Another is feature
vectors for "backward prediction". The former includes the input speech feature
vectors, $a_{t-T_F}, \ldots, a_{t-1}$, which have been implemented in the original formulation. The latter, $a_{t+1}, \ldots, a_{t+T_B}$, are introduced in this paper to further improve the prediction accuracy over the original method, with only "forward prediction". This, for example, is expected to improve the prediction accuracy for voiceless stop consonants, which are characterized by a period of closure interval, followed by a sudden release. The MLP output, $\hat{a}_t$, is used as a predicted feature vector for input speech
Figure 1: Multilayer perceptron predictor (forward-prediction input frames, backward-prediction input frames, hidden layer, output $\hat{a}_t$)
feature vector $a_t$. The difference between the input speech feature vector $a_t$ and its prediction $\hat{a}_t$ is the prediction error. Also, it can be regarded as an error function for the MLP training, based on the back-propagation technique.
The NPM for a recognition class, such as a subword unit, is constructed as a state
transition network, where each state has an MLP predictor described above (Figure
2). This configuration is similar in form to the Hidden Markov Model (HMM),
in which each state has a vector emission probability distribution (Rabiner, 1989).
The concatenation of these subword NPMs enables continuous speech recognition.
Figure 2: Neural Prediction Model
2.2 RECOGNITION ALGORITHM
This section presents the continuous speech recognition algorithm based on the
NPM. The concatenation of subword NPMs, which is also the state transition network, is used as a reference model for the input speech. Figure 3 shows a diagram
of the recognition algorithm. In the recognition, the input speech is divided into
segments, whose number is equal to the total states in the concatenated NPMs
(= N). Each state makes a prediction for the corresponding segment. The local
prediction error, between the input speech at frame t and the n-th state, is given
by
(2)
where n means the consecutive number of the state in the concatenated NPM. The
accumulation oflocal prediction errors defines the global distance between the input
speech and the concatenated NPMs
T
D
= min I: dt(nt),
{nt}
(3)
t=1
where nt denotes the state number used for prediction at frame t, and a sequence
{nIl n2, ... , nt, ... , nT} determines the segmentation of the input speech. The minimization means that the optimal segmentation, which gives a minimum accumulated
prediction error, should be selected. This optimization problem can be solved by
the use of dynamic-programming. As a result, the DP recursion formula is obtained
9t ()
n
. {9t -1 (
( n)
}
= dt ()
n + mm
9t-l n 1
- ).
At the end of Equation (4) recursive application, it is possible to obtain D
Backtracking the result provides the input speech segmentation.
(4)
= 9T(N).
Figure 3: Recognition algorithm based on DP (time alignment between the input speech $a_1 a_2 a_3 \ldots a_T$ and the MLP predictor states 1, ..., N of the concatenated NPM)
In this algorithm, temporal distortion of the input speech is efficiently absorbed by
DP based time-alignment between the input speech and an MLPs sequence. For
simplicity, the reference model topology shown above is limited to a sequence of
MLPs with no branches. It is obvious that the algorithm is applicable to more
general topologies with branches.
2.3 TRAINING ALGORITHM
This section presents a training algorithm for estimating NPM parameters from continuous utterances. The training goal is to find a set of MLP predictor parameters,
which minimizes the accumulated prediction errors for training utterances. The objective function for the minimization is defined as the average value for accumulated
prediction errors for all training utterances
\bar{D} = \frac{1}{M} \sum_{m=1}^{M} D(m),   (5)
where M is the number of training utterances and D( m) is the accumulated prediction error between the m-th training utterance and its concatenated NPM, whose
expression is given by Equation (3). The optimization can be carried out by an
iterative procedure, combining dynamic-programming (DP) and back-propagation
(BP) techniques. The algorithm is given as follows:
1. Initialize all MLP predictor parameters.
2. Set m = 1.
3. Compute the accumulated prediction error D(m) by DP (Equation (4)) and determine the optimal segmentation $\{n_t^*\}$, using its backtracking.
4. Correct parameters for each MLP predictor by BP, using the optimal segmentation $\{n_t^*\}$, which determines the desired output $a_t$ for the actual output $\hat{a}_t(n_t^*)$ of the $n_t^*$-th MLP predictor.
5. Increase m by 1.
6. Repeat 3 - 5, while $m \leq M$.
7. Repeat 2 - 6, until convergence occurs.
Convergence proof for this iterative procedure was given in (Iso, 1989; Iso, 1990).
This can be intuitively understood by the fact that both DP and BP decrease the
accumulated prediction error and that they are applied successively.
3 Demi-Syllable Recognition Units
In applying the model to large vocabulary speech recognition, the demi-syllable
unit is used as a subword recognition unit (Yoshida, 1989). The demi-syllable is a
half syllable unit, divided at the center of the syllable nucleus. It can treat contextual variations, caused by the co-articulation effect, with a moderate unit number.
The units consist of consonant-vowel (CV) and vowel-consonant (VC) segments.
Word models are made by concatenation of demi-syllable NPMs, as described in
the transcription dictionary. Their segmentation boundaries are basically defined
as a consonant start point and a vowel center point (Figure 4). Actually, they
are automatically determined in the training algorithm, based on the minimum
accumulated prediction error criterion (Section 2.3).
Figure 4: Demi-syllable unit boundary definition (example word [hakata])
4 EXPERIMENTS
4.1 SPEECH DATA AND MODEL CONFIGURATION
In order to examine the validity of the proposed model, speaker-dependent Japanese
isolated word recognition experiments were carried out. Phonetically balanced 250,
500 and 750 word sets were selected from a Japanese word lexicon as training vocabularies. For word recognition experiments, a 250 word test set was prepared. All
the words in the test set were different from those in the training sets. A Japanese
male speaker uttered these word sets in a quiet environment. The speech data was
sampled at a 16 kHz sampling rate, and analyzed by a 10 msec frame period. As
a feature vector for each time frame, 10 mel-scaled cepstral parameters, 10 melscaled delta cepstral parameters and a changing ratio parameter for amplitude were
calculated from the FFT based spectrum.
The NPMs for demi-syllable units were prepared. Their total number was 241,
where each demi-syllable NPM consists of a sequence of four MLP predictors, except
for silence and long vowel NPMs, which have one MLP predictor. Every MLP
predictor has 20 hidden units and 21 output units, corresponding to the feature
vector dimensions. The numbers of input speech feature vectors, denoted by $T_F$, for the forward prediction, and by $T_B$, for the backward prediction, in Figure 1, were chosen for the two configurations, $(T_F, T_B) = (2, 1)$ and $(3, 0)$. The former, Type A, uses the forward and backward predictions, while the latter, Type B, uses the forward prediction only.
4.2 WORD RECOGNITION EXPERIMENTS
All possible combinations between training data amounts (= 250,500,750 words)
and MLP input layer configurations (Type A and Type B) were evaluated by 5000
word recognition experiments.
To reduce the computational amount in 5000 word recognition experiments, the
similar word recognition method described below was employed. For every word
in the 250 word recognition vocabulary, a 100 similar word set is chosen from the
5000 word recognition vocabulary, using the distance based on the manually defined phoneme confusion matrix. In the experiments, every word in the 250 word
utterances is compared with its 100 similar word set. It has been confirmed that
a result approximately equivalent to actual 5000 word recognition can be obtained
by this similar word recognition method (Koga, 1989).
Recognition
Accuracy
(%)~--------~--------~
??????~??..???????????????..?..?..????????????????,????I.. ????????????????????????????....???????????????1???????
96.0
I
I
I
I
94.0
-
I
~"t~~~~~~~~~--~-~~~-
f
~
92.0
-1_
I
I
I
90.0
l
A
I
---+~-------~-~~~---~--.
!
I
I
'
i! __--~----i
J
I
i
I
-iI
I
I
_ _ _ _ _-+~_~__-I--I
!
I-<~..,...,r
88.0 I--i~------.-+------I-I
250
500
750
Training Data Amount (words)
Figure 5: Recognition accuracy vs. training data amounts
The results for 5000 word recognition experiments are shown in Figure 5. As a
result, consistently higher recognition accuracies were obtained for the input layer
configuration with backward prediction (Type A), compared with the configuration
without backward prediction (Type B), and absolute values for recognition accuracies become higher with the increase in training data amount.
5 DISCUSSION AND CONCLUSION
This paper presents an improvement in the Neural Prediction Model (NPM), which
is the introduction of backward prediction, and its application to large vocabulary
speech recognition based on the demi-syllable units. As a result of experiments,
the NPM applicability to large vocabulary (5000 words) speech recognition was
verified. This suggests the usefulness of the recognition and training algorithms
for concatenated subword unit NPMs, without the need for segmentation. It was
also reported in (Tebelskis, 1990) (90 % for 924 words), where the subword units
(phonemes) were limited to a subset of the complete Japanese phoneme set and the
duration constraints were heuristically introduced. In this paper, the authors used
the demi-syllable units, which can cover any Japanese utterances, and no duration
constraints. High recognition accuracies (95.2 %), obtained for 5000 words, indicate
the advantages of the use of demi-syllable units and the introduction of the backward
prediction in the NPM.
Acknowledgements
The authors wish to thank members of the Media Technology Research Laboratory
for their continuous support.
References
K. Iso. (1989), "Speech Recognition Using Neural Prediction Model," IEICE Technical Report, SP89-23, pp. 81-87 (in Japanese).
K. Iso and T. Watanabe. (1990), "Speaker-Independent Word Recognition Using A Neural Prediction Model," Proc. ICASSP-90, S8.8, pp. 441-444.
E. Levin. (1990), "Word Recognition Using Hidden Control Neural Architecture," Proc. ICASSP-90, S8.6, pp. 433-436.
J. Tebelskis and A. Waibel. (1990), "Large Vocabulary Recognition Using Linked Predictive Neural Networks," Proc. ICASSP-90, S8.7, pp. 437-440.
L. R. Rabiner. (1989), "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proc. of IEEE, Vol. 77, No. 2, pp. 257-286, February 1989.
K. Yoshida, T. Watanabe and S. Koga. (1989), "Large Vocabulary Word Recognition Based on Demi-Syllable Hidden Markov Model Using Small Amount of Training Data," Proc. ICASSP-89, S1.1, pp. 1-4.
S. Koga, K. Yoshida, and T. Watanabe. (1989), "Evaluation of Large Vocabulary Speech Recognition Based on Demi-Syllable HMM," Proc. of ASJ Autumn Meeting (in Japanese).
233
| 360 |@word effect:1 implemented:2 predicted:1 involves:1 validity:1 february:1 former:2 objective:1 correct:1 heuristically:1 closure:1 laboratory:2 vc:1 occurs:1 adjacent:2 dp:6 quiet:1 distance:2 speaker:7 mel:1 thank:1 takao:1 hmm:2 criterion:1 configuration:8 concatenation:3 complete:1 confusion:1 subword:11 ka:1 contextual:1 nt:6 mm:1 considered:1 ratio:1 mapping:2 dictionary:1 enables:1 consecutive:1 khz:1 s8:2 v:1 proc:6 half:1 selected:3 applicable:1 markov:3 iso:10 cv:1 tf:3 sudden:1 minimization:2 provides:1 lexicon:1 frame:5 constructed:1 become:1 introduced:2 consists:1 release:1 emission:1 improvement:5 consistently:1 moderate:1 indicates:1 expected:2 examine:1 meeting:1 dependent:3 transcription:1 accumulated:7 nn:2 minimum:2 dynamical:1 automatically:1 employed:1 actual:2 hidden:6 relation:1 determine:1 period:2 articulation:1 tb:3 estimating:1 underlying:1 ii:2 branch:2 medium:1 among:1 miyazaki:1 denoted:1 technical:1 treated:1 minimizes:1 characterized:1 initialize:1 long:1 recursion:1 equal:1 divided:2 improve:2 corporation:1 technology:2 sampling:1 autumn:1 temporal:1 manually:1 every:3 represents:1 prediction:48 carried:2 multilayer:3 utterance:7 scaled:1 report:1 control:1 unit:23 acknowledgement:1 understood:1 local:1 treat:1 interval:1 diagram:1 vowel:4 subscript:1 approximately:1 mlp:15 nucleus:1 member:1 evaluation:1 suggests:1 alignment:1 male:1 co:1 analyzed:1 effectiveness:3 limited:2 koga:3 production:1 fft:1 fit:1 repeat:2 architecture:2 recursive:1 topology:2 silence:1 reduce:1 perceptron:1 digit:2 procedure:2 cepstral:2 absolute:1 desired:1 expression:1 causal:1 oflocal:1 isolated:1 boundary:2 calculated:1 vocabulary:13 word:31 transition:2 dimension:1 forward:8 demi:19 made:1 ar:1 cover:1 speech:41 author:2 applying:1 applicability:2 subset:1 accumulation:1 conventional:1 equivalent:1 predictor:11 center:2 usefulness:1 uttered:1 yoshida:3 levin:2 amount:6 duration:2 prepared:2 reported:1 ken:1 global:1 simplicity:1 consonant:4 spectrum:1 tutorial:1 continuous:7 iterative:2 regarded:2 delta:1 ku:1 variation:1 ichi:1 group:1 four:1 programming:2 successively:1 us:2 japanese:7 changing:1 verified:1 recognition:56 ht:1 backward:10 n2:1 japan:1 causality:2 solved:1 includes:1 explicitly:1 caused:1 miyamae:1 decrease:1 watanabe:7 msec:1 wish:1 concatenating:1 balanced:1 linked:1 environment:1 start:1 layer:4 followed:1 syllable:21 dynamic:2 formula:1 mlps:3 segment:3 accuracy:10 predictive:1 phonetically:1 characteristic:1 efficiently:1 phoneme:3 rabiner:2 constraint:2 bp:3 icassp:4 consist:1 tebelskis:2 represented:1 min:1 basically:1 below:1 nec:1 confirmed:4 pattern:2 kawasaki:1 waibel:1 npm:15 combination:1 backtracking:2 whose:2 definition:2 absorbed:1 distortion:1 pp:6 asj:1 s1:1 obvious:1 proof:1 intuitively:1 stop:1 sampled:1 determines:2 advantage:2 sequence:4 equation:3 improves:2 goal:1 segmentation:8 amplitude:1 actually:1 back:2 end:1 combining:1 ta:1 dt:2 higher:2 determined:1 except:1 improved:1 vo1:1 formulation:1 evaluated:1 total:2 nil:1 experimental:2 until:1 convergence:2 perceptrons:2 original:4 denotes:1 nonlinear:2 support:1 propagation:2 latter:2 voiceless:1 defines:1 indicated:1 ij:1 ieice:1 concatenated:5 |
Adapting to a Market Shock: Optimal Sequential Market-Making
Malik Magdon-Ismail
Department of Computer Science
Rensselaer Polytechnic Institute
Troy, NY 12180
[email protected]
Sanmay Das
Department of Computer Science
Rensselaer Polytechnic Institute
Troy, NY 12180
[email protected]
Abstract
We study the profit-maximization problem of a monopolistic market-maker who
sets two-sided prices in an asset market. The sequential decision problem is hard
to solve because the state space is a function. We demonstrate that the belief state
is well approximated by a Gaussian distribution. We prove a key monotonicity
property of the Gaussian state update which makes the problem tractable, yielding
the first optimal sequential market-making algorithm in an established model. The
algorithm leads to a surprising insight: an optimal monopolist can provide more
liquidity than perfectly competitive market-makers in periods of extreme uncertainty, because a monopolist is willing to absorb initial losses in order to learn a
new valuation rapidly so she can extract higher profits later.
1 Introduction
Designing markets to achieve certain goals is gaining renewed importance with the prevalence of
many novel markets, ranging from prediction markets [13] to markets for e-services [11]. These
markets tend to be thin (illiquid) when they first appear. Similarly, when a market shock occurs
to the value of an instrument on a financial exchange, thousands of speculative traders suddenly
possess new valuations on the basis of which they would like to trade. Periods of uncertainty, like
those following a shock, are also periods of illiquidity, so trading may be sparse right after a shock.
This is a chicken-and-egg problem. People do not want to trade in thin markets, and yet, having
many people trading is what creates liquidity. These markets therefore need to be bootstrapped
into a phase where they are sufficiently liquid to attract trading. This bootstrapping is often achieved
through market-makers [12]. Market-makers are responsible for providing liquidity and maintaining
order on the exchange. For example, the NYSE designates a single monopolist specialist (market-maker) for each stock, while the NASDAQ allows multiple market-makers to compete.
There has been much debate on whether one of these models is better than the other. This debate
is again important today for those who are designing new markets. Should they employ a single monopolistic market-maker or multiple competitive market-makers? Alternatively, should the
market-maker be based on some other criterion, and if so, what is the optimal design for this agent?
Market makers want to maximize profit, which could run contrary to their ?social responsibility? of
providing liquidity. A monopolist market maker attempts to maximize expected discounted profits, while competitive (non-colluding) market makers may only expect zero profit, since any profits
should be wiped out by competition. Therefore, one would expect markets with competitive marketmakers to be of better quality. However, this has not been observed in practice, especially in the
well-studied case of the NASDAQ vs. the NYSE [1, 9]. Many explanations have been proposed in
the empirical literature, and have explained parts of this phenomenon. One reason that has been
speculated about anecdotally but never analyzed formally is the learning aspect of the problem. For
example, the NYSE's promotional literature used to tout the benefits of a monopolist for "maintaining a fair and orderly market" in the face of market shocks [6].
The main challenge to formally analyzing this question is the complexity of the monopolistic market
maker's sequential decision problem. The market maker, when setting bid and ask prices, is plagued
by a heavily path-dependent exploitation-exploration dilemma. There is a tradeoff between setting
the prices to extract maximum profit from the next trade versus setting the prices to get as much
information about the new value of the instrument so as to generate larger profits from future trades.
There is no known solution to this sequential decision problem.
We present the first such solution within an established model of market making. We show the
surprising fact that a monopolist market maker leads to higher market liquidity in periods of extreme
market shock than does a zero-profit competitive market maker. In various single period settings, it
has been shown that monopolists can sometimes provide greater liquidity [6] by averaging expected
profits across different trade sizes. We show for the first time that this can hold true with fixed trade
sizes in a multi-period setting, because the market-maker is willing to take losses following a shock
in order to learn the new valuation more quickly.
1.1 Market Microstructure Background
Market microstructure has recently received much attention from a computational perspective [10,
4, 12]. The driving problem of this paper is price discovery. Suppose an instrument has just begun
trading in a market where different people have different beliefs about its value. An example is
shares in the "Barack Obama wins the presidential election" market. These shares should trade at
prices that reflect the probability that the event will occur: if the outcome pays off $100, the shares
should trade at about $55 if the aggregate public belief is 55% that the event will occur. Similarly,
the price of a stock should reflect the aggregate public belief about future cash flows associated
with a company. It is well-known that markets are good at aggregating information into prices,
but different market structures possess different qualities in this regard. We are concerned with the
properties of dealer markets, in which prices are set by one or more market-makers responsible for
providing liquidity by taking one side of every trade.
Market-making has been studied extensively in the theoretical market microstructure literature [8, 7,
for example], but only recently has the dynamic multi-period problem gained attention [2, 3]. Since
we are interested in the problem of how a market-maker learns a value for an asset, we follow the
general model of Glosten and Milgrom which abstracts away from the problem of quantities by
restricting attention to situations where the market-maker places bid and ask quotes for one unit of
the asset at each time step. Das [3] has extended this model to consider the market-maker's learning
problem with competitive pricing, while Darley et al [2] have used similar modeling for simulations
of the NASDAQ. The Glosten and Milgrom model has become a standard model in this area.
Liquidity, which is not easy to quantify, is the prime social concern. In practice, it is a function of
the depth of the limit order book. In our models, we measure liquidity using the bid-ask spread,
or alternatively the probability that a trade will occur. This gives a good indication of the level of
informational heterogeneity in the market, and of execution costs. The dynamic behavior of the
spread gives insight into the price discovery process.
1.2 Our Contribution
We consider the question of optimal sequential price-setting in the Glosten-Milgrom model. The
market-maker sets bid and ask prices at each trading period¹ and when a trader arrives she has the
option of buying or selling at those prices, or of not executing a trade. There are many results
relating to the properties of zero-profit (competitive) market-makers [7, 3]. The zero-profit problem
is a single-period decision-making problem with online belief updates. Within this same framework,
one can formulate the decision problem for a monopolist market-maker who maximizes her total
discounted profit as a reinforcement learning problem. The market maker?s state is her belief about
the instrument value, and her action is to set bid and ask prices. The market maker?s actions must
trade off profit taking (exploitation) with price discovery (exploration).
¹ The MM is willing to buy at the bid price and sell at the ask price.
The complexity of the sequential problem arises from the complexity of the state space and the fact
that the action space is continuous. The state of the market-maker must represent her belief about
the true value of the asset being traded. As such, it is a probability density function. In a parametric
setting, the state space is finite dimensional, but continuous. Even if we assume a Gaussian prior
for the market-maker's belief as well as for the beliefs of all the traders, the market-maker's beliefs
quickly become a complex product of error functions, and the exact dynamic programming problem
becomes intractable.
We solve the Bellman equation for the optimal sequential market maker within the framework of
Gaussian state space evolution, a close approximation to the true state space evolution. We present
simulation results which testify to how closely the Gaussian framework approximates the true evolution. The Gaussian approximation alone does not alleviate the difficulties associated with reinforcement learning in continuous action and state spaces.² However, within our setting, we prove a key
monotonicity property for the state update. This property allows us to solve for the value function
exactly using a single pass dynamic program.
Thus, our first contribution is a complete solution to the optimal sequential market making problem
within a Gaussian update framework. Our second contribution relates to the phenomenological
implications for market behavior. We obtain the surprising result that in periods of extreme shock,
when the market maker has large uncertainty relative to the traders, the monopolist provides greater
liquidity than competitive zero-profit market-makers. The monopolist increases liquidity, possibly
taking short term losses, in order to learn more quickly, and in doing so offers the better social
outcome. Of course, once the monopolist has adapted to the shock, she equilibrates at a higher bid
ask spread than the the corresponding zero-profit market maker with the same beliefs.
2 The Model and the Sequential Decision Problem
2.1 Market Model
At time 0, a shock occurs causing an instrument to attain value V which will be held fixed through
time (we consider one instrument in the market). This could represent a real market shock to a stock
value (change in public beliefs), an IPO, or the introduction of a new contract in a prediction market.
We use a model similar to Das's [3] extension of the Glosten and Milgrom [7] model. We assume
that trading is divided into a sequence of discrete trading time steps, each time step corresponding
to the arrival of a trader. The value V is drawn from some distribution gV (v).
The market-maker (MM), at each time step t ≥ 0, sets bid and ask prices b_t ≤ a_t at which she is
willing to respectively buy and sell one unit. Traders arrive at time-steps t ≥ 0. Trader t arrives with
a noisy estimate w_t of V, where w_t = V + η_t. The {η_t} are zero mean i.i.d. random variables with
distribution function F. We will assume that F is symmetric, so that F(−x) = 1 − F(x). The
trader decides whether to trade at either the bid or ask prices depending on the value of w_t. The trader
will buy at a_t if w_t > a_t (she thinks the instrument is undervalued), sell at b_t if w_t < b_t (she thinks
the instrument is overvalued) and do nothing otherwise. MM receives a signal x_t ∈ {+1, 0, −1}
indicating whether the trader bought, did nothing or sold. Note that information is conveyed only by
the direction of the trade. Information can also be conveyed by the patterns and size of trades, but
the present work abstracts away from those considerations.
The market-maker's objective is to maximize profit. In perfect competition, the MM is pushed to
setting bid and ask prices that yield zero expected profit. In a monopolistic setting, she wants to
optimize the profits she receives over time. As we will see below, this can be a difficult problem to
solve. A commonly used alternative is to consider a greedy, or myopically optimal MM who only
maximizes her expected profit from the next trade. This is a good approximation for agents with a
high discount factor, since they are more concerned with immediate reward. We will consider all
three types of market-makers, (1) Zero-profit, (2) Myopic, and (3) Optimal.
² Where one has to resort to unbounded value iteration methods whose convergence and uniqueness properties are little understood.
2.2 State Space
The state space for the MM is determined by the MM's belief about the value V, described by a density
function p_t at time step t. The MM decides on actions (bid and ask prices) (b_t, a_t) based on p_t. The
MM receives signal x_t ∈ {+1, 0, −1} as to whether the trader bought, sold, or did nothing.
Let q_t(V; b_t, a_t) be the probability of receiving signal x_t given bid and ask (b_t, a_t), conditioned on
V. Assuming that F is continuous at b_t − V and a_t − V, a straightforward calculation yields

q_t(V; b_t, a_t) = 1 − F(a_t − V)  if x_t = +1,
q_t(V; b_t, a_t) = F(a_t − V) − F(b_t − V)  if x_t = 0,
q_t(V; b_t, a_t) = F(b_t − V)  if x_t = −1,

or, q_t(V; b_t, a_t) = F(z_t⁺ − V) − F(z_t⁻ − V), where (z_t⁺, z_t⁻) are respectively (+∞, a_t), (a_t, b_t),
and (b_t, −∞) when x_t = +1, 0, −1. The Bayesian update to p_t is then given by

p_{t+1}(v) = p_t(v) q_t(v; b_t, a_t) / A_t,

where the normalization constant A_t = ∫_{−∞}^{∞} dv p_t(v) q_t(v; b_t, a_t). Unfolding the
recursion gives p_{t+1}(v) = p_0(v) ∏_{τ=1}^{t} q_τ(v; b_τ, a_τ) / A_τ.
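This exact update is easy to evaluate numerically. The sketch below is ours, not the paper's; the grid resolution, the Gaussian prior, and the Gaussian noise are illustrative assumptions. It tracks the posterior on a discretized value grid:

```python
# Exact belief update on a value grid; a minimal sketch, not the authors' code.
import numpy as np
from scipy.stats import norm

def update_belief(p, v_grid, x, b, a, sigma=1.0):
    """One Bayesian update p_{t+1}(v) = p_t(v) q_t(v; b, a) / A_t on a grid.

    p : current belief over v_grid (sums to 1); x : trade signal in {+1, 0, -1}.
    """
    F = lambda z: norm.cdf(z / sigma)          # noise CDF, F(x) = Phi(x / sigma)
    if x == +1:                                # trader bought: w > a
        q = 1.0 - F(a - v_grid)
    elif x == -1:                              # trader sold: w < b
        q = F(b - v_grid)
    else:                                      # no trade: b <= w <= a
        q = F(a - v_grid) - F(b - v_grid)
    post = p * q
    A = post.sum()                             # normalization constant A_t
    return post / A, A

# usage: Gaussian prior on a grid, then fold in one "buy" signal
v = np.linspace(-5.0, 5.0, 1001)
prior = norm.pdf(v, loc=0.0, scale=1.0)
prior /= prior.sum()
posterior, A = update_belief(prior, v, x=+1, b=-0.5, a=0.5)
```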
2.3 Solving for Market Maker Prices
Let b_t ≤ a_t, and let r_t be the expected profit at time t. The expected discounted return is then
R = Σ_{t=0}^{∞} γ^t r_t, where 0 < γ < 1 is the discount factor. The optimal MM maximizes R. We can compute
r_t as r_t = ∫_{−∞}^{∞} dv v F(−v) (p_t(v + b_t) + p_t(a_t − v)). r_t decomposes into two terms which can
be identified as the bid and ask side profits, r_t = r_t^bid(b_t) + r_t^ask(a_t). In perfect competition, MM
should not be expecting any profit on either the bid or ask side. This is because if the contrary were
true, a competing MM could place bid or ask prices so as to obtain less profit, wiping out MM's
advantage. This should hold at every time step. Hence the MM will set bid and ask prices such that
r_t^bid(b_t) = 0 and r_t^ask(a_t) = 0. Solving for b_t, a_t, we find that b_t and a_t must satisfy the following
fixed point equations (these are also derived for the case of Gaussian noise by Das [3]),

b_t = ∫ dv v p_t(v) F(b_t − v) / ∫ dv p_t(v) F(b_t − v) = E_{p_t}[V | x_t = −1],
a_t = ∫ dv v p_t(v) F(v − a_t) / ∫ dv p_t(v) F(v − a_t) = E_{p_t}[V | x_t = +1]
(assuming the denominators, which are the conditional probabilities of hitting the bid or ask, are
non-zero). The myopic monopolist maximizes r_t. For the typical case of well behaved distributions
p_t(v) and F, the bid and ask returns display a single maximum. In this case, we can obtain b_t^myp
and a_t^myp by setting the derivatives to zero (we assume the functions are well behaved so that the
derivatives are defined). Letting f(x) = F′(x) be the density function for the noise η_t, b_t^myp and
a_t^myp satisfy the fixed point equations

b_t = ∫ dv p_t(v) (v f(b_t − v) − F(b_t − v)) / ∫ dv p_t(v) f(b_t − v),
a_t = ∫ dv p_t(v) (v f(a_t − v) + F(v − a_t)) / ∫ dv p_t(v) f(a_t − v)
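Both sets of quotes are defined only implicitly, so in practice they are solved numerically. A minimal sketch of the zero-profit case follows (our code, not the authors'; the quadrature bounds, the undamped iteration, and the Gaussian belief and noise are assumptions):

```python
# Zero-profit quotes by fixed-point iteration with numerical quadrature.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def zero_profit_quotes(mu, sigma_t, sigma=1.0, iters=50):
    p = lambda v: norm.pdf(v, mu, sigma_t)          # belief p_t(v), assumed Gaussian
    F = lambda z: norm.cdf(z / sigma)               # noise CDF
    lo, hi = mu - 8 * sigma_t, mu + 8 * sigma_t     # effective support for quadrature
    b, a = mu - sigma_t, mu + sigma_t               # initial guesses
    for _ in range(iters):
        # b = E[V | sell] and a = E[V | buy] under the current quotes
        num_b = quad(lambda v: v * p(v) * F(b - v), lo, hi)[0]
        den_b = quad(lambda v: p(v) * F(b - v), lo, hi)[0]
        num_a = quad(lambda v: v * p(v) * F(v - a), lo, hi)[0]
        den_a = quad(lambda v: p(v) * F(v - a), lo, hi)[0]
        b, a = num_b / den_b, num_a / den_a
    return b, a

b, a = zero_profit_quotes(mu=0.0, sigma_t=2.0)      # spread is a - b
```

The myopic quotes can be iterated the same way with the integrands above swapped in.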
The optimal strategy for MM is not as easy to obtain. When γ is large, the expected discounted
return R could be significantly higher than the myopic return. The optimal MM might choose to
sacrifice short term return for a substantially larger return over the long term. The only reason to
do this is if choosing a sub-optimal short term strategy will lead to a significant decrease in the
uncertainty in V (which translates to a narrowing of the probability distribution p_t(v)). MM can
then exploit this more certain information regarding V in the longer term.
The optimal strategy for the MM is encapsulated in the Bellman equation for the value functional
(where the state p_t is a function, (b_t, a_t) is the action, and π is a policy):

V(p_t; π) = E[r_0 | p_t, b_t^π(p_t), a_t^π(p_t)] + γ E[V(p_{t+1}; π) | p_t, b_t^π(p_t), a_t^π(p_t)]

This equation reflects the fact that the MM's expected profit is a function of both her immediate
expected return, and her future state, which is also affected by her bid and ask prices. The fact that
V is a value functional leads to numerous technical problems when solving this Bellman equation.
The problem is heavily path dependent with the number of paths being exponential in the number
of trading periods. To make this tractable, we use a Gaussian approximation for the state space
evolution.
The Gaussian integrals tabulated in Figure 2 are

I(ρ, ε) = ∫ dx N(x) ∫_{−∞}^{ρ−εx} dy N(y) = Φ(ρ/√(1+ε²)),
J(ρ, ε) = ∫ dx x N(x) ∫_{−∞}^{ρ−εx} dy N(y) = −(ε/√(1+ε²)) N(ρ/√(1+ε²)),
K(ρ, ε) = ∫ dx x² N(x) ∫_{−∞}^{ρ−εx} dy N(y) = I(ρ, ε) − (ε²ρ/(1+ε²)^{3/2}) N(ρ/√(1+ε²)),
L(ρ, ε) = I(ρ, ε) − K(ρ, ε),

together with the normalization constants

A(z⁺, z⁻) = I((z⁺ − μ_t)/σ, ε_t) − I((z⁻ − μ_t)/σ, ε_t),
B(z⁺, z⁻) = J((z⁺ − μ_t)/σ, ε_t) − J((z⁻ − μ_t)/σ, ε_t),
C(z⁺, z⁻) = L((z⁺ − μ_t)/σ, ε_t) − L((z⁻ − μ_t)/σ, ε_t).

Figure 1: Gaussian state update (dashed) versus true state update (solid) after 2, 5, and 20 steps, illustrating that the Gaussian approximation is valid. Figure 2: Gaussian integrals and normalization constants used in the derivation of the DP and the state updates.
2.4 The Gaussian Approximation
From a Gaussian prior and performing Bayesian updates, one expects that the state distribution
will be closely approximated by a Gaussian (see Figure 1). Thus, forcing the MM to maintain a
Gaussian belief over the true value at each time t should give a good approximation to the true state
space evolution, and the resulting optimal actions should closely match the true optimal actions. In
making this reduction, we reduce the state space to a two parameter function class parameterized by
the mean and variance, (μ_t, σ_t²). The value function is independent of μ_t (hence dependent only on
σ_t), and the optimal action is of the form b_t = μ_t − ζ_t, a_t = μ_t + ζ_t. Thus,

V(σ_t) = max_ζ { r_t(σ_t, ζ) + γ E[V(σ_{t+1}) | ζ] }        (1)

To compute the expectation on the RHS, we need the probabilistic dynamics in the (approximate)
Gaussian state space, i.e., we need the evolution of μ_t, σ_t.
Let N(·), Φ(·) denote the standard normal density and distribution. Let p_t(v) = (1/σ_t) N((v−μ_t)/σ_t) be
Gaussian with mean μ_t and variance σ_t². Assume that the noise is also Gaussian with variance σ²,
so F(x) = Φ(x/σ). At time t + 1, after the Bayesian update, we have
p_{t+1}(v) = (1/A) (1/σ_t) N((v−μ_t)/σ_t) [ Φ((z⁺−v)/σ) − Φ((z⁻−v)/σ) ].

The normalization constant A(z⁺, z⁻) is given in Figure 2, and (z_t⁺, z_t⁻) are respectively
(+∞, a_t), (a_t, b_t), and (b_t, −∞) when x_t = +1, 0, −1. The updates μ_{t+1} and σ_{t+1}² are obtained from
E_{p_{t+1}}[V] = ∫ dv v p_{t+1}(v) and E_{p_{t+1}}[V²] = ∫ dv v² p_{t+1}(v). After some tedious algebra (see
supplementary information), we obtain

μ_{t+1} = μ_t + σ_t B/A,        (2)
σ_{t+1}² = σ_t² ( 1 − (AC + B²)/A² ).        (3)

Figure 2 gives the expressions for A, B, C.
Theorem 2.1 (Monotonic state update). σ_{t+1}² ≤ σ_t² (see supplementary information for proof).
Establishing that σ_t is decreasing in t allows us to solve the dynamic program efficiently (note that
the property of decreasing variance is well-known for the case of an update to a Gaussian prior when
the observation is also Gaussian; we are showing this for threshold observations).
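A compact sketch of this update follows (our code; the closed forms for I, J, and L follow the Figure 2 reconstruction above, and the clipping of the integration limits is a numerical convenience we added):

```python
# Approximate Gaussian state update (Eqs. 2-3); a sketch, not the authors' code.
import numpy as np
from scipy.stats import norm

def gaussian_update(mu, sig, x, b, a, sigma=1.0):
    """One update of the Gaussian belief (mu, sig) after trade signal x."""
    eps = sig / sigma                       # MM information disadvantage eps_t
    s = np.sqrt(1.0 + eps ** 2)
    I = lambda r: norm.cdf(r / s)
    J = lambda r: -(eps / s) * norm.pdf(r / s)
    L = lambda r: (eps ** 2 * r / s ** 3) * norm.pdf(r / s)   # L = I - K
    if x == +1:
        zp, zm = np.inf, a                  # buy: (z+, z-) = (inf, a)
    elif x == 0:
        zp, zm = a, b                       # no trade: (a, b)
    else:
        zp, zm = b, -np.inf                 # sell: (b, -inf)
    # clipping avoids inf * 0 = nan inside L; the tails are numerically zero
    rp = np.clip((zp - mu) / sigma, -40.0, 40.0)
    rm = np.clip((zm - mu) / sigma, -40.0, 40.0)
    A = I(rp) - I(rm)
    B = J(rp) - J(rm)
    C = L(rp) - L(rm)
    mu_next = mu + sig * B / A
    sig_next = sig * np.sqrt(1.0 - (A * C + B ** 2) / A ** 2)
    return mu_next, sig_next

mu1, sig1 = gaussian_update(0.0, 1.0, x=+1, b=-1.0, a=1.0)    # e.g. after a buy
```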
2.5 Solving the Bellman Equation
We now return to the Bellman equation (1). In light of Theorem 2.1, the RHS of this equation is
dependent only on states σ_{t+1} that are strictly smaller than the state σ_t on the LHS. We can thus
solve this problem numerically by computing V(0) and then building up the solution for a fine grid
on the real line. We use linear interpolation between previously computed points if the variance
update leads to a point not on the grid.
We need to explicitly construct the states on the RHS with respect to which the expectation is being
taken. The expectation is with respect to the future state σ_{t+1}, which depends directly on the trade
outcome x_t ∈ {−1, 0, +1}. We define ε_t = σ_t/σ and q = ζ_t/(σ√(1+ε_t²)), where a_t = μ_t + ζ_t and
b_t = μ_t − ζ_t. The following table summarizes some of the useful quantities:
x_t    Prob.          μ_{t+1}          σ_{t+1}
+1     1 − Φ(q_t)     μ_t + σ δ_t      σ_t β_t
0      2Φ(q_t) − 1    μ_t              σ_t τ_t
−1     1 − Φ(q_t)     μ_t − σ δ_t      σ_t β_t

where

β_t = √( 1 − ε_t² N(q_t)(N(q_t) − q_t[1 − Φ(q_t)]) / ((1+ε_t²)(1 − Φ(q_t))²) ),
τ_t² = 1 − 2ε_t² q_t N(q_t) / ((1+ε_t²)(2Φ(q_t) − 1)),
δ_t = (ε_t²/√(1+ε_t²)) · N(q_t)/(1 − Φ(q_t)).

Note that q_t > 0, β_t, τ_t < 1 and δ_t > 0. We can now compute E[V(σ_{t+1}) | σ_t] as

2(1 − Φ(q_t)) V(β_t σ_t) + (2Φ(q_t) − 1) V(τ_t σ_t).
This allows us to complete the specification for the Bellman equation (with x = ε_t², where ε_t = σ_t/σ
is the MM's information disadvantage; the value is measured in units of σ):

V(x; γ) = max_q { 2√(1+x) ( q(1 − Φ(q)) − (x/(1+x)) N(q) )
                 + γ [ 2(1 − Φ(q)) V(β²(x, q) x; γ) + (2Φ(q) − 1) V(τ²(x, q) x; γ) ] }

where β²(x, q) and τ²(x, q) are as defined above with ε_t² = x and q_t = q.
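Because Theorem 2.1 guarantees that both successor states β²(x, q)x and τ²(x, q)x are smaller than x, the Bellman equation can be solved in a single forward pass over a grid. The following sketch is ours; the grid sizes, the search range for q, and measuring value in units of σ are assumptions, not the authors' implementation:

```python
# Single-pass dynamic program over the state x = eps_t^2; a hedged sketch.
import numpy as np
from scipy.stats import norm

def solve_value_function(gamma=0.9, x_max=16.0, nx=400, nq=400):
    xs = np.linspace(0.0, x_max, nx)        # state grid
    qs = np.linspace(1e-3, 5.0, nq)         # candidate actions
    Phi, N = norm.cdf(qs), norm.pdf(qs)
    V = np.zeros(nx)
    for i, x in enumerate(xs):
        r = 2.0 * np.sqrt(1.0 + x) * (qs * (1.0 - Phi) - (x / (1.0 + x)) * N)
        if i == 0:
            V[0] = r.max() / (1.0 - gamma)  # V(0) = 2 q*(1 - Phi(q*)) / (1 - gamma)
            continue
        beta2 = 1.0 - x * N * (N - qs * (1.0 - Phi)) / ((1.0 + x) * (1.0 - Phi) ** 2)
        tau2 = 1.0 - 2.0 * x * qs * N / ((1.0 + x) * (2.0 * Phi - 1.0))
        # next states beta2*x, tau2*x are < x, so V there is already computed;
        # linear interpolation over the previously solved grid points
        Vb = np.interp(beta2 * x, xs[:i], V[:i])
        Vt = np.interp(tau2 * x, xs[:i], V[:i])
        V[i] = (r + gamma * (2.0 * (1.0 - Phi) * Vb + (2.0 * Phi - 1.0) * Vt)).max()
    return xs, V

xs, V = solve_value_function()
```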
We define the optimal action q*(x) as the value of q that maximizes the RHS. When x = 0,
the myopic and optimal MM coincide, and so we have that V(0) = 2q*(1 − Φ(q*))/(1 − γ), where q* =
q*(0) ≈ 0.7518 satisfies q* N(q*) = 1 − Φ(q*). Note that if we only maximize the first term in the
value function, we obtain the myopic action q^myp(ε), satisfying the fixed point equation:
q^myp = (1 + ε_t²)(1 − Φ(q^myp))/N(q^myp). There is a similarly elegant solution for the zero-profit MM under the Gaussian
assumption, obtained by setting r_t = 0, yielding the fixed point equation:
q^zero = (ε_t²/(1 + ε_t²)) · N(q^zero)/(1 − Φ(q^zero)).
10 standard fixed point iterations are sufficient to solve these equations accurately.
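All three fixed points are one-dimensional root-finding problems. In the sketch below (ours) we use a bracketing root-finder rather than the plain iteration mentioned above; that is an implementation choice of ours, not the paper's:

```python
# Optimal-at-x=0, myopic, and zero-profit actions as 1-D root-finding problems.
from scipy.optimize import brentq
from scipy.stats import norm

N, Phi = norm.pdf, norm.cdf

def q_star():
    # q* N(q*) = 1 - Phi(q*)  ->  q* ~ 0.7518
    return brentq(lambda q: q * N(q) - (1.0 - Phi(q)), 1e-6, 10.0)

def q_myopic(x):
    # q = (1 + x)(1 - Phi(q)) / N(q), written as a root of q N(q) - (1+x)(1-Phi(q))
    return brentq(lambda q: q * N(q) - (1.0 + x) * (1.0 - Phi(q)), 1e-6, 10.0)

def q_zero(x):
    # q = (x / (1 + x)) N(q) / (1 - Phi(q)); needs x > 0 for a sign change
    return brentq(lambda q: q * (1.0 - Phi(q)) - (x / (1.0 + x)) * N(q), 1e-9, 10.0)

print(round(q_star(), 4))    # -> 0.7518
```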
3 Experimental Results
First, we validate the Gaussian approximation by simulating a market as follows. The initial value
V is drawn from a Gaussian with mean 0 and standard deviation σ, and we set the discount rate
γ = 0.9. Each simulation consists of 100 trading periods at which point discounted returns become
negligible. At each trading step t, a new trader arrives with a valuation w_t ∼ N(V, 1) (Gaussian
with mean V and variance 1). We report results averaged over more than 10,000 simulations, each
with a randomly sampled value of V .
In each simulation, the market-maker's state updates are given by the Gaussian approximation (2),
(3), according to which she sets bid and ask prices. The trader at time-step t trades by comparing
w_t to b_t, a_t. We simulate the outcomes of the optimal, myopic, and zero-profit MMs. An alternative
6
Figure 3: MM properties derived from the solution of the Bellman equation. (a) Realized vs theoretical value function in the Gaussian approximation (thin black line); the realized closely matches the theoretical, validating the Gaussian framework. (b) Bid-ask spreads as a function of the MM information disadvantage ε, indicating that once ε exceeds about 1.5, the monopolist offers the greatest liquidity. (c) Realized average return as a function of time: the monopolist is willing to take significant short term loss to improve future profits as a result of better price discovery.
is to maintain the exact state as a product of error functions, and extract the mean and variance
for computing the optimal action. This is computationally prohibitive, and leads to no significant
differences. If the real world conformed to the MM's belief, a new value V_t would be drawn from
N(μ_t, σ_t) at each trading period t, and then the trader would receive a sample w_t ∼ N(V_t, 1). All
our computations are exact within this "Gaussian" world; however, the point here is to test the degree
to which the Gaussian and real worlds differ.
The ideal test of our optimal MM is against the true optimal for the real world, which is intractable.
However, if we find that the theoretical value function for the optimal MM in the Gaussian world
matches the realized value function in the real world, then we have strong, though not necessarily
conclusive, evidence for two conclusions: (1) The Gaussian world is a good approximation to the
real world, otherwise the realized and theoretical value functions would not coincide; (2) Since the
two worlds are nearly the same, the optimal MM in the Gaussian world should closely match the
true optimal. Figure 3(a) presents results which show that the realized and theoretical value functions are essentially the same, presenting the desired evidence (note that with independent updates,
the posterior should be asymptotically Gaussian). Figure 3(a) also demonstrates that the optimal
significantly outperforms the myopic market-maker. Figure 3(b) shows how the bid-ask spread will
behave as a function of the MM information disadvantage.
Some phenomenological properties of the market are shown in Figure 4.³ For a starting MM information disadvantage of ε = 3, the optimal MM initially has significantly lower spread, even
compared with the zero profit market-maker. The reason for this outcome is illustrated in Figure
3(c) where we see that the optimal market maker is offering lower spreads and taking on significant
initial loss to be compensated later by significant profits due to better price discovery. At equilibrium
the optimal MM's spread and the myopic spread are equal, as expected.
4 Discussion
Our solution to the Bellman equation for the optimal monopolistic MM leads to the striking conclusion that the optimal MM is willing to take early losses by offering lower spreads in order to make
significantly higher profits later (Figures 3(b,c) and 4). This is quantitative evidence that the optimal
MM offers more liquidity than a zero-profit MM after a market shock, especially when the MM is
at a large information disadvantage. In this regime, exploration is more important than exploitation.
Competition may actually impede the price discovery process, since the market makers would have
no incentive to take early losses for better price discovery: competitive pricing is not necessarily
informationally efficient (there are quicker ways for the market to "learn" a new valuation).
³ With both zero-profit and optimal MMs we reproduce one of the key findings of Das [3]: the market
exhibits a two-regime behavior. Price jumps are immediately followed by a regime of high spreads (the price-discovery regime), and then when the market-maker learns the new valuation, the market settles into an equilibrium regime of lower spreads (the efficient market regime).
Figure 4: Realized market properties based on simulating the three MMs. (a) Realized spread over time (ε = 3): the optimal MM starts with the lowest spread, and converges quickest to equilibrium. (b) Liquidity over time (ε = 3), measured by the probability of a trade: initial liquidity is higher for the optimal MM. (c) Time to spread stabilization: when the MM's information disadvantage increases, the optimal MM is significantly better.
Our solution is based on reducing a functional state space to a finite-dimensional one in which the
Bellman equation can be solved efficiently. When the state is a probability distribution, updated
according to independent events, we expect the Gaussian approximation to closely match the real
state evolution. Hence, our methods may be generally applicable to problems of this form.
While this paper presents a stylized model, simple trading models have been shown to produce rich
market behavior in many cases (for example, [5]). The results presented here are an example of
the kinds of insights that can be gained from studying market properties in these models while
approaching agent decision problems from the perspective of machine learning. At the same time,
this paper is not purely theoretical. The eventual algorithm we present is easy to implement, and we
are in the process of evaluating this algorithm in test prediction markets. Another direction we are
pursuing is to endow the traders with intelligence, so they may learn the true value too. We believe
the Gaussian approximation admits a solution for a monopolistic market-maker and adaptive traders.
References
[1] W.G. Christie and P.H. Schulz. Why do NASDAQ market makers avoid odd-eighth quotes? J. Fin., 49(5), 1994.
[2] V. Darley, A. Outkin, T. Plate, and F. Gao. Sixteenths or pennies? Observations from a simulation of the NASDAQ stock market. In IEEE/IAFE/INFORMS Conf. on Comp. Intel. for Fin. Engr., 2000.
[3] S. Das. A learning market-maker in the Glosten-Milgrom model. Quant. Fin., 5(2):169-180, April 2005.
[4] E. Even-Dar, S.M. Kakade, M. Kearns, and Y. Mansour. (In)stability properties of limit order dynamics. In Proc. ACM Conf. on Elect. Comm., 2006.
[5] J.D. Farmer, P. Patelli, and I.I. Zovko. The predictive power of zero intelligence in financial markets. PNAS, 102(11):2254-2259, 2005.
[6] L.R. Glosten. Insider trading, liquidity, and the role of the monopolist specialist. J. Bus., 62(2), 1989.
[7] L.R. Glosten and P.R. Milgrom. Bid, ask and transaction prices in a specialist market with heterogeneously informed traders. J. Fin. Econ., 14:71-100, 1985.
[8] S.J. Grossman and M.H. Miller. Liquidity and market structure. J. Fin., 43:617-633, 1988.
[9] Roger D. Huang and Hans R. Stoll. Dealer versus auction markets: A paired comparison of execution costs on NASDAQ and the NYSE. J. Fin. Econ., 41(3):313-357, 1996.
[10] S.M. Kakade, M. Kearns, Y. Mansour, and L. Ortiz. Competitive algorithms for VWAP and limit-order trading. In Proc. ACM Conf. on Elect. Comm., pages 189-198, 2004.
[11] Juong-Sik Lee and Boleslaw Szymanski. Auctions as a dynamic pricing mechanism for e-services. In Cheng Hsu, editor, Service Enterprise Integration, pages 131-156. Kluwer, New York, 2006.
[12] D. Pennock and R. Sami. Computational aspects of prediction markets. In N. Nisan, T. Roughgarden, E. Tardos, and V.V. Vazirani, editors, Algorithmic Game Theory. Cambridge University Press, 2007.
[13] Justin Wolfers and Eric Zitzewitz. Prediction markets. J. Econ. Persp., 18(2):107-126, 2004.
Stress, noradrenaline, and realistic prediction of mouse behaviour using reinforcement learning
Gediminas Lukšys¹,², Carmen Sandi², Wulfram Gerstner¹
1
Laboratory of Computational Neuroscience
2
Laboratory of Behavioural Genetics
École Polytechnique Fédérale de Lausanne (EPFL)
Lausanne, CH-1015, Switzerland
{gediminas.luksys,carmen.sandi,wulfram.gerstner}@epfl.ch
Abstract
Suppose we train an animal in a conditioning experiment. Can one predict how
a given animal, under given experimental conditions, would perform the task?
Since various factors such as stress, motivation, genetic background, and previous
errors in task performance can influence animal behaviour, this appears to be a
very challenging aim. Reinforcement learning (RL) models have been successful in modeling animal (and human) behaviour, but their success has been limited
because of uncertainty as to how to set meta-parameters (such as learning rate,
exploitation-exploration balance and future reward discount factor) that strongly
influence model performance. We show that a simple RL model whose metaparameters are controlled by an artificial neural network, fed with inputs such as
stress, affective phenotype, previous task performance, and even neuromodulatory manipulations, can successfully predict mouse behaviour in the ?hole-box?
- a simple conditioning task. Our results also provide important insights on how
stress and anxiety affect animal learning, performance accuracy, and discounting
of future rewards, and on how noradrenergic systems can interact with these processes.
1 Introduction
Animal behaviour is guided by rewards that can be received in different situations and by modulatory
factors, such as stress and motivation. It is known that acute stress can affect learning and memory by
modulating plasticity through stress hormones and neuromodulators [1, 2, 3], but their role in highlevel processes such as learning, memory, and action selection is not well understood. A number
of interesting conceptual and computational models have been proposed relating neuromodulatory
systems, cognitive processes, and abstract statistical quantities characterizing the environment [4, 5].
While such models provide great mechanistic insights, they alone are often unable to accurately
predict animal behaviour in a realistic situation due to a great number of diverse modulatory factors.
Stress [2], genotype [6], affective traits such as anxiety and impulsivity [7], motivation [8], and
evaluation of performance errors [9] can all influence individual performance in any single task, yet
it may prove difficult and inefficient to explicitly model each factor in order to accurately predict
animal behaviour. Instead, we propose a method which could account for the influence of arbitrary
modulatory factors on behaviour as control parameters of a general behavioural model.
In modeling reward-based behavioural learning, approaches based on the formal theory of reinforcement learning (RL) have been the most successful. The basic idea of RL is that animals (or artificial
agents) select their actions based on predicted future rewards that could be acquired upon taking
these actions. The expected values of future rewards for different actions (Q-values) can be gradually learned by observing rewards received under different state-action combinations. An efficient
way to do this is temporal difference (TD) learning [10], which uses an error signal that correlates
with the activity of dopaminergic neurons in the Substantia Nigra [11]. TD models have been successfully applied to explain a wide range of experimental data, including animal conditioning [8],
human decision-making [12], and even addiction [13].
Learning and action selection in TD models can be strongly influenced by the choice of model metaparameters such as the learning rate, the future reward discounting, and the exploitation-exploration
balance. While in most modeling studies they have received relatively little attention, it has been proposed that RL meta-parameters are related to specific neuromodulators - noradrenaline, serotonin,
acetylcholine [14], and to neural activity occurring in different brain regions - notably amygdala,
striatum, and anterior cingulate [15]. Modulatory factors such as stress, anxiety, and impulsivity
often act through the same brain systems, which suggests that in RL models their effects could be
expressed through changes in meta-parameter values.
In the present study, we tested mouse behaviour in a simple conditioning task - the hole-box, and
showed how various modulatory factors could control a simple RL model to accurately predict animal behaviour. We used food-deprived mice of two genetic strains, "calm" C57BL/6 and "anxious"
DBA/2 [6], half of which were exposed to an additional stressor (sitting on an elevated platform before each experimental session). We formalized animal behaviour using a simple RL model, and
trained an artificial neural network that could control RL meta-parameters using information about
stress, motivation, individual affective traits, and previous learning success. We demonstrate that
such model can successfully predict mouse behaviour in the hole-box task and that the resulting
model meta-parameters provide useful insights into how animals adjust their performance throughout the course of a learning experience, and how they respond to stressors and motivational demands.
Finally, using systemic manipulations of the noradrenergic system we show how noradrenaline interacts with stress and anxiety in regulating performance accuracy and temporal discounting.
2 Description of the hole-box experiment
In our hole-box experiments, we used 64 male mice (32 of C57BL/6 strain and 32 of DBA/2 strain)
that were 10 weeks old at the beginning of the experiment. During an experimental session, each
animal was placed into the hole-box (Figure 1a). The mice had to learn to make a nose poke into
the hole upon the onset of lights and not to make it under the condition of no light. After a response
to light, the animals (which were food deprived to 87.3 ± 1.0% of their initial weight) received a
reward in form of a food pellet (Figure 1b). The inter-trial interval (ITI) between subsequent trials
was varying: the probability of a new trial during each 0.5 sec long time step was 1/30, resulting in
the average ITI of 15 sec. The total session duration was 500 sec, equivalent to 1000 time steps.
Figure 1: a. Scheme of the hole-box. b. Protocol of the hole-box experiment. c. Hole-box state-action chart. Rectangles are states, thin arrows are actions.
During 2 days of habituation (when the food delivery was not paired with light) the mice learned that
food could be delivered from the boxes. After this, they were trained for 8 consecutive days, during
which half of the mice were exposed to extrinsic stress (30 min on the elevated platform) before
each training session. On training days 3, 6, and 8, animals have been injected i.p. (5 ml/kg, 30
min before the experimental session) with either saline (1/2 of mice), or adrenergic alpha-2 agonist
clonidine (1/4 of mice, 0.05 mg/kg) that reduces brain noradrenaline levels, or adrenergic alpha-2
antagonist yohimbine (1/4 of mice, 1 mg/kg) that increases brain noradrenaline levels. Mice of each
strain were treated equivalently with respect to pharmacological and stress conditions. Stress and
pharmacological treatment groups were the same during all training days.
3 Challenges of behavioural analysis
To quantify animal performance in the hole-box experiment, we used 7 different performance measures (PMs). These were behavioural statistics, calculated for each daily session: number of trials
(within 500 sec), number of ITI pokes, mean response time (after light onset), mean nose poke
duration, number of uneaten food pellets, "TimePreference"¹, and "DurationPreference"². Different PMs reflected different aspects of behaviour: learning to respond, associating responses with
light, overcoming anxiety to make sufficiently long nose pokes, etc. For this reason, during the process of learning PMs exhibited a variety of dynamics: slowly increasing numbers of trials, rapidly
decreasing mean response times, first increasing and later decreasing numbers of ITI pokes.
Figure 2: a. Development of selected PMs with learning for C57BL/6 mice. b. Results of the PCA
applied for all PMs: eigenvalues and loadings for the first 3 components.
When comparing the PMs between different experimental groups (Figure 2a), it is often hard to
interpret the differences, as each PM describes an unknown mixture of cognitive processes such
as learning, memory, performance intensity and accuracy. In some cases, performing a principal
component analysis (PCA) or similar tools may be suitable for reducing the behavioural measures
to few main components that could be easily interpreted [16]. However, more often that is not
the case - for instance, in our experiment, the 3 principal components are not sufficient to explain
even 75% of the variation, and the composition of the components is not easy to interpret (Figure
2b). As an alternative to conventional behavioural analysis, we propose that a computational model
of behaviour, based on reinforcement learning, could be sufficiently flexible to fit a wide range of
behavioural effects, and in contrast to the PMs, RL meta-parameters could be easily interpreted in
cognitive terms.
4 Modeling the hole-box using reinforcement learning
We used a simple temporal difference RL model to formalize the behaviour. Conceptually, the
model had 4 states: [ITI, trial] x [animal outside, making a nose poke], and 2 actions: move (in
or out) and stay. However, to make model?s performance realistic several extensions had to be
introduced (Figure 1c). First of all, the state animal outside was divided into 6 states corresponding
¹ TimePreference = (average time between adjacent ITI pokes) / (average response time)
² DurationPreference = (average trial response poke duration) / (average ITI poke duration)
to different places in the box which the animal could occupy, adding additional actions for the
transitions between these new states (moving around the box). Secondly, we observed that when
our animals made too short trial responses (with nose poke duration under 0.5 sec), they often could
not pick up the delivered food. Conversely, when the nose pokes were longer than 1.5 sec, animals
nearly always managed to pick up the delivered food immediately. To account for this, the state
making a nose poke was divided into 5 states, representing different nose poke durations, with the
increasing probability of picking up the reward (to keep things simple, we chose a linear increase:
from p = 0.2 for the first state to p = 1.0 for the fifth). Note that a food pellet is delivered at the
start of each trial response, irrespectively of whether the animal picks it up during that nose poke or
not. Unconsumed pellets could be eaten during later (sufficiently long) ITI nose pokes.
The Q-values, defined as Q(s_t, a_t) = E[r(t) + γ r(t+1) + γ² r(t+2) + ... | s_t, a_t], were updated
based on the temporal difference error:

ΔQ(s_t, a_t) = α [ r(t) + γ Q(s_{t+1}, a_{t+1}) − Q(s_t, a_t) ],        (1)

where r(t) is the reward at time t, s_t the state, a_t the action, α the learning rate, and γ the future
reward discount factor. High γ values (close to 1) signified that future rewards were given high
weight, while low γ values (0-0.5) meant that immediate rewards were preferred. Actions were
chosen probabilistically, based on Q-values and the exploitation factor β, as follows:

p(a_i | s) = exp(β Q(s, a_i)) / Σ_{k∈A(s)} exp(β Q(s, a_k)),        (2)

where A(s) are actions available at state s. Low β values implied that the actions were being chosen
more or less randomly (exploration), while high β values strongly biased the choice towards the
action(s) with the highest Q-value (exploitation). Q-values were initialized as zeros before the first
training day, and the starting state was always ITI / outside, near the hole.
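A minimal sketch of the update (1) and the softmax choice rule (2) in code follows (ours; the dictionary-backed Q-table and the example state names are illustrative assumptions about the hole-box state-action chart):

```python
# Tabular TD update (Eq. 1) and softmax action selection (Eq. 2); a sketch.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def softmax_action(Q, s, actions, beta):
    """Choose among available actions with p(a|s) ~ exp(beta * Q[s, a])."""
    prefs = np.array([beta * Q[(s, a)] for a in actions])
    prefs -= prefs.max()                        # numerical stability
    p = np.exp(prefs)
    p /= p.sum()
    return actions[rng.choice(len(actions), p=p)]

def td_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    """Eq. 1: Q(s,a) += alpha * [r + gamma * Q(s',a') - Q(s,a)]."""
    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])

# usage with a dict-backed Q-table over (state, action) pairs
Q = defaultdict(float)                          # zero-initialized Q-values
s = ("ITI", "outside_near_hole")                # hypothetical state label
a = softmax_action(Q, s, ["stay", "move_in"], beta=2.0)
td_update(Q, s, a, r=0.0, s_next=("trial", "outside_near_hole"),
          a_next="move_in", alpha=0.3, gamma=0.8)
```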
5 Predicting mouse behaviour using dynamic control of model meta-parameters
To compare the model with animal behaviour we used the following goodness-of-fit function [17]:

χ² = Σ_{k=1}^{N_PM} (PM_k^exp − PM_k^mod(α, β, γ))² / (σ_k^exp)²,        (3)

where PM_k^exp and PM_k^mod are the PMs calculated for each animal and the model, respectively, and
N_PM = 7 is the number of the PMs. PM_k^mod(α, β, γ) were calculated after simulation of one
session (averaged over multiple runs) with fixed values of the meta-parameters. To evaluate whether
our model is sufficiently flexible to fit a wide range of animal behaviours (including effects of stress,
strain, and noradrenaline), we performed an estimation procedure of daily meta-parameters. Using
stochastic gradient ascent from multiple starting points, we minimized (3) with respect to α, β, γ
for each session separately by systematically varying the meta-parameters in the following ranges:
α, γ ∈ [0.03, 0.99] and β ∈ [10⁻¹, 10^1.5]. To evaluate how well the model fits the experimental data
we used a χ²-test with ν = N_PM − 3 degrees of freedom (since our model has 3 free parameters).
The P(χ², ν) value, defined as the probability that a realization of a chi-square-distributed random
variable would exceed χ² by chance, was calculated for each session separately. Generally, values
of P(χ², ν) > 0.01 correspond to a fairly good model [17].
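The fit criterion itself is a few lines of code. The sketch below (ours) computes χ² and the exceedance probability P(χ², ν); the simulator producing the model PMs is assumed, not shown, and the example numbers are placeholders:

```python
# Goodness-of-fit (Eq. 3) and its chi-square exceedance probability; a sketch.
import numpy as np
from scipy.stats import chi2

def fit_quality(pm_exp, pm_sigma, pm_model, n_free=3):
    """chi^2 = sum_k (PM_k^exp - PM_k^mod)^2 / sigma_k^2, with nu = N_PM - n_free."""
    pm_exp, pm_sigma, pm_model = map(np.asarray, (pm_exp, pm_sigma, pm_model))
    chi_sq = np.sum((pm_exp - pm_model) ** 2 / pm_sigma ** 2)
    nu = len(pm_exp) - n_free
    p_value = chi2.sf(chi_sq, nu)        # P(chi^2, nu): probability of exceeding chi^2
    return chi_sq, p_value

# e.g. 7 performance measures and 3 free meta-parameters -> nu = 4
chi_sq, p = fit_quality(pm_exp=[52, 9, 2.1, 1.4, 3, 1.8, 2.5],
                        pm_sigma=[5, 3, 0.4, 0.3, 1, 0.5, 0.6],
                        pm_model=[50, 11, 2.3, 1.2, 4, 1.6, 2.2])
```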
Even if our RL model with estimated meta-parameters is capable of reproducing behaviour of different experimental groups in the hole-box, this does not tell us how, given a new animal in an arbitrary
experimental condition, we should set daily meta-parameters to predict its behaviour. However,
information about the animal's affective phenotype, its experimental condition, and recent task performance may be helpful in determining these meta-parameter settings, and thus, predicting behaviour.
For this purpose, we trained an artificial neural network (NN) model (Figure 3b), whose outputs
would be the predicted values of α, β, and γ. The inputs of the model included the following information: the animal's genetic strain (0 for C57BL/6, 1 for DBA/2), its anxiety (% of time it spends in the
center of the open field, a separate experiment for characterization of affective traits), its novelty
response (% of time it spends in the center of the field once a novel object is introduced there),
stress prior to a training session (0 or 1), motivation (% of initial weight, correlating with hunger),
noradrenergic manipulation (-1 for NA reduction, 1 for NA increase, and 0 for control), and two
important measures describing performance on the previous day: a number of food pellets eaten
("rewards"), and a number of nose pokes during which no food was consumed ("misses"). Our NN
had merely 4 hidden layer "neurons" (to prevent over-fitting, as we only had 762 samples of
data for training and validation). Its target outputs were the daily estimated meta-parameter sets, and
after normalizing inputs and targets to zero mean and unit variance, the network was trained (100
times) using the Levenberg-Marquardt method [18]. Because of the normalization, the resulting
mean square errors (MSEs) directly indicated how much variance in the meta-parameters could not
be explained by the NN.
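A hedged sketch of this predictor follows (ours): a 4-hidden-unit MLP as in the text, but trained with L-BFGS, since common libraries such as scikit-learn do not ship Levenberg-Marquardt; the input column ordering and the demo data are assumptions:

```python
# Meta-parameter prediction network; a sketch, not the authors' implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def train_metaparam_net(X, Y, seed=0):
    """Fit a small MLP mapping 8 modulatory inputs to (alpha, beta, gamma)."""
    sx, sy = StandardScaler().fit(X), StandardScaler().fit(Y)
    net = MLPRegressor(hidden_layer_sizes=(4,), activation="tanh",
                       solver="lbfgs", max_iter=2000, random_state=seed)
    net.fit(sx.transform(X), sy.transform(Y))
    mse = np.mean((net.predict(sx.transform(X)) - sy.transform(Y)) ** 2)
    return net, sx, sy, mse      # mse ~ unexplained variance per standardized target

# X columns (assumed order): strain, anxiety, novelty response, stress,
# motivation, NA manipulation, rewards(t-1), misses(t-1); Y: alpha, beta, gamma.
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(762, 8))   # 762 sessions, as in the study
Y_demo = rng.normal(size=(762, 3))   # placeholder targets for illustration only
net, sx, sy, mse = train_metaparam_net(X_demo, Y_demo)
```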
Using 10 trained networks with lowest MSEs, we performed simulations to analyze how much different input factors affect each meta-parameter. For this purpose we simulated the NN 10⁶ times,
linearly varying 1 or 2 selected inputs, while all the remaining inputs would be given random values with zero mean and unit variance. Then we could plot mean resulting meta-parameter values
corresponding to different values of the selected inputs. The range of meta-parameter variation and
relative noise in such plots indicated how strongly the selected inputs (compared to other inputs)
influenced the resulting meta-parameters. Finally, to predict the performance of selected animals
and the differences between experimental groups, we simulated the NN with input values of each
animal and analyzed the resulting meta-parameters.
Figure 3: a. Comparison of model performance and animal behaviour. b. Scheme of the NN model.
c. Comparison of daily estimated meta-parameters and outputs of the trained NN model. In a and c
arbitrary performance measures and experimental groups were selected for comparison.
6 Results
The results of daily meta-parameter estimation indicated a good fit between the model and animal
performance (Figure 3a). The condition P(χ², ν) > 0.01 was satisfied for 92% of estimated parameter sets. The mean χ² value was ⟨χ²⟩ = 5.4, or only ⟨χ²⟩ = 0.77 per PM.
Figure 4: Estimated daily meta-parameter values and differences between experimental conditions.
a. Exploitation factors β, strain, and stress. b. Reward discount factors γ and mouse strain. c.
Effects of noradrenergic manipulations (on days 3, 6, and 8).
Meta-parameters, estimated for each daily session, indicated interesting dynamics as well as some
profound differences depending on stress condition, animal's strain, and noradrenergic manipulation. During the process of learning, estimated exploitation-exploration factors β and future reward
discount factors γ showed progressive increase (Figure 4a,b; regression p < 0.001), meaning that
the better animals learn the task, the more accurately they use their knowledge for selecting actions, and the longer time horizon they can take into account. In addition, extrinsic stress increases
exploitation factors β for calm C57BL/6 mice (ANOVA p < 0.01) but not for anxious DBA/2
mice (Figure 4a). Reward discount factors γ were higher for C57BL/6 mice (Figure 4b, ANOVA
p < 0.001), indicating that anxious DBA/2 mice act more impulsively. Dynamics of the learning
rates and effects of stress on future reward discounting showed certain trends, however, for these
daily estimated values they were not significant. For the pharmacological manipulations, two results
were significant (Figure 4c): a decrease in noradrenaline led to reduced exploitation factors for the
anxious DBA/2 mice (ANOVA p < 0.001), and to increased reward discount factors for C57BL/6
mice (on day 3, t-test p < 0.01), suggesting that decreasing NA levels counteracts anxiety and
impulsivity.
A problem of daily estimated meta-parameters is their excessive flexibility, allowing them to follow
everyday ups and downs of individual animal behaviour, many of which happen because of factors
unknown to the experimenter. This "noise" often makes it difficult to see the effects that known
factors (such as stress and strain) have on meta-parameter dynamics. Results of the trained NN
model for prediction of daily meta-parameters indicated that only about 25% of their variation could
be explained. However, the resulting meta-parameter averages for experimental groups indicated
a very good fit with estimated daily meta-parameters (Figure 3c). It is also evident that different
meta-parameters can be predicted to a different extent: for the learning rates only a small part of
variation can be explained (M SE(?) = 0.92), while for exploitation and reward discount factors a substantial part (M SE(?) = 0.72, M SE(?) = 0.62), showing that their values are more reliable
and more sensitive to modulatory influences. The comparison of NN training and validation errors
(Figure 5a) indicated that the effects of over-fitting were negligible.
Figure 5: a. Typical training and validation errors for the NN model. b. Model simulations: interactions between anxiety and noradrenaline in affecting exploitation factors β and reward discount
factors γ. c. Model simulations: interactions between rewards and misses in task performance. In b
and c light colors represent high meta-parameter values, dark colors - low values.
The meta-parameter prediction model allows us to analyze how (and how much) each modulatory
factor affects meta-parameters and what the interactions between factors are. This is particularly
useful for studying possibly non-linear interactions between continuous-valued factors, such as anxiety, motivation, and previous task performance. Results in Figure 5b,c describe such interactions.
The level of noise in the color plots indicates that previous task performance (Fig. 5c) has a relatively strong influence on meta-parameters, compared to that of anxiety (Fig. 5b). Future reward
discounting is mainly affected by received rewards, while for exploitation factors misses also have
a significant effect, supporting the observation that well-trained animals (who receive many rewards
and make few misses) decrease their effort to perform quickly and accurately (Fig. 5c). Finally, anxiety and high noradrenaline levels act additively in lowering the reward discount factors, while their
effects on exploitation factors are more complex: for calm animals an NA increase leads to higher exploitation, but for highly anxious animals (whose NA levels are already presumably high) increasing
NA does not improve their performance accuracy (Fig. 5b).
When comparing meta-parameter averages between various experimental conditions, the output of
the NN model fits the daily estimated values well (Figure 3c); however, the dynamics become much
smoother and the error bars much smaller, since they account only for the known factors included in
the NN input. While all meta-parameter effects observed when comparing daily estimated values are
reproduced, excluding unpredicted variability makes some additional effects statistically significant.
For instance, it is evident (Figure 6a) that extrinsic stress decreases future reward discount factors
for the DBA/2 mice (ANOVA p < 0.01) and that the learning rates slightly decrease with learning,
particularly for the C57BL/6 mice (regression p < 0.01). The effects of the pharmacological manipulations of the noradrenergic system have been "denoised" as well, and several additional effects
become evident (Figure 6b). For C57BL/6 mice, stress plays an important role in modulating effects
of NA: non-stressed mice increase their exploitation upon increased NA level (ANOVA p < 0.01),
and slightly decrease it upon decreased NA levels. Stressed mice do not show significant changes
in exploitation factors. For DBA/2 mice, stimulating noradrenergic function does not lead to higher
exploitation factors (similarly to stressed C57BL/6 mice), but their future reward discounting is
sensitive to NA changes: the lower the NA, the higher their γ values (ANOVA, p < 0.01).
Figure 6: "Denoised" meta-parameters: outputs of the trained neural network model. Several additional differences between experimental conditions become evident. a. Meta-parameters, stress,
and strain. b. Effects of noradrenergic manipulations (on days 3, 6, and 8).
7 Discussion
In this paper, we demonstrated that a simple RL model, whose parameters are controlled by a neural
network that uses the information about various modulatory influences, can successfully predict
mouse behaviour in the hole-box conditioning task. Compared to the conventional performance
measures, the resulting meta-parameters of our model showed more pronounced effects between
experimental groups and they have the additional advantage of being easier to relate to cognitive
processes. Moreover, the results of pharmacological manipulations provided supporting evidence
that RL meta-parameters are indeed related to neuromodulators such as noradrenaline.
The progressive increase of exploitation factors β and the decrease of learning rates α are consistent with how the meta-parameters of artificial agents should presumably be controlled to achieve
optimal performance [14]. The increase in reward discount factors γ may have fundamental reasons too; e.g., when exposed to a new environment, hungry animals may become anxious about the
uncertainty of the situation (whether they will be able to find food to survive), which makes them
prefer immediate rewards to delayed ones. However, it may also be related to the specific reward
structure in the model. In order to stay in the hole for longer than one time step (and thus have a higher
chance to pick up the food), γ values should be much larger than 0.5. In addition, to avoid making
unnecessary ITI pokes (given that food is usually picked up during the trial response), γ values close
to 1.0 are necessary. For this reason, animal behavioural dynamics (e.g., when the mice start making sufficiently long nose pokes, and when, if at all, they learn to avoid making ITI pokes) could
determine (or be determined by) the prevailing dynamics of the γ values.
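A toy calculation makes the constraint on γ concrete (plain Python; the numbers are illustrative, not fitted values):

for gamma in (0.5, 0.8, 0.95):
    print(gamma, [round(gamma ** d, 3) for d in range(1, 5)])
# gamma = 0.5: a reward delayed by d steps keeps only 0.5, 0.25, 0.125, ...
# of its value, so waiting in the hole is heavily discouraged; values near
# 1.0 preserve most of a delayed reward, consistent with the argument above.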
Our specific results provide insights into the biological mechanisms of stress, anxiety, and behavioural performance, and how they relate to formal RL quantities. Stress increased performance accuracy (β
factors) for the calm C57BL/6 mice, but not for the anxious DBA/2 mice. Similarly, increasing
noradrenaline levels had a positive effect on β values only for the non-stressed C57BL/6 mice, but not for
the other groups, while decreasing NA levels had the strongest negative effect on β values for the anxious
DBA/2 mice. This suggests that within a certain range (which depends on the animal's anxiety), performance accuracy is determined by NA level. Outside this range, NA effects get saturated or may
even get reversed, as suggested by the inverted-U relation theory of arousal/stress effects on
cognition [4]. The effects of stress, strain, and NA on future reward discounting indicate that stress,
high anxiety, and elevated noradrenaline are all detrimental for learning delayed future rewards.
However, since the effects of NA and stress on reward discount factors are more pronounced for
DBA/2 mice, γ values might be sensitive to noradrenaline at higher levels than β values are. It is also likely
that serotonin, mPFC, and other brain systems often implicated in the processing of delayed rewards
[15, 19] may be interacting with stress and NA in controlling future reward discounting.
Although the basis of our hole-box behavioural prediction is a simple RL model with discrete states
and actions, it is not obvious that such a model could predict animal behaviour in other significantly
more complex tasks. However, even in more complex models (involving continuous state-action
spaces, episodic memories, etc.), a RL-like module is likely to be central to their performance,
and a similar approach could be applied for controlling its meta-parameters based on numerous
modulatory influences. Further studies relating such meta-parameters to other neuromodulatory
systems and activation patterns of specific brain areas could provide interesting insights and may
prove to be an ultimate test-box for the biological relevance of such an approach.
References
[1] J. J. Kim and K. S. Yoon. Stress: metaplastic effects in the hippocampus. TINS, 21(12):505–9, 1998.
[2] C. Sandi, M. Loscertales, and C. Guaza. Experience-dependent facilitating effect of corticosterone on spatial memory formation in the water maze. Eur J Neurosci., 9(4):637–42, Apr 1997.
[3] M. Joels, Z. Pu, O. Wiegert, M. S. Oitzl, and H. J. Krugers. Learning under stress: how does it work? Trends Cogn Sci., 10(4):152–8, Apr 2006.
[4] G. Aston-Jones, J. Rajkowski, and J. Cohen. Locus coeruleus and regulation of behavioral flexibility and attention. Prog Brain Res., 126:165–82, 2000.
[5] A. J. Yu and P. Dayan. Uncertainty, neuromodulation, and attention. Neuron, 46:681–92, May 2005.
[6] A. Holmes, C. C. Wrenn, A. P. Harris, K. E. Thayer, and J. N. Crawley. Behavioral profiles of inbred strains on novel olfactory, spatial and emotional tests for reference memory in mice. Genes Brain Behav., 1(1):55–69, Jan 2002.
[7] M. J. Kreek, D. A. Nielsen, E. R. Butelman, and K. S. LaForge. Genetic influences on impulsivity, risk taking, stress responsivity and vulnerability to drug abuse and addiction. Nat Neurosci., 8:1450–7, 2005.
[8] P. Dayan and B. W. Balleine. Reward, motivation, and reinforcement learning. Neuron, 36:285–98, 2002.
[9] M. M. Botvinick, T. S. Braver, C. S. Carter, D. M. Barch, and J. D. Cohen. Conflict monitoring and cognitive control. Psychol Review, 108(3):624–52, Mar 2001.
[10] R. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[11] W. Schultz, P. Dayan, and P. R. Montague. A neural substrate of prediction and reward. Science, 275(5306):1593–9, Mar 1997.
[12] S. C. Tanaka, K. Doya, G. Okada, K. Ueda, Y. Okamoto, and S. Yamawaki. Prediction of immediate and future rewards differentially recruits cortico-basal ganglia loops. Nat Neurosci., 7:887–93, Jul 2004.
[13] A. D. Redish. Addiction as a computational process gone awry. Science, 306(5703):1944–7, 2004.
[14] K. Doya. Metalearning and neuromodulation. Neural Netw, 15(4-6):495–506, Jun-Jul 2002.
[15] K. Doya. Modulators of decision making. Nat Neurosci., 11:410–6, Apr 2008.
[16] Y. Clement, C. Joubert, C. Kopp, E. M. Lepicard, P. Venault, R. Misslin, M. Cadot, and G. Chapouthier. Anxiety in mice: a principal component analysis study. Neural Plast., 35457, Mar 2007.
[17] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, 1992.
[18] D. Marquardt. An algorithm for least squares estimation of nonlinear parameters. SIAM J. Appl. Math, 11:431–441, 1963.
[19] J. Amat, M. V. Baratta, E. Paul, S. T. Bland, L. R. Watkins, and S. F. Maier. Medial prefrontal cortex determines how stressor controllability affects behavior and dorsal raphe nucleus. Nat Neurosci., 8(3):365–71, Mar 2005.
2,871 | 3,602 | Dimensionality Reduction for Data in Multiple
Feature Representations
Yen-Yu Lin^{1,2}    Tyng-Luh Liu^1    Chiou-Shann Fuh^2
^1 Institute of Information Science, Academia Sinica, Taipei, Taiwan
{yylin, liutyng}@iis.sinica.edu.tw
^2 Department of CSIE, National Taiwan University, Taipei, Taiwan
[email protected]
Abstract
In solving complex visual learning tasks, adopting multiple descriptors to more
precisely characterize the data has been a feasible way for improving performance.
These representations are typically high dimensional and assume diverse forms.
Thus finding a way to transform them into a unified space of lower dimension
generally facilitates the underlying tasks, such as object recognition or clustering. We describe an approach that incorporates multiple kernel learning with
dimensionality reduction (MKL-DR). While the proposed framework is flexible
in simultaneously tackling data in various feature representations, the formulation
itself is general in that it is established upon graph embedding. It follows that
any dimensionality reduction techniques explainable by graph embedding can be
generalized by our method to consider data in multiple feature representations.
1 Introduction
The fact that most visual learning problems deal with high dimensional data has made dimensionality reduction an inherent part of the current research. Besides having the potential for a more
efficient approach, working with a new space of lower dimension often can gain the advantage of
better analyzing the intrinsic structures in the data for various applications, e.g., [3, 7]. However,
despite the great applicability, the existing dimensionality reduction methods suffer from two main
restrictions. First, many of them, especially the linear ones, require data to be represented in the
form of feature vectors. The limitation may eventually reduce the effectiveness of the overall algorithms when the data of interest could be more precisely characterized in other forms, such as
bag-of-features [1, 11] or high order tensors [19]. Second, there seems to be a lack of a systematic
way of integrating multiple image features for dimensionality reduction. When addressing applications where no single descriptor can appropriately depict the whole dataset, this shortcoming becomes
even more evident. Unfortunately, this is usually the case for complex visual learning tasks [4].
Aiming to relax the two above-mentioned restrictions, we introduce an approach called MKL-DR
that incorporates multiple kernel learning (MKL) into the training process of dimensionality reduction (DR) algorithms. Our approach is inspired by the work of Kim et al. [8], in which learning an
optimal kernel over a given convex set of kernels is coupled with kernel Fisher discriminant analysis (KFDA), but their method only considers binary-class data. Without the restriction, MKL-DR
manifests its flexibility in two aspects. First, it works with multiple base kernels, each of which
is created based on a specific kind of visual feature, and combines these features in the domain of
kernel matrices. Second, the formulation is illustrated with the framework of graph embedding [19],
which presents a unified view for a large family of DR methods. Therefore the proposed MKL-DR
is ready to generalize any DR methods if they are expressible by graph embedding. Note that these
DR methods include supervised, semisupervised and unsupervised ones.
2 Related work
This section describes some of the key concepts used in the establishment of the proposed approach,
including graph embedding and multiple kernel learning.
2.1 Graph embedding
Many dimensionality reduction methods focus on modeling the pairwise relationships among data,
and utilize graph-based structures. In particular, the framework of graph embedding [19] provides
a unified formulation for a set of DR algorithms. Let \Omega = \{x_i \in R^d\}_{i=1}^N be the dataset. A DR
scheme accounted for by graph embedding involves a complete graph G whose vertices are over
\Omega. An affinity matrix W = [w_{ij}] \in R^{N \times N} is used to record the edge weights that characterize the
similarity relationships between training sample pairs. Then the optimal linear embedding v^* \in R^d
can be obtained by solving

v^* = \arg\min_{v:\, v^T X D X^T v = 1 \text{ or } v^T X L' X^T v = 1} v^T X L X^T v,    (1)

where X = [x_1 x_2 \cdots x_N] is the data matrix, and L = diag(W \cdot 1) - W is the graph Laplacian
of G. Depending on the property of a problem, one of the two constraints in (1) will be used in the
optimization. If the first constraint is chosen, a diagonal matrix D = [d_{ij}] \in R^{N \times N} is included
for scale normalization. Otherwise another complete graph G' over \Omega is required for the second
constraint, where L' and W' = [w'_{ij}] \in R^{N \times N} are respectively the graph Laplacian and affinity
matrix of G'. The meaning of (1) can be better understood with the following equivalent problem:

\min_v  \sum_{i,j=1}^{N} || v^T x_i - v^T x_j ||^2 w_{ij}    (2)
subject to  \sum_{i=1}^{N} || v^T x_i ||^2 d_{ii} = 1,    (3)
or  \sum_{i,j=1}^{N} || v^T x_i - v^T x_j ||^2 w'_{ij} = 1.    (4)

The constrained optimization problem (2) implies that pairwise distances or distances to the origin
of the projected data (in the form of v^T x) are modeled by one or two graphs in the framework. By
specifying W and D (or W and W'), Yan et al. [19] show that a set of dimensionality reduction
methods, such as PCA, LPP [7], LDA, and MFA [19], can be expressed by (1).
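A minimal numerical sketch of solving (1), assuming the affinity matrices W and W' for the chosen DR method are already built (Python/NumPy; the small ridge term is our addition for numerical stability, not part of the formulation):

import numpy as np
from scipy.linalg import eigh

def graph_embedding(X, W, W_prime, n_dims, eps=1e-8):
    # X is d x N with one sample per column.
    L = np.diag(W.sum(axis=1)) - W                # Laplacian of G
    Lp = np.diag(W_prime.sum(axis=1)) - W_prime   # Laplacian of G'
    A = X @ L @ X.T
    B = X @ Lp @ X.T + eps * np.eye(X.shape[0])   # ridge for stability
    evals, evecs = eigh(A, B)                     # generalized eigenproblem
    return evecs[:, :n_dims]                      # smallest eigenvectors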
2.2 Multiple kernel learning
MKL refers to the process of learning a kernel machine with multiple kernel functions or kernel
matrices. Recent research efforts on MKL, e.g., [9, 14, 16] have shown that learning SVMs with
multiple kernels not only increases the accuracy but also enhances the interpretability of the resulting
classifier. Our MKL formulation is to find an optimal way to linearly combine the given kernels.
Suppose we have a set of base kernel functions \{k_m\}_{m=1}^M (or base kernel matrices \{K_m\}_{m=1}^M). An
ensemble kernel function k (or an ensemble kernel matrix K) is then defined by

k(x_i, x_j) = \sum_{m=1}^{M} \beta_m k_m(x_i, x_j),  \beta_m \ge 0,    (5)
K = \sum_{m=1}^{M} \beta_m K_m,  \beta_m \ge 0.    (6)

Consequently, the learned model from binary-class data \{(x_i, y_i \in \pm 1)\} will be of the form:

f(x) = \sum_{i=1}^{N} \alpha_i y_i k(x_i, x) + b = \sum_{i=1}^{N} \alpha_i y_i \sum_{m=1}^{M} \beta_m k_m(x_i, x) + b.    (7)

Optimizing both the coefficients \{\alpha_i\}_{i=1}^N and \{\beta_m\}_{m=1}^M is one particular form of the MKL problem. Our approach leverages such an MKL optimization to yield more flexible dimensionality
reduction schemes for data in different feature representations.
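A direct transcription of (5)-(6) as a sketch, for M precomputed base kernel matrices:

import numpy as np

def ensemble_kernel(base_kernels, beta):
    # base_kernels: list of M arrays of shape (N, N); beta: M nonnegative weights.
    beta = np.asarray(beta)
    assert (beta >= 0).all(), "ensemble weights must be nonnegative"
    return sum(b * K for b, K in zip(beta, base_kernels))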
3 The MKL-DR framework
To establish the proposed method, we first discuss the construction of a set of base kernels from multiple features, and then explain how to integrate these kernels for dimensionality reduction. Finally,
we design an optimization procedure to learn the projection for dimensionality reduction.
3.1 Kernel as a unified feature representation
Consider a dataset \Omega of N samples, and M kinds of descriptors to characterize each sample. Let
\Omega = \{x_i\}_{i=1}^N, x_i = \{x_{i,m} \in X_m\}_{m=1}^M, and d_m : X_m \times X_m \to \{0\} \cup R^+ be the distance function for
data representation under the mth descriptor. The domains resulting from distinct descriptors, e.g.
feature vectors, histograms, or bags of features, are in general different. To eliminate these varieties
in representation, we represent data under each descriptor as a kernel matrix. There are several ways
to accomplish this goal, such as using RBF kernel for data in the form of vector, or pyramid match
kernel [6] for data in the form of bag-of-features. We may also convert pairwise distances between
data samples to a kernel matrix [18, 20]. By coupling each representation and its corresponding
distance function, we obtain a set of M dissimilarity-based kernel matrices \{K_m\}_{m=1}^M with

K_m(i, j) = k_m(x_i, x_j) = \exp\left( -d_m^2(x_{i,m}, x_{j,m}) / \sigma_m^2 \right),    (8)

where \sigma_m is a positive constant. As several well-designed descriptors and their associated distance
functions have been introduced over the years, the use of dissimilarity-based kernels is convenient in
solving visual learning tasks. Nonetheless, care must be taken, as the resulting K_m is not guaranteed to be positive semidefinite. Zhang et al. [20] have suggested a solution to resolve this issue.
It follows from (5) and (6) that determining a set of optimal ensemble coefficients \{\beta_1, \beta_2, ..., \beta_M\}
can be interpreted as finding appropriate weights for best fusing the M feature representations.
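A sketch of building one dissimilarity-based kernel as in (8), assuming a pairwise-distance matrix D for the descriptor is given. Since the result need not be positive semidefinite, the sketch clips negative eigenvalues; this is one common repair and is only an assumption here, as the exact correction of Zhang et al. [20] may differ:

import numpy as np

def dissimilarity_kernel(D, sigma):
    K = np.exp(-(D ** 2) / sigma ** 2)
    evals, evecs = np.linalg.eigh((K + K.T) / 2)  # symmetrize, decompose
    evals = np.clip(evals, 0.0, None)             # drop the negative spectrum
    return (evecs * evals) @ evecs.T              # nearest-PSD reconstruction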
3.2 The MKL-DR algorithm
Instead of designing a specific dimensionality reduction algorithm, we choose to describe MKL-DR
upon graph embedding. This way we can derive a general framework: If a dimensionality reduction
scheme is explained by graph embedding, then it will also be extendible by MKL-DR to handle
data in multiple feature representations. In graph embedding (2), there are two possible types of
constraints. For the ease of presentation, we discuss how to develop MKL-DR subject to constraint
(4). However, the derivation can be analogously applied when using constraint (3).
It has been shown that a set of linear dimensionality reduction methods can be kernelized to nonlinear
ones via the kernel trick. The procedure of kernelization in MKL-DR is mostly accomplished in a
similar way, but with the key difference of using multiple kernels \{K_m\}_{m=1}^M. Suppose the ensemble
kernel K in MKL-DR is generated by linearly combining the base kernels \{K_m\}_{m=1}^M as in (6).
Let \phi : X \to F denote the feature mapping induced by K. Through \phi, the training data can be
implicitly mapped to a high dimensional Hilbert space, i.e.,

x_i \mapsto \phi(x_i), for i = 1, 2, ..., N.    (9)

By assuming the optimal projection v lies in the span of the training data in the feature space, we have

v = \sum_{n=1}^{N} \alpha_n \phi(x_n).    (10)

To show that the underlying algorithm can be reformulated in terms of inner products and accomplished in the new feature space F, we observe that, when plugged into (2), each mapped sample \phi(x_i)
and the projection v appear exclusively in the form v^T \phi(x_i). Hence, it suffices to show that
in MKL-DR, v^T \phi(x_i) can be evaluated via the kernel trick:

v^T \phi(x_i) = \sum_{n=1}^{N} \sum_{m=1}^{M} \alpha_n \beta_m k_m(x_n, x_i) = \alpha^T K^{(i)} \beta,    (11)

where \alpha = [\alpha_1 \cdots \alpha_N]^T \in R^N, \beta = [\beta_1 \cdots \beta_M]^T \in R^M, and

K^{(i)} = \begin{bmatrix} K_1(1, i) & \cdots & K_M(1, i) \\ \vdots & \ddots & \vdots \\ K_1(N, i) & \cdots & K_M(N, i) \end{bmatrix} \in R^{N \times M}.

With (2) and (11), we define the constrained optimization problem for 1-D MKL-DR as follows:

\min_{\alpha, \beta}  \sum_{i,j=1}^{N} || \alpha^T K^{(i)} \beta - \alpha^T K^{(j)} \beta ||^2 w_{ij}    (12)
subject to  \sum_{i,j=1}^{N} || \alpha^T K^{(i)} \beta - \alpha^T K^{(j)} \beta ||^2 w'_{ij} = 1,    (13)
\beta_m \ge 0, m = 1, 2, ..., M.    (14)

The additional constraints in (14) are included to ensure that the resulting kernel K in MKL-DR is a
non-negative combination of the base kernels. We leave the details of how to solve (12) until the next
section, where using MKL-DR to find a multi-dimensional projection V is considered.
Figure 1: Four kinds of spaces in MKL-DR: the input space X_m of each feature representation, the RKHS F_m induced by each base kernel, the RKHS F induced by the ensemble kernel, and the projected space R^P. (The figure itself is omitted; it depicts each sample x_{i,m} mapped by \psi_m into F_m, the ensemble mapping \phi(x_i) into F, and the projection V^T \phi(x_i) = A^T K^{(i)} \beta.)
3.3 Optimization
Observe from (11) that the one-dimensional projection v of MKL-DR is specified by a sample coefficient vector \alpha and a kernel weight vector \beta. The two vectors respectively account for the relative
importance among the samples and the base kernels. To generalize the formulation to uncover a
multi-dimensional projection, we consider a set of P sample coefficient vectors, denoted by

A = [\alpha_1 \alpha_2 \cdots \alpha_P].    (15)

With A and \beta, each 1-D projection v_i is determined by a specific sample coefficient vector \alpha_i and
the (shared) kernel weight vector \beta. The resulting projection V = [v_1 v_2 \cdots v_P] will map samples
to a P-dimensional space. Analogous to the 1-D case, a projected sample x_i can be written as

V^T \phi(x_i) = A^T K^{(i)} \beta \in R^P.    (16)

The optimization problem (12) can now be extended to accommodate the multi-dimensional projection:

\min_{A, \beta}  \sum_{i,j=1}^{N} || A^T K^{(i)} \beta - A^T K^{(j)} \beta ||^2 w_{ij}    (17)
subject to  \sum_{i,j=1}^{N} || A^T K^{(i)} \beta - A^T K^{(j)} \beta ||^2 w'_{ij} = 1,
\beta_m \ge 0, m = 1, 2, ..., M.
In Figure 1, we give an illustration of the four kinds of spaces related to MKL-DR, including the
input space of each feature representation, the RKHS induced by each base kernel and the ensemble
kernel, and the projected Euclidean space.
Since direct optimization of (17) is difficult, we instead adopt an iterative, two-step strategy to
alternately optimize A and \beta. At each iteration, one of A and \beta is optimized while the other is
fixed, and then the roles of A and \beta are switched. Iterations are repeated until convergence or a
maximum number of iterations is reached.
On optimizing A: By fixing \beta, the optimization problem (17) is reduced to

\min_A  trace(A^T S_W^\beta A)
subject to  trace(A^T S_{W'}^\beta A) = 1,    (18)

where

S_W^\beta = \sum_{i,j=1}^{N} w_{ij} (K^{(i)} - K^{(j)}) \beta \beta^T (K^{(i)} - K^{(j)})^T,    (19)
S_{W'}^\beta = \sum_{i,j=1}^{N} w'_{ij} (K^{(i)} - K^{(j)}) \beta \beta^T (K^{(i)} - K^{(j)})^T.    (20)

The problem (18) is a trace ratio problem, i.e., \min_A trace(A^T S_W^\beta A) / trace(A^T S_{W'}^\beta A). A closed-form solution can be obtained by transforming (18) into the corresponding ratio trace problem, i.e., \min_A trace[(A^T S_{W'}^\beta A)^{-1} (A^T S_W^\beta A)]. Consequently, the columns of the optimal A^* = [\alpha_1 \alpha_2 \cdots \alpha_P] are the eigenvectors corresponding to the first P smallest eigenvalues of the generalized eigenvalue problem

S_W^\beta \alpha = \lambda S_{W'}^\beta \alpha.    (21)
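The A-step can be sketched directly from (19)-(21); the naive double loop mirrors the definitions rather than an efficient implementation, and the small ridge term is our addition for numerical stability:

import numpy as np
from scipy.linalg import eigh

def optimize_A(Ks, beta, W, W_prime, P, eps=1e-8):
    # Ks: array of shape (N, N, M) with Ks[i] = K^{(i)} from (11).
    N = Ks.shape[0]
    U = np.stack([Ks[i] @ beta for i in range(N)], axis=1)  # U[:, i] = K^{(i)} beta
    SW = np.zeros((N, N)); SWp = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            d = (U[:, i] - U[:, j])[:, None]
            SW += W[i, j] * (d @ d.T)
            SWp += W_prime[i, j] * (d @ d.T)
    evals, evecs = eigh(SW, SWp + eps * np.eye(N))
    return evecs[:, :P]   # columns are alpha_1, ..., alpha_P as in (21)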
Algorithm 1: MKL-DR
Input: A DR method specified by two affinity matrices W and W' (cf. (2)); various visual features expressed by base kernels \{K_m\}_{m=1}^M (cf. (8));
Output: Sample coefficient vectors A = [\alpha_1 \alpha_2 \cdots \alpha_P]; kernel weight vector \beta;
Make an initial guess for A or \beta;
for t = 1, 2, ..., T do
    1. Compute S_W^\beta in (19) and S_{W'}^\beta in (20);
    2. Optimize A by solving the generalized eigenvalue problem (21);
    3. Compute S_W^A in (23) and S_{W'}^A in (24);
    4. Optimize \beta by solving the optimization problem (25) via semidefinite programming;
return A and \beta;
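A skeleton of this alternating loop in Python, assuming optimize_A (see the sketch after (21)) and optimize_beta_sdp (see the sketch after the SDP discussion below) are available; starting from uniform kernel weights is purely illustrative, whereas the paper reports that initializing A via AA^T = I is more stable:

import numpy as np

def scatter_beta(Ks, A, W, W_prime):
    # Build S_W^A and S_{W'}^A of (23)-(24) with a naive double loop.
    N, _, M = Ks.shape
    AAT = A @ A.T
    SW = np.zeros((M, M)); SWp = np.zeros((M, M))
    for i in range(N):
        for j in range(N):
            D = Ks[i] - Ks[j]          # K^{(i)} - K^{(j)}, shape (N, M)
            G = D.T @ AAT @ D
            SW += W[i, j] * G
            SWp += W_prime[i, j] * G
    return SW, SWp

def mkl_dr(Ks, W, W_prime, P, n_iters=10):
    beta = np.ones(Ks.shape[-1])       # illustrative initial guess
    for _ in range(n_iters):
        A = optimize_A(Ks, beta, W, W_prime, P)
        SW, SWp = scatter_beta(Ks, A, W, W_prime)
        beta = optimize_beta_sdp(SW, SWp)
    return A, beta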
On optimizing \beta: By fixing A, the optimization problem (17) becomes

\min_\beta  \beta^T S_W^A \beta    (22)
subject to  \beta^T S_{W'}^A \beta = 1 and \beta \ge 0,

where

S_W^A = \sum_{i,j=1}^{N} w_{ij} (K^{(i)} - K^{(j)})^T A A^T (K^{(i)} - K^{(j)}),    (23)
S_{W'}^A = \sum_{i,j=1}^{N} w'_{ij} (K^{(i)} - K^{(j)})^T A A^T (K^{(i)} - K^{(j)}).    (24)

The additional constraints \beta \ge 0 mean that the optimization problem (22) can no longer be formulated as
a generalized eigenvalue problem. Indeed it now becomes a nonconvex quadratically constrained
quadratic programming (QCQP) problem, which is known to be very difficult to solve. We instead
consider solving its convex relaxation by adding an auxiliary variable B of size M \times M:

\min_{\beta, B}  trace(S_W^A B)    (25)
subject to  trace(S_{W'}^A B) = 1,    (26)
e_m^T \beta \ge 0, m = 1, 2, ..., M,    (27)
\begin{bmatrix} 1 & \beta^T \\ \beta & B \end{bmatrix} \succeq 0,    (28)

where e_m in (27) is a column vector whose elements are all 0 except that its mth element is 1, and the
constraint in (28) means that the square matrix is positive semidefinite. The optimization problem
(25) is an SDP relaxation of the nonconvex QCQP problem (22), and can be efficiently solved
by semidefinite programming (SDP). One can verify the equivalence between the two optimization
problems (22) and (25) by replacing the constraint (28) with B = \beta\beta^T. Since the constraint
B = \beta\beta^T is nonconvex, it is relaxed to B \succeq \beta\beta^T. Applying the Schur complement lemma,
B \succeq \beta\beta^T can be equivalently expressed by the constraint in (28). (Refer to [17] for further details.)
Note that the numbers of constraints and variables in (25) are respectively linear and quadratic to
M , the number of the adopted descriptors. In practice the value of M is often small. (M = 7 in
our experiments.) Thus like most of the other DR methods, the computational bottleneck of our
approach is still in solving the generalized eigenvalue problems.
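A sketch of the relaxation (25)-(28) using cvxpy (our tooling choice, not the paper's). The (M+1) x (M+1) PSD variable Z packs the Schur-complement constraint: Z[0,0] = 1, Z[0,1:] = beta^T, Z[1:,1:] = B:

import numpy as np
import cvxpy as cp

def optimize_beta_sdp(SW_A, SWp_A):
    M = SW_A.shape[0]
    Z = cp.Variable((M + 1, M + 1), PSD=True)  # enforces (28)
    beta, B = Z[0, 1:], Z[1:, 1:]
    constraints = [Z[0, 0] == 1,
                   cp.trace(SWp_A @ B) == 1,   # (26)
                   beta >= 0]                  # (27)
    cp.Problem(cp.Minimize(cp.trace(SW_A @ B)), constraints).solve()
    return np.asarray(beta.value).ravel()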
As listed in Algorithm 1, the procedure of MKL-DR requires an initial guess for either A or \beta in the
alternating optimization. We have tried two possibilities: 1) \beta is initialized by setting all of its
elements to 1 to weight each base kernel equally; 2) A is initialized by assuming AA^T = I. In
our empirical testing, the second initialization strategy gives more stable performance, and is thus
adopted in the experiments. Pertaining to the convergence of the optimization procedure, since an
SDP relaxation has been used, the values of the objective function are not guaranteed to decrease
monotonically throughout the iterations. Still, the optimization procedure rapidly converges after only a
few iterations in all our experiments.
Novel sample embedding. Given a testing sample z, it is projected to the learned space of lower dimension by

z \mapsto A^T K^{(z)} \beta,  where K^{(z)} \in R^{N \times M} and K^{(z)}(n, m) = k_m(x_n, z).    (29)
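A sketch of (29) for out-of-sample projection, assuming callable base kernel functions k_m on raw inputs:

import numpy as np

def embed_new_sample(z, train_samples, base_kernel_fns, A, beta):
    N, M = len(train_samples), len(base_kernel_fns)
    Kz = np.empty((N, M))
    for n, x in enumerate(train_samples):
        for m, k_m in enumerate(base_kernel_fns):
            Kz[n, m] = k_m(x, z)      # K^{(z)}(n, m)
    return A.T @ Kz @ beta            # the P-dimensional embedding of z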
4 Experimental results
To evaluate the effectiveness of MKL-DR, we test the technique with the supervised visual learning task of object category recognition. In the application, two (base) DR methods and a set of
descriptors are properly chosen to serve as the input to MKL-DR.
4.1 Dataset
The Caltech-101 image dataset [4] consists of 101 object categories and one additional class of
background images. The total number of categories is 102, and each category contains roughly 40
to 800 images. Although each target object often appears in the central region of an image, the large
class number and substantial intraclass variations still make the dataset very challenging. Still, the
dataset provides a good test bed to demonstrate the advantage of using multiple image descriptors
for complex recognition tasks. Since the images in the dataset are not of the same size, we resize
them to around 60,000 pixels, without changing their aspect ratio.
To implement MKL-DR for recognition, we need to select some proper graph-based DR method to
be generalized and a set of image descriptors, and then derive (in our case) a pair of affinity matrices
and a set of base kernels. The details are described as follows.
4.2 Image descriptors
For the Caltech-101 dataset, we consider seven kinds of image descriptors that result in the seven
base kernels (denoted below in bold and in abbreviation):
GB-1/GB-2: From a given image, we randomly sample 300 edge pixels, and apply the geometric blur
descriptor [1] to them. With these image features, we adopt the distance function suggested in
equation (2) of the work by Zhang et al. [20], to obtain the two dissimilarity-based kernels, each of
which is constructed with a specific descriptor radius.
SIFT-Dist: The base kernel is analogously constructed as in GB-2, except now the SIFT descriptor
[11] is used to extract features.
SIFT-Grid: We apply SIFT with three different scales to an evenly sampled grid of each image,
and use k-means clustering to generate visual words from the resulting local features of all images.
Each image can then be represented by a histogram over the visual words. The χ² distance is used
to derive this base kernel via (8).
C2-SWP/C2-ML: Biologically inspired features are also considered here. Specifically, both the C2
features derived by Serre et al. [15] and by Mutch and Lowe [13] have been chosen. For each of the
two kinds of C2 features, an RBF kernel is respectively constructed.
PHOG: We adopt the PHOG descriptor [2] to capture image features, and limit the pyramid level
up to 2. Together with the χ² distance, the base kernel is established.
4.3 Dimensionality reduction methods
We consider two supervised DR schemes, namely, linear discriminant analysis (LDA) and local
discriminant embedding (LDE) [3], and show how MKL-DR can generalize them. Both LDA and
LDE perform discriminant learning on a fully labeled dataset \Omega = \{(x_i, y_i)\}_{i=1}^N, but make different
assumptions about data distribution: LDA assumes data of each class can be modeled by a Gaussian,
while LDE assumes they spread as a submanifold. Each of the two methods can be specified by
a pair of affinity matrices to fit the formulation of graph embedding (2), and the resulting MKL
dimensionality reduction schemes are respectively termed as MKL-LDA and MKL-LDE.
Affinity matrices for LDA: The two affinity matrices W = [w_{ij}] and W' = [w'_{ij}] are defined as

w_{ij} = \begin{cases} 1/n_{y_i}, & \text{if } y_i = y_j, \\ 0, & \text{otherwise,} \end{cases}  and  w'_{ij} = 1/N,    (30)

where n_{y_i} is the number of data points with label y_i. See [19] for the derivation.
Table 1: Recognition rates (mean ± std %) for the Caltech-101 dataset

                          number of classes                            number of classes
kernel(s)   method           102          101       method               102          101
GB-1        KFD          57.3 ± 2.5   57.7 ± 0.7    KLDE            57.1 ± 1.4   57.7 ± 0.8
GB-2        KFD          60.0 ± 1.5   60.6 ± 1.5    KLDE            60.9 ± 1.4   61.3 ± 2.1
SIFT-Dist   KFD          53.0 ± 1.4   53.2 ± 0.8    KLDE            54.2 ± 0.5   54.6 ± 1.5
SIFT-Grid   KFD          48.8 ± 1.9   49.6 ± 0.7    KLDE            49.5 ± 1.3   50.1 ± 0.3
C2-SWP      KFD          30.3 ± 1.2   30.7 ± 1.5    KLDE            31.1 ± 1.5   31.3 ± 0.7
C2-ML       KFD          46.0 ± 0.6   46.8 ± 0.9    KLDE            45.8 ± 0.2   46.7 ± 1.5
PHOG        KFD          41.8 ± 0.6   42.1 ± 1.3    KLDE            42.2 ± 0.6   42.6 ± 1.3
All         KFD-Voting   68.4 ± 1.5   68.9 ± 0.3    KLDE-Voting     68.4 ± 1.4   68.7 ± 0.8
All         KFD-SAMME    71.2 ± 1.4   72.1 ± 0.7    KLDE-SAMME      71.1 ± 1.9   71.3 ± 1.2
All         MKL-LDA      74.6 ± 2.2   75.3 ± 1.7    MKL-LDE         75.3 ± 1.5   75.5 ± 1.7
Affinity matrices for LDE: In LDE, not only the data labels but also the neighborhood relationships
are simultaneously considered to construct the affinity matrices W = [w_{ij}] and W' = [w'_{ij}]:

w_{ij} = \begin{cases} 1, & \text{if } y_i = y_j \wedge [i \in N_k(j) \vee j \in N_k(i)], \\ 0, & \text{otherwise,} \end{cases}    (31)

w'_{ij} = \begin{cases} 1, & \text{if } y_i \ne y_j \wedge [i \in N_{k'}(j) \vee j \in N_{k'}(i)], \\ 0, & \text{otherwise,} \end{cases}    (32)

where i \in N_k(j) means that sample x_i is one of the k nearest neighbors of sample x_j. The
definitions of the affinity matrices are faithful to those in LDE [3]. However, since there are now
multiple image descriptors, we need to construct an affinity matrix for data under each descriptor,
and average the resulting affinity matrices from all the descriptors.
4.4 Quantitative results
Our experiment setting follows the one described by Zhang et al. [20]. From each of the 102 classes,
we randomly pick 30 images, of which 15 are used for training and the other 15 images
are used for testing. To avoid a biased implementation, we redo the whole process of learning
by switching the roles of the training and testing data. In addition, we also carry out the experiments
without using the data from the background class, since this setting is adopted in some of the
related works, e.g., [5]. Via MKL-DR, the data are projected to the learned space, and the recognition
task is accomplished there by enforcing the nearest-neighbor rule.
Coupling the seven base kernels with the affinity matrices of LDA and LDE, we can respectively derive MKL-LDA and MKL-LDE using Algorithm 1. Their effectiveness is investigated by comparing
with KFD (kernel Fisher discriminant) [12] and KLDE (kernel LDE) [3]. Since KFD considers only
one base kernel at a time, we implement two strategies to take account of the classification outcomes
from the seven resulting KFD classifiers. The first is named KFD-Voting. It is constructed based
on the voting result of the seven KFD classifiers. If there is any ambiguity in the voting result, the
next nearest neighbor in each KFD classifier will be considered, and the process is continued until
a decision on the class label can be made. The second is termed KFD-SAMME. By viewing each
KFD classifier as a multi-class weak learner, we boost them by SAMME [21], which is a multi-class
generalization of AdaBoost. Analogously, we also have KLDE-Voting and KLDE-SAMME.
We report the mean recognition rates and the standard deviation in Table 1. First of all, MKL-LDA
achieves a considerable performance gain of 14.6% over the best recognition rate by the seven KFD
classifiers. On the other hand, while KFD-Voting and KFD-SAMME try to combine the separately
trained KFD classifiers, MKL-LDA jointly integrates the seven kernels into the learning process. The
quantitative results show that MKL-LDA can make the most of fusing various feature descriptors,
and improves the recognition rates from 68.4% and 71.2% to 74.6%. Similar improvements can
also be observed for MKL-LDE.
The recognition rates of 74.6% for MKL-LDA and 75.3% for MKL-LDE compare favorably to
those of most existing approaches. In [6], Grauman and Darrell report a 50% recognition
rate based on the pyramid matching kernel over data in bag-of-features representation. By combining
shape and spatial information, the SVM-KNN of Zhang et al. [20] achieves 59.05%. In Frome et al. [5],
the accuracy rate derived by learning local distances, one for each training sample, is 60.3%.
Our related work [10], which performs adaptive feature fusion via locally combining kernel matrices,
achieves a recognition rate of 59.8%. Multiple kernel learning is also used in Varma and Ray [18], and it
can yield a top recognition rate of 87.82% by integrating visual cues like shape and color.
5 Conclusions and discussions
The proposed MKL-DR technique is useful as it has the advantage of learning a unified space of low
dimension for data in multiple feature representations. Our approach is general and applicable to
most graph-based DR methods, and improves their performance. Such flexibility allows one
to make use of more prior knowledge for effectively analyzing a given dataset, including choosing a
proper set of visual features to better characterize the data, and adopting a graph-based DR method
to appropriately model the relationship among the data points. On the other hand, via integrating
with a suitable DR scheme, MKL-DR can extend the multiple kernel learning framework to address
not just the supervised learning problems but also the unsupervised and the semisupervised ones.
Acknowledgements. This work is supported in part by grants 95-2221-E-001-031-MY3 and 97-2221-E-001-019-MY3.
References
[1] A. Berg, T. Berg, and J. Malik. Shape matching and object recognition using low distortion correspondences. In CVPR, 2005.
[2] A. Bosch, A. Zisserman, and X. Muñoz. Image classification using random forests and ferns. In ICCV,
2007.
[3] H.-T. Chen, H.-W. Chang, and T.-L. Liu. Local discriminant embedding and its variants. In CVPR, 2005.
[4] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An
incremental bayesian approach tested on 101 object categories. In CVPR Workshop on Generative-Model
Based Vision, 2004.
[5] A. Frome, Y. Singer, and J. Malik. Image retrieval and classification using local distance functions. In
NIPS, 2006.
[6] K. Grauman and T. Darrell. The pyramid match kernel: Efficient learning with sets of features. JMLR,
2007.
[7] X. He and P. Niyogi. Locality preserving projections. In NIPS, 2003.
[8] S.-J. Kim, A. Magnani, and S. Boyd. Optimal kernel selection in kernel fisher discriminant analysis. In
ICML, 2006.
[9] G. Lanckriet, N. Cristianini, P. Bartlett, L. Ghaoui, and M. Jordan. Learning the kernel matrix with
semidefinite programming. JMLR, 2004.
[10] Y.-Y. Lin, T.-L. Liu, and C.-S. Fuh. Local ensemble kernel learning for object category recognition. In
CVPR, 2007.
[11] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.
[12] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant analysis with kernels.
In Neural Networks for Signal Processing, 1999.
[13] J. Mutch and D. Lowe. Multiclass object recognition with sparse, localized features. In CVPR, 2006.
[14] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. More efficiency in multiple kernel learning. In
ICML, 2007.
[15] T. Serre, L. Wolf, and T. Poggio. Object recognition with features inspired by visual cortex. In CVPR,
2005.
[16] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. JMLR,
2006.
[17] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 1996.
[18] M. Varma and D. Ray. Learning the discriminative power-invariance trade-off. In ICCV, 2007.
[19] S. Yan, D. Xu, B. Zhang, H. Zhang, Q. Yang, and S. Lin. Graph embedding and extensions: A general
framework for dimensionality reduction. PAMI, 2007.
[20] H. Zhang, A. Berg, M. Maire, and J. Malik. Svm-knn: Discriminative nearest neighbor classification for
visual category recognition. In CVPR, 2006.
[21] J. Zhu, S. Rosset, H. Zou, and T. Hastie. Multi-class adaboost. Technical report, Dept. of Statistics,
University of Michigan, 2005.
2,872 | 3,603 | A spatially varying two-sample recombinant
coalescent, with applications to HIV escape response
Alexander Braunstein
Statistics Department
University of Pennsylvania
Wharton School
Philadelphia, PA 19104
[email protected]
Zhi Wei
Computer Science Department
New Jersey Institute of Technology
Newark, NJ 07102
[email protected]
Shane T. Jensen
Statistics Department
University of Pennsylvania
Wharton School
Philadelphia, PA 19104
[email protected]
Jon D. McAuliffe
Statistics Department
University of Pennsylvania
Wharton School
Philadelphia, PA 19104
[email protected]
Abstract
Statistical evolutionary models provide an important mechanism for describing
and understanding the escape response of a viral population under a particular
therapy. We present a new hierarchical model that incorporates spatially varying
mutation and recombination rates at the nucleotide level. It also maintains separate parameters for treatment and control groups, which allows us to estimate
treatment effects explicitly. We use the model to investigate the sequence evolution of HIV populations exposed to a recently developed antisense gene therapy,
as well as a more conventional drug therapy. The detection of biologically relevant and plausible signals in both therapy studies demonstrates the effectiveness
of the method.
1 Introduction
The human immunodeficiency virus (HIV) has one of the highest levels of genetic variability yet
observed in nature. This variability stems from its unusual population dynamics: a high growth
rate (~10 billion new viral particles, or virions, per patient per day) combined with a replication
cycle that involves frequent nucleotide mutations as well as recombination between different HIV
genomes that have infected the same cell.
The rapid evolution of HIV and other viruses gives rise to a so-called escape response when infected
cells are subjected to therapy. Widespread availability of genome sequencing technology has had a
profound effect on the study of viral escape response. Increasingly, virologists are gathering two-sample data sets of viral genome sequences: a control sample contains genomes from a set of virions
gathered before therapy, and a treatment sample consists of genomes from the post-therapeutic viral population. HIV treatment samples gathered just days after the start of therapy can exhibit a
significant escape response.
Up to now, statistical analyses of two-sample viral sequence data sets have been mainly rudimentary.
As a representative example, [7] presents tabulated counts of mutation occurrences (relative to a
reference wild-type sequence) in the control group and the treatment group, without attempting any
statistical inference.
In this paper we develop a model which allows for a detailed quantification of the escape response
present in a two-sample data set. The model incorporates mutation and recombination rate parameters which vary positionally along the viral genome, and which differ between the treatment and
control samples. We present a reversible-jump MCMC procedure for approximate posterior inference of these parameters. The resulting posterior distribution suggests specific regions of the genome
where the treatment sample's evolutionary dynamics differ from the control's: this is the putative
escape response. Thus, the model permits an analysis that can point the way to improvements of
current therapies and to the development of new therapeutic strategies for HIV and other viruses.
In the remainder of the paper, we first provide the details of our statistical model and inference
procedure. Then we illustrate the use of the model in two applications. The first study consists of a
control sample of viral sequences obtained from HIV-infected individuals before a drug treatment,
and a corresponding post-treatment sample [9]. The second study set is an in vitro investigation of a
new gene therapy for HIV; it contains a control sample of untreated virions and a treatment sample
of virions challenged with the therapy [7].
2 Methods
We begin by briefly describing the standard statistical genetics framework for populations evolving
under mutation and recombination. Then we present a new Bayesian hierarchical model for two
groups of sequences, each group sampled from one of two related populations. We derive an MCMC
procedure for approximate posterior inference in the model; this procedure is implemented in the
program PICOMAP. Our approach involves modifications and generalizations of the OMEGAMAP
method [12], as we explain. In what follows, each ?individual? in a population is a sequence of L
nucleotides (plus a gap symbol, used when sequences have insertions or deletions relative to each
other). The positions along a sequence are called sites. An alignment is a matrix in which rows are
sequences, columns are sites, and the (i, j)th entry is individual i?s nucleotide at site j.
2.1
The coalescent with recombination
The genome sequences in the control sample were drawn at random from a large population of
sequences at a fixed point in time. We approximate the evolution of this population using the WrightFisher evolutionary model with recombination [3]. Similarly, the treatment sample sequences are
viewed as randomly drawn from a Wright-Fisher recombining population, but governed by different
evolutionary parameters.
In the basic Wright-Fisher model without recombination, a fixed-size population evolves in discrete,
nonoverlapping generations. Each sequence in the gth generation is determined by randomly choosing a sequence from the (g ? 1)th generation, mutating it at one position with probability u, and
leaving it unchanged with probability 1 ? u. Typically, many individuals in each generation share a
parent from the previous generation.
A key insight in statistical population genetics, due to Kingman [5], is the following. If we have
a small sample from a large Wright-Fisher population at a fixed time, and we want to do calculations involving the probability distribution over the sample?s unknown ancestral history, it is highly
uneconomical to ?work forwards? from older generations ? most individuals will not be part of the
sample?s genealogy. Instead, we should follow the lineages of the sampled individuals backwards
in time as they repeatedly coalesce at common ancestors, forming a tree rooted at the most recent
common ancestor (MRCA) of the sample. Kingman showed that the continuous-time limit of the
Wright-Fisher model induces a simple distribution, called the coalescent process, on the topology
and branch lengths of the resulting tree. Mutation events in the coalescent can be viewed as a separate point process marking locations on the branches of a given coalescent tree. This point process
is independent of the tree-generating coalescent process.
Recombination, however, substantially complicates matters. The Wright-Fisher dynamics are extended to model recombination as follows. Choose one ?paternal? and one ?maternal? sequence
from generation (g ? 1). With probability r, their child sequence in generation g is a recombinant:
a juncture between two adjacent sites is chosen uniformly at random, and the child is formed by
joining the paternal sequence to the left of the juncture with the maternal sequence to the right. With
probability (1 ? r), the child is a copy of just one of the two parents, possibly mutated as above.
2
Now look backwards in time at the ancestors of a sample: we find both coalescence events, where
two sequences merge into a common ancestor, and recombination events, where a single sequence
splits into the two parent sequences that formed it. Thus the genealogy is not a tree but a graph, the
ancestral recombination graph (ARG). The continuous-time limit of the Wright-Fisher model with
recombination induces a distribution over ARGs called the recombinant coalescent [4, 2].
In fact, the ARG is the union of L coalescent trees. A single site is never split by recombination, so
we can follow that site in the sample backwards in time through coalescence events to its MRCA. But
recombination causes the sample to have a possibly different ancestral tree (and different MRCA)
at each site. The higher the rate of recombination (corresponding to the parameter r), the more
often the tree changes along the alignment. For this reason, methods that estimate a fixed, global
phylogeny are badly biased in samples from highly recombinant populations, like viruses [10].
The Wright-Fisher assumptions appear quite stylized. But experience has shown that the coalescent
and the recombinant coalescent can give reasonable results when applied to samples from populations not matching the Wright-Fisher model, such as populations of increasing size [3].
2.2
A two-sample hierarchical recombinant coalescent
We now present the components of our new hierarchical model for a control sample and a treatment
sample of nucleotide sequences drawn from two recombining populations. To our knowledge, this is
the first fully specified probabilistic model for such data. There are four parameter vectors of primary
interest in the model: a control-population mutation rate ?C which varies along the sequence, a
corresponding spatially varying treatment-population vector ?T , and analogous recombination rate
parameter vectors ?C and ?T . (The ? and ? here correspond to the u and r mentioned above.)
The prior distribution on ?C and ?T takes the following hierarchical form:
(B? , S? ) | q?
log ?i |
? Blocks(q? ) ,
?0 , ??2 0
iid
(log ?Ci , log ?Ti ) | ?i , ??2
C
?
N (log ?0 , ??2 0 ),
? N (log ?i , ??2 ),
(1)
i = 1, . . . , B? ,
(2)
i = i, . . . , B? .
(3)
T
This prior is designed to give ? and ? a block structure: the Blocks distribution divides the L
sequence positions into B? adjacent subsequences, with the index of each subsequence?s rightmost
B
B
site given by S? = (S?1 , . . . , S? ? ), 1 ? S?1 < ? ? ? ? S? ? ? L. Under the Blocks distribution,
(B? ? 1) is a Bin(L ? 1, q? ) random variable, and given B? , the indexes S? are a simple random
sample without replacement from {1, . . . , L}. The sites in the ith block all mutate at the same rate
?Ci (in the control population) or ?Ti (in the treatment population). We lose no generality in sharing
the same block structure between the populations: two separate block structures can be replaced with
a single block structure formed from the union of their S? ?s. To generate the per-population mutation
rates within a block, we first draw a lognormally distributed variable ?i , which then furnishes the
mean for the independent lognormal variables ?Ci and ?Ti . The triples (?i , ?Ci , ?Ti ) are mutually
independent across blocks i = 1, . . . , B? .
The recombination rate parameters (?C , ?T ) are independent of (?C , ?T ) and have the same form of
prior distribution (1)?(3), mutatis mutandis. In our empirical analyses, we set the hyperparameters
q? and q? to get prior means of 20 to 50 blocks; results were not sensitive to these settings. We
put simple parametric distributions on the hyperparameters ?0 , ??2 0 , ??2 , and their ? analogs, and
included them in the sampling procedure.
The remaining component of the model is the likelihood of the two observed samples. Let H C
be the alignment of control-sample sequences and H T the treatment-sample sequence alignment.
Conditional on all parameters, H C and H T are independent. Focus for a moment on H C . Since we
wish to view it as a sample from a Wright-Fisher recombining population, its likelihood corresponds
to the probability, under the coalescent-with-recombination distribution, of the set of all ARGs that
could have generated H C . However, using the nucleotide mutation model described below, even
Monte Carlo approximation of this probability is computationally intractable [12].
So instead we approximate the true likelihood with a distribution called the ?product of approximate
conditionals,? or PAC [6]. PAC orders the K sequences in H C arbitrarily, then approximates their
probability as the product of probabilities from K hidden Markov models. The kth HMM evaluates
3
the probability that sequence k was produced by mutating and recombining sequences 1 through
k ? 1. We thus obtain the final components of our hierarchical model:
H C | ?C , ?C , ? ? PAC(?C , ?C , ?) ,
T
T
T
T
T
H | ? , ? , ? ? PAC(? , ? , ?) .
(4)
(5)
In order to apply PAC, we must specify a nucleotide substitution model, that is, the probability that a
nucleotide i mutates to a nucleotide j over evolutionary distance t. In the above, ? parametrizes this
model. For our analyses, we employed the well-known Felsenstein substitution model, augmented
with a fifth symbol to represent gaps [8]. For simplicity, we constructed fixed empirical estimates of
the Felsenstein parameters ?, in a standard way.
To incorporate the extended Felsenstein model in PAC, it is necessary to integrate evolutionary
distance out of the substitution process p(j | i, 2t), using the exponential distribution induced by the
coalescent on the evolutionary distance 2t between pairs of sampled individuals. It can be shown
that the required quantity is
Z
p(j | i) =
p(j | i, 2t)p(t) dt =
k
k
1?
?j +
1[i = j] +
k + 2?
k + 2(?1[i 6= gap] + ?)
k
?j
k
?
1[(i, j) ? {(A, G), (C, T )}] . (6)
k + 2?
k + 2(? + ?)
?i + ?j
Here k is the number of sampled individuals, and ?i , ?j , ?, and ? are Felsenstein model parameters
(the last two depending on the mutation rate at the site in question). 1[?] is the indicator function of
the predicate in brackets.
The blocking prior (1) and the use of PAC with spatially varying parameters are ideas drawn from
OMEGAMAP [12]. But our approach differs in two significant respects. First, OMEGAMAP models
codons (the protein sequence encoded by nucleotides), not the nucleotides themselves. This is sometimes unsuitable. For example, in one of our empirical analyses, the treatment population receives
RNA antisense gene therapy. The target of this therapy is the primary HIV genome sequence itself,
not its protein products. So we would expect the escape response to manifest at the nucleotide level,
in the targeted region of the genome. Our model can capture this. Second, we perform simultaneous
hierarchical inference about the control and treatment sample, which encourages the parameter estimates to differ between the samples only where strongly justified by the data. Using a one-sample
tool like OMEGAMAP on each sample in isolation would tend to increase the number of artifactual
differences between corresponding parameters in each sample.
2.3
Inference
The posterior distribution of the parameters in our model cannot be calculated analytically. We
therefore employ a reversible-jump Metropolis-within-Gibbs sampling strategy to construct an approximate posterior. In such an approach, sets of parameters are iteratively sampled from their posterior conditional distributions, given the current values of all other parameters. Because the Blocks
prior generates mutation and recombination parameters with piecewise-constant profiles along the
sequence, we call our sampler implementation PICOMAP.
The sampler uses Metropolis-Hastings updates for the numerical values of parameters, and
reversible-jump updates [1] to explore the blocking structures (B? , S? ) and (B? , S? ). The block
updates consider extending a block to the left or right, merging two adjacent blocks, and splitting a
block. They are similar to the updates (B2)-(B4) of [12], so we omit the details.
To illustrate one of the parameter updates within a block, let (?Ci , ?Ti ) be the current values of the
control and treatment mutation rates in block i. We sample proposal values
log ?
?Ci ? N (log ?Ci , ? 2 ) ,
log ?
?Ti
?
N (log ?Ti , ? 2 )
4
,
(7)
(8)
Figure 1: Posterior estimate of the effect of enfuvirtide drug therapy on mutation rates. Blue line is
posterior mean, Black lines are 95% highest-posterior-density (HPD) intervals.
where ? 2 is a manually configured tuning parameter for the proposal distribution. These proposals
are accepted with probability
?Ci , ?
?Ti | ?i )
p(H C | ?
?Ci , ?) p(H T | ?
?Ti , ?) p(?
?
,
C
C
T
C
T
p(H | ?i , ?) p(H | ?i , ?) p(?i , ?Ti | ?i )
(9)
p(?
?Ci , ?
?T ?C exp {?((log ?
?Ti ? log ?i )2 + (log ?
?Ci ? log ?i )2 )/2? 2 }
?Ti | ?i )
= iT Ci
.
C
T
T
2
p(?i , ?i | ?i )
?
?i ?
?i exp {?((log ?i ? log ?i ) + (log ?Ci ? log ?i )2 )/2? 2 }
(10)
where
Here ? denotes the current values of all other model parameters. Notice that symmetry in the proposal distribution causes that part of the MH acceptance ratio to cancel.
The PICOMAP sampler involves a number of other update formulas, which we do not describe here
due to space constraints.
3
Results
In this section, we apply the PICOMAP methodology to HIV sequence data from two different studies. In the first study, several HIV-infected patients were exposed to a drug-based therapy. In the
second study, the HIV virus was exposed in vitro to a novel antisense gene therapy. In both cases,
our analysis extracts biologically relevant features of the evolutionary response of HIV to these
therapeutic challenges.
For each study we ran at least 8 chains to monitor convergence of the sampler. The chains converged without exception and were thinned accordingly, then combined for analysis. In the interest
of brevity, we include only plots of the posterior treatment-effect estimates for both mutation and
recombination rates.
3.1
Drug therapy study
In this study, five patients had blood samples taken both before and after treatment with the drug
enfuvirtide, also known as Fuzeon or T-20 [11]. Sequences of the Envelope (Env) region of the HIV
genome were generated from each of these blood samples. Pooling across these patients, we have
28 pre-exposure Env sequences which we label as the control sample, and 29 post-exposure Env
sequences which we label as the treatment sample. We quantify the treatment effect of exposure
5
Figure 2: Posterior estimate of the effect of enfuvirtide drug therapy on recombination rates. Blue
line is posterior mean, Black lines are 95% HPD intervals.
to the drug by calculating the posterior mean and 95% highest-posterior-density (HPD) intervals of
the difference in recombination rates ?T ? ?C and mutation rates ?T ? ?C at each position of the
genomic sequence.
The very existence in the patient of a post-exposure HIV population indicates the evolution of
sequence changes that have conferred resistance to the action of the drug enfuvirtide. In fact,
resistance-conferring mutations are known a priori to occur at nucleotide locations 1639-1668 in
the Env sequence. Figure 1 shows the posterior estimate of the treatment effect on mutation rates
over the length of the Env sequence. From nucleotide positions 1590-1700, the entire 95% HPD interval of the mutation rate treatment effect is above zero, which suggests our model is able to detect
elevated levels of mutation in the resistance-conferring region, among individuals in the treatment
sample.
Another preliminary observation from this study was that both the pre-exposure and post-exposure
sequences are mixtures of several different HIV subtypes. Subtype identity is specified by the V3
loop subsequence of the Env sequence, which corresponds to nucleotide positions 887-995. Since it
is unlikely that resistance-conferring mutations developed independently in each subtype, we suspect
that the resistance-conferring mutations were passed to the different subtypes via recombination.
Recombination is the primary means by which drug resistance is transferred in vivo between strains
of HIV, so recombination at these locations involving drug resistant strains would allow successful
transfer of the resistance-conferring mutations between types of HIV.
Figure 2 shows the spatial posterior estimate of the treatment effect on recombination. We see
two areas of increased recombination, one from nucleotide positions 1020-1170 and another from
nucleotide positions 1900-2200. As an interesting side note, we see a marked decrease in mutation
and recombination in the V3 loop that determines sequence specificity.
3.2
Antisense gene therapy study
In the VIRxSYS antisense gene therapy study, we have two populations of wild type HIV in vitro.
The samples consist of 19 Env sequences from a control HIV population that was allowed to evolve
neutrally in cell culture, along with 48 Env sequences sampled from an HIV population evolving in
cell cultures that were transfected with the VIRxSYS antisense vector [7]. The antisense gene therapy vector targets nucleotide positions 1325 - 2249. Unlike drug therapy treatments, whose effect
can be nullified by just one or two well placed mutations, a relatively large number of mutations
are required to escape the effects of antisense gene therapy. We again quantify the treatment effect
of exposure to the antisense vector by calculating the posterior mean and 95% HPD interval of the
6
Figure 3: Posterior estimate of the effect of VIRxSYS antisense gene therapy on mutation rates.
Blue line is posterior mean, Black lines are 95% HPD intervals.
difference in recombination rates ?T ? ?C and mutation rates ?T ? ?C at each position of the Env
sequence.
Figures 3 and 4 show the posterior estimate of the treatment?s effect on mutation and recombination,
respectively. The most striking feature of the plots is the area of significantly elevated mutation in the
treatment sequences. The leftmost region of the highest plateau corresponds to nucleotide position
1325, the 5? boundary of the antisense target region. This area of heightened mutation overlaps
with the target region for around 425 nucleotides in the 3? direction. We see fewer differences in
the recombination rate, suggesting that mutation is the primary mechanism of evolutionary response
to the antisense vector. In fact, we estimate lower recombination rates in the target region of the
treatment sequences relative to the control sequences.
4
Discussion
We have introduced a hierarchical model for the estimation of evolutionary escape response in a
population exposed to therapeutic challenge. The escape response is quantified by mutation and
recombination rate parameters. Our method allows for spatial heterogeneity in these mutation and
recombination rates. It estimates differences between treatment and control sample parameters,
with parameter values encouraged to be similar between the two populations except where the data
suggests otherwise. We applied our procedure to sequence data from two different HIV therapy
studies, detecting evolutionary responses in both studies that are of biological interest and may be
relevant to the design of future HIV treatments.
Although virological problems motivated the creation of our model, it applies more generally to twosample data sets of nucleic acid sequences drawn from any population. The model is particularly
relevant for populations in which the recombination rate is a substantial fraction of the mutation rate,
since simpler models which ignore recombination can produce seriously misleading results.
Acknowledgements
This research was supported by a grant from the University of Pennsylvania Center for AIDS Research. Thanks to Neelanjana Ray, Jessamina Harrison, Robert Doms, Matthew Stephens and Gwen
Binder for helpful discussions.
7
Figure 4: Posterior estimate of the effect of VIRxSYS antisense gene therapy on recombination
rates. Blue line is posterior mean, Black lines are 95% HPD intervals.
References
[1] P. J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model
determination. Biometrika, 82:711?731, 1995.
[2] R. C. Griffiths and P. Marjoram. An ancestral recombination graph. In Progress in Population
Genetics and Human Evolution, pages 257?270. Springer Verlag, 1997.
[3] J. Hein, M. Schierup, and C. Wiuf. Gene Genealogies, Variation and Evolution: A Primer in
Coalescent Theory. Oxford University Press, 2005.
[4] R. R. Hudson. Properties of a neutral allele model with intragenic recombination. Theoretical
Population Biology, 23:183?201, 1983.
[5] J. F. C. Kingman. The coalescent. Stochastic Processes and Their Applications, 13:235?248,
1982.
[6] N. Li and M. Stephens. Modeling linkage disequilibrium and identifying recombination
hotspots using single-nucleotide polymorphism data. Genetics, 165:2213?2233, December
2003.
[7] X. Lu, Q. Yu, G. Binder, Z. Chen, T. Slepushkina, J. Rossi, and B. Dropulic. Antisensemediated inhibition of human immunodeficiency virus (HIV) replication by use of an HIV
type 1-based vector results in severely attenuated mutants incapable of developing resistance.
Journal of Virology, 78:7079?7088, 2004.
[8] G. McGuire, M. Denham, and D. Balding. Models of sequence evolution for DNA sequences
containing gaps. Molecular Biology and Evolution, 18(4):481?490, 2001.
[9] N. Ray, J. Harrison, L. Blackburn, J. Martin, S. Deeks, and R. Doms. Clinical resistance to
enfuvirtide does not affect susceptibility of human immunodeficiency virus type 1 to other
classes of entry inhibitors. Journal of Virology, 81:3240?3250, 2007.
[10] M. H. Schierup and J. Hein. Consequences of recombination on traditional phylogenetic analysis. Genetics, 156:879?891, 2000.
[11] C. Wild, T. Greenwell, and T. Matthews. A synthetic peptide from HIV-1 gp41 is a potent
inhibitor of virus mediated cell-cell fusion. AIDS Research and Human Retroviruses, 9:1051?
1053, 1993.
[12] D. Wilson and G. McVean. Estimating diversifying selection and functional constraint in the
presence of recombination. Genetics, 172:1411?1425, 2006.
8
Integrating locally learned causal structures
with overlapping variables
Robert E. Tillman
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
David Danks, Clark Glymour
Carnegie Mellon University &
Institute for Human & Machine Cognition
Pittsburgh, PA 15213
{ddanks,cg09}@andrew.cmu.edu
Abstract
In many domains, data are distributed among datasets that share only some variables; other recorded variables may occur in only one dataset. While there are
asymptotically correct, informative algorithms for discovering causal relationships from a single dataset, even with missing values and hidden variables, there
have been no such reliable procedures for distributed data with overlapping variables. We present a novel, asymptotically correct procedure that discovers a minimal equivalence class of causal DAG structures using local independence information from distributed data of this form, and we evaluate its performance on synthetic and real-world data against causal discovery algorithms for single datasets and against Structural EM, a heuristic DAG structure learning procedure for data with missing values, applied to the concatenated data.
1 Introduction
In many domains, researchers are interested in predicting the effects of interventions, or manipulating variables, on other observed variables. Such predictions require knowledge of causal relationships between observed variables. There are existing asymptotically correct algorithms for learning
such relationships from data, possibly with missing values and hidden variables [1][2][3], but these
algorithms all assume that every variable is measured in a single study. Datasets for such studies are
not always readily available, often due to privacy, ethical, financial, and practical concerns. However, given the increasing availability of large amounts of data, it is often possible to obtain several
similar studies that individually measure subsets of the variables a researcher is interested in and
together include all such variables. For instance, models of the United States and United Kingdom
economies share some but not all variables, due to different financial recording conventions; fMRI
studies with similar stimuli may record different variables, since the images vary according to magnet strength, data reduction procedures, etc.; and U.S. states report some of the same educational
testing variables, but also report state-specific variables. In these cases, if each dataset has overlapping variable(s) with at least one other dataset, e.g. if two datasets D1 and D2 , which measure
variables V1 and V2 , respectively, have at least one variable in common (V1 ? V2 6= ?), then we
should be able to learn many of the causal relationships between the observed variables using this
set of datasets. The existing algorithms, however, cannot in general be directly applied to such cases,
since they may require joint observations for variables that are not all measured in a single dataset.
While this problem has been discussed in [4] and [5], there are no general, useful algorithms for
learning causal relationships from data of this form. A typical response is to concatenate the datasets
to form a single common dataset with missing values for the variables that are not measured in each
of the original datasets. Statistical matching [6] or multiple imputation [7] procedures may then
be used to fill in the missing values by assuming an underlying model (or small class of models),
estimating model parameters using the available data, and then using this model to interpolate the
missing values. While the assumption of some underlying model may be unproblematic in many
standard prediction scenarios, i.e. classification, it is unreliable for causal inference; the causal relationships learned using the interpolated dataset that are between variables which are never jointly
measured in single dataset will only be correct if the corresponding relationships between variables
in the assumed model happen to be causal relationships in the correct model. The Structural EM
algorithm [8] avoids this problem by iteratively updating the assumed model using the current interpolated dataset and then reestimating values for the missing data to form a new interpolated dataset
until the model converges. The Structural EM algorithm is only justified, however, when missing
data are missing at random (or indicator variables can be used to make them so) [8]. The pattern
of missing values in the concatenated datasets described above is highly structured. Furthermore,
Structural EM is a heuristic procedure and may converge to local maxima. While this may not be
problematic in practice when doing prediction, it is problematic when learning causal relationships.
Our experiments in section 4 show that Structural EM performs poorly in this scenario.
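To make the concatenation step concrete, the following sketch combines two studies that share only some variables into a single table; the variables X, Y, Z, and W and their values are hypothetical (only X and Y are measured in both studies), and pandas is just one convenient choice of tabular library.

```python
# A minimal sketch of the concatenation described above. The variables and
# values are hypothetical; only X and Y are measured in both studies.
import pandas as pd

d1 = pd.DataFrame({"X": [0, 1, 1], "Y": [1, 0, 1], "Z": [0, 0, 1]})  # study 1
d2 = pd.DataFrame({"X": [1, 0], "Y": [0, 0], "W": [1, 1]})           # study 2

# Rows from d1 get NaN for W; rows from d2 get NaN for Z. The missingness is
# therefore highly structured: it depends entirely on which study a row came from.
combined = pd.concat([d1, d2], ignore_index=True, sort=False)
print(combined)
```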
We present a novel, asymptotically correct algorithm, the Integration of Overlapping Networks (ION) algorithm, for learning causal relationships (or more properly, the complete set of possible
causal DAG structures) from data of this form. Section 2 provides the relevant background and
terminology. Section 3 discusses the algorithm. Section 4 presents experimental evaluations of the
algorithm using synthetic and real-world data. Finally, section 5 provides conclusions.
2 Formal preliminaries
We now introduce some terminology. A directed graph G = ⟨V, E⟩ is a set of nodes V, which represent variables, and a set of directed edges E connecting distinct nodes. If two nodes are connected by an edge then the nodes are adjacent. For pairs of nodes {X, Y} ⊆ V, X is a parent (child) of Y,
if there is a directed edge from X to Y (Y to X) in E. A trail in G is a sequence of nodes such that
each consecutive pair of nodes in the sequence is adjacent in G and no node appears more than once
in the sequence. A trail is a directed path if every edge between consecutive pairs of nodes points in
the same direction. X is an ancestor (descendant) of Y if there is a directed path from X to Y (Y
to X). G is a directed acyclic graph (DAG) if for every pair {X, Y } ? V, X is not both an ancestor
and a descendant of Y (no directed cycles). A collider (v-structure) is a triple of nodes ⟨X, Y, Z⟩ such that X and Z are parents of Y. A trail is active given C ⊆ V if (i) for every collider ⟨X, Y, Z⟩ in the trail either Y ∈ C or some descendant of Y is in C and (ii) no other node in the trail is in C. For disjoint sets of nodes X, Y, and Z, X is d-separated (d-connected) from Y given Z if and only if there are no (at least one) active trails between any X ∈ X and any Y ∈ Y given Z.
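The trail-activation condition translates directly into code. The sketch below is a literal rendering of the definition for a single given trail, not an efficient d-separation test; X is then d-separated from Y given Z exactly when no trail between them is active given Z.

```python
# A minimal sketch: check whether one given trail is active given a conditioning
# set C, following the definition above. The DAG is a dict mapping each node to
# its set of parents.
def descendants(dag, node):
    """All nodes reachable from `node` by a directed path."""
    found, stack = set(), [node]
    while stack:
        x = stack.pop()
        for child in (v for v, pars in dag.items() if x in pars):
            if child not in found:
                found.add(child)
                stack.append(child)
    return found

def trail_is_active(dag, trail, C):
    C = set(C)
    for i in range(1, len(trail) - 1):
        left, mid, right = trail[i - 1], trail[i], trail[i + 1]
        if left in dag[mid] and right in dag[mid]:
            # Collider: mid or one of its descendants must be conditioned on.
            if mid not in C and not (descendants(dag, mid) & C):
                return False
        elif mid in C:
            # A non-collider on the trail must not be conditioned on.
            return False
    return True

# Example: the collider X -> Y <- Z. The trail (X, Y, Z) is blocked given {}
# but active given {Y}.
dag = {"X": set(), "Z": set(), "Y": {"X", "Z"}}
assert not trail_is_active(dag, ["X", "Y", "Z"], set())
assert trail_is_active(dag, ["X", "Y", "Z"], {"Y"})
```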
A Bayesian network B is a pair ⟨G, P⟩, where G = ⟨V, E⟩ is a DAG and P is a joint probability distribution over the variables represented by the nodes in V such that P can be decomposed as follows:

P(V) = ∏_{V ∈ V} P(V | Parents(V))
For B = ⟨G, P⟩, if X is d-separated from Y given Z in G, then X is conditionally independent of Y
given Z in P [9]. For disjoint sets of nodes, X, Y, and Z in V, P is faithful to G if X is d-separated
from Y given Z in G whenever X is conditionally independent of Y given Z in P [1]. B is a causal
Bayesian network if an edge from X to Y indicates that X is a direct cause of Y relative to V.
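Read as a recipe, the factorization says that the probability of any complete assignment is the product of each variable's conditional probability given its parents. The following is a minimal sketch with a hypothetical two-node network X → Y; the CPT values are made up for illustration.

```python
# A small sketch of the factorization above: the joint probability of a full
# assignment is the product of each variable's CPT entry given its parents.
def joint_probability(parents, cpts, assignment):
    p = 1.0
    for v in parents:
        key = (assignment[v], tuple(assignment[u] for u in sorted(parents[v])))
        p *= cpts[v][key]
    return p

# X -> Y with P(X=1)=0.3, P(Y=1|X=1)=0.8, P(Y=1|X=0)=0.1.
parents = {"X": set(), "Y": {"X"}}
cpts = {
    "X": {(1, ()): 0.3, (0, ()): 0.7},
    "Y": {(1, (1,)): 0.8, (0, (1,)): 0.2, (1, (0,)): 0.1, (0, (0,)): 0.9},
}
print(joint_probability(parents, cpts, {"X": 1, "Y": 0}))  # 0.3 * 0.2 = 0.06
```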
Most algorithms for causal discovery, or learning causal relationships from nonexperimental data,
assume that the distribution over the observed variables P is decomposable according to a DAG
G and P is faithful to G. The goal is to learn G using the data from P. Most causal discovery
algorithms return a set of possible DAGs which entail the same d-separations and d-connections,
e.g. the Markov equivalence class, rather than a single DAG. The DAGs in this set have the same
adjacencies but only some of the same directed edges. The directed edges common to each DAG
represent causal relationships that are learned from the data. If we admit the possibility that there
may be unobserved (latent) common causes between observed variables, then this set of possible
DAGs is usually larger.
A partial ancestral graph (PAG) represents the set of DAGs in a particular Markov equivalence class
when latent common causes may be present. Nodes in a PAG correspond to observed variables.
Edges are of four types: ◦–◦, ◦→, ↔, and →, where a ◦ indicates either a tail (–) or arrowhead (>) orientation, bidirected edges (↔) indicate the presence of a latent common cause, and fully directed edges (→) indicate that the directed edge is present in every DAG, e.g. a causal relationship. For {X, Y} ⊆ V, a possibly active trail between X and Y given Z ⊆ V\{X, Y} is a trail in a PAG between X and Y such that some orientation of the ◦'s on edges between consecutive nodes in the trail, to either tail or arrowhead, makes the trail active given Z.
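One way to realize these edge types in code is to store an endpoint mark at each end of every edge. The sketch below is an illustrative data structure, not the representation used by any particular FCI or ION implementation.

```python
# A minimal sketch of a PAG representation: each edge stores an endpoint mark
# at each node, drawn from {CIRCLE, ARROW, TAIL}, so the four edge types in the
# text are o-o, o->, <->, and ->.
CIRCLE, ARROW, TAIL = "o", ">", "-"

class PAG:
    def __init__(self):
        self.marks = {}  # (X, Y) -> mark at Y on the edge between X and Y

    def add_edge(self, x, y, mark_at_x=CIRCLE, mark_at_y=CIRCLE):
        self.marks[(y, x)] = mark_at_x
        self.marks[(x, y)] = mark_at_y

    def is_directed(self, x, y):
        # X -> Y: tail at X, arrowhead at Y (a causal edge in every member DAG).
        return self.marks.get((y, x)) == TAIL and self.marks.get((x, y)) == ARROW

    def is_bidirected(self, x, y):
        # X <-> Y: arrowheads at both ends (latent common cause).
        return self.marks.get((y, x)) == ARROW and self.marks.get((x, y)) == ARROW

g = PAG()
g.add_edge("X", "Y", mark_at_x=TAIL, mark_at_y=ARROW)  # X -> Y
assert g.is_directed("X", "Y") and not g.is_bidirected("X", "Y")
```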
3 Integration of Overlapping Networks (ION) algorithm
The ION algorithm uses conditional independence information to discover the complete set of PAGs
over a set of variables V that are consistent with a set of datasets over subsets of V which have
overlapping variables. ION accepts as input a set of PAGs which correspond to each such dataset. A standard causal discovery algorithm that checks for latent common causes, such as FCI [1] or GES [3] with latent variable postprocessing steps,1 must first be applied to each of the original datasets
to learn these PAGs that will be input to ION. Expert domain knowledge can also be encoded in the
input PAGs, if available. The ION algorithm is shown as algorithm 1 and described below.
Input: PAGs Gi ∈ G with nodes Vi ⊆ V for i = 1, . . . , k
Output: PAGs Hi ∈ H with nodes Vi = V for i = 1, . . . , m
1  K ← the complete graph over V with ◦'s at every endpoint
2  A ← ∅
3  Transfer nonadjacencies and endpoint orientations from each Gi ∈ G to K and propagate the changes in K using the rules described in [10]
4  PAT({X, Y}, Z) ← all possibly active trails between X and Y given Z for all {X, Y} ⊆ V and Z ⊆ V\{X, Y} such that X and Y are d-separated given Z in some Gi ∈ G
5  PC ← all minimal hitting sets of changes to K, such that all PATi ∈ PAT are not active
6  for PCi ∈ PC do
7      Ai ← K after making and propagating the changes PCi
8      if Ai is consistent with every Gi ∈ G then add Ai to A
9  end
10 for Ai ∈ A do
11     Remove Ai from A
12     Mark all edges in Ai as '?'
13     For each {X, Y} ⊆ V such that X and Y are adjacent in Ai, if X and Y are d-connected given ∅ in some Gi ∈ G, then remove '?' from the edge between X and Y in Ai
14     PR ← every combination of removing or not removing '?'-marked edges from Ai
15     for PRi ∈ PR do
16         Hi ← Ai after making and propagating the changes PRi
17         if Hi is consistent with every Gi ∈ G then add Hi to H
18     end
19 end
Algorithm 1: The Integration of Overlapping Networks (ION) algorithm
The algorithm begins with the complete graph over V with all ◦ endpoints and transfers nonadjacencies and endpoint orientations from each Gi ∈ G at line 3, e.g. if X and Y are not adjacent in Gi then remove the edge between X and Y; if X is directed into Y in Gi then set the endpoint at Y on the edge between X and Y to an arrowhead. Once these orientations and edge removals are made, the changes to the complete graph are propagated using the rules in [10], which provably make every change that is entailed by the current changes made to the graph. Lines 4-9 find every possibly active trail for every {X, Y} ⊆ V given Z ⊆ V\{X, Y} such that X and Y are d-separated given Z in some Gi ∈ G. The constructed set PC includes all minimal hitting sets of graphical changes, e.g. unique sets of minimal changes that are not subsets of other sets of changes, which make these paths no longer active. For each minimal hitting set, a new graph is constructed by making the changes in the set and propagating these changes. If the graph is consistent with each Gi ∈ G, e.g. the graph does not imply a d-separation for some {X, Y} ⊆ V given Z ⊆ V\{X, Y} such that X and Y are d-connected in some Gi ∈ G, then this graph is added to the current set of possible graphs. Lines 10-19 attempt to discover any additional PAGs that may be consistent with each Gi ∈ G after deleting edges from PAGs in the current set and propagating the changes. If some pair of nodes {X, Y} ⊆ V that are adjacent in a current PAG are d-connected given ∅ in some Gi ∈ G, then we do not consider sets of edge removals which remove this edge.

1 We use the standard GES algorithm to learn a DAG structure from the data and then use the FCI rules to check for possible latent common causes.
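The minimal hitting sets at line 5 can be computed, for illustration, by brute force: treat each possibly active trail as the set of candidate graphical changes that would deactivate it, enumerate change sets by increasing size, and keep those that intersect every trail's set and contain no smaller solution. The sketch below does exactly this; it is exponential, and the end of this section describes the branch and bound strategy used in practice.

```python
# A brute-force sketch of line 5 of Algorithm 1. Each element of `trails` is
# the set of candidate changes that would deactivate one possibly active trail;
# a hitting set must intersect every such set. Exponential, illustration only.
from itertools import combinations

def minimal_hitting_sets(trails):
    universe = sorted(set().union(*trails))
    found = []
    for r in range(1, len(universe) + 1):
        for cand in combinations(universe, r):
            s = set(cand)
            # Keep s if it hits every trail and no smaller solution is inside it.
            if all(s & t for t in trails) and not any(h <= s for h in found):
                found.append(s)
    return found

# 'a' alone deactivates both trails, so {'a'} is minimal; so is {'b', 'c'}.
print(minimal_hitting_sets([{"a", "b"}, {"a", "c"}]))  # [{'a'}, {'b', 'c'}]
```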
The ION algorithm is provably sound in the sense that the output PAGs are consistent with every Gi ∈ G, e.g. no Hi ∈ H entails a d-separation or d-connection that contradicts a d-separation or d-connection entailed by some Gi ∈ G. This property follows from the fact that d-separation and d-connection are mutually exclusive, exhaustive relations.

Theorem 3.1 (soundness). If X and Y are d-separated (d-connected) given Z in some Gi ∈ G, then X and Y are d-separated (d-connected) given Z in every Hi ∈ H.

Proof Sketch. Every structure Ai constructed at line 7 provably entails every d-separation entailed by some Gi ∈ G. Such structures are only added to A if they do not entail a d-separation corresponding to a d-connection in some Gi ∈ G. The only changes made (other than changes resulting from propagating other changes, which are provably correct by [10]) in lines 10-19 are edge removals, which can only create new d-separations. If a new d-separation is created which corresponds to a d-connection in some Gi ∈ G, then the PAG entailing this new d-separation is not added to H.
The ION algorithm is provably complete in the sense that if there is some structure Hi over the variables V that is consistent with every Gi ∈ G, then Hi ∈ H.

Theorem 3.2 (completeness). Let Hi be a PAG over the variables V such that for every pair {X, Y} ⊆ V, if X and Y are d-separated (d-connected) given Z ⊆ V\{X, Y} in some Gi ∈ G, then X and Y are d-separated (d-connected) given Z in Hi. Then, Hi ∈ H.

Proof Sketch. Every change made at line 3 is provably necessary to ensure soundness. At least one graph added to A at line 8 provably has every adjacency in Hi (possibly more) and no non-◦ endpoints on edges found in Hi that are not also present in Hi. Some sequence of edge removals will provably produce Hi at line 16 and it will be added to the output set since it is consistent with every Gi ∈ G.
Thus, by theorems 3.1 and 3.2, ION is an asymptotically correct algorithm for learning the complete
set of PAGs over V that are consistent with a set of datasets over subsets of V with overlapping
variables, if the input PAGs are discovered using an asymptotically correct algorithm that detects the
presence of latent common causes, i.e. FCI, with each of these datasets.
Finding all minimal hitting sets is an NP-complete problem [11]. Since learning a DAG structure
from data is also an NP-complete problem [12], the ION algorithm, as given above, requires a
superexponential (in |V|) number of operations and is often computationally intractable even for small
sizes of |V|. In practice, however, we can break the minimal hitting set problem into a sequence
of smaller subproblems and use a branch and bound approach that is tractable in many cases and
still results in an asymptotically correct algorithm. We tested several such strategies. The method
which most effectively balanced time and space complexity tradeoffs was to first find all minimal
hitting sets which make all possibly active trails of length 2 that correspond to d-separations in some
Gi ? G not active, then find the structures resulting from making and propagating these changes that
are consistent with every Gi ? G, and iteratively do the same for each of these structures, increasing
the length of possibly active trails considered until trails of all sizes are considered.
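Schematically, the strategy looks like the skeleton below. The four function arguments are hypothetical placeholders standing in for the corresponding ION subroutines (finding possibly active trails of a given length, computing minimal hitting sets, applying and propagating a set of changes, and checking consistency against an input PAG), so this is an outline of the control flow rather than a complete implementation.

```python
# A schematic of the iterative-deepening branch and bound described above.
# All four function arguments are hypothetical placeholders for ION subroutines.
def ion_branch_and_bound(K, input_pags, num_vars, trails_of_length,
                         minimal_hitting_sets, apply_and_propagate, consistent):
    candidates = [K]
    for length in range(2, num_vars + 1):  # consider longer trails each pass
        next_candidates = []
        for graph in candidates:
            trails = trails_of_length(graph, input_pags, length)
            if not trails:  # nothing to deactivate at this length
                next_candidates.append(graph)
                continue
            for changes in minimal_hitting_sets(trails):
                candidate = apply_and_propagate(graph, changes)
                # Prune any branch inconsistent with some input PAG.
                if all(consistent(candidate, g) for g in input_pags):
                    next_candidates.append(candidate)
        candidates = next_candidates
    return candidates
```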
4 Experimental results
We first used synthetic data to evaluate the performance of ION with known ground truth. In the first
experiment, we generated 100 random 4-node DAGs using the MCMC algorithm described in [13]
with random discrete parameters (conditional probability tables for the factors in the decomposition
shown in section 2). For each DAG, we then randomly chose two subsets of size 2 or 3 of the nodes
in the DAG such that the union of the subsets included all 4 nodes and at least one overlapping
variable between the two subsets was present. We used forward sampling to generate two i.i.d.
samples of sizes N = 50, N = 100, N = 500, N = 1000 and N = 2500 from the DAGs for
only the variables in each subset. We used both FCI and GES with latent variable postprocessing to
[Figure 1: panels (a)-(d) plot edge omissions, edge commissions, orientation errors, and time in seconds against sample size (N = 50 to N = 2500) for FCI-baseline, ION-FCI, GES-baseline, ION-GES, and Structural EM.]
Figure 1: (a) edge omissions, (b) edge commissions, (c) orientation errors, and (d) runtimes
generate PAGs for each of these samples which were input to ION. To evaluate the accuracy of ION,
we counted the number of edge omission, edge commission, and orientation errors (one endpoint mark reported in place of another)
for each PAG in the ION output set and averaged the results. These results were then averaged across
all of the 100 4-node structures. Figure 1 shows the averaged results for these methods along with
3 other methods we included for comparison. ION-FCI and ION-GES refer to the performance of
ION when the input PAGs are obtained using the FCI algorithm and the GES algorithm with latent
variable postprocessing, respectively. For Structural EM, we took each of the datasets over subsets
of the nodes in each DAG and formed a concatenated dataset, as described in section 1, which
was input to the Structural EM algorithm.2 For FCI-baseline and GES-baseline, we used forward
sampling to generate another i.i.d. sample of sizes N = 50, N = 100, N = 500, N = 1000 and
N = 2500 for all of the variables in each DAG and used these datasets as input for the FCI and GES
with latent variable postprocessing algorithms, respectively, to obtain a measure for how well these
algorithms perform when no data is missing. The average runtimes for each method are also reported
in figure 1. Error bars show 95% confidence intervals. We first note the performance of Structural
EM. Almost no edge omission errors are made, but more edge commission errors are made than any
of the other methods and the edge commission errors do not decrease as the sample size increases.
When we looked at the results, we found that Structural EM always returned either the complete
graph or a graph that was almost complete, indicating that Structural EM is not a reliable method
for causal discovery in this scenario where there is a highly structured pattern to the missing data.
Furthermore, the runtime for Structural EM was considerably higher than any of the other methods.
For the larger sample sizes (where more missing values need to be estimated at each iteration), a
single run required several hours in some instances. Due to its significant computation time, we
2 We ran Structural EM with 5 random restarts and chose the model with the highest BDeu score to avoid converging to local maxima. Random "chains" of nodes were used as the initial models. Structural EM was
never stopped before convergence.
[Figure 2: panels (a)-(c) plot edge omissions, edge commissions, and orientation errors against sample size (N = 50 to N = 2500) for FCI-baseline, ION-FCI, GES-baseline, and ION-GES.]
Figure 2: (a) edge omissions, (b) edge commissions, and (c) orientation errors
[Figure 3: panels (a)-(c) plot edge omissions, edge commissions, and orientation errors against sample size (N = 50 to N = 2500) for FCI-baseline, ION-FCI, GES-baseline, and ION-GES.]
Figure 3: (a) edge omissions, (b) edge commissions, and (c) orientation errors
were unable to use Structural EM with larger DAG structures so it is excluded in the experiments
below. The FCI-baseline and GES-baseline methods performed similarly to previous simulations of them. The ION-FCI and ION-GES methods performed similarly to the FCI-baseline and GES-baseline methods but made slightly more errors and showed slower convergence (due to the missing
data). Very few edge commission errors were made. Slightly more edge omission errors were made,
but these errors decrease as the sample size increases. Some edge orientation errors were made even
for the larger sample sizes. This is due to the fact that each of the algorithms returns an equivalence
class of DAGs rather than a single DAG. Even if the correct equivalence class is discovered, errors
result after comparing the ground truth DAG to every DAG in the equivalence class and averaging.
We also note that there are fewer orientation errors for the GES-baseline and ION-GES methods on
the two smallest sample sizes than all of the other sample sizes. While this may seem surprising, it
is simply a result of the fact that more edge omission errors are made for these cases.
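For concreteness, the error counts reported in these plots can be computed as in the sketch below; the graph representation (unordered node pairs mapped to endpoint-mark records) and the specific marks are illustrative, and each PAG's counts are averaged over the ION output set as described earlier.

```python
# A sketch of the evaluation metrics: edges are keyed by frozenset({u, v}) and
# mapped to an orientation record (here, a pair of endpoint marks).
def count_errors(true_edges, est_edges):
    omissions = sum(1 for e in true_edges if e not in est_edges)
    commissions = sum(1 for e in est_edges if e not in true_edges)
    orientations = sum(1 for e in true_edges
                       if e in est_edges and est_edges[e] != true_edges[e])
    return omissions, commissions, orientations

def average_errors(true_edges, output_set):
    counts = [count_errors(true_edges, g) for g in output_set]
    n = float(len(counts))
    return tuple(sum(c[i] for c in counts) / n for i in range(3))

true = {frozenset({"X", "Y"}): ("-", ">")}   # X -> Y
est1 = {frozenset({"X", "Y"}): (">", ">")}   # X <-> Y: an orientation error
est2 = {}                                    # the edge is omitted
print(average_errors(true, [est1, est2]))    # (0.5, 0.0, 0.5)
```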
We repeated the above experiment for 3 similar cases where we used 6-node DAG structures rather
than 4-node DAG structures: (i) two i.i.d. samples were generated for random subsets of sizes 2-5
with only 1 variable that is not overlapping between the two subsets; (ii) two i.i.d. samples were
generated for random subsets of sizes 2-5 with only 2 variables that are not overlapping between
the two subsets; (iii) three i.i.d. samples were generated for random subsets of sizes 2-5 with only
1 variable that is not overlapping between any pair of subsets. Figures 2, 3, and 4 show edge
omission, edge commission, and orientation errors for each of these cases, respectively. In general,
the performance in each case is similar to the performance for the 4-node case.
We also tested the performance of ION-FCI using a real-world dataset measuring IQ and various
neuroanatomical and other traits [14]. We divided the variables into two subsets with overlapping
variables based on domain grounds: (a) variables that might be included in a study on the relationship
between neuroanatomical traits and IQ; and (b) variables for a study on the relationship between IQ,
sex, and genotype, with brain volume and head circumference included as possible confounders.
Figures 5a and 5b show the FCI output PAGs when only the data for each of these subsets of the
variables is provided as input, respectively. Figure 5c shows the output PAG of ION-FCI when these
two resulting PAGs are used as input. We also ran FCI on the complete dataset to have a comparison.
Figure 5d shows this PAG.
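Putting the pieces together, this study's workflow can be sketched as below. The column subsets follow the division just described (figure 5, panels (a) and (b)), and run_fci and ion are hypothetical placeholders for an FCI implementation with latent variable postprocessing and for the ION procedure of section 3.

```python
# A sketch of the overall workflow for this study. `run_fci` and `ion` are
# hypothetical placeholders, and `data` is assumed to be a pandas-style table
# with the columns named below.
SUBSET_A = ["Corpus Callosum Surface Area", "Brain Surface Area", "IQ",
            "Body Weight", "Head Circumference", "Brain Volume"]
SUBSET_B = ["Genotype", "IQ", "Body Weight", "Head Circumference",
            "Brain Volume", "Sex"]

def overlapping_study(data, run_fci, ion):
    pag_a = run_fci(data[SUBSET_A])   # local PAG for the neuroanatomy/IQ view
    pag_b = run_fci(data[SUBSET_B])   # local PAG for the IQ/sex/genotype view
    return ion([pag_a, pag_b])        # set of PAGs over the union of variables
```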
[Figure 4: panels (a)-(c) plot edge omissions, edge commissions, and orientation errors against sample size (N = 50 to N = 2500) for FCI-baseline, ION-FCI, GES-baseline, and ION-GES.]
Figure 4: (a) edge omissions, (b) edge commissions, and (c) orientation errors
[Figure 5: four PAGs over the variables Corpus Callosum Surface Area, Brain Surface Area, Brain Volume, Head Circumference, Body Weight, IQ, Genotype, and Sex.]
Figure 5: (a) FCI output PAG for variables in subset a, (b) FCI output PAG for variables in subset b,
(c) ION output PAG when using the FCI output PAGs for variables in subset a and variables in subset
b as input, and (d) FCI output PAG for all variables
In this particular case, the output of ION-FCI consists of only a single PAG, which is identical to
the result when FCI is given the complete dataset as input. This case shows that in some instances,
ION-FCI can recover as much information about the true DAG structure as FCI even when less
information can be extracted from the ION-FCI input. We note that the graphical structure of the
complete PAG (figures 5c and 5d) is the union of the structures shown in figures 5a and 5b. While
visually this may appear to be a trivial example for ION where all of the relevant information can be
extracted in the first steps, there is in fact much processing required in later stages in the algorithm
to determine the structure around the nonoverlapping variables.
5 Conclusions
In practice, researchers are often unable to find or construct a single, complete dataset containing
every variable they may be interested in (or doing so is very costly). We thus need some way
of integrating information about causal relationships that can be discovered from a collection of
datasets with related variables [5]. Standard causal discovery algorithms cannot be used, since they
take only a single dataset as input. To address this open problem, we proposed the ION algorithm,
an asymptotically correct algorithm for discovering the complete set of causal DAG structures that
are consistent with such data.
While the results presented in section 4 indicate that ION is useful in smaller domains when the
branch and bound approach described in section 3 is used, a number of issues must be addressed
before ION or a similar algorithm is useful for higher dimensional datasets. Probably the most significant problem is resolving contradictory information among overlapping variables in different
input PAGs, i.e. X is a parent of Y in one PAG and a child of Y in another PAG, resulting from
statistical errors or when the input samples are not identically distributed. ION currently ignores
such information rather than attempting to resolve it. This increases uncertainty and thus the size of
the resulting output set of PAGs. Furthermore, simply ignoring such information does not always
avoid conflicts. In some of such cases, ION will not discover any PAGs which entail the correct
d-separations and d-connections. Thus, no output PAGs are returned. When performing conditional independence tests or evaluating score functions, statistical errors occur more frequently as
the dimensionality of a dataset increases, unless the sample size also increases at an exponential
rate (resulting from the so-called curse of dimensionality). Thus, until reliable methods for resolving conflicting information from input PAGs are developed, ION and similar algorithms will not
in general be useful for higher dimensional datasets. Furthermore, while the branch and bound
approach described in section 3 is a significant improvement over other methods we tested for computing minimal hitting sets, its memory requirements are still considerable in some instances. Other
algorithmic strategies should be explored in future research.
Acknowledgements
We thank Joseph Ramsey, Peter Spirtes, and Jiji Zhang for helpful discussions and pointers. We
thank Frank Wimberley for implementing the version of Structural EM we used. R.E.T. was supported by the James S. McDonnell Foundation Causal Learning Collaborative Initiative. C.G. was
supported by a grant from the James S. McDonnell Foundation.
References
[1] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, 2nd
edition, 2000.
[2] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[3] D. M. Chickering. Optimal structure identification with greedy search. Journal of Machine
Learning Research, 3:507–554, 2002.
[4] D. Danks. Learning the causal structure of overlapping variable sets. In Discovery Science:
Proceedings of the 5th International Conference, 2002.
[5] D. Danks. Scientific coherence and the fusion of experimental results. The British Journal for
the Philosophy of Science, 56:791–807, 2005.
[6] S. Rässler. Statistical Matching. Springer, 2002.
[7] D. B. Rubin. Multiple Imputation for Nonresponse in Surveys. Wiley & Sons, 1987.
[8] N. Friedman. The Bayesian structural EM algorithm. In Proceedings of the 14th Conference
on Uncertainty in Artificial Intelligence, 1998.
[9] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.
Morgan Kaufmann Publishers, 1988.
[10] J. Zhang. A characterization of Markov equivalence classes for causal models with latent
variables. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence,
2007.
[11] R. Greiner, B. A. Smith, and R. W. Wilkerson. A correction to the algorithm in Reiter?s theory
of diagnosis. Artificial Intelligence, 41:79?88, 1989.
[12] D. M. Chickering. Learning Bayesian networks is NP-complete. In Proceedings of the 5th
International Workshop on Artificial Intelligence and Statistics, 1995.
[13] G. Melanc?on, I. Dutour, and M. Bousquet-M?elou. Random generation of dags for graph drawing. Technical Report INS-R0005, Centre for Mathematics and Computer Sciences, Amsterdam, 2000.
[14] M. J. Tramo, W. C. Loftus, R. L Green, T. A. Stukel, J. B. Weaver, and M. S. Gazzaniga. Brain
size, head size, and IQ in monozygotic twins. Neurology, 50:1246?1252, 1998.
Online Optimization in X-Armed Bandits
Sébastien Bubeck
INRIA Lille, SequeL project, France
Rémi Munos
INRIA Lille, SequeL project, France
[email protected]
[email protected]
Gilles Stoltz
École Normale Supérieure and HEC Paris
[email protected]
Csaba Szepesvári*
Department of Computing Science, University of Alberta
[email protected]
Abstract
We consider a generalization of stochastic bandit problems where the set of arms, X, is
allowed to be a generic topological space. We constrain the mean-payoff function with a
dissimilarity function over X in a way that is more general than Lipschitz. We construct
an arm selection policy whose regret improves upon previous results for a large class of
problems. In particular, our results imply that if X is the unit hypercube in a Euclidean
space and the mean-payoff function has a finite number of global maxima around which
the behavior of the function is locally Hölder with a known exponent, then the expected
regret is bounded up to a logarithmic factor by $\sqrt{n}$, i.e., the rate of the growth of the regret
is independent of the dimension of the space. Moreover, we prove the minimax optimality
of our algorithm for the class of mean-payoff functions we consider.
1 Introduction and motivation
Bandit problems arise in many settings, including clinical trials, scheduling, on-line parameter tuning of
algorithms or optimization of controllers based on simulations. In the classical bandit problem there are a
finite number of arms that the decision maker can select at discrete time steps. Selecting an arm results in
a random reward, whose distribution is determined by the identity of the arm selected. The distributions
associated with the arms are unknown to the decision maker whose goal is to maximize the expected sum of
the rewards received.
In many practical situations the arms belong to a large set. This set could be continuous [1; 6; 3; 2; 7],
hybrid-continuous, or it could be the space of infinite sequences over a finite alphabet [4]. In this paper we
consider stochastic bandit problems where the set of arms, X , is allowed to be an arbitrary topological space.
We assume that the decision maker knows a dissimilarity function defined over this space that constrains
the shape of the mean-payoff function. In particular, the dissimilarity function is assumed to put a lower
bound on the mean-payoff function from below at each maxima. We also assume that the decision maker is
able to cover the space of arms in a recursive manner, successively refining the regions in the covering such
that the diameters of these sets shrink at a known geometric rate when measured with the dissimilarity.
*Csaba Szepesvári is on leave from MTA SZTAKI. He also gratefully acknowledges the support received from the
Alberta Ingenuity Fund, iCore and NSERC.
Our work generalizes and improves previous works on continuum-armed bandit problems: Kleinberg [6]
and Auer et al. [2] focussed on one-dimensional problems. Recently, Kleinberg et al. [7] considered generic
metric spaces assuming that the mean-payoff function is Lipschitz with respect to the (known) metric of the
space. They proposed an interesting algorithm that achieves essentially the best possible regret in a minimax
sense with respect to these environments.
The goal of this paper is to further these works in a number of ways: (i) we allow the set of arms to be
a generic topological space; (ii) we propose a practical algorithm motivated by the recent very successful
tree-based optimization algorithms [8; 5; 4] and show that the algorithm is (iii) able to exploit higher order
smoothness. In particular, as we shall argue in Section 7, (i) improves upon the results of Auer et al. [2],
while (i), (ii) and (iii) improve upon the work of Kleinberg et al. [7]. Compared to Kleinberg et al. [7], our
work represents an improvement in the fact that just like Auer et al. [2] we make use of the local properties
of the mean-payoff function around the maxima only, and not a global property, such as Lipschitzness in
the whole space. This allows us to obtain a regret which scales as $\widetilde{O}(\sqrt{n})$¹ when e.g. the space is the unit hypercube and the mean-payoff function is locally Hölder with known exponent in the neighborhood of any maxima (which are in finite number) and bounded away from the maxima outside of these neighborhoods.
Thus, we get the desirable property that the rate of growth of the regret is independent of the dimensionality
of the input space. We also prove a minimax lower bound that matches our upper bound up to logarithmic
factors, showing that the performance of our algorithm is essentially unimprovable in a minimax sense.
Besides these theoretical advances the algorithm is anytime and easy to implement. Since it is based on
ideas that have proved to be efficient, we expect it to perform well in practice and to make a significant
impact on how on-line global optimization is performed.
2
Problem setup, notation
We consider a topological space $\mathcal{X}$, whose elements will be referred to as arms. A decision maker "pulls" the arms in $\mathcal{X}$ one at a time at discrete time steps. Each pull results in a reward that depends on the arm chosen and which the decision maker learns of. The goal of the decision maker is to choose the arms so as to maximize the sum of the rewards that he receives. In this paper we are concerned with stochastic environments. Such an environment $M$ associates to each arm $x \in \mathcal{X}$ a distribution $M_x$ on the real line. The support of these distributions is assumed to be uniformly bounded with a known bound. For the sake of simplicity, we assume this bound is 1. We denote by $f(x)$ the expectation of $M_x$, which is assumed to be measurable (all measurability concepts are with respect to the Borel $\sigma$-algebra over $\mathcal{X}$). The function $f : \mathcal{X} \to \mathbb{R}$ thus defined is called the mean-payoff function. When in round $n$ the decision maker pulls arm $X_n \in \mathcal{X}$, he receives a reward $Y_n$ drawn from $M_{X_n}$, independently of the past arm choices and rewards.
A pulling strategy of a decision maker is determined by a sequence $\pi = (\pi_n)_{n \ge 1}$ of measurable mappings, where each $\pi_n$ maps the history space $\mathcal{H}_n = \big(\mathcal{X} \times [0,1]\big)^{n-1}$ to the space of probability measures over $\mathcal{X}$. By convention, $\pi_1$ does not take any argument. A strategy is deterministic if for every $n$ the range of $\pi_n$ contains only Dirac distributions.
According to the process that was already informally described, a pulling strategy $\pi$ and an environment $M$ jointly determine a random process $(X_1, Y_1, X_2, Y_2, \ldots)$ in the following way: In round one, the decision maker draws an arm $X_1$ at random from $\pi_1$ and gets a payoff $Y_1$ drawn from $M_{X_1}$. In round $n \ge 2$, first, $X_n$ is drawn at random according to $\pi_n(X_1, Y_1, \ldots, X_{n-1}, Y_{n-1})$, but otherwise independently of the past. Then the decision maker gets a reward $Y_n$ drawn from $M_{X_n}$, independently of all other random variables in the past given $X_n$.
Let $f^* = \sup_{x \in \mathcal{X}} f(x)$ be the maximal expected payoff. The cumulative regret of a pulling strategy in environment $M$ is $\widehat{R}_n = n f^* - \sum_{t=1}^{n} Y_t$, and the cumulative pseudo-regret is $R_n = n f^* - \sum_{t=1}^{n} f(X_t)$.
¹We write $u_n = \widetilde{O}(v_n)$ when $u_n = O(v_n)$ up to a logarithmic factor.
In the sequel, we restrict our attention to the expected regret $\mathbb{E}[R_n]$, which in fact equals $\mathbb{E}[\widehat{R}_n]$, as can be seen by the application of the tower rule.
3 The Hierarchical Optimistic Optimization (HOO) strategy
3.1 Trees of coverings
We first introduce the notion of a tree of coverings. Our algorithm will require such a tree as an input.
Definition 1 (Tree of coverings). A tree of coverings is a family of measurable subsets $(P_{h,i})_{1 \le i \le 2^h,\, h \ge 0}$ of $\mathcal{X}$ such that for all fixed integers $h \ge 0$, the covering $\cup_{1 \le i \le 2^h} P_{h,i} = \mathcal{X}$ holds. Moreover, the elements of the covering are obtained recursively: each subset $P_{h,i}$ is covered by the two subsets $P_{h+1,2i-1}$ and $P_{h+1,2i}$.
A tree of coverings can be represented, as the name suggests, by a binary tree $\mathcal{T}$. The whole domain $\mathcal{X} = P_{0,1}$ corresponds to the root of the tree and $P_{h,i}$ corresponds to the $i$-th node of depth $h$, which will be referred to as node $(h,i)$ in the sequel. The fact that each $P_{h,i}$ is covered by the two subsets $P_{h+1,2i-1}$ and $P_{h+1,2i}$ corresponds to the parent-child relationship in the tree. Although the definition allows the child regions of a node to cover a larger part of the space, typically the size of the regions shrinks as depth $h$ increases (cf. Assumption 1).
Remark 1. Our algorithm will instantiate the nodes of the tree on an "as needed" basis, one by one. In
fact, at any round n it will only need n nodes connected to the root.
3.2 Statement of the HOO strategy
The algorithm picks at each round a node in the infinite tree $\mathcal{T}$ as follows. In the first round, it chooses the root node $(0,1)$. Now, consider round $n+1$ with $n \ge 1$. Let us denote by $T_n$ the set of nodes that have been picked in previous rounds and by $S_n$ the nodes which are not in $T_n$ but whose parent is. The algorithm picks at round $n+1$ a node $(H_{n+1}, I_{n+1}) \in S_n$ according to the deterministic rule that will be described below. After selecting the node, the algorithm further chooses an arm $X_{n+1} \in P_{H_{n+1}, I_{n+1}}$. This selection can be stochastic or deterministic. We do not put any further restriction on it. The algorithm then gets a reward $Y_{n+1}$ as described above and the procedure goes on: $(H_{n+1}, I_{n+1})$ is added to $T_n$ to form $T_{n+1}$ and the children of $(H_{n+1}, I_{n+1})$ are added to $S_n$ to give rise to $S_{n+1}$. Let us now turn to how $(H_{n+1}, I_{n+1})$ is selected.
Along with the nodes the algorithm stores what we call B-values. The node $(H_{n+1}, I_{n+1}) \in S_n$ to expand at round $n+1$ is picked by following a path from the root to a node in $S_n$, where at each node along the path the child with the larger B-value is selected (ties are broken arbitrarily). In order to define a node's B-value, we need a few quantities. Let $\mathcal{C}(h,i)$ be the set that collects $(h,i)$ and its descendants. We let
$$N_{h,i}(n) = \sum_{t=1}^{n} \mathbb{I}_{\{(H_t, I_t) \in \mathcal{C}(h,i)\}}$$
be the number of times the node (h, i) was visited. A given node (h, i) is always picked at most once, but
since its descendants may be picked afterwards, subsequent paths in the tree can go through it. Consequently,
$1 \le N_{h,i}(n) \le n$ for all nodes $(h,i) \in T_n$. Let $\widehat{\mu}_{h,i}(n)$ be the empirical average of the rewards received for the time-points when the path followed by the algorithm went through $(h,i)$:
$$\widehat{\mu}_{h,i}(n) = \frac{1}{N_{h,i}(n)} \sum_{t=1}^{n} Y_t \, \mathbb{I}_{\{(H_t, I_t) \in \mathcal{C}(h,i)\}}.$$
The corresponding upper confidence bound is by definition
$$U_{h,i}(n) = \widehat{\mu}_{h,i}(n) + \sqrt{\frac{2 \ln n}{N_{h,i}(n)}} + \nu_1 \rho^h,$$
where $0 < \rho < 1$ and $\nu_1 > 0$ are parameters of the algorithm (to be chosen later by the decision maker, see Assumption 1). For nodes not in $T_n$, by convention, $U_{h,i}(n) = +\infty$. Now, for a node $(h,i)$ in $S_n$, we define its B-value to be $B_{h,i}(n) = +\infty$. The B-values for nodes in $T_n$ are given by
$$B_{h,i}(n) = \min\Big\{ U_{h,i}(n),\; \max\big\{ B_{h+1,2i-1}(n),\, B_{h+1,2i}(n) \big\} \Big\}.$$
Note that the algorithm is deterministic (apart, maybe, from the arbitrary random choice of $X_t$ in $P_{H_t, I_t}$). Its total space requirement is linear in $n$, while its total running time at round $n$ is at most quadratic in $n$, though we conjecture that it is $O(n \log n)$ on average.
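To make the selection rule concrete, the following is a minimal Python sketch of one HOO round, under stated assumptions: the tree is stored explicitly, `sample_arm(h, i)` returns an arbitrary arm in the region $P_{h,i}$, and `pull(x)` returns its reward in [0, 1]. Both helpers, and all names below, are placeholders of this sketch rather than anything prescribed by the paper.

```python
import math

class Node:
    """A node (h, i) of the covering tree; children are created lazily."""
    def __init__(self, h, i):
        self.h, self.i = h, i
        self.n = 0            # N_{h,i}: number of visits through this node
        self.mu = 0.0         # empirical mean of rewards through this node
        self.children = None  # (left, right), set once this node is played

def b_value(node, t, rho, nu1):
    # Nodes not yet played (in S_n) have B = +infinity by convention.
    if node.n == 0:
        return float('inf')
    u = node.mu + math.sqrt(2.0 * math.log(t) / node.n) + nu1 * rho ** node.h
    return min(u, max(b_value(c, t, rho, nu1) for c in node.children))

def hoo_round(root, t, rho, nu1, sample_arm, pull):
    """One HOO round: follow the larger-B child until an unplayed node."""
    path, node = [root], root
    while node.n > 0:
        node = max(node.children, key=lambda c: b_value(c, t, rho, nu1))
        path.append(node)
    # Expand the selected node and play an (arbitrary) arm in its region.
    node.children = (Node(node.h + 1, 2 * node.i - 1),
                     Node(node.h + 1, 2 * node.i))
    y = pull(sample_arm(node.h, node.i))       # reward in [0, 1]
    for v in path:                             # update stats along the path
        v.mu = (v.mu * v.n + y) / (v.n + 1)
        v.n += 1
```

Recomputing B-values recursively as above costs up to the size of the played tree per round, i.e. quadratic time overall, consistent with the statement above; caching B-values at the nodes is the natural way to speed this up.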
4 Assumptions made on the model and statement of the main result
We suppose that $\mathcal{X}$ is equipped with a dissimilarity $\ell$, that is, a non-negative mapping $\ell : \mathcal{X}^2 \to \mathbb{R}$ satisfying $\ell(x,x) = 0$. The diameter (with respect to $\ell$) of a subset $A$ of $\mathcal{X}$ is given by $\operatorname{diam} A = \sup_{x,y \in A} \ell(x,y)$. Given the dissimilarity $\ell$, the "open" ball with radius $\varepsilon > 0$ and center $c \in \mathcal{X}$ is $B(c, \varepsilon) = \{x \in \mathcal{X} : \ell(c,x) < \varepsilon\}$ (we do not require the topology induced by $\ell$ to be related to the topology of $\mathcal{X}$). In what follows, when we refer to an (open) ball, we refer to the ball defined with respect to $\ell$.
The dissimilarity will be used to capture the smoothness of the mean-payoff function. The decision maker
chooses $\ell$ and the tree of coverings. The following assumption relates this choice to the parameters $\rho$ and $\nu_1$
of the algorithm:
Assumption 1. There exist $\rho < 1$ and $\nu_1, \nu_2 > 0$ such that for all integers $h \ge 0$ and all $i = 1, \ldots, 2^h$, the diameter of $P_{h,i}$ is bounded by $\nu_1 \rho^h$, and $P_{h,i}$ contains an open ball $P^{\circ}_{h,i}$ of radius $\nu_2 \rho^h$. For a given $h$, the $P^{\circ}_{h,i}$ are disjoint for $1 \le i \le 2^h$.
Remark 2. A typical choice for the coverings in a cubic domain is to let the domains be hyper-rectangles. They can be obtained, e.g., in a dyadic manner, by splitting at each step hyper-rectangles in the middle along their longest side, in an axis-parallel manner; if all sides are equal, we split them along the first axis. In this example, if $\mathcal{X} = [0,1]^D$ and $\ell(x,y) = \|x - y\|^\alpha$, then we can take $\rho = 2^{-\alpha/D}$, $\nu_1 = (\sqrt{D}/2)^\alpha$ and $\nu_2 = 1/8^\alpha$.
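As a small illustration of this remark, here is a sketch of the dyadic splitting rule; the function name and the list-based cell representation are our own.

```python
def split_cell(lo, hi):
    """Split an axis-aligned cell [lo, hi] in R^D along its longest side.

    Returns the two child cells (P_{h+1,2i-1}, P_{h+1,2i}) of Remark 2.
    lo and hi are coordinate lists; ties are broken by the first axis.
    """
    d = max(range(len(lo)), key=lambda j: hi[j] - lo[j])  # longest side
    mid = 0.5 * (lo[d] + hi[d])
    left_hi = hi[:]; left_hi[d] = mid
    right_lo = lo[:]; right_lo[d] = mid
    return (lo, left_hi), (right_lo, hi)
```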
The next assumption concerns the environment.
Definition 2. We say that $f$ is weakly Lipschitz with respect to $\ell$ if for all $x, y \in \mathcal{X}$,
$$f^* - f(y) \le f^* - f(x) + \max\big\{ f^* - f(x),\; \ell(x,y) \big\}. \qquad (1)$$
Note that weak Lipschitzness is satisfied whenever $f$ is 1-Lipschitz, i.e., for all $x, y \in \mathcal{X}$, one has $|f(x) - f(y)| \le \ell(x,y)$. On the other hand, weak Lipschitzness implies local (one-sided) 1-Lipschitzness at any maxima. Indeed, at an optimal arm $x^*$ (i.e., such that $f(x^*) = f^*$), (1) rewrites to $f(x^*) - f(y) \le \ell(x^*, y)$. However, weak Lipschitzness does not constrain the growth of the loss in the vicinity of other points. Further, weak Lipschitzness, unlike Lipschitzness, does not constrain the local decrease of the loss at any point. Thus, weak Lipschitzness is a property that lies somewhere between a growth condition on the loss around optimal arms and (one-sided) Lipschitzness. Note that since weak Lipschitzness is defined with respect to a dissimilarity, it can actually capture higher-order smoothness at the optima. For example, $f(x) = 1 - x^2$ is weakly Lipschitz with the dissimilarity $\ell(x,y) = c(x-y)^2$ for some appropriate constant $c$.
Assumption 2. The mean-payoff function f is weakly Lipschitz.
Let $f^*_{h,i} = \sup_{x \in P_{h,i}} f(x)$ and $\Delta_{h,i} = f^* - f^*_{h,i}$ be the suboptimality of node $(h,i)$. We say that a node $(h,i)$ is optimal (respectively, suboptimal) if $\Delta_{h,i} = 0$ (respectively, $\Delta_{h,i} > 0$). Let $\mathcal{X}_\varepsilon = \{x \in \mathcal{X} : f(x) \ge f^* - \varepsilon\}$ be the set of $\varepsilon$-optimal arms. The following result follows from the definitions; a proof can be found in the appendix.
Lemma 1. Let Assumptions 1 and 2 hold. If the suboptimality $\Delta_{h,i}$ of a region is bounded by $c \nu_1 \rho^h$ for some $c > 0$, then all arms in $P_{h,i}$ are $\max\{2c, c+1\} \nu_1 \rho^h$-optimal.
The last assumption is closely related to Assumption 2 of Auer et al. [2], who observed that the regret of a continuum-armed bandit algorithm should depend on how fast the volume of the sets of $\varepsilon$-optimal arms shrinks as $\varepsilon \to 0$. Here, we capture this by defining a new notion, the near-optimality dimension of the mean-payoff function. The connection between these concepts, as well as the zooming dimension defined by Kleinberg et al. [7], will be further discussed in Section 7.
Define the packing number $\mathcal{P}(\mathcal{X}, \ell, \varepsilon)$ to be the size of the largest packing of $\mathcal{X}$ with disjoint open balls of radius $\varepsilon$ with respect to the dissimilarity $\ell$.² We now define the near-optimality dimension, which characterizes the size of the sets $\mathcal{X}_\varepsilon$ in terms of $\varepsilon$, and then state our main result.
Definition 3. For $c > 0$ and $\varepsilon_0 > 0$, the $(c, \varepsilon_0)$-near-optimality dimension of $f$ with respect to $\ell$ equals
$$\inf\Big\{ d \in [0, +\infty) : \exists\, C \text{ s.t. } \forall \varepsilon \le \varepsilon_0,\ \mathcal{P}\big(\mathcal{X}_{c\varepsilon}, \ell, \varepsilon\big) \le C\, \varepsilon^{-d} \Big\} \qquad (2)$$
(with the usual convention that $\inf \emptyset = +\infty$).
Theorem 1 (Main result). Let Assumptions 1 and 2 hold and assume that the $(4\nu_1/\nu_2, \nu_2)$-near-optimality dimension of the considered environment is $d < +\infty$. Then, for any $d' > d$ there exists a constant $C(d')$ such that for all $n \ge 1$,
$$\mathbb{E}\, R_n \le C(d')\, n^{(d'+1)/(d'+2)} \big(\ln n\big)^{1/(d'+2)}.$$
Further, if the near-optimality dimension is achieved, i.e., the infimum is achieved in (2), then the result holds also for $d' = d$.
Remark 3. We can relax the weak-Lipschitz property by requiring it to hold only locally around the maxima. In fact, at the price of increased constants, the result continues to hold if there exists $\varepsilon > 0$ such that (1) holds for any $x, y \in \mathcal{X}_\varepsilon$. To show this we only need to carefully adapt the steps of the proof below. We omit the details from this extended abstract.
5 Analysis of the regret and proof of the main result
We first state three lemmas, whose proofs can be found in the appendix. The proofs of Lemmas 3 and 4 rely on concentration-of-measure techniques, while that of Lemma 2 follows from a simple case study. Let us fix some path $(0,1), (1, i^*_1), \ldots, (h, i^*_h), \ldots$ of optimal nodes, starting from the root.
Lemma 2. Let $(h,i)$ be a suboptimal node. Let $k$ be the largest depth such that $(k, i^*_k)$ is on the path from the root to $(h,i)$. Then we have
$$\mathbb{E}\, N_{h,i}(n) \le u + \sum_{t=u+1}^{n} \mathbb{P}\Big\{ N_{h,i}(t) > u \text{ and } \big[ U_{h,i}(t) > f^* \text{ or } U_{s,i^*_s} \le f^* \text{ for some } s \in \{k+1, \ldots, t-1\} \big] \Big\}.$$
Lemma 3. Let Assumptions 1 and 2 hold. Then, for all optimal nodes and for all integers $n \ge 1$, $\mathbb{P}\big\{ U_{h,i}(n) \le f^* \big\} \le n^{-3}$.
Lemma 4. Let Assumptions 1 and 2 hold. Then, for all integers $t \le n$, for all suboptimal nodes $(h,i)$ such that $\Delta_{h,i} > \nu_1 \rho^h$, and for all integers $u \ge 1$ such that $u \ge \frac{8 \ln n}{(\Delta_{h,i} - \nu_1 \rho^h)^2}$, one has $\mathbb{P}\big\{ U_{h,i}(t) > f^* \text{ and } N_{h,i}(t) > u \big\} \le t\, n^{-4}$.
²Note that sometimes packing numbers are defined as the largest packing with disjoint open balls of radius $\varepsilon/2$, or $\varepsilon$-nets.
Taking $u$ as the integer part of $(8 \ln n)/(\Delta_{h,i} - \nu_1 \rho^h)^2$, and combining the results of Lemmas 2, 3, and 4 with a union bound leads to the following key result.
Lemma 5. Under Assumptions 1 and 2, for all suboptimal nodes $(h,i)$ such that $\Delta_{h,i} > \nu_1 \rho^h$, we have, for all $n \ge 1$,
$$\mathbb{E}[N_{h,i}(n)] \le \frac{8 \ln n}{(\Delta_{h,i} - \nu_1 \rho^h)^2} + \frac{2}{n}.$$
We are now ready to prove Theorem 1.
Proof. For the sake of simplicity we assume that the infimum in the definition of near-optimality is achieved. To obtain the result in the general case one only needs to replace $d$ below by $d' > d$ in the proof below.
First step. For all $h = 1, 2, \ldots$, denote by $I_h$ the nodes at depth $h$ that are $2\nu_1\rho^h$-optimal, i.e., the nodes $(h,i)$ such that $f^*_{h,i} \ge f^* - 2\nu_1\rho^h$. Then, $I$ is the union of these sets of nodes. Further, let $J$ be the set of nodes that are not in $I$ but whose parent is in $I$. We then denote by $J_h$ the nodes in $J$ that are located at depth $h$ in the tree. Lemma 4 bounds the expected number of times each node $(h,i) \in J_h$ is visited. Since $\Delta_{h,i} > 2\nu_1\rho^h$, we get
$$\mathbb{E}\, N_{h,i}(n) \le \frac{8 \ln n}{\nu_1^2 \rho^{2h}} + \frac{2}{n}.$$
Second step. We bound here the cardinality $|I_h|$, $h > 0$. If $(h,i) \in I_h$ then, since $\Delta_{h,i} \le 2\nu_1\rho^h$, by Lemma 1 we have $P_{h,i} \subset \mathcal{X}_{4\nu_1\rho^h}$. Since by Assumption 1 the sets $(P_{h,i})$, for $(h,i) \in I_h$, contain disjoint balls of radius $\nu_2\rho^h$, we have that
$$|I_h| \le \mathcal{P}\Big( \cup_{(h,i) \in I_h} P_{h,i},\, \ell,\, \nu_2\rho^h \Big) \le \mathcal{P}\big( \mathcal{X}_{(4\nu_1/\nu_2)\,\nu_2\rho^h},\, \ell,\, \nu_2\rho^h \big) \le C \big(\nu_2\rho^h\big)^{-d},$$
where we used the assumption that $d$ is the $(4\nu_1/\nu_2, \nu_2)$-near-optimality dimension of $f$ (and $C$ is the constant introduced in the definition of the near-optimality dimension).
Third step. Choose $\eta > 0$ and let $H$ be the smallest integer such that $\rho^H \le \eta$. We partition the infinite tree $\mathcal{T}$ into three sets of nodes, $\mathcal{T} = \mathcal{T}_1 \cup \mathcal{T}_2 \cup \mathcal{T}_3$. The set $\mathcal{T}_1$ contains the nodes of $I_H$ and their descendants, $\mathcal{T}_2 = \cup_{0 \le h < H} I_h$, and $\mathcal{T}_3$ contains the nodes $\cup_{1 \le h \le H} J_h$ and their descendants. (Note that $\mathcal{T}_1$ and $\mathcal{T}_3$ are potentially infinite, while $\mathcal{T}_2$ is finite.)
We denote by $(H_t, I_t)$ the node that was chosen by the forecaster at round $t$ to pick $X_t$. From the definition of the forecaster, no two such random variables are equal, since each node is picked at most once. We decompose the regret according to the set $\mathcal{T}_j$ the chosen nodes $(H_t, I_t)$ belong to:
$$\mathbb{E}\, R_n = \mathbb{E}\Big[ \sum_{t=1}^{n} \big(f^* - f(X_t)\big) \Big] = \mathbb{E}\, R_{n,1} + \mathbb{E}\, R_{n,2} + \mathbb{E}\, R_{n,3},$$
where for all $i = 1, 2, 3$,
$$R_{n,i} = \sum_{t=1}^{n} \big(f^* - f(X_t)\big)\, \mathbb{I}_{\{(H_t, I_t) \in \mathcal{T}_i\}}.$$
The contribution from $\mathcal{T}_1$ is easy to bound. By definition any node in $I_H$ is $2\nu_1\rho^H$-optimal. Hence, by Lemma 1 the corresponding domain is included in $\mathcal{X}_{4\nu_1\rho^H}$. The domains of these nodes' descendants are of course still included in $\mathcal{X}_{4\nu_1\rho^H}$. Therefore, $\mathbb{E}[R_{n,1}] \le 4 n \nu_1 \rho^H$.
For $h \ge 1$, consider a node $(h,i) \in \mathcal{T}_2$. It belongs to $I_h$ and is therefore $2\nu_1\rho^h$-optimal. By Lemma 1, the corresponding domain is included in $\mathcal{X}_{4\nu_1\rho^h}$. By the result of the second step and using that each node is played at most once, one gets
$$\mathbb{E}\, R_{n,2} \le \sum_{h=0}^{H-1} 4\nu_1\rho^h\, |I_h| \le 4\nu_1 C \nu_2^{-d} \sum_{h=0}^{H-1} \rho^{h(1-d)}.$$
We finish with the contribution from $\mathcal{T}_3$. We first remark that since the parent of any element $(h,i) \in J_h$ is in $I_{h-1}$, by Lemma 1 again, we have that $P_{h,i} \subset \mathcal{X}_{4\nu_1\rho^{h-1}}$. To each node $(H_t, I_t)$ played in $\mathcal{T}_3$, we associate the element $(H_{t'}, I_{t'})$ of some $J_h$ on the path from the root to $(H_t, I_t)$. When $(H_t, I_t)$ is played, the chosen arm $X_t$ belongs also to $P_{H_{t'}, I_{t'}}$. Decomposing $R_{n,3}$ according to the elements of $\cup_{1 \le h \le H} J_h$, we then bound the regret from $\mathcal{T}_3$ as
$$\mathbb{E}\, R_{n,3} \le \sum_{h=1}^{H} 4\nu_1\rho^{h-1} \sum_{i : (h,i) \in J_h} \mathbb{E}\, N_{h,i}(n) \le \sum_{h=1}^{H} 4\nu_1\rho^{h-1}\, |J_h| \left( \frac{8\ln n}{\nu_1^2\rho^{2h}} + \frac{2}{n} \right),$$
where we used the result of the first step. Now, it follows from the fact that the parent of any element of $J_h$ is in $I_{h-1}$ that $|J_h| \le 2|I_{h-1}|$. Substituting this and the bound on $|I_{h-1}|$, we get
$$\mathbb{E}\, R_{n,3} \le 8\nu_1 C \nu_2^{-d} \sum_{h=1}^{H} \rho^{h(1-d)+d-1} \left( \frac{8\ln n}{\nu_1^2\rho^{2h}} + \frac{2}{n} \right).$$
Fourth step. Putting things together, we have proved
$$\mathbb{E}\, R_n \le 4 n \nu_1 \rho^H + 4\nu_1 C \nu_2^{-d} \sum_{h=0}^{H-1} \rho^{h(1-d)} + 8\nu_1 C \nu_2^{-d} \sum_{h=1}^{H} \rho^{h(1-d)+d-1} \left( \frac{8\ln n}{\nu_1^2\rho^{2h}} + \frac{2}{n} \right)$$
$$= O\left( n\rho^H + (\ln n) \sum_{h=1}^{H} \rho^{-h(1+d)} \right) = O\big( n\rho^H + \rho^{-H(1+d)} \ln n \big) = O\big( n^{(d+1)/(d+2)} (\ln n)^{1/(d+2)} \big)$$
by using first that $\rho < 1$ and then optimizing over $\rho^H$ (the worst value being $\rho^H \approx \big(\tfrac{\ln n}{n}\big)^{1/(d+2)}$).
6 Minimax optimality
The packing dimension of a set $\mathcal{X}$ is the smallest $d$ such that there exists a constant $k$ such that for all $\varepsilon > 0$, $\mathcal{P}(\mathcal{X}, \ell, \varepsilon) \le k\, \varepsilon^{-d}$. For instance, compact subsets of $\mathbb{R}^d$ (with non-empty interior) have a packing dimension of $d$ whenever $\ell$ is a norm. If $\mathcal{X}$ has a packing dimension of $d$, then all environments have a near-optimality dimension less than $d$. The proof of the main theorem indicates that the constant $C(d)$ only depends on $d$, $k$ (of the definition of packing dimension), $\nu_1$, $\nu_2$, and $\rho$, but not on the environment as long as it is weakly Lipschitz. Hence, we can extract from it a distribution-free bound of the form $\widetilde{O}(n^{(d+1)/(d+2)})$.
In fact, this bound can be shown to be optimal, as illustrated by the theorem below, whose assumptions are satisfied by, e.g., compact subsets of $\mathbb{R}^d$ when $\ell$ is some norm of $\mathbb{R}^d$. The proof can be found in the appendix.
Theorem 2. If $\mathcal{X}$ is such that there exists $c > 0$ with $\mathcal{P}(\mathcal{X}, \ell, \varepsilon) \ge c\, \varepsilon^{-d} \ge 2$ for all $\varepsilon \le 1/4$, then for all $n \ge 4^{d-1} c / \ln(4/3)$, all strategies $\pi$ are bound to suffer a regret of at least
$$\sup \mathbb{E}\, R_n(\pi) \ge \frac{1}{4} \left( \frac{1}{4} \sqrt{\frac{c}{4\ln(4/3)}} \right)^{2/(d+2)} n^{(d+1)/(d+2)},$$
where the supremum is taken over all environments with weakly Lipschitz payoff functions.
7 Discussion
Several works [1; 6; 3; 2; 7] have considered continuum-armed bandits in Euclidean or metric spaces and provided upper and lower bounds on the regret for given classes of environments. Cope [3] derived a regret of $\widetilde{O}(\sqrt{n})$ for compact and convex subsets of $\mathbb{R}^d$ and a mean-payoff function with unique minima and second-order smoothness. Kleinberg [6] considered mean-payoff functions $f$ on the real line that are Hölder with degree $0 < \alpha \le 1$. The derived regret is $\Theta(n^{(\alpha+1)/(\alpha+2)})$. Auer et al. [2] extended the analysis to classes of functions with only a local Hölder assumption around the maximum (with possibly higher smoothness degree $\alpha \in [0,\infty)$), and derived the regret $\Theta\big(n^{\frac{1+\alpha-\alpha\beta}{1+2\alpha-\alpha\beta}}\big)$, where $\beta$ is such that the Lebesgue measure of $\varepsilon$-optimal states is $O(\varepsilon^\beta)$. Another setting is that of [7], who considered a metric space $(\mathcal{X}, \ell)$ and assumed that $f$
is Lipschitz w.r.t. $\ell$. The obtained regret is $\widetilde{O}(n^{(d+1)/(d+2)})$, where $d$ is the zooming dimension (defined similarly to our near-optimality dimension, but using covering numbers instead of packing numbers and the sets $\mathcal{X}_\varepsilon \setminus \mathcal{X}_{\varepsilon/2}$). When $(\mathcal{X}, \ell)$ is a metric space, covering and packing numbers are equivalent and we may prove that the zooming dimension and near-optimality dimension are equal.
Our main contribution compared to [7] is that our weak-Lipschitz assumption, which is substantially weaker than the global Lipschitz assumption assumed in [7], enables our algorithm to work better in some common situations, such as when the mean-payoff function assumes a local smoothness whose order is larger than one. In order to relate all these results, let us consider a specific example: Let $\mathcal{X} = [0,1]^D$ and assume that the mean-reward function $f$ is locally equivalent to a Hölder function with degree $\alpha \in [0,\infty)$ around any maximum $x^*$ of $f$ (the number of maxima is assumed to be finite):
$$f(x^*) - f(x) = \Theta\big( \|x - x^*\|^\alpha \big) \text{ as } x \to x^*. \qquad (3)$$
This means that $\exists c_1, c_2, \varepsilon_0 > 0$ such that for all $x$ with $\|x - x^*\| \le \varepsilon_0$, $c_1 \|x - x^*\|^\alpha \le f(x^*) - f(x) \le c_2 \|x - x^*\|^\alpha$.
Under this assumption, the result of Auer et al. [2] shows that for $D = 1$, the regret is $\Theta(\sqrt{n})$ (since here $\beta = 1/\alpha$). Our result allows us to extend the $\sqrt{n}$ regret rate to any dimension $D$. Indeed, if we choose our dissimilarity measure to be $\ell_\alpha(x,y) \stackrel{\text{def}}{=} \|x - y\|^\alpha$, we may prove that $f$ satisfies a locally weak-Lipschitz condition (as defined in Remark 3) and that the near-optimality dimension is 0. Thus our regret is $\widetilde{O}(\sqrt{n})$, i.e., the rate is independent of the dimension $D$.
In comparison, since Kleinberg et al. [7] have to satisfy a global Lipschitz assumption, they cannot use $\ell_\alpha$ when $\alpha > 1$. Indeed a function globally Lipschitz with respect to $\ell_\alpha$ is essentially constant. Moreover $\ell_\alpha$ does not define a metric for $\alpha > 1$. If one resorts to the Euclidean metric to fulfill their requirement that $f$ be Lipschitz w.r.t. the metric, then the zooming dimension becomes $D(\alpha-1)/\alpha$, while the regret becomes $\widetilde{O}\big(n^{(D(\alpha-1)+\alpha)/(D(\alpha-1)+2\alpha)}\big)$, which is strictly worse than $\widetilde{O}(\sqrt{n})$ and in fact becomes close to the slow rate $\widetilde{O}(n^{(D+1)/(D+2)})$ when $\alpha$ is larger. Nevertheless, in the case of $\alpha \le 1$ they get the same regret rate. In contrast, our result shows that under very weak constraints on the mean-payoff function and if the local behavior of the function around its maximum (or finite number of maxima) is known, then global optimization suffers a regret of order $\widetilde{O}(\sqrt{n})$, independent of the space dimension. As an interesting side note, let us also remark that our results allow different smoothness orders along different dimensions, i.e., heterogeneous smoothness spaces.
References
[1] R. Agrawal. The continuum-armed bandit problem. SIAM J. Control and Optimization, 33:1926-1951, 1995.
[2] P. Auer, R. Ortner, and Cs. Szepesvári. Improved rates for the stochastic continuum-armed bandit problem. 20th Conference on Learning Theory, pages 454-468, 2007.
[3] E. Cope. Regret and convergence bounds for immediate-reward reinforcement learning with continuous action spaces. Preprint, 2004.
[4] P.-A. Coquelin and R. Munos. Bandit algorithms for tree search. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, 2007.
[5] S. Gelly, Y. Wang, R. Munos, and O. Teytaud. Modification of UCT with patterns in Monte-Carlo Go. Technical Report RR-6062, INRIA, 2006.
[6] R. Kleinberg. Nearly tight bounds for the continuum-armed bandit problem. In 18th Advances in Neural Information Processing Systems, 2004.
[7] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In Proceedings of the 40th ACM Symposium on Theory of Computing, 2008.
[8] L. Kocsis and Cs. Szepesvári. Bandit based Monte-Carlo planning. In Proceedings of the 15th European Conference on Machine Learning, pages 282-293, 2006.
Dependent Dirichlet Process Spike Sorting
Jan Gasthaus, Frank Wood, Dilan Görür, Yee Whye Teh
Gatsby Computational Neuroscience Unit
University College London
London, WC1N 3AR, UK
{j.gasthaus, fwood, dilan, ywteh}@gatsby.ucl.ac.uk
Abstract
In this paper we propose a new incremental spike sorting model that automatically
eliminates refractory period violations, accounts for action potential waveform
drift, and can handle "appearance" and "disappearance" of neurons. Our approach
is to augment a known time-varying Dirichlet process that ties together a sequence
of infinite Gaussian mixture models, one per action potential waveform observation,
with an interspike-interval-dependent likelihood that prohibits refractory period
violations. We demonstrate this model by showing results from sorting two publicly
available neural data recordings for which a partial ground truth labeling is known.
1 Introduction
Spike sorting (see [1] and [2] for review and methodological background) is the name given to the
problem of grouping action potentials by source neuron. Generally speaking, spike sorting involves
a sequence of steps; 1) recording the activity of an unknown number of neurons using some kind
of extra-cellular recording device, 2) detecting the times at which action potentials are likely to
have occurred, 3) slicing action potential waveforms from the surrounding raw voltage trace where
action potentials were posited to have occurred, 4) (often) performing some kind of dimensionality
reduction/feature extraction on the set of collected action potential waveform snippets, 5) running
a clustering algorithm to produce grouping of action potentials attributed to a single neuron, and
finally 6) running some kind of post hoc algorithm that detects refractory period violations and thins
or adjusts the clustering results accordingly.
Neuroscientists are interested in arriving at the optimal solution to this problem. Towards this end they
have traditionally utilized maximum likelihood clustering methods such as expectation maximization
for finite Gaussian mixture models with cross-validation model selection. This of course allows them
to arrive at an optimal solution, but it is difficult to say whether or not it is the optimal solution, and it
affords them no way of establishing the level of confidence they should have in their result. Recently
several groups have suggested a quite different approach to this problem which eschews the quest
for a single optimal solution in favor of a Bayesian treatment of the problem [3, 4, 5, 6]. In each of
these, instead of pursuing the optimal sorting, multiple sortings of the spikes are produced (in fact
what each model produces is a posterior distribution over spike trains). Neural data analyses may
then be averaged over the resulting spike train distribution to account for uncertainties that may have
arisen at various points in the spike sorting process and would not have been explicitly accounted for
otherwise.
Our work builds on this new Bayesian approach to spike sorting; going beyond them in the way steps
five and six are accomplished. Specifically we apply the generalized Polya urn dependent Dirichlet
process mixture model (GPUDPM) [7, 8] to the problem of spike sorting and show how it allows us
to model waveform drift and account for neuron appearance and disappearance. By introducing a
time dependent likelihood into the model we are also able to eliminate refractory period violations.
The need for a spike sorting approach with these features arises from several domains. Waveform
non-stationarities either due to changes in the recording environment (e.g. movement of the electrode)
or due to changes in the firing activity of the neuron itself (e.g. burstiness) cause almost all current
spike sorting approaches to fail. This is because most pool waveforms over time, discarding the time
at which the action potentials were observed. A notable exception to this is the spike sorting approach
of [9], in which waveforms were pooled and clustered in short fixed time intervals. Multiple Gaussian
mixture models are then fit to the waveforms in each interval and then are pruned and smoothed
until a single coherent sequence of mixture models is left that describes the entire time course of the
data. This is accomplished by using a forward-backward-like algorithm and the Jenson-Shannon
divergence between models in consecutive intervals. Although very good results can be produced
by such a model, using it requires choosing values for a large number of parameters, and, as it is a
smoothing algorithm, it requires the entire data set to have been observed already.
A recent study by [10] puts forward a compelling case for online spike sorting algorithms that can
handle waveform non-stationarity, as well as sudden jumps in waveform shape (e.g. abrupt electrode
movements due to high acceleration events), and appearance and disappearance of neurons from the
recording over time. This paper introduces a chronic recording paradigm in which a chronically
implanted recording device is mated with appropriate storage such that very long term recordings
can be made. Unfortunately as the animal being recorded from is allowed its full range of natural
movements, accelerations may cause the signal characteristics of the recording to vary dramatically
over short time intervals. As such data theoretically can be recorded forever without stopping,
forward-backward spike sorting algorithms such as that in [9] are ruled out. As far as we know our
proposed model is the only sequential spike sorting model that meets all of the requirements of this
new and challenging spike sorting problem.
In the next sections we review the GPUDPM on which our spike sorting model is based, introduce
the specifics of our spike sorting model, then demonstrate its performance on real data for which a
partial ground truth labeling is known.
2 Review
Our model is based on the generalized Polya urn Dirichlet process mixture model (GPUDPM)
described in [7, 8]. The GPUDPM is a time dependent Dirichlet process (DDP) mixture model
formulated in the Chinese restaurant process (CRP) sampling representation of a Dirichlet process
mixture model (DPM). We will first very briefly review DPMs in general and then turn to the specifics
of the GPUDPM.
DPMs are a widely used tool for nonparametric density estimation and unsupervised learning in
models where the true number of latent classes is unknown. In a DPM, the mixing distribution G is
distributed according to a DP with base distribution $G_0$, i.e.
$$G \mid \alpha, G_0 \sim \mathrm{DP}(\alpha, G_0), \qquad \theta_i \mid G \sim G, \qquad x_i \mid \theta_i \sim F(\theta_i). \qquad (1)$$
Placing a DP prior over $G$ induces a clustering tendency amongst the $\theta_i$. If $\theta_i$ takes on $K$ distinct values $\theta_1, \ldots, \theta_K$, we can write out an equivalent model using indicator variables $c_i \in \{1, \ldots, K\}$ that assign data points to clusters. In this representation we track the distinct $\theta_k$ drawn from $G_0$ for each cluster, and use the Chinese restaurant process to sample the conditional distributions of the indicator variables $c_i$:
$$P(c_i = k \mid c_1, \ldots, c_{i-1}) = \frac{m_k}{i - 1 + \alpha} \quad \text{for } k \in \{c_j : j < i\} \qquad (2)$$
$$P(c_i \ne c_j \text{ for all } j < i \mid c_1, \ldots, c_{i-1}) = \frac{\alpha}{i - 1 + \alpha}$$
where $m_k = \#\{c_j : c_j = k \wedge j < i\}$.
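As a concrete illustration, here is a minimal sketch of sampling a table assignment from the conditionals in (2); the list-based bookkeeping is an implementation choice of the sketch, not part of the model.

```python
import random

def crp_assign(counts, alpha):
    """Sample a table for the next customer from the CRP conditionals (2).

    counts[k] = m_k, the number of earlier customers at table k.
    Returns an existing table index, or len(counts) for a new table.
    """
    weights = counts + [alpha]          # m_k for old tables, alpha for a new one
    total = sum(weights)
    r, acc = random.uniform(0, total), 0.0
    for k, w in enumerate(weights):
        acc += w
        if r <= acc:
            return k
    return len(counts)                  # numerical edge case
```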
The GPUDPM consists of $T$ individual DPMs, one per discrete time step $t = 1, \ldots, T$, all tied together through a particular way of sharing the component parameters $\theta^t_k$ and table occupancy counts $m^t_k$ between adjacent time steps (here $t$ indexes the parameters and cluster sizes of the $T$ DPMs).
Dependence among the $m^t_k$ is introduced by perturbing the number of customers sitting at each table when moving forward through time. Denote by $\mathbf{m}^t = (m^t_1, \ldots, m^t_{K^t})$ the vector containing the number of customers sitting at each table at time $t$ before a "deletion" step, where $K^t$ is the number of non-empty tables at time $t$. Similarly denote by $\mathbf{m}^{t+1}$ the same quantity after this deletion step. Then the perturbation of the class counts from one step to the next is governed by the process
$$\mathbf{m}^{t+1} \mid \mathbf{m}^t, \rho \sim \begin{cases} \mathbf{m}^t - \boldsymbol{\zeta}^t & \text{with probability } \xi \\ \mathbf{m}^t - \boldsymbol{\delta}^t & \text{with probability } 1 - \xi \end{cases} \qquad (3)$$
where $\delta^t_k \sim \mathrm{Binomial}(m^t_k, 1 - \rho)$, and $\zeta^t_j = m^t_j$ for $j \ne \ell$ and $\zeta^t_j = 0$ for $j = \ell$, where $\ell \sim \mathrm{Discrete}\big(\mathbf{m}^t \big/ \sum_{k=1}^{K^t} m^t_k\big)$. Before seating the customers arriving at time step $t+1$, the number of customers sitting at each table is initialized to $\mathbf{m}^{t+1}$. This perturbation process can either remove some number of customers from a table or effectively delete a table altogether. This deletion procedure accounts for the ability of the GPUDPM to model births and deaths of clusters.
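The following sketch implements one reading of the deletion step in (3): with small probability $\xi$ a size-biased table is emptied entirely (cluster death), and otherwise each table is thinned binomially so that on average a fraction $\rho$ of its customers survives. Because the original equation is only partially recoverable from the extraction above, the branch ordering and the exact roles of $\rho$ and $\xi$ should be treated as assumptions of this sketch.

```python
import random

def perturb_counts(m, rho, xi, rng=random):
    """One deletion step on the table counts m = (m_1, ..., m_K); cf. (3).

    Assumption: with probability xi one size-biased table is emptied
    (cluster death); otherwise each table keeps Binomial(m_k, rho) customers.
    """
    if rng.random() < xi and sum(m) > 0:
        # size-biased choice of a table l, then delete all of its customers
        l = rng.choices(range(len(m)), weights=m)[0]
        return [0 if k == l else m_k for k, m_k in enumerate(m)]
    # binomial thinning: each customer at table k survives with probability rho
    return [sum(rng.random() < rho for _ in range(m_k)) for m_k in m]
```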
The GPUDPM is also capable of modeling drifting cluster parameters. This drift is modeled by tying together the component parameters $\theta^t_k$ through a transition kernel $P(\theta^t_k \mid \theta^{t-1}_k)$ from which the class parameter at time $t$ is sampled given the class parameter at time $t-1$. For various technical reasons one must ensure that the mixture component parameters $\theta^t_k$ are all drawn independently from $G_0$, i.e. $\{\theta^t_k\}_{t=1}^T \sim G_0$. This can be achieved by ensuring that $G_0$ is the invariant distribution of the transition kernel $P(\theta^t_k \mid \theta^{t-1}_k)$ [8].
3 Model
In order to apply the GPUDPM model to spike sorting problems one first has to make a number of
modeling assumptions. The first is choosing a form for the likelihood function describing the distribution of action potential waveform shapes generated by a single neuron, $P(x^t \mid c^t = k, \theta^t_k)$ (the distribution of which was denoted $F(\theta^t_k)$ above), the prior over the parameters of that model (the base distribution $G_0$ above), and the transition kernel $P(\theta^t_k \mid \theta^{t-1}_k)$ that governs how the waveshape of the action potentials emitted by a neuron can change over time. In the following we describe modeling choices
we made for the spike sorting task, as well as how the continuous spike occurrence times can be
incorporated into the model to allow for correct treatment of neuron behaviour during the absolute
refractory period.
Let $\{x^t\}_{t=1}^T$ be the set of action potential waveforms extracted from an extracellular recording (referred to as "spikes" in the following), and let $\tau^1, \ldots, \tau^T$ be the time stamps (in ms) associated with these spikes in ascending order (i.e. $\tau^t \ge \tau^{t'}$ if $t > t'$). The model thus incorporates two different concepts of time: the discrete sequence of time steps $t = 1, \ldots, T$ corresponding to the time steps in the GPUDPM model, and the actual spike times $\tau^t$ at which the spike $x^t$ occurs in the recording. We assume that only one spike occurs per time step $t$, i.e. we set $N = 1$ in the model above and identify $c^t = (c^t_1) = c^t$.
It is well known that the distribution of action potential waveforms originating from a single neuron
in a PCA feature space is well approximated by a Normal distribution [1]. We choose to model each
dimension $x_d$ ($d \in \{1, \ldots, D\}$) of the data independently with a univariate Normal distribution and use a product of independent Normal-Gamma priors as the base distribution $G_0$ of the DP:
$$P(x \mid \theta) \stackrel{\text{def}}{=} \mathcal{N}(x \mid \theta) = \prod_{d=1}^{D} \mathcal{N}\big(x_d \mid \mu_d, \lambda_d^{-1}\big) \qquad (4)$$
$$G_0(\mu_0, n_0, a, b) \stackrel{\text{def}}{=} \prod_{d=1}^{D} \mathcal{N}\big(\mu_d \mid \mu_{0,d}, (n_0 \lambda_d)^{-1}\big)\, \mathrm{Ga}(\lambda_d \mid a, b) \qquad (5)$$
where $\theta = (\mu_1, \ldots, \mu_D, \lambda_1, \ldots, \lambda_D)$, and $\mu_0 = (\mu_{0,1}, \ldots, \mu_{0,D})$, $n_0$, $a$, and $b$ are parameters of the model. The independence assumption is made here mainly to increase computational efficiency.
A model where $P(x \mid \theta)$ is a multivariate Gaussian with full covariance matrix is also possible, but
makes sampling from (7) computationally expensive. While correlations between the components
can be observed in neural recordings, they can at least partially be attributed to temporal waveform
variation.
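For concreteness, here is a sketch of drawing cluster parameters from the product Normal-Gamma base distribution (5) and evaluating the log of the diagonal Gaussian likelihood (4), written with NumPy; the function names are ours.

```python
import numpy as np

def draw_from_base(mu0, n0, a, b, rng):
    """Draw theta = (mu, lam) from the Normal-Gamma base G_0 of (5).

    Per dimension: lam_d ~ Gamma(a, rate=b), mu_d ~ N(mu0_d, (n0*lam_d)^-1).
    """
    lam = rng.gamma(shape=a, scale=1.0 / b, size=mu0.shape)
    mu = rng.normal(loc=mu0, scale=1.0 / np.sqrt(n0 * lam))
    return mu, lam

def log_likelihood(x, mu, lam):
    """log P(x | theta) for the diagonal Gaussian in (4)."""
    return 0.5 * np.sum(np.log(lam) - np.log(2 * np.pi) - lam * (x - mu) ** 2)
```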
To account for the fact that neurons have an absolute refractory period following each action potential
during which no further action potential can occur, we extend the GPUDPM by conditioning the model
on the spike occurrence times $\tau^1, \ldots, \tau^T$ and modifying the conditional probability of assigning a spike to a cluster given the other cluster labels and the spike occurrence times $\tau^1, \ldots, \tau^t$ in the following way:
$$P(c^t = k \mid \mathbf{m}^t, c^{1:t-1}, \tau^{1:t}, \alpha) \propto \begin{cases} 0 & \text{if } \tau^t - \bar{\tau}^t_k \le r_{\mathrm{abs}} \\ m_k & \text{if } \tau^t - \bar{\tau}^t_k > r_{\mathrm{abs}} \text{ and } k \in \{1, \ldots, K^{t-1}\} \\ \alpha & \text{if } \tau^t - \bar{\tau}^t_k > r_{\mathrm{abs}} \text{ and } k = K^{t-1} + 1 \end{cases} \qquad (6)$$
where $\bar{\tau}^t_k$ is the spike time of the last spike assigned to cluster $k$ before time step $t$, i.e. $\bar{\tau}^t_k = \tau^{t'}$, $t' = \max\{t'' \mid t'' < t \wedge c^{t''} = k\}$. Essentially, the conditional probability of assigning the spike at time $t$ to cluster $k$ is zero if the difference between the occurrence time of this spike and the occurrence time of the last spike associated with cluster $k$ is smaller than the refractory period $r_{\mathrm{abs}}$. If the time difference is larger than $r_{\mathrm{abs}}$, then the usual CRP conditional probabilities are used. In terms of the Chinese restaurant metaphor, this setup corresponds to a restaurant in which seating a customer at a table removes that table as an option for new customers for some period of time. Note that this extension introduces additional dependencies among the indicator variables $c^1, \ldots, c^T$.
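A minimal sketch of the assignment weights in (6); maintaining the per-cluster last spike times in a list is an implementation choice of the sketch.

```python
def refractory_crp_weights(counts, last_spike, tau, alpha, r_abs):
    """Unnormalized assignment probabilities from (6).

    counts[k]     = m_k, customers at table k
    last_spike[k] = time of the last spike assigned to cluster k (tau_bar)
    tau           = occurrence time of the current spike
    Returns weights over existing clusters plus a new cluster (last entry).
    """
    w = [m_k if tau - last_spike[k] > r_abs else 0.0
         for k, m_k in enumerate(counts)]
    w.append(alpha)      # a new cluster is always an option
    return w
```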
The transition kernel $P(\theta^t_k \mid \theta^{t-1}_k)$ specifies how the action potential waveshape can vary over time. To meet the technical requirements of the GPUDPM, and because its waveform drift modeling semantics are reasonable, we use the update rule of the Metropolis algorithm [11] as the transition kernel $P(\theta^t_k \mid \theta^{t-1}_k)$, i.e. we set
$$P(\theta^t_k \mid \theta^{t-1}_k) = S(\theta^{t-1}_k, \theta^t_k)\, A(\theta^{t-1}_k, \theta^t_k) + \left( 1 - \int S(\theta^{t-1}_k, \theta')\, A(\theta^{t-1}_k, \theta')\, d\theta' \right) \delta_{\theta^{t-1}_k}(\theta^t_k) \qquad (7)$$
where $S(\theta', \theta^t_k)$ is a (symmetric) proposal distribution and $A(\theta', \theta^t_k) = \min\big\{1,\, G_0(\theta^t_k)/G_0(\theta^{t-1}_k)\big\}$. We choose an isotropic Gaussian centered at the old value as proposal distribution, $S(\theta^{t-1}_k, \cdot) = \mathcal{N}(\theta^{t-1}_k, \sigma I)$. This choice of $P(\theta^t_k \mid \theta^{t-1}_k)$ ensures that $G_0$ is the invariant distribution of the transition kernel, while at the same time allowing us to control the amount of correlation between time steps through $\sigma$. A transition kernel of this form allows the distribution of the action potential waveforms to vary slowly (if $\sigma$ is chosen small) from one time step to the next, both in mean waveform shape and in variance. While small changes are preferred, larger changes are also possible if supported by the data.
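A sketch of drawing from the transition kernel (7): propose from the isotropic Gaussian $S$ and accept with probability $A$, so that $G_0$ is invariant. The `log_g0` helper is assumed to evaluate the log-density of (5); the function name is ours.

```python
import numpy as np

def metropolis_drift(theta, log_g0, sigma, rng):
    """One draw from the transition kernel P(theta^t | theta^{t-1}) in (7).

    theta:  flat parameter vector theta^{t-1}
    log_g0: function returning log G_0(theta) (the base density, eq. (5))
    sigma:  variance of the isotropic Gaussian proposal S
    """
    proposal = theta + np.sqrt(sigma) * rng.standard_normal(theta.shape)
    log_accept = min(0.0, log_g0(proposal) - log_g0(theta))
    if np.log(rng.random()) < log_accept:
        return proposal      # move: the waveform parameters drift
    return theta             # reject: parameters stay put (the delta term)
```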
Inference in this model is performed using the sequential Monte Carlo algorithm (particle filter)
defined in [7, 8].
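The particle filter itself is specified in [7, 8]; purely for orientation, here is a heavily simplified bootstrap-style skeleton of one sequential Monte Carlo step that strings together the pieces sketched above. The particle interface (`assign`, `log_lik`, `copy`) and the weighting scheme are placeholders and simplifications, not the algorithm of [7, 8].

```python
import numpy as np

def smc_step(particles, x, tau, rng):
    """One (simplified) particle filter step: propagate, weight, resample."""
    log_w = np.empty(len(particles))
    for j, p in enumerate(particles):
        p.counts = perturb_counts(p.counts, p.rho, p.xi, rng)      # eq. (3)
        p.params = [metropolis_drift(th, p.log_g0, p.sigma, rng)   # eq. (7)
                    for th in p.params]
        w = refractory_crp_weights(p.counts, p.last, tau, p.alpha, p.r_abs)
        k = rng.choice(len(w), p=np.asarray(w) / sum(w))           # eq. (6)
        p.assign(k, x, tau)                                        # bookkeeping
        log_w[j] = p.log_lik(x, k)                                 # eq. (4)
    w = np.exp(log_w - log_w.max())                                # stabilize
    idx = rng.choice(len(particles), size=len(particles), p=w / w.sum())
    return [particles[i].copy() for i in idx]                      # multinomial
```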
4 Experiments
4.1 Methodology
Experiments were performed on a subset of the publicly available¹ data set described in [12, 13],
which consists of simultaneous intracellular and extracellular recordings of cells in the hippocampus
of anesthetized rats. Recordings from an extracellular tetrode and an intracellular electrode were
made simultaneously, such that the cell recorded on the intracellular electrode was also recorded
extracellularly by a tetrode.
Action potentials detected on the intracellular (IC) channel are an almost certain indicator that the
cell being recorded spiked. Action potentials detected on the extracellular (EC) channels may include
the action potentials generated by the intracellularly recorded cell, but almost certainly include
spiking activity from other cells as well. The intracellular recording therefore can be used to obtain
a ground truth labeling for the spikes originating from one neuron that can be used to evaluate the
performance of human sorters and automatic spike sorting algorithms that sort extracellular recordings
[13]. However, by this method ground truth can only be determined for one of the neurons whose
spikes are present in the extracellular recording, and this should be kept in mind when evaluating the
performance of spike sorting algorithms on such a data set. Neither the correct number of distinct
neurons recorded from by the extracellular electrode nor the correct labeling for any spikes not
originating from the neuron recorded intracellularly can be determined by this methodology.
¹http://crcns.org/data-sets/hc/hc-1/
            |          DPM           |         GPUDPM
Data set    |   FP      FN      RPV  |   FP      FN       RPV
1    MAP    |  4.90%   4.21%    4    |  4.71%    1.32%    0
1    AVG    |  5.11%   5.17%    4    |  4.77%    1.68%    0
2    MAP    |  0.94%   9.40%    1    |  0.85%   18.63%    0
2    AVG    |  0.83%  12.48%    1    |  0.86%   18.81%    0
Table 1: Performance of both algorithms on the two data sets: % false positives (FP), % false negatives (FN), # of refractory period violations (RPV). Results are shown for the MAP solution (MAP) and averaged over the posterior distribution (AVG).
The subset of that data set that was used for the experiments consisted of two recordings from different
animals (4 minutes each), recorded at 10 kHz. The data was bandpass filtered (300 Hz to 3 kHz), and
spikes on the intracellular channel were detected as the local maxima of the first derivative of the
signal larger than a manually chosen threshold. Spikes on the extracellular channels were determined
as the local minima exceeding 4 standard deviations in magnitude. Spike waveforms of length 1
ms were extracted from around each spike (4 samples before and 5 samples after the peak). The
positions of the minima within the spike waveforms were aligned by upsampling, shifting and then
downsampling the waveforms. The extracellular spikes corresponding to action potentials from the
identified neuron were determined as the spikes occurring within 0.1 ms of the IC spike.
For each spike the signals from the four tetrode channels were combined into a vector of length 40.
Each dimension was scaled by the maximal variance among all dimensions and PCA dimensionality
reduction was performed on the scaled data sets (for each of the two recordings separately). The first
three principal components were used as input to our spike sorting algorithm. The first recording
(data set 1) consists of 3187 spikes, 831 originate from the identified neuron, while the second (data
set 2) contains 3502 spikes, 553 of which were also detected on the IC channel. As shown in Figure
1, there is a clearly visible change in waveform shape of the identified neuron over time in data set
1, while in data set 2 the waveform shapes remain roughly constant. Presumably this change in
waveform shape is due to the slow death of the cell as a result of the damage done to the cell by the
intracellular recording procedure.
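The feature-extraction pipeline just described is summarized in the sketch below; the array layout is an assumption, and scikit-learn's PCA stands in for whichever implementation was actually used.

```python
import numpy as np
from sklearn.decomposition import PCA

def spike_features(waveforms):
    """waveforms: (n_spikes, 4, 10) snippets from the four tetrode channels.
    Concatenate into 40-d vectors, scale by the maximal per-dimension variance,
    and keep the first three principal components."""
    x = waveforms.reshape(len(waveforms), -1)    # (n_spikes, 40)
    x = x / x.var(axis=0).max()                  # scale by the largest variance
    return PCA(n_components=3).fit_transform(x)  # (n_spikes, 3)
```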
The parameters for the prior (μ₀, n₀, a, b) were chosen empirically and were fixed at μ₀ = 0,
n₀ = 0.1, a = 4, b = 1 for all experiments. The parameters governing the deletion procedure were
set to ρ = 0.985 and ε = 1 × 10⁻⁵, reflecting the fact that we consider relative firing rates of the
neurons to stay roughly constant over time and neuron death to be a relatively rare process, respectively.
The variance of the proposal distribution ς was fixed at 0.01, favoring small changes in the cluster
parameters from one time step to the next. Experiments on both data sets were performed for
α ∈ {0.01, 0.005, 0.001} and the model was found to be relatively sensitive to this parameter in our
experiments. The sequential Monte Carlo simulations were run using 1000 particles, and multinomial
resampling was performed at each step.
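The multinomial resampling step used at each iteration is standard; a minimal sketch:

```python
import numpy as np

def multinomial_resample(particles, log_weights, rng):
    """Draw N particle indices i.i.d. with probability proportional to the
    normalized importance weights, then reset the weights to uniform."""
    w = np.exp(log_weights - np.max(log_weights))
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return [particles[i] for i in idx], np.zeros(len(particles))
```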
For comparison, the same data set was also sorted using the DPM-based spike sorting algorithm
described in [6],² which pools waveforms over time and thus does not make use of any information
about the occurrence times of the spikes. The algorithm performs Gibbs sampling in a DPM with
Gaussian likelihood and a conjugate Normal-Inverse-Wishart prior. A Gamma prior is placed on the
DP concentration parameter α. The parameters of the prior were set to μ₀ = 0, κ₀ = 0.1,
Λ₀ = 0.1 · I, a₀ = 1 and b₀ = 1. The Gibbs sampler was run for 6000 iterations, where the first
1000 were discarded as burn-in.
4.2 Results
The performance of both algorithms is shown in Table 1. The data labelings corresponding to these
results are illustrated in Figure 1. As expected, our algorithm outperforms the DPM-based algorithm
on data set 1, which includes waveform drift which the DPM cannot account for. As data set 2 does
not show waveform drift it can be adequately modeled without introducing time dependence. The
DPM model which has the advantage of being significantly less complex than the GPUDPM is able
² Code publicly available from http://www.gatsby.ucl.ac.uk/~fwood/code.html
[Figure 1 panels: (a, b) Ground Truth; (c, d) DPM; (e, f) GPUDPM]
Figure 1: A comparison of DPM to GPUDPM spike sorting for two channels of tetrode data for
which the ground truth labeling of one neuron is known. Each column shows subsampled results for
one data set. In all plots the vertical axis is time and the horizontal axes are the first two principal
components of the detected waveforms. The top row of graphs shows the ground truth labeling
of both data sets where the action potentials known to have been generated by a single neuron are
labeled with x's. Other points in the top row of graphs may also correspond to action potentials but
as we do not know the ground truth labeling for them we label them all with dots. The middle row
shows the maximum a posteriori labeling of both data sets produced by a DP mixture model spike
sorting algorithm which does not utilize the time at which waveforms were captured, nor does it
model waveform shape change. The bottom row shows the maximum a posteriori labeling of both
data sets produced by our GPUDPM spike sorting algorithm which does model both the time at
which the spikes occurred and the changing action potential waveshape. The left column shows that
the GPUDPM performs better than the DPM when the waveshape of the underlying neurons changes
over time. The right column shows that the GPUDPM performs no worse than the DPM when the
waveshape of the underlying neurons stays constant.
to outperform our model on this data set. The inferior performance of the GPUDPM model on this
data set can also partly be explained by the inference procedure used: for the GPUDPM model
inference is performed by a particle filter using a relatively small number of particles (1000), whereas
a large number of Gibbs sampler iterations (5000) are used to estimate the posterior for the DPM.
With a larger number of particles (or samples in the Gibbs sampler), one would expect both models
to perform equally well, with possibly a slight advantage for the GPUDPM which can exploit the
information contained in the refractory period violations. As dictated by the model, the GPUDPM
algorithm does not assign two spikes that are within the refractory period of each other to the same
cluster, whereas the DPM does not incorporate this restriction, and therefore can produce labelings
containing refractory period violations. Though only a relatively small number of such mistakes
are made by the DPM algorithm, these effects are likely to become larger in longer and/or noisier
recordings, or when more neurons are present.
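The RPV figures reported in Table 1 can be computed directly from a labeling; a sketch follows. The refractory period length is not restated in this section, so the 2 ms default below is only a placeholder.

```python
import numpy as np

def refractory_violations(spike_times, labels, refractory=2e-3):
    """Count pairs of consecutive spikes assigned to the same cluster that
    are closer together than the refractory period (in seconds)."""
    rpv = 0
    for c in np.unique(labels):
        t = np.sort(spike_times[labels == c])
        rpv += int(np.sum(np.diff(t) < refractory))
    return rpv
```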
For some values of α the GPUDPM algorithm produced different results, showing either a large
number of false positives or a large number of false negatives. In the former case the algorithm
incorrectly places the waveforms from the IC channel and the waveform of another neuron in one
cluster; in the latter case the algorithm starts assigning the IC waveforms to a different cluster after
some point in time. This behavior is illustrated for data set 1 and α = 0.01 in Figure 2, and can be
explained by shortcomings of the inference scheme: while in theory the algorithm should be able to
maintain multiple labeling hypotheses throughout the entire time span, the particle filter approach
(especially when the number of particles is small and no specialized resampling scheme, e.g. [14], is
used) in practice often only represents the posterior accurately for the last few time steps.
Figure 2: An alternative 'interpretation' of the data from the left column of Fig. 1 given by the
GPUDPM spike sorter. Here the labels assigned to both the neuron with changing waveshape and
one of the neurons with stationary waveshape change approximately half-way through the recording.
Although it is difficult to see because the data set must be significantly downsampled for display
purposes, there is a 'noise event' at the point in time where the labels switch. A feature of the DDP
is that it assigns posterior mass to both of these alternative interpretations of the data. While for this
data set we know this labeling to be wrong because we know the ground truth, in other recordings
such an 'injection of noise' could, for instance, signal a shift in electrode position requiring similar
rapid births and deaths of clusters.
5 Discussion
We have demonstrated that spike sorting using time-varying Dirichlet process mixtures in general,
and more specifically our spike sorting specialization of the GPUDPM, produce promising results.
With such a spike sorting approach we, within a single model, are able to account for action potential
waveform drift, refractory period violations, and neuron appearance and disappearance from a
recording. Previously no single model addressed all of these simultaneously, requiring solutions in
the form of ad hoc combinations of strategies and algorithms that produce spike sorting results that
were potentially difficult to characterize. Our model-based approach makes it easy to explicitly state
modeling assumptions and produces results that are easy to characterize. Also, more complex or
application specific models of the interspike interval distribution and/or the data likelihood can easily
be incorporated into the model. The performance of the model on real data suggests that a more
complete characterization of its performance is warranted. Directions for further research include the
development of a more efficient sequential inference scheme or a hybrid sequential/Gibbs sampler
scheme that allows propagation of interspike interval information backwards in time. Parametric
models for the interspike interval density for each neuron whose parameters are inferred from the
data, which can improve spike sorting results [15], can also be incorporated into the model. Finally,
priors may be placed on some of the parameters in order to make the algorithm more robust
and easily applicable to new data.
Acknowledgments
This work was supported by the Gatsby Charitable Foundation and the PASCAL Network of Excellence.
References
[1] M. S. Lewicki. A review of methods for spike sorting: the detection and classification of neural action
potentials. Network: Computation in Neural Systems, 9(4):53–78, 1998.
[2] M. Sahani. Latent variable models for neural data analysis. PhD thesis, California Institute of Technology,
Pasadena, California, 1999.
[3] D. P. Nguyen, L. M. Frank, and E. N. Brown. An application of reversible-jump Markov chain Monte
Carlo to spike classification of multi-unit extracellular recordings. Network, 14(1):61–82, 2003.
[4] D. Görür, C. E. Rasmussen, A. S. Tolias, F. Sinz, and N. K. Logothetis. Modeling spikes with mixtures of
factor analyzers. In Proceedings of the DAGM Symposium, pages 391–398. Springer, 2004.
[5] F. Wood, S. Goldwater, and M. J. Black. A non-parametric Bayesian approach to spike sorting. In
Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology
Society, pages 1165–1168, 2006.
[6] F. Wood and M. J. Black. A nonparametric Bayesian alternative to spike sorting. Journal of Neuroscience
Methods, 173:1–12, 2008.
[7] F. Caron. Inférence Bayésienne pour la détermination et la sélection de modèles stochastiques. PhD thesis,
École Centrale de Lille and Université des Sciences et Technologies de Lille, Lille, France, 2006.
[8] F. Caron, M. Davy, and A. Doucet. Generalized Pólya urn for time-varying Dirichlet process mixtures.
In 23rd Conference on Uncertainty in Artificial Intelligence (UAI 2007), Vancouver, Canada, July 2007.
[9] A. Bar-Hillel, A. Spiro, and E. Stark. Spike sorting: Bayesian clustering of non-stationary data. Journal of
Neuroscience Methods, 157(2):303–316, 2006.
[10] G. Santhanam, M. D. Linderman, V. Gilja, A. Afshar, S. I. Ryu, T. H. Meng, and K. V. Shenoy. HermesB:
A continuous neural recording system for freely behaving primates. IEEE Transactions on Biomedical
Engineering, 54(11):2037–2050, 2007.
[11] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equations of state
calculations by fast computing machines. Journal of Chemical Physics, 21:1087–1092, 1953.
[12] D. A. Henze, Z. Borhegyi, J. Csicsvari, A. Mamiya, K. D. Harris, and G. Buzsáki. Intracellular features
predicted by extracellular recordings in the hippocampus in vivo. Journal of Neurophysiology, 84(1):390–
400, 2000.
[13] K. D. Harris, D. A. Henze, J. Csicsvari, H. Hirase, and G. Buzsáki. Accuracy of tetrode spike separation as
determined by simultaneous intracellular and extracellular measurements. Journal of Neurophysiology,
84(1):401–414, 2000.
[14] P. Fearnhead. Particle filters for mixture models with an unknown number of components. Statistics and
Computing, 14:11–21, 2004.
[15] C. Pouzat, M. Delescluse, P. Viot, and J. Diebolt. Improved spike-sorting by modeling firing statistics
and burst-dependent spike amplitude attenuation: A Markov chain Monte Carlo approach. Journal of
Neurophysiology, 91(6):2910–2928, 2004.
2,876 | 3,607 | Kernel-ARMA for Hand Tracking and
Brain-Machine Interfacing During 3D Motor Control
Lavi Shpigelman¹, Hagai Lalazar², and Eilon Vaadia³
Interdisciplinary Center for Neural Computation
The Hebrew University of Jerusalem, Israel
¹[email protected], ²[email protected], ³[email protected]
Abstract
Using machine learning algorithms to decode intended behavior from neural activity serves a dual purpose. First, these tools allow patients to interact with
their environment through a Brain-Machine Interface (BMI). Second, analyzing
the characteristics of such methods can reveal the relative significance of various
features of neural activity, task stimuli, and behavior. In this study we adapted, implemented and tested a machine learning method called Kernel Auto-Regressive
Moving Average (KARMA), for the task of inferring movements from neural activity in primary motor cortex. Our version of this algorithm is used in an online
learning setting and is updated after a sequence of inferred movements is completed. We first used it to track real hand movements executed by a monkey in
a standard 3D reaching task. We then applied it in a closed-loop BMI setting to
infer intended movement, while the monkey's arms were comfortably restrained,
thus performing the task using the BMI alone. KARMA is a recurrent method
that learns a nonlinear model of output dynamics. It uses similarity functions
(termed kernels) to compare between inputs. These kernels can be structured to
incorporate domain knowledge into the method. We compare KARMA to various
state-of-the-art methods by evaluating tracking performance and present results
from the KARMA based BMI experiments.
1 Introduction
Performing a behavioral action such as picking up a sandwich and bringing it to one's mouth is a
motor control task achieved easily every day by millions of people. This simple action, however,
is impossible for many patients with motor deficits. In the future, patients with enough cortical
activity remaining may benefit from Brain Machine Interfaces that will restore motor control with
agility, precision, and the degrees of freedom comparable to natural movements. Such high quality
BMI's are not yet available. The BMI framework involves recording neural activity, typically using
chronically implanted electrodes, which is fed in real-time to a decoding algorithm. Such algorithms
attempt to infer the subject's intended behavior. The algorithm's predictions can be used to artificially control an end-effector: a cursor on a screen, a prosthetic arm, a wheelchair, or the subject's
own limbs by stimulation of their muscles. This study focuses on the algorithmic component.
Motor control is a dynamic process involving many feedback loops, relevant time frames, and constraints of the body and neural processing. Neural activity in primary motor cortex (MI) is part of
this process. An early approach at decoding movement from MI activity for BMI (see [1]) was rather
simplistic. Instantaneous velocity of the cursor, across a set of movements, was linearly regressed
against neuronal spike rates. This algorithm (known as the Population Vector Algorithm) is equivalent to modelling each neuron as a cosine function of movement velocity. This method is still used
today for BMI's [2], and has become the standard model in many studies of encoding and learning in
MI. Our understanding of motor cortex has progressed, and many other factors have been shown to
correlate with neuronal activity, but are typically overlooked in modeling. For example, MI activity
has been shown to encode arm posture [3], the dynamic aspects of the movement (such as current
acceleration, or interaction forces) and the interactions between neurons and their dynamics [4].
State-of-the-art movement decoding methods typically involve improved modeling of behavior, neural activity, and the relations between them. For example, Kalman filtering (see [5]) has been used to
model the system state as being comprised of current hand position, velocity and acceleration. Thus,
the hand movement is assumed to have roughly constant acceleration (with added Gaussian noise
and, consequently, minimal jerk) and the neural activity is assumed to be a linear function of the
hand state (with added Gaussian noise). Particle filtering, which relaxes some of the linearity and
Gaussian assumptions, has also been applied in an offline setting (see [6]). Support Vector Regression (SVR) from neural activity to current hand velocity (see [7]) has the advantage of allowing for
extraction of nonlinear information from neuronal interactions, but is missing a movement model.
One of our previous studies ([8]) combines a linear movement model (as in Kalman filtering) with
SVR-based nonlinear regression from neural activity.
KARMA (see [9] for one of its first appearances, or [10] for a more recent one) is a kernelized
version of the ARMA method [11]. It performs ARMA in a kernel-induced feature space (for a
comprehensive explanation of this kernel-trick, see [12]). It estimates the next system state as a
function of both the time window of previous state estimates (the Auto-Regressive part) and the
time window of previous observations (the Moving-Average part). In our application, we extend
its formulation to the Multi-Input Multi-Output (MIMO) case, allowing for better modeling of the
system state. We apply it in an online learning paradigm, and by limiting the number of support
vectors turn it into an adaptive method. This allows for real-time inference, as is necessary for BMI.
In section 2 we explain the lab setup, the problem setting, and introduce our notation. In section 3
we describe KARMA, and our online and adaptive version of it, in detail. We explain the available
modeling options and how they can be used to either improve performance or to test ideas regarding
motor control and neural encoding. Section 4 describes KARMA?s performance in tracking hand
movements and compares it with other state-of-the-art methods. Section 5 presents results from our
BMI experiment using KARMA and, finally, we summarize in section 6
2 Lab setup and problem setting
In our experiments, a monkey performed a visuomotor control task that involved moving a cursor
on a screen from target to target in 3D virtual space. Neuronal activity from a population of single
and multi-units was recorded with a chronically implanted array of 96 electrodes (Cyberkinetics,
Inc.) in MI, and used to calculate spike rates (spike counts in 50ms time bins smoothed by a causal
filter). In hand-control (open-loop) mode, the monkey used its hand (continuously tracked by an
optical motion capture system; Phoenix Tech., Inc.) to move the cursor. Data collected from these
sessions is used here to assess algorithm performance, by using the real arm movements as the
target trajectories. In hand-control, behavior is segmented into sequences of continuous recordings,
separated by time periods during which the monkey's hand is not in view of the tracking device (e.g.
when the monkey stops to scratch itself). Each sequence is made up of target-to-target reaching trials
(some successful and some not). The targets appeared randomly in the 27 corners, centers of faces,
and middle of a virtual cube whose side was 6cm. The target radii were 2.4cm. A successful trial
consisted of one reaching movement that started at rest in one target and ended at rest in the next
target (with required target hold periods during which the cursor must not leave the target). The next
target appears at some point during the hold period of the previous target. Food reward is provided
through a tube after each success. In case of failure, the cursor disappears for 1-2 seconds (failure
period). During this time the monkey's hand is still tracked. In the BMI (closed-loop) setting, the
monkey's hands were comfortably restrained and the KARMA algorithm's inference was used to
move the cursor. Trial failures occurred if the reaching movement took longer than 6 seconds or if
the cursor was not kept inside the target during a 0.25s hold period. During trial-failure periods the
inference was stopped and at the next trial the cursor reappeared where it left off. The trial-failure
period was also used to pass the latest recorded trial sequence to the model-learning process, and to
replace the working model with an updated one, if available.
In this text, $\mathbf{X}$ (capital, bold) is a matrix, $\mathbf{x}$ (bold) is a vector (so is $\mathbf{x}^i$, $\mathbf{x}_t$ or $\mathbf{x}^i_t$) and $x$ is a scalar.
$(\mathbf{x})^T$ signifies transposition. We will use $\mathbf{x}^i_t \in \mathbb{R}^q$ to designate the neural activity (of $q$ cortical
units) at time bin $t$ in behavior sequence $i$, which we refer to as observations. Given a window size
$s$, $\mathbf{x}^i_{t-s+1:t} = \left[(\mathbf{x}^i_{t-s+1})^T, \ldots, (\mathbf{x}^i_t)^T\right]^T \in \mathbb{R}^{sq}$ is an $sq$-long vector comprising a concatenated
window of observations ending at time $t$, of trajectory $i$. $\mathbf{x}^i$ will be short-hand notation meaning
$\mathbf{x}^i_{1:t^i_f}$ where $t^i_f$ is the number of steps in the whole $i$th trajectory. Similarly, $\mathbf{y}^i_t \in \mathbb{R}^d$ are used
to designate cursor position ($d = 3$). We refer to $\mathbf{y}^i$ as the state trajectory. Given a window size
$r$, $\mathbf{y}^i_{t-r:t-1} \in \mathbb{R}^{rd}$ is a concatenated vector of states. Estimated states are differentiated from true
states (or desired states, as will be explained later) by addition of a hat: $\hat{\mathbf{y}}^i_t$. Furthermore (given $s$ and
$r$) we will use $\hat{\mathbf{v}}^i_t = \left[(\hat{\mathbf{y}}^i_{t-r:t-1})^T, (\mathbf{x}^i_{t-s+1:t})^T\right]^T \in \mathbb{R}^{rd+sq}$ to concatenate windows of estimated
states and of neural observations, and $\mathbf{v}^i_t$ to concatenate true (rather than estimated) state values.
In the hand-control setting, we are given a (fully observed) data-set of neural activities and state
trajectories: $\{\mathbf{x}^i, \mathbf{y}^i\}_{i=1}^n$. Our goal is to learn to reconstruct the state trajectories from the neural
activities. We adhere to the online learning paradigm in which at each step, $i$, of the process we are
given one observation sequence, $\mathbf{x}^i$, predict $\hat{\mathbf{y}}^i$, then receive the true $\mathbf{y}^i$ and update the model. This
allows the model to adapt to changes in the input-output relation that occur over time.
In BMI mode, since hand movements were not performed, we do not know the correct cursor movement.
Instead, during learning we use the cursor movement
generated in the BMI and the positions of the targets
that the monkey was instructed to reach to guess a
desired cursor trajectory which is used to replace the
missing true trajectory as feedback. The illustration
on the right shows the BMI setup from an algorithmic
point of view.
3 KARMA, modeling options, and online learning

As stated earlier, KARMA is a kernelized ARMA. In ARMA: $\mathbf{y}^i_t = \sum_{k=1}^{r} A_k \mathbf{y}^i_{t-k} + \sum_{l=1}^{s} B_l \mathbf{x}^i_{t-l+1} + \mathbf{e}^i_t$, where $\{A_k\}_{k=1}^{r}$ and $\{B_l\}_{l=1}^{s}$ are the respective Auto-Regressive (AR) and
Moving-Average (MA) parameters and $\mathbf{e}^i_t$ are residual error terms. Given these model parameters
and initial state values, the rest of the state trajectory can be estimated from the observations by
recursive application, replacing true state values with the estimated ones. Thus, ARMA inference is
essentially application of a linear (MIMO) IIR filter. Defining $W = [A_r, \ldots, A_1, B_s, \ldots, B_1]$, the
next state estimate is simply $\hat{\mathbf{y}}^i_t = W \hat{\mathbf{v}}^i_t$ (see notation section).
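To make the recursion concrete, the following is a minimal sketch (not the authors' code) of MIMO ARMA inference run as an IIR filter; the matrix shapes and the seeding of the first r state estimates are assumptions made for illustration.

```python
import numpy as np

def arma_infer(x, A, B, y_init):
    """Recursive MIMO ARMA inference: y_t = sum_k A_k y_{t-k} + sum_l B_l x_{t-l+1},
    with previous *estimates* fed back in place of true states.
    A: list of r (d x d) matrices, B: list of s (d x q) matrices,
    y_init: list of r seed state estimates, each of shape (d,)."""
    r, s = len(A), len(B)
    y = list(y_init)
    for t in range(r, len(x)):
        yt = sum(A[k] @ y[t - 1 - k] for k in range(r))               # AR part
        yt = yt + sum(B[l] @ x[t - l] for l in range(min(s, t + 1)))  # MA part
        y.append(yt)
    return np.stack(y)
```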
Kernelizing ARMA involves application of the kernel trick. A kernel function $k(\mathbf{v}_1, \mathbf{v}_2): \mathbb{R}^{rd+sq} \times \mathbb{R}^{rd+sq} \to \mathbb{R}$ is introduced, which, conceptually, can be viewed as a dot product of
feature vectors: $k(\mathbf{v}_1, \mathbf{v}_2) = \phi^T(\mathbf{v}_1)\,\phi(\mathbf{v}_2)$ where the features are possibly complicated functions
of both states and (neural) observations. Inference takes the form $\hat{\mathbf{y}}^i_t = \sum_{j,\tau} \boldsymbol{\alpha}_{j\tau}\, k(\hat{\mathbf{v}}^i_t, \mathbf{v}^j_\tau)$ where
$\boldsymbol{\alpha}_{j\tau} \in \mathbb{R}^d$ are learned weight vectors and $\mathbf{v}^j_\tau$ are examples from a training set, known as the support
set. Conceptually, KARMA inference can be viewed as $\hat{\mathbf{y}}^i_t = W_\phi\, \phi(\hat{\mathbf{v}}^i_t)$ where, as compared
with ARMA, $\hat{\mathbf{v}}^i_t$ is replaced by its feature vector, $W$ is replaced by $W_\phi = \sum_{j,\tau} \boldsymbol{\alpha}_{j\tau}\, \phi^T(\mathbf{v}^j_\tau)$ and
each recursive step of KARMA is linear regression in the feature space of observations + states. The
weights $\boldsymbol{\alpha}_{j\tau}$ are learned so as to solve the following optimization problem (presented in its primal
form): $\arg\min_{W_\phi} \|W_\phi\|^2 + c \sum_{i,t,k} \left| y^i_{tk} - \left(W_\phi\, \phi(\mathbf{v}^i_t)\right)_k \right|_\epsilon^2$, where $\|W\|^2 = \sum_{a,b} (W)^2_{ab}$ is
the Frobenius matrix norm, the sum in the second term is over all trials, times and state dimensions
of the examples in the training set, $|v|_\epsilon = \max\{0, |v| - \epsilon\}$ is the $\epsilon$-insensitive absolute error and $c$ is a
constant that determines the relative trade-off between the first (regularization) term and the second
(error) term. Note that during learning, the states are estimated using the true / desired previous state
values as input instead of the estimated ones (contrary to what is done during inference). 'Luckily',
this optimization problem reduces to standard SVR where $\mathbf{x}^i_t$ is replaced with $\mathbf{v}^i_t$. This replacement
can be done as a preprocessing step in learning and a standard SVR solver can then be used to find
the weights. Inference would require plugging in the previously estimated state values as part of the
inputs between iterative calls to SVR inference.
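A hedged sketch of the resulting inference loop follows; the support set, weights, and kernel function are placeholders, and the real system interleaves this loop with SVR learning.

```python
import numpy as np

def karma_infer(x, support_v, alpha, kernel, r, s, y_init):
    """KARMA inference sketch. support_v: (m, r*d + s*q) support vectors,
    alpha: (m, d) learned weights, kernel: function of two vectors.
    Assumes r >= s - 1 so observation windows are always available."""
    y = list(y_init)                                    # r seed estimates, each (d,)
    for t in range(r, len(x)):
        v = np.concatenate([np.ravel(y[t - r:t]),       # window of state estimates
                            np.ravel(x[t - s + 1:t + 1])])  # window of observations
        k = np.array([kernel(v, vj) for vj in support_v])
        y.append(alpha.T @ k)                           # kernel-expansion regression
    return np.stack(y)
```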
Application of KARMA to a specific domain entails setting of some key hyper-parameters that
may have drastic effects on performance. The relatively simple ones are the window sizes ($r$ and
$s$) and the trade-off parameter $c$. Beyond the necessary selection of the cursor trajectories as the states,
augmenting state dimensions (whose values are known at training and inferred during model testing)
can be added in order to make the model use them as explicit features. This idea was tried in our
hand tracking experiments using features such as absolute velocity and current trial state (reaching
target, holding at target and trial-failure time). But since results did not improve significantly, we
discontinued this line of research. The kernel function and its parameters must also be chosen.
Note that the kernel in this algorithm is over structured data, which opens the door to a plethora
of choices. Depending on one's view this can be seen as an optimization curse or as a modeling
blessing. It obviously complicates the search for effective solutions but it allows one to introduce domain
knowledge (or assumptions) into the problem. It can also be used as a heuristic for testing the relative
contribution of the assumptions behind the modeling choices. For example, by choosing r = 0 the
algorithm reduces to SVR and the state model (and its dynamics) are ignored. By selecting a kernel
which is a linear sum of two kernels, one for states and one for observations, the user assumes that
states and observations have no 'synergy' (i.e. each series can be read without taking the other into
account). This is because summing of kernels is equivalent to calculating the features on their inputs
separately and then concatenating the feature vectors. Selecting linear kernels reduces KARMA to
ARMA (using its regularized loss function).
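As an illustration, a sum-of-kernels construction of the kind described here can be written as follows; the split index and bandwidth values are hypothetical parameters, not values from the paper.

```python
import numpy as np

def sum_of_gaussians(v1, v2, split, gamma_y, gamma_x):
    """Sum of two Gaussian kernels, one on the state window (first `split`
    entries of v) and one on the spike-rate window. Summing kernels corresponds
    to computing the two feature maps separately and concatenating them, i.e.
    no multiplicative 'synergy' between states and observations is modeled."""
    dy = v1[:split] - v2[:split]
    dx = v1[split:] - v2[split:]
    return np.exp(-gamma_y * (dy @ dy)) + np.exp(-gamma_x * (dx @ dx))
```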
In online learning, one may change the learned model between consecutive inferences of (whole)
time series. At learning step $k$, all of $\{\mathbf{x}^i, \mathbf{y}^i\}_{i=1}^{k}$ are available for training. A naive solution would
be to re-learn the model at every step, however, this would not be the best solution if one believes
that the source of the input-output relation is changing (for example, in our BMI, cortical units may
change their response properties, or units may appear or disappear during a recording session). Also,
it may not be feasible to store all the sequences, or learning may take too long (opening up a delay
between data acquisition until a new model is ready). If the resulting model has too many support
vectors, too much time is required for each inference step (which is less than 50ms in our BMI
setup). We deal with all the issues above by limiting the number of examples (vti ) that are kept in
memory (to 5000 in hand-control tracking and 3000 for real-time use in the BMI). At each online
learning iteration, the latest data is added to the pool of examples one example at a time, and if
the limit has been reached another example is selected at random (uniformly over the data-set) and
thrown out. This scheme gives more mass to recent observations while allowing for a long tail of
older observations. For a 3000 sized database and examples coming in at the rate of one per 50ms,
the cache is filled after the first 150 seconds. Afterwards, the half life (the time required for an
example to have a 50% chance of being thrown out) of an example is approximately 104 seconds,
or conversely, at each point, approx. 63% of the examples in the database are from the last 150
seconds and the rest are earlier ones. This scheme keeps the inference time to a constant and seems
reasonable in terms of rate of adaptation. We chose 5000 for the tracking (hand-control) experiments
since in those experiments there is no real-time inference constraint and the performance improves
a bit (suggesting that the 3000 size is not optimal in terms of inference quality). The similarity
between consecutive examples is rather high as they share large parts of their time windows (when
ARMA parameters r or s are large). Throwing away examples at random has a desired effect of
lessening the dependency between remaining examples.
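The example-cache policy described above amounts to a few lines; this sketch (the class and method names are ours) keeps the inference cost constant while biasing the pool toward recent data.

```python
import random

class SupportCache:
    """Fixed-size pool of (v, y) training examples for online KARMA updates.
    New examples are always added; once the limit is reached, a uniformly
    random stored example is evicted, yielding the half-life behavior
    described in the text."""
    def __init__(self, limit=3000):
        self.limit = limit
        self.examples = []

    def add(self, v, y):
        if len(self.examples) >= self.limit:
            self.examples.pop(random.randrange(len(self.examples)))
        self.examples.append((v, y))
```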
4 Open-loop hand tracking testing
To test various parametrizations of KARMA and to compare its performance to other methods we
used data from 11 hand-control recording days. These sessions vary in length from between 80
to 120 minutes of relatively continuous performance on the part of the monkey. Success rates in
this task were at the 65-85% range. Cortical units were sorted in an automated manner every day
with additional experimenter tuning. The number of very well-isolated single units ranged between
21-41. The remaining units consisted of 140-150 medium quality and multi-units, which the sorting
software often split into more than one channel.
Most of the different algorithms that are compared here have free hyper-parameters that need to be
set (such as a Gaussian kernel width for spike rates, the maximal informative time window of neural
activities, s and the c trade-off parameter). We had a rough estimate for some of these from previous
experiments using similar data (see [8]). To fine-tune these parameters, a brute-force grid search was
performed on data from one of the 11 sessions in a (batch) 5-fold cross validation scheme. Those
parameters were then kept fixed.
Earlier experiments showed the Gaussian kernel to be a good candidate for comparing neural spike
rate vectors. It can also be calculated quickly, which is important for the BMI real-time constraint.
We tested several variations of structured kernels on neuro-movement inputs. These variations consisted of all combinations of summing or multiplying Gaussian or linear kernels for the spike rates
and movement states. Taking a sum of Gaussians or their product produced the best results (with
no significant difference between these two options). We chose the sum (having the conceptual
inference form: $\hat{\mathbf{y}}^i_t = W_\phi\,\phi(\hat{\mathbf{y}}^i_{t-r:t-1}) + W_\psi\,\psi(\mathbf{x}^i_{t-s:t})$ where $\phi, \psi$ are the infinite feature vectors
of the Gaussian kernel). The next best result was achieved by summing a Gaussian spike rate kernel and a linear movement kernel (which we will call lin-y-KARMA). The sum of linear kernels
produces ARMA (which we also tested). The test results that are presented in this study are only
for the remaining 10 recording sessions. The main performance measure that we use here is the
(Pearson) correlation coefficient (CC) between true and estimated values of positions (in each of the
3 movement dimensions). To gauge changes in prediction quality over time we use CC in a running
window of sequences (window size is chosen so as to decrease the noise in the CC estimate). In
other cases, the CC for a whole data-set is computed.
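For concreteness, the running-window CC measure can be computed as in the sketch below; the default window size and array layout are illustrative assumptions.

```python
import numpy as np

def running_cc(y_true, y_pred, window=20):
    """Pearson CC per movement dimension over a running window of sequences.
    y_true, y_pred: lists of (T_i, 3) arrays, one entry per behavior sequence.
    Returns an (n_sequences, 3) array of correlation coefficients."""
    ccs = []
    for i in range(len(y_true)):
        lo = max(0, i - window + 1)
        t = np.concatenate(y_true[lo:i + 1])
        p = np.concatenate(y_pred[lo:i + 1])
        ccs.append([np.corrcoef(t[:, d], p[:, d])[0, 1] for d in range(t.shape[1])])
    return np.array(ccs)
```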
To illustrate KARMA's performance we provide a movie (see video 1 in supplementary material)
showing the true hand position (in black) and KARMA's tracking estimate (in blue) during a continuous 150 second sequence of target-to-target reach movements. This clip is of a model that was
learned in an online manner using the previous (180) sequences, using a support vector cache of
3000 (as described in section 3). The initial position of the cursor is not given to the algorithm.
Instead the average start positions in previous sequences is given as the starting point. The CC in
the time window of the previous 40 sequences (0.9, 0.92 and 0.96 for the 3 dimensions) is given to
provide a feeling of what such CC's look like. Similarly, Figure 1.B shows tracking and true position
values for an 80 second segment towards the end of a different session.
KARMA achieves good performance and it does so with relatively small amounts of data. Figure
1.A shows tracking quality in terms of a running window of CC's over time. CC's for the first
sequences are calculated on predictions made up to those times. While these CC's are more noisy it
is clear that a useful model is reached within the first 3 minutes (CC's all higher than 0.6) and close
to peak performance is already available within the first 10 minutes.
Figure 1: KARMA performance: (A) Correlation coefficients in a running window of 20 sequences (or less
at the session start) for a whole 95 minute session (mean CC window size is 9.7 minutes). (B) True (gray)
vs. tracked positions in an 80 second segment at minute 90 of the session. (C) Effect of losing recording
electrodes: tracking was performed over a full recording session using randomly chosen subsets of electrodes.
For each selected number of electrodes (from the full 92 available down to 5 electrodes) 50 repetitions were
run. CC's were calculated per run over the last two thirds of the session (to avoid effects of initial performance
transients) and averaged over the 3 movement dimensions. Their distributions (over repetitions) are shown
in terms of the median CC (red center line), quartile values (skirt containing 50% of the CC's) and extremal
values (whiskers) for each number of electrodes. (D) Effect of delay time between training data and test data:
For the session shown in (A), marked 'day 1', and for another session (marked 'day 2'), hand movement in 20
minute time windows towards the session ends was reconstructed in an online manner but instead of using
the same sequences as training data (with a 1 step delay), earlier sequences were used. Figure (A) shows the
time window that corresponded to opening a maximal time difference of 60 minutes between the last inferred
sequence (at minute 90) and the last learned sequence (at minute 30). CC's for the test windows (averaged over
movement dimensions) are shown as a function of delay time for the two days.
Figure 2: Algorithm comparisons: 10 hand-movement sessions were each divided into 3 equally long blocks
of 25-35 minutes (the last few minutes were discarded since during this time the monkey often stopped paying
attention to the task) to create 30 data-sets. The following algorithms were run on each data-set in an online
manner: KARMA, lin-y-KARMA, ARMA and SVR. All four were implemented as versions of the KARMA
by varying its parameters. In all cases a support vector cache of 5000 was enforced as described in section 4.
A Kalman Filter was also implemented so as to allow for a window of observations as input and a window of
movement positions as the state (this version was previously shown to outperform the standard Kalman Filter
which has r = s = 1). It was also learned in an online manner, replacing inverses with pseudo-inverses where
necessary to avoid non-invertible matrices when data-sets are too small. Results are shown as scatter plots of
CC values (30 data-sets and 3 movement dimensions produce 90 points per scatter plot). Each point compares
KARMA to another algorithm in a specific data-set and movement dimension pair. Points above the diagonal
mean a higher score for KARMA. The graph on the left shows win-scores for each pair of algorithms. Win-score is defined as the percentage of times one algorithm got a higher CC than another. Edge direction points
to the loser. The movement reconstruction on the right (horizontal position only) shows KARMA vs. SVR in a
sample 18 second window.
Probably the highest source of variability in BMI performance across subjects is the location and
number of electrodes in the brain. To test how performance with KARMA would degrade if electrodes were lost we simulated an electrode dropping experiment (see figure 1.C). Let's consider a
CC of 0.7 as a minimal reasonable performance quality. Let's also assume that with 50 repetitions,
minimal values roughly represent a worst case scenario in terms of mishaps that do not involve
movement of the whole array. Then it seems that we can get by easily with only a third of the
array (28 electrodes) operational. In terms of medians (average bad luck) we could do with less.
Maximal values are relevant in case we need to choose the good electrodes. This may be relevant
in situations involving implanted chips that extract and wirelessly transmit neural activity and may
have constraints in terms of energy expenditure or bandwidth.
Most BMI experiments (e.g. [2, 13] with the exception of [1]) use fixed models that are learned
once at the beginning of a session. Our version of KARMA is adaptive. In order to check the
importance of adapting to changes in the recorded neural activity we ran an experiment in which
variable delays were opened between the inference times and the latest available data for learning,
i.e. after inference of sequence $\mathbf{y}^i$ from $\mathbf{x}^i$, the sequence pair $(\mathbf{x}^{i-k}, \mathbf{y}^{i-k})$ where $k > 0$ was first
made available. Figure 1.D shows a degradation in performance during the test periods as the delay
grows for two recording sessions. This suggests that adaptability of the algorithm is important for
keeping high performance levels. There are two possible reasons for the observed degradation. One
is changes in the neural activity within the brain. The other is changes in the process that extracts
the neural activity in the BMI (e.g. electrode movements). Differentiating between the two options
is a subject of future work. In the BMI setting, feedback is involved. The subject might be able to
effortlessly modulate his neural activity and keep it in good fit with the algorithm. In section 5 we
address this issue by running BMI sessions in which the model was frozen.
Comparison of KARMA to other methods is shown in figure 2. It is clear that KARMA performs
much better than ARMA and the Kalman Filter, suggesting that a nonlinear interpretation of neural
activity is helpful. While KARMA is statistically significantly better than SVR, the differences in
CC values are not very big (note the scaling difference of the scatter plots). Looking at the movement
reconstruction comparison it seems that SVR has a good average estimate of the current position,
however, missing a movement model (SVR has the form $\hat{\mathbf{y}}^i_t = W_\psi\,\psi(\mathbf{x}^i_{t-s:t})$) it fluctuates rapidly
around the true value. This fluctuation may not be very apparent in the CC values, however it would
make a BMI much more difficult to control. Lin-y-KARMA uses a linear movement model (and
has the form: $\hat{\mathbf{y}}^i_t = A\,\hat{\mathbf{y}}^i_{t-r:t-1} + W_\psi\,\psi(\mathbf{x}^i_{t-s:t})$). Its performance is inferior to full KARMA.
Having a nonlinear movement model means that different areas of the state-space get treated in
locally relevant fashion. This might explain why full KARMA outperforms. Note that the differences
between lin-y-KARMA and SVR are not very significant (win-score of only 65.6%). Comparison
to the Population Vector Algorithm was also done, however the PVA achieved especially bad results
for our long sequences since it accumulates errors without any decay (this is less of a problem in
BMI since the subject can correct accumulated errors). We therefore omit showing them here.
Figure 3: All graphs show success rates. The light, noisy plots are success rates in consecutive bins of 10 trials
while the smoother plots are the result of running a smoothing filter on the noisy plots. Mixing mode was used
on day 1 and part of day 2. Afterwards we switched to full BMI (mixing factor 100%). Hands were allowed
to move freely until day 4. On day 4 both hands were comfortably restrained for the first time and though
performance levels dropped, the monkey did not attempt to move its hands. On day 5 an attempt to freeze the
model was made. When performance dropped and the monkey became agitated we restarted the BMI from
scratch and performance improved rapidly. Day 6 consists of full BMI but with the targets not as far apart as
with hand-control. This makes the task a bit easier and allowed for higher success rates. On days 7 and 8 a
BMI block was interleaved with hand control blocks. Only the BMI blocks are shown in the top graph. The full
three blocks of day 8 are shown in the bottom right graph. Bottom left graph shows a recording session during
which the model was frozen.
5 BMI experiment
The BMI experiment was conducted after approximately four months of neural recordings during
which the monkey learned and performed the hand-control task. Figure 3 shows a trace of the
first days of the BMI task. A movie showing footage from these days is in the supplementary
material (video 2). To make the transition to BMI smooth, the first 1.5 days consisted of work in
a mixed mode, during which control of the cursor was a linear combination of the hand position
and KARMA's predictions. We did this in order to see if the monkey would accept a noisier control
scheme than it was used to. During the next 1.5 days the cursor was fully controlled by KARMA,
but the monkey kept moving as if it was doing hand-control. i.e. it made movements and corrections
with its hands. On day 4 the monkey's free hand was comfortably restrained. Despite our concerns
that the monkey would stop performing, it seemed impervious to this change, except that control over
the cursor became more difficult. On days 5 and 6 we made some tweaks to the algorithm (tried to
freeze learning and change the way in which correct movement trajectories are generated) and the
task (tried to decrease target size and the distance between targets) which had some effect on task
difficulty and on success rates. On days 8 and 9 we interleaved a BMI block with hand-control
blocks. We saw that performance is better in hand-control than in BMI but not drastically so. In
the following sessions we discontinued all tweaking with the algorithm and we've seen some steady
improvement in performance.
We repeated the freezing of model learning on two days (one of these sessions appears in figure 3).
In all cases where we froze the model, we noticed that the monkey started experiencing difficulty in
controlling the cursor after a period of 10-15 minutes and stopped working completely when this
happened. As stated earlier, in most BMI experiments the subjects interact with fixed models. One
possible explanation for the deterioration is that because our monkey is trained in using an adaptive
model it does not have experience in conforming to such a model's constraints. Instead, the opposite
burden (that of following a drifting source of neural activity) falls on the algorithm's shoulders.
As mentioned in section 2, in BMI mode no hand movements are performed and therefore model
learning is based on our guess of what is the desired cursor trajectory (the monkey's intended
cursor movement). We chose to design the desired trajectory as a time-varying linear combination of
the cursor trajectory that the monkey saw and the target location: $\mathbf{y}^i_t = \left(1 - \frac{t}{t_f}\right)\hat{\mathbf{y}}^i_t + \frac{t}{t_f}\,\bar{\mathbf{y}}^i$ where
$\bar{\mathbf{y}}^i$ is the target location on trial $i$. Note that this trajectory starts at the observed cursor location at
the trial start and ends at the target location (regardless of where the cursor actually was at the end
of the trial).
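This construction is simple to implement; a sketch under the stated definition, with time normalized to the trial length:

```python
import numpy as np

def desired_trajectory(y_seen, target):
    """Desired cursor trajectory for BMI-mode learning: a time-varying blend
    y_t = (1 - t/t_f) * y_seen_t + (t/t_f) * target, starting at the observed
    cursor positions and ending exactly at the target location.
    y_seen: (t_f, 3) observed cursor positions; target: (3,) target location."""
    w = np.linspace(0.0, 1.0, len(y_seen))[:, None]  # blending weight t / t_f
    return (1.0 - w) * y_seen + w * target
```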
6 Summary
This study was motivated by the view that lifting overly simplifying assumptions and integrating
domain knowledge into machine learning algorithms can make significant improvements to BMI
technology. In turn, this technology can be used as a testbed for improved modeling of the interaction
between the brain and environment, especially in visuomotor control. We showed in open-loop hand
tracking experiments that the incorporation of a nonlinear movement model, interpreting the neural
activity as a whole (rather than as the sum of contributions made by single neurons) and allowing the
model to adapt to changes, results in better predictions. The comparison was done against similar
models that lack one or more of these properties. Finally, we showed that this model can be used
successfully in a real-time BMI setting, and that the added mathematical ?complications? result in a
very intuitive and high quality interface.
References
[1] D. M. Taylor, S. I. Helms Tillery, and A. B. Schwartz. Direct cortical control of 3D neuroprosthetic devices. Science, 296:1829–1832, 2002.
[2] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz. Cortical control of a prosthetic arm for self-feeding. Nature
(online), May 2008.
[3] S. Kakei, D. S. Hoffman, and P. L. Strick. Muscle and movement representations in the primary motor cortex. Science, 285:2136–2139,
1999.
[4] E. Vaadia, I. Haalman, M. Abeles, H. Bergman, Y. Prut, H. Slovin, and A. Aertsen. Dynamics of neuronal interactions in monkey cortex
in relation to behavioral events. Nature, 373:515–518, February 1995.
[5] W. Wu, Y. Gao, E. Bienenstock, J. P. Donoghue, and M. J. Black. Bayesian population coding of motor cortical activity using a Kalman
filter. Neural Computation, 18:80–118, 2005.
[6] A. E. Brockwell, A. L. Rojas, and R. E. Kass. Recursive Bayesian decoding of motor cortical signals by particle filtering. J Neurophysiol, 91(4):1899–1907, 2004.
[7] L. Shpigelman, Y. Singer, R. Paz, and E. Vaadia. Spikernels: Predicting hand movements by embedding population spike rates in
inner-product spaces. Neural Computation, 17(3), 2005.
[8] L. Shpigelman, K. Crammer, R. Paz, E. Vaadia, and Y. Singer. A temporal kernel-based model for tracking hand movements from neural
activities. In NIPS. 2005.
[9] P. M. L. Drezet and R. F. Harrison. Support vector machines for system identification. In UKACC, volume 1, pages 688–692, Sep 1998.
[10] M. Martínez-Ramón, J. Luis Rojo-Álvarez, G. Camps-Valls, J. Muñoz-Marí, A. Navia-Vázquez, E. Soria-Olivas, and A. R. Figueiras-Vidal. Support vector machines for nonlinear kernel ARMA system identification. IEEE Trans. Neural Net., 17(6):1617–1622, 2006.
[11] G. E. P. Box and G. M. Jenkins. Time Series Analysis: Forecasting and Control. Prentice Hall PTR, Upper Saddle River, NJ, USA, 1994.
[12] B. Schölkopf and A. J. Smola. Learning with Kernels. The MIT Press, Cambridge, MA, 2002.
[13] L. R. Hochberg, M. D. Serruya, G. M. Friehs, J. A. Mukand, M. Saleh, A. H. Caplan, A. Branner, D. Chen, R. D. Penn, and J. P.
Donoghue. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442:164–171, 2006.
difficult:2 holding:1 trace:1 stated:2 design:1 allowing:4 upper:1 neuron:3 observation:13 discarded:1 agility:1 defining:1 variability:1 situation:1 looking:1 frame:1 shoulder:1 smoothed:1 inferred:3 overlooked:1 introduced:1 pair:3 required:3 lvarez:1 learned:8 testbed:1 nip:1 trans:1 address:1 beyond:1 able:1 appeared:1 summarize:1 max:1 memory:1 explanation:2 belief:1 mouth:1 video:2 event:1 natural:1 force:2 restore:1 regularized:1 treated:1 difficulty:2 residual:1 predicting:1 arm:5 scheme:4 improve:2 older:1 movie:2 technology:2 disappears:1 started:1 ready:1 auto:3 naive:1 extract:2 text:1 understanding:1 relative:3 fully:2 loss:1 whisker:1 mixed:2 filtering:4 validation:1 switched:1 degree:1 slovin:1 share:1 summary:1 last:5 free:2 keeping:1 offline:1 side:1 allow:2 drastically:1 fall:1 face:1 taking:2 differentiating:1 absolute:2 benefit:1 feedback:3 dimension:8 neuroprosthetic:1 cortical:8 evaluating:1 ending:1 calculated:3 transition:1 seemed:1 instructed:1 made:7 adaptive:4 preprocessing:1 feeling:1 far:1 correlate:1 reconstructed:1 synergy:1 keep:2 vti:5 chronically:2 b1:1 assumed:2 summing:3 conceptual:1 xi:7 continuous:3 iterative:1 search:2 helm:1 why:1 learn:2 channel:1 nature:3 operational:1 interact:2 artificially:1 domain:4 did:3 significance:1 main:1 bmi:42 linearly:1 whole:6 noise:3 big:1 allowed:2 karma:39 repeated:1 body:1 neuronal:6 screen:2 fashion:1 precision:1 inferring:1 position:14 explicit:1 concatenating:1 candidate:1 third:2 learns:1 minute:13 down:1 bad:2 xt:2 specific:2 showing:3 decay:1 concern:1 burden:1 importance:1 lifting:1 cursor:24 sorting:1 easier:1 chen:1 simply:1 appearance:1 nez:1 gao:1 saddle:1 tracking:14 scalar:1 restarted:1 determines:1 chance:1 ma:2 mart:1 saleh:1 modulate:1 month:1 rojas:1 goal:1 viewed:2 sized:1 acceleration:3 consequently:1 sorted:1 towards:2 loosing:1 replace:2 yti:4 change:10 feasible:1 infinite:1 except:1 uniformly:1 shpigi:1 degradation:2 called:1 blessing:1 pas:1 exception:1 people:1 support:8 crammer:1 noisier:1 incorporate:1 tested:3 scratch:2 |
Kernelized Sorting
Novi Quadrianto
RSISE, ANU & SML, NICTA
Canberra, ACT, Australia
[email protected]
Le Song
SCS, CMU
Pittsburgh, PA, USA
[email protected]
Alex J. Smola
Yahoo! Research
Santa Clara, CA, USA
[email protected]
Abstract
Object matching is a fundamental operation in data analysis. It typically requires
the definition of a similarity measure between the classes of objects to be matched.
Instead, we develop an approach which is able to perform matching by requiring a
similarity measure only within each of the classes. This is achieved by maximizing
the dependency between matched pairs of observations by means of the Hilbert
Schmidt Independence Criterion. This problem can be cast as one of maximizing
a quadratic assignment problem with special structure and we present a simple
algorithm for finding a locally optimal solution.
1 Introduction
Matching pairs of objects is a fundamental operation of unsupervised learning. For instance, we
might want to match a photo with a textual description of a person, a map with a satellite image,
or a music score with a music performance. In those cases it is desirable to have a compatibility
function which determines how one set may be translated into the other. For many such instances
we may be able to design a compatibility score based on prior knowledge or to observe one based
on the co-occurrence of such objects.
In some cases, however, such a match may not exist or it may not be given to us beforehand. That
is, while we may have a good understanding of two sources of observations, say X and Y, we may
not understand the mapping between the two spaces. For instance, we might have two collections of
documents purportedly covering the same content, written in two different languages. Here it should
be our goal to determine the correspondence between both sets and to identify a mapping between
the two domains. In the following we present a method which is able to perform such matching
without the need of a cross-domain similarity measure.
Our method relies on the fact that one may estimate the dependence between sets of random variables
even without knowing the cross-domain mapping. Various criteria are available. We choose the
Hilbert Schmidt Independence Criterion between two sets and we maximize over the permutation
group to find a good match. As a side-effect we obtain an explicit representation of the covariance.
We show that our method generalizes sorting. When using a different measure of dependence,
namely an approximation of the mutual information, our method is related to an algorithm of [1].
Finally, we give a simple approximation algorithm for kernelized sorting.
1.1 Sorting and Matching
The basic idea underlying our algorithm is simple. Denote by X = {x₁, …, x_m} ⊆ 𝒳 and Y = {y₁, …, y_m} ⊆ 𝒴 two sets of observations between which we would like to find a correspondence. That is, we would like to find some permutation π ∈ Π_m on m terms, that is

$$\Pi_m := \left\{\pi \,\middle|\, \pi \in \{0,1\}^{m \times m} \text{ and } \pi 1_m = 1_m \text{ and } \pi^\top 1_m = 1_m\right\}, \tag{1}$$

such that the pairs Z(π) := (x_i, y_{π(i)}) for 1 ≤ i ≤ m correspond to dependent random variables. Here 1_m ∈ ℝ^m is the vector of all ones. We seek a permutation π such that the mapping x_i → y_{π(i)} and its converse mapping from y to x are simple. Denote by D(Z(π)) a measure of the dependence between x and y. Then we define nonparametric sorting of X and Y as follows:

$$\pi^* := \operatorname{argmax}_{\pi \in \Pi_m} D(Z(\pi)). \tag{2}$$
This paper is concerned with measures of D and approximate algorithms for (2). In particular we
will investigate the Hilbert Schmidt Independence Criterion and the Mutual Information.
2 Hilbert Schmidt Independence Criterion
Let sets of observations X and Y be drawn jointly from some probability distribution Pr_{xy}. The Hilbert Schmidt Independence Criterion (HSIC) [2] measures the dependence between x and y by computing the norm of the cross-covariance operator over the domain 𝒳 × 𝒴 in Hilbert Space. It can be shown, provided the Hilbert Space is universal, that this norm vanishes if and only if x and y are independent. A large value suggests strong dependence with respect to the choice of kernels.

Formally, let F be the Reproducing Kernel Hilbert Space (RKHS) on 𝒳 with associated kernel k : 𝒳 × 𝒳 → ℝ and feature map φ : 𝒳 → F. Let G be the RKHS on 𝒴 with kernel l and feature map ψ. The cross-covariance operator C_xy : G → F is defined by [3] as

$$C_{xy} = \mathbf{E}_{xy}\big[(\phi(x) - \mu_x) \otimes (\psi(y) - \mu_y)\big], \tag{3}$$

where μ_x = E[φ(x)], μ_y = E[ψ(y)], and ⊗ is the tensor product. HSIC, denoted as D, is then defined as the square of the Hilbert-Schmidt norm of C_xy [2] via D(F, G, Pr_{xy}) := ‖C_xy‖²_HS. In terms of kernels HSIC can be expressed as

$$\mathbf{E}_{xx'yy'}[k(x,x')\,l(y,y')] + \mathbf{E}_{xx'}[k(x,x')]\,\mathbf{E}_{yy'}[l(y,y')] - 2\,\mathbf{E}_{xy}\big[\mathbf{E}_{x'}[k(x,x')]\,\mathbf{E}_{y'}[l(y,y')]\big], \tag{4}$$

where E_{xx'yy'} is the expectation over both (x, y) ∼ Pr_{xy} and an additional pair of variables (x', y') ∼ Pr_{xy} drawn independently according to the same law. Given a sample Z = {(x₁, y₁), …, (x_m, y_m)} of size m drawn from Pr_{xy}, an empirical estimate of HSIC is

$$\bar{D}(F, G, Z) = (m-1)^{-2}\,\mathrm{tr}(HKHL) = (m-1)^{-2}\,\mathrm{tr}(\bar{K}\bar{L}), \tag{5}$$

where K, L ∈ ℝ^{m×m} are the kernel matrices for the data and the labels respectively, i.e. K_ij = k(x_i, x_j) and L_ij = l(y_i, y_j). Moreover, H_ij = δ_ij − m⁻¹ centers the data and the labels in feature space. Finally, K̄ := HKH and L̄ := HLH denote the centered versions of K and L respectively. Note that (5) is a biased estimate where the expectations with respect to x, x', y, y' have all been replaced by empirical averages over the set of observations.
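For concreteness, (5) is only a few lines of linear algebra. The sketch below is ours, not the authors' code; it assumes NumPy and precomputed kernel matrices K and L:

import numpy as np

def hsic_biased(K, L):
    # (m-1)^{-2} tr(H K H L) for kernel matrices K, L of matched samples
    m = K.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m   # centering matrix, H_ij = delta_ij - 1/m
    Kc = H @ K @ H                        # centered kernel matrix, K-bar = HKH
    return np.trace(Kc @ L) / (m - 1) ** 2

Since H is idempotent, tr(HKH·HLH) = tr(HKH·L), so centering one of the two matrices suffices.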
2.1 Kernelized Sorting
Previous work used HSIC to measure independence between given random variables [2]. Here we
use it to construct a mapping between X and Y by permuting Y to maximize dependence. There
are several advantages in using HSIC as a dependence criterion. First, HSIC satisfies concentration
of measure conditions [2]. That is, for random draws of observation from Prxy , HSIC provides
values which are very similar. This is desirable, as we want our mapping to be robust to small
changes. Second, HSIC is easy to compute, since only the kernel matrices are required and no
density estimation is needed. The freedom of choosing a kernel allows us to incorporate prior
knowledge into the dependence estimation process. The consequence is that we are able to generate
a family of methods by simply choosing appropriate kernels for X and Y .
Lemma 1 The nonparametric sorting problem is given by π* = argmax_{π∈Π_m} tr(K̄ πᵀ L̄ π).

Proof We only need to establish that Hπ = πH, since the rest follows from the definition of (5). Note that since H is a centering matrix, it has the eigenvalue 0 for the vector of all ones and the eigenvalue 1 for all vectors orthogonal to that. Next note that the vector of all ones is also an eigenvector of any permutation matrix π with π1 = 1. Hence the matrices H and π commute.

Next we show that the objective function is indeed reasonable; for this we need the following inequality due to Polya, Littlewood and Hardy:

Lemma 2 Let a, b ∈ ℝ^m where a is sorted ascendingly. Then aᵀπb is maximized for π = argsort b.

Lemma 3 Let 𝒳 = 𝒴 = ℝ and let k(x, x') = xx' and l(y, y') = yy'. Moreover, assume that x is sorted ascendingly. In this case (5) is maximized by either π = argsort y or by π = argsort −y.

Proof Under the assumptions we have that K̄ = Hxxᵀ H and L̄ = Hyyᵀ H. Hence we may rewrite the objective as ((Hx)ᵀ π (Hy))². This is maximized by sorting Hy ascendingly. Since the centering matrix H only changes the offset but not the order, this is equivalent to sorting y. We have two alternatives, since the objective function is insensitive to sign reversal of y.
This means that sorting is a special case of kernelized sorting, hence the name. In fact, when solving the general problem, it turns out that a projection onto the principal eigenvectors of K̄ and L̄ is a good initialization of an optimization procedure.
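A toy check of Lemma 3 (our sketch, assuming linear kernels on scalars and m small enough that brute force over all permutations is feasible):

import numpy as np
from itertools import permutations

m = 5
x = np.sort(np.random.randn(m))          # x sorted ascendingly; k(x, x') = x x'
y = np.random.randn(m)
H = np.eye(m) - np.ones((m, m)) / m
hx, hy = H @ x, H @ y
best = max(permutations(range(m)), key=lambda p: (hx @ hy[list(p)]) ** 2)
# best coincides with argsort(y) or argsort(-y), as Lemma 3 predicts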
2.2 Diagonal Dominance
In some cases the biased estimate of HSIC as given in (5) leads to very undesirable results, in
particular in the case of document analysis. This is the case since kernel matrices on texts tend to be
diagonally dominant: a document tends to be much more similar to itself than to others. In this case
the O(1/m) bias of (5) is significant. Unfortunately, the minimum variance unbiased estimator [2]
does not have a computationally appealing form. This can be addressed as follows at the expense of
a slightly less efficient estimator with a considerably reduced bias: we replace the expectations (4) by sums where no pairwise summation indices are identical. This leads to the objective function

$$\frac{1}{m(m-1)}\sum_{i \neq j} K_{ij} L_{ij} \;+\; \frac{1}{m^2(m-1)^2}\sum_{i \neq j,\, u \neq v} K_{ij} L_{uv} \;-\; \frac{2}{m(m-1)^2}\sum_{i,\, j \neq i,\, v \neq i} K_{ij} L_{iv}. \tag{6}$$
This estimator still has a small degree of bias, albeit significantly reduced since it only arises from the product of expectations over (potentially) independent random variables. Using the shorthand K̃_ij = K_ij(1 − δ_ij) and L̃_ij = L_ij(1 − δ_ij) for kernel matrices where the main diagonal terms have been removed, we arrive at the expression (m−1)⁻² tr(H L̃ H K̃). The advantage of this term is that it can be used as a drop-in replacement in Lemma 1.
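A direct sketch of (6), evaluating the three sums in O(m²) via row and grand sums; the function name and variables are ours:

import numpy as np

def hsic_lowbias(K, L):
    m = K.shape[0]
    Kt = K - np.diag(np.diag(K))   # K-tilde: main diagonal removed
    Lt = L - np.diag(np.diag(L))
    t1 = np.sum(Kt * Lt) / (m * (m - 1))                    # sum_{i!=j} K_ij L_ij
    t2 = Kt.sum() * Lt.sum() / (m ** 2 * (m - 1) ** 2)      # sum_{i!=j, u!=v} K_ij L_uv
    t3 = 2.0 * (Kt.sum(axis=1) @ Lt.sum(axis=1)) / (m * (m - 1) ** 2)
    return t1 + t2 - t3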
2.3 Mutual Information
An alternative, natural means of studying the dependence between random variables is to compute the mutual information between the random variables x_i and y_{π(i)}. In general, this is difficult, since it requires density estimation. However, if we assume that x and y are jointly normal in the Reproducing Kernel Hilbert Spaces spanned by the kernels k, l and k·l, we can devise an effective approximation of the mutual information. Our reasoning relies on the fact that the differential entropy of a normal distribution with covariance Σ is given by

$$h(p) = \tfrac{1}{2}\log|\Sigma| + \text{constant}. \tag{7}$$
Since the mutual information between random variables X and Y is I(X, Y) = h(X) + h(Y) − h(X, Y), we will obtain maximum mutual information by minimizing the joint entropy h(X, Y). Using the Gaussian upper bound on the joint entropy, we can maximize a lower bound on the mutual information by minimizing the joint entropy J(π) := h(X, Y). By defining a joint kernel on 𝒳 × 𝒴 via k((x, y), (x', y')) = k(x, x') l(y, y'), we arrive at the optimization problem

$$\operatorname{argmin}_{\pi \in \Pi_m} \log|H J(\pi) H| \quad\text{where}\quad J_{ij} = K_{ij} L_{\pi(i),\pi(j)}. \tag{8}$$

Note that this is related to the optimization criterion proposed by Jebara [1] in the context of sorting via minimum volume PCA. What we have obtained here is an alternative derivation of Jebara's criterion based on information theoretic considerations. The main difference is that [1] uses the setting to align bags of observations by optimizing log|H J(π) H| with respect to re-ordering within each of the bags. We will discuss multi-variable alignment at a later stage.
In terms of computation (8) is considerably more expensive to optimize. As we shall see, for the
optimization in Lemma 1 a simple iteration over linear assignment problems will lead to desirable
solutions, whereas in (8) even computing derivatives is a computational challenge.
3 Optimization
DC Programming To find a local maximum of the matching problem we may take recourse to a well-known algorithm, namely DC Programming [4], which in machine learning is also known as the Concave Convex Procedure [5]. It works as follows: for a given function f(x) = g(x) − h(x), where g is convex and h is concave, a lower bound can be found by

$$f(x) \geq g(x_0) + \langle x - x_0, \partial_x g(x_0) \rangle - h(x). \tag{9}$$
This lower bound is convex and it can be maximized effectively over a convex domain. Subsequently
one finds a new location x₀ and the entire procedure is repeated.
Lemma 4 The function tr(K̄ πᵀ L̄ π) is convex in π.

Proof Since K̄, L̄ ⪰ 0, we may factorize them as K̄ = UᵀU and L̄ = VᵀV, and rewrite the objective function as ‖V π Uᵀ‖², which is clearly a convex quadratic function in π.
Note that the set of feasible permutations π is constrained in a unimodular fashion, that is, the set

$$P_m := \Big\{ M \in \mathbb{R}^{m \times m} \;\text{where}\; M_{ij} \geq 0 \;\text{and}\; \sum_i M_{ij} = 1 \;\text{and}\; \sum_j M_{ij} = 1 \Big\} \tag{10}$$
has only integral vertices, namely admissible permutation matrices. This means that the following
procedure will generate a succession of permutation matrices which will yield a local maximum for
the assignment problem:
$$\pi_{i+1} = (1 - \lambda)\pi_i + \lambda \operatorname{argmax}_{\pi \in P_m} \mathrm{tr}\big(\bar{K} \pi^\top \bar{L} \pi_i\big) \tag{11}$$

Here we may choose λ = 1 in the last step to ensure integrality. This optimization problem is well known as a Linear Assignment Problem and effective solvers exist for it [6].

Lemma 5 The algorithm described in (11) for λ = 1 terminates in a finite number of steps.
We know that the objective function may only increase for each step of (11). Moreover, the solution
set of the linear assignment problem is finite. Hence the algorithm does not cycle.
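The iteration (11) with λ = 1 can be sketched as follows. SciPy's Hungarian solver stands in for the dedicated LAP solver of [6]; function and variable names are ours, and the linearization uses tr(K̄ πᵀ L̄ π_i) = Σ_{a,b} π_{ab} (L̄ π_i K̄)_{ab}:

import numpy as np
from scipy.optimize import linear_sum_assignment

def kernelized_sorting(K, L, n_iter=100, tol=1e-6):
    m = K.shape[0]
    H = np.eye(m) - np.ones((m, m)) / m
    Kc, Lc = H @ K @ H, H @ L @ H
    pi = np.eye(m)                   # or the PCA-based initialization described below
    best = -np.inf
    for _ in range(n_iter):
        gain = Lc @ pi @ Kc          # coefficients of the linearized objective
        rows, cols = linear_sum_assignment(-gain)   # maximize -> negate the cost
        pi_new = np.zeros((m, m))
        pi_new[rows, cols] = 1.0
        obj = np.trace(Kc @ pi_new.T @ Lc @ pi_new)
        if obj <= best + tol:        # Lemma 5: the objective can only increase
            break
        pi, best = pi_new, obj
    return pi                        # nonzero entries of pi give the matching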
Nonconvex Maximization When using the bias-corrected version of the objective function the problem is no longer guaranteed to be convex. In this case we need to add a line-search procedure along λ which maximizes

$$\mathrm{tr}\,H\tilde{K}H\big[(1-\lambda)\pi_i + \lambda\bar{\pi}_i\big]^\top H\tilde{L}H\big[(1-\lambda)\pi_i + \lambda\bar{\pi}_i\big].$$

Since the function is quadratic in λ, we only need to check whether the search direction remains convex in λ; otherwise we may maximize the term by solving a simple linear equation.
Initialization Since quadratic assignment problems are in general NP hard, we may obviously not hope to achieve an optimal solution. That said, a good initialization is critical for good estimation performance. This can be achieved by using Lemma 3. That is, if K̄ and L̄ only had rank 1, the problem could be solved by sorting X and Y in matching fashion. Instead, we use the projections onto the first principal vectors as initialization in our experiments.
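A sketch of this initialization (ours): sort the projections onto the leading eigenvectors of K̄ and L̄ against each other, trying both signs as Lemma 3 suggests:

import numpy as np

def pca_init(Kc, Lc):
    m = Kc.shape[0]
    u = np.linalg.eigh(Kc)[1][:, -1]      # leading eigenvector of K-bar
    v = np.linalg.eigh(Lc)[1][:, -1]      # leading eigenvector of L-bar
    pi = np.zeros((m, m))
    pi[np.argsort(u), np.argsort(v)] = 1.0   # pair i-th smallest u with i-th smallest v
    return pi                                # also try -v (sign ambiguity of Lemma 3)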
Relaxation to a constrained eigenvalue problem Yet another alternative is to find an approximate solution of the problem in Lemma 1 by solving

$$\text{maximize}_{\eta}\;\; \eta^\top M \eta \quad \text{subject to} \quad A\eta = b. \tag{12}$$

Here the matrix M = K̄ ⊗ L̄ ∈ ℝ^{m²×m²} is given by the outer product of the constituting kernel matrices, η ∈ ℝ^{m²} is a vectorized version of the permutation matrix π, and the constraints imposed by A and b amount to the polytope constraints imposed by Π_m. This is essentially the approach proposed by [7] in the context of balanced graph matching, albeit with a suboptimal optimization procedure. Instead, one may use the exact algorithm proposed by [8].

The problem with the relaxation (12) is that it does not scale well to large estimation problems (the size of the optimization problem scales as O(m⁴)) and that the relaxation does not guarantee a feasible solution, which means that subsequent projection heuristics need to be found. Hence we did not pursue this approach in our experiments.
4 Multivariate Extensions
A natural extension is to align several sets of observations. For this purpose we need to introduce a
multivariate version of the Hilbert Schmidt Independence Criterion. One way of achieving this goal
is to compute the Hilbert Space norm of the difference between the expectation operator for the joint
distribution and the expectation operator for the product of the marginal distributions.
Formally, let there be T random variables x_i ∈ 𝒳_i which are jointly drawn from some distribution p(x₁, …, x_T). Moreover, denote by k_i : 𝒳_i × 𝒳_i → ℝ the corresponding kernels. In this case we can define a kernel on 𝒳₁ × … × 𝒳_T by k₁ · … · k_T. The expectation operator with respect to the joint distribution and with respect to the product of the marginals is given by [2]

$$\mathbf{E}_{x_1,\ldots,x_T}\Big[\prod_{i=1}^{T} k_i(x_i,\cdot)\Big] \quad\text{and}\quad \prod_{i=1}^{T}\mathbf{E}_{x_i}\big[k_i(x_i,\cdot)\big] \tag{13}$$
respectively. Both terms are equal if and only if all random variables are independent. The squared difference between both is given by

$$\mathbf{E}_{x_1,\ldots,x_T,\,x'_1,\ldots,x'_T}\Big[\prod_{i=1}^{T} k_i(x_i,x'_i)\Big] + \prod_{i=1}^{T}\mathbf{E}_{x_i,x'_i}\big[k_i(x_i,x'_i)\big] - 2\,\mathbf{E}_{x_1,\ldots,x_T}\Big[\prod_{i=1}^{T}\mathbf{E}_{x'_i}\big[k_i(x_i,x'_i)\big]\Big], \tag{14}$$
which we refer to as multiway HSIC. A biased empirical estimate of the above is obtained by replacing sums by empirical averages. Denote by K_i the kernel matrix obtained from the kernel k_i on the set of observations X_i := {x_{i1}, …, x_{im}}. In this case the empirical estimate of (14) is given by

$$\mathrm{HSIC}[X_1,\ldots,X_T] := 1_m^\top\Big(\bigodot_{i=1}^{T} K_i\Big)1_m + \prod_{i=1}^{T} 1_m^\top K_i 1_m - 2\cdot 1_m^\top\Big(\bigodot_{i=1}^{T} K_i 1_m\Big), \tag{15}$$

where ⨀ denotes the elementwise product of its arguments (the '.*' notation of Matlab). To apply this to sorting we only need to define T permutation matrices π_i ∈ Π_m and replace the kernel matrices K_i by π_iᵀ K_i π_i.
Without loss of generality we may set π₁ = 1, since we always have the freedom to fix the order of one of the T sets with respect to which the other sets are to be ordered. In terms of optimization the same considerations as presented in Section 3 apply. That is, the objective function is convex in the permutation matrices π_i and we may apply DC programming to find a locally optimal solution. The experimental results for multiway HSIC can be found in the appendix.
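A sketch of the estimate (15), with normalization constants omitted as in the text (names are ours); for sorting, one passes π_iᵀ K_i π_i in place of K_i:

import numpy as np

def multiway_hsic(Ks):
    # Ks: list of T kernel matrices, each m x m, rows aligned across the T sets
    m = Ks[0].shape[0]
    ones = np.ones(m)
    joint = ones @ np.prod(np.stack(Ks), axis=0) @ ones          # 1' (K_1 .* ... .* K_T) 1
    marg = np.prod([ones @ K @ ones for K in Ks])                # prod_i 1' K_i 1
    cross = ones @ np.prod(np.stack([K @ ones for K in Ks]), axis=0)  # 1' (.*_i K_i 1)
    return joint + marg - 2.0 * cross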
5 Applications
To investigate the performance of our algorithm (it is a fairly nonstandard unsupervised method) we
applied it to a variety of different problems ranging from visualization to matching and estimation.
In all our experiments, the maximum number of iterations used in the updates of π is 100 and we
terminate early if progress is less than 0.001% of the objective function.
5.1 Data Visualization
In many cases we may want to visualize data according to the metric structure inherent in it. In
particular, we want to align it according to a given template, such as a grid, a torus, or any other
fixed structure. Such problems occur when presenting images or documents to a user. While there
is a large number of algorithms for low dimensional object layout (self organizing maps, maximum
variance unfolding, local-linear embedding, generative topographic map, . . . ), most of them suffer
from the problem that the low dimensional presentation is nonuniform. This has the advantage of
revealing cluster structure but given limited screen size the presentation is undesirable.
Instead, we may use kernelized sorting to align objects. Here the kernel matrix L is given by the
similarity measure between the objects xi that are to be aligned. The kernel K, on the other hand,
denotes the similarity between the locations where objects are to be aligned to. For the sake of
simplicity we used a Gaussian RBF kernel between the objects to be laid out and also between the positions of the grid, i.e. k(x, x') = exp(−γ‖x − x'‖²). The kernel width γ was adjusted to the inverse median of ‖x − x'‖² such that the argument of the exponential is O(1). Our choice of the Gaussian RBF kernel is likely not optimal for the specific set of observations (e.g. SIFT feature extraction followed by a set kernel would be much more appropriate for images). That said, we want to emphasize that the gains arise from the algorithm rather than a specific choice of a function class.

Figure 1: Layout of 284 images into a 'NIPS 2008' letter grid using kernelized sorting.
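A sketch of this kernel construction with the median heuristic (names ours):

import numpy as np
from scipy.spatial.distance import pdist, squareform

def rbf_median(X):
    d2 = squareform(pdist(X, 'sqeuclidean'))   # ||x - x'||^2 for all pairs
    gamma = 1.0 / np.median(d2[d2 > 0])        # inverse median: exponent is O(1)
    return np.exp(-gamma * d2)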
We obtained 284 images from http://www.flickr.com which were resized and downsampled to 40 × 40 pixels. We converted the images from RGB into Lab color space, yielding 40 × 40 × 3 dimensional objects. The grid, corresponding to X, consists of the 'NIPS 2008' letters on which the images
are to be laid out. After sorting we display the images according to their matching coordinates
(Figure 1). We can see images with similar color composition are found at proximal locations.
We also lay out the images (we add 36 images to make the number 320) into a 2D grid of a 16 × 20 mesh using kernelized sorting. For comparison we use a Self-Organizing Map (SOM) and a
Generative Topographic Mapping (GTM) and the results are shown in the appendix. Although the
images are also arranged according to the color grading, the drawback of SOM (and GTM) is that it
creates blank spaces in the layout. This is because SOM maps several images into the same neuron.
Hence some neurons may not have data associated with them. While SOM is excellent in grouping
similar images together, it falls short in exactly arranging the images into 2D grid.
5.2 Matching
To obtain more quantifiable results rather than just generally aesthetically pleasing pictures we apply
our algorithm to matching problems where the correct match is known.
Image matching: Our first test was to match image halves. For this purpose we used the data
from the layout experiment and we cut the images into two 20 × 40 pixel patches. The aim was to find an alignment between both halves such that the dependence between them is maximized. In other words, given x_i being the left half of the image and y_i being the right half, we want to find a permutation π which lines up x_i and y_i.
This would be a trivial undertaking when being able to compare the two image halves xi and yi .
While such comparison is clearly feasible for images where we know the compatibility function, it
may not be possible for generic objects. The figure is presented in the appendix. For a total of 320
images we recovered 140 pairs. This is quite respectable given that chance level would be 1 correct
pair (a random permutation matrix has on expectation one nonzero diagonal entry).
Estimation In a next experiment we aim to determine how well the overall quality of the matches is.
That is, whether the objects matched share similar properties. For this purpose we used binary, multi-
Table 1: Error rate for matching problems

Type        Data set       m    Kernelized Sorting   Baseline   Reference
Binary      australian     690  0.29±0.02            0.49       0.21±0.04
Binary      breastcancer   683  0.06±0.01            0.46       0.06±0.03
Binary      derm           358  0.08±0.01            0.43       0.00±0.00
Binary      optdigits      765  0.01±0.00            0.49       0.01±0.00
Binary      wdbc           569  0.11±0.04            0.47       0.05±0.02
Multiclass  satimage       620  0.20±0.01            0.80       0.13±0.04
Multiclass  segment        693  0.58±0.02            0.86       0.05±0.02
Multiclass  vehicle        423  0.58±0.08            0.75       0.24±0.07
Regression  abalone        417  13.9±1.70            18.7       6.44±3.14
Regression  bodyfat        252  4.5±0.37             7.20       3.80±0.76
Table 2: Number of correct matches (out of 300) for English aligned documents.

                          Pt   Es   Fr   Sv   Da   It   Nl   De
Kernelized Sorting        252  218  246  150  230  237  223  95
Baseline (length match)   9    12   8    6    6    11   7    4
Reference (dictionary)    298  298  298  296  297  300  298  284
class, and regression datasets from the UCI repository http://archive.ics.uci.edu/ml and the LibSVM site http://www.csie.ntu.edu.tw/~cjlin/libsvmtools.
In our setup we split the dimensions of the data into two sets and permute the data in the second
set. The so-generated two datasets are then matched and we use the estimation error to quantify the
quality of the match. That is, assume that y_i is associated with the observation x_i. In this case we compare y_i and y_{π(i)} using binary classification, multiclass, or regression loss accordingly.
To ensure good dependence between the subsets of variables we choose a split which ensures correlation. This is achieved as follows: we pick the dimension with the largest correlation coefficient as
a reference. We then choose the coordinates that have at least 0.5 correlation with the reference and
split those equally into two sets, set A and set B. We also split the remainder coordinates equally
into the two existing sets and finally put the reference coordinate into set A. This ensures that the set
B of dimensions will have strong correlation with at least one dimension in the set A. The listing of
the set members for different datasets can be found in the appendix.
The results are summarized in Table 1. As before, we use a Gaussian RBF kernel with median
adjustment of the kernel width. To obtain statistically meaningful results we subsample 80% of the
data 10 times and compute the error of the match on the subset (this is done in lieu of cross-validation
since the latter is meaningless for matching). As baseline we compute the expected performance of
random permutations which can be done exactly. Finally, as reference we use SVM classification /
regression with results obtained by 10-fold cross-validation. Matching is able to retrieve significant
information about the labels of the corresponding classes, in some cases performing as well as a full
classification approach.
Multilingual Document Matching To illustrate that kernelized sorting is able to recover nontrivial
similarity relations we applied our algorithm to the matching of multilingual documents. For this
purpose we used the Europarl Parallel Corpus. It is a collection of the proceedings of the European Parliament, dating back to 1996 [9]. We select the 300 longest documents of Danish (Da),
Dutch (Nl), English (En), French (Fr), German (De), Italian (It), Portuguese (Pt), Spanish (Es), and
Swedish (Sv). The purpose is to match the non-English documents (source languages) to its English
translations (target language). Note that our algorithm does not require a cross-language dictionary.
In fact, one could use kernelized sorting to generate a dictionary after initial matching has occurred.
In keeping with the choice of a simple kernel we used standard TF-IDF (term frequency – inverse document frequency) features of a bag of words kernel. As preprocessing we remove stopwords (via NLTK) and perform stemming using http://snowball.tartarus.org. Finally, the feature vectors are normalized to unit length in terms of the ℓ₂ norm. Since kernel matrices on documents are notoriously diagonally dominant, we use the bias-corrected version of our optimization problem.
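A minimal sketch of this preprocessing using standard NLTK and scikit-learn calls; it mirrors, but is not necessarily identical to, the pipeline described above (it assumes the NLTK stopword corpus has been downloaded):

from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_features(docs, lang='english'):
    stem = SnowballStemmer(lang).stem
    stop = set(stopwords.words(lang))
    def analyzer(doc):
        # lowercase, drop stopwords and non-alphabetic tokens, then stem
        return [stem(w) for w in doc.lower().split() if w.isalpha() and w not in stop]
    # norm='l2' yields unit-length feature vectors, as described above
    return TfidfVectorizer(analyzer=analyzer, norm='l2').fit_transform(docs)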
As baseline we used a fairly straightforward means of document matching via its length. That is,
longer documents in one language will be most probably translated into longer documents in the
other language. This observation has also been used in the widely adopted sentence alignment
method [10]. As a dictionary-based alternative we translate the documents using Google's translation engine http://translate.google.com to find counterparts in the source language.
Smallest distance matches in combination with a linear assignment solver are used for the matching.
The experimental results are summarized in Table 2. We describe a line search procedure in Section
3. In practice we find that fixing λ at a given step size and choosing the best solution in terms of the objective function for λ ∈ {0.1, 0.2, …, 1.0} works better. Further details can be found in the
appendix. Low matching performance for the document length-based method might be due to small
variance in the document length after we choose the 300 longest documents. The dictionary-based
method gives near-to-perfect matching performance. Further, in forming the dictionary we do not perform stemming on English words, and thus the dictionary is highly customized to the problem at hand. Our method produces results consistent with the dictionary-based method, with notably low
performance for matching German documents to its English translations. We conclude that the
difficulty of German-English document matching is inherent to this dataset [9]. Arguably the results
are quite encouraging as our method uses only a within class similarity measure while still matches
more than 2/3 of what is possible by a dictionary-based method.
6 Summary and Discussion
In this paper, we generalized sorting by maximizing the dependency between matched pairs of observations by means of the Hilbert Schmidt Independence Criterion. This way we are able to
perform matching without the need of a cross-domain similarity measure. The proposed sorting
algorithm is efficient and it can be applied to a variety of different problems ranging from data
visualization to image and multilingual document matching and estimation. Further examples of
kernelized sorting and of reference algorithms are given in the appendix.
Acknowledgments NICTA is funded through the Australian Government's Backing Australia's Ability initiative, in part through the ARC. This research was supported by the Pascal Network. Parts of this work were done while LS and AJS were working at NICTA.
References

[1] T. Jebara. Kernelizing sorting, permutation, and alignment for minimum volume PCA. In Conference on Computational Learning Theory (COLT), volume 3120 of LNAI, pages 609–623. Springer, 2004.
[2] A.J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In E. Takimoto, editor, Algorithmic Learning Theory, Lecture Notes on Computer Science. Springer, 2007.
[3] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. J. Mach. Learn. Res., 5:73–99, 2004.
[4] T. Pham Dinh and L. Hoai An. A D.C. optimization algorithm for solving the trust-region subproblem. SIAM Journal on Optimization, 8(2):476–505, 1998.
[5] A.L. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 15:915–936, 2003.
[6] R. Jonker and A. Volgenant. A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38:325–340, 1987.
[7] T. Cour, P. Srinivasan, and J. Shi. Balanced graph matching. In B. Schölkopf, J. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems 19, pages 313–320. MIT Press, December 2006.
[8] W. Gander, G.H. Golub, and U. von Matt. A constrained eigenvalue problem. In Linear Algebra Appl. 114-115, pages 815–839, 1989.
[9] P. Koehn. Europarl: A parallel corpus for statistical machine translation. In Machine Translation Summit X, pages 79–86, 2005.
[10] W. A. Gale and K. W. Church. A program for aligning sentences in bilingual corpora. In Meeting of the Association for Computational Linguistics, pages 177–184, 1991.
Estimating the Location and Orientation of Complex, Correlated Neural Activity using MEG
D.P. Wipf, J.P. Owen, H.T. Attias, K. Sekihara, and S.S. Nagarajan
Biomagnetic Imaging Laboratory
University of California, San Francisco
Abstract
The synchronous brain activity measured via MEG (or EEG) can be interpreted
as arising from a collection (possibly large) of current dipoles or sources located
throughout the cortex. Estimating the number, location, and orientation of these
sources remains a challenging task, one that is significantly compounded by the
effects of source correlations and the presence of interference from spontaneous
brain activity, sensor noise, and other artifacts. This paper derives an empirical
Bayesian method for addressing each of these issues in a principled fashion. The
resulting algorithm guarantees descent of a cost function uniquely designed to
handle unknown orientations and arbitrary correlations. Robust interference suppression is also easily incorporated. In a restricted setting, the proposed method
is shown to have theoretically zero bias estimating both the location and orientation of multi-component dipoles even in the presence of correlations, unlike a
variety of existing Bayesian localization methods or common signal processing
techniques such as beamforming and sLORETA. Empirical results on both simulated and real data sets verify the efficacy of this approach.
1 Introduction
Magnetoencephalography (MEG) and related electroencephalography (EEG) use an array of sensors to take electromagnetic field (or voltage potential) measurements from on or near the scalp
surface with excellent temporal resolution. In both cases, the observed field is generated by the
same synchronous, compact current sources located within the brain. Although useful for research
and clinical purposes, accurately determining the spatial distribution of these unknown sources is
an open problem. The relevant estimation problem can be posed as follows. The measured electromagnetic signal is B ∈ ℝ^{d_b×d_t}, where d_b equals the number of sensors and d_t is the number of time points at which measurements are made. Each unknown source S_i ∈ ℝ^{d_c×d_t} is a d_c-dimensional neural current dipole, at d_t timepoints, projecting from the i-th (discretized) voxel or candidate location distributed throughout the cortex. These candidate locations can be obtained by segmenting a structural MR scan of a human subject and tesselating the gray matter surface with a set of vertices. B and each S_i are related by the likelihood model

$$B = \sum_{i=1}^{d_s} L_i S_i + E, \tag{1}$$

where d_s is the number of voxels under consideration and L_i ∈ ℝ^{d_b×d_c} is the so-called lead-field matrix for the i-th voxel. The k-th column of L_i represents the signal vector that would be observed at the scalp given a unit current source/dipole at the i-th vertex with a fixed orientation in the k-th direction. It is common to assume d_c = 2 (for MEG) or d_c = 3 (for EEG), which allows flexible source orientations to be estimated in 2D or 3D space. Multiple methods based on the physical properties of the brain and Maxwell's equations are available for the computation of each L_i [7]. Finally, E is a noise-plus-interference term where we assume, for simplicity, that columns are drawn independently from N(0, Σ_ε). However, temporal correlations can easily be incorporated if desired using a simple transformation outlined in [3].
To obtain reasonable spatial resolution, the number of candidate source locations will necessarily be much larger than the number of sensors (d_s ≫ d_b). The salient inverse problem then becomes the ill-posed estimation of regions with significant brain activity, which are reflected by voxels i such that ‖S_i‖ > 0; we refer to these as active dipoles or sources. Because the inverse model is severely underdetermined (the mapping from the source activity configuration S ≜ [S₁ᵀ, …, S_{d_s}ᵀ]ᵀ to the sensor measurement B is many to one), all efforts at source reconstruction are heavily dependent on prior assumptions, which in a Bayesian framework are embedded in the distribution p(S). Such
a prior is often considered to be fixed and known, as in the case of minimum current estimation
(MCE) [10], minimum variance adaptive beamforming (MVAB) [9], and sLORETA [5]. Alternatively, a number of empirical Bayesian approaches have been proposed that attempt a form of model
selection by using the data, whether implicitly or explicitly, to guide the search for an appropriate
prior. Examples include variational Bayesian methods and hierarchical covariance component models [3, 6, 8, 12, 13]. While advantageous in many respects, all of these methods retain substantial
weaknesses estimating complex, correlated source configurations with unknown orientation in the
presence of background interference (e.g., spontaneous brain activity, sensor noise, etc.).
There are two types of correlations that can potentially disrupt the source localization process. First,
there are correlations within dipole components (meaning the individual rows of S_i are correlated), which always exist to a high degree in real data with unknown orientation (i.e., d_c > 1). Secondly, there are correlations between different dipoles that are simultaneously active (meaning rows of S_i are correlated with rows of S_j for some voxels i ≠ j). These correlations are more application specific and may or may not exist. The larger the number of active sources, the greater the chance that both types of correlation can disrupt the estimation process. This issue can be problematic for two
reasons. First, failure to accurately account for unknown orientations or correlations can severely
disrupt the localization process, leading to a very misleading impression of which brain areas are
active. Secondly, the orientations and correlations themselves may have clinical significance.
In this paper, we present an alternative empirical Bayesian scheme that attempts to improve upon
existing methods in terms of source reconstruction accuracy and/or computational robustness and
efficiency. Section 2 presents the basic generative model which underlies the proposed method and
describes the associated inference problem. Section 3 derives a robust algorithm for estimating the
sources using this model and proves that each iteration is guaranteed to reduce the associated cost
function. It also describes how interference suppression can be naturally incorporated. Section 4
then provides a theoretical analysis of the bias involved in estimating both the location and orientation of active sources, demonstrating that the proposed method has substantial advantages over
existing approaches. Finally, Section 5 contains experimental results using our algorithm on both
simulated and real data, followed by a brief discussion in Section 6.
2 Modeling Assumptions
To begin we invoke the noise model from (1), which fully defines the assumed likelihood

$$p(B|S) \propto \exp\Big(-\frac{1}{2}\,\Big\|B - \sum_{i=1}^{d_s} L_i S_i\Big\|^2_{\Sigma_\epsilon^{-1}}\Big), \tag{2}$$

where ‖X‖_W denotes the weighted matrix norm √(trace[Xᵀ W X]). The unknown noise covariance Σ_ε will be estimated from the data using a variational Bayesian factor analysis (VBFA) model as discussed in Section 3.2 below; for now we will consider that it is fixed and known. Next we adopt the following source prior for S:

$$p(S|\Gamma) \propto \exp\Big(-\frac{1}{2}\,\mathrm{trace}\Big[\sum_{i=1}^{d_s} S_i^\top \Gamma_i^{-1} S_i\Big]\Big). \tag{3}$$

This is equivalent to applying independently, at each time point, a zero-mean Gaussian distribution with covariance Γ_i to each source S_i. We define Γ to be the d_s d_c × d_s d_c block-diagonal matrix formed by ordering each Γ_i along the diagonal of an otherwise zero-valued matrix. This implies, equivalently, that p(S|Γ) ∝ exp(−½ trace[Sᵀ Γ⁻¹ S]).
If Γ were somehow known, then the conditional distribution p(S|B, Γ) ∝ p(B|S) p(S|Γ) is a fully specified Gaussian distribution with mean and covariance given by

$$\mathbf{E}_{p(S|B,\Gamma)}[S] = \Gamma L^\top\big(\Sigma_\epsilon + L\Gamma L^\top\big)^{-1} B \tag{4}$$

$$\mathrm{Cov}_{p(s_j|B,\Gamma)}[s_j] = \Gamma - \Gamma L^\top\big(\Sigma_\epsilon + L\Gamma L^\top\big)^{-1} L\Gamma, \quad \forall j, \tag{5}$$

where s_j denotes the j-th column of S and individual columns are uncorrelated. However, since Γ is actually not known, a suitable approximation Γ̂ ≈ Γ must first be found. One principled way to accomplish this is to integrate out the sources S and then maximize

$$p(B|\Gamma) = \int p(B|S)\,p(S|\Gamma)\,dS \propto \exp\Big(-\frac{1}{2} B^\top \Sigma_b^{-1} B\Big), \qquad \Sigma_b \triangleq \Sigma_\epsilon + L\Gamma L^\top. \tag{6}$$
This is equivalent to minimizing the cost function

$$\mathcal{L}(\Gamma) \triangleq -2\log p(B|\Gamma) \equiv \mathrm{trace}\big(C_b \Sigma_b^{-1}\big) + \log|\Sigma_b|, \tag{7}$$

where C_b ≜ n⁻¹ B Bᵀ is the empirical covariance. This procedure is sometimes referred to as type-II maximum likelihood, evidence maximization, or empirical Bayes [1].
The first term of (7) is a measure of the dissimilarity between the empirical data covariance C_b and the model data covariance Σ_b; in general, this factor encourages Γ to be large. The second term provides a regularizing or sparsifying effect, penalizing a measure of the volume formed by the model covariance Σ_b.¹ Since the volume of any high dimensional space is more effectively reduced by collapsing individual dimensions as close to zero as possible (as opposed to incrementally reducing all dimensions isometrically), this penalty term promotes a model covariance that is maximally degenerate (or non-spherical), which pushes elements of Γ to exactly zero. This intuition is supported theoretically by the results in Section 4.

Given some type-II ML estimate Γ̂, we obtain the attendant empirical prior p(S|Γ̂). To the extent that this 'learned' prior is realistic, the resulting posterior p(S|B, Γ̂) quantifies regions of significant current density, and point estimates for the unknown source dipoles S_i can be obtained by evaluating the posterior mean computed using (4). If a given Γ̂_i → 0 as described above, then the associated Ŝ_i computed using (4) also becomes zero. It is this pruning mechanism that naturally chooses the number of active dipoles.
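For concreteness, the cost (7) and the posterior mean (4) for a fixed Γ take only a few lines. The dense-matrix sketch below is ours and ignores the block structure exploited in Section 3:

import numpy as np

def neg_log_evidence(Cb, Sigma_e, L, Gamma):
    Sigma_b = Sigma_e + L @ Gamma @ L.T            # model data covariance, eq. (6)
    _, logdet = np.linalg.slogdet(Sigma_b)
    return np.trace(Cb @ np.linalg.inv(Sigma_b)) + logdet   # eq. (7)

def posterior_mean(B, Sigma_e, L, Gamma):
    Sigma_b = Sigma_e + L @ Gamma @ L.T
    return Gamma @ L.T @ np.linalg.solve(Sigma_b, B)        # eq. (4)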
3 Algorithm Derivation
Given Γ and Σ_ε, computing the posterior on S is trivial. Consequently, determining these unknown quantities is the primary estimation task. We will first derive an algorithm for computing Γ assuming Σ_ε is known. Later, in Section 3.2, we will describe a powerful procedure for learning Σ_ε.
3.1 Learning the Hyperparameters Γ
The primary objective of this section is to minimize (7) with respect to Γ. Of course one option is to treat the problem as a general nonlinear optimization task and perform gradient descent or some other generic procedure. Related methods in the MEG literature rely, either directly or indirectly, on a form of the EM algorithm [3, 8]. However, these algorithms are exceedingly slow when d_s is large and they have not been extended to handle flexible orientations. Consequently, here we derive an alternative optimization procedure that expands upon ideas from [8, 12], handles arbitrary/unknown dipole orientations, and converges quickly.
To begin, we note that L(Γ) only depends on the data B through the d_b × d_b sample correlation matrix C_b. Therefore, to reduce the computational burden, we replace B with a matrix B̃ ∈ ℝ^{d_b × rank(B)} such that B̃ B̃ᵀ = C_b. This removes any per-iteration dependency on d_t, which can potentially be large, without altering the actual cost function. It also implies that, for purposes of computing Γ, the number of columns of S is reduced to match rank(B). We now re-express the cost function L(Γ) in an alternative form leading to convenient update rules and, by construction, a proof that L(Γ^{(k+1)}) ≤ L(Γ^{(k)}) at each iteration.
¹The determinant of a matrix is equal to the product of its eigenvalues, a well-known volumetric measure.
First, the data fit term can be expressed as

$$\mathrm{trace}\big(C_b \Sigma_b^{-1}\big) = \min_{X}\;\Big\|\tilde{B} - \sum_{i=1}^{d_s} L_i X_i\Big\|^2_{\Sigma_\epsilon^{-1}} + \sum_{i=1}^{d_s}\|X_i\|^2_{\Gamma_i^{-1}}, \tag{8}$$

where X ≜ [X₁ᵀ, …, X_{d_s}ᵀ]ᵀ is a matrix of auxiliary variables. Likewise, because the log-determinant term of L(Γ) is concave in Γ, it can be expressed as a minimum over upper-bounding hyperplanes via

$$\log|\Sigma_b| = \min_{Z}\;\Big[\sum_{i=1}^{d_s} \mathrm{trace}\big(Z_i^\top \Gamma_i\big) - h^*(Z)\Big], \tag{9}$$

where Z ≜ [Z₁ᵀ, …, Z_{d_s}ᵀ]ᵀ and h*(Z) is the concave conjugate of log|Σ_b|. For our purposes below, we will never actually have to compute h*(Z). Dropping the minimizations and combining terms from (8) and (9) leads to the modified cost function

$$\mathcal{L}(\Gamma, X, Z) = \Big\|\tilde{B} - \sum_{i=1}^{d_s} L_i X_i\Big\|^2_{\Sigma_\epsilon^{-1}} + \sum_{i=1}^{d_s}\Big[\|X_i\|^2_{\Gamma_i^{-1}} + \mathrm{trace}\big(Z_i^\top \Gamma_i\big)\Big] - h^*(Z), \tag{10}$$
where by construction L(Γ) = min_X min_Z L(Γ, X, Z). It is straightforward to show that if {Γ̂, X̂, Ẑ} is a local (global) minimum to L(Γ, X, Z), then Γ̂ is a local (global) minimum to L(Γ). Since direct optimization of L(Γ) may be difficult, we can instead iteratively optimize L(Γ, X, Z) via coordinate descent over Γ, X, and Z. In each case, when two are held fixed, the third can be globally minimized in closed form. This ensures that each cycle will reduce L(Γ, X, Z), but more importantly, will reduce L(Γ) (or leave it unchanged if a fixed-point or limit cycle is reached). The associated update rules from this process are as follows.
The optimal X (with Γ and Z fixed) is just the standard weighted minimum-norm solution given by

$$X_i^{\text{new}} \leftarrow \Gamma_i L_i^\top \Sigma_b^{-1} \tilde{B} \tag{11}$$

for each i. The minimizing Z equals the slope at the current Γ of log|Σ_b|. As such, we have

$$Z_i^{\text{new}} \leftarrow \nabla_{\Gamma_i} \log|\Sigma_b| = L_i^\top \Sigma_b^{-1} L_i. \tag{12}$$
With Z and X fixed, computing the minimizing Γ is a bit more difficult because of the constraint Γ_i ∈ H⁺ for all i, where H⁺ is the set of positive-semidefinite, symmetric d_c × d_c covariance matrices. To obtain each Γ_i, we must solve

$$\Gamma_i^{\text{new}} \leftarrow \arg\min_{\Gamma_i \in H^+}\;\Big[\|X_i\|^2_{\Gamma_i^{-1}} + \mathrm{trace}\big(Z_i^\top \Gamma_i\big)\Big]. \tag{13}$$

An unconstrained solution will satisfy

$$\nabla_{\Gamma_i} \mathcal{L}(\Gamma_i, X_i, Z_i) = 0, \tag{14}$$

which, after computing the necessary derivatives and re-arranging terms, gives the equivalent condition

$$X_i X_i^\top = \Gamma_i Z_i \Gamma_i. \tag{15}$$
There are multiple (unconstrained) solutions to this equation; we will choose the unique one that satisfies the constraint $\Gamma_i \in H^+$. This can be found using
$$X_i X_i^T = Z_i^{-1/2} \left( Z_i^{1/2} X_i X_i^T Z_i^{1/2} \right) Z_i^{-1/2} = Z_i^{-1/2} \left( Z_i^{1/2} X_i X_i^T Z_i^{1/2} \right)^{1/2} \left( Z_i^{1/2} X_i X_i^T Z_i^{1/2} \right)^{1/2} Z_i^{-1/2}$$
$$= \left[ Z_i^{-1/2} \left( Z_i^{1/2} X_i X_i^T Z_i^{1/2} \right)^{1/2} Z_i^{-1/2} \right] Z_i \left[ Z_i^{-1/2} \left( Z_i^{1/2} X_i X_i^T Z_i^{1/2} \right)^{1/2} Z_i^{-1/2} \right]. \qquad (16)$$
This indicates the solution (or update equation)
$$\Gamma_i^{\mathrm{new}} \rightarrow Z_i^{-1/2} \left( Z_i^{1/2} X_i X_i^T Z_i^{1/2} \right)^{1/2} Z_i^{-1/2}, \qquad (17)$$
which satisfies the constraint. And since we are minimizing a convex function of $\Gamma_i$ (over the constraint set), we know that this is indeed a minimizing solution.
In summary then, to estimate $\Gamma$, we need simply iterate (11), (12), and (17), and with each pass we are guaranteed to reduce (or leave unchanged) $\mathcal{L}(\Gamma)$. The per-iteration cost is linear in the number of voxels $d_s$, so the computational cost is relatively modest (it is quadratic in $d_b$, and cubic in $d_c$, but these quantities are relatively small). The convergence rate is orders of magnitude faster than EM-based algorithms such as those in [3, 8] (see Figure 1 (right)).
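For concreteness, here is a minimal NumPy sketch of one way to implement the update cycle (11), (12), (17). The dimensions and random data are our illustrative assumptions, and the monitored cost is the data-fit plus log-determinant form suggested by (8) and (9), not necessarily the paper's exact normalization.

```python
import numpy as np

rng = np.random.default_rng(0)
db, dc, ds, r = 8, 2, 5, 6                  # sensors, dipole components, voxels, rank(B)
L = [rng.standard_normal((db, dc)) for _ in range(ds)]   # per-voxel lead-fields L_i
Sigma_eps = np.eye(db)                                   # assumed interference term
B = rng.standard_normal((db, r))                         # stand-in for B~ (B~ B~^T = C_b)

def msqrt(M):
    """Symmetric PSD matrix square root via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

Gamma = [np.eye(dc) for _ in range(ds)]
for it in range(50):
    Sigma_b = Sigma_eps + sum(Li @ Gi @ Li.T for Li, Gi in zip(L, Gamma))
    Sb_inv = np.linalg.inv(Sigma_b)
    cost = np.trace(B.T @ Sb_inv @ B) + np.linalg.slogdet(Sigma_b)[1]
    X = [Gi @ Li.T @ Sb_inv @ B for Li, Gi in zip(L, Gamma)]    # update (11)
    Z = [Li.T @ Sb_inv @ Li for Li in L]                        # update (12)
    Gamma = []
    for Xi, Zi in zip(X, Z):
        Zh = msqrt(Zi)
        Zh_inv = np.linalg.pinv(Zh)
        Gamma.append(Zh_inv @ msqrt(Zh @ Xi @ Xi.T @ Zh) @ Zh_inv)  # update (17)
    if it % 10 == 0:
        print(f"iter {it:2d}  cost {cost:.4f}")   # should be non-increasing
```

Each Gamma update is PSD by construction and satisfies $\Gamma_i Z_i \Gamma_i = X_i X_i^T$, mirroring (15).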
3.2 Learning the Interference $\Sigma_\epsilon$

The learning procedure described in the previous section boils down to fitting a structured maximum likelihood covariance estimate $\Sigma_b = \Sigma_\epsilon + L \Gamma L^T$ to the data covariance $C_b$. The idea here is that $L \Gamma L^T$ will reflect the brain signals of interest while $\Sigma_\epsilon$ will capture all interfering factors, e.g., spontaneous brain activity, sensor noise, muscle artifacts, etc. Since $\Sigma_\epsilon$ is unknown, it must somehow be estimated or otherwise accounted for. Given access to pre-stimulus data (i.e., data assumed to have no signal/sources of interest), stimulus evoked factor analysis (SEFA) provides a powerful means of decomposing a data covariance matrix $C_b$ into signal and interference components. While details can be found in [4], SEFA computes the approximation
$$C_b \approx \Lambda + E E^T + A A^T, \qquad (18)$$
where $E$ represents a matrix of learned interference factors, $\Lambda$ is a diagonal noise matrix, and $A$ is a matrix of signal factors. There are two ways to utilize this decomposition (more details can be found in [11]). First, we can simply set $\Sigma_\epsilon \rightarrow \Lambda + E E^T$ and proceed as in Section 3.1. Alternatively, we can set $\Sigma_\epsilon \rightarrow 0$ and then substitute $A A^T$ for $C_b$, i.e., run the same algorithm on a de-noised signal covariance. For technical reasons beyond the scope of this paper, it appears that algorithm performance may be superior when the latter paradigm is adopted.
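A short sketch of the two options, assuming a SEFA-style decomposition has already been computed (SEFA itself is described in reference [4] and is not implemented here; all array names are hypothetical):

```python
import numpy as np

def interference_handling(Lam_diag, E, A, use_denoised=True):
    """Given C_b ~= Lambda + E E^T + A A^T, return (Sigma_eps, C_fit) for the
    two strategies described above. C_fit = None means: keep the original C_b."""
    if use_denoised:
        # Option 2: drop Sigma_eps and fit to the de-noised signal covariance A A^T.
        Sigma_eps = np.zeros((len(Lam_diag), len(Lam_diag)))
        C_fit = A @ A.T
    else:
        # Option 1: absorb diagonal noise plus interference factors into Sigma_eps.
        Sigma_eps = np.diag(Lam_diag) + E @ E.T
        C_fit = None
    return Sigma_eps, C_fit
```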
4 Analysis of Theoretical Localization/Orientation Bias
Theoretical support for the proposed algorithm is possible in the context of estimation bias assuming simplified source configurations. For example, substantial effort has been devoted to quantifying localization bias when estimating a single dipolar source. Recently it has been shown, both empirically and theoretically [5, 9], that the MVAB and sLORETA algorithms have zero location bias under this condition at high SNR. This has been extended to include certain empirical Bayesian methods [8, 12]. However, these results assume a single dipole with fixed, known orientation (or alternatively, that $d_c = 1$), and therefore do not formally handle source correlations or multi-component dipoles. The methods from [6, 13] also purport to address these issues, but no formal analyses are presented. In contrast, despite it being a complex, non-convex function, we now demonstrate that $\mathcal{L}(\Gamma)$ has very attractive bias properties regarding both localization and orientation. We will assume that the full lead-field $L \triangleq [L_1, \ldots, L_{d_s}]$ represents a sufficiently high sampling of the source space such that any active dipole component aligns with some lead-field columns. Unbiasedness can also be shown in the continuous case, but the discrete scenario is more straightforward and of course more relevant to any practical task.
Some preliminary definitions are required to proceed. We define the empirical intra-dipole correlation matrix at the $i$-th voxel as $C_{ii} \triangleq \frac{1}{d_t} S_i^T S_i$; non-zero off-diagonal elements imply that correlations are present. Except in highly contrived situations, this type of correlation will always exist. The empirical inter-dipole correlation matrix between voxels $i$ and $j$ is $C_{ij} \triangleq \frac{1}{d_t} S_i^T S_j$; any non-zero element implies the existence of a correlation. In practice, this form of correlation may or may not be present. With regard to the lead-field $L$, spark is defined as the smallest number of linearly dependent columns [2]. By definition then, $2 \leq \mathrm{spark}(L) \leq d_b + 1$. Finally, $d_a$ denotes the number of active sources, i.e., the number of voxels whereby $\|S_i\| > 0$.
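The following sketch computes these quantities for toy data, taking each $S_i$ as a $d_t \times d_c$ block of time courses (an assumption consistent with the definitions above). Note that spark is NP-hard in general; the exhaustive search below is only feasible for very small matrices.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
dt_, dc = 200, 2
S_i = rng.standard_normal((dt_, dc))      # d_t x d_c orientation time courses, voxel i
S_j = rng.standard_normal((dt_, dc))

C_ii = S_i.T @ S_i / dt_                  # intra-dipole correlation matrix
C_ij = S_i.T @ S_j / dt_                  # inter-dipole correlation matrix
print(C_ii.round(2), C_ij.round(2), sep="\n")

def spark(L, tol=1e-10):
    """Smallest number of linearly dependent columns (exhaustive; tiny L only)."""
    m, n_cols = L.shape
    for k in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), k):
            if np.linalg.matrix_rank(L[:, cols], tol=tol) < k:
                return k
    return n_cols + 1                      # all columns independent

print(spark(rng.standard_normal((4, 6))))  # generically d_b + 1 = 5
```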
Theorem 1. In the limit as $\Sigma_\epsilon \rightarrow 0$ (high SNR) and assuming $d_a d_c < \mathrm{spark}(L) - 1$, the cost function $\mathcal{L}(\Gamma)$ maintains the following two properties:

1. For arbitrary $C_{ii}$ and $C_{ij}$, the unique global minimum $\Gamma^*$ produces a source estimate $S^* = E_{p(S|B,\Gamma^*)}[S]$ computed using (4) that equals the generating source matrix $S$, i.e., it is unbiased in both location and orientation for all active dipoles and correctly zeros out the inactive ones.

2. If $C_{ij} = 0$ for all active dipoles (although $C_{ii}$ is still arbitrary), then there are no local minima, i.e., the cost function is unimodal.
The proof has been deferred to [11]. In words, this theorem says that intra-dipole correlations do not disrupt the estimation process by creating local minima, and that the global minimum is always unbiased. In contrast, inter-dipole correlations can potentially create local minima, but they do not affect the global minimum. Empirically, we will demonstrate that the algorithm derived in Section 3 is effective at avoiding these local minima (see Section 5). With added assumptions these results can be extended somewhat to handle the inclusion of noise.

The cost functions from [8, 12] bear the closest resemblance to $\mathcal{L}(\Gamma)$; however, neither possesses the second attribute from Theorem 1. This is a very significant failing because, as mentioned previously, intra-dipole correlations are always present in each active dipole. Consequently, localization and orientation bias can occur because of convergence to a local minimum. The iterative Bayesian scheme from [13], while very different in structure, also directly attempts to estimate flexible orientations and handle, to some extent, source correlations. While details are omitted for brevity, we can prove that the full model upon which this algorithm is based fails to satisfy the first property of the theorem, so the corresponding global minimum can be biased. In contrast, beamformers and sLORETA are basically linear methods with no issue of global or local minima. However, the popular sLORETA and MVAB solutions will in general display a bias for multi-component dipoles ($d_c > 1$) or when multiple dipoles ($d_a > 1$) are present, regardless of correlations.
5 Empirical Evaluation
In this section we test the performance of our algorithm on both simulated and real data sets. We
focus here on localization accuracy assuming strong source correlations and unknown orientations.
While orientation estimates themselves are not shown for space considerations, accurate localization
implicitly indicates that this confound has been adequately handled. More comprehensive experiments, including comparisons with additional algorithms, are forthcoming [11].
Simulated Data: We first conducted tests using simulated data with realistic source configurations. The brain volume was segmented into 5mm voxels and a two-orientation ($d_c = 2$) forward leadfield was calculated using a spherical-shell model [7]. The data time course was partitioned into pre- and post-stimulus periods. In the pre-stimulus period (263 samples) there is only noise and interfering brain activity, while in the post-stimulus period (437 samples) there is the same (statistically) noise and interference factors plus source activity of interest. We used two noise conditions: Gaussian noise and real-brain noise. In the former case, we seeded voxels with Gaussian noise in each orientation and then projected the activity to the sensors using the leadfield, producing colored Gaussian noise at the sensors. To this activity, we added additional Gaussian sensor noise. For the real-brain noise case, we used resting-state data collected from a human subject that is presumed to have ongoing and spontaneous activity and sensor noise. In both the Gaussian and real-brain noise cases, the pre-stimulus activity was ongoing and continued into the post-stimulus period, where the simulated source signals were added. Sources were seeded at locations in the brain as damped sinusoids and this voxel activity was projected to the sensors. We could adjust both the signal-to-noise-plus-interference ratio (SNIR) and the correlations between the different voxel time courses to examine the algorithm performance on correlated sources and unknown dipole orientations.
We ran 100 simulations of three randomly seeded sources at different SNIR levels (-5, 0, 5, 10 dB). The sources in these simulations always had an inter-dipole correlation coefficient of 0.5; intra-dipole correlations were present as well. We ran the simulation with both Gaussian noise and real brain noise using the MVAB and our proposed method. In order to evaluate performance, we used the following test for a hit or miss. We drew spheres around each seeded source location and obtained the maximum voxel value in each sphere. Then we calculated the maximum voxel activation outside the three spheres. If the maximum inside each sphere was greater than the maximum outside all of the spheres, it was counted as a hit (in this way, we are implicitly accounting somewhat for false alarms). Each simulation could get a score of 0, 1, 2, or 3, with 3 being the best. Figure 1 (left) displays comparative results averaged over 100 trials with standard errors. Our method quite significantly outperforms the MVAB, which is designed to handle unknown orientations but has difficulty with source correlations. Figure 1 (middle) shows a sample reconstruction on a much more complex source configuration composed of 10 dipolar sources. Finally, Figure 1 (right) gives an example of the relative convergence improvement afforded by our method relative to an EM implementation analogous to [3, 8]. We also wanted to test the performance on perfectly correlated sources with unknown orientations and compare it to other state-of-the-art Bayesian methods. An example using three such sources and 5 dB SNIR is given in Figure 2.
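A minimal sketch of the hit/miss criterion just described, with coordinates and sphere radius as illustrative assumptions:

```python
import numpy as np

def score_simulation(voxel_xyz, voxel_vals, seed_xyz, radius=10.0):
    """Count seeded sources whose in-sphere maximum exceeds the maximum
    voxel value anywhere outside all spheres (score in 0..len(seed_xyz))."""
    voxel_xyz = np.asarray(voxel_xyz, dtype=float)
    voxel_vals = np.asarray(voxel_vals, dtype=float)
    inside_any = np.zeros(len(voxel_vals), dtype=bool)
    per_sphere_max = []
    for s in seed_xyz:
        d = np.linalg.norm(voxel_xyz - np.asarray(s, dtype=float), axis=1)
        mask = d <= radius
        inside_any |= mask
        per_sphere_max.append(voxel_vals[mask].max() if mask.any() else -np.inf)
    outside_max = voxel_vals[~inside_any].max() if (~inside_any).any() else -np.inf
    return sum(m > outside_max for m in per_sphere_max)

rng = np.random.default_rng(0)
pts = rng.uniform(-50, 50, size=(500, 3))
vals = rng.random(500)
print(score_simulation(pts, vals, seed_xyz=[(0, 0, 0), (20, 10, 5), (-15, 25, 0)]))
```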
[Figure 1 graphics: left panel, successful localizations (0-3) vs. SNIR, with curves for the proposed method and MVAB under Gaussian noise and real brain noise; middle panel, source map in z (mm) vs. x (mm); right panel, cost function value vs. iteration number for the EM algorithm and the proposed method.]
Figure 1: Left: Aggregate localization results for MVAB and the proposed method recovering three
correlated sources with unknown orientations. Middle: Example reconstruction of 10 relatively
shallow sources (green circles) using proposed method (MVAB performs poorly on this task). Right:
Convergence rate of proposed method relative to a conventional EM implementation based on [3, 8].
[Figure 2 graphics: three reconstruction panels plotted in z (mm) vs. x (mm).]
Figure 2: Reconstructions of three perfectly correlated dipoles (green circles) with unknown orientations. Left: MVAB. Middle: variational Bayesian method from [13]. Right: proposed method.
Real Data: Two stimulus-evoked data sets were collected from normal, healthy research subjects on a 275-channel CTF System MEG device. The first data set was a sensory evoked field (SEF) paradigm, where the subject's right index finger was tapped for a total of 256 trials. A peak is typically seen 50 ms after stimulation in the contralateral (in this case, the left) somatosensory cortical area for the hand, i.e., the dorsal region of the postcentral gyrus. The proposed algorithm was able to localize this activation to the correct area of somatosensory cortex as seen in Figure 3 (left) and the estimated time course shows the typical 50 ms peak (data not shown). The second data set analyzed was an auditory evoked field (AEF) paradigm. In this paradigm the subject is presented tones binaurally for a total of 120 trials. There are two typical peaks seen after the presentation of an auditory stimulus, one at 50 ms and one at 100 ms, called the M50 and M100 respectively. The auditory processing of tones is bilateral at early auditory areas and the activations are correlated. The algorithm was able to localize activity in both primary auditory cortices and the time courses for these two activations reveal the M50 and M100. Figure 3 (middle) and (right) displays these results. The analysis of simple auditory paradigms is problematic because many source localization algorithms, such as the MVAB, do not handle the bilateral correlated sources well. We also ran MVAB on the AEF data and it localized activity to the center of the head between the two auditory cortices (data not shown).
[Figure 3 graphics: recovered time course plotted as normalized intensity (0-1000) vs. time (ms).]
Figure 3: Real-world example. Left: Somatosensory reconstruction. Middle: Bilateral auditory reconstruction. Right: Recovered timecourse from left auditory cortex (right auditory cortex, not shown, is similar).

6 Discussion

This paper derives a novel empirical Bayesian algorithm for MEG source reconstruction that readily handles multiple correlated sources with unknown orientations, a situation that commonly arises even with simple imaging tasks. Based on a principled cost function and fast, convergent update
rules, this procedure displays significant theoretical and empirical advantages over many existing
methods. We have restricted most of our exposition and analyses to MEG; however, preliminary
work with EEG is also promising. For example, on a real-world passive visual task where subjects
viewed flashing foreground/background textured images, our method correctly localizes activity to
the lateral occipital cortex while two state-of-the-art beamformers fail. This remains an active area
of research.
References

[1] J.O. Berger, Statistical Decision Theory and Bayesian Analysis, Springer-Verlag, New York, 2nd edition, 1985.

[2] D.L. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization," Proc. National Academy of Sciences, vol. 100, no. 5, pp. 2197-2202, March 2003.

[3] K. Friston, L. Harrison, J. Daunizeau, S. Kiebel, C. Phillips, N. Trujillo-Barreto, R. Henson, G. Flandin, and J. Mattout, "Multiple sparse priors for the MEG/EEG inverse problem," NeuroImage, 2008 (in press).

[4] S.S. Nagarajan, H.T. Attias, K.E. Hild II, and K. Sekihara, "A probabilistic algorithm for robust interference suppression in bioelectromagnetic sensor data," Stat. Med., vol. 26, no. 21, pp. 3886-3910, Sept. 2007.

[5] R.D. Pascual-Marqui, "Standardized low resolution brain electromagnetic tomography (sLORETA): Technical details," Methods and Findings in Experimental and Clinical Pharmacology, vol. 24, no. Suppl D, pp. 5-12, 2002.

[6] M. Sahani and S.S. Nagarajan, "Reconstructing MEG sources with unknown correlations," Advances in Neural Information Processing Systems 16, 2004.

[7] J. Sarvas, "Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem," Phys. Med. Biol., vol. 32, pp. 11-22, 1987.

[8] M. Sato, T. Yoshioka, S. Kajihara, K. Toyama, N. Goda, K. Doya, and M. Kawato, "Hierarchical Bayesian estimation for MEG inverse problem," NeuroImage, vol. 23, pp. 806-826, 2004.

[9] K. Sekihara, M. Sahani, and S.S. Nagarajan, "Localization bias and spatial resolution of adaptive and non-adaptive spatial filters for MEG source reconstruction," NeuroImage, vol. 25, pp. 1056-1067, 2005.

[10] K. Uutela, M. Hämäläinen, and E. Somersalo, "Visualization of magnetoencephalographic data using minimum current estimates," NeuroImage, vol. 10, pp. 173-180, 1999.

[11] D.P. Wipf, J.P. Owen, H.T. Attias, K. Sekihara, and S.S. Nagarajan, "Robust Bayesian Estimation of the Location, Orientation, and Timecourse of Multiple Correlated Neural Sources using MEG," submitted, 2009.

[12] D.P. Wipf, R.R. Ramírez, J.A. Palmer, S. Makeig, and B.D. Rao, "Analysis of empirical Bayesian methods for neuroelectromagnetic source localization," Advances in Neural Information Processing Systems 19, 2007.

[13] J.M. Zumer, H.T. Attias, K. Sekihara, and S.S. Nagarajan, "A probabilistic algorithm for interference suppression and source reconstruction from MEG/EEG data," Advances in Neural Information Processing Systems 19, 2007.
EMPATH: Face, Emotion, and Gender Recognition Using Holons
Munro & Zipser (1987) showed that a back propagation network could be used for image compression. The network is trained to simply reproduce its input, and so can be seen as a non-linear version of Kohonen's (1977) auto-associator. However it must pass the information through a narrow channel of hidden units, so it must extract regularities from the input during learning. Empirical analysis of the trained network showed that the hidden units span the principal subspace of the image vectors, with some noise on the components due to network nonlinearity (Cottrell & Munro, 1988).
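The claim that such a network spans the principal subspace can be checked numerically. The sketch below is our illustration on synthetic low-rank data, not the original experiment: a tied-weight linear autoencoder trained by gradient descent ends up spanning (almost exactly) the top-k PCA subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 20, 4
basis = rng.standard_normal((d, k))                       # synthetic "image" structure
data = rng.standard_normal((n, k)) @ basis.T + 0.05 * rng.standard_normal((n, d))
data -= data.mean(axis=0)

W = 0.1 * rng.standard_normal((d, k))                     # tied encoder/decoder weights
lr = 1e-3
for _ in range(2000):
    R = data @ W @ W.T                                    # encode then decode
    E = R - data
    W -= lr * 2.0 * (data.T @ E @ W + E.T @ data @ W) / n # gradient of ||X W W^T - X||^2

# compare span(W) to the top-k principal subspace via principal angles
U = np.linalg.svd(data, full_matrices=False)[2][:k].T     # d x k PCA basis
Qw = np.linalg.qr(W)[0]
cosines = np.linalg.svd(U.T @ Qw)[1]
print(np.round(cosines, 3))                               # values near 1 => same subspace
```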
Although the network uses error-correction learning, no teacher other than the input is provided, so the learning can be regarded as unsupervised. We suggested that this technique could be used for automatic feature extraction in a pattern recognition system. This is the approach taken here.
The system is shown in Figure 1. The image compression network extracts the features, and the hidden unit representation so developed is given as input to one and two layer networks which yield identity, gender, and emotion as output. In previous work we showed that the features developed by the model are holistic rather than local features, that they can combine to form faces that the model has never been shown, and that they form a redintegrative memory, able to complete noisy or partial inputs (Kohonen et al., 1977). We have dubbed them holons.
Garrison W. Cottrell
Computer Science and Engineering Dept.
Institute for Neural Computation
University of California, San Diego
La Jolla, CA 92093
Abstract
The dimensionality of a set of 160 face images of 10 male and 10 female subjects is reduced from 4096 to 40 dimensions via an auto-associator network. The extracted features do not correspond to the features used in previous face recognition systems (Kanade, 1973), e.g., distances between facial elements. Rather, they are whole-face features we call holons. The holons are given to one and two layer back propagation networks that are trained to classify the faces for identity, feigned emotional state, and gender. The extracted holons provide a sufficient basis for the gender discriminations, 99% of the identity discriminations, and several of the emotion discriminations among the training set. Network and human judgements of the emotions are compared. Networks tend to confuse more distant emotions than humans do.
The images which comprised the input to the network were selected from full face pictures of 10 females and 10 males. All subjects were introductory psychology students at the University of California, San Diego who received partial course credit for participating. Following the procedure outlined by Galton (1878), images were aligned along the axes of the eyes and mouth. These images were captured by a frame grabber, and reduced to 64x64 pixels by averaging. To prevent the use of first order statistics for discrimination, the images were normalized to have equal brightness and variance. The pixel values were then scaled to the range [0, .8]. Part of the training set and its reconstruction by the compression network are shown in Figure 2.
~I'; ~lIItclenl~oder are shown in Figure 2.
faceness
O~?? ?. ?~
1 Introduction and motivation
d ribe further research on the use of
.
:og:on first described in (Cottrell ~ Flemmg.
There, we demonstrated that an U~SUpe~lSed autoerIC~[llnl~\
features from faces sufficient for Idenuty
.
show that a network so trained can also recognIZe
[Figure 1 diagram: a face information extractor feeding recognition networks; outputs include faceness, gender, emotion, and identity, and the inputs are the hidden-unit activations from the auto-associator.]
Figure 1: The model. (a) An image compression network is trained to compress 4096 inputs into 40 hidden units. (b) The hidden unit activations are used as inputs to various recognition networks.
Russell (1980) has shown that subjects' judgements of adjectives describing human emotions can be represented in a two-dimensional "emotion space" (Figure 3). The horizontal dimension can be characterized as pleasure/displeasure; the vertical dimension as high arousal/low arousal. Russell and his colleagues have shown using multidimensional scaling techniques (Russell, 1980, 1983; Russell & Bullock, 1986) that adjectives describing human emotions fall on a circle within this space. We chose adjectives from this circle to be the emotions that we asked our subjects to feign. The adjectives used were those in the numbered circles in Figure 3. If the subject did not respond well to one of the adjectives, others from the circled region were given as encouragement toward the proper facial expression. We labeled these classes with one adjective from each region: astonished, happy, pleased, relaxed, sleepy, bored, miserable and angry. The adjectives were presented in randomized order to offset possible carry over effects. We found that subjects were enthusiastically expressive with certain of the adjectives, such as astonished and delighted. However, despite claims of negative feelings when cued with adjectives such as miserable, bored and sleepy, the subjects did not express these negative emotions very clearly.
We used a learning rate of .25 at the output layer during the first epoch in order to quickly learn the bias, or "palette", then a rate of .1 was used for the remaining epochs, where an epoch corresponds to the presentation of all 160 images. The hidden layer used a constant learning rate of .0001. The initial weight span was .1 (+/- .05), with no momentum or weight decay. The average squared error per unit at the end of training was 17. This corresponds to about 12 gray levels per pixel. Sample reconstructions of trained images are shown in Figure 2.
The feature vectors produced by the hidden units of the compression network are then given as input to a single layer network that has a localist unit for every name and a unit for gender. A two-layer network with 20 hidden units is used for identifying which of the adjectives that were given to the subjects pertains to each image. The networks are trained to produce .5 for the wrong answer, and 1 for the correct answer. The identity network is trained for 1000 epochs, which reduces the total sum squared error to [illegible]. To see how performance changed with further training, we trained this network for 2000 epochs. 9 other networks were trained using the features from the compression network for 1000 epochs from different initial random weights, for comparison to human subjects on the same task.
3 Procedure
The whole image is input at once to our network, so the input layer is 64x64, feeding 40 hidden units and a 64x64 output layer, with a sigmoidal activation function with range [-1,1]. Due to the extreme difference in fan-in at the hidden and output layers (4096 vs. 40), differential learning rates were used at the two layers. A single learning rate led to most of the hidden units becoming pinned at full off or full on. The criterion for correctness was that the output unit with the maximum activation must correspond to the correct answer. The network learned to discriminate 99% of the faces for identity. One image of one woman was taken for another. Sex discrimination was perfect. It was found that performance was better on these tasks without a hidden layer. The emotion classification network performed better with a hidden layer.
[Figure 3 graphics: multidimensional scaling solution for 28 affect words.]
Figure 3: The two-dimensional emotion space extracted via multi-dimensional scaling of similarity ratings. Data from Russell (1980).

Figure 2: Three subjects and their reproductions by the compression network. Each column corresponds to one of 8 different feigned emotional expressions.
Table 1: Percentage hits on each emotion (generalization in parentheses). [Row labels recoverable from the scan: delighted, pleased, relaxed, sleepy, bored, miserable, ...; the numeric entries were lost.]
However, the observation during data acquisition that the negative emotions are poorly portrayed was confirmed by the network's early performance. Initially, the network was much better at detecting positive states than negative ones. Later training improved some categories at the expense of others, suggesting the network did not have enough capacity to perform all the discriminations. Generalization tests were performed on a small set of 5 subjects (40 faces in all), with the results shown in parentheses in Table 1. Generalization gets worse with more training, suggesting the network is becoming overtrained. The network is best at generalization on the positive emotions. This also suggests that the negative emotions are not easily discriminated in our data set. The generalization results, while not spectacular, should be viewed in the light of the fact that the training set only contains 20 subjects, and it should be noted that the compression network was not trained on the images in this test set.
The holons are similar to the "eigenfaces" found by Turk & Pentland (submitted) in their principal components analysis of faces. Such a representation, if localized by a procedure such as Sanger's (1990), or as we have found develops in previous work (Cottrell & Fleming, 1990), could provide a computational basis for the face cell recordings found in the STS of monkey cortex, without resorting to "grandmother" cells for each face.
4 Comparison with human subjects
In order to compare our network performance to that of human subjects on the same task, we tested 10 human subjects on the same discriminations the networks were required to make. The subjects were presented with one quarter of the training set (40 images of 5 subjects) multiple times in randomized order in each block (320 presentations total). On each presentation of an image, the subject was asked to make a Yes/No discrimination as to whether the image is a good example of one of the adjective class names.

Statistical limitations (small sample size, large heterogeneity of variance) prevented a reliable test of the model vs. the subjects. However, it is informative to compare the confusion matrices for the two pools of subjects (the other 9 network simulations are not shown here). All "yes" responses to each kind of face with each adjective are summed across subjects for the humans and the networks. The networks' responses were converted to "yes/no" responses by thresholding the outputs of the networks for each image, giving 8 yes/no responses. The threshold was chosen to produce approximately the same overall number of "yes" responses for the 10 networks as the 10 humans. The results are shown in Table 2. The rows correspond to the portrayals of the emotions, and the columns are the adjectives presented with them. So, for example, across 10 subjects there were 45 instances of calling a "pleased" face a good example of "delighted" (out of [number illegible]).
5 Internal representation
We investigated the representation formed by the compression network. The receptive fields of the hidden units in this network appear to be white noise. In order to see the actual features used, we recorded the hidden unit activations in response to all 160 images. We formed the covariance matrix of the hidden unit activations and extracted the principal components. Note that this operation extracts orthogonal components from the distributed representation used. The resulting components were decompressed for viewing purposes. The results are shown in Figure 4.
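A sketch of this holon-extraction analysis, using random stand-ins for the hidden activations and decoder weights:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, d = 160, 40, 4096
H = rng.standard_normal((n, k))            # hidden activations for all 160 images
W_dec = rng.standard_normal((k, d))        # decoder weights of the compression net

Hc = H - H.mean(axis=0)
C = Hc.T @ Hc / (n - 1)                    # covariance of hidden unit responses
w, V = np.linalg.eigh(C)
components = V[:, ::-1]                    # principal components, descending variance
holon_images = components.T @ W_dec       # "decompress" each component for display
print(holon_images.shape)                  # (40, 4096): one holon image per component
```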
It is clear from the tables that there is a lot of regularity in the human subjects data that is not totally captured by the model. The first three emotions/adjectives form a cluster, as do the last four. Since these adjectives are listed in the order of a tour around the circle of Russell's circumplex model, the confusability of nearby emotions suggests that the clustering of descriptive adjectives is matched by a perceptual clustering of the expressions produced by a subject in response to those adjectives. However, rather than a diagonal band of confusability, as would be predicted by the circumplex, the positive/negative dimension appears to separate into two clusters. For example, anger and astonishment are separated more than would be expected from Russell's circumplex (there is no "wrap-around" in this matrix). In between these two clusters, the "relaxed" adjective is seen by the subjects as compatible with nearly every facial expression to some degree, but the "relaxed" faces are not compatible with the first three emotional categories.

The networks, while displaying some of the clustering shown in the human data, have stronger entries along the diagonals, due to having been trained on this data, and more confusions in regions (upper right and lower left) where the human subjects have few entries, such as "angry" labels on "delighted" and "pleased" faces. This may be due to forcing the networks to make as many responses as the humans. We found that a minor threshold change leads to many more responses, suggesting that we are artificially restricting the responses extracted from the network.
Figure 4: Sixteen holons derived by PCA from hidden unit responses.
Table 2: Confusion matrices for human and network subjects. [Row labels recoverable from the scan: astonished, delighted, pleased, relaxed, sleepy, bored, ...; the numeric entries of the two matrices were scrambled in extraction and are not reliably reconstructable.]
7 Holons

This work demonstrates that, at least for our data set, dimensionality reduction is a useful preprocessing step that can maintain enough information for the recognition tasks. We term the representational units used by the compression network "holons". This is more than just another name for a distributed representation. By this we simply mean that a representational element is a holon if its receptive field subtends the whole object being represented. Ideally we want to require that the information in a set of holons be maximally distributed: i.e., the average unit entropy is maximized. The latter requirement eliminates grandmother cells, insures that the representation be noise resistant, and distributes the processing load evenly. A weak point of our definition is the difficulty of defining precisely the notion of a "whole object".

This definition applies to many distributed representational schemes, but does not apply to articulated ones such as the Wickelfeatures used by Rumelhart and McClelland (1986) in their past tense model, as these only represent portions of the verb. On the other hand, we would not have holons for a "room", simply because a whole room can not help but extend beyond our sensory surface at once. Given this meaning for holons, the units of area 17 are not holons, but the units in Superior Temporal Sulcus may be. The main motivation for this definition is to give an alternative notion to the "grandmother cell" one for face cells in STS (Desimone et al., 1984).

8 Conclusions

We have shown that a network model that extracts features from its environment in an unsupervised manner can achieve near perfect recognition rates on identity discrimination and sex discrimination, even though the features were not extracted for that purpose. Where categories become "fuzzier", as in emotional states, the network's abilities are also limited. In particular, generalization to new faces is limited. In a preliminary study of human perception of these faces, we found support for the idea that when subjects are asked to produce expressions based on "near" adjectives in the emotion space, they produce "near" expressions in perceptual space. These expressions form positive/negative clusters much more than the circumplex model would suggest. However, this could be a fault of the subjects' abilities to portray the negative emotions rather than a fault of the circumplex model. Finally, we compared the networks' performance to that of humans. We found that the networks (when constrained to make as many responses as humans), while generally following the pattern of the human data, make several category confusions that humans do not.

References

Cottrell, G. & Fleming, M. (1990). Face recognition using unsupervised feature extraction. In Proceedings of the International Neural Network Conference, Paris.

Cottrell, G.W., Munro, P. & Zipser, D. (1987). Learning internal representations of gray scale images: An example of extensional programming. In Proc. Ninth Annual Cognitive Science Conference, Seattle, WA.

Cottrell, G.W. and Munro, P. (1988). Principal components analysis of images via back propagation. In Proc. Soc. of Photo-Optical Instr. Eng., Cambridge, MA.

Desimone, R., Albright, T., Gross, C., and Bruce, C. (1984). Stimulus-selective properties of inferior temporal neurons in the Macaque. J. Neuroscience, 4, 2051-2062.

Fleming, M. & Cottrell, G. (1990). A neural network model of face recognition. In Proceedings of the Int. Joint Conf. on Neural Networks, San Diego, CA.

Galton, F. (1878). Composite Portraits. Nature, 23, 97-100.

Kanade, Takeo (1973). Picture processing system by computer complex and recognition of human faces. Unpublished Ph.D. Thesis, Dept. of Info. Science, Kyoto University.

Kohonen, T., Lehtio, P., Oja, E., Kortekangas, A., & Makisara, K. (1977). Demonstration of pattern processing properties of the optimal associative mappings. In Proc. Intl. Conf. on Cybernetics and Society, Wash., D.C.

Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161-1178.

Russell, J. A. (1983). Pancultural aspects of the human conceptual organization of emotions. Journal of Personality and Social Psychology, 45, 1281-1288.

Russell, J. A. & Bullock, M. (1986). On the dimensions preschoolers use to interpret facial expressions of emotion. Developmental Psychology, 22, 97-102.

Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323, 533-536.

Rumelhart, D. & McClelland, J. (1986). On learning the past tenses of English verbs. In J.L. McClelland & D.E. Rumelhart (Eds.), Parallel Distributed Processing, Vol. 2. Cambridge, MA: MIT Press.

Sanger, T. (1989). Optimal unsupervised learning in a single-layer linear feedforward network. Neural Networks, 2, pp. 459-473.

Turk, M. & Pentland, A. (1990). Eigenfaces for recognition. (Submitted for publication.)
Cell Assemblies in Large Sparse Inhibitory Networks
of Biologically Realistic Spiking Neurons
Jeff Wickens
OIST, Uruma, Okinawa, Japan.
[email protected]
Adam Ponzi
OIST, Uruma, Okinawa, Japan.
[email protected]
Abstract
Cell assemblies exhibiting episodes of recurrent coherent activity have been
observed in several brain regions including the striatum[1] and hippocampus
CA3[2]. Here we address the question of how coherent dynamically switching
assemblies appear in large networks of biologically realistic spiking neurons interacting deterministically. We show by numerical simulations of large asymmetric inhibitory networks with fixed external excitatory drive that if the network has
intermediate to sparse connectivity, the individual cells are in the vicinity of a bifurcation between a quiescent and firing state and the network inhibition varies
slowly on the spiking timescale, then cells form assemblies whose members show
strong positive correlation, while members of different assemblies show strong
negative correlation. We show that cells and assemblies switch between firing and
quiescent states with time durations consistent with a power-law. Our results are in
good qualitative agreement with the experimental studies. The deterministic dynamical behaviour is related to winner-less competition[3], shown in small closed
loop inhibitory networks with heteroclinic cycles connecting saddle-points.
1 Introduction
Cell assemblies exhibiting episodes of recurrent coherent activity have been observed in several
brain regions including the striatum[1] and hippocampus CA3[2], but how such correlated activity
emerges in neural microcircuits is not well understood. Here we address the question of how coherent assemblies can emerge in large inhibitory neural networks and what this implies for the structure
and function of one such network, the striatum.
Carrillo-Reid et al.[1] performed calcium imaging of striatal neuronal populations and revealed sporadic and asynchronous activity. They found that burst firing neurons were widespread within the
field of observation and that sets of neurons exhibited episodes of recurrent and synchronized bursting. Furthermore dimensionality reduction of network dynamics revealed functional states defined
by cell assemblies that alternated their activity and displayed spatiotemporal pattern generation.
Recurrent synchronous activity traveled from one cell assembly to the other often returning to the
original assembly; suggesting a robust structure. Assemblies were visited non-randomly in sequence
and not all state transitions were allowed. Moreover the authors showed that while each cell assembly comprised different cells, a small set of neurons was shared by different assemblies. Although
the striatum is an inhibitory network composed of GABAergic projection neurons, similar types of
cell assemblies have also been observed in excitatory networks such as the hippocampus. In a related and similar study Sasaki et al.[2] analysed spontaneous CA3 network activity in hippocampal
slice cultures using principal component analysis. They found discrete heterogeneous network states
defined by active cell ensembles which were stable against external perturbations through synaptic
activity. Networks tended to remain in a single state for tens of seconds and then suddenly jump
to a new state. Interestingly the authors tried to model the temporal profile of state transitions by a
hidden Markov model, but found that the transitions could not be simulated in this way. The authors
suggested that state dynamics is non-random and governed by local attractor-like dynamics.
We here address the important question of how such assemblies can appear deterministically in biologically realistic cell networks. We focus our modeling on the inhibitory network of the striatum,
however similar models can be proposed for networks such as CA3 if the cell assembly activity is
controlled by the inhibitory CA3 interneurons. Network synchronization dynamics[4, 5] of random
sparse inhibitory networks of CA3 interneurons has been addressed by Wang and Buzsaki[5]. They
determined specific conditions for population synchronization including that the ratio between the
synaptic decay time constant and the oscillation period be sufficiently large and that a critical minimal average number of synaptic contacts per cell, which was not sensitive to the network size, was
required. Here we extend this work focusing on the formation of burst firing cell assemblies.
The striatum is composed of GABAergic projection neurons with fairly sparse asymmetric inhibitory collaterals which seem quite randomly structured and that receive an excitatory cortical
projection[6]. Each striatal medium spiny neuron (MSN) is inhibited by about 500 other MSNs
in the vicinity via these inhibitory collaterals and similarly each MSN inhibits about 500 MSNs.
However only about 10%-30% of MSNs are actually excited by cortex at any particular time. This
implies that each MSN is actively inhibited by about 50-150 cortically excited cells in general. It is
important to understand why the striatum has this particular structure, which is incompatible with its
putative winner-take-all role. We show by numerical computer simulation that very general random
networks of biologically realistic neurons coupled with inhibitory Rall-type synapses[7] and individually driven by excitatory input can show switching assembly dynamics. We commonly observed
a switching bursting regime in networks with sparse to intermediate connectivity when the level of
network inhibition approximately balanced the external excitation so that the individual cells were
near a bifurcation point. In our simulations, cells and assemblies slowly and spontaneously switch
between a depolarized firing state and a more hyperpolarized quiescent state. The proportion of
switching cells varies with the network connectivity, peaking at low connection probability for fixed
total inhibition. The sorted cross correlation matrix of the firing rates time series for switching cells
shows a fascinating multiscale clustered structure of cell assemblies similar to observations in[1, 2].
The origin of the deterministic switching dynamics in our model is related to the principle of winnerless competition (WLC) which has previously been observed by Rabinovich and coworkers[3] in
small inhibitory networks with closed loops based on heteroclinic cycles connecting saddle points.
Rabinovich and coworkers[3] demonstrate that such networks can generate stimulus specific patterns
by switching among small and dynamically changing neural ensembles with application to insect olfactory coding[8, 9], sequential decision making[10] and central pattern generation[11]. Networks
produce this switching mode of dynamical activity when lateral inhibitory connections are strongly
non-symmetric. WLC can represent information dynamically and is reproducible, robust against intrinsic noise and sensitive to changes in the sensory input. A closely related dynamical phenomenon
is referred to as chaotic itinerancy [12]. This is a state that switches between fully developed chaos
and ordered behavior. The orbit remains in the vicinity of lower dimensional quasi-stable nearly periodic "attractor ruins" for some time before eventually exiting to a state of high dimensional chaos.
This high-dimensional state is also quasi-stable, and after chaotic wandering the orbit is again attracted to one of the attractor ruins. Our study suggests attractor switching may be ubiquitous in
biologically realistic large sparse random inhibitory networks.
2 Model
The network is composed of biologically realistic model neurons in the vicinity of a bifurcation
from a stable fixed point to spiking limit cycle dynamical behaviour. To describe the cells we use
the IN a,p + Ik model described in Izhikevich[13] although any model near such a bifurcation would
be appropriate. The IN a,p + Ik cell model is two-dimensional and described by,
dVi
C
= Ii (t) ? gL (Vi ? EL ) ? gN a m? (Vi )(Vi ? EN a ) ? gk ni (Vi ? Ek )
(1)
dt
dni
= (n? ? ni )/?n
(2)
dt
having leak current IL , persistent N a+ current IN a,p with instantaneous activation kinetic and a
relatively slower persistent K + current IK . Vi (t) is the membrane potential of the i ? th cell, C
2
the membrane capacitance, EL,N a,k are the channel reversal potentials and gL,N a,k are the maximal
conductances. ni (t) is K + channel activation variable of the i ? th cell. The steady state activation
x
x
curves m? and n? are both described by, x? (V ) = 1/(1 + exp{(V?
? V )/k?
}) where x denotes
x
x
+
m or n and V? and k? are fixed parameters. ?n is the fixed timescale of the K activation variable.
The term Ii (t) is the input current to the i ? th cell.
The parameters are chosen so that the cell is the vicinity of a saddle-node on invariant circle bifurcation. As the current Ii (t) in Eq.1 increases through the bifurcation point the stable node fixed point
and the unstable saddle fixed point annihilate each other and a limit cycle having zero frequency is
formed[13]. Increasing current further increases the frequency of the limit cycle. The input current
Ii (t) in Eq.1 is composed of both excitatory and inhibitory parts and given by,
X
Ii (t) = Iic +
?ksyn,ij gj (t)(Vi (t) ? Vsyn ).
(3)
j
The excitatory part is represented by Iic
and models the effect of the cortico-striatal synapses. It has a
fixed magnitude for the duration of a simulation, but varying across cells. In the simulations reported
here the Iic are quenched random variables drawn uniformly randomly from the interval [Ibif , Ibif +
1] where Ibif = 4.51 is the current at the saddle-node bifurcation point. These values of excitatory
input current mean that all cells would be on limit cycles and firing with low rates if the network
inhibition were not present. In fact the inhibitory network may cause some cells to become quiescent
by reducing the total input current to below the bifurcation point. Since the inhibitory current part
is provided by the GABAergic collaterals of the striatal network it is dynamically variable. These
synapses are described by Rall-type synapses[7] in Eq.3 where the current into postsynaptic neuron
i is summed over all inhibitory presynaptic neurons j and Vsyn and ksyn,ij are channel parameters.
$g_j(t)$ is the quantity of postsynaptically bound neurotransmitter, given by
$$\tau_g \frac{dg_j}{dt} = \Theta(V_j(t) - V_{th}) - g_j(t) \qquad (4)$$
for the $j$-th presynaptic cell. Here $V_{th}$ is a threshold and $\Theta(x)$ is the Heaviside function. $g_j$ is essentially a low-pass filter of presynaptic firing. The timescale $\tau_g$ should be set relatively large so that the postsynaptic conductance follows the exponentially decaying time average of many preceding presynaptic high-frequency spikes.
The network structure is described by the parameters $k_{syn,ij} = (k_{syn}/p)\,\eta_{ij} X_{ij}$, where $\eta_{ij}$ is another uniform quenched random variable on $[0.5, 1.5]$, independent in $i$ and $j$. $X_{ij} = 1$ if cells $i$ and $j$ are connected and zero otherwise. In the simulations reported here we use random networks where cells $i$ and $j$ are connected with probability $p$, and there are no self-connections, $X_{ii} = 0$. $k_{syn}$ is a parameter which is rescaled by the connection probability $p$ so that the average total inhibition on each cell is constant, independent of $p$. All simulations were carried out with fourth-order Runge-Kutta integration.
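As a concrete illustration of the model, the following sketch integrates Eqs. (1)-(4) with fourth-order Runge-Kutta. Only $I_{bif} = 4.51$ and the $[0.5, 1.5]$ connectivity jitter are taken from the text; all other constants are assumed values in the style of the Izhikevich [13] cell model.

```python
import numpy as np

# Minimal sketch of the I_{Na,p}+I_K inhibitory network of Eqs. (1)-(4).
# Cell and synapse constants below are illustrative assumptions.
rng = np.random.default_rng(0)
N, p = 100, 0.2                      # cells, connection probability (~20 links/cell)
C, gL, gNa, gK = 1.0, 8.0, 20.0, 10.0
EL, ENa, EK = -80.0, 60.0, -90.0
tau_n, tau_g = 1.0, 20.0             # slow synapse: tau_g > 10 (assumed value)
Vth, Vsyn, ksyn = -20.0, -80.0, 1.0  # assumed synaptic parameters
I_bif = 4.51
Ic = rng.uniform(I_bif, I_bif + 1.0, N)           # quenched cortical drive
X = (rng.random((N, N)) < p).astype(float)
np.fill_diagonal(X, 0.0)                          # no self-connections
K = (ksyn / p) * rng.uniform(0.5, 1.5, (N, N)) * X

def x_inf(V, V_half, k):
    return 1.0 / (1.0 + np.exp((V_half - V) / k))

def deriv(state):
    V, n, g = state
    Isyn = (K @ g) * (V - Vsyn)                   # Rall-type inhibition, Eq. (3)
    dV = (Ic - Isyn - gL*(V - EL)
          - gNa*x_inf(V, -20.0, 15.0)*(V - ENa)
          - gK*n*(V - EK)) / C                    # Eq. (1)
    dn = (x_inf(V, -25.0, 5.0) - n) / tau_n       # Eq. (2)
    dg = (np.heaviside(V - Vth, 0.0) - g) / tau_g # Eq. (4)
    return np.array([dV, dn, dg])

def rk4_step(state, h):
    k1 = deriv(state)
    k2 = deriv(state + 0.5*h*k1)
    k3 = deriv(state + 0.5*h*k2)
    k4 = deriv(state + h*k3)
    return state + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

state = np.array([np.full(N, -65.0), np.zeros(N), np.zeros(N)])
for _ in range(20000):                            # 0.1 ms steps, 2 s of activity
    state = rk4_step(state, 0.1)
```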
3 Results
Figure 1(a) shows a time-series segment of membrane potentials $V_i(t)$ for some randomly selected cells from an $N = 100$ cell network. The switching between firing and quiescent states can clearly be seen. Cells fire with different frequencies and become quiescent for variable periods before starting to fire again, apparently at random. However, the model has no stochastic variables, and this switching is therefore caused by deterministic chaos. As explained above, the firing rate is determined by the proximity of the limit cycle to the saddle-node bifurcation and can therefore be arbitrarily low for this type of bifurcation. Since we have set the unit parameters so that all units are near the bifurcation point, even weak network inhibition is able to cause the cells to become quiescent at times. The parameter settings are biologically realistic [13], and MSN cells are known to show irregular quiescent and firing states in vivo [14].
The complex bursting structure is easier to see from raster plots. A segment from an $N = 100$ cell time series is shown in Fig. 1(b). This figure clearly shows attractor switching, or chaotic itinerancy [12], where a quasi-stable nearly periodic state (an "attractor ruin") is visited from higher-dimensional chaos. To make this plot the cells have been ordered by the k-means algorithm with five clusters (see below). The cells are coloured according to the cluster assigned to them by the algorithm. During the periodic window most cells are silent; however, some cells fire continuously
Figure 1: (a) Membrane potential Vi (t) time series segment for a few cells from a N = 100 cell
network simulation with 20 connections per cell. Each cell time series is a different colour. (b)
Spike raster plot from an N = 100 cell network simulation with 20 connections per cell. Each line
is a different cell and the 71 cells which fire at least one spike during the period shown are plotted.
Cells are ordered by k-means with five clusters and coloured according to their assigned clusters.
at fixed frequency and some cells fire in periodic bursts. In fact the cells which fire in bursts have been separated into two clusters, as can be seen in Fig. 1(b): the blue and green clusters. These two clusters fire periodic bursts in anti-phase. Cell assemblies can also be seen in the chaotic regions. The cells in the black cluster fire together in a burst around $t = 17500$, while the cells in the orange cluster fire a burst together around $t = 16000$. Fig. 2(a) shows another example of a spike raster plot from an $N = 100$ cell network simulation where again the cells have been ordered by the k-means algorithm with five clusters. Now cell assemblies, coloured blue, orange and red, can clearly be seen, which appear to switch in alternation. This switching is further interrupted from time to time by the green and black assemblies.
Due to the presence of attractor switching, where cell assemblies can burst in anti-phase, we expect the appearance of strongly positively and strongly negatively correlated cell pairs. Correlation matrices are constructed by dragging a moving window over a long spike time series and counting the spikes to construct the associated firing-rate time series. The correlation matrix of the rate time series is then sorted by the k-means method [2], which is equivalent to PCA. Each cell is assigned to one of a fixed number of clusters and the cell indices are reordered accordingly. Fig. 2(b) shows the cross-correlation matrix corresponding to the spike raster plot in Fig. 2(a), with cells ordered the same way. Within an assembly cells are positively correlated, while cells in different assemblies often show negative correlation.
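A sketch of this construction follows (the bin size and the use of scikit-learn's k-means are our assumptions; the 2000 ms window is the value used later in the text):

```python
import numpy as np
from sklearn.cluster import KMeans

# Correlation-matrix sorting as described above. `spikes` is an
# (N_cells, T) array of spike counts per time bin.
def sorted_correlation(spikes, bin_ms=1.0, window_ms=2000.0, n_clusters=5):
    win = int(window_ms / bin_ms)
    kernel = np.ones(win)
    # firing-rate time series: spike count in a moving window, per cell
    rates = np.array([np.convolve(s, kernel, mode="valid") for s in spikes])
    C = np.corrcoef(rates)                      # cross-correlation matrix
    labels = KMeans(n_clusters, n_init=10).fit_predict(rates)
    order = np.argsort(labels)                  # group cells by cluster
    return C[np.ix_(order, order)], order, labels
```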
Larger networks with appropriate connectivities also show complex identity-temporal patterns. A patchwork of switching cell-assembly clusters can be seen in the spike raster plot and corresponding cross-correlation matrix shown in Figs. 2(c) and (d), respectively, for an $N = 500$ cell system where the cells have been ordered by the k-means algorithm, now with 30 clusters. A particular assembly can burst periodically for a spell before becoming quiescent for long spells. Other cell assemblies burst very occasionally, for no apparent reason. Notice from the cross-correlation matrices in Figs. 2(b) and (d) that although some cell assemblies are positively correlated with each other, they have different relationships to other cell assemblies, and therefore cannot be combined into a single larger assembly.
Fig. 2(c) reveals many cells switching between a firing state and a quiescent state. What is the structure of this switching state? To investigate this we analyse inter-spike interval (ISI) distributions. Shown in Fig. 3(a) are three ISI distributions for three 500-cell network simulations in the sparse to intermediate regime with 30 connections per cell. The distributions are very broad and far from the exponential distribution one would expect from a Poisson process. They are consistent with scale-free power-law behaviour over three orders of magnitude, but are exponentially cut off at large ISIs due to finite-size effects. It is this distribution which produces the appearance of the complex identity-temporal patterns shown in the 500-cell time series in Fig. 2(c), with long ISIs interspersed with bursts of short ISIs. Power-law distributions are characteristic of systems showing chaotic
Figure 2: (a) Spike raster plot from all 69 cells in a 100 cell network with 20 connections per cell
which fire at least one spike. The cells are ordered by k-means with five clusters and coloured
according to their assigned cluster. (b) Cross-correlation matrix corresponding to (a). The cells are
ordered by the k-means algorithm the same way as (a). Red colour means positive correlation, blue
means negative correlation, colour intensity matches strength. White is weak or no correlation. (c)
Spike raster plot from an N = 500 cell sparse network with 6 connections per cell. The 379 cells
which fire at least one spike during the period shown are plotted. The cells are ordered by k-means
with 30 clusters. (d) Cross-correlation matrix corresponding to (c) with same conventions as (b).
attractor switching and have been studied in connection with deterministic intermittency [15]. Intermittency consists of laminar phases, where the system orbits appear relatively regular, and burst phases, where the motion is quite violent and irregular. Interestingly, a power-law distribution of state sojourn times was also observed in the hippocampal study of Sasaki et al. [2] described above. Plenz and Thiagarajan [16] discuss cortical cell assemblies in the framework of scale-free avalanches, which are associated with intermittency [17].
The broad power-law distribution produces the temporal aspect of the complex identity-temporal patterns observed in the time series in Fig. 2(c); the fact that the cells show strong cross-correlation produces the spatial-structure aspect. Above we have shown how this structure can be revealed using the k-means sorting algorithm. By combining the spikes of the cells in a cluster into a "cluster spike train", preserving each spike's timing, we can study the ISIs of cluster spike time series. However, the k-means algorithm produces a different clustering depending on the initial choice of centroids. To control for this we perform the clustering many (here 200) times and combine the ISI time series so generated into a single distribution. The black circles in Fig. 3(a) show the cluster ISI distribution after cells have been associated to clusters with the k-means algorithm with 10 clusters. The cluster ISI distribution, like the individual-cell ISI distribution, also shows a power law over several orders of magnitude. This implies that clusters also burst in a multiple-scale way. The slope of the power law is steeper than the individual-cell result and the cut-off is lower, as would be expected when spike trains are combined. Nevertheless the distribution is still very broad. To demonstrate this we perform a bootstrap-type test where, rather than making each cluster spike train
Figure 3: (a) Green, brown, blue: three cell cumulative ISI distributions from 500-cell network simulations with 30 connections per cell, all cells combined. Log-log scale. The slope of the dashed line is -1.38. Black: ISI distribution for clusters formed by the k-means algorithm, corresponding to the green single-cell distribution. The slope of the solid line is -2.35. Red: ISI distribution for clusters formed from randomly chosen cells, corresponding to the green single-cell distribution. (b) Variation of connectivity for 500-cell networks. Inset shows low-connectivity detail. Each point is calculated from a different network simulation over the observation period t = 2000 to t = 12000 msec. Red: proportion of cells which fire at least one spike during the period. Blue: proportion of cells firing periodically. Black: average absolute cross-correlation $\langle |C_{ij}| \rangle$ between all cells in the network, calculated from rate time series constructed by counting spikes in a moving window of size 2000 msec. Green: coefficient of variation $\langle CV \rangle$ of the ISI distribution averaged across all cells in the network, rescaled by 1/3.
from the cells associated to the cluster, we perform the same k-means clustering to obtain the correct cluster sizes but then scramble the cell indices, associating the cells to the clusters randomly. Again we do this 200 times and combine all the results into one cluster ISI distribution. The red circles in Fig. 3(a) show this random-cluster ISI distribution. The distribution is much narrower than the distribution obtained from the non-randomized k-means clustering. This demonstrates further that the time series have a clustered structure which can be revealed by the k-means algorithm, and that the clusters produced have larger periods of quiescence between bursting than would be expected from randomly associating cells, even when the cells themselves have power-law distributed ISIs. This broadened distribution produced by the clustering reflects the complex identity-temporal structure of the ordered spike time series shown in figures such as Fig. 2(c).
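The following sketch reproduces the cluster-ISI computation and its randomized control (the 200 repeats and 10 clusters follow the text; everything else is an assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster-ISI bootstrap control. `spike_times` is a list of spike-time arrays,
# one per cell; `rates` is the (N_cells, T) rate array used for clustering.
def cluster_isis(spike_times, rates, n_clusters=10, repeats=200, shuffle=False):
    rng = np.random.default_rng()
    isis = []
    for _ in range(repeats):
        labels = KMeans(n_clusters, n_init=1).fit_predict(rates)
        if shuffle:                   # control: keep cluster sizes, scramble cells
            labels = rng.permutation(labels)
        for c in range(n_clusters):
            members = [spike_times[i] for i in np.flatnonzero(labels == c)]
            if not members:
                continue
            train = np.sort(np.concatenate(members))  # merged 'cluster spike train'
            isis.append(np.diff(train))
    return np.concatenate(isis) if isis else np.array([])

# shuffle=False gives the black curve in Fig. 3(a); shuffle=True the red control.
```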
The model has several parameters, in particular the connection probability $p$. How does the formation of switching assembly dynamics depend on the network connectivity? To study this we perform many numerical simulations while varying $p$. As described above, the synaptic efficacy is rescaled by the connection probability so that the total inhibition on each cell is fixed, and effects therefore arise purely from variations in connectivity.
Fig. 3(b) (red) shows the proportion of cells which fire at least one spike versus the average number of connections per cell, for 500-cell network simulations. This quantity shows a transition around 5 connections per cell to a state where almost all the network is burst firing, and then decays off to a plateau region at higher connectivity. Fig. 3(b) (blue) shows the proportion of cells firing periodically. This is zero above the transition. Below the transition a large proportion of cells are uninhibited and fire periodically due to the excitatory cortical drive, while another large proportion are not firing at all, inhibited by the periodically firing group. At high connectivities, however, most cells receive similar inhibition levels, which leaves a certain proportion firing. Fig. 3(b) (green) shows the coefficient of variation CV of the single-cell ISI distribution, averaged across all cells and rescaled by 1/3. CV is defined as the ISI standard deviation normalized by the mean ISI; it is unity for Poisson processes. Below the transition CV is very low due to the many periodically firing cells. At high connectivities it is also low, and inspection of spike time series shows all cells firing with fairly regular ISIs. In intermediate regions, however, this quantity can become very large, reflecting long periods of quiescence interrupted by high-frequency bursting, as also reflected in the single-cell ISI distributions in Fig. 3(a). Fig. 3(b) (black) shows the average absolute cross-correlation $\langle |C_{ij}| \rangle$, where $C_{ij}$ is the cross-correlation coefficient between the firing-rate time series of cells $i$ and $j$, and its absolute value is averaged across all cells. This quantity also shows the low-connectivity transition but peaks around 200 connections per cell, where many cells are substantially cross-correlated (both positively and negatively). This is in accordance with the study of Wang and Buzsaki [5]. Fig. 3(b) therefore displays an interesting regime between about 50 and 200 connections per cell where many cells are burst firing with long periods of quiescence but have substantial cross-correlation. It is in this regime that spike time series often show the complex identity-temporal patterns and switching cell assemblies exemplified in Fig. 2(c).
4 Discussion
We have shown that inhibitory networks of biologically realistic spiking neurons obeying deterministic dynamical equations, with sparse to intermediate connectivity, can show bursting dynamics, complex identity-temporal patterns, and form cell assemblies. The cells should be near a bifurcation point where even weak inhibition can cause them to become quiescent. The synapses should have a slower timescale, $\tau_g > 10$ in Eq. (4), which produces a low-pass filter of presynaptic spiking. This slow change in inhibition allows the bursting assembly dynamics, since presynaptic cells do not instantly inhibit postsynaptic cells; rather, inhibition builds up gradually, allowing the formation of assemblies which eventually become strong enough to quench the postsynaptic cell activity.
At low connectivities, sets of cells with sufficiently few and/or sufficiently weak connections between them will exist, and these cells will fire together as an assembly due to the cortical excitation, provided the rest of the network which inhibits them is sufficiently quiescent for a period. Such a set of weakly connected cells can be inhibited by another such set of weakly connected cells if each member of the first set is inhibited by a sufficient number of cells of the second set. When the second set ceases firing, the first set will start to fire. These assemblies can exist in asymmetric closed loops which slowly switch the active set. Multiple "frustrated" interlocking loops can exist where the slow switching of one loop interferes with the dynamical switching of another loop; only when inhibition on one member set is removed can the loop continue slow switching, producing a type of neural computation. Furthermore, any given cell can be a member of several such sets of weakly connected cells, as also described by Assisi and Bazhenov [18]. This can explain the findings of Carrillo-Reid et al. [1], who show some cells firing with only one assembly and other cells firing in multiple assemblies. These cross-coupled switching assemblies with partially shared members produce complex multiple-timescale dynamics and identity-temporal patterning for appropriate connectivities.
Switching assemblies are most likely to be observed in networks of sparse to intermediate connectivity. This is consistent with WLC-based attractor switching. Indeed, networks with non-symmetric inhibitory connections which form closed circuits display WLC dynamics [3], and these are likely to occur in networks with sparse to intermediate connectivity. The spike time series in Figs. 2(a) and (c) indicate that cell assemblies switch non-randomly in sequence, due to the deterministic attractor switching. This is in good agreement with the Carrillo-Reid et al. [1] study of striatal dynamics and also with the Sasaki et al. [2] study of CA3 cell assemblies. Our time series and cross-correlation matrices demonstrate that while most cells fire with only one particular assembly, some cells are shared between assemblies, as observed by Carrillo-Reid et al. [1]. We have shown that cells form assemblies of positively correlated cells and that assemblies are negatively correlated with each other, in accordance with the similarity-matrix results of Sasaki et al. [2].
Very interestingly, cell assemblies are predominantly found in a connectivity regime appropriate for the striatum [6], where each cell is likely to be connected to about 100 cortically excited cells, suggesting the striatum may have adapted to be in this regime. Studies of spontaneous firing in the striatum also show very variable firing patterns with long periods of quiescence [14], as seen in our simulations at this connectivity. Based on studies of random striatal connectivity [6] we have simulated a random network without real spatial dimension. In support of this assumption, Carrillo-Reid et al. [1] find that correlated activity is spatially distributed, noting that neurons firing synchronously could be hundreds of microns apart, intermingled with silent cells.
Although we leave this point for future work, the dynamics can also be affected by the details of the spiking. Detailed inspection of the spike raster plot in Fig. 1(b) confirms three cells firing with identical frequency. Since these cells are driven by different levels of cortical excitation, the synchronization can only result from an entrainment produced by the spiking. This is possible in cells with close firing rates, since the effect an inhibitory spike has on a post-synaptic cell depends on the post-synaptic membrane potential [13, 19]. In this way the spiking can affect cluster-formation dynamics and may prolong the lifetime of visits to quasi-stable periodic states. The coupling of assembly dynamics and spiking may be relevant for coding in the insect olfactory lobe, for example [9].
The striatum is the main input structure to the basal ganglia (BG). Correlated activity in cortico-basal ganglia circuits is important in the encoding of movement, associative learning, sequence learning and procedural memory. Aldridge and Berridge [21] demonstrate that the striatum implements action syntax in rats' grooming behaviour. The BG may contain central pattern generators (CPGs) that activate innate behavioural routines, procedural memories, and learned motor programs [20], and recurrent alternating bursting is characteristic of cell assemblies included in CPGs [20]. WLC has been applied to modeling CPGs [11]. Our modeling suggests that complex switching dynamics based in the sparse striatal inhibitory network may allow the generation of cell assemblies which interface sensory-driven cortical patterns to dynamical sequence generation. Further work is underway to demonstrate how these dynamics may be utilized in behavioural tasks recruiting the striatum.
References
[1] Carrillo-Reid L, Tecuapetla F, Tapia D, Hernandez-Cruz A, Galarraga E, Drucker-Colin R, Bargas J.
J.Neurophys. 99, 1435 (2008).
[2] Sasaki T, Matsuki N, Ikegaya Y. J.Neurosci. 27(3), 517-528 (2007). Sasaki T, Kimura R, Tsukamoto M,
Matsuki N, Ikegaya Y. J.Physiol. 574.1, 195-208 (2006).
[3] Rabinovich MI, Huerta R, Volkovskii A, Abarbanel HDI, Stopfer M, Laurent G. J.Physiol. 94, 465 (2000).
Rabinovich M, Volkovskii A, Lecanda P, Huerta R, Abarbanel HDI, Laurent G PRL 87,06:U149-U151
(2001). Rabinovich MI, Huerta R, Varona P, Afraimovich V. Biol. Cybern. 95:519-536 (2006). Nowotny
T, Rabinovich MI. PRL 98,128106 (2007).
[4] Golomb D and Rinzel J. PRE 48, 4810-4814 (1993). Golomb D and Hansel D. Neur.Comp. 12, 1095-1139 (2000). Tiesinga PHE and Jose VJ. J.Comp.Neuro. 9(1):49-65 (2000).
[5] Wang X-J and Buzsaki G. J.Neurosci. 16(20):6402-6413 (1996).
[6] Wickens JR, Arbuthnott G, Shindou T. Prog. Brain Res. 160, 316 (2007).
[7] Rall WJ. Neurophys. 30, 1138-1168 (1967).
[8] Laurent G. Science 286, 723-728 (1999). Laurent G and Davidowitz H. Science 265, 1872-1875 (1994).
Wehr M and Laurent G. Nature 384, 162-166 (1996). Laurent G, Stopfer M, Friedrich RW, Rabinovich
MI, Abarbanel HDI Annu. Rev. Neurosci. 24:263-297 (2001).
[9] Bazhenov M, Stopfer M, Rabinovich MI, Huerta R, Abarbanel HDI, Sejnowski TJ, Laurent G. Neuron,
30, 553-567 (2001). Nowotny T, Huerta R, Abarbanel HDI, Rabinovich MI. Biol. Cybern 93, 436-446
(2005). Huerta R, Nowotny T, Garcia-Sanchez M, Abarbanel HDI. Neur.Comp. 16, 1601-1640 (2004).
[10] Rabinovich MI, Huerta R,Afraimovich VS. PRL 97, 188103 (2006). Rabinovich MI, Huerta R, Varona
P, Afraimovich VS. PLoS Comput Biol 4(5): e1000072. doi:10.1371 (2008).
[11] Selverston A, Rabinovich M, Abarbanel H, Elson R, Szucs A, Pinto R, Huerta R, Varona P. J.Physiol. 94:357-374 (2000). Varona P, Rabinovich MI, Selverston AI, Arshavsky YI. Chaos 12(3), 672 (2003).
[12] Kaneko K, Tsuda I. Chaos 13(3), 926-936 (2003). K. Kaneko. Physica D 41, 137 (1990). I. Tsuda. Neural Networks 5, 313 (1992). Tsuda I, Fujii H, Tadokoro S, Yasuoka T, Yamaguti Y. J.Int.Neurosci. 3(2), 159-182 (2004). Fujii H and Tsuda I. Neurocomp. 58:151-157 (2004).
[13] Izhikevich E.M. Dynamical Systems in Neuroscience: The Geometry of .... MIT press (2005).
[14] Wilson CJ and Groves PM. Brain Research 220:67-80 (1981).
[15] Gluckenheimer J, Holmes P. Non-linear Oscillations,Dynamical Systems and Bifurcations of Vector
Fields, Springer, Berlin (1983). Ott E. Chaos in Dynamical Systems. Cambridge, U.K.: Cambridge Univ.
Press (2002). Pomeau Y and Manneville P. Commun. Math. Phys., 74, 189-197 (1980). Pikovsky A, J.
Phys. A, 16, L109-L112, (1984).
[16] Plenz D, Thiagarajan TC. Trends Neurosci 30: 101-110, (2007).
[17] Bak P, Tang C, Wiesenfeld K. PRA 38, 364-374 (1988). Bak P, Sneppen K. PRL 71, 4083-4086 (1993).
[18] Assisi CG and Bazhenov MV. SfN 2007 abstract.
[19] Ermentrout B, Rep. Prog. Phys. 61, 353-430 (1998). van-Vreeswijk C, Abbott LF, Ermentrout B. J. Comp.
Neuro., 1, 313-321 (1994).
[20] Grillner S, Hellgren J, Menard A, Saitoh K, Wikstrom MA. Trends Neurosci. 28: 364-370, (2005).
Takakusaki K, Oohinata-Sugimoto J, Saitoh K, Habaguchi T. Prog Brain Res. 143: 231-237, (2004).
[21] Aldridge JW and Berridge KC. J. Neurosci., 18(7):2777-2787 (1998).
2,881 | 3,611 | PSDBoost: Matrix-Generation Linear Programming for Positive Semidefinite Matrices Learning
Chunhua Shen†‡, Alan Welsh‡, Lei Wang†
† NICTA Canberra Research Lab, Canberra, ACT 2601, Australia
‡ Australian National University, Canberra, ACT 0200, Australia
Abstract
In this work, we consider the problem of learning a positive semidefinite matrix. The critical issue is how to preserve positive semidefiniteness during the course of learning. Our algorithm is mainly inspired by LPBoost [1] and the general greedy convex optimization framework of Zhang [2]. We demonstrate the essence of the algorithm, termed PSDBoost (positive semidefinite Boosting), by focusing on a few different applications in machine learning. The proposed PSDBoost algorithm extends traditional Boosting algorithms in that its parameter is a trace-one positive semidefinite matrix rather than a classifier. PSDBoost is based on the observation that any trace-one positive semidefinite matrix can be decomposed into a linear convex combination of trace-one rank-one matrices, which serve as the base learners of PSDBoost. Numerical experiments are presented.
1 Introduction
Column generation (CG) [3] is a technique widely used in linear programming (LP) for solving large-sized problems. Thus far it has mainly been applied to solve problems with linear constraints. The proposed work here, which we dub matrix generation (MG), extends the column generation technique to non-polyhedral semidefinite constraints. In particular, as an application we show how to use it to solve a semidefinite metric learning problem. The fundamental idea is to rephrase a bounded semidefinite constraint as a polyhedral one with infinitely many variables. This construction opens possibilities for the use of highly developed linear programming technology. Given the limitations of current semidefinite programming (SDP) solvers in dealing with large-scale problems, the work presented here is of importance for many real applications.
The choice of a metric has a direct effect on the performance of many algorithms, such as the simple k-NN classifier and some clustering algorithms. Much effort has been spent on learning a good metric for pattern recognition and data mining. Clearly a good metric is task-dependent: different applications should use different measures of (dis)similarity between objects. We show how a Mahalanobis metric is learned from examples of proximity comparison among triples of training data. For example, assuming that we are given triples of images $a_i, a_j, a_k$ ($a_i$ and $a_j$ have the same label, $a_i$ and $a_k$ have different labels, $a_i \in \mathbb{R}^D$), we want to learn a metric between pairs of images such that the distance from $a_j$ to $a_i$ ($dist_{ij}$) is smaller than that from $a_k$ to $a_i$ ($dist_{ik}$). Triplets like this are the input of our metric learning algorithm. By casting the problem as optimization of the inner product of the linear transformation matrix and its transpose, the formulation is based on solving a semidefinite program. The algorithm finds an optimal linear transformation that maximizes the margin between the distances $dist_{ij}$ and $dist_{ik}$.
* NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council through the ICT Center of Excellence program.
A major drawback of this formulation is that current SDP solvers based on interior-point (IP) methods do not scale well to large problems, with computational complexity roughly $O(n^{4.5})$ ($n$ is the number of variables). On the other hand, linear programming is much better in terms of scalability: state-of-the-art solvers like CPLEX [4] can solve large problems with up to millions of variables and constraints. This motivates us to develop an LP approach to our SDP metric learning problem.
2 Related Work
We overview some relevant work in this section.
Column generation was first proposed by Dantzig and Wolfe [5] for solving some specially structured linear programs with an extremely large number of variables. [3] presents a comprehensive survey of this technique. The general idea of CG is that, instead of solving the original large-scale problem (the master problem), one works on a restricted master problem with a reasonably small subset of the variables at each step. The dual of the restricted master problem is solved by the simplex method, and the optimal dual solution is used to find the new column to be included in the restricted master problem. LPBoost [1] is a direct application of CG to Boosting. For the first time, LPBoost showed that in an LP framework, unknown weak hypotheses can be learned from the dual even though the space of all weak hypotheses is infinitely large. This is the highlight of LPBoost, and it has directly inspired our work.
Metric learning using convex optimization has attracted much attention recently [6-8]. This work has made it possible to learn distance functions that are more appropriate for a specific task, based on partially labeled data or proximity constraints. These techniques improve classification or clustering accuracy by taking advantage of prior information. There is plenty of related work; we list a few results that are most relevant to ours. [6] learns a Mahalanobis metric for clustering, using convex optimization to minimize the distance between examples belonging to the same class while at the same time restricting examples in different classes from being too close. The work in [7] also learns a Mahalanobis metric using SDP, by optimizing a modified k-NN classifier; the authors use first-order alternating projection algorithms, which are faster than generic SDP solvers. The authors of [8] learn a Mahalanobis metric by considering proximity relationships of training examples. Their final formulation is also an SDP. They replace the positive semidefinite (PSD) conic constraint with a sequence of linear constraints, using the fact that a diagonally dominant matrix must be PSD (but not vice versa). In other words, the conic constraint is replaced by a stricter one: the feasible set shrinks, and the solution obtained is not necessarily a solution of the original SDP.
3 Preliminaries
We begin with some notational conventions and basic definitions that will be useful.
A bold lower-case letter $\mathbf{x}$ represents a column vector and an upper-case letter $X$ a matrix. We denote the space of $D \times D$ symmetric matrices by $\mathcal{S}^D$, and the positive semidefinite matrices by $\mathcal{S}^D_+$. $\mathrm{Tr}(\cdot)$ is the trace of a square matrix, and $\langle X, Z \rangle = \mathrm{Tr}(XZ^\top) = \sum_{ij} X_{ij} Z_{ij}$ is the inner product of two matrices. An element-wise inequality between two vectors is written $u \le v$, meaning $u_i \le v_i$ for all $i$.
We use $X \succeq 0$ to indicate that the matrix $X$ is positive semidefinite. For a matrix $X \in \mathcal{S}^D$, the following statements are equivalent: (1) $X \succeq 0$ ($X \in \mathcal{S}^D_+$); (2) all eigenvalues of $X$ are nonnegative ($\lambda_i(X) \ge 0$, $i = 1, \dots, D$); and (3) $\forall u \in \mathbb{R}^D$, $u^\top X u \ge 0$.
3.1 Extreme Points of Trace-one Semidefinite Matrices
Before we present our main results, we prove an important theorem that serves as the basis of the proposed algorithm.
Definition 3.1 For any positive integer $M$, given a set of points $\{x_1, \dots, x_M\}$ in a real vector or matrix space $\mathrm{Sp}$, the convex hull of $\mathrm{Sp}$ spanned by $M$ elements of $\mathrm{Sp}$ is defined as
$$\mathrm{conv}_M(\mathrm{Sp}) = \Big\{ \sum_{i=1}^M \theta_i x_i \;\Big|\; \theta_i \ge 0, \; \sum_{i=1}^M \theta_i = 1, \; x_i \in \mathrm{Sp} \Big\}.$$
Define the convex hull¹ of $\mathrm{Sp}$ as
$$\mathrm{conv}(\mathrm{Sp}) = \bigcup_M \mathrm{conv}_M(\mathrm{Sp}) = \Big\{ \sum_{i=1}^M \theta_i x_i \;\Big|\; \theta_i \ge 0, \; \sum_{i=1}^M \theta_i = 1, \; x_i \in \mathrm{Sp}, \; M \in \mathbb{Z}_+ \Big\}.$$
Here $\mathbb{Z}_+$ denotes the set of all positive integers.
Definition 3.2 Let $\Gamma_1$ be the space of all positive semidefinite matrices $X \in \mathcal{S}^D_+$ with trace equaling one,
$$\Gamma_1 = \{ X \mid X \succeq 0, \; \mathrm{Tr}(X) = 1 \};^2$$
and $\Psi_1$ the space of all positive semidefinite matrices with both trace and rank equaling one,
$$\Psi_1 = \{ Z \mid Z \succeq 0, \; \mathrm{Tr}(Z) = 1, \; \mathrm{rank}(Z) = 1 \}.$$
We also define $\Gamma_2$ as the convex hull of $\Psi_1$, i.e., $\Gamma_2 = \mathrm{conv}(\Psi_1)$.
Lemma 3.3 Let $\Lambda$ be the convex polytope defined as $\Lambda = \{ \lambda \in \mathbb{R}^D \mid \lambda_k \ge 0, \; k = 1, \dots, D, \; \sum_{k=1}^D \lambda_k = 1 \}$; then the points with only one element equaling one and all the others being zero are the extreme points (vertices) of $\Lambda$. No other point can be an extreme point.
Proof: Without loss of generality, consider the point $\lambda^\star = (1, 0, \dots, 0)$. If $\lambda^\star$ is not an extreme point of $\Lambda$, then it can be expressed as a convex combination of a few other points in $\Lambda$: $\lambda^\star = \sum_{i=1}^M \mu_i \lambda^i$, with $\mu_i > 0$, $\sum_{i=1}^M \mu_i = 1$ and $\lambda^i \ne \lambda^\star$. Then we have the equations $\sum_{i=1}^M \mu_i \lambda^i_k = 0$ for $k = 2, \dots, D$. It follows that $\lambda^i_k = 0$ for all $i$ and $k = 2, \dots, D$; that is, $\lambda^i_1 = 1$ for all $i$. This is inconsistent with $\lambda^i \ne \lambda^\star$. Therefore such a convex combination does not exist, and $\lambda^\star$ must be an extreme point. It is straightforward to see that any $\lambda$ with more than one active element is a convex combination of the extreme points defined above, so such points cannot be extreme points.
Theorem 3.4 $\Gamma_1$ equals $\Gamma_2$; i.e., $\Gamma_1$ is also the convex hull of $\Psi_1$. In other words, the elements of $\Psi_1$ form the set of extreme points of $\Gamma_1$.
Proof: It is easy to check that any convex combination $\sum_i \theta_i Z_i$, with $Z_i \in \Psi_1$, resides in $\Gamma_1$, using the following two facts: (1) a convex combination of PSD matrices is still a PSD matrix; (2) $\mathrm{Tr}\big(\sum_i \theta_i Z_i\big) = \sum_i \theta_i \mathrm{Tr}(Z_i) = 1$.
Denoting by $\lambda_1 \ge \dots \ge \lambda_D \ge 0$ the eigenvalues of a $Z \in \Gamma_1$, we know that $\lambda_1 \le 1$ because $\sum_{i=1}^D \lambda_i = \mathrm{Tr}(Z) = 1$. Therefore all eigenvalues of $Z$ satisfy $\lambda_i \in [0, 1]$, $i = 1, \dots, D$, with $\sum_i \lambda_i = 1$. By looking at the eigenvalues of $Z$ and using Lemma 3.3, it is immediate that a matrix $Z$ with $Z \succeq 0$, $\mathrm{Tr}(Z) = 1$ and $\mathrm{rank}(Z) > 1$ cannot be an extreme point of $\Gamma_1$. The only candidates for extreme points are the rank-one matrices ($\lambda_1 = 1$ and $\lambda_{2, \dots, D} = 0$). Moreover, it is not possible that some rank-one matrices are extreme points and others are not, because the remaining two constraints, $Z \succeq 0$ and $\mathrm{Tr}(Z) = 1$, do not distinguish between different rank-one matrices. Hence, the elements of $\Psi_1$ form the set of extreme points of $\Gamma_1$. Furthermore, $\Gamma_1$ is a convex and compact set, which must have extreme points, and the Krein-Milman theorem [9] tells us that a convex and compact set is equal to the convex hull of its extreme points.
This theorem is a special case of the results of [10] in the context of eigenvalue optimization. A different proof of the general version of the above theorem can also be found in [11]. In the context of SDP optimization, what is of interest about Theorem 3.4 is the following: it tells us that a bounded PSD matrix constraint $X \in \Gamma_1$ can be equivalently replaced by a set of constraints in $\Gamma_2$. At first glance this is a highly counterintuitive proposition, because $\Gamma_2$ involves many more complicated constraints: both $\theta_i$ and $Z_i$ ($i = 1, \dots, M$) are unknown variables, and worse, $M$ could be extremely (or even indefinitely) large.
¹ Strictly speaking, the union of convex hulls may not be a convex hull in general. It is a linear convex span.
² Such a matrix $X$ is called a density matrix, one of the main concepts in quantum physics. A density matrix of rank one is called a pure state, and a density matrix of rank higher than one is called a mixed state.
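A quick numerical illustration of Theorem 3.4 (a sketch; the matrix size is arbitrary): the eigen-decomposition of any trace-one PSD matrix exhibits exactly the claimed convex combination of trace-one rank-one matrices.

```python
import numpy as np

# Sanity check of Theorem 3.4: a trace-one PSD matrix X equals a convex
# combination of trace-one rank-one matrices (its eigen-projectors).
rng = np.random.default_rng(0)
D = 6
A = rng.standard_normal((D, D))
X = A @ A.T
X /= np.trace(X)                       # X is PSD with Tr(X) = 1

lam, U = np.linalg.eigh(X)             # eigenvalues lam >= 0, sum(lam) = 1
Z = [np.outer(U[:, i], U[:, i]) for i in range(D)]   # each Z_i lies in Psi_1
X_rec = sum(l * Zi for l, Zi in zip(lam, Z))

assert np.all(lam >= -1e-12) and abs(lam.sum() - 1) < 1e-12
assert np.allclose(X, X_rec)           # X = sum_i lam_i Z_i, as the theorem states
```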
3.2 Boosting
Boosting is an example of ensemble learning, where multiple learners are trained to solve the same problem. Typically a boosting algorithm [12] creates a single strong learner by incrementally adding base (weak) learners to the final strong learner. The base learner has an important impact on the strong learner. In general, a boosting algorithm builds on a user-specified base learning procedure and runs it repeatedly on modified data that are outputs from the previous iterations.
The inputs to a boosting algorithm are a set of training examples $x$ and their corresponding class labels $y$. The final output strong classifier takes the form
$$F(x) = \sum_{i=1}^M \theta_i f_i(x). \qquad (1)$$
Here $f_i(\cdot)$ is a base learner. From Theorem 3.4, we know that a matrix $X \in \Gamma_1$ can be decomposed as
$$X = \sum_{i=1}^M \theta_i Z_i, \quad Z_i \in \Psi_1. \qquad (2)$$
Observing the similarity between Equations (1) and (2), we may view $Z_i$ as a weak classifier and the matrix $X$ as the strong classifier we want to learn. This is exactly the problem that boosting methods have been designed to solve, and this observation inspires us to solve this special type of SDP using boosting techniques.
The sparse greedy approximation algorithm proposed by Zhang [2] is an efficient method for solving a class of convex problems, and it provides fast convergence rates. It is shown in [2] that boosting algorithms can be interpreted within this general framework. The main idea of sequential greedy approximation is as follows. Given an initialization $u_0 \in V$, where $V$ can be a subset of a linear vector space, a matrix space or a function space, the algorithm finds $u_i \in V$, $i = 1, \dots$, and $0 \le \lambda \le 1$ such that the cost function $F((1 - \lambda) u_{i-1} + \lambda u_i)$ is approximately minimized; the solution is then updated as $u_i = (1 - \lambda) u_{i-1} + \lambda u_i$ and the iteration continues.
4 Large-margin Semidefinite Metric Learning
We consider the Mahalanobis metric learning problem as an example, although the proposed technique can be applied to many other problems in machine learning, such as nonparametric kernel matrix learning [13].
We are given a set of training examples $a_i \in \mathbb{R}^D$, $i = 1, 2, \dots$. The task is to learn a distance metric such that, with the learned metric, classification or clustering achieves better performance on test data. The information available is a set of relative distance comparisons. Mathematically, we are given a set $S$ of training triplets, $S = \{(a_i, a_j, a_k) \mid dist_{ij} < dist_{ik}\}$, where $dist_{ij}$ measures the distance between $a_i$ and $a_j$ under a certain metric. In this work we focus on the case where $dist$ is the Mahalanobis distance. Equivalently, we are learning a linear transformation $P \in \mathbb{R}^{D \times d}$ such that $dist$ is the Euclidean distance in the projected space: $dist_{ij} = \| P^\top a_i - P^\top a_j \|_2^2 = (a_i - a_j)^\top P P^\top (a_i - a_j)$. It is not difficult to see that the inequalities in the set $S$ are non-convex, because a difference of quadratic terms in $P$ is involved. In order to convexify the inequalities in $S$, a new variable $X = P P^\top$ is used instead. This is a typical technique for modeling an SDP problem [14]. We wish to maximize the margin, defined as the difference between the distances $dist_{ij}$ and $dist_{ik}$; that is, $\rho = dist_{ik} - dist_{ij} = (a_i - a_k)^\top X (a_i - a_k) - (a_i - a_j)^\top X (a_i - a_j)$. One may also use a soft margin to tolerate noisy data. Putting these thoughts together, the final convex program we want to optimize is
$$\begin{array}{ll} \max_{\rho, X, \xi} & \rho - C \sum_{r=1}^{|S|} \xi_r \\ \text{s.t.} & X \succeq 0, \; \mathrm{Tr}(X) = 1, \; \xi \ge 0, \\ & (a_i - a_k)^\top X (a_i - a_k) - (a_i - a_j)^\top X (a_i - a_j) \ge \rho - \xi_r, \quad \forall (a_i, a_j, a_k) \in S. \end{array} \qquad (3)$$
Here $r$ indexes the training set $S$, and $|S|$ denotes the size of $S$. $C$ is a trade-off parameter that balances the training error and the margin. As in the support vector machine, the slack variables $\xi \ge 0$ correspond to the soft-margin hinge loss. Note that the constraint $\mathrm{Tr}(X) = 1$ removes the scale ambiguity, because the distance inequalities are scale invariant.
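Since (3) is a standard SDP, a small-instance baseline solver can be written directly with an off-the-shelf modeling tool. The sketch below uses CVXPY (our choice; the text itself points to solvers such as CSDP) and encodes the last constraint in the inner-product form that Eq. (4) below makes explicit.

```python
import cvxpy as cp
import numpy as np

# Small-scale ground-truth solver for problem (3); a sketch, not the authors'
# code. `triplets` holds (ai, aj, ak) arrays; C is the trade-off parameter.
def metric_sdp(triplets, D, C=1.0):
    X = cp.Variable((D, D), PSD=True)
    rho = cp.Variable()
    xi = cp.Variable(len(triplets), nonneg=True)
    cons = [cp.trace(X) == 1]
    for r, (ai, aj, ak) in enumerate(triplets):
        Ar = np.outer(ai - ak, ai - ak) - np.outer(ai - aj, ai - aj)
        cons.append(cp.sum(cp.multiply(Ar, X)) >= rho - xi[r])  # <A^r, X>
    prob = cp.Problem(cp.Maximize(rho - C * cp.sum(xi)), cons)
    prob.solve()
    return X.value, rho.value
```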
To simplify our exposition, we write
$$A^r = (a_i - a_k)(a_i - a_k)^\top - (a_i - a_j)(a_i - a_j)^\top. \qquad (4)$$
The last constraint in (3) is then written
$$\langle A^r, X \rangle \ge \rho - \xi_r, \quad \forall A^r \text{ built from } S; \; r = 1, \dots, |S|. \qquad (5)$$
Problem (3) is a typical SDP, since it has a linear cost function, linear constraints, and a PSD conic constraint. It can therefore be solved using off-the-shelf SDP solvers like CSDP [15]. As mentioned, general interior-point SDP solvers do not scale well to large problems; current solvers can only handle problems with up to a few thousand variables, which makes many applications intractable. For example, in face recognition, if the inputs are $30 \times 30$ images then $D = 900$ and there would be about 0.41 million variables. Next we show how to reformulate the above SDP as an LP.
5 Boosting via Matrix-Generation Linear Programming
Using Theorem 3.4, we can replace the PSD conic constraint in (3) with a linear convex combination of rank-one, trace-one PSD matrices: $X = \sum_{i=1}^M \theta_i Z_i$. Substituting for $X$ in Problem (3), we obtain
$$\begin{array}{ll} \max_{\rho, \theta, \xi, Z} & \rho - C \sum_{r=1}^{|S|} \xi_r \\ \text{s.t.} & \xi \ge 0, \\ & \big\langle A^r, \sum_{i=1}^M \theta_i Z_i \big\rangle = \sum_{i=1}^M \langle A^r, Z_i \rangle\, \theta_i \ge \rho - \xi_r, \quad \forall A^r \text{ built from } S; \; r = 1, \dots, |S|, \\ & \sum_{i=1}^M \theta_i = 1, \; \theta \ge 0, \\ & Z_i \in \Psi_1, \; i = 1, \dots, M. \end{array} \qquad (P_1)$$
This problem is still very hard to solve, since it has non-convex rank constraints and an indefinite number of variables ($M$ is indefinite because there are indefinitely many rank-one matrices). However, if we somehow knew the matrices $Z_i$ ($i = 1, \dots$) a priori, we could drop all the constraints imposed on them, and the problem would become a linear program; more precisely, a semi-infinite linear program (SILP), because it has an infinitely large set of variables $\theta$.
Column generation is a state-of-the-art method for optimally solving difficult large-scale optimization problems. It avoids considering all variables of a problem explicitly. If an LP has extremely many variables (columns) but far fewer constraints, CG can be very beneficial. The crucial insight behind CG is this: for an LP with many variables, the number of non-zero variables in an optimal basic solution equals the number of constraints, so although the number of possible variables may be large, only a small subset of them is needed in the optimal solution. CG works by considering only a small subset of the entire variable set. Once the restricted problem is solved, we ask the question: "Are there any other variables that can be included to improve the solution?" We must therefore be able to solve the subproblem: given a set of dual values, either identify a variable with a favorable reduced cost, or certify that no such variable exists. In essence, CG finds the variables with negative reduced cost without explicitly enumerating all variables. For a general LP this may not be possible, but for some types of problems it is.
We now consider Problem $(P_1)$ as if all $Z_i$ ($i = 1, \dots$) were known. The dual of $(P_1)$ is easily derived:
$$\begin{array}{ll} \min_{\pi, w} & \pi \\ \text{s.t.} & \sum_{r=1}^{|S|} \langle A^r, Z_i \rangle\, w_r \le \pi, \quad i = 1, \dots, M, \\ & \sum_{r=1}^{|S|} w_r = 1, \\ & 0 \le w_r \le C, \quad r = 1, \dots, |S|. \end{array} \qquad (D_1)$$
For convex programs with strong duality the duality gap is zero, which means the optimal values of the primal and dual problems coincide. For LPs and SDPs, strong duality holds under very mild conditions (almost always satisfied by the LPs and SDPs considered here).
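For completeness, a sketch of that derivation from the Lagrangian of $(P_1)$ with the $Z_i$ held fixed (the multiplier names are ours):

```latex
% Multipliers: w_r >= 0 for the margin constraints, s_r >= 0 for xi_r >= 0,
% pi (free) for the simplex constraint, u >= 0 for theta >= 0.
\begin{aligned}
L &= \rho - C\sum_r \xi_r
  + \sum_r w_r \Big( \sum_i \langle A^r, Z_i \rangle\, \theta_i - \rho + \xi_r \Big)
  - \pi \Big( \sum_i \theta_i - 1 \Big) + \sum_r s_r \xi_r + u^\top \theta, \\
\frac{\partial L}{\partial \rho} = 0 &\;\Rightarrow\; \sum_r w_r = 1, \qquad
\frac{\partial L}{\partial \xi_r} = 0 \;\Rightarrow\; w_r \le C, \\
\frac{\partial L}{\partial \theta_i} = 0 &\;\Rightarrow\;
  \sum_r \langle A^r, Z_i \rangle\, w_r \le \pi \quad (i = 1, \dots, M),
\end{aligned}
```

and maximizing out the primal variables leaves $\min_{\pi, w} \pi$, i.e. exactly $(D_1)$.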
We now consider only a small subset of the variables in the primal; i.e., only a subset of the $Z$'s (denoted by $\hat{Z}$) is used.³ The LP solved using $\hat{Z}$ is usually termed the restricted master problem (RMP). Because the primal variables correspond to the dual constraints, solving the RMP is equivalent to solving a relaxed version of the dual problem: with a finite $\hat{Z}$, the first set of constraints in $(D_1)$ is finite, and we can solve an LP that satisfies all the existing constraints.
If we can prove that, among all the constraints we have not yet added to the dual problem, no single constraint is violated, then we can conclude that solving the restricted problem is equivalent to solving the original problem. Otherwise, there exists at least one violated constraint. The violated constraints correspond to primal variables that are not in the RMP; adding these variables to the RMP leads to a new RMP that needs to be re-optimized. In our case, finding a violated constraint amounts to generating a rank-one matrix $Z'$. Hence, as in LPBoost [1], we have a base learning algorithm as an oracle that either finds a new $Z'$ such that
$$\sum_{r=1}^{|S|} \langle A^r, Z' \rangle\, \hat{w}_r > \hat{\pi},$$
where $\hat{\pi}$ is the solution of the current restricted problem, or guarantees that no such $Z'$ exists.
To make convergence fast, we find the constraint with the largest deviation; that is,
$$Z' = \operatorname*{argmax}_Z \; \sum_{r=1}^{|S|} \langle A^r, Z \rangle\, \hat{w}_r, \quad \text{s.t. } Z \in \Psi_1. \qquad (B_1)$$
Again, the $\hat{w}_r$ ($r = 1, \dots, |S|$) are obtained by solving the current restricted dual problem $(D_1)$. Let $\mathrm{Opt}(B_1)$ denote the optimal value of problem $(B_1)$. We now have a criterion guaranteeing that the optimal convex combination over all $Z \in \Psi_1$ has been found: if $\mathrm{Opt}(B_1) \le \hat{\pi}$, then we are done; we have solved the original problem.
The presented algorithm is a variant of the CG technique. At each iteration a new matrix is generated, hence the name matrix generation.
5.1 Base Learning Algorithm
In this section, we show that the optimization problem $(B_1)$ can be solved exactly and efficiently using eigen-decomposition.
From $Z \succeq 0$ and $\mathrm{rank}(Z) = 1$, we know that $Z$ has the form $Z = u u^\top$, $u \in \mathbb{R}^D$; and $\mathrm{Tr}(Z) = 1$ means $\|u\|_2 = 1$. We have
$$\sum_{r=1}^{|S|} \langle A^r, Z \rangle\, \hat{w}_r = \sum_{r=1}^{|S|} \hat{w}_r \langle A^r, u u^\top \rangle = u^\top \Big( \sum_{r=1}^{|S|} \hat{w}_r A^r \Big) u.$$
By denoting
$$\hat{H} = \sum_{r=1}^{|S|} \hat{w}_r A^r, \qquad (6)$$
the optimization in $(B_1)$ equals
$$\max_u \; u^\top \hat{H} u, \quad \text{subject to } \|u\|_2 = 1. \qquad (7)$$
It is clear that the largest eigenvalue of $\hat{H}$, $\lambda_{\max}(\hat{H})$, and its corresponding eigenvector $u_1$ give the solution to the above problem. Note that $\hat{H}$ is symmetric. Therefore we have the solution of the original problem $(B_1)$: $\mathrm{Opt}(B_1) = \lambda_{\max}(\hat{H})$ and $Z' = u_1 u_1^\top$.
There exist approximate eigenvalue solvers which guarantee that, for a symmetric matrix $U$ and any $\varepsilon > 0$, a vector $v$ is found such that $v^\top U v \ge \lambda_{\max} - \varepsilon$. Approximately finding the largest eigenvalue and eigenvector can be done very efficiently using the Lanczos or power method. We use the MATLAB function eigs to calculate the largest eigenvector, which calls MEX files of ARPACK. ARPACK is a collection of Fortran subroutines designed to solve large-scale eigenvalue problems. When the input matrix is symmetric, this software uses a variant of the Lanczos process called the implicitly restarted Lanczos method [16].
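A sketch of this base learner in Python (our translation of the MATLAB eigs call; SciPy's eigsh wraps the same ARPACK routines):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Base learner of Eqs. (6)-(7): the new rank-one matrix Z' is the top
# eigen-projector of H_hat = sum_r w_r * A[r]. `A` is the list from Eq. (4),
# `w` the current dual weights.
def base_learner(A, w):
    H = sum(wr * Ar for wr, Ar in zip(w, A))   # Eq. (6)
    lam, u = eigsh(H, k=1, which="LA")         # largest algebraic eigenpair (ARPACK)
    u = u[:, 0]
    return lam[0], np.outer(u, u)              # Opt(B1) and Z' = u u^T
```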
³ We also use $\hat{\pi}$, $\hat{\theta}$ and $\hat{w}$, etc., to denote the solution of the current RMP and its dual.
Algorithm 1: PSDBoost for semidefinite metric learning.
Input: Training triplets $(a_i, a_j, a_k) \in S$; calculate $A^r$, $r = 1, \dots, |S|$, from $S$ using Equation (4).
Initialization:
1. $M = 1$ (no bases selected);
2. $\theta = 0$ (all primal coefficients are zero);
3. $\pi = 0$;
4. $w_r = \frac{1}{|S|}$, $r = 1, \dots, |S|$ (uniform dual weights).
while true do
1. Find a new base $Z'$ by solving Problem $(B_1)$, i.e., by eigen-decomposition of $\hat{H}$ in (6);
2. if $\mathrm{Opt}(B_1) \le \pi$ then break (problem solved);
3. Add $Z'$ to the restricted master problem, which corresponds to a new constraint in Problem $(D_1)$;
4. Solve the dual $(D_1)$ to obtain updated $\pi$ and $w_r$ ($r = 1, \dots, |S|$);
5. $M = M + 1$ (base count).
end
Output:
1. Calculate the primal variables $\theta$ from the optimality conditions and the last solved dual LP;
2. The learned PSD matrix $X \in \mathbb{R}^{D \times D}$, $X = \sum_{i=1}^M \theta_i Z_i$.
Putting all the above analysis together, we summarize our PSDBoost algorithm for metric learning in Algorithm 1. Note that, in practice, we can relax the convergence criterion by setting a small positive threshold $\varepsilon' > 0$ in order to obtain a good approximation quickly; namely, the convergence criterion becomes $\mathrm{Opt}(B_1) \le \pi + \varepsilon'$.
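Below is a compact end-to-end sketch of Algorithm 1, with the restricted dual $(D_1)$ solved by SciPy's LP interface. The structure, the tolerance value, and the way the primal $\theta$ is read off the LP duals are our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse.linalg import eigsh

# End-to-end PSDBoost sketch (Algorithm 1). `A` is the list of A^r matrices.
def psdboost(A, C=1.0, eps=1e-4, max_iter=200):
    R = len(A)
    w, pi = np.full(R, 1.0 / R), 0.0           # uniform dual weights, pi = 0
    bases, rows, theta = [], [], np.array([])
    for _ in range(max_iter):
        H = sum(wr * Ar for wr, Ar in zip(w, A))       # Eq. (6)
        lam, u = eigsh(H, k=1, which="LA")             # base learner, Eq. (7)
        if lam[0] <= pi + eps:                         # relaxed stopping rule
            break
        Z = np.outer(u[:, 0], u[:, 0])
        bases.append(Z)
        rows.append([np.sum(Ar * Z) for Ar in A])      # <A^r, Z_i>
        # restricted dual (D_1): x = [pi, w_1..w_R], minimize pi
        B = np.asarray(rows)
        res = linprog(c=np.r_[1.0, np.zeros(R)],
                      A_ub=np.hstack([-np.ones((B.shape[0], 1)), B]),
                      b_ub=np.zeros(B.shape[0]),       # B w <= pi
                      A_eq=np.r_[0.0, np.ones(R)].reshape(1, -1),
                      b_eq=[1.0],                      # sum_r w_r = 1
                      bounds=[(None, None)] + [(0.0, C)] * R,
                      method="highs")
        pi, w = res.x[0], res.x[1:]
        # primal weights theta are the duals of the <A^r, Z_i> <= pi rows;
        # HiGHS reports them in res.ineqlin.marginals (sign may be flipped)
        theta = np.abs(res.ineqlin.marginals)
    if len(bases) == 0:
        return np.zeros_like(A[0])
    theta = theta / theta.sum()                        # simplex constraint
    return sum(t * Z for t, Z in zip(theta, bases))
```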
The algorithm has some appealing properties. At each iteration the solution is provably no worse than the preceding one, and its rank grows by at most one; hence after $M$ iterations the algorithm attains a solution with rank at most $M$. The algorithm preserves CG's property that each iteration improves the quality of the solution. The bounded rank follows from the fact that $\mathrm{rank}(A + B) \le \mathrm{rank}(A) + \mathrm{rank}(B)$ for all matrices $A$ and $B$.
An advantage of the proposed PSDBoost algorithm over standard boosting schemes is the totally-corrective weight update in each iteration, which leads to faster convergence; the coordinate-descent optimization employed by standard boosting algorithms is known to have a slow convergence rate in general. However, the price of this totally-corrective update is obvious. PSDBoost spans the space of the parameter $X$ incrementally: the computational cost of solving the subproblem grows with the number of linear constraints, which increases by one at each iteration, and more and more memory is needed to store the generated base learners $Z_i$, represented by a series of unit vectors. To alleviate this problem, one can use a selection and compression mechanism like the aggregation step of bundle methods [17]. When the size of the bundle becomes too large, bundle methods select columns to be discarded, and the selected information is aggregated into a single column. It can be shown that as long as the aggregated column is introduced into the bundle, the bundle algorithm remains convergent, although different selections of discarded columns may lead to different convergence speeds. See [17] for details.
6 Experiments
In the first experiment, we artificially generated 600 points in 24 dimensions; the learned metric is therefore of size $24 \times 24$. The triplets are obtained in this way: for a point $a_i$, we find its nearest neighbor in the same class, $a_j$, and its nearest neighbor in a different class, $a_k$. We subsample to obtain 550 triplets for training. To show the convergence, we plot the optimal value of the dual problem $(D_1)$ at each iteration in Figure 1. PSDBoost quickly converges to a near-optimal solution. We have observed the so-called tailing-off effect of CG on large datasets: while a near-optimal solution is approached considerably fast, only little progress per iteration is made close to the optimum. Stabilization techniques have been introduced to partially alleviate this problem [3]. However, approximate solutions are sufficient for most machine learning tasks. Moreover, we are usually interested not in the numerical accuracy of the solution but in the test error, as for many problems such as metric and kernel learning.
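For reference, a sketch of the triplet construction described above (a plain quadratic-time nearest-neighbor search; variable names are ours):

```python
import numpy as np

# For each point: nearest neighbor of the same class (a_j) and nearest
# neighbor of a different class (a_k). `X` is (n, D), `y` the labels.
def make_triplets(X, y):
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    np.fill_diagonal(D2, np.inf)
    triplets = []
    for i in range(len(X)):
        same, diff = (y == y[i]), (y != y[i])
        same[i] = False
        j = np.flatnonzero(same)[np.argmin(D2[i, same])]
        k = np.flatnonzero(diff)[np.argmin(D2[i, diff])]
        triplets.append((X[i], X[j], X[k]))               # want dist_ij < dist_ik
    return triplets
```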
The second experiment uses the Pendigits data from the UCI repository, containing handwritten samples of the digits 1, 5, 7 and 9. The data for each digit are 16-dimensional. We use 80 samples per digit for training and 500 per digit for testing. The results show that PSDBoost converges quickly and that the learned metric is very similar to the result obtained by a standard SDP solver. The classification errors on the test data with a 1-nearest-neighbor classifier are identical under the metrics learned by PSDBoost and by a standard SDP solver: both are 1.3%.
7 Conclusion
We have presented a new boosting algorithm, PSDBoost, for learning a positive semidefinite matrix. In particular, as an example, we use PSDBoost to learn a distance metric for classification.
PSDBoost can also be used to learn a kernel matrix, which is of interest in machine learning. We
are currently exploring new applications with PSDBoost. Also we want to know what kind of SDP
optimization problems can be approximately solved by PSDBoost.
References
[1] A. Demiriz, K. P. Bennett, and J. Shawe-Taylor. Linear programming boosting via column generation. Mach. Learn., 46(1-3):225–254, 2002.
[2] T. Zhang. Sequential greedy approximation for certain convex optimization problems. IEEE Trans. Inf. Theory, 49(3):682–691, 2003.
[3] M. E. Lübbecke and J. Desrosiers. Selected topics in column generation. Operation Res., 53(6):1007–1023, 2005.
[4] ILOG, Inc. CPLEX 11.1, 2008. http://www.ilog.com/products/cplex/.
[5] G. B. Dantzig and P. Wolfe. Decomposition principle for linear programs. Operation Res., 8(1):101–111, 1960.
[6] E. Xing, A. Ng, M. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In Proc. Adv. Neural Inf. Process. Syst. MIT Press, 2002.
[7] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. In Proc. Adv. Neural Inf. Process. Syst., pages 1473–1480, 2005.
[8] R. Rosales and G. Fung. Learning sparse metrics via linear programming. In Proc. ACM Int. Conf. Knowledge Discovery & Data Mining, pages 367–373, Philadelphia, PA, USA, 2006.
[9] M. Krein and D. Milman. On extreme points of regular convex sets. Studia Mathematica, 9:133–138, 1940.
[10] M. L. Overton and R. S. Womersley. On the sum of the largest eigenvalues of a symmetric matrix. SIAM J. Matrix Anal. Appl., 13(1):41–45, 1992.
[11] P. A. Fillmore and J. P. Williams. Some convexity theorems for matrices. Glasgow Math. Journal, 12:110–117, 1971.
[12] R. E. Schapire. Theoretical views of boosting and applications. In Proc. Int. Conf. Algorithmic Learn. Theory, pages 13–25, London, UK, 1999. Springer-Verlag.
[13] B. Kulis, M. Sustik, and I. Dhillon. Learning low-rank kernel matrices. In Proc. Int. Conf. Mach. Learn., pages 505–512, Pittsburgh, Pennsylvania, 2006.
[14] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[15] B. Borchers. CSDP, a C library for semidefinite programming. Optim. Methods and Softw., 11(1):613–623, 1999.
[16] D. Calvetti, L. Reichel, and D. C. Sorensen. An implicitly restarted Lanczos method for large symmetric eigenvalue problems. Elec. Trans. Numer. Anal., 2:1–21, Mar 1994. http://etna.mcs.kent.edu.
[17] J. F. Bonnans, J. C. Gilbert, C. Lemaréchal, and C. A. Sagastizábal. Numerical Optimization: Theoretical and Practical Aspects (1st edition). Springer-Verlag, Berlin, 2003.
[Figure 1 appears here: two panels plotting the dual objective Opt(D1) against iterations.]
Figure 1: The objective value of the dual problem (D1) on the first (left) and second (right) experiment. The dashed line shows the ground truth obtained by directly solving the original primal SDP (3) using interior-point methods.
Theory of matching pursuit
Zakria Hussain and John Shawe-Taylor
Department of Computer Science
University College London, UK
{z.hussain,j.shawe-taylor}@cs.ucl.ac.uk
Abstract
We analyse matching pursuit for kernel principal components analysis (KPCA)
by proving that the sparse subspace it produces is a sample compression scheme.
We show that this bound is tighter than the KPCA bound of Shawe-Taylor et al. [7]
[7] and highly predictive of the size of the subspace needed to capture most of the
variance in the data. We analyse a second matching pursuit algorithm called kernel matching pursuit (KMP) which does not correspond to a sample compression
scheme. However, we give a novel bound that views the choice of subspace of the
KMP algorithm as a compression scheme and hence provide a VC bound to upper
bound its future loss. Finally we describe how the same bound can be applied to
other matching pursuit related algorithms.
1 Introduction
Matching pursuit refers to a family of algorithms that generate a set of bases for learning in a greedy
fashion. A good example of this approach is the matching pursuit algorithm [4]. Viewed from this
angle sparse kernel principal components analysis (PCA) looks for a small number of kernel basis vectors in order to maximise the Rayleigh quotient. The algorithm was proposed by [8]1 and
motivated by matching pursuit [4], but to our knowledge sparse PCA has not been analysed theoretically. In this paper we show that sparse PCA (KPCA) is a sample compression scheme and can
be bounded using the size of the compression set [3, 2] which is the set of training examples used
in the construction of the KPCA subspace. We also derive a more general framework for this algorithm that uses the principle 'maximise Rayleigh quotient and deflate'. A related algorithm called
kernel matching pursuit (KMP) [10] is a sparse version of least squares regression but without the
property of being a compression scheme. However, we use the number of basis vectors constructed
by KMP to help upper bound the loss of the KMP algorithm using the VC dimension. This bound
is novel in that it is applied in an empirically chosen low dimensional hypothesis space and applies
independently of the actual dimension of the ambient feature space (including one constructed from
the Gaussian kernel). In both cases we illustrate the use of the bounds on real and/or simulated data.
Finally we also show that the KMP bound can be applied to a sparse kernel canonical correlation
analysis that uses a similar matching pursuit technique. We do not describe the algorithm here due to
space constraints and only concentrate on theoretical results. We begin with preliminary definitions.
2 Preliminary definitions
Throughout the paper we consider learning from samples of data. For the regression section the
data is a sample $S = \{(x_i, y_i)\}_{i=1}^{m}$ of input-output pairs drawn from a joint space $\mathcal{X} \times \mathcal{Y}$, where $x \in \mathbb{R}^n$ and $y \in \mathbb{R}$. For the principal components analysis the data is a sample $S = \{x_i\}_{i=1}^{m}$ of
¹ The algorithm was proposed as a low rank kernel approximation — however the algorithm turns out to be a sparse kernel PCA (to be shown).
multivariate examples drawn from a space X . For simplicity we always assume that the examples
are already projected into the kernel defined feature space, so that the kernel matrix $K$ has entries
$K[i,j] = \langle x_i, x_j\rangle$. The notation $K[i,:]$ and $K[:,i]$ will denote the $i$th row and $i$th column of the
matrix $K$, respectively. When using a set of indices $\mathbf{i} = \{i_1, \dots, i_k\}$ (say), $K[\mathbf{i},\mathbf{i}]$ denotes the
square matrix defined solely by the index set $\mathbf{i}$. The transpose of a matrix $X$ or vector $x$ is denoted
by $X'$ or $x'$ respectively. The input data matrix $X$ will contain examples as row vectors.
For analysis purposes we assume that the training examples are generated i.i.d. according to an unknown but fixed probability distribution that also governs the generation of the test data. Expectation
? while expectation with respect to
over the training examples (empirical average) is denoted by E[?],
the underlying distribution is denoted E[?]. For the sample compression analysis the compression
function ? induced by a sample compression learning algorithm A on training set S is the map
? : S 7?? ?(S) such that the compression set ?(S) ? S is returned by A. A reconstruction function ? is a mapping from a compression set ?(S) to a set F of functions ? : ?(S) 7?? F . Let A(S)
be the function output by learning algorithm A on training set S. Therefore, a sample compression
scheme is a reconstruction function ? mapping a compression set ?(S) to some set of functions F
such that A(S) = ?(?(S)). If F is the set of Boolean-valued or Real-valued functions then the
sample compression scheme is said to be a classification or regression algorithm, respectively.
2.1 Sparse kernel principal components analysis
Principal components analysis [6] can be expressed as the following maximisation problem:
$$\max_{w} \; \frac{w' X' X w}{w' w}, \qquad (1)$$
where $w$ is the weight vector. In a sparse KPCA algorithm we would like to find a sparsely represented vector $w = X[\mathbf{i},:]'\tilde{\alpha}$, that is, a linear combination of a small number of training examples indexed by the vector $\mathbf{i}$. Therefore, making this substitution into Equation (1), we have the following sparse dual PCA maximisation problem,
$$\max_{\tilde{\alpha}} \; \frac{\tilde{\alpha}'\, X[\mathbf{i},:]\, X' X\, X[\mathbf{i},:]'\, \tilde{\alpha}}{\tilde{\alpha}'\, X[\mathbf{i},:]\, X[\mathbf{i},:]'\, \tilde{\alpha}},$$
which is equivalent to sparse kernel PCA (SKPCA) with sparse kernel matrix $K[:,\mathbf{i}]' = X[\mathbf{i},:]\,X'$,
$$\max_{\tilde{\alpha}} \; \frac{\tilde{\alpha}'\, K[:,\mathbf{i}]'\, K[:,\mathbf{i}]\, \tilde{\alpha}}{\tilde{\alpha}'\, K[\mathbf{i},\mathbf{i}]\, \tilde{\alpha}},$$
where $\tilde{\alpha}$ is a sparse vector of length $k = |\mathbf{i}|$. Clearly maximising the quantity above will lead to
the maximisation of the generalised eigenvalues corresponding to $\tilde{\alpha}$ and hence a sparse subset of
the original KPCA problem. We would like to find the optimal set of indices $\mathbf{i}$. We proceed in a
greedy manner (matching pursuit) in much the same way as [8]. The procedure involves choosing
basis vectors that maximise the Rayleigh quotient without the set of eigenvectors, choosing basis
vectors iteratively until some pre-specified number $k$ of vectors are chosen. An orthogonalisation
of the kernel matrix at each step ensures future potential basis vectors will be orthogonal to those
already chosen. The quotient to maximise is:
$$\max_i \; \rho_i = \frac{e_i' K^2 e_i}{e_i' K e_i}, \qquad (2)$$
where $e_i$ is the $i$th unit vector. After this maximisation we need to orthogonalise (deflate) the kernel
matrix to create a projection into the space orthogonal to the basis vectors chosen, to ensure we find
the maximum variance of the data in the projected space. The deflation step can be carried out as
follows. Let $\tau = K[:,i] = XX'e_i$, where $e_i$ is the $i$th unit vector. We know that primal PCA
deflation can be carried out with respect to the features in the following way:
$$\hat{X}' = \left(I - \frac{uu'}{u'u}\right) X',$$
where $u$ is the projection direction defined by the chosen eigenvector and $\hat{X}$ is the deflated matrix.
However, in sparse KPCA, $u = X'e_i$ because the projection directions are simply the examples in
$X$. Therefore, for sparse KPCA we have:
$$\hat{X}\hat{X}' = X\left(I - \frac{uu'}{u'u}\right)\left(I - \frac{uu'}{u'u}\right)X' = XX' - \frac{XX'e_ie_i'XX'}{e_i'XX'e_i} = K - \frac{K[:,i]\,K[:,i]'}{K[i,i]}.$$
Therefore, given a kernel matrix $K$, the deflated kernel matrix $\hat{K}$ can be computed as follows:
$$\hat{K} = K - \frac{\tau\tau'}{K[i_k, i_k]} \qquad (3)$$
where $\tau = K[:, i_k]$ and $i_k$ denotes the latest element in the vector $\mathbf{i}$. The algorithm is presented
below in Algorithm 1, where we use the notation $K^{.2}$ to denote component-wise squaring; division
of vectors is also assumed to be component-wise.
Algorithm 1: A matching pursuit algorithm for kernel principal components analysis (i.e., sparse KPCA)
Input: Kernel $K$, sparsity parameter $k > 0$.
1: initialise $\mathbf{i} = [\,]$
2: for $j = 1$ to $k$ do
3:   set $i_j$ to the index of $\max\left\{\frac{(K^{.2})'\mathbf{1}}{\operatorname{diag}\{K\}}\right\}$
4:   set $\tau = K[:, i_j]$ and deflate the kernel matrix: $K = K - \frac{\tau\tau'}{K[i_j, i_j]}$
5: end for
6: compute $\tilde{K}$ using $\mathbf{i}$ and Equation (5)
Output: sparse matrix approximation $\tilde{K}$
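A direct NumPy transcription of Algorithm 1, together with the Nyström reconstruction of Equation (5), might look as follows; this is our sketch of the pseudocode above rather than the authors' code, and it assumes the full kernel matrix fits in memory.

```python
import numpy as np

def sparse_kpca(K, k):
    """Matching pursuit for sparse KPCA (Algorithm 1).

    Greedily picks k basis indices by maximising the Rayleigh quotient
    (K^2)[i,i] / K[i,i] over candidates i, deflating the kernel matrix
    after each pick (Eq. (3)); then builds the Nystrom projection (Eq. (5))
    from the chosen indices alone."""
    K_orig = K.copy()
    K = K.copy()
    idx = []
    for _ in range(k):
        diag = np.diag(K).copy()
        # Rayleigh quotient per candidate: sum_j K[j,i]^2 / K[i,i]
        scores = (K ** 2).sum(axis=0) / np.where(diag > 1e-12, diag, np.inf)
        i = int(np.argmax(scores))
        idx.append(i)
        tau = K[:, i].copy()
        K = K - np.outer(tau, tau) / tau[i]          # deflation, Eq. (3)
    # Nystrom reconstruction defined solely by the chosen indices, Eq. (5)
    sub = np.ix_(idx, idx)
    K_tilde = K_orig[:, idx] @ np.linalg.solve(K_orig[sub], K_orig[idx, :])
    return idx, K_tilde
```

On an exactly low-rank kernel the residual trace $\operatorname{tr}\{K - \tilde{K}\}$ drops to zero once $k$ reaches the rank, matching the Frobenius-norm argument of Theorem 1 below.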
This algorithm is presented in Algorithm 1 and is equivalent to the algorithm proposed by [8].
However, their motivation comes from the stance of finding a low rank matrix approximation of
the kernel matrix. They proceed by looking for an approximation $\tilde{K} = K[:,\mathbf{i}]\,T$ for a set $\mathbf{i}$ such
that the Frobenius-norm residual, measured as $\operatorname{tr}\{K - K[:,\mathbf{i}]\,T\} = \operatorname{tr}\{K - \tilde{K}\}$, is minimal.
Their algorithm finds the set of indices i and the projection matrix T . However, the use of T in
computing the low rank matrix approximation seems to imply the need for additional information
from outside of the chosen basis vectors in order to construct this approximation. However, we
show that a projection into the space defined solely by the chosen indices is enough to reconstruct
the kernel matrix and does not require any extra information.² The projection is the well known
Nystr?om method [11].
An orthogonal projection $P_{\mathbf{i}}(\phi(x_j))$ of a feature vector $\phi(x_j)$ into a subspace defined only by the
set of indices $\mathbf{i}$ can be expressed as $P_{\mathbf{i}}(x_j) = \bar{X}'(\bar{X}\bar{X}')^{-1}\bar{X}\phi(x_j)$, where $\bar{X} = X[\mathbf{i},:]$ are the $\mathbf{i}$
training examples from the data matrix $X$. It follows that
$$P_{\mathbf{i}}(x_j)'P_{\mathbf{i}}(x_j) = \phi(x_j)'\bar{X}'(\bar{X}\bar{X}')^{-1}\bar{X}\bar{X}'(\bar{X}\bar{X}')^{-1}\bar{X}\phi(x_j) = K[\mathbf{i},j]\,K[\mathbf{i},\mathbf{i}]^{-1}K[j,\mathbf{i}], \qquad (4)$$
with $K[\mathbf{i},j]$ denoting the kernel entries between the index set $\mathbf{i}$ and the feature vector $\phi(x_j)$. This
gives us the following projection into the space defined by $\mathbf{i}$:
$$\tilde{K} = K[:,\mathbf{i}]\,K[\mathbf{i},\mathbf{i}]^{-1}K[:,\mathbf{i}]'. \qquad (5)$$
Claim 1. The sparse kernel principal components analysis algorithm is a compression scheme.
Proof. We can reconstruct the projection from the set of chosen indices i using Equation (4). Hence,
i forms a compression set.
We now prove that Smola and Schölkopf's low rank matrix approximation algorithm [8] (without
sub-sampling)³ is equivalent to the sparse kernel principal components analysis presented in this paper
(Algorithm 1).
Theorem 1. Without sub-sampling, Algorithm 1 is equivalent to Algorithm 2 of [8].
² In their book, Smola and Schölkopf redefine their kernel approximation in the same way as we have done [5]; however they do not make the connection that it is a compression scheme (see Claim 1).
³ We do not use the '59-trick' in our algorithm — although its inclusion would be trivial and would result in the same algorithm as in [8].
Proof. Let $K$ be the kernel matrix and let $K[:,i]$ be the $i$th column of the kernel matrix. Assume $X$ is
the input matrix containing rows of vectors that have already been mapped into a higher dimensional
feature space using $\phi$, such that $X = (\phi(x_1), \dots, \phi(x_m))'$. Smola and Schölkopf [8] state in section
4.2 of their paper that their algorithm 2 finds a low rank approximation of the kernel matrix such that
it minimises the Frobenius norm $\|X - \tilde{X}\|^2_{\mathrm{Frob}} = \operatorname{tr}\{K - \tilde{K}\}$, where $\tilde{X}$ is the low rank approximation
of $X$. Therefore, we need to prove that Algorithm 1 also minimises this norm.
We would like to show that the maximum reduction in the Frobenius norm between the kernel $K$
and its projection $\tilde{K}$ is in actual fact achieved by the choice of basis vectors that maximise the Rayleigh quotient
and deflate according to Equation (3). At each stage we deflate by
$$K = K - \frac{\tau\tau'}{K[i_k, i_k]}.$$
The trace $\operatorname{tr}\{K\} = \sum_{i=1}^{m} K[i,i]$ is the sum of the diagonal elements of the matrix $K$. Therefore,
after one deflation step,
$$\operatorname{tr}\{\hat{K}\} = \operatorname{tr}\{K\} - \frac{\operatorname{tr}\{\tau\tau'\}}{K[i_k,i_k]} = \operatorname{tr}\{K\} - \frac{\operatorname{tr}\{\tau'\tau\}}{K[i_k,i_k]} = \operatorname{tr}\{K\} - \frac{K^2[i_k,i_k]}{K[i_k,i_k]}.$$
The last term of the final equation corresponds exactly to the Rayleigh quotient of Equation (2).
Therefore the maximisation of the Rayleigh quotient does indeed correspond to the maximum reduction in the Frobenius norm between the approximated matrix $\tilde{X}$ and $X$.
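This trace identity is easy to confirm numerically; a minimal check (reusing NumPy, with the same conventions as the sketch above):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))
K = X @ X.T
i = 7                                         # any candidate index
rayleigh = (K[:, i] ** 2).sum() / K[i, i]     # e_i' K^2 e_i / e_i' K e_i
K_deflated = K - np.outer(K[:, i], K[:, i]) / K[i, i]
assert np.isclose(np.trace(K) - np.trace(K_deflated), rayleigh)
```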
2.2 A generalisation error bound for sparse kernel principal components analysis
We use the sample compression framework of [3] to bound the generalisation error of the sparse
KPCA algorithm. Note that kernel PCA bounds [7] do not use sample compression in order to
bound the true error. As pointed out above, we use the simple fact that this algorithm can be viewed
as a compression scheme. No side information is needed in this setting and a simple application
of [3] is all that is required. That said the usual application of compression bounds has been for
classification algorithms, while here we are considering a subspace method.
Theorem 2. Let $A_k$ be any learning algorithm having a reconstruction function that maps compression sets to subspaces. Let $m$ be the size of the training set $S$, let $k$ be the size of the compression
set, and let $\hat{E}_{m-k}[\ell(A_k(S))]$ be the residual loss between the $m-k$ points outside of the compression
set and their projections into a subspace. Then with probability $1-\delta$, the expected loss $E[\ell(A_k(S))]$
of algorithm $A_k$ given any training set $S$ can be bounded by
$$E[\ell(A_k(S))] \leq \min_{1\leq t\leq k}\left[\hat{E}_{m-t}[\ell(A_t(S))] + R\sqrt{\frac{1}{2(m-t)}\left(t\ln\frac{em}{t} + \ln\frac{2m}{\delta}\right)}\right],$$
where $\ell(\cdot) \geq 0$ and $R = \sup \ell(\cdot)$.
Proof. Consider the case where we have a compression set of size $k$. Then we have $\binom{m}{k}$ different
ways of choosing the compression set. Given $\delta$ confidence we apply Hoeffding's bound to the $m-k$
points not in the compression set, once for each choice, by setting it equal to $\delta/\binom{m}{k}$. Solving for $\epsilon$
gives us the theorem when we further apply a factor $1/m$ to $\delta$ to ensure one application for each
possible choice of $k$. The minimisation over $t$ chooses the best value, making use of the fact that
using more dimensions can only reduce the expected loss on test points.
We now consider the application of the above bound to sparse KPCA. Let the corresponding loss
function be defined as
$$\ell(A_t(S))(x) = \|x - P_{\mathbf{i}_t}(x)\|^2,$$
where $x$ is a test point and $P_{\mathbf{i}_t}(x)$ its projection into the subspace determined by the set $\mathbf{i}_t$ of indices
returned by $A_t(S)$. Thus we can give a more specific loss bound in the case where we use a Gaussian
kernel in the sparse kernel principal components analysis.
Corollary 1 (Sample compression bound for sparse KPCA). Using a Gaussian kernel and all of the
definitions from Theorem 2, we get the following bound:
$$E[\ell(A(S))] \leq \min_{1\leq t\leq k}\left[\frac{1}{m-t}\sum_{i=1}^{m-t}\|x_i - P_{\mathbf{i}_t}(x_i)\|^2 + \sqrt{\frac{1}{2(m-t)}\left(t\ln\frac{em}{t} + \ln\frac{2m}{\delta}\right)}\right].$$
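Evaluating Corollary 1 along the matching pursuit path is cheap. In the sketch below (our code, not the paper's), `residuals[t-1]` is assumed to already hold the empirical term $\frac{1}{m-t}\sum_{i=1}^{m-t}\|x_i - P_{\mathbf{i}_t}(x_i)\|^2$ after $t$ basis vectors, and a Gaussian kernel is assumed so that $R = 1$:

```python
import numpy as np

def skpca_compression_bound(residuals, m, delta=0.05):
    """Corollary 1: probabilistic upper bound on the expected residual of
    sparse KPCA, minimised over the number t of basis vectors used."""
    best, best_t = np.inf, 0
    for t, res in enumerate(residuals, start=1):
        slack = np.sqrt((t * np.log(np.e * m / t)
                         + np.log(2.0 * m / delta)) / (2.0 * (m - t)))
        if res + slack < best:
            best, best_t = res + slack, t
    return best, best_t
```

The minimising $t$ is the bound's suggestion for the number of dimensions to retain — the quantity read off the plots discussed next.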
Note that R corresponds to the smallest radius of a ball that encloses all of the training points. Hence,
for the Gaussian kernel R equals 1. We now compare the sample compression bound proposed
above for sparse KPCA with the kernel PCA bound introduced by [7]. The left hand side of Figure 1
shows plots for the test error residuals (for the Boston housing data set) together with its upper
bounds computed using the bound of [7] and the sample compression bound of Corollary 1. The
sample compression bound is much tighter than the KPCA bound and also non-trivial (unlike the
KPCA bound).
The sample compression bound is at its lowest point after 43 basis vectors have been added. We
speculate that at this point the 'true' dimensions of the data have been found and that all other dimensions correspond to 'noise'. This corresponds to the point at which the plot of the residual becomes
linear, suggesting dimensions with uniform noise. We carry out an extra toy experiment to help
assess whether or not this is true and to show that the sample compression bound can help indicate
when the principal components have captured most of the actual data. The right hand side plot of
Figure 1 depicts the results of a toy experiment where we randomly sampled 1000 examples with
450 dimensions from a Gaussian distribution with zero mean and unit variance. We then ensured
that 50 dimensions contained considerably larger eigenvalues than the remaining 400. From the
right plot of Figure 1 we see that the test residual keeps dropping at a constant rate after 50 basis
vectors have been added. The compression bound picks 46 dimensions with the largest eigenvalues,
however, the KPCA bound of [7] is much more optimistic and is at its lowest point after 30 basis
vectors, suggesting erroneously that SKPCA has captured most of the data in 30 dimensions. Therefore, as well as being tighter and non-trivial, the compression bound is much better at predicting the
best choice for the number of dimensions to use with sparse KPCA. Note that we carried out this
experiment without randomly permuting the projections into a subspace because SKPCA is rotation
invariant and will always choose the principal components with the largest eigenvalues.
[Figure 1 appears here: two panels titled 'Bound plots for sparse kernel PCA', each plotting Residual against Level of sparsity, with curves for the PCA bound, the sample compression bound, and the test residual; the marked points (30, 1.84) and (46, 0.7984) indicate the minima of the PCA bound and the compression bound on the right panel.]
Figure 1: Bound plots for sparse kernel PCA comparing the sample compression bound proposed
in this paper and the already existing PCA bound. The plot on the left hand side is for the Boston
Housing data set and the plot on the right is for a Toy experiment with 1000 training examples (and
450 dimensions) drawn randomly from a Gaussian distribution with zero mean and unit variance.
3 Kernel matching pursuit
Unfortunately, the theory of the last section, where we gave a sample compression bound for SKPCA
cannot be applied to KMP. This is because the algorithm needs information from outside of the
compression set in order to construct its regressors and make predictions. However, we can use
a VC argument together with a sample compression trick in order to derive a bound for KMP in
terms of the level of sparsity achieved, by viewing the sparsity achieved in the feature space as a
compression scheme. Please note that we do not derive or reproduce the KMP algorithm here and
advise the interested reader to read the manuscript of [10] for the algorithmic details.
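For orientation only, a generic matching-pursuit least-squares loop in the spirit of [10] is sketched below; it is not the exact KMP algorithm of Vincent and Bengio (which also covers back-fitting and pre-fitting variants), just the basic greedy selection of kernel basis functions that most reduce the squared residual.

```python
import numpy as np

def kmp_sketch(K, y, k):
    """Greedy matching-pursuit least squares over kernel basis functions:
    at each step, pick the kernel column that (with its optimal coefficient)
    most reduces the squared residual, then update the residual."""
    r = y.astype(float).copy()              # current residual
    idx, coefs = [], []
    for _ in range(k):
        num = K.T @ r                       # <K_j, r> for every column j
        den = (K ** 2).sum(axis=0) + 1e-12  # <K_j, K_j>, guarded
        gains = num ** 2 / den              # residual reduction per column
        gains[idx] = -np.inf                # never re-pick a column
        j = int(np.argmax(gains))
        a = num[j] / den[j]                 # optimal coefficient for column j
        idx.append(j)
        coefs.append(a)
        r = r - a * K[:, j]
    return idx, np.array(coefs)
```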
3.1 A generalisation error bound for kernel matching pursuit
VC bounds have commonly been used to bound learning algorithms whose hypothesis spaces are
infinite. One problem with these results is that the VC-dimension can sometimes be infinite even in
cases where learning is successful (e.g., the SVM). However, in this section we can avoid this issue
by making use of the fact that the VC-dimension of the set of linear threshold functions is simply the
dimensionality of the function class. In the kernel matching pursuit algorithm this translates directly
into the number of basis vectors chosen, and hence permits a standard VC argument.
The natural loss function for KMP is a regression loss — however, in order to use standard VC bounds we
map the regression loss into a classification loss in the following way.
Definition 1. Let $S \sim \mathcal{D}$ be a regression training sample generated i.i.d. from a fixed but unknown
probability distribution $\mathcal{D}$. Given the error $\ell(f) = |f(x) - y|$ for a regression function $f$ between
training example $x$ and regression output $y$, we can define, for some fixed positive scalar $\epsilon \in \mathbb{R}$, the
corresponding true classification loss (error) as
$$\ell_\epsilon(f) = \Pr_{(x,y)\sim\mathcal{D}}\left\{|f(x) - y| > \epsilon\right\}.$$
Similarly, we can define the corresponding empirical classification loss as
$$\hat{\ell}_\epsilon(f) = \ell^S_\epsilon(f) = \Pr_{(x,y)\sim S}\left\{|f(x) - y| > \epsilon\right\} = \hat{E}_{(x,y)\sim S}\left\{I(|f(x) - y| > \epsilon)\right\},$$
where $I$ is the indicator function and $S$ is suppressed when clear from context.
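The empirical version is a one-liner; for instance (with `f_pred` assumed to hold the KMP outputs on the sample):

```python
import numpy as np

def empirical_eps_loss(f_pred, y, eps):
    """Definition 1: fraction of points whose regression error exceeds eps."""
    return float(np.mean(np.abs(f_pred - y) > eps))
```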
Now that we have a loss function that is binary we can make a simple sample compression argument,
that counts the number of possible subspaces, together with a traditional VC style bound to upper
bound the expected loss of KMP. To help keep the notation consistent with earlier definitions we
will denote the indices of the chosen basis vectors by i. The indices of i are chosen from the training
sample S and we denote Si to be those samples indexed by the vector i. Given these definitions and
the bound of Vapnik and Chervonenkis [9] we can upper bound the true loss of KMP as follows.
Theorem 3. Fix $\epsilon \in \mathbb{R}$, $\delta > 0$. Let $A$ be the regression algorithm of KMP, $m$ the size of the training
set $S$ and $k$ the size of the set $\mathbf{i}$ of chosen basis vectors. Let $S$ be reordered so that the last $m-k$ points
are outside of the set $\mathbf{i}$ and let $t = \sum_{i=m-k}^{m} I(|f(x_i) - y_i| > \epsilon)$ be the number of errors for those
points in $S \setminus S_{\mathbf{i}}$. Then with probability $1-\delta$ over the generation of the training set $S$ the expected
loss $E[\ell(\cdot)]$ of algorithm $A$ can be bounded by
$$E[\ell(A(S))] \leq \frac{2}{m-k-t}\left((k+1)\log\frac{4e(m-k-t)}{k+1} + k\log\frac{em}{k} + t\log\frac{e(m-k)}{t} + \log\frac{2m^2}{\delta}\right).$$
Proof. First consider a fixed size $k$ for the compression set and number of errors $t$. Let $S_1 =
\{x_{i_1}, \dots, x_{i_k}\}$ be the set of $k$ training points chosen by the KMP regressor, $S_2 = \{x_{i_{k+1}}, \dots, x_{i_{k+t}}\}$
the set of points erred on in training, and $\hat{S} = S \setminus (S_1 \cup S_2)$ the points outside of the compression
set ($S_1$) and training error set ($S_2$). Suppose that the first $k$ points form the compression set and
the next $t$ are the errors of the KMP regressor. Since the remaining $m-k-t$ points $\hat{S}$ are drawn
independently we can apply the VC bound [9] to the $\ell_\epsilon$ loss to obtain the bound
$$\Pr\left\{S : \hat{\ell}^{\hat{S}}_\epsilon(f) = 0,\ \ell_\epsilon(f) > \epsilon\right\} \leq 2\left(\frac{4e(m-k-t)}{k+1}\right)^{k+1} 2^{-\epsilon(m-k-t)/2},$$
where we have made use of a bound on the number of dichotomies that can be generated by parallel
hyperplanes [1], which is $\sum_{i=0}^{k+d-1}\binom{md-1}{i} \leq \left(\frac{e(md-1)}{k+d-1}\right)^{k+d-1}$, where $d$ is the number
of parallel hyperplanes and equals 2 in our case. We now need to consider all of the ways that the
k basis vectors and t error points might have occurred and apply the union bound over all of these
possibilities. This gives the bound
$$\Pr\left\{S : \exists f \in \operatorname{span}\{S_1\} \text{ s.t. } \hat{\ell}^{S_2}_\epsilon(f) = 1,\ \hat{\ell}^{\hat{S}}_\epsilon(f) = 0,\ \ell_\epsilon(f) > \epsilon\right\} \leq \binom{m}{k}\binom{m-k}{t}\, 2\left(\frac{4e(m-k-t)}{k+1}\right)^{k+1} 2^{-\epsilon(m-k-t)/2}. \qquad (6)$$
Finally we need to consider all possible choices of the values of $k$ and $t$. The number of these
possibilities is clearly upper bounded by $m^2$. Setting $m^2$ times the rhs of (6) equal to $\delta$ and solving
for $\epsilon$ gives the result.
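The bound only needs $m$, the sparsity $k$, the error count $t$ and the confidence $\delta$, so it can be evaluated along the whole KMP path; the sketch below assumes natural logarithms where the theorem leaves the base implicit.

```python
import numpy as np

def kmp_vc_bound(m, k, t, delta=0.05):
    """Theorem 3: bound on the expected eps-classification loss of KMP with
    k basis vectors and t training errors outside the compression set."""
    err_term = 0.0 if t == 0 else t * np.log(np.e * (m - k) / t)
    return (2.0 / (m - k - t)) * (
        (k + 1) * np.log(4 * np.e * (m - k - t) / (k + 1))
        + k * np.log(np.e * m / k)
        + err_term
        + np.log(2.0 * m ** 2 / delta))
```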
This is the first upper bound on the generalisation error for KMP that we are aware of and as such
we cannot compare the bound against any others. Figure 2 plots the KMP test error against the loss
[Figure 2 appears here: a plot titled 'KMP error on Boston housing data set' showing Loss against Level of sparsity, with curves for the bound and the KMP test error.]
Figure 2: Plot of the KMP bound against its test error. We used 450 examples for training and 56 for testing. The bound was scaled down by a factor of 5.
bound given by Theorem 3. The bound value has been scaled by 5 in order to get the correct pictorial
representation of the two plots. Figure 2 shows that its minimum directly coincides with the lowest test
error (after 17 basis vectors). This motivates a training algorithm for KMP that would use the bound
as the minimisation criterion and stop once the bound fails to become smaller, hence yielding a more
automated training procedure.
4 Extensions
The same approach that we have used for bounding the performance of kernel matching pursuit can
be used to bound a matching pursuit version of kernel canonical correlation analysis (KCCA) [6].
By choosing the basis vectors greedily to optimise the quotient
$$\max_i \; \rho_i = \frac{e_i' K_x K_y e_i}{\sqrt{e_i' K_x^2 e_i \;\; e_i' K_y^2 e_i}},$$
and proceeding in the same manner as Algorithm 1 by deflating after each pair of basis vectors are
chosen, we create a sparsely defined subspace within which we can run the standard CCA algorithm.
This again means that the overall algorithm fails to be a compression scheme as side information is
required. However, we can use the same approach described for KMP to bound the expected fit of
the projections from the two views. The resulting bound has the following form.
Theorem 4. Fix $\epsilon \in \mathbb{R}$, $\delta > 0$. Let $A$ be the SKCCA algorithm, $m$ the size of the paired training sets
$S^{\mathcal{X}\times\mathcal{Y}}$ and $k$ the cardinality of the set $\mathbf{i}$ of chosen basis vectors. Let $S^{\mathcal{X}\times\mathcal{Y}}$ be reordered so that the
last $m-k$ paired data points are outside of the set $\mathbf{i}$ and define $t = \sum_{i=m-k}^{m} I(|f_x(x_i) - f_y(y_i)| > \epsilon)$ to be the number of errors for those points in $S^{\mathcal{X}\times\mathcal{Y}} \setminus S^{\mathcal{X}\times\mathcal{Y}}_{\mathbf{i}}$, where $f_x$ is the projection function
of the $\mathcal{X}$ view and $f_y$ the projection function of the $\mathcal{Y}$ view. Then with probability $1-\delta$ over the
generation of the paired training sets $S^{\mathcal{X}\times\mathcal{Y}}$ the expected loss $E[\ell(\cdot)]$ of algorithm $A$ can be bounded
by
$$E[\ell(A(S))] \leq \frac{2}{m-k-t}\left((k+1)\log\frac{4e(m-k-t)}{k+1} + k\log\frac{em}{k} + t\log\frac{e(m-k)}{t} + \log\frac{2m^2}{\delta}\right).$$
5 Discussion
Matching pursuit is a meta-scheme for creating learning algorithms for a variety of tasks. We have
presented novel techniques that make it possible to analyse this style of algorithm using a combination of compression scheme ideas and more traditional learning theory. We have shown how sparse
KPCA is in fact a compression scheme and demonstrated bounds that are able to accurately guide dimension selection in some cases. We have also used the techniques to bound the performance of the
kernel matching pursuit (KMP) algorithm, to reinforce the generality of the approach, and to indicate
how the approach can be extended to a matching pursuit version of KCCA.
The results in this paper imply that the performance of any learning algorithm from the matching
pursuit family can be analysed using a combination of sparse and traditional learning bounds. The
bounds give a general theoretical justification of the framework and suggest potential applications
of matching pursuit methods to other learning tasks such as novelty detection, ranking and so on.
Acknowledgements
The work was sponsored by the PASCAL network of excellence and the SMART project.
References
[1] M. Anthony. Partitioning points by parallel planes. Discrete Mathematics, 282:17–21, 2004.
[2] S. Floyd and M. Warmuth. Sample compression, learnability, and the Vapnik-Chervonenkis dimension. Machine Learning, 21(3):269–304, 1995.
[3] N. Littlestone and M. K. Warmuth. Relating data compression and learnability. Technical report, University of California Santa Cruz, Santa Cruz, CA, 1986.
[4] S. Mallat and Z. Zhang. Matching pursuit with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397–3415, 1993.
[5] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[6] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, U.K., 2004.
[7] J. Shawe-Taylor, C. K. I. Williams, N. Cristianini, and J. Kandola. On the eigenspectrum of the Gram matrix and the generalization error of kernel-PCA. IEEE Transactions on Information Theory, 51(7):2510–2522, 2005.
[8] A. J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In Proceedings of 17th International Conference on Machine Learning, pages 911–918. Morgan Kaufmann, San Francisco, CA, 2000.
[9] V. N. Vapnik and A. Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264–280, 1971.
[10] P. Vincent and Y. Bengio. Kernel matching pursuit. Machine Learning, 48:165–187, 2002.
[11] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems, volume 13, pages 682–688. MIT Press, 2001.
Improving on Expectation Propagation
Manfred Opper
Computer Science, TU Berlin
[email protected]
Ulrich Paquet
Computer Laboratory, University of Cambridge
[email protected]
Ole Winther
Informatics and Mathematical Modelling, Technical University of Denmark
[email protected]
Abstract
A series of corrections is developed for the fixed points of Expectation Propagation (EP), which is one of the most popular methods for approximate probabilistic
inference. These corrections can lead to improvements of the inference approximation or serve as a sanity check, indicating when EP yields unreliable results.
1 Introduction
The expectation propagation (EP) message passing algorithm is often considered as the method of
choice for approximate Bayesian inference when both good accuracy and computational efficiency
are required [5]. One recent example is a comparison of EP with extensive MCMC simulations for
Gaussian process (GP) classifiers [4], which has shown that not only the predictive distribution, but
also the typically much harder marginal likelihood (the partition function) of the data, are approximated remarkably well for a variety of data sets. However, while such empirical studies hold great
value, they can not guarantee the same performance on other data sets or when completely different
types of Bayesian models are considered.
In this paper methods are developed to assess the quality of the EP approximation. We compute
explicit expressions for the remainder terms of the approximation. This leads to various corrections
for partition functions and posterior distributions. Under the hypothesis that the EP approximation
works well, we identify quantities which can be assumed to be small and can be used in a series
expansion of the corrections with increasing complexity. The computation of low order corrections
in this expansion is often feasible, typically require only moderate computational efforts, and can
lead to an improvement to the EP approximation or to the indication that the approximation cannot
be trusted.
2 Expectation Propagation in a Nutshell
Since it is the goal of this paper to compute corrections to the EP approximation, we will not discuss details of EP algorithms but rather characterise the fixed points which are reached when such
algorithms converge.
EP is applied to probabilistic models with an unobserved latent variable x having an intractable
distribution p(x). In applications p(x) is usually the Bayesian posterior distribution conditioned on
a set of observations. Since the dependency on the latter variables is not important for the subsequent
theory, we will skip them in our notation.
It is assumed that $p(\mathbf{x})$ factorizes into a product of terms $f_n$ such that
$$p(\mathbf{x}) = \frac{1}{Z}\prod_n f_n(\mathbf{x}), \qquad (1)$$
where the normalising partition function $Z = \int d\mathbf{x}\prod_n f_n(\mathbf{x})$ is also intractable. We then assume
an approximation to $p(\mathbf{x})$ in the form
$$q(\mathbf{x}) = \prod_n g_n(\mathbf{x}), \qquad (2)$$
where the terms gn (x) belong to a tractable, e.g. exponential family of distributions. To compute
the optimal parameters of the gn term approximation a set of auxiliary tilted distributions is defined
via
$$q_n(\mathbf{x}) = \frac{1}{Z_n}\,\frac{q(\mathbf{x})\,f_n(\mathbf{x})}{g_n(\mathbf{x})}. \qquad (3)$$
Here a single approximating term gn is replaced by an original term fn . Assuming that this replacement leaves qn still tractable, the parameters in gn are determined by the condition that q(x)
and all qn (x) should be made as similar as possible. This is usually achieved by requiring that these
distributions share a set of generalised moments (which usually coincide with the sufficient statistics
of the exponential family). Note that we will not assume that this expectation consistency [8] for
the moments is derived by minimising a Kullback–Leibler divergence, as was done in the original
derivations of EP [5]. Such an assumption would limit the applicability of the approximate inference
and exclude e.g. the approximation of models with binary, Ising variables by a Gaussian model as
in one of the applications in the last section.
The corresponding approximation to the normalising partition function in (1) was given in [8] and
[7], and reads in our present notation¹
$$Z_{EP} = \prod_n Z_n. \qquad (4)$$
3 Corrections to EP
An expression for the remainder terms which are neglected by the EP approximation can be obtained
by solving for fn in (3), and taking the product to get
$$\prod_n f_n(\mathbf{x}) = \prod_n \frac{Z_n\, q_n(\mathbf{x})\, g_n(\mathbf{x})}{q(\mathbf{x})} = Z_{EP}\; q(\mathbf{x}) \prod_n \frac{q_n(\mathbf{x})}{q(\mathbf{x})}. \qquad (5)$$
Hence $Z = \int d\mathbf{x}\prod_n f_n(\mathbf{x}) = Z_{EP}\,R$, with
$$R = \int d\mathbf{x}\; q(\mathbf{x}) \prod_n \frac{q_n(\mathbf{x})}{q(\mathbf{x})} \qquad \text{and} \qquad p(\mathbf{x}) = \frac{1}{R}\; q(\mathbf{x}) \prod_n \frac{q_n(\mathbf{x})}{q(\mathbf{x})}. \qquad (6)$$
This shows that corrections to EP are small when all distributions qn are indeed close to q, justifying
the optimality criterion of EP. For related expansions, see [2, 3, 9].
Exact probabilistic inference with the corrections described here again leads to intractable computations. However, we can derive exact perturbation expansions involving a series of corrections with
increasing computational complexity. Assuming that EP already yields a good approximation, the
computation of a small number of these terms may be sufficient to obtain the most dominant corrections. On the other hand, when the leading corrections come out large or do not sufficiently decrease
with order, this may indicate that the EP approximation is inaccurate. Two such perturbation expansions are presented in this section.
¹ The definition of the partition functions $Z_n$ is slightly different from previous works.
3.1 Expansion I: Clusters
The most basic expansion is based on the variables $\epsilon_n(\mathbf{x}) = \frac{q_n(\mathbf{x})}{q(\mathbf{x})} - 1$, which we can assume to be
typically small when the EP approximation is good. Expanding the products in (6) we obtain the
correction to the partition function
$$R = \int d\mathbf{x}\; q(\mathbf{x}) \prod_n \left(1 + \epsilon_n(\mathbf{x})\right) \qquad (7)$$
$$= 1 + \sum_{n_1<n_2}\left\langle \epsilon_{n_1}(\mathbf{x})\,\epsilon_{n_2}(\mathbf{x})\right\rangle_q + \sum_{n_1<n_2<n_3}\left\langle \epsilon_{n_1}(\mathbf{x})\,\epsilon_{n_2}(\mathbf{x})\,\epsilon_{n_3}(\mathbf{x})\right\rangle_q + \dots, \qquad (8)$$
which is a finite series in terms of growing clusters of 'interacting' variables $\epsilon_n(\mathbf{x})$. Here the
brackets $\langle\cdots\rangle_q$ denote expectations with respect to the distribution $q$. Note that the first order term
$\sum_n \langle \epsilon_n(\mathbf{x})\rangle_q = 0$ vanishes by the normalization of $q_n$ and $q$. As we will see later, the computation
of corrections is feasible when $q_n$ is just a finite mixture of $K$ simpler densities from the exponential
family to which $q$ belongs. Then the number of mixture components in the $j$-th term of the expansion
of $R$ is just of the order $O(K^j)$ and an evaluation of low order terms should be tractable.
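When the closed-form exponential-family integrals are not at hand, the low order terms can still be sanity-checked by simple Monte Carlo over $q$; the sketch below (our construction, not the paper's) estimates the second-order term of (8) from density callables:

```python
import numpy as np

def second_order_correction(sample_q, dens_q, dens_qn_list, num_samples=100_000):
    """Monte Carlo estimate of the second-order term of Eq. (8):
        sum_{n1 < n2} < eps_{n1}(x) eps_{n2}(x) >_q ,
    with eps_n(x) = q_n(x)/q(x) - 1 and all expectations over x ~ q."""
    x = sample_q(num_samples)                     # samples from q
    qx = dens_q(x)                                # q(x), shape (S,)
    eps = np.stack([qn(x) / qx - 1.0 for qn in dens_qn_list])  # (N, S)
    total = 0.0
    N = eps.shape[0]
    for a in range(N):
        for b in range(a + 1, N):
            total += np.mean(eps[a] * eps[b])
    return total
```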
In a similar way, we get
$$p(\mathbf{x}) = \frac{q(\mathbf{x})\left(1 + \sum_n \epsilon_n(\mathbf{x}) + \sum_{n_1<n_2}\epsilon_{n_1}(\mathbf{x})\epsilon_{n_2}(\mathbf{x}) + \dots\right)}{1 + \sum_{n_1<n_2}\left\langle\epsilon_{n_1}(\mathbf{x})\epsilon_{n_2}(\mathbf{x})\right\rangle_q + \dots}, \qquad (9)$$
In order to keep the resulting density normalized to one, we should keep as many terms in the
numerator as in the denominator. As an example, the first order correction to q(x) is
$$p(\mathbf{x}) \approx \sum_n q_n(\mathbf{x}) - (N-1)\,q(\mathbf{x}). \qquad (10)$$
3.2 Expansion II: Cumulants
One of the most important applications of EP is to the case of statistical models with Gaussian process
priors. Here $\mathbf{x}$ is a latent variable with Gaussian prior distribution and covariance $E[\mathbf{x}\mathbf{x}'] = \mathbf{K}$,
where $\mathbf{K}$ is the kernel matrix. In this case we have $N+1$ terms $f_0, f_1, \dots, f_N$ in (1), where $f_0(\mathbf{x}) =
g_0(\mathbf{x}) = \exp[-\frac{1}{2}\mathbf{x}'\mathbf{K}^{-1}\mathbf{x}]$. For $n \geq 1$ each $f_n(\mathbf{x}) = t_n(x_n)$ is the likelihood term for the $n$th
observation, which depends only on a single component $x_n$ of the vector $\mathbf{x}$.
The corresponding approximating terms are chosen to be Gaussian of the form $g_n(\mathbf{x}) \propto e^{\gamma_n x_n - \frac{1}{2}\lambda_n x_n^2}$. The $2N$ parameters $\gamma_n$ and $\lambda_n$ are determined in such a way that $q(\mathbf{x})$ and the distributions $q_n(\mathbf{x})$ have the same first and second marginal moments $\langle x_n\rangle$ and $\langle x_n^2\rangle$. In this case, the
computation of corrections (7) would require the computation of multivariate integrals of increasing
dimensionality. Hence, a different type of expansion seems more appropriate. The main idea is to
expand with respect to the higher order cumulants of the distributions $q_n$.
To derive this expansion, we simplify (6) using the fact that $q(\mathbf{x}) = q(\mathbf{x}_{\backslash n}|x_n)\,q(x_n)$ and $q_n(\mathbf{x}) =
q(\mathbf{x}_{\backslash n}|x_n)\,q_n(x_n)$, where we have (with a slight abuse of notation) introduced $q(x_n)$ and $q_n(x_n)$,
the marginals of $q(\mathbf{x})$ and $q_n(\mathbf{x})$. Thus $p(\mathbf{x}) = \frac{1}{R}\,q(\mathbf{x})F(\mathbf{x})$ and $R = \int d\mathbf{x}\,q(\mathbf{x})F(\mathbf{x})$, where
$$F(\mathbf{x}) = \prod_n \frac{q_n(x_n)}{q(x_n)}. \qquad (11)$$
Since $q(x_n)$ and the $q_n(x_n)$ have the same first two cumulants, corrections can be expressed by the
higher cumulants of the $q_n(x_n)$ (note that the higher cumulants of $q(x_n)$ vanish). The cumulants
$c_{ln}$ of $q_n(x_n)$ are defined by their characteristic functions $\chi_n(k)$ via
$$q_n(x_n) = \int \frac{dk}{2\pi}\, e^{-ikx_n}\, \chi_n(k) \qquad \text{and} \qquad \ln \chi_n(k) = \sum_l \frac{c_{ln}}{l!}\,(i)^l k^l. \qquad (12)$$
Expressing the Gaussian marginals $q(x_n)$ by their first and second cumulants, the means $m_n$ and
the variances $S_{nn}$, and introducing the function
$$r_n(k) = \sum_{l\geq 3} \frac{c_{ln}}{l!}\,(i)^l k^l \qquad (13)$$
which contains the contributions of all higher order cumulants, we get
$$F(\mathbf{x}) = \prod_n \frac{\int dk_n\, \exp\!\left[-ik_n(x_n - m_n) - \frac{1}{2}S_{nn}k_n^2 + r_n(k_n)\right]}{\int dk_n\, \exp\!\left[-ik_n(x_n - m_n) - \frac{1}{2}S_{nn}k_n^2\right]} \qquad (14)$$
$$= \int d\boldsymbol{\eta} \prod_n \sqrt{\frac{S_{nn}}{2\pi}}\; \exp\!\left[-\sum_n \frac{S_{nn}\eta_n^2}{2}\right] \exp\!\left[\sum_n r_n\!\left(\eta_n - i\,\frac{x_n - m_n}{S_{nn}}\right)\right] \qquad (15)$$
where in the last equality we have introduced a shift of variables $\eta_n = k_n + i\,\frac{x_n - m_n}{S_{nn}}$.
An expansion can be performed with respect to the cumulants in the terms gn which had been neglected in the EP approximation. The basic computations are most easily explained for the correction
R to the partition function.
3.2.1 Correction to the partition function
Since $q(\mathbf{x})$ is a multivariate Gaussian of the form $q(\mathbf{x}) = N(\mathbf{x}; \mathbf{m}, \mathbf{S})$, the correction $R$ to the
partition function $Z$ involves a double Gaussian average over the vector $\mathbf{x}$ and the set of $\eta_n$. This can be
simplified by combining them into a single complex zero mean Gaussian random vector defined as
$z_n = \eta_n - i\,\frac{x_n - m_n}{S_{nn}}$, such that
$$R = \left\langle \exp\!\left[\sum_n r_n(z_n)\right]\right\rangle_{\mathbf{z}}. \qquad (16)$$
The most remarkable property of the Gaussian $\mathbf{z}$ is its covariance, which is easily found to be
$$\langle z_i z_j\rangle_{\mathbf{z}} = -\frac{S_{ij}}{S_{ii}S_{jj}} \;\;\text{when } i \neq j, \qquad \text{and} \qquad \langle z_i^2\rangle_{\mathbf{z}} = 0. \qquad (17)$$
The last equation has important consequences for the surviving terms in an expansion of R!
Assuming that the $r_n$ are small, we perform a power series expansion of $\ln R$:
$$\ln R = \ln\left\langle \exp\!\left[\sum_n r_n(z_n)\right]\right\rangle_{\mathbf{z}} = \sum_n \langle r_n\rangle_{\mathbf{z}} + \frac{1}{2}\left\langle\Big(\sum_n r_n\Big)^{2}\right\rangle_{\mathbf{z}} - \frac{1}{2}\Big(\sum_n \langle r_n\rangle_{\mathbf{z}}\Big)^{2} - \dots \qquad (18)$$
$$= \frac{1}{2}\sum_{m\neq n}\langle r_m r_n\rangle_{\mathbf{z}} - \dots = \frac{1}{2}\sum_{m\neq n}\sum_{l\geq 3}\frac{c_{ln}c_{lm}}{l!}\left(\frac{S_{nm}}{S_{nn}S_{mm}}\right)^{l} - \dots \qquad (19)$$
Here we have repeatedly used the fact that each factor $z_n$ in expectations $\langle z_n^l z_m^s\rangle$ has to be paired
(by Wick's theorem) with a factor $z_m$ where $m \neq n$ (diagonal terms vanish by (17)). This gives
nonzero contributions only when $l = s$, and there are $l!$ ways for pairing.²
This expansion gives a hint why EP may typically work well for multivariate models when the covariances $S_{ij}$ are small compared to the variances $S_{ii}$. While we may expect that $\ln Z_{EP} = O(N)$,
where $N$ is the number of variables $x_n$, the vanishing of the 'self interactions' indicates that corrections may not scale with $N$.
3.2.2 Correction to marginal moments
The predictive density of a novel observation can be treated by extending the Gaussian prior to
include a new latent variable $x_*$ with $E[x_*\mathbf{x}] = \mathbf{k}_*$ and $E[x_*^2] = k_{**}$, and appears as an average of a
likelihood term over the posterior marginal of $x_*$.
A correction for the predictive density can also be derived in terms of the cumulant expansion by
averaging the conditional distribution $p(x_*|\mathbf{x}) = N(x_*;\, \mathbf{k}_*'\mathbf{K}^{-1}\mathbf{x},\, \sigma_*^2)$ with $\sigma_*^2 = k_{**} - \mathbf{k}_*'\mathbf{K}^{-1}\mathbf{k}_*$.
Using the expression (15) we obtain (where we set R = 1 in (6) to lowest order)
$$p(x_*) = \int d\mathbf{x}\; p(x_*|\mathbf{x})\, p(\mathbf{x}) = N(x_*;\, \mu_{x_*}, s^2_{x_*})\left(1 + \left\langle \sum_n r_n\!\left(\eta_n - i\,\frac{x_n - m_n}{S_{nn}}\right)\right\rangle_{\boldsymbol{\eta},\,\mathbf{x}\sim N(\mathbf{x};\boldsymbol{\mu},\boldsymbol{\Sigma})} + \dots\right) \qquad (20)$$
² The terms in the expansion might be organised in Feynman graphs, where 'self interaction' loops are absent.
[Figure 1 appears here: a plot of $\log Z$ against the number of components $K$ (1 to 6).]
Figure 1: ln Z approximations obtained from q(x)'s factorization in (2), for sec. 4.1's mixture
model, as obtained by: variational Bayes (see [1] for details) as red squares; α = 1/2 in Minka's α-divergence message passing scheme, described in [6], as magenta triangles; EP as blue circles; EP
with the 2nd order correction in (8) as green diamonds. For 20 runs each, the colour intensities
correspond to the frequency of reaching different estimates. A Monte Carlo estimate of the true
ln Z, as found by parallel tempering with thermodynamic integration, is shown as a line with two-standard-deviation error bars.
where $\mu_{x_*} = \mathbf{k}_*'\mathbf{K}^{-1}\mathbf{m}$ and the variance $s^2_{x_*} = k_{**} - \mathbf{k}_*'(\mathbf{K} + \boldsymbol{\Lambda}^{-1})^{-1}\mathbf{k}_*$, and $\boldsymbol{\Lambda} = \operatorname{diag}(\boldsymbol{\lambda})$ denotes
the parameters in the Gaussian terms $g_n$. The average in (20) is over a Gaussian $\mathbf{x}$ with $\boldsymbol{\Sigma}^{-1} =
(\mathbf{K} - k_{**}^{-1}\mathbf{k}_*\mathbf{k}_*')^{-1} + \boldsymbol{\Lambda}$ and $\boldsymbol{\mu} = (x_* - \mu_{x_*})\,\sigma_*^{-2}\,\boldsymbol{\Sigma}\mathbf{K}^{-1}\mathbf{k}_* + \mathbf{m}$. By simplifying the inner
expectation over the complex Gaussian variables $\boldsymbol{\eta}$ we obtain
$$p(x_*) = N(x_*;\, \mu_{x_*}, s^2_{x_*})\left[1 + \sum_n \sum_{l\geq 3}\frac{c_{ln}}{l!}\,\frac{1}{\sqrt{S_{nn}}^{\,l}}\left\langle h_l\!\left(\frac{x_n - m_n}{\sqrt{S_{nn}}}\right)\right\rangle_{\mathbf{x}\sim N(\mathbf{x};\boldsymbol{\mu},\boldsymbol{\Sigma})} + \dots\right] \qquad (21)$$
where $h_l$ is the $l$th Hermite polynomial. The Hermite polynomials are averaged over a Gaussian
density where the only occurrence of $x_*$ is through $(x_* - \mu_{x_*})$ in $\boldsymbol{\mu}$, so that the expansion ultimately
appears as a polynomial in $x_*$. A correction to the predictive density follows from averaging $t_*(x_*)$
over (21).
4 Applications
4.1 Mixture of Gaussians
This section illustrates an example where a large first nontrivial correction term in (8) reflects an
inaccurate EP approximation. We explain this for a K-component Gaussian mixture model.
Consider $N$ observed data points $\boldsymbol{\tau}_n$ with likelihood terms $f_n(\mathbf{x}) = \sum_k \pi_k\, N(\boldsymbol{\tau}_n;\, \boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k^{-1})$, with
$n \geq 1$ and with the mixing weights $\pi_k$ forming a probability vector. The latent variables are then
$\mathbf{x} = \{\pi_k, \boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k\}_{k=1}^{K}$. For our prior on $\mathbf{x}$ we use a Dirichlet distribution and a product of Normal-Wishart densities, so that $f_0(\mathbf{x}) = \mathcal{D}(\boldsymbol{\pi})\prod_k \mathcal{NW}(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k)$. When we multiply the $f_n$ terms we
see that intractability for the mixture model arises because the number of terms in the marginal
likelihood is $K^N$, rather than because integration is intractable. The computation of lower-order
terms in (8) should therefore be immediately feasible. The approximation $q(\mathbf{x})$ and each $g_n(\mathbf{x})$ are
chosen to be of the same exponential family form as $f_0(\mathbf{x})$, where we don't require $g_n(\mathbf{x})$ to be
normalizable.
For brevity we omit the details of the EP algorithm for this mixture model, and assume here that an
EP fixed point has been found, possibly using some damping. Fig. 1 shows various approximations
to the log marginal likelihood ln Z for $\boldsymbol{\tau}_n$ coming from the acidity data set. It is evident that the
'true peak' doesn't match the peak obtained by approximate inference, and we will wrongly predict
which K maximizes the log marginal likelihood. Without having to resort to Monte Carlo methods,
the second order correction for K = 3 both corrects our prediction and already confirms that the
original approximation might be inadequate.
4.2 Gaussian Process Classification
The GP classification model arises when we observe N data points \xi_n with class labels y_n \in
{-1, 1}, and model y through a latent function x with the GP prior mentioned in sec. 3.2. The
[Figure 2 plots: two contour panels over log lengthscale ln(ℓ) (horizontal axis) and log magnitude ln(σ) (vertical axis): (a) ln R, second order, with l = 3, 4; (b) Monte Carlo ln R.]
Figure 2: A comparison of a perturbation expansion of (19) against Monte Carlo estimates of the
true correction ln R, using the USPS data set from [4].
likelihood terms for y_n are assumed to be t_n(x_n) = \Phi(y_n x_n), where \Phi(\cdot) denotes the cumulative
Normal density.
Eq. (19) shows how to compute the cumulant expansion by dovetailing the EP fixed point with the
characteristic function of q_n(x_n): From the EP fixed point we have q(x) = N(x; m, S) and g_n \propto
e^{\gamma_n x_n - \frac{1}{2}\lambda_n x_n^2}; consequently the marginal density of x_n in q(x)/g_n(x_n) from (3) is N(x_n; \mu, v^2),
where v^{-2} = 1/S_{nn} - \lambda_n and \mu = v^2 (m_n/S_{nn} - \gamma_n). Using (3) again we have
\[
q_n(x_n) = \frac{1}{Z_n} \Phi(y_n x_n)\, N(x_n; \mu, v^2).
\tag{22}
\]
The characteristic function of q_n(x_n) is obtained by the inversion of (12),
\[
\phi_n(k) = \langle e^{ikx_n} \rangle = e^{ik\mu - \frac{1}{2}k^2 v^2}\, \frac{\Phi(w_k)}{\Phi(w)}, \quad \text{with } w = \frac{y_n \mu}{\sqrt{1+v^2}} \text{ and } w_k = \frac{y_n \mu + ikv^2}{\sqrt{1+v^2}},
\tag{23}
\]
with expectations \langle \cdots \rangle being with respect to q_n(x_n). Raw moments are computed through derivatives of the characteristic function, i.e. \langle x_n^j \rangle = i^{-j} \phi_n^{(j)}(0). The cumulants c_{ln} are determined
from the derivatives of \ln \phi_n(k) evaluated at zero (or equally from raw moments, e.g. c_{3n} =
2\langle x_n \rangle^3 - 3\langle x_n \rangle \langle x_n^2 \rangle + \langle x_n^3 \rangle), such that
\[
c_{3n} = \beta^3 \gamma \,(2\gamma^2 + 3w\gamma + w^2 - 1)
\tag{24}
\]
\[
c_{4n} = -\beta^4 \gamma \,(6\gamma^3 + 12w\gamma^2 + 7w^2\gamma + w^3 - 4\gamma - 3w),
\tag{25}
\]
where \beta = v^2/\sqrt{1+v^2} and \gamma = N(w; 0, 1)/\Phi(w).
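As a sanity check on the reconstruction of (24)-(25) above, the following Python sketch (ours; the function name tilted_cumulants is hypothetical) computes w, beta, gamma and the third and fourth cumulants of q_n(x_n) directly from y_n, mu and v^2:

```python
import numpy as np
from scipy.stats import norm

def tilted_cumulants(y, mu, v2):
    """Third and fourth cumulants of q_n(x_n) proportional to
    Phi(y x) N(x; mu, v2), following equations (24)-(25)."""
    w = y * mu / np.sqrt(1.0 + v2)
    beta = v2 / np.sqrt(1.0 + v2)
    gamma = norm.pdf(w) / norm.cdf(w)
    c3 = beta**3 * gamma * (2 * gamma**2 + 3 * w * gamma + w**2 - 1)
    c4 = -beta**4 * gamma * (6 * gamma**3 + 12 * w * gamma**2
                             + 7 * w**2 * gamma + w**3 - 4 * gamma - 3 * w)
    return c3, c4
```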
An extensive MCMC evaluation of EP for GP classification on various data sets was recently given
by [4], showing that the log marginal likelihood of the data can be approximated remarkably well.
An even more accurate estimation of the approximation error is given by considering the second
order correction in (19) (computed here up to l = 4). For GPC we generally found that the l = 3
term dominates l = 4, and we do not include any higher cumulants here. Fig. 2 illustrates the ln R
correction on the binary subproblem of the USPS 3's vs. 5's digits data set, with N = 767, as was
used by [4]. We used the same kernel k(\xi, \xi') = \sigma^2 \exp(-\frac{1}{2}\|\xi - \xi'\|^2/\ell^2) as [4], and evaluated
(19) on a similar grid of ln \sigma and ln \ell values. For the same grid values we obtained Monte Carlo
estimates of ln Z, and hence ln R. They are plotted in fig. 2(b) for the cases where they estimate ln Z
to sufficient accuracy (up to four decimal places) to obtain a smoothly varying plot of ln R.³ The
correction from (19), as computed here, is O(N²), and compares favourably to the O(N³) complexity
of EP for GPC.
³ The Monte Carlo estimates in [4] are accurate enough for showing EP's close approximation to ln Z, but
not enough to make any quantified statement about ln R.
[Figure 3 plot: coefficients of the powers of x_* and the correction ratio, plotted against the location of \xi_* (roughly -6 to 6); legend: coeff of x_*^0, coeff of x_*^1, coeff of x_*^2, coeff of x_*^3, and the correction ratio p_gpc(y_* = 1)/p_corr(y_* = 1); curves shown for y_n = +1 and y_n = -1.]
Figure 3: The initial coefficients of the polynomial in x_*, as they ultimately appear in the first
nontrivial correction term in (21). Cumulants l = 3 and l = 4 were used. The coefficients are shown
for test points \xi_* after observing data points \xi_n. The ratio between the standard and (1st order)
corrected GP classification predictive density is also illustrated.
In fig. 3 we show the coefficients of the polynomial corrections (21) in powers of x_* to the predictive
density p(x_*), using 3rd and 4th cumulants. The corrections are small whenever the terms y_n m_n
are positive and large compared to the posterior variance: the non-Gaussian terms f_n(x) = t_n(x_n) \approx 1
for almost all values of x_n which have significant probability under the Gaussian distribution that
is proportional to q(x)/g_n(x_n). For these terms q_n(x) is therefore almost Gaussian and higher
cumulants are small. An example where this will no longer be the case is a GP model with t_n(x_n) = 1
for |x_n| < a and t_n(x_n) = 0 for |x_n| > a. This is a regression model y_n = x_n + \nu_n where the i.i.d. noise
variables \nu_n have uniform distribution and the observed outputs are all zero, i.e. y_n = 0. For this
case, the exact posterior variance does not shrink to zero even if the number of data points goes
to infinity. The EP approximation however has the variance decrease to zero and our corrections
increase with sample size.
4.3 Ising models
Somewhat surprising (and probably less known) is the fact that EP and our corrections apply
well to a fairly limiting case of the GP model where the terms are of the form t_n(x_n) =
e^{\theta_n x_n} (\delta(x_n + 1) + \delta(x_n - 1)), where \delta(x) is the Dirac distribution. These terms, together with
a "Gaussian" f_0(x) = \exp[\sum_{i<j} J_{ij} x_i x_j] (where we do not assume that the matrix J is negative
definite), make this GP model an Ising model with binary variables x_n = \pm 1. As shown in [8],
this model can still be treated with the same type of Gaussian term approximations as ordinary GP
models, allowing for surprisingly accurate estimation of the mean and covariance. Here we will
show the effect of our corrections for toy models, where exact inference is possible by enumeration.
The tilted distributions q_n(x_n) are biased binary distributions with cumulants: c_{3n} = -2m_n(1 - m_n^2),
c_{4n} = -2 + 8m_n^2 - 6m_n^4, etc. We will consider two different scenarios for random \theta and J
[Figure 4 plots: two log-log panels against \beta; left panel "MAD 2 node marginals", right panel "AD Free energy".]
Figure 4: The left plot shows the MAD of the estimated covariance matrix from the exact one for
different values of \beta for EP (blue), EP 2nd order l = 4 corrections (blue with triangles), Bethe or
loopy belief propagation (LBP; dashed green) and Kikuchi or generalized LBP (dash-dotted red).
The Bethe and Kikuchi approximations both give covariance estimates for all variable pairs as the
model is fully connected. The right plot shows the absolute deviation of ln Z from the true value
using second order perturbations with l = 3, 4, 5 (l = 3 is the smallest change). The remaining line
styles are the same as in the left plot.
described in detail in [8]. In the first scenario, with N = 10, the J_{ij}'s are generated independently
at random according to J_{ij} = \beta w_{ij} with w_{ij} \sim N(0, 1). For varying \beta, the maximum absolute
deviation (MAD) of the estimated covariance matrices from the exact one, max_{i,j} |\Sigma^{est}_{ij} - \Sigma^{exact}_{ij}|,
is shown in fig. 4 left. The absolute deviation on the log partition function is shown in fig. 4 right.
In the Wainwright-Jordan set-up, N = 16 nodes are either fully connected or connected to nearest
neighbors in a 4-by-4 grid. The external field (observation) strengths \theta_i are drawn from a uniform
distribution \theta_i \sim U[-d_obs, d_obs] with d_obs = 0.25. Three types of coupling strength statistics are
considered: repulsive (anti-ferromagnetic) J_{ij} \sim U[-2d_coup, 0], mixed J_{ij} \sim U[-d_coup, +d_coup],
and attractive (ferromagnetic) J_{ij} \sim U[0, +2d_coup]. Table 1 gives the MAD of marginals averaged
over 100 repetitions. The results for both set-ups lead to the conclusion that when the EP approximation works well, the correction gives an order of magnitude of improvement. In the opposite
situation, the correction might worsen the results.
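For concreteness, a small Python sketch (ours; the function names and the fully connected wiring are illustrative assumptions, with the uniform-sampling conventions taken from the text above) that draws a Wainwright-Jordan style instance and evaluates the tilted binary cumulants of sec. 4.3:

```python
import numpy as np

def wainwright_jordan(N=16, d_obs=0.25, d_coup=0.5, kind="mixed", rng=None):
    """Draw fields theta_i ~ U[-d_obs, d_obs] and fully connected couplings
    J_ij for the repulsive / mixed / attractive regimes described above."""
    rng = rng or np.random.default_rng(0)
    theta = rng.uniform(-d_obs, d_obs, size=N)
    lo, hi = {"repulsive": (-2 * d_coup, 0.0),
              "mixed": (-d_coup, d_coup),
              "attractive": (0.0, 2 * d_coup)}[kind]
    J = np.triu(rng.uniform(lo, hi, size=(N, N)), k=1)
    return theta, J + J.T

def binary_cumulants(m):
    """c3 and c4 of a +/-1 variable with mean m (cf. sec. 4.3)."""
    return -2 * m * (1 - m**2), -2 + 8 * m**2 - 6 * m**4
```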
Table 1: Average MAD of marginals in a Wainwright-Jordan set-up, comparing loopy belief propagation (LBP), log-determinant relaxation (LD), EP, EP with l = 5 correction (EP+), and EP with
only one spanning tree approximating term (EP tree).

Graph | Problem type | Coupling d_coup | LBP   | LD    | EP    | EP+        | EP tree
Full  | Repulsive    | 0.25            | 0.037 | 0.020 | 0.003 | 0.00058487 | 0.0017
Full  | Repulsive    | 0.50            | 0.071 | 0.018 | 0.031 | 0.0157     | 0.0143
Full  | Mixed        | 0.25            | 0.004 | 0.020 | 0.002 | 0.00042727 | 0.0013
Full  | Mixed        | 0.50            | 0.055 | 0.021 | 0.022 | 0.0159     | 0.0151
Full  | Attractive   | 0.06            | 0.024 | 0.027 | 0.004 | 0.0023     | 0.0025
Full  | Attractive   | 0.12            | 0.435 | 0.033 | 0.117 | 0.1066     | 0.0211
Grid  | Repulsive    | 1.0             | 0.294 | 0.047 | 0.153 | 0.1693     | 0.0031
Grid  | Repulsive    | 2.0             | 0.342 | 0.041 | 0.198 | 0.4244     | 0.0021
Grid  | Mixed        | 1.0             | 0.014 | 0.016 | 0.011 | 0.0122     | 0.0018
Grid  | Mixed        | 2.0             | 0.095 | 0.038 | 0.082 | 0.0984     | 0.0068
Grid  | Attractive   | 1.0             | 0.440 | 0.047 | 0.125 | 0.1759     | 0.0028
Grid  | Attractive   | 2.0             | 0.520 | 0.042 | 0.177 | 0.4730     | 0.0002
5 Outlook
We expect that it will be possible to develop similar corrections to other approximate inference
methods, such as the variational approach or the "power EP" approximations which interpolate
between the variational method and EP. This may help the user to decide which approximation is
more accurate for a given problem. We will also attempt an analysis of the scaling of higher order
terms in these expansions to see if they are asymptotic or have a finite radius of convergence.
References
[1] H. Attias. A variational Bayesian framework for graphical models. In Advances in Neural Information
Processing Systems 12, 2000.
[2] M. Chertkov and V. Y. Chernyak. Loop series for discrete statistical models on graphs. Journal of Statistical
Mechanics: Theory and Experiment, page P06009, 2006.
[3] S. Ikeda, T. Tanaka, and S. Amari. Information geometry of turbo and low-density parity-check codes.
IEEE Transactions on Information Theory, 50(6):1097, 2004.
[4] M. Kuss and C. E. Rasmussen. Assessing approximate inference for binary Gaussian process classification.
Journal of Machine Learning Research, 6:1679-1704, 2005.
[5] T. P. Minka. Expectation propagation for approximate Bayesian inference. In UAI 2001, pages 362-369,
2001.
[6] T. P. Minka. Divergence measures and message passing. Technical Report MSR-TR-2005-173, Microsoft
Research, Cambridge, UK, 2005.
[7] T. P. Minka. The EP energy function and minimization schemes. Technical report, 2001.
[8] M. Opper and O. Winther. Expectation consistent approximate inference. Journal of Machine Learning
Research, 6:2177-2204, 2005.
[9] E. Sudderth, M. Wainwright, and A. Willsky. Loop series and Bethe variational bounds in attractive graphical models. In Advances in Neural Information Processing Systems 20, pages 1425-1432, 2008.
2,884 | 3,614 | An interior-point stochastic approximation
method and an L1-regularized delta rule
Peter Carbonetto
[email protected]
Mark Schmidt
[email protected]
Nando de Freitas
[email protected]
Department of Computer Science
University of British Columbia
Vancouver, B.C., Canada V6T 1Z4
Abstract
The stochastic approximation method is behind the solution to many important, actively-studied problems in machine learning. Despite its farreaching application, there is almost no work on applying stochastic approximation to learning problems with general constraints. The reason for
this, we hypothesize, is that no robust, widely-applicable stochastic approximation method exists for handling such problems. We propose that
interior-point methods are a natural solution. We establish the stability
of a stochastic interior-point approximation method both analytically and
empirically, and demonstrate its utility by deriving an on-line learning algorithm that also performs feature selection via L1 regularization.
1
Introduction
The stochastic approximation method supplies the theoretical underpinnings behind many
well-studied algorithms in machine learning, notably policy gradient and temporal differences for reinforcement learning, inference for tracking and filtering, on-line learning [1, 17, 19], regret minimization in repeated games, and parameter estimation in probabilistic graphical models, including expectation maximization (EM) and the contrastive
divergences algorithm. The main idea behind stochastic approximation is simple yet profound. It is simple because it is only a slight modification to the most basic optimization
method, gradient descent. It is profound because it suggests a fundamentally different way
of optimizing a problem: instead of insisting on making progress toward the solution at
every iteration, it only requires that progress be achieved on average.
Despite its successes, people tend to steer clear of constraints on the parameters. While
there is a sizable body of work on treating constraints by extending established optimization
techniques to the stochastic setting, such as projection [14], subgradient (e.g. [19, 27]) and
penalty methods [11, 24], existing methods are either unreliable or suited only to specific
types of constraints. We argue that a reliable stochastic approximation method that handles
constraints is needed because constraints routinely arise in the mathematical formulation of
learning problems, and the alternative approach, penalization, is often unsatisfactory.
Our main contribution is a new stochastic approximation method in which each step is the
solution to the primal-dual system arising in interior-point methods [7]. Our method is easy
to implement, dominates other approaches, and provides a general solution to constrained
learning problems. Moreover, we show interior-point methods are remarkably well-suited to
stochastic approximation, a result that is far from trivial when one considers that stochastic
algorithms do not behave like their deterministic counterparts (e.g. Wolfe conditions [13]
do not apply). We derive a variant of Widrow and Hoff's classic "delta rule" for on-line
learning (Sec. 5). It achieves feature selection via L1 regularization (known to statisticians
as the Lasso [22] and to signal processing engineers as basis pursuit [3]), so it is well-suited
to learning problems with lots of data in high dimensions, such as the problem of filtering
spam from your email account (Sec. 5.2). To our knowledge, no method has been proposed
that reliably achieves L1 regularization in large-scale problems when data is processed online or on-demand. Finally, it is important that we establish convergence guarantees for our
method (Sec. 4). To do so, we rely on math from stochastic approximation and optimization.
2
Overview of algorithm
In their 1952 research paper, Robbins and Monro [15] examined the problem of tuning a
control variable x (e.g. amount of alkaline solution) so that the expected outcome of the
experiment F(x) (pH of soil) attains a desired level \alpha (so your Hydrangea have pink blossoms). When the distribution of the experimental outcomes is unknown to the statistician
or gardener, it may still be possible to take observations at x. In such a case, Robbins and
Monro showed that a particularly effective way to achieve a response level \alpha = 0 is to take
a (hopefully unbiased) measurement y_k \approx F(x_k), adjust the control variable according to
\[
x_{k+1} = x_k - a_k y_k
\tag{1}
\]
for step size a_k > 0, then repeat. Provided the sequence {a_k} behaves like the harmonic
series (see Sec. 4.1), this algorithm converges to the solution F(x_*) = 0.
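A minimal sketch (ours) of the recursion (1) on a toy problem; the measurement model and the harmonic-like schedule a_k = 1/(k_0 + k) are illustrative assumptions, not part of the original procedure:

```python
import numpy as np

def robbins_monro(measure, x0=0.0, k0=50, n_iter=10_000, rng=None):
    """Drive F(x) toward 0 using noisy measurements y_k ~ F(x_k) + noise."""
    rng = rng or np.random.default_rng(0)
    x = x0
    for k in range(1, n_iter + 1):
        a_k = 1.0 / (k0 + k)       # harmonic-like step sizes
        y_k = measure(x, rng)      # unbiased measurement of F(x)
        x = x - a_k * y_k          # the Robbins-Monro update (1)
    return x

# toy example: F(x) = x - 2 observed with unit Gaussian noise; root is x* = 2
x_star = robbins_monro(lambda x, rng: (x - 2.0) + rng.standard_normal())
```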
Since the original publication, mathematicians have extended, generalized, and further weakened the convergence conditions; see [11] for some of these developments. Kiefer and Wolfowitz re-interpreted the stochastic process as one of optimizing an unconstrained objective
(F (x) acts as the gradient vector) and later Dvoretsky pointed out that each measurement
y is actually the gradient F(x) plus some noise \varepsilon(x). Hence, the stochastic gradient algorithm. In this paper, we introduce a convergent sequence of nonlinear systems F_\mu(x) = 0 and
interpret the Robbins-Monro process {xk } as solving a constrained optimization problem.
We focus on convex optimization problems [2] of the form
\[
\text{minimize } f(x) \quad \text{subject to } c(x) \le 0,
\tag{2}
\]
where c(x) is a vector of inequality constraints, f(x) and c(x) have continuous partial derivatives, and measurements y_k of the gradient at x_k are noisy. The feasible set, by contrast, should be known exactly. To simplify our exposition, we do not consider equality constraints; techniques for handling them are discussed in [13].

procedure IP-SG (Interior-point stochastic gradient)
for k = 1, 2, 3, ...
  • Set max. step size \hat{a}_k and centering parameter \sigma_k.
  • Set barrier parameter \mu_k = -\sigma_k z_k^T c(x_k)/m.
  • Run simulation to obtain gradient observation y_k.
  • Compute primal-dual search direction (\Delta x_k, \Delta z_k) by solving equations (6,7) with \nabla f(x) = y_k.
  • Run backtracking line search to find the largest a_k \le min{\hat{a}_k, 0.995 min_i(-z_{k,i}/\Delta z_{k,i})} such that c(x_{k-1} + a_k \Delta x_k) < 0, where min_i(\cdot) is over all i such that \Delta z_{k,i} < 0.
  • Set x_k = x_{k-1} + a_k \Delta x_k and z_k = z_{k-1} + a_k \Delta z_k.
Figure 1: Proposed stochastic gradient algorithm.
Convexity is a standard assumption made to simplify analysis of stochastic approximation
algorithms and, besides, constrained, non-convex optimization raises unresolved complications. We assume standard constraint qualifications so we can legitimately identify optimal
solutions via the Karush-Kuhn-Tucker (KKT) conditions [2, 13].
Following the standard barrier approach [7], we frame the constrained optimization problem
as a sequence of unconstrained objectives. This in turn is cast as a sequence of root-finding
problems F_\mu(x) = 0, where \mu > 0 controls for the accuracy of the approximate objective
and should tend toward zero. As we explain, a dramatically more effective strategy is to
solve for the root of the primal-dual equations F_\mu(x, z), where z represents the set of dual
variables. This is the basic formula of the interior-point stochastic approximation method.
Fig. 1 outlines our main contribution. Provided x_0 is feasible and z_0 > 0, every subsequent
iterate (x_k, z_k) will be a feasible or "interior" point as well. Notice the absence of a sufficient decrease condition on \|F_\mu(x, z)\| or suitable merit function; this is not needed in the
stochastic setting. Our stochastic approximation algorithm requires a slightly non-standard
treatment because the target F_\mu(x, z) moves as \mu changes. Fortunately, convergence under
non-stationarity has been studied in the literature on tracking and adaptive filtering. The
next section is devoted to deriving the primal-dual search direction (\Delta x, \Delta z).
3
Background on interior-point methods
We motivate and derive primal-dual interior-point methods starting from the logarithmic
barrier method. Barrier methods date back to the work of Fiacco and McCormick [6] in
the 1960s, but they lost favour due to their unreliable nature. Ill-conditioning was long
considered their undoing. However, careful analysis [7] has shown that poor conditioning is
not the problem; rather, it is a deficiency in the search direction. In the next section, we
exploit this very analysis to show that every iteration of our algorithm produces a stable
iterate in the face of: 1) ill-conditioned linear systems, 2) noisy observations of the gradient.
The logarithmic barrier approach for the constrained optimization problem (2) amounts to
solving a sequence of unconstrained subproblems of the form
\[
\text{minimize } f_\mu(x) \equiv f(x) - \mu \sum_{i=1}^m \log(-c_i(x)),
\tag{3}
\]
where \mu > 0 is the barrier parameter, and m is the number of inequality constraints.
As \mu becomes smaller, the barrier function f_\mu(x) acts more and more like the objective.
The philosophy of barrier methods differs fundamentally from "exterior" penalty methods
that penalize points violating the constraints [13, Chapter 17] because the logarithm in (3)
prevents iterates from violating the constraints at all, hence the word "barrier".
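In code, the barrier subproblem (3) is a one-liner; a hedged sketch (ours), where f and c are hypothetical callables for the objective and constraint vector:

```python
import numpy as np

def barrier_objective(f, c, x, mu):
    """f_mu(x) = f(x) - mu * sum_i log(-c_i(x)); only defined for strictly
    feasible x, i.e. c(x) < 0 componentwise (the 'barrier' in (3))."""
    slack = -np.asarray(c(x))
    if np.any(slack <= 0):
        return np.inf               # outside the barrier
    return f(x) - mu * np.sum(np.log(slack))
```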
The central thrust of the barrier method is to progressively push \mu to zero at a rate which
allows the iterates to converge to the constrained optimum x_*. Writing out a first-order
Taylor-series expansion to the optimality conditions \nabla f_\mu(x) = 0 about a point x, the Newton
step \Delta x is the solution to the linear equations \nabla^2 f_\mu(x) \Delta x = -\nabla f_\mu(x). The barrier Hessian
has long been known to be incredibly ill-conditioned (this fact becomes apparent by writing
out \nabla^2 f_\mu(x) in full) but an analysis by Wright [25] shows that the ill-conditioning is not
harmful under the right conditions. The "right conditions" are that x be within a small
distance¹ from the central path or barrier trajectory, which is defined to be the sequence of
isolated minimizers x_\mu^* satisfying \nabla f_\mu(x_\mu^*) = 0 and c(x_\mu^*) < 0. The bad news: the barrier
method is ineffectual at remaining on the barrier trajectory; it pushes iterates too close to
the boundary where they are no longer well-behaved [7]. Ordinarily, a convergence test is
conducted for each value of \mu, but this is not a plausible option for the stochastic setting.
Primal-dual methods form a Newton search direction for both the primal variables and the
Lagrange multipliers. Like classical barrier methods, they fail catastrophically outside the
central path. But their virtue is that they happen to be extremely good at remaining on
the central path (even in the stochastic setting; see Sec. 4.2). Primal-dual methods are also
blessed with strong results regarding superlinear and quadratic rates of convergence [7].
The principal innovation is to introduce Lagrange multiplier-like variables z_i \equiv -\mu/c_i(x).
By setting \nabla_x f_\mu(x) to zero, we recover the "perturbed" KKT optimality conditions:
\[
F_\mu(x, z) \equiv \begin{bmatrix} \nabla_x f(x) + J^T Z \mathbf{1} \\ C Z \mathbf{1} + \mu \mathbf{1} \end{bmatrix} = 0,
\tag{4}
\]
where Z and C are matrices with z and c(x) along their diagonals, and J \equiv \nabla_x c(x). Forming
a first-order Taylor expansion about (x, z), the primal-dual Newton step is the solution to
\[
\begin{bmatrix} W & J^T \\ Z J & C \end{bmatrix} \begin{bmatrix} \Delta x \\ \Delta z \end{bmatrix} = - \begin{bmatrix} \nabla_x f(x) + J^T Z \mathbf{1} \\ C Z \mathbf{1} + \mu \mathbf{1} \end{bmatrix},
\tag{5}
\]
where W = H + \sum_{i=1}^m z_i \nabla_x^2 c_i(x) is the Hessian of the Lagrangian (as written in any textbook
on constrained optimization), and H is the Hessian of the objective or an approximation.
Through block elimination, the Newton step \Delta x is the solution to the symmetric system
\[
(W - J^T \Sigma J) \Delta x = -\nabla_x f_\mu(x),
\tag{6}
\]
where \Sigma \equiv C^{-1} Z. The dual search direction is then recovered according to
\[
\Delta z = -(z + \mu/c(x) + \Sigma J \Delta x).
\tag{7}
\]
Because (2) is a convex optimization problem, we can derive a sensible update rule for the
barrier parameter by guessing the distance between the primal and dual objectives [2]. This
guess is typically \mu = -\sigma z^T c(x)/m, where \sigma > 0 is a centering parameter. This update is
supported by the convergence theory (Sec. 4.1) so long as \sigma_k is pushed to zero.
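To make equations (6)-(7) and the barrier update concrete, here is a hedged Python sketch (ours) of a single primal-dual step; we take W to be the identity by default (as the paper itself does later for the delta rule), and grad_f, c_x, J are hypothetical problem quantities:

```python
import numpy as np

def primal_dual_step(grad_f, c_x, J, z, mu, W=None):
    """One Newton step, equations (6)-(7): solve (W - J^T Sigma J) dx =
    -grad f_mu(x), then recover dz. Inputs: c_x = c(x) < 0 (strictly
    feasible), z > 0, J = grad c(x)."""
    m, n = J.shape
    W = np.eye(n) if W is None else W
    Sigma = np.diag(z / c_x)                 # Sigma = C^{-1} Z (negative entries)
    grad_fmu = grad_f + J.T @ (mu / c_x)     # gradient of the barrier objective (3)
    dx = np.linalg.solve(W - J.T @ Sigma @ J, -grad_fmu)   # equation (6)
    dz = -(z + mu / c_x + Sigma @ (J @ dx))                # equation (7)
    # next barrier parameter would be guessed as mu = -sigma * (z @ c_x) / m
    return dx, dz
```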
¹ See Sec. 4.3.1 of [7] for the precise meaning of a "small distance". Since x must be close to the
central path but far from the boundary, the favourable neighbourhood shrinks as \mu nears 0.
4
Analysis of convergence
First we establish conditions upon which the sequence of iterates generated by the algorithm
converges almost surely to the solution (x_*, z_*) as the amount of data or iteration count goes
to infinity. Then we examine the behaviour of the iterates under finite-precision arithmetic.
4.1 Asymptotic convergence
A convergence proof from first principles is beyond the scope of this paper; we build upon
the martingale convergence proof of Spall and Cristion for non-stationary systems [21].
Assumptions: We establish convergence under the following conditions. They may be
weakened by applying results from the stochastic approximation and optimization literature.
1. Unbiased observations: y_k is a discrete-time martingale difference with respect to
the true gradient \nabla f(x_k); that is, E(y_k | x_k, history up to time k) = \nabla f(x_k).
2. Step sizes: The maximum step sizes \hat{a}_k bounding a_k (see Fig. 1) must approach
zero (\hat{a}_k \to 0 as k \to \infty and \sum_{k=1}^{\infty} \hat{a}_k^2 < \infty) but not too quickly (\sum_{k=1}^{\infty} \hat{a}_k = \infty).
3. Bounded iterates: lim sup_k \|x_k\| < \infty almost surely.
4. Bounded gradient estimates: for some bound \lambda and for every k, E(\|y_k\|) < \lambda.
5. Convexity: The objective f(x) and constraints c(x) are convex.
6. Strict feasibility: There must exist an x that is strictly feasible; i.e. c(x) < 0.
7. Regularity assumptions: There exists a feasible minimizer x_* to the problem (2)
such that first-order constraint qualification and strict complementarity hold, and
\nabla_x f(x), \nabla_x c(x) are Lipschitz-continuous. These conditions allow us to directly apply
standard theorems on constrained optimization for convex programming [2, 6, 7, 13].
Proposition: Suppose Assumptions 1-7 hold. Then \theta_* \equiv (x_*, z_*) is an isolated (locally
unique within a \delta-neighbourhood) solution to (2), and the iterates \theta_k \equiv (x_k, z_k) of the
feasible interior-point stochastic approximation method (Fig. 1) converge to \theta_* almost surely;
that is, as k approaches the limit, \|\theta_k - \theta_*\| = 0 with probability 1.
Proof: See Appendix A.
4.2 Considerations regarding the central path
The object of this section is to establish that computing the stochastic primal-dual search
direction is numerically stable. (See Part III of [23] for what we mean by "stable".) The
concern is that noisy gradient measurements will lead to wildly perturbed search directions.
As we mentioned in Sec. 3, interior-point methods are surprisingly stable provided the
iterates remain close to the central path, but the prospect of keeping close to the path
seems particularly tenuous in the stochastic setting. A key observation is that the central
path is itself perturbed by the stochastic gradient estimates. Following arguments similar
to those given in Sec. 5 of [7], we show that the stochastic Newton step (6,7) stays on target.
We define the noisy central path as \eta(\mu, \varepsilon) = (x, z), where (x, z) is a solution to F_\mu(x, z) = 0
with gradient estimate y \equiv \nabla f(x) + \varepsilon. Suppose we are currently at point \eta(\mu, \varepsilon) = (x, z)
along the path, and the goal is to move closer to \eta(\mu', \varepsilon') = (x', z') by solving (5) or (6,7).
One way to assess the quality of the Newton step is to compare it to the tangent line of the
noisy central path at (\mu, \varepsilon). Taking implicit partial derivatives at (x, z), the tangent line is
\[
\eta(\mu', \varepsilon') \approx \eta(\mu, \varepsilon) + (\mu' - \mu) \frac{\partial \eta(\mu, \varepsilon)}{\partial \mu} + (y' - y) \frac{\partial \eta(\mu, \varepsilon)}{\partial y},
\tag{8}
\]
such that
\[
\begin{bmatrix} H & J^T \\ Z J & C \end{bmatrix}
\begin{bmatrix} (\mu' - \mu) \frac{\partial x}{\partial \mu} + (y' - y) \frac{\partial x}{\partial y} \\[4pt] (\mu' - \mu) \frac{\partial z}{\partial \mu} + (y' - y) \frac{\partial z}{\partial y} \end{bmatrix}
= - \begin{bmatrix} y' - y \\ (\mu' - \mu) \mathbf{1} \end{bmatrix},
\tag{9}
\]
with y' \equiv \nabla f(x) + \varepsilon'. Since we know that F_\mu(x, z) = 0, the Newton step (5) at (x, z) with
perturbation \mu' and stochastic gradient estimate y' is the solution to
\[
\begin{bmatrix} H & J^T \\ Z J & C \end{bmatrix}
\begin{bmatrix} \Delta x \\ \Delta z \end{bmatrix}
= - \begin{bmatrix} y' - y \\ (\mu' - \mu) \mathbf{1} \end{bmatrix}.
\tag{10}
\]
then the stochastic Newton step (10) will make good progress toward ?(?? , ?? ).
Having established that the stochastic gradient algorithm closely follows the noisy central
path, the analysis of M. H. Wright [26] directly applies, in which round-off error (?machine )
is occasionally replaced by gradient noise (?). Since stability is of fundamental concern?
particularly in computing the values of W ? J T ?J, the right-hand side of (6), and the
solution to ?x and ?z?we elaborate on the significance of Wright?s results in Appendix B.
5
On-line L1 regularization
In this section, we apply our findings to the problem of computing an L1 -regularized least
squares estimator in an "on-line" manner; that is, by making adjustments to each new
example without having to review all the previous training instances. While this problem
only involves simple bound constraints, we can use it to compare our method to existing
approaches such as gradient projection. We start with some background behind the L1 ,
motivate the on-line learning approach, draw some experimental comparisons with existing
methods, then show that our algorithm can be used to filter spam.
Suppose we have n training examples x_i \equiv (x_{i1}, ..., x_{im})^T paired with real-valued responses
y_i. (The notation here is separate from previous sections.) Assuming a linear model and
centred coordinates, the least squares estimate \beta minimizes the mean squared error (MSE).
Linear regression based on the maximum likelihood estimator is one of the basic statistical
tools of science and engineering and, while primitive, generalizes to many popular statistical
estimators, including linear discriminant analysis [9]. Because the least squares estimator is
unstable when m is large, it can generalize poorly to unseen examples. The standard cure
is ?regularization,? which introduces bias, but typically produces estimators that are better
at predicting the outputs of unseen examples. For instance, the MSE with an L1 -penalty,
Pn
T
2
1
?
MSE(L1 ) ? 2n
(11)
i=1 (yi ? xi ?) + n k?k1 ,
not only prevents overfitting but tends to produce estimators that shrink many of the
components ?j to zero, resulting in sparse codes. Here, k ? k1 is the L1 norm and ? > 0
controls for the level of regularization. This approach has been independently studied for
many problems, including statistical regression [22] and sparse signal reconstruction [3, 10],
precisely because it is effective at choosing useful features for prediction.
We can treat the gradient of MSE as a sample expectation over responses of the form
?xi (yi ? xTi ?), so the on-line or stochastic update
? (new) = ? + axi (yi ? xTi ?),
(12)
2
improves the linear regression with only a single data point (a is the step size). This is the
famed ?delta rule? of Widrow and Hoff [12]. Since standard ?batch? learning requires a full
pass through the data for each gradient evaluation, the on-line update (12) may be the only
viable option when faced with, for instance, a collection of 80 million images [16]. On-line
learning for regression and classification (including L2 regularization) is a well-researched
topic, particularly for neural networks [17] and support vector machines (e.g. [19]). On-line
learning with L1 regularization, despite its ascribed benefits, has strangely avoided study.
(The only known work that has approached the problem is [27] using subgradient methods.)
We derive an on-line, L1-regularized learning rule of the form
\[
\begin{aligned}
\beta_{pos}^{(new)} &= \beta_{pos} + a \Delta\beta_{pos} &\qquad z_{pos}^{(new)} &= z_{pos} + a \left( \mu/\beta_{pos} - z_{pos} - \Delta\beta_{pos}\, z_{pos}/\beta_{pos} \right) \\
\beta_{neg}^{(new)} &= \beta_{neg} + a \Delta\beta_{neg} &\qquad z_{neg}^{(new)} &= z_{neg} + a \left( \mu/\beta_{neg} - z_{neg} - \Delta\beta_{neg}\, z_{neg}/\beta_{neg} \right),
\end{aligned}
\tag{13}
\]
such that
\[
\begin{aligned}
\Delta\beta_{pos} &= \big( x_i (y_i - x_i^T \beta) - \lambda/n + \mu/\beta_{pos} \big) / (1 + z_{pos}/\beta_{pos}) \\
\Delta\beta_{neg} &= \big( -x_i (y_i - x_i^T \beta) - \lambda/n + \mu/\beta_{neg} \big) / (1 + z_{neg}/\beta_{neg}),
\end{aligned}
\]
and where \mu > 0 is the barrier parameter, \beta = \beta_{pos} - \beta_{neg}, z_{pos} and z_{neg} are the Lagrange
multipliers associated with the lower bounds \beta_{pos} \ge 0 and \beta_{neg} \ge 0, respectively, and a is a
step size ensuring the variables remain in the positive quadrant. Multiplication and division
in (13) are component-wise. The remainder of the algorithm (Fig. 1) consists of choosing \mu
and a feasible step size a at each iteration. Let us briefly explain how we arrived at (13).
² The gradient descent direction can be a poor choice because it ignores the scaling of the problem.
Much work has focused on improving the delta rule, but we shall not discuss these improvements.
Figure 2: (left) Performance of constrained stochastic gradient methods for different step size
sequences. (right) Performance of methods for increasing levels of variance in the dimensions
of the training data. Note the logarithmic scale in the vertical axis.
It is difficult to find a collection of regression coefficients \beta that directly minimizes MSE(L1)
because the L1 norm is not differentiable near zero. The trick is to separate the coefficients
into their positive (\beta_{pos}) and negative (\beta_{neg}) components following [3], thereby transforming the non-smooth, unconstrained optimization problem (11) into a smooth problem with a
convex, quadratic objective and bound constraints \beta_{pos}, \beta_{neg} \ge 0. The regularized delta
rule (13) is then obtained from direct application of the primal-dual interior-point Newton
search direction (6,7) with a stochastic gradient (see Eq. 12), and identity in place of H.
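The following Python sketch implements one update of (13) as reconstructed above, with an identity Hessian and a fraction-to-the-boundary step in the spirit of Fig. 1; it is our illustrative reading of the rule, not the authors' reference code:

```python
import numpy as np

def l1_delta_rule_step(bp, bn, zp, zn, x, y, lam_n, mu, a_max=0.05, tau=0.995):
    """One interior-point stochastic update of (beta_pos, beta_neg, z_pos,
    z_neg) for the L1-regularized delta rule (13). lam_n = lambda / n."""
    r = y - x @ (bp - bn)                                   # residual
    dbp = (x * r - lam_n + mu / bp) / (1.0 + zp / bp)
    dbn = (-x * r - lam_n + mu / bn) / (1.0 + zn / bn)
    dzp = mu / bp - zp - dbp * zp / bp
    dzn = mu / bn - zn - dbn * zn / bn
    # largest feasible step keeping beta > 0 and z > 0 (fraction to boundary)
    steps = [a_max]
    for v, dv in ((bp, dbp), (bn, dbn), (zp, dzp), (zn, dzn)):
        neg = dv < 0
        if np.any(neg):
            steps.append(tau * np.min(-v[neg] / dv[neg]))
    a = min(steps)
    return bp + a * dbp, bn + a * dbn, zp + a * dzp, zn + a * dzn
```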
5.1
Experiments
We ran four small experiments to assess the reliability and shrinkage effect of the interior-point stochastic gradient method for linear regression with L1 regularization; refer to Fig. 1
and Eq. 13.³ We also studied four alternatives to our method: 1) a subgradient method,
2) a smoothed, unconstrained approximation to (11), 3) a projected gradient method, and
4) the augmented Lagrangian approach described in [24]. See [18] for an in-depth discussion
of the merits of applying the first three optimization approaches to L1 regularization. All
these methods have a per-iteration cost on the order of the number of features.
Method. For the first three experiments, we simulated 20 data sets following the procedure
described in Sec. 7.5 of [22]. Each data set had n = 100 observations with m = 40 features.
We defined observations by x_{ij} = z_{ij} + z_i, where z_i was drawn from the standard normal
and z_{ij} was drawn i.i.d. from the normal with variance \sigma_j^2, which in turn was drawn from
the inverse Gamma with shape 2.5 and scale 1. (The mean of \sigma_j^2 is proportional to the scale.)
The regression coefficients were \beta = (0, ..., 0, 2, ..., 2, 0, ..., 0, 2, ..., 2)^T with 10 repeats in
each block [22]. Outputs were generated according to y_i = \beta^T x_i + \varepsilon with standard Gaussian
noise \varepsilon. Each method was executed with a single pass on the data (100 iterations) with
step sizes \hat{a}_k = 1/(k_0 + k), where k_0 = 50 by default. We chose L1 penalty \lambda/n = 1.25,
which tended to produce about 30% zero coefficients at the solution to (11). The augmented
Lagrangian required a sequence of penalty terms r_k \to 0; after some trial and error, we chose
r_k = 50/(k_0 + k)^{0.1}. The control variables of Experiments 1, 2 and 3 were, respectively, the
step size parameter k_0, the inverse Gamma scale parameter, and the L1 penalty parameter
\lambda. In Experiment 4, each example y_i in the training set x_i had 8 features, and the
true coefficients were set to \beta = (0, 0, 2, -4, 0, 0, -1, 3)^T.
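A sketch (ours) of this data-generating process; the shape/scale convention for the inverse Gamma draws is our assumption, chosen so that E(\sigma^2) = 2/3 at scale 1, consistent with the value quoted below:

```python
import numpy as np

def simulate_dataset(n=100, m=40, shape=2.5, scale=1.0, rng=None):
    """x_ij = z_ij + z_i with z_i ~ N(0,1), z_ij ~ N(0, sigma_j^2),
    sigma_j^2 ~ InvGamma(shape, scale); y_i = beta^T x_i + noise."""
    rng = rng or np.random.default_rng(0)
    sigma2 = scale / rng.gamma(shape, size=m)      # inverse-Gamma draws
    z = rng.standard_normal(n)
    X = rng.standard_normal((n, m)) * np.sqrt(sigma2) + z[:, None]
    beta = np.repeat([0.0, 2.0, 0.0, 2.0], m // 4) # blocks of 10 when m = 40
    y = X @ beta + rng.standard_normal(n)
    return X, y, beta
```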
Results. Fig. 2 shows the results of Experiments 1 and 2, with error (1/n)\|\beta^{exact} - \beta^{on-line}\|_1
averaged over the 20 data sets, in which \beta^{exact} is the solution to (11), and \beta^{on-line} is the estimate obtained after 100 iterations of the on-line or stochastic gradient method. With a large
enough step size, almost all the methods converged close to \beta^{exact}. The stochastic interior-point method, however, always came closest to \beta^{exact} and, for the range of values we tried, its
solution was by far the least sensitive to the step size sequence and level of variance in the observations. Experiment 3 (Fig. 3) shows that even with well-chosen step sizes for all methods,
³ The Matlab code for all our experiments is on the Web at http://www.cs.ubc.ca/~pcarbo.
Figure 4: Shrinkage effect for different choices of the L1 penalty parameter.
the stochastic interior-point method still best approximated the exact solution, and its performance did not degrade when \lambda was small. (The dashed vertical line at \lambda/n = 1.25 in Fig. 3
corresponds to k_0 = 50 and E(\sigma^2) = 2/3 in the left and right plots of Fig. 2.) Fig. 4 shows the
regularized estimates of Experiment 4. After one pass through the data (middle), equivalent to a
single iteration of an exact solver, the interior-point stochastic gradient method shrank some
of the components, but didn't quite discard irrelevant features altogether. After 10 visits to the training data (right), the stochastic algorithm exhibited feature selection close to what
we would normally expect from the Lasso (left).
5.2 Filtering spam
Figure 3: Performance of the methods for various choices of the L1 penalty.
Classifying email as spam or not is most faithfully modeled as an on-line learning problem in
which supervision is provided after each email
has been designated for the inbox or trash [5]. An effective filter is one that minimizes misclassification of incoming messages; throwing away a good email is considerably more
deleterious than incorrectly placing a spam in the inbox. Without any prior knowledge as
to what spam looks like, any filter will be error-prone at initial stages of deployment.
Spam filtering necessarily involves lots of data and an even larger number of features, so
a sparse, stable model is essential. We adapted the L1 -regularized delta rule to the spam
filtering problem by replacing the linear regression with a binary logistic regression [9]. The
on-line updates are similar to (13), only xTi ? is replaced by ?(xTi ?), with ?(u) ? 1/(1+e?u ).
To our knowledge, no one has investigated this approach for on-line spam filtering, though
there is some work on logistic regression plus the Lasso for batch classification in text
corpora [8]. Needless to say, batch learning is completely impractical in this setting.
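The change for the logistic variant amounts to passing the linear predictor through \psi before forming the residual used in (13); a sketch (ours, assuming labels coded in {0, 1}):

```python
import numpy as np

def logistic_residual(beta, x, y):
    """Residual for the logistic variant of (13): prediction is
    psi(x^T beta) = 1/(1 + exp(-x^T beta)), with y in {0, 1}."""
    psi = 1.0 / (1.0 + np.exp(-(x @ beta)))
    return y - psi
```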
Method. We simulated the on-line spam filtering task on the trec2005 corpus [4] containing emails from the legal investigations of the Enron corporation. We compared our on-line classifier (\lambda = 10, \sigma = 1/2, \hat{a}_i = 1/(1+i)) with two open-source software packages, SpamBayes 1.0.3
and Bogofilter 0.93.4. (These packages are publicly available at spambayes.sourceforge.net
and bogofilter.sourceforge.net.) A full comparison is certainly beyond the scope of this paper;
see [5] for a comprehensive evaluation. We represented each email as a vector of normalized
word frequencies, and used the word tokens extracted by SpamBayes. In the end, we had
an on-line learning problem involving n = 92189 documents and m = 823470 features.
Results for SpamBayes
                   true
pred.        not spam     spam
not spam     39393        3291
spam         3            49499

Results for Bogofilter
                   true
pred.        not spam     spam
not spam     39382        5515
spam         17           47275

Results for Logistic + L1
                   true
pred.        not spam     spam
not spam     39389        2803
spam         10           49987

Table 1: Contingency tables for on-line spam filtering task on the trec2005 data set.
Results. Following [5], we use contingency tables to present results of the on-line spam
filtering experiment (Table 1). The top-right/bottom-left entry of each table is the number
of misclassified spam/non-spam. Everything was evaluated on-line. We tagged an email
for deletion only if p(y_i = spam) \ge 97%. Our spam filter dominated SpamBayes on the
trec2005 corpus, and performed comparably to Bogofilter, one of the best spam filters to
date [5]. Our model's expense was slightly greater than the others. As we found in Sec. 5.1,
assessing sparsity of the on-line solution is more difficult than in the exact case, but we can
say that removing the 41% smallest entries of \beta resulted in almost no (< 0.001) change.
6
Conclusions
Our experiments on a learning problem with noisy gradient measurements and bound constraints show that the interior-point stochastic approximation algorithm is a significant
improvement over other methods. The interior-point approach also has the virtue of being
much more general, and our analysis guarantees that it will be numerically stable.
Acknowledgements. Thanks to Ewout van den Berg, Matt Hoffman and Firas Hamze.
References
[1] L. Bottou and O. Bousquet, The tradeoffs of large scale learning, in Advances in Neural Information Processing Systems, vol. 20, 1998.
[2] S. Boyd and L. Vandenberghe, Convex optimization, Cambridge University Press, 2004.
[3] S. Chen, D. Donoho, and M. Saunders, Atomic decomposition by basis pursuit, SIAM Journal
on Scientific Computing, 20 (1999), pp. 33-61.
[4] G. V. Cormack and T. R. Lynam, Spam corpus creation for TREC, in Proc. 2nd CEAS, 2005.
[5] G. V. Cormack and T. R. Lynam, Online supervised spam filter evaluation, ACM Trans. Information Systems, 25 (2007).
[6] A. V. Fiacco and G. P. McCormick, Nonlinear programming: sequential unconstrained minimization techniques, John Wiley and Sons, 1968.
[7] A. Forsgren, P. E. Gill, and M. H. Wright, Interior methods for nonlinear optimization, SIAM
Review, 44 (2002), pp. 525-597.
[8] A. Genkin, D. D. Lewis, and D. Madigan, Large-scale Bayesian logistic regression for text
categorization, Technometrics, 49 (2007), pp. 291-304.
[9] T. Hastie, R. Tibshirani, and J. Friedman, The elements of statistical learning, Springer, 2001.
[10] S.-J. Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky, An interior-point method for
large-scale L1-regularized least squares, IEEE J. Selected Topics in Signal Processing, 1 (2007).
[11] H. J. Kushner and D. S. Clark, Stochastic approximation methods for constrained and unconstrained systems, Springer-Verlag, 1978.
[12] T. M. Mitchell, Machine Learning, McGraw-Hill, 1997.
[13] J. Nocedal and S. J. Wright, Numerical Optimization, Springer, 2nd ed., 2006.
[14] B. T. Poljak, Nonlinear programming methods in the presence of noise, Mathematical Programming, 14 (1978), pp. 87-97.
[15] H. Robbins and S. Monro, A stochastic approximation method, Annals Math. Stats., 22 (1951).
[16] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman, LabelMe: a database and
web-based tool for image annotation, Intl. Journal of Computer Vision, 77 (2008), pp. 157-173.
[17] D. Saad, ed., On-line learning in neural networks, Cambridge University Press, 1998.
[18] M. Schmidt, G. Fung, and R. Rosales, Fast optimization methods for L1 regularization, in
Proceedings of the 18th European Conference on Machine Learning, 2007, pp. 286-297.
[19] S. Shalev-Shwartz, Y. Singer, and N. Srebro, Pegasos: primal estimated sub-gradient solver for
SVM, in Proceedings of the 24th Intl. Conference on Machine Learning, 2007, pp. 807-814.
[20] J. C. Spall, Adaptive stochastic approximation by the simultaneous perturbation method, IEEE
Transactions on Automatic Control, 45 (2000), pp. 1839-1853.
[21] J. C. Spall and J. A. Cristion, Model-free control of nonlinear stochastic systems with discrete-time measurements, IEEE Transactions on Automatic Control, 43 (1998), pp. 1148-1210.
[22] R. Tibshirani, Regression shrinkage and selection via the Lasso, Journal of the Royal Statistical
Society, 58 (1996), pp. 267-288.
[23] L. N. Trefethen and D. Bau, Numerical linear algebra, SIAM, 1997.
[24] I. Wang and J. C. Spall, Stochastic optimization with inequality constraints using simultaneous
perturbations and penalty functions, in Proc. 42nd IEEE Conf. Decision and Control, 2003.
[25] M. H. Wright, Some properties of the Hessian of the logarithmic barrier function, Mathematical
Programming, 67 (1994), pp. 265-295.
[26] M. H. Wright, Ill-conditioning and computational error in interior methods for nonlinear programming, SIAM Journal on Optimization, 9 (1998), pp. 84-111.
[27] A. Zheng, Statistical software debugging, PhD thesis, University of California, Berkeley, 2005.
2,885 | 3,615 | Structure Learning in Human Sequential Decision-Making
Daniel Acuna
Dept. of Computer Science and Eng.
University of Minnesota-Twin Cities
[email protected]
Paul Schrater
Dept. of Psychology and Computer Science and Eng.
University of Minnesota-Twin Cities
[email protected]
Abstract
We use graphical models and structure learning to explore how people learn policies in sequential decision making tasks. Studies of sequential decision-making
in humans frequently find suboptimal performance relative to an ideal actor that
knows the graph model that generates reward in the environment. We argue that
the learning problem humans face also involves learning the graph structure for reward generation in the environment. We formulate the structure learning problem
using mixtures of reward models, and solve the optimal action selection problem using Bayesian Reinforcement Learning. We show that structure learning in
one and two armed bandit problems produces many of the qualitative behaviors
deemed suboptimal in previous studies. Our argument is supported by the results
of experiments that demonstrate humans rapidly learn and exploit new reward
structure.
1 Introduction
Humans daily perform sequential decision-making under uncertainty to choose products, services,
careers, and jobs; and to mate and survive as a species. One of the central problems in sequential decision making with uncertainty is balancing exploration and exploitation in the search for good policies. Using model-based (Bayesian) Reinforcement learning [1], it is possible to solve this problem
optimally by finding policies that maximize the expected discounted future reward [2]. However,
solutions are notoriously hard to compute, and it is unclear whether optimal models are appropriate
for human decision-making. For tasks simple enough to allow comparison between human behavior
and normative theory, like the multi-armed bandit problem, human choices appear suboptimal. In
particular, earlier studies suggested human choices reflect inaccurate Bayesian updating with suboptimalities in exploration [3, 4, 5, 6]. Moreover, in one-armed bandit tasks where exploration is
not necessary, people frequently converge to probability matching [7, 8, 9, 10], rather than the better
option, even when subjects are aware which option is best [11]. However, failures against normative
prediction may reflect optimal decision-making, but for a task that differs from the experimenter's
intention. For example, people may assume the environment is potentially dynamically varying.
When this assumption is built into normative predictions, these models account much better for human choices in one-armed bandit problems [12], and potentially multi-armed problems [13]. In this
paper, we investigate another possibility, that humans may be learning the structure of the task by
forming beliefs over a space of canonical causal models of reward-action contingencies.
Most human performance assessments view the subject's task as parameter estimation (e.g. reward probabilities) within a known model (a fixed causal graph structure) that encodes the relations
between environmental states, rewards and actions created by the experimenter. However despite
instruction, it is reasonable that subjects may be uncertain about the model, and instead try to learn
it. To illustrate structure learning in a simple task, suppose you are alone in a casino with many
rooms. In one room you find two slot machines. It is typically assumed you know the machines are
independent and give rewards either 0 (failure) or 1 (success) with unknown probabilities that must
be estimated. The structure learning viewpoint allows for more possibilities: Are they independent,
or are are they rigged to covary? Do they have the same probability? Does reward accrue when
the machine is not played for a while? We believe answers to these questions form a natural set of
causal hypotheses about how reward/action contingencies may occur in natural environments.
In this work, we assess the effect of uncertainty between two critical reward structures in terms of the
need to explore. The first structure is a one-arm bandit problem in which exploration is not necessary
(reward generation is coupled across arms); greedy action is optimal [14]. And the other structure
is a two-arm bandit problem in which exploration is necessary (reward generation is independent
at each arm); each action needs to balance the exploration/exploitation tradeoff [15]. We illustrate
how structure learning affects action selection and the value of information gathering in a simple
sequential choice task resembling a Multi-armed Bandit (MAB), but with uncertainty between the
two previous models of reward coupling. We develop a normative model of learning and action
for this class of problems, illustrate the effect of model uncertainty on action selection, and show
evidence that people perform structure learning.
2 Bayesian Reinforcement Learning: Structure Learning
The language of graphical models provides a useful framework for describing the possible structure
of rewards in the environment. Consider an environment with several distinct reward sites that can
be sampled, but the way models generate these rewards is unknown. In particular, rewards at each
site may be independent, or there may be a latent cause which accounts for the presence of rewards
at both sites. Even if independent, if the reward sites are homogeneous, then they may have the same
probability.
Uncertainty about which reward model is correct naturally produces a mixture as the appropriate
learning model. This structure learning model is a special case of Bayesian Reinforcement Learning
(BRL), where the states of the environment are the reward sites and the transitions between states
are determined by the action of sampling a reward site. Uncertainty about reward dynamics and
contingencies can be modeled by including within the belief state not only reward probabilities, but
also the possibility of independent or coupled rewards. Then, the optimal balance of exploration
and exploitation in BRL results in action selection that seeks to maximize (1) expected rewards (2)
information about rewards dynamics, and (3) information about task structure.
Given that tasks tested in this research involve mixtures of Multi-Armed Bandit (MAB) problems,
we borrow MAB language to call a reward site an arm, and a sample a choice or pull. However, the
mixture models we describe are not MAB problems. MAB problems require the dynamics of one
site (arm) remain frozen until visited again, which is not true in general for our mixture model.
Let $\gamma$ ($0 < \gamma < 1$) be a discounting factor such that a possibly stochastic reward $x$ obtained $t$ time steps in the future means $\gamma^t x$ today. Optimality requires an action selection policy that maximizes the expectation over the total discounted future reward $\mathbb{E}_b\left[x + \gamma x + \gamma^2 x + \ldots\right]$, where $b$ is the belief over environment dynamics. Let $x_a$ be a reward acquired from arm $a$. After observing reward $x_a$, we compute a belief state posterior $b_{x_a} \equiv p(b|x_a) \propto p(x_a|b)p(b)$. Let $f(x_a|b) \equiv \int db\, p(x_a|b)p(b)$ be the predicted probability of reward $x_a$ given belief $b$. Let $r(b,a) \equiv \sum_{x_a} x_a f(x_a \mid b)$ be the expected reward of sampling arm $a$ at state $b$. The value of a state can be found using the Bellman equation [2],
$$V(b) = \max_a \Big\{ r(b,a) + \gamma \sum_{x_a} f(x_a \mid b)\, V(b_{x_a}) \Big\}. \qquad (1)$$
The optimal action can be recovered by choosing arm
$$a = \arg\max_{a'} \Big\{ r(b,a') + \gamma \sum_{x_{a'}} f(x_{a'} \mid b)\, V(b_{x_{a'}}) \Big\}. \qquad (2)$$
The belief over dynamics is effectively a probability distribution over possible Markov Decision Processes that would explain observables. As such, the optimal policy can be described as a mapping from belief states to actions. In principle, the optimal solution can be found by solving the Bellman optimality equations, but generally there are countably or uncountably infinitely many states, and solutions need approximations.
[Figure 1 appears here: three graphical models over parameters $\theta_1, \theta_2, \theta_3$ and rewards $x_1, x_2, x_3$, with plates over N choices and M tasks. Panels: (a) 2-arm bandit with no coupling; (b) 1-arm, reward coupling; (c) Mixture of generative models.]
Figure 1: Different graphical models for generation of rewards at two known sites in the environment. The agent faces M bandit tasks, each comprising a random number N of choices. (a) Reward sites are independent. (b) Rewards are dependent within a bandit task. (c) Mixture of generative models used by the learning model. The causes of reward may be independent or coupled. The node c acts as an "XOR" switch between coupled and independent reward.
In Figure 1, we show the two reward structures considered in this paper. Figure 1(a) illustrates a structure where arms are independent and (b) one where they are coupled. When independent, rewards $x_a$ at arm $a$ are samples from an unknown distribution $p(x_a|\theta_a)$. When coupled, reward $x_a$ depends on a "hidden" state of reward $x_3$ sampled from $p(x_3|\theta_3)$. In this case, the rewards $x_1$ and $x_2$ are coupled and depend on $x_3$.
If we were certain which of the two models were right, the action selection problem would have a known solution for both cases, presented below.
Independent Rewards. Learning and acting in an environment like the one described in Figure 1(a) is known as the Multi-Armed Bandit (MAB) problem. The MAB problem is a special case of BRL because we can partition the belief $b$ into a disjoint set of beliefs about each arm $\{b_a\}$. Because beliefs about non-sampled arms remain frozen until sampled again, and sampling one arm does not affect the belief about any other, independent learning and action selection for each arm is possible. Let $\nu_a$ be the reward of a deterministic arm in $V(b_a) = \max\{\nu_a/(1-\gamma),\; r(b_a,a) + \gamma \sum_{x_a} f(x_a|b_a)V(b_{x_a})\}$ such that both terms inside the maximization are equal. Gittins [16] proved that it is optimal to choose the arm $a$ with the highest such reward $\nu_a$ (called the Gittins Index). This allows speedup of computation by transforming a many-arm bandit problem into many 2-arm bandit problems.
In our task, the belief about a binary reward may be represented by a Beta distribution with sufficient statistics parameters $\alpha, \beta$ (both $> 0$) such that $x_a \sim p(x_a|\theta_a) = \theta_a^{x_a}(1-\theta_a)^{1-x_a}$, where $\theta_a \sim p(\theta_a; \alpha_a, \beta_a) \propto \theta_a^{\alpha_a - 1}(1-\theta_a)^{\beta_a - 1}$. Thus, the expected reward $r(\alpha_a, \beta_a, a)$ and predicted probability of reward $f(x_a = 1|\alpha_a, \beta_a)$ are $\alpha_a(\alpha_a + \beta_a)^{-1}$. The belief state transition is $b_{x_a} = \langle \alpha_a + x_a,\; \beta_a + 1 - x_a \rangle$. Therefore, the Gittins index may be found by solving the Bellman equations using dynamic programming,
$$V(\alpha_a, \beta_a) = \max\Big\{ \nu_a(1-\gamma)^{-1},\; (\alpha_a+\beta_a)^{-1}\big[\alpha_a + \gamma\big(\alpha_a V(\alpha_a+1, \beta_a) + \beta_a V(\alpha_a, \beta_a+1)\big)\big] \Big\}$$
to a sufficiently large horizon. In experiments, we use $\gamma = 0.98$, for which a horizon of $H = 1000$ suffices.
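To make the recursion concrete, here is a minimal Python sketch (our illustration, not the authors' code) that computes the Beta-Bernoulli value function by backward induction against a deterministic arm paying $\nu$ per pull; the Gittins index is then found by bisection on $\nu$, since at the index the "retire" and "continue" branches are equally valuable. The function names and the horizon truncation are our own assumptions.

```python
import numpy as np

GAMMA = 0.98

def value_table(horizon, nu):
    """V(alpha, beta) for a Beta-Bernoulli arm competed against a
    deterministic arm paying nu per pull, by backward induction.
    Layer `depth` holds all states with alpha + beta = depth + 2."""
    retire = nu / (1.0 - GAMMA)             # value of always retiring
    V_next = np.full(horizon + 2, retire)   # terminal approximation
    for depth in range(horizon, -1, -1):
        V = np.empty(depth + 1)
        for i in range(depth + 1):
            alpha, beta = 1 + i, 1 + (depth - i)
            n = alpha + beta
            # immediate reward plus discounted success/failure branches
            cont = (alpha + GAMMA * (alpha * V_next[i + 1]
                                     + beta * V_next[i])) / n
            V[i] = max(retire, cont)
        V_next = V
    return V_next[0]                        # value at prior (1, 1)

def gittins_index(horizon=1000, tol=1e-4):
    """Bisection on nu: at the index, retiring and continuing tie."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        nu = 0.5 * (lo + hi)
        if value_table(horizon, nu) > nu / (1.0 - GAMMA) + 1e-12:
            lo = nu   # continuing beats retiring: index is larger
        else:
            hi = nu
    return 0.5 * (lo + hi)
```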
Coupled Rewards. Learning and acting in coupled environments (Figure 1b) is trivial because there is no need to maximize information in acting [14]. The belief state is represented by a Beta distribution with sufficient statistics $\alpha_3, \beta_3$ ($> 0$). Therefore, the optimal action is to choose the arm $a$ with the highest expected reward
$$r(\alpha_3, \beta_3, a) = \begin{cases} \dfrac{\alpha_3}{\alpha_3+\beta_3} & a = 1 \\[4pt] \dfrac{\beta_3}{\alpha_3+\beta_3} & a = 2. \end{cases}$$
The belief state transitions are $b_1 = \langle \alpha_3 + x_1,\; \beta_3 + 1 - x_1 \rangle$ and $b_2 = \langle \alpha_3 + 1 - x_2,\; \beta_3 + x_2 \rangle$.
3 Learning and acting with model uncertainty
In this section, we consider the case where there is uncertainty about the reward model. The agent's belief is captured by a graphical model for a family of reward structures that may or may not be coupled. We show that learning can be accurate and that action selection is relatively efficient.
We restrict ourselves to the following scenario. The agent is presented with a block of M bandit tasks, each with initially unknown Bernoulli reward probabilities and coupling. Each task involves N discrete choices, where N is sampled from a Geometric distribution $(1-\gamma)\gamma^N$.
Figure 1(c) shows the mixture of the two possible reward models shown in Figure 1(a) and (b). Node $c$ switches the mixture between the two possible reward models and encodes part of the belief state of the process. Notice that $c$ acts as an "XOR" gate between the two generative models. Given that it is unknown, the probability distribution $p(c=0)$ is the mixed proportion for the independent reward structure and $p(c=1)$ is the mixed proportion for the coupled reward structure. We put a prior on the state $c$ using the distribution $p(c;\tau) = \tau^c(1-\tau)^{1-c}$, with parameter $\tau$. The posterior is
$$p(\theta_1,\theta_2,\theta_3,c \mid s_1,f_1,s_2,f_2) \propto \begin{cases} (1-\tau)\,\theta_1^{\alpha_1-1+s_1}(1-\theta_1)^{\beta_1-1+f_1}\,\theta_2^{\alpha_2-1+s_2}(1-\theta_2)^{\beta_2-1+f_2}\,\theta_3^{\alpha_3-1}(1-\theta_3)^{\beta_3-1} & c=0 \\[4pt] \tau\,\theta_1^{\alpha_1-1}(1-\theta_1)^{\beta_1-1}\,\theta_2^{\alpha_2-1}(1-\theta_2)^{\beta_2-1}\,\theta_3^{\alpha_3-1+s_1+f_2}(1-\theta_3)^{\beta_3-1+f_1+s_2} & c=1 \end{cases} \qquad (3)$$
where $s_a$ and $f_a$ are the numbers of successes and failures observed in arm $a$. It is clear that the posterior (3) is a mixture of the beliefs on parameters $\theta_j$, for $1 \le j \le 3$. With mixed proportion $\tau$, successes of arm 1 and failures of arm 2 are attributed to successes on the shared "hidden" arm 3, whereas failures of arm 1 and successes of arm 2 are attributed to failures of arm 3. On the other hand, the usual Beta-Bernoulli learning of independent arms happens with mixed proportion $1-\tau$.
At the beginning of each bandit task, we assume the agent "resets" its belief about arms ($s_i = f_i = 0$), but the posterior over $p(c)$ is carried over and used as the prior on the next bandit task. Let $\mathrm{Beta}(\alpha,\beta)$ be the Beta function. The marginal posterior on $c$ is as follows:
$$p(c \mid s_1,f_1,s_2,f_2) \propto \begin{cases} (1-\tau)\,\dfrac{\mathrm{Beta}(\alpha_1+s_1,\,\beta_1+f_1)\,\mathrm{Beta}(\alpha_2+s_2,\,\beta_2+f_2)}{\mathrm{Beta}(\alpha_1,\beta_1)\,\mathrm{Beta}(\alpha_2,\beta_2)} & c=0 \\[6pt] \tau\,\dfrac{\mathrm{Beta}(\alpha_3+s_1+f_2,\,\beta_3+f_1+s_2)}{\mathrm{Beta}(\alpha_3,\beta_3)} & c=1 \end{cases}$$
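As a small illustration (our own sketch; function and variable names are ours), this marginal posterior can be computed stably in log space with the log-Beta function:

```python
import numpy as np
from scipy.special import betaln  # log of the Beta function

def coupling_posterior(s1, f1, s2, f2, tau=0.5,
                       a1=1, b1=1, a2=1, b2=1, a3=1, b3=1):
    """Return p(c=1 | s1, f1, s2, f2): posterior probability that the
    arms are coupled, given success/failure counts and Beta priors."""
    # log marginal likelihood under the independent model (c = 0)
    log_m0 = (np.log1p(-tau)
              + betaln(a1 + s1, b1 + f1) - betaln(a1, b1)
              + betaln(a2 + s2, b2 + f2) - betaln(a2, b2))
    # coupled model (c = 1): successes of arm 1 and failures of arm 2
    # both count as successes of the shared hidden arm 3
    log_m1 = (np.log(tau)
              + betaln(a3 + s1 + f2, b3 + f1 + s2) - betaln(a3, b3))
    # normalize the two-component mixture in log space
    m = max(log_m0, log_m1)
    return np.exp(log_m1 - m) / (np.exp(log_m0 - m) + np.exp(log_m1 - m))
```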
The belief state $b$ of this process may be completely represented by $\langle s_1, f_1, s_2, f_2;\ \tau, \alpha_1, \beta_1, \alpha_2, \beta_2, \alpha_3, \beta_3 \rangle$. The predicted probabilities of reward $x_1$ and $x_2$ are:
$$f(x_1 \mid s_1,f_1,s_2,f_2) = \begin{cases} (1-\tau)\,\dfrac{\alpha_1+s_1}{\alpha_1+s_1+\beta_1+f_1} + \tau\,\dfrac{\alpha_3+s_1+f_2}{\alpha_3+s_1+f_2+\beta_3+s_2+f_1} & x_1=1 \\[6pt] (1-\tau)\,\dfrac{\beta_1+f_1}{\alpha_1+s_1+\beta_1+f_1} + \tau\,\dfrac{\beta_3+s_2+f_1}{\alpha_3+s_1+f_2+\beta_3+s_2+f_1} & x_1=0 \end{cases} \qquad (4)$$
and similarly
$$f(x_2 \mid s_1,f_1,s_2,f_2) = \begin{cases} (1-\tau)\,\dfrac{\alpha_2+s_2}{\alpha_2+s_2+\beta_2+f_2} + \tau\,\dfrac{\beta_3+s_2+f_1}{\alpha_3+s_1+f_2+\beta_3+s_2+f_1} & x_2=1 \\[6pt] (1-\tau)\,\dfrac{\beta_2+f_2}{\alpha_2+s_2+\beta_2+f_2} + \tau\,\dfrac{\alpha_3+s_1+f_2}{\alpha_3+s_1+f_2+\beta_3+s_2+f_1} & x_2=0 \end{cases} \qquad (5)$$
Let us drop the prior parameters $\alpha_j, \beta_j$ ($1 \le j \le 3$) and $\tau$ from $b$. Action selection involves solving the following Bellman equations:
$$V(s_1,f_1,s_2,f_2) = \max_{a=1,2} \begin{cases} r(b,1) + \gamma\left[f(x_1{=}0 \mid b)\,V(s_1,f_1{+}1,s_2,f_2) + f(x_1{=}1 \mid b)\,V(s_1{+}1,f_1,s_2,f_2)\right] & a=1 \\[4pt] r(b,2) + \gamma\left[f(x_2{=}0 \mid b)\,V(s_1,f_1,s_2,f_2{+}1) + f(x_2{=}1 \mid b)\,V(s_1,f_1,s_2{+}1,f_2)\right] & a=2 \end{cases} \qquad (6)$$
[Figure 2 appears here: two columns of panels showing the marginal beliefs $p(\theta_1)$, $p(\theta_2)$, $p(\theta_3)$, and $p(c)$ over 0-200 trials. Panels: (a) Learning in a coupled environment; (b) Learning in an independent environment.]
Figure 2: Learning example. A block of four bandit tasks of 50 trials each for each environment. Marginal beliefs on reward probabilities and coupling are shown as functions of time. The brightness indicates the relative probability mass. The coupling belief distribution starts uniform with $\tau = 0.5$ and is not reset within a block. The priors $p(\theta_i; \alpha_i, \beta_i)$ are reset at the beginning of each task with $\alpha_i = \beta_i = 1$ ($1 \le i \le 3$). Note that how well the reward probabilities sum to one forms critical evidence for or against coupling.
To obtain (6) using dynamic programming for a horizon $H$, there will be a total of $\frac{1}{24}(1+H)(2+H)(3+H)(4+H)$ computations, which represent the different occurrences of $s_i, f_i$ out of the $4^H$ possible histories of rewards. This dramatic reduction allows us to be relatively accurate in our approximation to the optimal value of an action.
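The recursion in (6) can be sketched directly with memoization. The following Python fragment is our illustration, not the authors' implementation: it plugs in the predicted probabilities (4)-(5), truncates at a fixed depth budget rather than the full $H = 1000$ horizon, and, as a stated simplification, holds the coupling belief $\tau$ fixed instead of updating it inside the recursion.

```python
from functools import lru_cache

GAMMA = 0.98

def make_value(tau, a1=1, b1=1, a2=1, b2=1, a3=1, b3=1, depth=50):
    """Finite-horizon value V(s1, f1, s2, f2) for the mixture model,
    mixing independent and coupled Bernoulli arms with weight tau."""

    def p_reward_1(s1, f1, s2, f2):
        # predicted P(x1 = 1), equation (4)
        return ((1 - tau) * (a1 + s1) / (a1 + s1 + b1 + f1)
                + tau * (a3 + s1 + f2) / (a3 + s1 + f2 + b3 + s2 + f1))

    def p_reward_2(s1, f1, s2, f2):
        # predicted P(x2 = 1), equation (5)
        return ((1 - tau) * (a2 + s2) / (a2 + s2 + b2 + f2)
                + tau * (b3 + s2 + f1) / (a3 + s1 + f2 + b3 + s2 + f1))

    @lru_cache(maxsize=None)
    def V(s1, f1, s2, f2):
        if s1 + f1 + s2 + f2 >= depth:        # truncate the horizon
            return 0.0
        p1 = p_reward_1(s1, f1, s2, f2)
        p2 = p_reward_2(s1, f1, s2, f2)
        q1 = p1 + GAMMA * (p1 * V(s1 + 1, f1, s2, f2)
                           + (1 - p1) * V(s1, f1 + 1, s2, f2))
        q2 = p2 + GAMMA * (p2 * V(s1, f1, s2 + 1, f2)
                           + (1 - p2) * V(s1, f1, s2, f2 + 1))
        return max(q1, q2)

    return V

# Example: with strong coupling belief, success on arm 1 favors staying
V = make_value(tau=0.9)
print(V(3, 0, 0, 0))
```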
4 Simulation Results
In Figure 2, we perform simulations of learning on blocks of four bandit tasks, each comprising 50 trials. In one simulation (a) rewards are coupled, and in the other (b) they are independent. Note that the model learns quickly in both cases, but it is slower when the task is truly coupled because fewer cases support this hypothesis (when compared to the independent hypothesis).
The importance of the belief on the coupling parameter is that it has a decisive influence on exploratory behavior. Coupling between the two arms corresponds to the case where one arm is a winner and the other is a loser by experimenter design. When playing coupled arms, evidence that an arm is "good" (e.g. > 0.5) necessarily entails the other is "bad", and hence eliminates the need for exploratory behavior: the optimal choice is to "stick with the winner", and switch when the probability estimate dips below 0.5. An agent learning a coupling parameter while sampling arms can manifest a range of exploratory behaviors that depend critically on both the recent reward history and the current state of the belief about c, as illustrated in Figure 3. The top row shows the value of both arms as a function of coupling belief p(c) after different amounts of evidence for the success of arm 2. The plots show that optimal actions stick with the winner when belief in coupling is high, even for small amounts of data. Thus belief in coupling produces underexploration compared to a model assuming independence, and generates behavior similar to a "win stay, lose switch" heuristic early in learning. However, overexploration can also occur when the expected values of both arms are similar. Figure 3 (lower left) shows that uncertainty about c provides an exploratory bonus to the lower-probability arm which incentivizes switching, and hence overexploration. In fact, when the difference in probability between arms is small, action selection can fail to converge to the better option. Figure 3 (right panel) shows that p(c) together with the probability of the better arm determines the transition between exploration and exploitation. These results show that optimal action selection with model uncertainty can generate several kinds of behavior typically labeled suboptimal in multi-armed bandit experiments. Next we provide evidence that people are capable of learning and exploiting coupling, evidence that structure learning may play a role in apparent failures of humans to behave optimally in multi-armed bandit tasks.
[Figure 3 appears here: top rows, six panels of reward per unit time $V(1-\gamma)$ versus coupling belief $p(c)$ for $f_2 = 1, \ldots, 6$; lower rows, exploration-bonus panels; right panel, critical value of $p(c)$ versus the expected value of $\theta_2$.]
Figure 3: Value of arms as a function of coupling. The priors are uniform ($\alpha_j = \beta_j = 1$, $1 \le j \le 3$), the evidence for arm 1 remains fixed for all cases ($s_1 = 1, f_1 = 0$), and successes of arm 2 remain fixed as well ($s_2 = 5$). Failures for arm 2 ($f_2$) vary from 1 to 6. Upper left: Belief that arms are coupled ($p(c)$) versus reward per unit time ($V(1-\gamma)$, where $V$ is the value) of arm 1 (dashed line) and arm 2 (solid line). In all cases, an independent model would choose arm 1 to pull. The vertical line shows the critical coupling belief value where the structure learning model switches to exploitative behavior. Lower left: Exploratory bonus ($V(1-\gamma) - r$, where $r$ is the expected reward) for each arm. Right panel: Critical coupling belief values for exploitative behavior vs. the expected probability of reward of arm 2. Individual points correspond to different information states (successes and failures on both arms).
5 Human Experiments
Each of 16 subjects ran on 32 bandit tasks, a block of 16 in an independent environment and a block of 16 in a coupled one. Within blocks, the presentation order was randomized, and the order of the coupled environment was randomized across subjects. On average each task required 48 pulls. For the independent environment, the subjects made 1194 choices across the 16 tasks, and 925 for the coupled environment.
Each arm is shown on the screen as a slot machine. Subjects pull a machine by pressing a key on the keyboard. When pulled, an animation of the lever is shown, 200 msec later the reward appears on the machine's screen, and a sound mimicking dropping coins lasts proportionally to the amount gathered. We provide several cues, some redundant, to help subjects keep track of previous rewards. At the top, the machine shows the number of pulls, total reward, and average reward per pull so far. Instead of binary rewards 0 and 1, the task presented 0 and 100. The machine's screen changes color according to the average reward, from red (zero points), through yellow (fifty points), to green (one hundred points). The machine's total reward is shown as a pile of coins underneath it. The total score, total pulls, and rankings within a game were presented.
6 Results
We analyze how task uncertainty affects decisions by comparing human behavior to that of the optimal model and to models that assume a fixed structure. For each agent, be it human or not, we compute the (empirical) probability that it selects the oracle-best action versus the optimal belief that a block of tasks is coupled. The idea behind this measure is to show how the belief on task structure changes behavior and which of the models better captures human behavior.
We run 1000 agents for each of the models with task uncertainty (optimal model), assumed coupled reward task (coupled model), and assumed independent reward task (independent model) under the same conditions that subjects faced on both the blocks of coupled and independent tasks. For each of the decisions of these models and the 33904 decisions performed by the 16 subjects, we compute the optimal belief on coupling according to our model and bin the proportion of times the agent chooses the (oracle) best arm according to this belief. The results are summarized in Figure 4. The independent model tends to perform equally well on both coupled and independent reward tasks. The coupled model tends to perform well only in the coupled task and worse in the independent tasks. As expected, the optimal model has better overall performance, but does not perform better than models with fixed task structure (in their respective tasks) because it pays the price of learning early in the block. The optimal model behaves like a mixture between the coupled and independent model. Human behavior is much better captured by the optimal model (Figure 4).
[Figure 4 appears here: probability of choosing the best arm (y-axis, 0.78-0.96) versus coupling belief $p(c=1|D)$ (x-axis, 0-1) for human subjects, the optimal model, the coupled model, and the independent model.]
Figure 4: Effect of coupling on behavior. For each of the decisions of subjects and simulated
models under the same conditions, we compute the optimal belief on coupling according to the
model proposed in this paper and bin the proportion of times an agent chooses the (oracle) best arm
according to this belief. This plot represents the empirical probability that an agent would pick the
best arm at a given belief on coupling.
This is evidence that human behavior shares the characteristics of the optimal model, namely, it contains task uncertainty and exploits knowledge of the task structure to maximize its gains. The gap in performance that exists between the optimal model and humans may be explained by memory limitations or more complicated task structures being entertained by subjects. Because the subjects are not told the coupling state of the environment and the arms appear as separate options, we conclude that people are capable of learning and exploiting task structure. Together these results suggest that structure learning may play a significant role in explaining differences between human behavior and previous normative predictions.
7 Conclusions and future directions
We have provided evidence that structure learning may be an important missing piece in evaluating
human sequential decision making. The idea of modeling sequential decision making under uncertainty as a structure learning problem is a natural extension of previous work on structure learning
in Bayesian models of cognition [17, 18] and animal learning [19] to sequential decision making
problems under uncertainty. It also extends previous work on Bayesian approaches to modeling sequential decision making in the multi-armed bandit [20] by adding structure learning. It is important
to note that we have intentionally focused on reward structure, ignoring issues involving dependencies across trials. Clearly reward structure learning must be integrated with learning about temporal
dependencies [21].
Although we focused on learning coupling between arms, there are other kinds of reward structure learning that may account for a broad variety of human decision making performance. In particular, allowing dependence between the probability of reward at a site and previous actions can produce large changes in decision making behavior. For instance, in a "foraging" model where reward is collected from a site and probabilistically replenished, optimal strategies will produce choice sequences that alternate between reward sites. Thus uncertainty about the independence of reward from previous actions can produce a continuum of behavior, from maximization to probability matching. Note that structure learning explanations for probability matching are significantly different from explanations based on reinforcing previously successful actions (the "law of effect") [22]. Instead of explaining behavior in terms of the idiosyncrasies of a learning rule, structure learning constitutes a fully rational response to uncertainty about the causal structure of rewards in the environment. We intend to test the predictive power of a range of structure learning ideas on experimental data we are currently collecting. Our hope is that, by expanding the range of normative hypotheses for human decision-making, we can begin to develop more principled accounts of human sequential decision-making behavior.
Acknowledgements
The work was supported by NIH NPCS 1R90 DK71500-04, NIPS 2008 Travel Award, CONICYTFIC-World Bank 05-DOCFIC-BANCO-01, ONR MURI N 00014-07-1-0937, and NIH EY02857.
References
[1] Pascal Poupart, Nikos Vlassis, Jesse Hoey, and Kevin Regan. An analytic solution to discrete bayesian reinforcement learning. In 23rd International Conference on Machine Learning, Pittsburgh, Penn, 2006.
[2] Richard Ernest Bellman. Dynamic programming. Princeton University Press, Princeton, 1957.
[3] Noah Gans, George Knox, and Rachel Croson. Simple models of discrete choice and their performance in bandit experiments. Manufacturing and Service Operations Management, 9(4):383-408, 2007.
[4] C.M. Anderson. Behavioral Models of Strategies in Multi-Armed Bandit Problems. PhD thesis, Pasadena, CA., 2001.
[5] Jeffrey Banks, David Porter, and Mark Olson. An experimental analysis of the bandit problem. Economic Theory, 10(1):55-77, 1997.
[6] R. J. Meyer and Y. Shi. Sequential choice under ambiguity: Intuitive solutions to the armed-bandit problem. Management Science, 41:817-83, 1995.
[7] N Vulkan. An economist's perspective on probability matching. Journal of Economic Surveys, 14:101-118, 2000.
[8] Yvonne Brackbill and Anthony Bravos. Supplementary report: The utility of correctly predicting infrequent events. Journal of Experimental Psychology, 64(6):648-649, 1962.
[9] W Edwards. Probability learning in 1000 trials. Journal of Experimental Psychology, 62:385-394, 1961.
[10] W Edwards. Reward probability, amount, and information as determiners of sequential two-alternative decisions. J Exp Psychol, 52(3):177-88, 1956.
[11] E. Fantino and A Esfandiari. Probability matching: Encouraging optimal responding in humans. Canadian Journal of Experimental Psychology, 56:58-63, 2002.
[12] Timothy E J Behrens, Mark W Woolrich, Mark E Walton, and Matthew F S Rushworth. Learning the value of information in an uncertain world. Nat Neurosci, 10(9):1214-1221, 2007.
[13] N. D. Daw, J. P. O'Doherty, P. Dayan, B. Seymour, and R. J. Dolan. Cortical substrates for exploratory decisions in humans. Nature, 441(7095):876-879, 2006.
[14] JS Banks and RK Sundaram. A class of bandit problems yielding myopic optimal strategies. Journal of Applied Probability, 29(3):625-632, 1992.
[15] John Gittins and You-Gan Wang. The learning component of dynamic allocation indices. The Annals of Statistics, 20(2):1626-1636, 1992.
[16] J. C. Gittins and D. M. Jones. A dynamic allocation index for the sequential design of experiments. Progress in Statistics, pages 241-266, 1974.
[17] Joshua B. Tenenbaum, Thomas L. Griffiths, and Charles Kemp. Theory-based bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences, 10(7):309-318, 2006.
[18] Joshua B. Tenenbaum and Thomas L. Griffiths. Structure learning in human causal induction. NIPS 13, pages 59-65, 2000.
[19] A. C. Courville, N. D. Daw, G. J. Gordon, and D. S. Touretzky. Model uncertainty in classical conditioning. Advances in Neural Information Processing Systems, (16):977-986, 2004.
[20] Daniel Acuna and Paul Schrater. Bayesian modeling of human sequential decision-making on the multi-armed bandit problem. In CogSci, 2008.
[21] Michael D. Lee. A hierarchical bayesian model of human decision-making on an optimal stopping problem. Cognitive Science: A Multidisciplinary Journal, 30:1-26, 2006.
[22] Ido Erev and Alvin E. Roth. Predicting how people play games: Reinforcement learning in experimental games with unique, mixed strategy equilibria. The American Economic Review, 88(4):848-881, 1998.
2,886 | 3,616 | Nonparametric Regression and Classification with Joint Sparsity Constraints
Han Liu, John Lafferty, Larry Wasserman
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
We propose new families of models and algorithms for high-dimensional nonparametric learning with joint sparsity constraints. Our approach is based on a regularization method that enforces common sparsity patterns across different function
components in a nonparametric additive model. The algorithms employ a coordinate descent approach that is based on a functional soft-thresholding operator.
The framework yields several new models, including multi-task sparse additive
models, multi-response sparse additive models, and sparse additive multi-category
logistic regression. The methods are illustrated with experiments on synthetic data
and gene microarray data.
1 Introduction
Many learning problems can be naturally formulated in terms of multi-category classification or multi-task regression. In a multi-category classification problem, it is required to discriminate between the different categories using a set of high-dimensional feature vectors, for instance, classifying the type of tumor in a cancer patient from gene expression data. In a multi-task regression problem, it is of interest to form several regression estimators for related data sets that share common types of covariates, for instance, predicting test scores across different school districts. In other areas, such as multi-channel signal processing, it is of interest to simultaneously decompose multiple signals in terms of a large common overcomplete dictionary, which is a multi-response regression problem. In each case, while the details of the estimators vary from instance to instance, across categories, or tasks, they may share a common sparsity pattern of relevant variables selected from a high-dimensional space. How to find this common sparsity pattern is an interesting learning task.
In the parametric setting, progress has been recently made on such problems using regularization
based on the sum of supremum norms (Turlach et al., 2005; Tropp et al., 2006; Zhang, 2006). For example, consider the K-task linear regression problem $y_i^{(k)} = \beta_0^{(k)} + \sum_{j=1}^{p} \beta_j^{(k)} x_{ij}^{(k)} + \epsilon_i^{(k)}$, where the superscript $k$ indexes the tasks, and the subscript $i = 1, \ldots, n_k$ indexes the instances within a task. Using quadratic loss, Zhang (2006) suggests the following estimator:
$$\widehat{\beta} = \arg\min_{\beta} \left\{ \sum_{k=1}^{K} \frac{1}{2n_k} \sum_{i=1}^{n_k} \Big( y_i^{(k)} - \beta_0^{(k)} - \sum_{j=1}^{p} \beta_j^{(k)} x_{ij}^{(k)} \Big)^{\!2} + \lambda \sum_{j=1}^{p} \max_k \big|\beta_j^{(k)}\big| \right\} \qquad (1)$$
where $\max_k |\beta_j^{(k)}| = \|\beta_j\|_\infty$ is the sup-norm of the vector $\beta_j \equiv (\beta_j^{(1)}, \ldots, \beta_j^{(K)})^T$ of coefficients for the $j$th feature across different tasks. The sum of sup-norms regularization has the effect of "grouping" the elements in $\beta_j$ such that they can be shrunk towards zero simultaneously. The
problems of multi-response (or multivariate) regression and multi-category classification can be
viewed as a special case of the multi-task regression problem where tasks share the same design
matrix. Turlach et al. (2005) and Fornasier and Rauhut (2008) propose the same sum of sup-norms
regularization as in (1) for such problems in the linear model setting. In related work, Zhang et al.
(2008) propose the sup-norm support vector machine, demonstrating its effectiveness on gene data.
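As a quick illustration (ours; the array layout and names are hypothetical choices), the joint penalty in (1) is inexpensive to evaluate for a p-by-K coefficient matrix:

```python
import numpy as np

def sup_norm_penalty(B, lam):
    """Sum-of-sup-norms penalty lam * sum_j max_k |B[j, k]| for a
    (p features) x (K tasks) coefficient matrix B."""
    return lam * np.abs(B).max(axis=1).sum()

B = np.array([[0.8, -0.9, 0.7],   # feature shared across three tasks
              [0.0,  0.0, 0.0]])  # feature jointly zeroed out
print(sup_norm_penalty(B, lam=0.5))  # 0.5 * (0.9 + 0.0) = 0.45
```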
In this paper we develop new methods for nonparametric estimation for such multi-task and multi-category regression and classification problems. Rather than fitting a linear model, we instead estimate smooth functions of the data, and formulate a regularization framework that encourages joint functional sparsity, where the component functions can be different across tasks while sharing a common sparsity pattern. Building on a recently proposed method called sparse additive models, or "SpAM" (Ravikumar et al., 2007), we propose a convex regularization functional that can be viewed as a nonparametric analog of the sum of sup-norms regularization for linear models. Based on this regularization functional, we develop new models for nonparametric multi-task regression and classification, including multi-task sparse additive models (MT-SpAM), multi-response sparse additive models (MR-SpAM), and sparse multi-category additive logistic regression (SMALR).
The contributions of this work include (1) an efficient iterative algorithm based on a functional soft-thresholding operator derived from subdifferential calculus, leading to the multi-task and multi-response SpAM procedures, (2) a penalized local scoring algorithm that corresponds to fitting a sequence of multi-response SpAM estimates for sparse multi-category additive logistic regression, and (3) the successful application of this methodology to multi-category tumor classification and biomarker discovery from gene microarray data.
2 Nonparametric Models for Joint Functional Sparsity
We begin by introducing some notation. If $X$ has distribution $P_X$, and $f$ is a function of $x$, its $L_2(P_X)$ norm is denoted by $\|f\|^2 = \int_{\mathcal{X}} f^2(x)\, dP_X = \mathbb{E}(f^2)$. If $v = (v_1, \ldots, v_n)^T$ is a vector, define $\|v\|_n^2 = \frac{1}{n}\sum_{j=1}^{n} v_j^2$ and $\|v\|_\infty = \max_j |v_j|$. For a $p$-dimensional random vector $(X_1, \ldots, X_p)$, let $\mathcal{H}_j$ denote the Hilbert subspace $L_2(P_{X_j})$ of $P_{X_j}$-measurable functions $f_j(x_j)$ of the single scalar variable $X_j$ with zero mean, i.e. $\mathbb{E}[f_j(X_j)] = 0$. The inner product on this space is defined as $\langle f_j, g_j \rangle = \mathbb{E}[f_j(X_j)\, g_j(X_j)]$. In this paper, we mainly study multivariate functions $f(x_1, \ldots, x_p)$ that have an additive form, i.e., $f(x_1, \ldots, x_p) = \alpha + \sum_j f_j(x_j)$, with $f_j \in \mathcal{H}_j$ for $j = 1, \ldots, p$. With $\mathcal{H} \equiv \{1\} \oplus \mathcal{H}_1 \oplus \mathcal{H}_2 \oplus \cdots \oplus \mathcal{H}_p$ denoting the direct sum Hilbert space, we have that $f \in \mathcal{H}$.
2.1 Multi-task/Multi-response Sparse Additive Models
(k)
(k)
In a K-task regression problem, we have observations {(xi , yi ), i = 1, . . . , nk , k = 1, . . . , K},
(k)
(k)
(k)
where xi = (xi1 , . . . , xip )T is a p-dimensional covariate vector, the superscript k indexes tasks
and i indexes the i.i.d. samples for each task. In the following, for notational simplicity, we assume
that n1 = . . . = nK = n. We also assume different tasks are comparable and each Y (k) and
(k)
Xj has been standardized, i.e., has mean zero and variance one. This is not really a restriction
of the model since a straightforward weighting scheme can be
( adopted to extend) our approach to
handle noncomparable tasks. We assume the true model is E Y (k) | X (k) = x(k) = f (k) (x(k) ) ?
?p
(k) (k)
(k)
to be zero. Let
j=1 fj (xj ) for k = 1, . . . , K, where, for simplicity, we take all intercepts ?
Qf (k) (x, y) = (y ? f (k) (x))2 denote the quadratic loss. To encourage common sparsity patterns
across different function components, we define the regularization functional ?K (f ) by
?K (f ) =
p
?
j=1
(k)
max ?fj ?.
k=1,...,K
(2)
The regularization functional $\Omega_K(f)$ naturally combines the idea of the sum of sup-norms penalty for parametric joint sparsity and the regularization idea of SpAM for nonparametric functional sparsity; if $K = 1$, then $\Omega_1(f)$ is just the regularization term introduced for (single-task) sparse additive models by Ravikumar et al. (2007). If each $f_j^{(k)}$ is a linear function, then $\Omega_K(f)$ reduces to the sum of sup-norms regularization term as in (1). We shall employ $\Omega_K(f)$ to induce joint functional sparsity in nonparametric multi-task inference.
Using this regularization functional, the multi-task sparse additive model (MT-SpAM) is formulated as a penalized M-estimator, by framing the following optimization problem:
$$\big(\widehat{f}^{(1)}, \ldots, \widehat{f}^{(K)}\big) = \arg\min_{f^{(1)}, \ldots, f^{(K)}} \left\{ \sum_{k=1}^{K} \frac{1}{2n} \sum_{i=1}^{n} Q_{f^{(k)}}\big(x_i^{(k)}, y_i^{(k)}\big) + \lambda\, \Omega_K(f) \right\} \qquad (3)$$
where $f_j^{(k)} \in \mathcal{H}_j^{(k)}$ for $j = 1, \ldots, p$ and $k = 1, \ldots, K$, and $\lambda > 0$ is a regularization parameter. The multi-response sparse additive model (MR-SpAM) has exactly the same formulation as in (3) except that a common design matrix is used across the K different tasks.
2.2 Sparse Multi-Category Additive Logistic Regression
In a K-category classification problem, we are given $n$ examples $(x_1, y_1), \ldots, (x_n, y_n)$ where $x_i = (x_{i1}, \ldots, x_{ip})^T$ is a $p$-dimensional predictor vector and $y_i = (y_i^{(1)}, \ldots, y_i^{(K-1)})^T$ is a $(K-1)$-dimensional response vector in which at most one element can be one, with all the others being zero. Here, we adopt the common "1-of-K" labeling convention where $y_i^{(k)} = 1$ if $x_i$ has category $k$ and $y_i^{(k)} = 0$ otherwise; if all elements of $y_i$ are zero, then $x_i$ is assigned the $K$-th category.
The multi-category additive logistic regression model is
$$\mathbb{P}\big(Y^{(k)} = 1 \mid X = x\big) = \frac{\exp f^{(k)}(x)}{1 + \sum_{k'=1}^{K-1} \exp f^{(k')}(x)}, \qquad k = 1, \ldots, K-1 \qquad (4)$$
where $f^{(k)}(x) = \alpha^{(k)} + \sum_{j=1}^{p} f_j^{(k)}(x_j)$ has an additive form. We define $f = (f^{(1)}, \ldots, f^{(K-1)})$ to be a discriminant function and $p_f^{(k)}(x) = \mathbb{P}(Y^{(k)} = 1 \mid X = x)$ to be the conditional probability of category $k$ given $X = x$. The logistic regression classifier $h_f(\cdot)$ induced by $f$, which is a mapping from the sample space to the category labels, is simply given by $h_f(x) = \arg\max_{k=1,\ldots,K} p_f^{(k)}(x)$.
If a variable $X_j$ is irrelevant, then all of the component functions $f_j^{(k)}$ are identically zero, for each $k = 1, 2, \ldots, K-1$. This motivates the use of the regularization functional $\Omega_{K-1}(f)$ to zero out entire vectors $f_j = (f_j^{(1)}, \ldots, f_j^{(K-1)})$.
Denoting
$$\ell_f(x, y) = \sum_{k=1}^{K-1} y^{(k)} f^{(k)}(x) - \log\Big(1 + \sum_{k'=1}^{K-1} \exp f^{(k')}(x)\Big)$$
as the multinomial log-loss, the sparse multi-category additive logistic regression estimator (SMALR) is thus formulated as the solution to the optimization problem
$$\big(\widehat{f}^{(1)}, \ldots, \widehat{f}^{(K-1)}\big) = \arg\min_{f^{(1)}, \ldots, f^{(K-1)}} \left\{ -\frac{1}{n} \sum_{i=1}^{n} \ell_f(x_i, y_i) + \lambda\, \Omega_{K-1}(f) \right\} \qquad (5)$$
where $f_j^{(k)} \in \mathcal{H}_j^{(k)}$ for $j = 1, \ldots, p$ and $k = 1, \ldots, K-1$.
3 Simultaneous Sparse Backfitting
We use a blockwise coordinate descent algorithm to minimize the functional defined in (3). We first formulate the population version of the problem by replacing sample averages by their expectations. We then derive stationary conditions for the optimum and obtain a population version algorithm for computing the solution by a series of soft-thresholded univariate conditional expectations. Finally, a finite sample version of the algorithm can be derived by plugging in nonparametric smoothers for these conditional expectations.
For the $j$th block of component functions $f_j^{(1)}, \ldots, f_j^{(K)}$, let $R_j^{(k)} = Y^{(k)} - \sum_{l \neq j} f_l^{(k)}(X_l^{(k)})$ denote the partial residuals. Assuming all but the functions in the $j$th block to be fixed, the optimization problem is reduced to
$$\big(\widehat{f}_j^{(1)}, \ldots, \widehat{f}_j^{(K)}\big) = \arg\min_{f_j^{(1)}, \ldots, f_j^{(K)}} \left\{ \frac{1}{2} \sum_{k=1}^{K} \mathbb{E}\Big[\big(R_j^{(k)} - f_j^{(k)}(X_j^{(k)})\big)^2\Big] + \lambda \max_{k=1,\ldots,K} \|f_j^{(k)}\| \right\}. \qquad (6)$$
The following result characterizes the solution to (6).
Theorem 1. Let $P_j^{(k)} = \mathbb{E}\big(R_j^{(k)} \mid X_j^{(k)}\big)$ and $s_j^{(k)} = \|P_j^{(k)}\|$, and order the indices according to $s_j^{(k_1)} \ge s_j^{(k_2)} \ge \cdots \ge s_j^{(k_K)}$. Then the solution to (6) is given by
$$f_j^{(k_i)} = \begin{cases} P_j^{(k_i)} & \text{for } i > m^* \\[4pt] \dfrac{P_j^{(k_i)}}{s_j^{(k_i)}} \cdot \dfrac{1}{m^*} \Big[\sum_{i'=1}^{m^*} s_j^{(k_{i'})} - \lambda\Big]_+ & \text{for } i \le m^* \end{cases} \qquad (7)$$
where $m^* = \arg\max_m \frac{1}{m}\big(\sum_{i'=1}^{m} s_j^{(k_{i'})} - \lambda\big)$ and $[\cdot]_+$ denotes the positive part.
Therefore, the optimization problem in (6) is solved by a soft-thresholding operator, given in equation (7), which we shall denote as
$$\big(f_j^{(1)}, \ldots, f_j^{(K)}\big) = \mathrm{Soft}_\lambda\big[R_j^{(1)}, \ldots, R_j^{(K)}\big]. \qquad (8)$$
While the proof of this result is lengthy, we sketch the key steps below, which are a functional extension of the subdifferential calculus approach of Fornasier and Rauhut (2008) in the linear setting. First, we formulate an optimality condition in terms of the Gâteaux derivative as follows.
Lemma 2. The functions $f_j^{(k)}$ are solutions to (6) if and only if $f_j^{(k)} - P_j^{(k)} + \lambda u_k v_k = 0$ (almost surely), for $k = 1, \ldots, K$, where the $u_k$ are scalars and the $v_k$ are measurable functions of $X_j^{(k)}$, with $(u_1, \ldots, u_K)^T \in \partial \|\cdot\|_\infty \big|_{(\|f_j^{(1)}\|, \ldots, \|f_j^{(K)}\|)^T}$ and $v_k \in \partial \|f_j^{(k)}\|$, $k = 1, \ldots, K$.
Here the former denotes the subdifferential of the convex functional $\|\cdot\|_\infty$ evaluated at $(\|f_j^{(1)}\|, \ldots, \|f_j^{(K)}\|)^T$; it lies in a $K$-dimensional Euclidean space. The latter denotes the subdifferential of $\|f_j^{(k)}\|$, which is a set of functions. Next, the following proposition from Rockafellar and Wets (1998) is used to characterize the subdifferential of sup-norms.
Lemma 3. The subdifferential of $\|\cdot\|_\infty$ on $\mathbb{R}^K$ is
$$\partial \|\cdot\|_\infty \big|_x = \begin{cases} B^1(1) & \text{if } x = 0 \\ \mathrm{conv}\{\mathrm{sign}(x_k)\, e_k : |x_k| = \|x\|_\infty\} & \text{otherwise,} \end{cases}$$
where $B^1(1)$ denotes the $\ell_1$ ball of radius one, $\mathrm{conv}(A)$ denotes the convex hull of the set $A$, and $e_k$ is the $k$-th canonical unit vector in $\mathbb{R}^K$.
Using Lemma 2 and Lemma 3, the proof of Theorem 1 proceeds by considering three cases for the sup-norm subdifferential evaluated at $(\|f_j^{(1)}\|, \ldots, \|f_j^{(K)}\|)^T$: (1) $\|f_j^{(k)}\| = 0$ for $k = 1, \ldots, K$; (2) there exists a unique $k$ such that $\|f_j^{(k)}\| = \max_{k'=1,\ldots,K} \|f_j^{(k')}\| \neq 0$; (3) there exist at least two $k \neq k'$ such that $\|f_j^{(k)}\| = \|f_j^{(k')}\| = \max_{m=1,\ldots,K} \|f_j^{(m)}\| \neq 0$. The derivations for cases (1) and (2) are relatively straightforward, but for case (3) we prove the following.
Lemma 4. The sup-norm is attained precisely at $m > 1$ entries if and only if $m$ is the largest number such that $s_j^{(k_m)} \ge \frac{1}{m-1}\Big(\sum_{i'=1}^{m-1} s_j^{(k_{i'})} - \lambda\Big)$.
The proof of Theorem 1 then follows from the above lemmas and some calculus. Based on this result, the data version of the soft-thresholding operator is obtained by replacing the conditional expectation $P_j^{(k)} = \mathbb{E}\big(R_j^{(k)} \mid X_j^{(k)}\big)$ by $S_j^{(k)} R_j^{(k)}$, where $S_j^{(k)}$ is a nonparametric smoother for variable $X_j^{(k)}$, e.g., a local linear or spline smoother; see Figure 1. The resulting simultaneous sparse backfitting algorithm for multi-task and multi-response sparse additive models (MT-SpAM and MR-SpAM) is shown in Figure 2. The algorithm for the multi-response case (MR-SpAM) has $S_j^{(1)} = \cdots = S_j^{(K)}$ since there is only a common design matrix.
SOFT-THRESHOLDING OPERATOR $\mathrm{Soft}_\lambda\big[R_j^{(1)}, \ldots, R_j^{(K)};\ S_j^{(1)}, \ldots, S_j^{(K)}\big]$: DATA VERSION
Input: Smoothing matrices $S_j^{(k)}$ and residuals $R_j^{(k)}$ for $k = 1, \ldots, K$, regularization parameter $\lambda$.
(1) Estimate $P_j^{(k)} = \mathbb{E}\big[R_j^{(k)} \mid X_j^{(k)}\big]$ by smoothing: $\widehat{P}_j^{(k)} = S_j^{(k)} R_j^{(k)}$;
(2) Estimate norms: $\widehat{s}_j^{(k)} = \|\widehat{P}_j^{(k)}\|_n$ and order the indices according to $\widehat{s}_j^{(k_1)} \ge \widehat{s}_j^{(k_2)} \ge \cdots \ge \widehat{s}_j^{(k_K)}$;
(3) Find $m^* = \arg\max_m \frac{1}{m}\big(\sum_{i'=1}^{m} \widehat{s}_j^{(k_{i'})} - \lambda\big)$ and calculate
$$\widehat{f}_j^{(k_i)} = \begin{cases} \widehat{P}_j^{(k_i)} & \text{for } i > m^* \\[4pt] \dfrac{\widehat{P}_j^{(k_i)}}{\widehat{s}_j^{(k_i)}} \cdot \dfrac{1}{m^*} \Big[\sum_{i'=1}^{m^*} \widehat{s}_j^{(k_{i'})} - \lambda\Big]_+ & \text{for } i \le m^*; \end{cases}$$
(4) Center: $\widehat{f}_j^{(k)} \leftarrow \widehat{f}_j^{(k)} - \mathrm{mean}\big(\widehat{f}_j^{(k)}\big)$ for $k = 1, \ldots, K$.
Output: Functions $\widehat{f}_j^{(k)}$ for $k = 1, \ldots, K$.
Figure 1: Data version of the soft-thresholding operator.
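A minimal NumPy rendering of the data-version operator above (our sketch, not the authors' code; the smoother matrices are assumed to be precomputed n-by-n linear smoothers, e.g. local linear):

```python
import numpy as np

def soft_threshold(R, S, lam):
    """Data version of the joint soft-thresholding operator.
    R: (K, n) residuals, one row per task.
    S: (K, n, n) smoother matrices, one per task (shared for MR-SpAM).
    Returns f_hat: (K, n) fitted, centered component functions."""
    K, n = R.shape
    P = np.einsum('kij,kj->ki', S, R)           # P_hat[k] = S[k] @ R[k]
    s = np.sqrt((P ** 2).mean(axis=1))          # empirical norms ||P||_n
    order = np.argsort(-s)                      # s[k_1] >= ... >= s[k_K]
    s_sorted = s[order]
    # m* maximizes (1/m)(sum of the m largest norms - lam)
    crit = (np.cumsum(s_sorted) - lam) / np.arange(1, K + 1)
    m_star = int(np.argmax(crit)) + 1
    shrink = max(crit[m_star - 1], 0.0)         # (1/m*)[sum - lam]_+
    f_hat = P.copy()
    for rank, k in enumerate(order):
        if rank < m_star and s[k] > 0:
            f_hat[k] = P[k] / s[k] * shrink     # tied, shrunk top block
    f_hat -= f_hat.mean(axis=1, keepdims=True)  # center each component
    return f_hat
```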
MULTI-TASK AND MULTI-RESPONSE SPAM
Input: Data $(x_i^{(k)}, y_i^{(k)})$, $i = 1, \ldots, n$, $k = 1, \ldots, K$, and regularization parameter $\lambda$.
Initialize: Set $\widehat{f}_j^{(k)} = 0$ and compute smoothers $S_j^{(k)}$ for $j = 1, \ldots, p$ and $k = 1, \ldots, K$.
Iterate until convergence:
For each $j = 1, \ldots, p$:
(1) Compute residuals: $R_j^{(k)} = y^{(k)} - \sum_{j' \neq j} \widehat{f}_{j'}^{(k)}$ for $k = 1, \ldots, K$;
(2) Threshold: $\big(\widehat{f}_j^{(1)}, \ldots, \widehat{f}_j^{(K)}\big) \leftarrow \mathrm{Soft}_\lambda\big[R_j^{(1)}, \ldots, R_j^{(K)};\ S_j^{(1)}, \ldots, S_j^{(K)}\big]$.
Output: Functions $\widehat{f}^{(k)}$ for $k = 1, \ldots, K$.
Figure 2: The simultaneous sparse backfitting algorithm for MT-SpAM or MR-SpAM. For the multi-response case, the same smoothing matrices are used for each k.
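The backfitting loop itself is then a thin wrapper around this operator. The following sketch is ours and uses a fixed iteration count in place of a convergence test:

```python
import numpy as np

def mt_spam(Y, smoothers, lam, n_iters=50):
    """Simultaneous sparse backfitting sketch.
    Y: (K, n) responses; smoothers: (p, K, n, n) per-variable, per-task
    linear smoother matrices. Returns F: (p, K, n) component fits."""
    p, K, n = smoothers.shape[0], Y.shape[0], Y.shape[1]
    F = np.zeros((p, K, n))
    for _ in range(n_iters):
        for j in range(p):
            # partial residuals: leave out the j-th component block
            R = Y - (F.sum(axis=0) - F[j])
            F[j] = soft_threshold(R, smoothers[j], lam)
    return F
```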
3.1 Penalized Local Scoring Algorithm for SMALR
We now derive a penalized local scoring algorithm for sparse multi-category additive logistic regression (SMALR), which can be viewed as a variant of Newton's method in function space. At each iteration, a quadratic approximation to the loss is used as a surrogate functional, with the regularization term added to induce joint functional sparsity. However, a technical difficulty is that the approximate quadratic problem in each iteration is weighted by a non-diagonal matrix in function space, thus a trivial extension of the algorithm in (Ravikumar et al., 2007) for sparse binary nonparametric logistic regression does not apply. To tackle this problem, we use an auxiliary function to lower bound the log-loss, as in (Krishnapuram et al., 2005).
The population version of the log-loss is $\mathcal{L}(f) = \mathbb{E}[\ell_f(X, Y)]$ with $f = (f^{(1)}, \ldots, f^{(K-1)})$. A second-order Lagrange form Taylor expansion of $\mathcal{L}(f)$ at $\widehat{f}$ is then
$$\mathcal{L}(f) = \mathcal{L}(\widehat{f}) + \mathbb{E}\big[\nabla \mathcal{L}(\widehat{f})^T (f - \widehat{f})\big] + \frac{1}{2}\, \mathbb{E}\big[(f - \widehat{f})^T H(\widetilde{f}) (f - \widehat{f})\big] \qquad (9)$$
for some function $\widetilde{f}$, where the gradient is $\nabla \mathcal{L}(\widehat{f}) = Y - p_{\widehat{f}}(X)$ with $p_{\widehat{f}}(X) = \big(p_{\widehat{f}}(Y^{(1)} = 1 \mid X), \ldots, p_{\widehat{f}}(Y^{(K-1)} = 1 \mid X)\big)^T$, and the Hessian is $H(\widetilde{f}) = -\mathrm{diag}\big(p_{\widetilde{f}}(X)\big) + p_{\widetilde{f}}(X)\, p_{\widetilde{f}}(X)^T$. Defining $B = -\frac{1}{4} I_{K-1}$, it is straightforward to show that $B \preceq H(\widetilde{f})$, i.e., $H(\widetilde{f}) - B$ is positive-definite. Therefore, we have that
$$\mathcal{L}(f) \ge \mathcal{L}(\widehat{f}) + \mathbb{E}\big[\nabla \mathcal{L}(\widehat{f})^T (f - \widehat{f})\big] + \frac{1}{2}\, \mathbb{E}\big[(f - \widehat{f})^T B (f - \widehat{f})\big]. \qquad (10)$$
SMALR: SPARSE MULTI-CATEGORY ADDITIVE LOGISTIC REGRESSION
Input: Data $(x_i, y_i)$, $i = 1, \ldots, n$, and regularization parameter $\lambda$.
Initialize: $\widehat{f}_j^{(k)} = 0$ and $\widehat{\alpha}^{(k)} = \log\Big(\sum_{i=1}^{n} y_i^{(k)} \Big/ \big(n - \sum_{i=1}^{n} \sum_{k'=1}^{K-1} y_i^{(k')}\big)\Big)$, $k = 1, \ldots, K-1$.
Iterate until convergence:
(1) Compute $p_{\widehat{f}}^{(k)}(x_i) \equiv \mathbb{P}(Y^{(k)} = 1 \mid X = x_i)$ as in (4) for $k = 1, \ldots, K-1$;
(2) Calculate the transformed responses $Z_i^{(k)} = 4\big(y_i^{(k)} - p_{\widehat{f}}^{(k)}(x_i)\big) + \widehat{\alpha}^{(k)} + \sum_{j=1}^{p} \widehat{f}_j^{(k)}(x_{ij})$ for $k = 1, \ldots, K-1$ and $i = 1, \ldots, n$;
(3) Call the subroutine $\big(\widehat{f}^{(1)}, \ldots, \widehat{f}^{(K-1)}\big) \leftarrow$ MR-SpAM$\big(\{(x_i, Z_i)\}_{i=1}^{n},\ 2\lambda\big)$;
(4) Adjust the intercepts: $\alpha^{(k)} \leftarrow \frac{1}{n} \sum_{i=1}^{n} Z_i^{(k)}$.
Output: Functions $\widehat{f}^{(k)}$ and intercepts $\widehat{\alpha}^{(k)}$ for $k = 1, \ldots, K-1$.
Figure 3: The penalized local scoring algorithm for SMALR.
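One outer iteration of the local scoring loop reduces to computing the class probabilities in (4) and the transformed responses of step (2). A hedged sketch (ours; it assumes the additive fits are already available as an array and reuses a backfitting routine like the one above for step (3)):

```python
import numpy as np

def smalr_step(F_sum, alpha, Y):
    """One local-scoring transformation.
    F_sum: (K-1, n) current additive fits sum_j f_j^{(k)}(x_ij);
    alpha: (K-1,) intercepts; Y: (K-1, n) 1-of-K indicator rows.
    Returns Z: (K-1, n) transformed responses for the MR-SpAM call."""
    eta = F_sum + alpha[:, None]             # discriminant functions
    expo = np.exp(eta)
    prob = expo / (1.0 + expo.sum(axis=0))   # equation (4)
    Z = 4.0 * (Y - prob) + eta               # step (2) of Figure 3
    return Z
```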
The following lemma results from a straightforward calculation.
Lemma 5. The solution $f$ that maximizes the right-hand side of (10) is equivalent to the solution that minimizes $\frac{1}{2}\mathbb{E}\big(\|Z - Af\|_n^2\big)$, where $A = (-B)^{1/2}$ and $Z = A^{-1}(Y - p_{\widehat{f}}) + A\widehat{f}$.
Recalling that $f^{(k)} = \alpha^{(k)} + \sum_{j=1}^{p} f_j^{(k)}$, equation (9) and Lemma 5 then justify the use of the auxiliary functional
$$\frac{1}{2} \sum_{k=1}^{K-1} \mathbb{E}\Big[\Big(Z'^{(k)} - \sum_{j=1}^{p} f^{(k)}(X_j)\Big)^{\!2}\Big] + \lambda'\, \Omega_{K-1}(f) \qquad (11)$$
where $Z'^{(k)} = 4\big(Y^{(k)} - \mathbb{P}_{\widehat{f}}(Y^{(k)} = 1 \mid X)\big) + \widehat{\alpha}^{(k)} + \sum_{j=1}^{p} \widehat{f}_j^{(k)}(X_j)$ and $\lambda' = 2\lambda$. This is precisely in the form of a multi-response SpAM optimization problem in equation (3). The resulting algorithm, in the finite sample case, is shown in Figure 3.
4 Experiments
In this section, we first use simulated data to investigate the performance of the MT-SpAM simultaneous sparse backfitting algorithm. We then apply SMALR to a tumor classification and biomarker identification problem. In all experiments, the data are rescaled to lie in the $p$-dimensional cube $[0,1]^p$. We use local linear smoothing with a Gaussian kernel. To choose the regularization parameter $\lambda$, we simply use J-fold cross-validation or the GCV score from (Ravikumar et al., 2007) extended to the multi-task setting:
$$\mathrm{GCV}(\lambda) = \sum_{i=1}^{n} \sum_{k=1}^{K} Q_{\widehat{f}^{(k)}}\big(x_i^{(k)}, y_i^{(k)}\big) \Big/ \Big(n^2 K^2 \big(1 - (nK)^{-1}\,\mathrm{df}(\lambda)\big)^2\Big)$$
where $\mathrm{df}(\lambda) = \sum_{k=1}^{K} \sum_{j=1}^{p} \nu_j^{(k)}\, I\big(\|\widehat{f}_j^{(k)}\|_n \neq 0\big)$, and $\nu_j^{(k)} = \mathrm{trace}(S_j^{(k)})$ is the effective degrees of freedom for the univariate local linear smoother on the $j$th variable.
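A rough rendering of this score (ours; it assumes fitted values, component fits, and smoother traces are precomputed, and follows the normalization in the display above):

```python
import numpy as np

def gcv_score(Y, Y_hat, F, smoother_traces):
    """GCV for MT-SpAM (our rendering of the display above).
    Y, Y_hat: (K, n) observed and fitted responses;
    F: (p, K, n) component function fits;
    smoother_traces: (p, K) array of trace(S_j^(k))."""
    K, n = Y.shape
    rss = ((Y - Y_hat) ** 2).sum()
    # df(lambda): sum traces over active components ||f_j^(k)||_n != 0
    active = np.sqrt((F ** 2).mean(axis=2)) > 1e-12   # shape (p, K)
    df = (smoother_traces * active).sum()
    return rss / (n ** 2 * K ** 2 * (1.0 - df / (n * K)) ** 2)
```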
4.1 Synthetic Data
We generated $n = 100$ observations from a 10-dimensional three-task additive model with four relevant variables: $y_i^{(k)} = \sum_{j=1}^{4} f_j^{(k)}(x_{ij}^{(k)}) + \epsilon_i^{(k)}$, $k = 1, 2, 3$, where $\epsilon_i^{(k)} \sim N(0, 1)$; the component functions $f_j^{(k)}$ are plotted in Figure 4. The 10-dimensional covariates are generated as $X_j^{(k)} = (W_j^{(k)} + tU^{(k)})/(1 + t)$, $j = 1, \ldots, 10$, where $W_1^{(k)}, \ldots, W_{10}^{(k)}$ and $U^{(k)}$ are i.i.d. sampled from Uniform$(-2.5, 2.5)$. Thus, the correlation between $X_j$ and $X_{j'}$ is $t^2/(1 + t^2)$ for $j \neq j'$. A data-generation sketch is given below.
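For concreteness, a sketch of this design in NumPy (ours, not the authors' code; the component functions here are stand-ins, since the paper's exact choices appear only in Figure 4):

```python
import numpy as np

def make_data(n=100, p=10, K=3, t=1.0, seed=0):
    """Correlated covariates X_j = (W_j + t*U)/(1+t) with pairwise
    correlation t^2/(1+t^2); responses use only the first 4 variables.
    The component functions are hypothetical placeholders."""
    rng = np.random.default_rng(seed)
    X, Y = [], []
    for k in range(K):
        W = rng.uniform(-2.5, 2.5, size=(n, p))
        U = rng.uniform(-2.5, 2.5, size=(n, 1))
        Xk = (W + t * U) / (1.0 + t)
        # stand-in smooth component functions on variables 1-4
        fk = (np.sin(2 * Xk[:, 0]) + Xk[:, 1] ** 2
              + np.cos(Xk[:, 2]) + np.tanh(Xk[:, 3]))
        Y.append(fk + rng.standard_normal(n))
        X.append(Xk)
    return np.stack(X), np.stack(Y)
```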
The results of applying MT-SpAM with the bandwidths $h = (0.08, \ldots, 0.08)$ and regularization parameter $\lambda = 0.25$ are summarized in Figure 4. The upper 12 figures show the 12 relevant component functions for the three tasks; the estimated function components are plotted as solid black
[Figure 4 appears here: (top) twelve panels of estimated versus true component functions for tasks k = 1, 2, 3 on variables x1-x4; (middle) three regularization-path panels plotting the empirical sup-L1 norm against the path index for t = 0, 2, 4, with the GCV-selected model marked; (bottom) a comparison table, reconstructed below.]

Variable selection accuracy (correct model identifications out of 100 runs) and estimation accuracy, MSE (sd):

Model   | Selection t=0 | t=1 | t=2 | t=3 | MSE t=0     | t=1         | t=2         | t=3
MR-SpAM | 89            | 80  | 47  | 37  | 7.43 (0.71) | 5.82 (0.60) | 3.83 (0.37) | 3.07 (0.30)
MARS    | 0             | 0   | 0   | 0   | 8.66 (0.78) | 7.52 (0.61) | 5.36 (0.40) | 4.64 (0.35)
Figure 4: (Top) Estimated vs. true functions from MT-SpAM; (Middle) Regularization paths using MT-SpAM.
(Bottom) Quantitative comparison between MR-SpAM and MARS
lines and the true function components are plotted using dashed red lines. For all the other variables
(from dimension 5 to dimension 10), both the true and estimated components are zero. The middle
three figures show regularization paths as the parameter λ varies; each curve is a plot of the maximum empirical L1 norm of the component functions for each variable, with the red vertical line
representing the selected model using the GCV score. As the correlation increases (t increases), the
separation between the relevant dimensions and the irrelevant dimensions becomes smaller. Using
the same setup but with one common design matrix, we also compare the quantitative performance
of MR-SpAM with MARS (Friedman, 1991), which is a popular method for multi-response additive
regression. Using 100 simulations, the table illustrates the number of times the models are correctly
identified and the mean squared error with the standard deviation in the parentheses. (The MARS
simulations are carried out in R, using the default options of the mars function in the mda library.)
4.2 Gene Microarray Data
Here we apply the sparse multi-category additive logistic regression model to a microarray dataset
for small round blue cell tumors (SRBCT) (Khan et al., 2001). The data consist of expression
profiles of 2,308 genes (Khan et al., 2001) with tumors classified into 4 categories: neuroblastoma
(NB), rhabdomyosarcoma (RMS), non-Hodgkin lymphoma (NHL), and the Ewing family of tumors
(EWS). The dataset includes a training set of size 63 and a test set of size 20. These data have
been analyzed by different groups. The main purpose is to identify important biomarkers, which
are a small set of genes that can accurately predict the type of tumor of a patient. To achieve 100%
accuracy on the test data, Khan et al. (2001) use an artificial neural network approach to identify 96
genes. Tibshirani et al. (2002) identify a set of only 43 genes, using a method called nearest shrunken
centroids. Zhang et al. (2008) identify 53 genes using the sup-norm support vector machine.
In our experiment, SMALR achieves 100% prediction accuracy on the test data with only 20 genes,
which is a much smaller set of predictors than identified in the previous approaches. We follow
the same procedure as in (Zhang et al., 2008), and use a very simple screening step based on the
marginal correlation to first reduce the number of genes to 500. The SMALR model is then trained
using a plugin bandwidth h0 = 0.08, and the regularization parameter λ is tuned using 4-fold cross
validation. The results are tabulated in Figure 5. In the left figure, we show a "heat map" of the
selected variables on the training set. The rows represent the selected genes, with their cDNA chip
image id. The patients are grouped into four categories according to the corresponding tumors,
[Figure 5 panels (plot residue omitted): the left heat map lists the 20 selected gene image ids against the
training samples, which are grouped into the EWS, BL, NB, and RMS columns; the center panels show
fitted marginal functions for genes ID.377048, ID.770394, ID.207274, and ID.1435862 for discriminant
functions k = 1, 2, 3; the right panel plots the CV score against λ.]
Figure 5: SMALR results on gene data: heat map (left), marginal fits (center), and CV score (right).
as illustrated in the vertical groupings. The genes are ordered by hierarchical clustering of their
expression profiles. The heatmap clearly shows four block structures for the four tumor categories.
This suggests visually that the 20 genes selected are highly informative of the tumor type. In the
middle of Figure 5, we plot the fitted discriminant functions of different genes, with their image ids
listed on the plot. The values k = 1, 2, 3 under each subfigure indicate the discriminant function
the plot represents. We see that the fitted functions are nonlinear. The right subfigure illustrates the
total number of misclassified samples under 4-fold cross validation; the λ values 0.3 and 0.4 both give
zero misclassifications, and for the purpose of a sparser biomarker set we choose λ = 0.4. Interestingly,
only 10 of the 20 genes identified by our method are among the 43 genes selected using the shrunken
centroids approach of Tibshirani et al. (2002), and 16 of them are among the 96 genes selected by the
neural network approach of Khan et al. (2001). This non-overlap may suggest some further investigation.
5 Discussion and Acknowledgements
We have presented new approaches to fitting sparse nonparametric multi-task regression models and
sparse multi-category classification models. Due to space constraints, we have not discussed results
on the statistical properties of these methods, such as oracle inequalities and risk consistency; these
theoretical results will be reported elsewhere. This research was supported in part by NSF grant
CCF-0625879.
References
FORNASIER, M. and RAUHUT, H. (2008). Recovery algorithms for vector valued data with joint sparsity
constraints. SIAM Journal of Numerical Analysis 46 577–613.
FRIEDMAN, J. H. (1991). Multivariate adaptive regression splines. The Annals of Statistics 19 1–67.
KHAN, J., WEI, J. S., RINGNER, M., SAAL, L. H., LADANYI, M., WESTERMANN, F., BERTHOLD, F.,
SCHWAB, M., ANTONESCU, C. R., PETERSON, C. and MELTZER, P. S. (2001). Classification and diagnostic
prediction of cancers using gene expression profiling and artificial neural networks. Nature Medicine
7 673–679.
KRISHNAPURAM, B., CARIN, L., FIGUEIREDO, M. and HARTEMINK, A. (2005). Sparse multinomial logistic
regression: Fast algorithms and generalization bounds. IEEE Transactions on Pattern Analysis and Machine
Intelligence 27 957–968.
RAVIKUMAR, P., LIU, H., LAFFERTY, J. and WASSERMAN, L. (2007). SpAM: Sparse additive models. In
Advances in Neural Information Processing Systems 20. MIT Press.
ROCKAFELLAR, R. T. and WETS, R. J.-B. (1998). Variational Analysis. Springer-Verlag Inc.
TIBSHIRANI, R., HASTIE, T., NARASIMHAN, B. and CHU, G. (2002). Diagnosis of multiple cancer types
by shrunken centroids of gene expression. Proc Natl Acad Sci U.S.A. 99 6567–6572.
TROPP, J., GILBERT, A. C. and STRAUSS, M. J. (2006). Algorithms for simultaneous sparse approximation.
Part II: Convex relaxation. Signal Processing 86 572–588.
TURLACH, B., VENABLES, W. N. and WRIGHT, S. J. (2005). Simultaneous variable selection. Technometrics
27 349–363.
ZHANG, H. H., LIU, Y., WU, Y. and ZHU, J. (2008). Variable selection for the multicategory SVM via
adaptive sup-norm regularization. Electronic Journal of Statistics 2 149–167.
ZHANG, J. (2006). A probabilistic framework for multitask learning. Tech. Rep. CMU-LTI-06-006, Ph.D.
thesis, Carnegie Mellon University.
2,887 | 3,617 | Hierarchical Fisher Kernels for Longitudinal Data
Zhengdong Lu Todd K. Leen
Dept. of Computer Science & Engineering
Oregon Health & Science University
Beaverton, OR 97006
[email protected],[email protected]
Jeffrey Kaye
Layton Aging & Alzheimer's Disease Center
Oregon Health & Science University
Portland, OR 97201
[email protected]
Abstract
We develop new techniques for time series classification based on hierarchical Bayesian
generative models (called mixed-effect models) and the Fisher kernel derived from them.
A key advantage of the new formulation is that one can compute the Fisher information matrix despite varying sequence lengths and varying sampling intervals. This avoids
the commonly-used ad hoc replacement of the Fisher information matrix with the identity which destroys the geometric invariance of the kernel. Our construction retains the
geometric invariance, resulting in a kernel that is properly invariant under change of coordinates in the model parameter space. Experiments on detecting cognitive decline show
that classifiers based on the proposed kernel out-perform those based on generative models
and other feature extraction routines, and on Fisher kernels that use the identity in place
of the Fisher information.
1 Introduction
Time series classification arises in diverse application. This paper develops new techniques based on hierarchical Bayesian generative models and the Fisher kernel derived from them. A key advantage of the
new formulation is that, despite varying sequence lengths and sampling times, one can compute the Fisher
information matrix. This avoids its common ad hoc replacement with the identity matrix. The latter strategy,
common in the biological sequence literature [4], destroys the geometrical invariance of the kernel. Our construction retains the proper geometric structure, resulting in a kernel that is properly invariant under change
of coordinates in the model parameter space.
This work was motivated by the need to classify clinical longitudinal data on human motor and psychometric
test performance. Clinical studies show that at the population level progressive slowing of walking and the
rate at which a subject can tap their fingers are predictive of cognitive decline years before its manifestation
[1]. Similarly, performance on psychometric tests, such as delayed recall of a story or word lists (tests
not used in diagnosis), is predictive of cognitive decline [8]. An early predictor of cognitive decline for
individual patients based on such longitudinal data would improve medical care and planning for assistance.
Our new Fisher kernels use mixed-effects models [6] as the generative process. These are hierarchical
models that describe the population (consisting of many individuals) as a whole, and variations between
individuals in the population. The population model parameters (called fixed effects), the covariance of
the between-individual variability (the random effects), and the additive noise variance are fit by maximum
likelihood. The overall population model together with the covariance of the random effects comprise a set
of parameters for the prior on an individual subject model, so the fitting scheme is a hierarchical empirical
Bayesian procedure.
Data Description The data for this study was drawn from the Oregon Brain Aging Study (OBAS) [2], a
longitudinal study spanning up to fifteen years with roughly yearly assessment of subjects. For our work,
we grouped the subjects into two classes: those who remain cognitively healthy through the course of the
study (denoted normal), and those who progress to mild cognitive impairment (MCI) or further to dementia
(denoted impaired). Since we are interested in prediction, we retain only data taken prior to diagnosis of
impairment. We use 97 subjects from the normal group and 46 from the group that becomes impaired.
Motor task data included the time (denoted as seconds) and the number of steps (denoted as steps) to walk
9 meters, and the number of times the subject can tap their forefinger, both dominant (tappingD) and nondominant hands (tappingN) in 10 seconds. Psychometric test data include delayed-recall, which measures
the number of words from a list of 10 that the subject can recall one minute after hearing the list, and logical
memory II in which the subject is graded on recall of a story told 15-20 minutes earlier.
2 Mixed-effect Models
2.1 Mixed-effect Regression Models
In this paper, we confine attention to parametric regression. Suppose there are k individuals (indexed by i =
1, . . . , k) contributing data to the sample, and we have observations {t_n^i, y_n^i}, n = 1, . . . , N^i, as a function
of time t for individual i. The data are modeled as y_n^i = f(t_n^i; β^i) + ε_n^i, where β^i are the regression
parameters and ε_n^i is zero-mean white Gaussian noise with (unknown) variance σ^2. The superscript on the
model parameters β^i indicates that the regression parameters are different for each individual contributing to
the population. Since the model parameters vary between individuals, it is natural to consider them generated
by the sum of a fixed and a random piece: β^i = α + γ^i, where γ^i (called the random effect) is assumed
distributed N(0, D) with unknown covariance D. The expected parameter vector α, called the fixed effect,
determines the model for the population as a whole, and the random effect γ^i accounts for the differences
between individuals. This intuition is most precise for the case in which the model is linear in the parameters:

    f(t; β^i) = (β^i)^T φ(t) = α^T φ(t) + (γ^i)^T φ(t),    (1)

where φ(t) = [φ_1(t), φ_2(t), . . . , φ_d(t)]^T denotes a vector of basis functions.[1] We use M = {α, D, σ} to
denote the mixed-effect model parameters. The feature values, observation times, and observation noise are

    y^i ≡ [y_1^i, . . . , y_{N^i}^i]^T,    t^i ≡ [t_1^i, . . . , t_{N^i}^i]^T,    ε^i ≡ [ε_1^i, . . . , ε_{N^i}^i]^T.

[1] More generally, the fixed and random effects can be associated with different basis functions.
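As a concrete illustration of this generative process (with made-up parameter values, not those fit to the OBAS data), one subject's trajectory can be simulated as follows:

    import numpy as np

    rng = np.random.default_rng(0)
    alpha = np.array([1.0, 0.02])    # fixed effect: intercept and slope (illustrative)
    D = np.diag([0.09, 1e-4])        # random-effect covariance (illustrative)
    sigma = 0.1                      # observation noise s.t.d.

    def simulate_subject(ages):
        # y_n = f(t_n; beta) + eps_n with beta = alpha + gamma, gamma ~ N(0, D)
        Phi = np.column_stack([np.ones_like(ages), ages])   # rows are phi(t) = [1, t]
        beta = alpha + rng.multivariate_normal(np.zeros(2), D)
        return Phi @ beta + sigma * rng.normal(size=len(ages))

    t = np.array([70.0, 71.5, 73.0, 74.8])   # irregular visit times for one subject
    print(simulate_subject(t))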
2.2 Maximum Likelihood Fitting
Model fitting uses the entire collection of data {t^i, y^i}, i = 1, . . . , k, to determine the parameters M =
{α, D, σ} by maximum likelihood. The likelihood of the data {t^i, y^i} given M is

    p(y^i; t^i, M) = ∫ p(y^i | β^i; t^i, σ) p(β^i | M) dβ^i    (2)
                   = (2π)^{−N^i/2} |Σ^i|^{−1/2} exp( −(1/2)(y^i − Φ(t^i)α)^T (Σ^i)^{−1} (y^i − Φ(t^i)α) ),    (3)

where

    Σ^i = Φ(t^i) D Φ(t^i)^T + σ^2 I,    and    Φ(t^i) = [φ(t_1^i), φ(t_2^i), . . . , φ(t_{N^i}^i)]^T.

[Figure 1 panels (plot residue omitted): log(seconds) vs. age for the normal and impaired groups, and
logical memory II scores (# of words) vs. age for the normal and impaired groups.]
Figure 1: The fit mixed-effect models for two tests. In each panel, the red line stands for the fixed effect
α^T φ(t). The two green lines stand for α^T φ(t) ± √(φ^T(t) D φ(t)), i.e., the population model ± the s.t.d.
of the deviation due to the uncertainty of β. The black dashed line is the s.t.d. of the deviation when we also
consider the observation noise.
The data likelihood for Y = {y^1, y^2, . . . , y^k} with T = {t^1, t^2, . . . , t^k} is then p(Y; T, M) =
∏_{i=1}^k p(y^i | t^i; M). The maximum likelihood values of {α, D, σ} are found using the Expectation-
Maximization algorithm [6] with {β^1, β^2, . . . , β^k} considered as the latent variables:

    E-step:    Q(M, M^g) = E_{{β^i}}( log p(Y, {β^i}; T, M) | Y; T, M^g ),    (4)
    M-step:    M = arg max_M Q(M, M^g),    (5)

where M^g stands for the model parameters estimated in the previous step, and the expectation in the E-step is
with respect to the posterior distribution of {β^i} when Y is known and the model parameter is M^g. For
the linear mixed-effect model in Equation (1), the M-step can be given in closed form. The details of the
updating equations are given by Laird et al. [6].
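For the linear model, everything needed here is Gaussian: the marginal likelihood (3) and the E-step posterior over β^i are both available in closed form. A minimal numpy/scipy sketch, assuming the Σ^i = Φ(t^i)DΦ(t^i)^T + σ²I form given above:

    import numpy as np
    from scipy.stats import multivariate_normal

    def marginal_loglik(y, t, alpha, D, sigma):
        # log p(y; t, M) for one subject under the linear mixed-effect model
        Phi = np.column_stack([np.ones_like(t), t])          # rows are phi(t) = [1, t]
        Sigma = Phi @ D @ Phi.T + sigma**2 * np.eye(len(t))  # marginal covariance of y
        return multivariate_normal(mean=Phi @ alpha, cov=Sigma).logpdf(y)

    def posterior_beta(y, t, alpha, D, sigma):
        # Gaussian posterior p(beta | y; t, M), used in the E-step and in the kernels
        Phi = np.column_stack([np.ones_like(t), t])
        prec = np.linalg.inv(D) + Phi.T @ Phi / sigma**2     # posterior precision
        cov = np.linalg.inv(prec)
        mean = cov @ (np.linalg.inv(D) @ alpha + Phi.T @ y / sigma**2)
        return mean, cov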
We use the linear mixed-effect model with polynomial basis functions φ(t) = [1, t]^T. We trained separate
mixed-effect models for each of the six measurements. For the four motor behavior measurements, we use
the logarithm of the data to reduce the skew of the residuals. Figure 1 shows the fit models for seconds and
logical memory II, as representatives of the six measurements. The plots show the fixed effect regression
α^T φ(t) (red curve), and the standard deviations arising from the random effects (green curves) and
measurement noise (dashed black curve, see caption). The data are the blue spaghetti plots. The plots
confirm that subjects that become impaired deteriorate faster than those who remain healthy.

With multiple classes (or component subpopulations), it is natural to use a mixture of mixed-effect models.
We have two components: one fit on the normal group (denoted M_0) and one fit on the impaired group
(denoted M_1), with M_m = {α_m, D_m, σ_m}, m = 0, 1. Here, we use M̃ = {π_0, M_0, π_1, M_1} to denote
the parameters of this mixture, with π_0 and π_1 being the mixing proportions (prior) estimated from the
training data. The overall generative process for any individual (t^i, y^i) is summarized in Figure 2. Here
z^i ∈ {0, 1} is the latent variable indicating which model component is used to generate y^i.

Figure 2: The graphical model of the mixture of mixed-effect models.
3 Hierarchical Fisher Kernel
3.1 Fisher Kernel Background
The Fisher kernel [4] provides a way to extract discriminative features from the generative model. For any
θ-parameterized model p(x; θ), the Fisher kernel between x^i and x^j is defined as

    K(x^i, x^j) = (∇_θ log p(x^i; θ))^T I^{−1} ∇_θ log p(x^j; θ),    (6)

where I is the Fisher information matrix with (n, m) entry

    I_{n,m} = ∫_x ( ∂log p(x; θ)/∂θ_n ) ( ∂log p(x; θ)/∂θ_m ) p(x; θ) dx.    (7)

The kernel entry K(x^i, x^j) can be viewed as the inner product of the natural gradients I^{−1}∇_θ log p(x; θ)
at x^i and x^j with metric I, and is invariant to re-parametrization of θ. Jaakkola and Haussler [4] prove that a
linear classifier based on the Fisher kernel performs at least as well as the generative model.
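The re-parametrization invariance is easy to verify on a toy model; the sketch below does so for a one-dimensional Gaussian with unknown mean, where the Fisher information is 1/σ² in closed form (this is only an illustration, not part of the proposed method):

    import numpy as np

    mu, var = 0.0, 2.0                      # model p(x; mu) = N(mu, var), var known

    score = lambda x: (x - mu) / var        # d/dmu log p(x; mu)
    I = 1.0 / var                           # Fisher information for mu

    def fisher_kernel(xi, xj, s, info):
        return s(xi) * (1.0 / info) * s(xj)

    # Reparameterize mu' = c * mu: the score scales by 1/c and the Fisher
    # information by 1/c^2, so the kernel value is unchanged.
    c = 3.0
    score_p = lambda x: score(x) / c
    I_p = I / c**2
    xi, xj = 1.3, -0.4
    assert np.isclose(fisher_kernel(xi, xj, score, I),
                      fisher_kernel(xi, xj, score_p, I_p))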
3.2 Retaining the Fisher Information Matrix
In the bioinformatics literature [3] and for longitudinal data such as ours, p(x^i; θ) is different for each
individual owing to different sequence lengths, and (for longitudinal data) different sampling times t^i. The
integral in Equation (7) must therefore include the distribution of sequence lengths and observation times.
Where only sequence lengths differ, an empirical average can be used. However where observation times
are non-uniform and vary considerably between individuals (as is the case here), there is insufficient data to
form an estimate by empirical averaging.
The usual response to the difficulty is to replace the Fisher information with the identity matrix [4]. This
spoils the geometric structure, in particular the invariance of the kernel K(x^i, x^j) under change of coordinates in the model parameter space (model re-parameterization). This is a significant flaw: the coordinate
system used to describe the model is immaterial and should not influence the value of K(xi , xj ). For probabilistic kernel regression, the choice of metric is immaterial in the limit of large training sets [4]. However for
our application, which uses a support vector machine (SVM), we found the difference cannot be neglected.
In our case, replacing the Fisher information matrix with the identity matrix is grossly unsuitable. For the
mixed-effect model with polynomial basis functions, the Fisher score components associated with higher
order terms (such as slope and curvature) are far larger than the entries associated with lower order terms
(such as the intercept). Without the proper normalization provided by the Fisher information matrix, the kernel
will be dominated by the higher order entries.[2] A principled extension of the Fisher kernel provided by our
hierarchical model allows proper calculation of the Fisher information matrix.
3.3 Hierarchical Fisher Kernel
Our design of the kernel is based on the generative hierarchy of the mixture of mixed-effect models in
Figure 2. We notice that the individual-specific information t^i enters this generative process only at the
last step, while the "latent" variables β^i and z^i are drawn from the Gaussian mixture model (GMM)
θ̃ = {π_0, α_0, D_0, π_1, α_1, D_1}, with p(z^i, β^i; θ̃) = π_{z^i} p(β^i; α_{z^i}, D_{z^i}).

We can thus build a standard Fisher kernel for the latent variables, and use it to induce a kernel on the
observed data. Denoting the latent variables by v^i, the Fisher kernel between v^i and v^j is

    K(v^i, v^j) = (∇_θ̃ log p(v^i; θ̃))^T (I^v)^{−1} ∇_θ̃ log p(v^j; θ̃),

[2] Our experiments on the OBAS data show that replacing the Fisher information with the identity compromises
classifier performance.
where the Fisher score ∇_θ̃ log p(v^i; θ̃) is a column vector

    ∇_θ̃ log p(v^i; θ̃) = [ ∂log p/∂π_0 ; ∂log p/∂α_0 ; ∂log p/∂D_0 ; ∂log p/∂π_1 ; ∂log p/∂α_1 ; ∂log p/∂D_1 ]^T,

and I^v is the well-defined Fisher information matrix for v:

    I^v_{n,m} = ∫_v ( ∂log p(v; θ̃)/∂θ̃_n ) ( ∂log p(v; θ̃)/∂θ̃_m ) p(v | θ̃) dv.    (8)
The kernel for y^i and y^j is the expectation of K(v^i, v^j) given the observations y^i and y^j:

    K(y^i, y^j) = E_{v^i, v^j}[ K(v^i, v^j) | y^i, y^j; t^i, t^j, M̃ ]
                = ∫∫ K(v^i, v^j) p(v^i | y^i; t^i, M̃) p(v^j | y^j; t^j, M̃) dv^i dv^j.

With different choices of the latent variable v, we have three kernel design strategies, given in the following
subsections. This extension of the Fisher kernel, named the hierarchical Fisher kernel (HFK), enables us to
deal with time series with irregular sampling and different sequence lengths. To our knowledge it has not
been reported elsewhere in the literature.
Design A: v^i = β^i

This kernel design marginalizes out the higher level variables {z^i} and constructs a Fisher kernel between
the {β^i}. This generative process is illustrated in Figure 3 (left panel), which is the same graphical model
as in Figure 2 with the latent variable z^i marginalized out.[3] The Fisher kernel for β is

    K(β^i, β^j) = (∇_θ̃ log p(β^i | θ̃))^T (I^β)^{−1} ∇_θ̃ log p(β^j | θ̃).    (9)

The kernel between y^i and y^j is defined as the expectation of K(β^i, β^j):

    K(y^i, y^j) = E_{β^i, β^j}( K(β^i, β^j) | y^i, y^j; t^i, t^j, M̃ )    (10)
                = ( ∫ ∇_θ̃ log p(β^i | θ̃) p(β^i | y^i; t^i, M̃) dβ^i )^T (I^β)^{−1} ∫ ∇_θ̃ log p(β^j | θ̃) p(β^j | y^j; t^j, M̃) dβ^j.    (11)

The computational drawback is that the integral required to evaluate ∫ ∇_θ̃ log p(β^j | θ̃) p(β^j | y^j; t^j, M̃) dβ^j
and I^β do not have an analytical solution. In our experiments, we estimated the integrals with Monte-Carlo
sampling.

[3] Strictly speaking, we cannot sum out z^i at this step since the group membership is used later in generating the
observation noise. However, this is a reasonable approximation since the noise variances from M_0 and M_1 are similar.
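To sketch what such a Monte-Carlo estimate looks like, the snippet below averages the mixture score over posterior draws of β, restricted to the mean parameters α_0, α_1 (the π- and D-blocks are omitted, and the posterior is taken as given); it is our own illustrative simplification:

    import numpy as np

    def mc_mean_score(post_mean, post_cov, alphas, Ds, pis, n_samples=2000, seed=0):
        # Monte-Carlo estimate of E[ d/d(alpha_m) log p(beta; theta) | y ], m = 0, 1.
        # For a Gaussian mixture, d/d(alpha_m) log p(beta) =
        #   r_m(beta) * D_m^{-1} (beta - alpha_m), with r_m the responsibility.
        rng = np.random.default_rng(seed)
        betas = rng.multivariate_normal(post_mean, post_cov, size=n_samples)
        # unnormalized component densities (shared constants cancel in r_m)
        dens = [pis[k] * np.exp(-0.5 * np.einsum('ni,ij,nj->n',
                    betas - alphas[k], np.linalg.inv(Ds[k]), betas - alphas[k]))
                / np.sqrt(np.linalg.det(Ds[k])) for k in (0, 1)]
        scores = []
        for m in (0, 1):
            r = dens[m] / (dens[0] + dens[1])
            diff = betas - alphas[m]
            scores.append((r[:, None] * (diff @ np.linalg.inv(Ds[m]))).mean(axis=0))
        return np.concatenate(scores)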
Design B: v^i = (z^i, β^i)

This design strategy takes both β^i and z^i as a joint latent variable and builds a Fisher kernel for them. The
generative process, as summarized in Figure 3 (middle panel), gives the probability of the latent variables as
p(z^i, β^i; θ̃) = π_{z^i} p(β^i; α_{z^i}, D_{z^i}). The Fisher kernel for the joint variable (z^i, β^i) is

    K((z^i, β^i), (z^j, β^j)) = (∇_θ̃ log p(z^i, β^i; θ̃))^T (I^{z,β})^{−1} ∇_θ̃ log p(z^j, β^j; θ̃),    (12)

where I^{z,β} is the Fisher information matrix associated with the distribution p(z, β; θ̃). It can be shown that

    K((z^i, β^i), (z^j, β^j)) = (1/π_{z^i}) δ(z^i, z^j) (1 + K_{z^i}(β^i, β^j)),
where K_m(β^i, β^j) is the Fisher kernel for β associated with component m (= 0, 1):

    K_m(β^i, β^j) = (∇_{θ_m} log p(β^i; α_m, D_m))^T I_m^{−1} ∇_{θ_m} log p(β^j; α_m, D_m).    (13)

The kernel for y^i and y^j is defined similarly as in Design A:

    K(y^i, y^j) = E_{z^i, β^i, z^j, β^j}( K((z^i, β^i), (z^j, β^j)) | y^i, y^j; t^i, t^j, M̃ ),    (14)

where the expectation can be evaluated analytically.
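For a Gaussian component the mean-parameter block of (13) is available in closed form, since the score with respect to α_m is D_m^{-1}(β − α_m) and the corresponding Fisher information block is D_m^{-1}; a sketch restricted to that block (the D_m-derivative entries are omitted):

    import numpy as np

    def K_m_mean_block(beta_i, beta_j, alpha_m, D_m):
        # score(beta) = D^{-1}(beta - alpha); Fisher info for the mean = D^{-1};
        # hence K = score_i^T (D^{-1})^{-1} score_j
        #         = (beta_i - alpha)^T D^{-1} (beta_j - alpha).
        Dinv = np.linalg.inv(D_m)
        return (beta_i - alpha_m) @ Dinv @ (beta_j - alpha_m)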
Design C: M̃ = M_m, m = 0, 1

This design uses one mixed-effect component instead of the mixture as the generative model, as illustrated
in Figure 3 (right panel). Although any single M_m is not a satisfying generative model for the whole
population, the resulting kernel is still useful for classification, as follows. For either model, m = 0, 1, the
Fisher score for the ith individual, ∇_{θ_m} log p(β^i; θ_m), describes how the probability p(β^i; θ_m) responds
to changes of the parameters θ_m. This is a discriminative feature vector, since the likelihoods of β^i for
individuals from different groups are likely to respond differently to changes of the parameters θ_m. The
kernel between β^i and β^j is K_m(β^i, β^j) defined in Equation (13), and then the kernel for y^i and y^j is

    K(y^i, y^j) = E_{β^i, β^j}( K(β^i, β^j) | y^i, y^j; t^i, t^j, M_m ).    (15)

Our experiments show that the kernel based on the impaired group is significantly better than the others; we
therefore use this kernel as the representative of Design C. It is easy to see that this kernel is a special case
of Design A or Design B when π_0 = 1 and π_1 = 0.

Figure 3: The graphical models of the mixture of mixed-effect models for Designs A (left), B (middle), and
C (right).
3.4 Related Models
Marginalized Kernel  Our HFK is related to the marginalized kernel (MK) proposed by Tsuda et al. [10].
MK uses a distribution with a discrete latent variable h (indicating the generating component) and an
observable x, which form a complete data pair x̃ = (h, x). The kernel for observables x^i and x^j is defined as

    K(x^i, x^j) = Σ_{h^i} Σ_{h^j} P(h^i | x^i) P(h^j | x^j) K̃(x̃^i, x̃^j),    (16)

where K̃(x̃^i, x̃^j) is the joint kernel for the complete data. Tsuda et al. [10] use the form

    K̃(x̃^i, x̃^j) = δ(h^i, h^j) K_{h^i}(x^i, x^j),    (17)

where K_{h^i}(x^i, x^j) is a pre-defined kernel for observables associated with the h^i-th generative component.
Equation (17) says that K̃(x̃^i, x̃^j) takes the value of the kernel defined for the m-th component model if x^i
and x^j are generated from the same component h^i = h^j = m; otherwise, K̃(x̃^i, x̃^j) = 0. HFK can be viewed
as a special case of a generalized marginalized kernel that allows continuous latent variables h. This is clear
if we re-write Equation (16) as

    K(x^i, x^j) = E_{h^i, h^j}( K̃(x̃^i, x̃^j) | x^i, x^j )

and view K̃(x̃^i, x̃^j) as a generalization of a kernel between h^i and h^j. Nevertheless, HFK is different from
the original work in [10], in that MK requires existing kernels for the observables, such as K_h(x^i, x^j) in
Equation (17). In our problem setting, this kernel does not exist due to the different lengths of the time series.
Probability Product Kernel  We can get a family of kernels by employing various designs of K(v^i, v^j).
The simplest example is to let K(v^i, v^j) = δ(z^i, z^j), which immediately leads to

    K(y^i, y^j) = E_{v^i, v^j}( K(v^i, v^j) | y^i, y^j; t^i, t^j, M̃ ) = Σ_m P(z^i = m | y^i; t^i, M̃) P(z^j = m | y^j; t^j, M̃),

which is obviously related to the posterior probabilities of the samples, and is essentially a special case of
the probability product kernels [5] proposed by Jebara et al.
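In code, this simplest member of the family is just a dot product of posterior membership vectors:

    import numpy as np

    def membership_kernel(post_i, post_j):
        # K(y_i, y_j) = sum_m P(z_i = m | y_i) P(z_j = m | y_j);
        # post_* are length-2 posterior membership vectors (a sketch).
        return float(np.dot(post_i, post_j))

    print(membership_kernel(np.array([0.9, 0.1]), np.array([0.2, 0.8])))  # 0.26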
4 Experiments
Performance Evaluation  We use the empirical ROC curve (detection rate vs. false alarm rate) to evaluate
classifiers. We compare different classifiers using the area under the curve (AUC), and calculate the
statistical significance following the method given by Pepe [9]. We tested the classifiers on the five features:
steps, seconds, tappingD, tappingN, and logical memory II. The results for delayed-recall are omitted, as
they are very close to those for logical memory II. The mixed-effect models for each feature were trained
separately with order-1 polynomials (linear) as the basis functions. For each feature, the kernels are used in
support vector machines (SVM) for classification, and the ROC is obtained by thresholding the classifier
output with varying values. The classifiers are evaluated by leave-one-out cross-validation, the left-out
sample consisting of an individual subject's complete time series (which is also held out of the fitting of the
generative model).

Classifiers  For comparison, we also examined the following two classifiers. First, we consider the likelihood
ratio test based on the mixed-effect models {M_0, M_1}. For any given observation (t, y), the likelihood that
it is generated by mixed-effect model M_m is given by p(y; t, M_m), which is defined similarly as in
Equation (3). The classification decision for a likelihood ratio classifier is made by thresholding the ratio
p(y; t, M_0)/p(y; t, M_1). Second, we consider a feature extraction routine independent of any generative
model. We summarize each individual i with the least-squares fit coefficients of a d-degree polynomial
regression model, denoted p^i. To get a reliable fit we only consider the case d = 1, since many individuals
only have four or five observations. We use the coefficients (normalized to their s.t.d.), denoted p̂^i, as the
feature vector, and build an RBF kernel G_{ij} = exp( −‖p̂^i − p̂^j‖_2^2 / (2s^2) ), where s is the kernel width
estimated with leave-one-out cross validation in our experiment. The obtained kernel matrix G will be
referred to as the LSQ kernel.
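The LSQ baseline is simple to reproduce; a sketch (with our own variable names) that fits a line per subject and builds the Gram matrix over standardized coefficients:

    import numpy as np

    def lsq_kernel(subjects, s=1.0):
        # subjects: list of (t, y) arrays, one pair per individual.
        # Fit y ~ a + b*t per subject, standardize [a, b] across subjects,
        # return the RBF Gram matrix G_ij = exp(-||p_i - p_j||^2 / (2 s^2)).
        P = np.array([np.polynomial.polynomial.polyfit(t, y, 1) for t, y in subjects])
        P = P / P.std(axis=0)
        sq = ((P[:, None, :] - P[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq / (2.0 * s**2))

    rng = np.random.default_rng(1)
    subjects = [(np.sort(rng.uniform(70, 90, m)), rng.normal(size=m))
                for m in rng.integers(4, 8, size=5)]
    print(lsq_kernel(subjects).shape)   # (5, 5)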
Results  We first compare the three HFK designs, using the ROC curves plotted in Figure 4 (upper row). On
the four motor tests, Design A and Design B are very much comparable, except on tappingD, where Design A
is marginally better than Design B (p = 0.136). Also on the motor tests, Design C is slightly but consistently
better than the other two designs. On logical memory II (story recall), the three designs have comparable
performance. We thus use Design C as the representative of HFK, and compare it with the likelihood ratio
classifier and the SVM based on the LSQ kernel, as shown in Figure 4 (lower row). On the four motor tests,
the classifier based on HFK clearly out-performs the other two classifiers, and on logical memory II the three
classifiers have very much comparable performance.
5 Discussion
Fisher kernels derived from mixed-effect generative models retain the Fisher information matrix, and hence
the proper invariance of the kernel under change of coordinates in the model parameter space. In additional
experiments, classifiers constructed with the proper kernel out-perform those constructed with the identity
matrix in place of the Fisher information on our data. For example, on seconds, the HFK (Design C) achieves
AUC = 0.7333, while the Fisher kernel computed with the identity matrix as the metric on p(y^i; t^i, M) achieves
an AUC = 0.6873, with a p-value (Z-test) of 0.0435.
Our classifiers built with Fisher kernels derived from mixed-effect models outperform those based solely
on the generative model (using likelihood ratio tests) for the motor task data, and are comparable on the
psychometric tests. The hierarchical kernels also produce better classifiers than a standard SVM using the
coefficients of a least squares fit to the individual's data. This shows that the generative model provides a real
advantage for classification. The mixed-effect models capture both the population behavior (through α),
and the statistical variability of the individual subject models (through the covariance D of the random
effects γ^i). Knowledge of
[Figure 4 panels (ROC plot residue omitted). Each panel plots detection rate vs. false alarm rate. The
per-panel p-values are as follows. Upper row: steps (p1 = 0.486, p2 = 0.326), seconds (p1 = 0.387,
p2 = 0.158), tappingD (p1 = 0.136, p2 = 0.210), tappingN (p1 = 0.482, p2 = 0.286), logical memory II
(p1 = 0.491, p2 = 0.452). Lower row: steps (p1 = 0.041, p2 = 0.038), seconds (p1 = 0.056, p2 = 0.083),
tappingD (p1 = 0.042, p2 = 0.085), tappingN (p1 = 0.38, p2 = 0.049), logical memory II (p1 = 0.485,
p2 = 0.523).]
Figure 4: Comparison of classifiers. Each number is the p-value (Z-test) for the null hypothesis "the AUC of
Classifier 1 is the same as the AUC of Classifier 2". Upper row: three HFK designs; p1: Design A vs.
Design B, p2: Design C vs. Design A. Lower row: HFK & other classifiers; p1: Design C vs. likelihood
ratio, p2: Design C vs. LSQ kernel.
the statistics of the subject variability are extremely important for classification: although not discussed here,
classifiers based only on the population model (α) perform far worse than those presented here [7].
Acknowledgements
This work was supported by Intel Corp. under the OHSU BAIC award. Milar Moore and Robin Guariglia
of the Layton Aging & Alzheimer's Disease Center gave invaluable help with data from the Oregon Brain
Aging Study. We thank Misha Pavel, Tamara Hayes, and Nichole Carlson for helpful discussion.
References
[1] R. Camicioli, D. Howieson, B. Oken, G. Sexton, and J. Kaye. Motor slowing precedes cognitive impairment in the
oldest old. Neurology, 50:1496–1498, 1998.
[2] M. Green, J. Kaye, and M. Ball. The Oregon brain aging study: Neuropathology accompanying healthy aging in
the oldest old. Neurology, 54(1):105–113, 2000.
[3] T. Jaakkola, M. Diekhans, and D. Haussler. Using the Fisher kernel method to detect remote protein homologies.
7th Intell. Sys. Mol. Biol., pages 149–158, 1999.
[4] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. Technical report, Dept. of
Computer Science, Univ. of California, 1998.
[5] T. Jebara, R. Kondor, and A. Howard. Probability product kernels. Journal of Machine Learning Research,
5:819–844, 2004.
[6] N. Laird and J. Ware. Random-effects models for longitudinal data. Biometrics, 38(4):963–974, 1982.
[7] Z. Lu. Constrained Clustering and Cognitive Decline Detection. PhD thesis, OHSU, 2008.
[8] S. Marquis, M. Moore, D. Howieson, G. Sexton, H. Payami, J. Kaye, and R. Camicioli. Independent predictors of
cognitive decline in healthy elderly persons. Arch. Neurol., 59:601–606, 2002.
[9] M. Pepe. The Statistical Evaluation of Medical Tests for Classification and Prediction. Oxford University Press,
Oxford, 2003.
[10] K. Tsuda, T. Kin, and K. Asai. Marginalized kernels for biological sequences. Bioinformatics, 1(1):1–8, 2002.
essentially:1 kernel:73 normalization:1 irregular:1 background:1 separately:1 interval:1 subject:12 alzheimer:2 easy:1 xj:21 fit:7 zi:6 gave:1 reduce:1 decline:6 inner:1 ti1:2 motivated:1 six:2 speaking:1 impairment:3 generally:1 useful:1 clear:1 simplest:1 generate:1 outperform:1 exist:1 zj:1 notice:1 estimated:4 arising:1 blue:1 diverse:1 diagnosis:2 discrete:1 write:1 iz:1 group:7 key:2 four:4 nevertheless:1 drawn:2 gmm:1 year:2 sum:2 parameterized:1 uncertainty:1 named:1 place:2 family:1 reasonable:1 decision:1 comparable:4 tleen:1 hi:5 dash:1 y1i:1 dominated:1 extremely:1 ball:1 remain:2 describes:1 slightly:1 tent:1 dv:3 invariant:3 taken:1 equation:7 skew:1 hierarchical:10 evi:2 original:1 denotes:1 clustering:1 include:2 graphical:3 beaverton:1 marginalized:5 unsuitable:1 carlson:1 yearly:1 build:3 graded:1 strategy:3 parametric:1 usual:1 responds:1 gradient:1 separate:1 thank:1 spanning:1 length:7 modeled:1 insufficient:1 ratio:6 design:33 proper:5 unknown:2 perform:3 upper:3 observation:10 howard:1 variability:3 precise:1 y1:1 jebara:2 pair:1 required:1 tap:2 california:1 summarize:1 built:1 green:3 memory:9 max:1 reliable:1 natural:3 difficulty:1 residual:1 scheme:1 improve:1 extract:1 health:2 prior:3 geometric:4 literature:3 meter:1 acknowledgement:1 contributing:2 mixed:21 age:4 validation:2 spoil:1 ivn:1 degree:1 thresholding:2 story:3 pi:1 row:5 course:1 elsewhere:1 supported:1 last:1 distributed:1 curve:6 stand:3 avoids:2 ferent:1 commonly:1 collection:1 made:1 far:2 employing:1 kzi:1 observable:3 confirm:1 hayes:1 assumed:1 neuropathology:1 discriminative:3 xi:14 neurology:2 continuous:1 latent:10 robin:1 lsq:3 layton:2 mol:1 vj:2 significance:1 whole:3 noise:7 alarm:11 s2:1 psychometric:4 representative:3 referred:1 roc:3 intel:1 khi:2 csee:1 tin:6 kin:1 minute:2 specific:1 dementia:1 list:3 neurol:1 svm:4 false:11 phd:1 ohsu:3 likely:1 determines:1 identity:8 viewed:2 rbf:1 replace:1 fisher:43 change:6 included:1 except:1 averaging:1 called:4 gij:1 invariance:5 mci:1 la:1 indicating:2 support:2 latter:1 arises:1 nondominant:1 bioinformatics:2 evaluate:2 sexton:2 tested:1 biol:1 |
2,888 | 3,618 | Improved Moves for Truncated Convex Models
M. Pawan Kumar
Dept. of Engineering Science
University of Oxford
P.H.S. Torr
Dept. of Computing
Oxford Brookes University
[email protected]
[email protected]
Abstract
We consider the problem of obtaining the approximate maximum a posteriori estimate of a discrete random field characterized by pairwise potentials that form a
truncated convex model. For this problem, we propose an improved st-MINCUT
based move making algorithm. Unlike previous move making approaches, which
either provide a loose bound or no bound on the quality of the solution (in terms
of the corresponding Gibbs energy), our algorithm achieves the same guarantees as the standard linear programming (LP) relaxation. Compared to previous approaches based on the LP relaxation, e.g. interior-point algorithms or treereweighted message passing (TRW), our method is faster as it uses only the efficient st-MINCUT algorithm in its design. Furthermore, it directly provides us with
a primal solution (unlike TRW and other related methods which solve the dual
of the LP). We demonstrate the effectiveness of the proposed approach on both
synthetic and standard real data problems.
Our analysis also opens up an interesting question regarding the relationship between move making algorithms (such as ?-expansion and the algorithms presented in this paper) and the randomized rounding schemes used with convex relaxations. We believe that further explorations in this direction would help design
efficient algorithms for more complex relaxations.
1 Introduction
Discrete random fields are a powerful tool for formulating several problems in Computer Vision
such as stereo reconstruction, segmentation, image stitching and image denoising [22]. Given data
D (e.g. an image or a video), random fields model the probability of a set of random variables v,
i.e. either the joint distribution of v and D as in the case of Markov random fields (MRF) [2] or the
conditional distribution of v given D as in the case of conditional random fields (CRF) [18]. The
word 'discrete' refers to the fact that each of the random variables v_a ∈ v = {v_0, . . . , v_{n−1}} can
take one label from a discrete set l = {l_0, . . . , l_{h−1}}. Throughout this paper, we will assume an MRF
framework while noting that our results are equally applicable to a CRF.

An MRF defines a neighbourhood relationship (denoted by E) over the random variables such that
(a, b) ∈ E if, and only if, v_a and v_b are neighbouring random variables. Given an MRF, a labelling
refers to a function f such that f : {0, . . . , n − 1} → {0, . . . , h − 1}. In other words, the function
f assigns to each random variable v_a ∈ v a label l_{f(a)} ∈ l. The probability of the labelling is
given by the following Gibbs distribution: Pr(f, D | θ) = exp(−Q(f, D; θ))/Z(θ), where θ is the
parameter of the MRF and Z(θ) is the normalization constant (i.e. the partition function). Assuming
a pairwise MRF, the Gibbs energy is given by:

    Q(f, D; θ) = Σ_{v_a ∈ v} θ^1_{a;f(a)} + Σ_{(a,b) ∈ E} θ^2_{ab;f(a)f(b)},    (1)

where θ^1_{a;f(a)} and θ^2_{ab;f(a)f(b)} are the unary and pairwise potentials respectively. The superscripts
'1' and '2' indicate that the unary potential depends on the labelling of one random variable at a
time, while the pairwise potential depends on the labelling of two neighbouring random variables.
Clearly, the labelling f which maximizes the posterior Pr(f, D | θ) can be obtained by minimizing
the Gibbs energy. The problem of obtaining such a labelling f is known as maximum a posteriori
(MAP) estimation. In this paper, we consider the problem of MAP estimation of random fields where
the pairwise potentials are defined by truncated convex models [4]. Formally speaking, the pairwise
potentials are of the form
    θ^2_{ab;f(a)f(b)} = w_{ab} min{ d(f(a) − f(b)), M },    (2)

where w_{ab} ≥ 0 for all (a, b) ∈ E, d(·) is a convex function and M > 0 is the truncation factor.
Recall that, by the definition of Ishikawa [9], a function d(·) defined at discrete points (specifically
over integers) is convex if, and only if, d(x + 1) − 2d(x) + d(x − 1) ≥ 0 for all x ∈ Z. It is assumed
that d(x) = d(−x). Otherwise, it can be replaced by (d(x) + d(−x))/2 without changing the Gibbs
energy of any of the possible labellings of the random field [23]. Examples of pairwise potentials of
this form include the truncated linear metric and the truncated quadratic semi-metric, i.e.

    θ^2_{ab;f(a)f(b)} = w_{ab} min{ |f(a) − f(b)|, M },    θ^2_{ab;f(a)f(b)} = w_{ab} min{ (f(a) − f(b))^2, M }.    (3)
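These definitions translate directly into code; the sketch below implements both truncated potentials, checks Ishikawa's discrete convexity condition, and evaluates the Gibbs energy (1) on a made-up three-variable chain:

    import numpy as np

    def trunc_linear(x, M):    return min(abs(x), M)
    def trunc_quadratic(x, M): return min(x * x, M)

    def is_discrete_convex(d, xs):
        # Ishikawa's condition: d(x+1) - 2 d(x) + d(x-1) >= 0 over the range.
        return all(d(x + 1) - 2 * d(x) + d(x - 1) >= 0 for x in xs)

    assert is_discrete_convex(lambda x: abs(x), range(-5, 6))              # convex
    assert not is_discrete_convex(lambda x: min(abs(x), 2), range(-5, 6))  # truncation breaks convexity

    def gibbs_energy(f, unary, edges, w, M, d=trunc_linear):
        # Q(f) = sum_a theta^1_{a;f(a)} + sum_{(a,b)} w_ab * min{d(f(a)-f(b)), M}
        Q = sum(unary[a][f[a]] for a in range(len(f)))
        Q += sum(w[(a, b)] * d(f[a] - f[b], M) for (a, b) in edges)
        return Q

    unary = np.array([[0, 1, 2, 3], [2, 0, 2, 4], [3, 2, 1, 0]], dtype=float)
    edges = [(0, 1), (1, 2)]
    w = {(0, 1): 1.0, (1, 2): 0.5}
    print(gibbs_energy([0, 1, 3], unary, edges, w, M=2))   # 2.0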
Before proceeding further, we would like to note here that the method presented in this paper can be
trivially extended to truncated submodular models (a generalization of truncated convex models).
However, we will restrict our discussion to truncated convex models for two reasons: (i) it makes
the analysis of our approach easier; and (ii) truncated convex pairwise potentials are commonly
used in several problems such as stereo reconstruction, image denoising and inpainting [22]. Note
that in the absence of a truncation factor (i.e. when we only have convex pairwise potentials) the
exact MAP estimation can be obtained efficiently using the methods of Ishikawa [9] or Veksler [23].
However, minimizing the Gibbs energy in the presence of a truncation factor is well-known to be
NP-hard. Given their widespread use, it is not surprising that several approximate MAP estimation
algorithms have been proposed in the literature for the truncated convex model. Below, we review
such algorithms.
1.1 Related Work
Given a random field with truncated convex pairwise potentials, Felzenszwalb and Huttenlocher [6]
improved the efficiency of the popular max-product belief propagation (BP) algorithm [19] to obtain
the MAP estimate. BP provides the exact MAP estimate when the neighbourhood structure E of the
MRF defines a tree (i.e. it contains no loops). However, for a general MRF , BP provides no bounds on
the quality of the approximate MAP labelling obtained. In fact, it is not even guaranteed to converge.
The results of [6] can be used directly to speed-up the tree-reweighted message passing algorithm
(TRW) [24] and its sequential variant TRW-S [10]. Both TRW and TRW-S attempt to optimize the
Lagrangian dual of the standard linear programming (LP) relaxation of the MAP estimation problem [5, 15, 21, 24]. Unlike BP and TRW, TRW-S is guaranteed to converge. However, it is well-known that TRW-S and other related algorithms [7, 13, 25] suffer from the following problems: (i)
they are slower than algorithms based on efficient graph-cuts [22]; and (ii) they only provide a dual
solution [10]. The primal solution (i.e. the labelling f) is often obtained from the dual solution in an
unprincipled manner.[1] Furthermore, it was also observed that, unlike graph-cuts based approaches,
TRW-S does not work well when the random field models long range interactions (i.e. when the
neighbourhood relationship E is highly connected) [11]. However, due to the lack of experimental
results, it is not clear whether this observation applies to the methods described in [7, 13, 25].
Another way of solving the LP relaxation is to resort to interior point algorithms [3]. Although
interior point algorithms are much slower in practice than TRW-S, they have the advantage of providing the primal (possibly fractional) solution of the LP relaxation. Chekuri et al. [5] showed that
when using certain randomized rounding schemes on the primal solution (to get the final labelling
f), the following guarantees hold true: (i) for the Potts model (i.e. d(f(a) − f(b)) = |f(a) − f(b)|
and M = 1), we obtain a multiplicative bound[2] of 2; (ii) for the truncated linear metric (i.e.
d(f(a) − f(b)) = |f(a) − f(b)| and a general M > 0), we obtain a multiplicative bound of 2 + √2;
and (iii) for the truncated quadratic semi-metric (i.e. d(f(a) − f(b)) = (f(a) − f(b))^2 and a general
M > 0), we obtain a multiplicative bound of O(√M).

[1] We note here that the recently proposed algorithm in [20] directly provides the primal solution. However,
it is much slower than the methods which solve the dual.
[2] Let f be the labelling obtained by an algorithm A (e.g. in this case the LP relaxation followed by the
rounding scheme) for a class of MAP estimation problems (e.g. in this case when the pairwise potentials form a
Potts model). Let f* be the optimal labelling. The algorithm A is said to achieve a multiplicative bound of σ
if, for every instance in the class of MAP estimation problems, the following holds true:

    E[ Q(f, D; θ) / Q(f*, D; θ) ] ≤ σ,

where E(·) denotes the expectation of its argument under the rounding scheme.
Initialization
- Initialize the labelling to some function f_1. For example, f_1(a) = 0 for all v_a ∈ v.
Iteration
- Choose an interval I_m = [i_m + 1, j_m], where (j_m − i_m) = L such that d(L) ≥ M.
- Move from the current labelling f_m to a new labelling f_{m+1} such that
  f_{m+1}(a) = f_m(a) or f_{m+1}(a) ∈ I_m, ∀ v_a ∈ v.
  The new labelling is obtained by solving the st-MINCUT problem on a graph described in § 2.1.
Termination
- Stop when there is no further decrease in the Gibbs energy for any interval I_m.

Table 1: Our Algorithm. As is typical with move making methods, our approach iteratively goes
from one labelling to the next by solving an st-MINCUT problem. It converges when there remain no
moves which reduce the Gibbs energy further.
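Table 1's control flow is compact enough to write down directly; the st-MINCUT subroutine is left as a stub (solve_interval_move is our placeholder name), since its graph construction is the subject of § 2.1, and moves are accepted only when they lower the Gibbs energy (the rule made precise in equation (4) below):

    def solve_interval_move(f, interval, energy):
        # Placeholder for the st-MINCUT move of Section 2.1: it should return a
        # labelling f' with f'[a] == f[a] or f'[a] in interval for every a.
        # Here it is a no-op so that the skeleton runs.
        return list(f)

    def interval_moves(f, h, L, energy, max_sweeps=10):
        # Iterative interval moves of Table 1 (a sketch): slide I_m over the
        # label set, propose a move per interval, and keep energy-decreasing ones.
        for _ in range(max_sweeps):
            improved = False
            for i in range(-L + 1, h - 1):
                interval = [k for k in range(i + 1, i + L + 1) if 0 <= k < h]
                f_new = solve_interval_move(f, interval, energy)
                if energy(f_new) < energy(f):
                    f, improved = f_new, True
            if not improved:
                return f   # no interval decreases the Gibbs energy any further
        return f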
The algorithms most related to our approach are the so-called move making methods which rely on
solving a series of graph-cut (specifically st-MINCUT) problems. Move making algorithms start with
an initial labelling f_0 and iteratively minimize the Gibbs energy by moving to a better labelling. At
each iteration, (a subset of) random variables have the option of either retaining their old label or
taking a new label from a subset of the labels l. For example, in the αβ-swap algorithm [4] the
variables currently labelled l_α or l_β can either retain their labels or swap them (i.e. some variables
labelled l_α can be relabelled as l_β and vice versa). The recently proposed range move algorithm [23]
modifies this approach such that any variable currently labelled l_i, where i ∈ [α, β], can be assigned
any label l_j, where j ∈ [α, β]. Note that the new label l_j can be different from the old label l_i, i.e.
i ≠ j. Neither of these algorithms (i.e. αβ-swap and range move) provides any guarantees on the
quality of the solution.
In contrast, the α-expansion algorithm [4] (where each variable can either retain its label or get assigned the label lα at an iteration) provides a multiplicative bound of 2 for the Potts model and 2M for the truncated linear metric. Gupta and Tardos [8] generalized the α-expansion algorithm for the truncated linear metric and obtained a multiplicative bound of 4. Komodakis and Tziritas [14] designed a primal-dual algorithm which provides a bound of 2M for the truncated quadratic semi-metric. Note that these bounds are inferior to the bounds obtained by the LP relaxation. However,
all the above move making algorithms use only a single st-MINCUT at each iteration and are hence,
much faster than interior point algorithms, TRW, TRW-S and BP.
1.2 Our Results
We further extend the approach of Gupta and Tardos [8] in two ways (section 2). The first extension
allows us to handle any truncated convex model (and not just truncated linear). The second extension
allows us to consider a potentially larger subset of labels at each iteration compared to [8]. As will be seen in the subsequent analysis (§2.2), these two extensions allow us to solve the MAP estimation
problem efficiently using st-MINCUT whilst obtaining the same guarantees as the LP relaxation [5].
Furthermore, our approach does not suffer from the problems of TRW-S mentioned above. In order
to demonstrate its practical use, we provide a favourable comparison of our method with several
state of the art MAP estimation algorithms (section 3).
2 Description of the Algorithm
Table 1 describes the main steps of our approach. Note that unlike the methods described in [4, 23]
we will not be able to obtain the optimal move at each iteration. In other words, if in the mth iteration we move from labelling fm to fm+1 then it is possible that there exists another labelling f⋆m+1 such that f⋆m+1(a) = fm(a) or f⋆m+1(a) ∈ Im for all va ∈ v and Q(f⋆m+1, D; θ) < Q(fm+1, D; θ). However, our analysis in the next section shows that we are still able to reduce the Gibbs energy sufficiently at each iteration so as to obtain the guarantees of the LP relaxation.
We now turn our attention to designing a method of moving from labelling fm to fm+1. Our approach relies on constructing a graph such that every st-cut on the graph corresponds to a labelling f̂ of the random variables which satisfies: f̂(a) = fm(a) or f̂(a) ∈ Im, for all va ∈ v. The new labelling fm+1 is obtained in two steps: (i) we obtain a labelling f̂ which corresponds to the
st-MINCUT on our graph; and (ii) we choose the new labelling fm+1 as
    fm+1 = f̂    if Q(f̂, D; θ) ≤ Q(fm, D; θ),
    fm+1 = fm   otherwise.                                                    (4)
Below, we provide the details of the graph construction.
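Before turning to the graph details, the overall iteration of Table 1 together with the acceptance rule of equation (4) can be sketched in code. The sketch below is ours, not the authors': an exhaustive search over the allowed labellings stands in for the st-MINCUT step of §2.1 (feasible only for toy problems), and the interval schedule, toy numbers, and helper names (gibbs_energy, interval_move, move_making) are our own choices.

    import itertools

    def gibbs_energy(f, unary, edges, d, M):
        # Equation (1): unary terms plus truncated convex pairwise terms.
        energy = sum(unary[a][f[a]] for a in range(len(unary)))
        energy += sum(w * min(d(f[a] - f[b]), M) for (a, b), w in edges.items())
        return energy

    def interval_move(f, unary, edges, d, M, lo, hi):
        # One move: each variable keeps f[a] or takes a label in [lo, hi].
        # Exhaustive search stands in for the st-MINCUT of section 2.1.
        choices = [sorted(set([f[a]] + list(range(lo, hi + 1))))
                   for a in range(len(unary))]
        return list(min(itertools.product(*choices),
                        key=lambda g: gibbs_energy(g, unary, edges, d, M)))

    def move_making(unary, edges, d, M, num_labels, L, sweeps=5):
        f = [0] * len(unary)                          # initialization: f1(a) = 0
        for _ in range(sweeps):
            for lo in range(num_labels - L + 1):      # intervals of L labels
                g = interval_move(f, unary, edges, d, M, lo, lo + L - 1)
                if gibbs_energy(g, unary, edges, d, M) <= gibbs_energy(f, unary, edges, d, M):
                    f = g                             # acceptance rule, equation (4)
        return f

    # Toy usage: a 3-variable chain with 6 labels and truncated linear d, M = 2.
    unary = [[3, 1, 4, 1, 5, 9], [2, 6, 5, 3, 5, 8], [9, 7, 9, 3, 2, 3]]
    edges = {(0, 1): 1.0, (1, 2): 1.0}
    print(move_making(unary, edges, d=abs, M=2, num_labels=6, L=2))

The acceptance test mirrors equation (4): a move is kept only if it does not increase the Gibbs energy.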
2.1 Graph Construction
At each iteration of our algorithm, we are given an interval Im = [im + 1, jm] of L labels (i.e. (jm − im) = L) where d(L) ≤ M. We also have the current labelling fm for all the random variables. We construct a directed weighted graph (with non-negative weights) Gm = {Vm, Em, cm(·, ·)} such that for each va ∈ v, we define vertices {aim+1, aim+2, . . . , ajm} ⊂ Vm. In addition, as is the case
with every st-MINCUT problem, there are two additional vertices called terminals which we denote
by s (the source) and t (the sink). The edges e ∈ Em with capacity (i.e. weight) cm(e) are of two
types: (i) those that represent the unary potentials of a labelling corresponding to an st-cut in the graph; and (ii) those that represent the pairwise potentials of the labelling.
Figure 1: Part of the graph Gm containing the terminals and the vertices corresponding to the
variable va . The edges which represent the unary potential of the new labelling are also shown.
Representing Unary Potentials  For all random variables va ∈ v, we define the following edges which belong to the set Em: (i) For all k ∈ [im + 1, jm), edges (ak, ak+1) have capacity cm(ak, ak+1) = θ¹a;k; (ii) For all k ∈ [im + 1, jm), edges (ak+1, ak) have capacity cm(ak+1, ak) = ∞; (iii) Edges (ajm, t) have capacity cm(ajm, t) = θ¹a;jm; (iv) Edges (t, ajm) have capacity cm(t, ajm) = ∞; (v) Edges (s, aim+1) have capacity cm(s, aim+1) = θ¹a;fm(a) if fm(a) ∉ Im and ∞ otherwise; and (vi) Edges (aim+1, s) have capacity cm(aim+1, s) = ∞.
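To make items (i)-(vi) concrete, here is a small helper (our illustration, not the authors' code, with string node names and lo = im + 1, hi = jm; theta1_a plays the role of the unary potentials θ¹a;·):

    INF = float("inf")

    def unary_edges(theta1_a, f_m_a, lo, hi):
        # Items (i)-(vi) for one variable a over the interval [lo, hi];
        # theta1_a[k] is the unary potential of label k, f_m_a the current label.
        e = []
        for k in range(lo, hi):
            e.append(((f"a{k}", f"a{k+1}"), theta1_a[k]))   # (i)
            e.append(((f"a{k+1}", f"a{k}"), INF))           # (ii)
        e.append(((f"a{hi}", "t"), theta1_a[hi]))           # (iii)
        e.append((("t", f"a{hi}"), INF))                    # (iv)
        in_interval = lo <= f_m_a <= hi
        e.append((("s", f"a{lo}"), INF if in_interval else theta1_a[f_m_a]))  # (v)
        e.append(((f"a{lo}", "s"), INF))                    # (vi)
        return e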
Fig. 1 shows the above edges together with their capacities for one random variable va . Note that
there are two types of edges in the above set: (i) with finite capacity; and (ii) with infinite capacity.
Any st-cut with finite cost³ contains only one of the finite capacity edges for each random variable
va . This is because if an st-cut included more than one finite capacity edge, then by construction it
must include at least one infinite capacity edge thereby making its cost infinite [9, 23]. We interpret
a finite cost st-cut as a relabelling of the random variables as follows:
    f̂(a) = k       if the st-cut includes edge (ak, ak+1) where k ∈ [im + 1, jm),
    f̂(a) = jm      if the st-cut includes edge (ajm, t),                          (5)
    f̂(a) = fm(a)   if the st-cut includes edge (s, aim+1).
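Reading a labelling off a finite-cost cut, in the same node-naming convention as the sketch above (our helper; cut_edges is the set of edges crossing the cut):

    def decode_cut(cut_edges, f_m_a, lo, hi):
        # Equation (5): recover the label of variable a from a finite-cost st-cut.
        for k in range(lo, hi):
            if (f"a{k}", f"a{k+1}") in cut_edges:
                return k
        if (f"a{hi}", "t") in cut_edges:
            return hi
        return f_m_a        # the cut must then include edge (s, a_lo)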
Note that the sum of the unary potentials for the labelling f̂ is exactly equal to the cost of the st-cut
over the edges defined above. However, the Gibbs energy of the labelling also includes the sum of
the pairwise potentials (as shown in equation (1)). Unlike the unary potentials we will not be able
to model the sum of pairwise potentials exactly. However, we will be able to obtain its upper bound
using the cost of the st-cut over the following edges.
Representing Pairwise Potentials  For all neighbouring random variables va and vb, i.e. (a, b) ∈ E, we define edges (ak, bk′) ∈ Em where either one or both of k and k′ belong to the set (im + 1, jm] (i.e. at least one of them is different from im + 1). The capacity of these edges is given by
    cm(ak, bk′) = (wab/2) (d(k − k′ + 1) − 2d(k − k′) + d(k − k′ − 1)).          (6)
The above capacity is non-negative due to the fact that wab ≥ 0 and d(·) is convex. Furthermore, we also add the following edges:
    cm(ak, ak+1) = (wab/2) (d(L − k + im) + d(k − im)),      ∀(a, b) ∈ E, k ∈ [im + 1, jm)
    cm(bk′, bk′+1) = (wab/2) (d(L − k′ + im) + d(k′ − im)),  ∀(a, b) ∈ E, k′ ∈ [im + 1, jm)
    cm(ajm, t) = cm(bjm, t) = (wab/2) d(L),                  ∀(a, b) ∈ E.        (7)
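The same can be done for equations (6) and (7) (our sketch; in the full graph the chain capacities of equation (7) add to the unary capacities of Fig. 1 on the same arcs, and lo = im + 1, hi = jm as before):

    def pairwise_capacities(w_ab, d, lo, hi):
        # Capacities of equations (6) and (7) for one neighbouring pair (a, b).
        L = hi - lo + 1
        caps = {}
        for k in range(lo, hi + 1):            # equation (6): second differences of d
            for kp in range(lo, hi + 1):
                if k > lo or kp > lo:          # at least one index above im + 1
                    val = 0.5 * w_ab * (d(k - kp + 1) - 2 * d(k - kp) + d(k - kp - 1))
                    caps[(f"a{k}", f"b{kp}")] = caps[(f"b{kp}", f"a{k}")] = val
        for k in range(lo, hi):                # equation (7): chain and sink edges
            step = 0.5 * w_ab * (d(L - (k - lo + 1)) + d(k - lo + 1))
            caps[(f"a{k}", f"a{k+1}")] = step
            caps[(f"b{k}", f"b{k+1}")] = step
        caps[(f"a{hi}", "t")] = caps[(f"b{hi}", "t")] = 0.5 * w_ab * d(L)
        return caps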
³Recall that the cost of an st-cut is the sum of the capacities of the edges whose starting point lies in the set of vertices containing the source s and whose ending point lies in the set of vertices containing the sink t.
Figure 2: (a) Edges that are used to represent the pairwise potentials of two neighbouring random variables va and vb are shown. Undirected edges indicate that there are opposing edges in both directions with equal capacity (as given by equation (6)). Directed dashed edges, with capacities shown in equation (7), are added to ensure that the graph models the convex pairwise potentials correctly. (b) An additional edge is added when fm(a) ∈ Im and fm(b) ∉ Im. The term κab = wab d(L). (c) A similar additional edge is added when fm(a) ∉ Im and fm(b) ∈ Im. (d) Five edges, with capacities as shown in equation (8), are added when fm(a) ∉ Im and fm(b) ∉ Im. Undirected edges indicate the presence of opposing edges with equal capacity.
Note that in [23] the graph obtained by the edges in equations (6) and (7) was used to find the exact MAP estimate for convex pairwise potentials. A proof that the above edges exactly model convex pairwise potentials up to an additive constant κab = wab d(L) can be found in [17]. However, we are concerned with the NP-hard case where the pairwise potentials are truncated. In order to model this case, we incorporate some additional edges to the above set. These additional edges are best described by considering the following three cases for all (a, b) ∈ E.
• If fm(a) ∈ Im and fm(b) ∈ Im then we do not add any more edges in the graph (see Fig. 2(a)).
• If fm(a) ∈ Im and fm(b) ∉ Im then we add an edge (aim+1, bim+1) with capacity wab M + κab/2, where κab = wab d(L) is a constant for a given pair of neighbouring random variables (a, b) ∈ E (see Fig. 2(b)). Similarly, if fm(a) ∉ Im and fm(b) ∈ Im then we add an edge (bim+1, aim+1) with capacity wab M + κab/2 (see Fig. 2(c)).
• If fm(a) ∉ Im and fm(b) ∉ Im, we introduce a new vertex pab. Using this vertex pab, five edges are defined with the following capacities (see Fig. 2(d)):
    cm(aim+1, pab) = cm(pab, aim+1) = cm(bim+1, pab) = cm(pab, bim+1) = wab M + κab/2,
    cm(s, pab) = θ²ab;fm(a),fm(b) + κab.                                          (8)
This completes our graph construction. Given the graph Gm we solve the st-MINCUT problem which provides us with a labelling f̂ as described in equation (5). The new labelling fm+1 is obtained
using equation (4). Note that our graph construction is similar to that of Gupta and Tardos [8] with
two notable exceptions: (i) we can handle any general truncated convex model and not just truncated
linear as in the case of [8]. This is achieved in part by using the graph construction of [23]; and (ii)
we have the freedom to choose the value of L, while [8] fixed this value to M . A logical choice
would be to use that value of L which minimizes the worst case multiplicative bound for a particular
class of problems. The following properties provide such a value of L for both the truncated linear
and the truncated quadratic models. Our worst case multiplicative bounds are exactly those achieved
by the LP relaxation (see [5]).
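As a concrete illustration (our arithmetic, using only the numbers quoted in this paper): for the truncated linear metric with M = 8, the choice L = √(2M) = 4 yields the bound 2 + √2 ≈ 3.41, compared with 2M = 16 for α-expansion and 4 for the generalization of Gupta and Tardos [8].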
2.2 Properties of the Algorithm
For the above graph construction, the following properties hold true:
• The cost of the st-MINCUT provides an upper bound on the Gibbs energy of the labelling f̂ and hence, on the Gibbs energy of fm+1 (see section 2.2 of [17]).
• For the truncated linear metric, our algorithm obtains a multiplicative bound of 2 + √2 using L = √(2M) (see section 3, Theorem 1, of [17]). Note that this bound is better than those obtained by α-expansion [4] (i.e. 2M) and its generalization [8] (i.e. 4).
• For the truncated quadratic semi-metric, our algorithm obtains a multiplicative bound of O(√M) using L = √M (see section 3, Theorem 2, of [17]). Note that both α-expansion and the approach of Gupta and Tardos provide no bounds for the above case. The primal-dual method of [14] obtains a bound of 2M which is clearly inferior to our guarantees.
3 Experiments
We tested our approach using both synthetic and standard real data. Below, we describe the experimental setup and the results obtained in detail.
3.1 Synthetic Data
Experimental Setup  We used 100 random fields for both the truncated linear and truncated quadratic models. The variables v and neighbourhood relationship E of the random fields described a 4-connected grid graph of size 50 × 50. Note that 4-connected grid graphs are widely used to model several problems in Computer Vision [22]. Each variable was allowed to take one of 20 possible labels, i.e. l = {l0, l1, . . . , l19}. The parameters of the random field were generated randomly. Specifically, the unary potentials θ¹a;i were sampled uniformly from the interval [0, 10] while the weights wab, which determine the pairwise potentials, were sampled uniformly from [0, 5]. The parameter M was also chosen randomly while taking care that d(5) ≤ M ≤ d(10).
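A minimal sketch of this sampling scheme (ours; the exact random number generation used by the authors is not specified, and d is taken as the untruncated distance, e.g. d = |·| for the truncated linear model):

    import random

    def sample_random_field(height=50, width=50, num_labels=20, d=abs):
        # 4-connected grid with random unary potentials and pairwise weights.
        variables = [(r, c) for r in range(height) for c in range(width)]
        unary = {a: [random.uniform(0, 10) for _ in range(num_labels)]
                 for a in variables}
        edges = {}
        for r, c in variables:
            if r + 1 < height:
                edges[((r, c), (r + 1, c))] = random.uniform(0, 5)
            if c + 1 < width:
                edges[((r, c), (r, c + 1))] = random.uniform(0, 5)
        M = random.uniform(d(5), d(10))   # so that d(5) <= M <= d(10)
        return unary, edges, M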
Results  Fig. 3 shows the results obtained by our approach and five other state of the art algorithms: αβ-swap, α-expansion, BP, TRW-S and the range move algorithm of [23]. We used publicly available code for all previously proposed approaches with the exception of the range move algorithm⁴. As can
be seen from the figure, the most accurate approach is the method proposed in this paper, followed
closely by the range move algorithm. Recall that, unlike range move, our algorithm is guaranteed to
provide the same worst case multiplicative bounds as the LP relaxation. As expected, both the range
move algorithm and our method are slower than αβ-swap and α-expansion (since each iteration computes an st-MINCUT on a larger graph). However, they are faster than TRW-S, which attempts to
minimize the LP relaxation, and BP. We note here that our implementation does not use any clever
tricks to speed up the max-flow algorithm (such as those described in [1]) which can potentially
decrease the running time by orders of magnitude.
3.2 Real Data - Stereo Reconstruction
Given two epipolar rectified images D1 and D2 of the same scene, the problem of stereo reconstruction is to obtain a correspondence between the pixels of the images. This problem can be modelled
using a random field whose variables correspond to pixels of one image (say D1 ) and take labels
from a set of disparities l = {0, 1, . . . , h − 1}. A disparity value i for a random variable a denoting
pixel (x, y) in D1 indicates that its corresponding pixel lies in (x + i, y) in the second image.
For the above random field formulation, the unary potentials were defined as in [22] and were truncated at 15. As is typically the case, we chose the neighbourhood relationship E to define a 4-neighbourhood grid graph. The number of disparities h was set to 20. We experimented using the
following truncated convex potentials:
    θ²ab;ij = 50 min{|i − j|, 10},    θ²ab;ij = 50 min{(i − j)², 100}.            (9)
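In code, the two potentials of equation (9) are a direct transcription:

    def truncated_linear(i, j):
        # Equation (9), left: 50 * min(|i - j|, 10)
        return 50 * min(abs(i - j), 10)

    def truncated_quadratic(i, j):
        # Equation (9), right: 50 * min((i - j)^2, 100)
        return 50 * min((i - j) ** 2, 100)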
The above form of pairwise potentials encourages neighbouring pixels to take similar disparity values
which corresponds to our expectations of finding smooth surfaces in natural images. Truncation of
pairwise potentials is essential to avoid oversmoothing, as observed in [4, 23]. Note that using
spatially varying weights wab provides better results. However, the main aim of this experiment is
to demonstrate the accuracy and speed of our approach and not to design the best possible Gibbs
⁴When using α-expansion with the truncated quadratic semi-metric, all edges with negative capacities in the graph construction were removed, similar to the experiments in [22].
Figure 3: Results of the synthetic experiment. (a) Truncated linear metric. (b) Truncated quadratic
semi-metric. The x-axis shows the time taken in seconds. The y-axis shows the average Gibbs energy
obtained over all 100 random fields using the six algorithms. The lower blue curve is the value of
the dual obtained by TRW-S. In both the cases, our method and the range move algorithm provide the most accurate solution and are faster than TRW-S and BP.
energy. Table 2 provides the value of the Gibbs energy and the total time taken by all the approaches
for a standard stereo pair (Teddy). As in the case of the synthetic experiments, the range move
algorithm and our method provide the most accurate solutions while taking less time than TRW-S and BP. Additional experiments on other stereo pairs with similar observations about the performances
of the various algorithms can be found in [17]. However, we would again like to emphasize that
unlike our method the range move algorithm provides no theoretical guarantees about the quality of
the solution.
Algorithm       Energy-1   Time-1(s)   Energy-2   Time-2(s)
αβ-swap         3678200    18.48       3707268    20.25
α-expansion     3677950    11.73       3687874    8.79
TRW-S           3677578    131.65      3679563    332.94
BP              3789486    272.06      5180705    331.36
Range Move      3686844    97.23       3679552    141.78
Our Approach    3613003    120.14      3679552    191.20
Table 2: The energy obtained and the time taken by the algorithms used in the stereo reconstruction
experiment with the Teddy image pair. Columns 2 and 3: truncated linear metric. Columns 4 and 5: truncated quadratic semi-metric.
4 Discussion
We have presented an st-MINCUT based algorithm for obtaining the approximate MAP estimate of
discrete random fields with truncated convex pairwise potentials. Our method improves the multiplicative bound for the truncated linear metric compared to [4, 8] and provides the best known
bound for the truncated quadratic semi-metric. Due to the use of only the st-MINCUT problem in
its design, it is faster than previous approaches based on the LP relaxation. In fact, its speed can
be further improved by a large factor using clever techniques such as those described in [12] (for
convex unary potentials) and/or [1] (for general unary potentials). Furthermore, it overcomes the
well-known deficiencies of TRW and its variants. Experiments on synthetic and real data problems
demonstrate its effectiveness compared to several state of the art algorithms.
The analysis in §2.2 shows that, for the truncated linear and truncated quadratic models, the bound achieved by our move making algorithm over intervals of any length L is equal to that of rounding the LP relaxation's optimal solution using the same intervals [5]. This equivalence also extends to the Potts model (in which case α-expansion provides the same bound as the LP relaxation). A natural
question would be to ask about the relationship between move making algorithms and the rounding
schemes used in convex relaxations. Note that despite recent efforts [14] which analyze certain move
making algorithms in the context of primal-dual approaches for the LP relaxation, not many results
are known about their connection with randomized rounding schemes. Although the discussion in §2.2 cannot be trivially generalized to all random fields, it offers a first step towards answering this
question. We believe that further exploration in this direction would help improve the understanding
of the nature of the MAP estimation problem, e.g. how to derandomize approaches based on convex
relaxations. Furthermore, it would also help design efficient move making algorithms for more
complex relaxations such as those described in [16].
Acknowledgments The first author was supported by the EU CLASS project and EPSRC grant
EP/C006631/1(P). The second author is in receipt of a Royal Society Wolfson Research Merit
Award, and would like to acknowledge support from the Royal Society and Wolfson foundation.
References
[1] K. Alahari, P. Kohli, and P. H. S. Torr. Reduce, reuse & recycle: Efficiently solving multi-label MRFs.
In CVPR, 2008.
[2] J. Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, Series B, 48:259-302, 1986.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 23(11):1222-1239, 2001.
[5] C. Chekuri, S. Khanna, J. Naor, and L. Zosin. A linear programming formulation and approximation algorithms for the metric labelling problem. SIAM Journal on Disc. Math., 18(3):606-635, 2005.
[6] P. Felzenszwalb and D. Huttenlocher. Efficient belief propagation for early vision. In CVPR, 2004.
[7] A. Globerson and T. Jaakkola. Fixing max-product: Convergent message passing for MAP LP-relaxations. In NIPS, 2007.
[8] A. Gupta and E. Tardos. A constant factor approximation algorithm for a class of classification problems. In STOC, 2000.
[9] H. Ishikawa. Exact optimization for Markov random fields with convex priors. PAMI, 25(10):1333-1336, October 2003.
[10] V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. PAMI, 28(10):1568-1583, 2006.
[11] V. Kolmogorov and C. Rother. Comparison of energy minimization algorithms for highly connected graphs. In ECCV, pages II: 1-15, 2006.
[12] V. Kolmogorov and A. Shioura. New algorithms for the dual of the convex cost network flow problem with applications to computer vision. Technical report, University College London, 2007.
[13] N. Komodakis, N. Paragios, and G. Tziritas. MRF optimization via dual decomposition: Message-passing revisited. In ICCV, 2007.
[14] N. Komodakis and G. Tziritas. Approximate labeling via graph-cuts based on linear programming. PAMI, 2007.
[15] A. Koster, C. van Hoesel, and A. Kolen. The partial constraint satisfaction problem: Facets and lifting theorems. Operations Research Letters, 23(3-5):89-97, 1998.
[16] M. P. Kumar, V. Kolmogorov, and P. H. S. Torr. An analysis of convex relaxations for MAP estimation. In NIPS, 2007.
[17] M. P. Kumar and P. H. S. Torr. Improved moves for truncated convex models. Technical report, University of Oxford, 2008.
[18] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labelling sequence data. In ICML, 2001.
[19] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[20] P. Ravikumar, A. Agarwal, and M. Wainwright. Message-passing for graph-structured linear programs: Proximal projections, convergence and rounding schemes. In ICML, 2008.
[21] M. Schlesinger. Sintaksicheskiy analiz dvumernykh zritelnikh singnalov v usloviyakh pomekh (syntactic analysis of two-dimensional visual signals in noisy conditions). Kibernetika, 4:113-130, 1976.
[22] R. Szeliski, R. Zabih, D. Scharstein, O. Veksler, V. Kolmogorov, A. Agarwala, M. Tappen, and C. Rother. A comparative study of energy minimization methods for Markov random fields with smoothness-based priors. PAMI, 2008.
[23] O. Veksler. Graph cut based optimization for MRFs with truncated convex priors. In CVPR, 2007.
[24] M. Wainwright, T. Jaakkola, and A. Willsky. MAP estimation via agreement on trees: Message passing and linear programming. IEEE Trans. on Information Theory, 51(11):3697-3717, 2005.
[25] Y. Weiss, C. Yanover, and T. Meltzer. MAP estimation, linear programming and belief propagation with convex free energies. In UAI, 2007.
| 3618 |@word kohli:1 open:1 termination:1 d2:1 decomposition:1 thereby:1 inpainting:1 initial:1 contains:2 series:2 disparity:4 relabelled:1 denoting:1 current:2 surprising:1 must:1 additive:1 subsequent:1 partition:1 designed:1 mccallum:1 provides:13 math:1 revisited:1 treereweighted:1 five:3 relabelling:1 naor:1 introduce:1 pairwise:24 expected:1 multi:1 terminal:2 jm:11 considering:1 project:1 maximizes:1 wolfson:2 cm:18 minimizes:1 ca1:1 whilst:1 finding:1 guarantee:7 every:3 exactly:4 uk:2 grant:1 segmenting:1 before:1 engineering:1 despite:1 ak:14 oxford:3 pami:5 chose:1 initialization:1 equivalence:1 bim:4 range:12 directed:2 practical:1 acknowledgment:1 globerson:1 practice:1 lf:1 boyd:1 projection:1 word:3 refers:2 get:2 cannot:1 interior:4 clever:2 context:1 optimize:1 map:19 lagrangian:1 modifies:1 go:1 attention:1 starting:1 convex:30 assigns:1 vandenberghe:1 handle:2 analiz:1 tardos:5 construction:8 gm:3 exact:4 programming:6 neighbouring:6 us:1 designing:1 agreement:1 trick:1 tappen:1 cut:17 huttenlocher:2 observed:2 epsrc:1 ep:1 worst:3 connected:4 eu:1 decrease:2 removed:1 mentioned:1 solving:5 efficiency:1 swap:6 sink:2 joint:1 various:1 kolmogorov:5 oversmoothing:1 fast:1 describe:1 london:1 labeling:1 whose:3 larger:2 solve:4 widely:1 say:1 cvpr:3 otherwise:3 pab:7 plausible:1 tested:1 zosin:1 syntactic:1 noisy:1 superscript:1 final:1 advantage:1 sequence:1 propose:1 reconstruction:5 interaction:1 product:2 loop:1 achieve:1 description:1 convergence:1 comparative:1 converges:1 help:3 ac:2 fixing:1 ij:2 tziritas:3 indicate:3 ajm:7 direction:3 closely:1 exploration:2 f1:2 generalization:2 im:36 extension:3 hold:3 sufficiently:1 exp:1 achieves:1 early:1 estimation:13 applicable:1 label:17 currently:2 vice:1 tool:1 weighted:1 minimization:4 clearly:2 aim:12 avoid:1 varying:1 jaakkola:2 l0:2 potts:4 indicates:1 contrast:1 besag:1 posteriori:2 inference:1 mrfs:2 unary:11 lj:2 typically:1 mth:1 pixel:5 agarwala:1 dual:11 classification:1 denoted:1 retaining:1 art:3 initialize:1 field:20 construct:1 equal:4 ishikawa:3 icml:2 np:2 report:2 intelligent:1 randomly:2 replaced:1 pawan:2 opposing:2 ab:15 attempt:2 freedom:1 message:7 highly:2 brooke:2 primal:8 accurate:3 edge:41 encourage:1 partial:1 lh:1 tree:4 iv:1 old:2 theoretical:1 schlesinger:1 instance:1 column:2 facet:1 cost:7 vertex:7 subset:3 veksler:4 rounding:8 proximal:1 synthetic:6 st:25 randomized:3 siam:1 retain:2 probabilistic:2 vm:2 together:1 again:1 containing:3 choose:3 possibly:1 receipt:1 algorithm4:1 resort:1 li:2 potential:34 kolen:1 includes:4 notable:1 depends:2 vi:1 multiplicative:12 analyze:1 start:1 option:1 minimize:2 publicly:1 accuracy:1 efficiently:3 correspond:1 modelled:1 disc:1 rectified:1 wab:14 definition:1 energy:23 semimetric:1 proof:1 stop:1 sampled:2 popular:1 ask:1 logical:1 recall:3 fractional:1 improves:1 segmentation:1 trw:21 improved:5 wei:1 formulation:2 ox:1 alahari:1 furthermore:6 just:2 chekuri:2 propagation:3 lack:1 widespread:1 defines:2 khanna:1 quality:4 believe:2 true:3 hence:2 assigned:2 spatially:1 iteratively:2 reweighted:2 komodakis:3 inferior:2 generalized:2 crf:2 demonstrate:4 l1:1 reasoning:1 image:10 recently:2 boykov:1 extend:1 belong:2 interpret:1 versa:1 gibbs:16 cambridge:1 smoothness:1 trivially:2 grid:3 similarly:1 submodular:1 moving:2 robot:1 f0:1 surface:1 v0:1 add:4 posterior:1 showed:1 recent:1 wellknown:1 certain:2 seen:2 morgan:1 additional:6 care:1 converge:2 determine:1 lprelaxations:1 dashed:1 semi:7 ii:9 signal:1 smooth:1 technical:2 faster:5 
characterized:1 offer:1 long:1 equally:1 award:1 ravikumar:1 va:15 mrf:9 variant:2 vision:4 metric:16 expectation:2 iteration:10 normalization:1 represent:4 agarwal:1 achieved:3 addition:1 interval:6 completes:1 source:2 unlike:8 undirected:2 flow:2 lafferty:1 effectiveness:2 integer:1 noting:1 presence:2 iii:2 concerned:1 meltzer:1 restrict:1 fm:41 reduce:3 regarding:1 whether:1 six:1 reuse:1 effort:1 stereo:7 suffer:2 passing:7 speaking:1 clear:1 zabih:2 correctly:1 blue:1 discrete:6 changing:1 w2ab:3 graph:30 relaxation:22 sum:4 koster:1 letter:1 powerful:1 extends:1 throughout:1 vn:1 vb:3 bound:25 guaranteed:3 followed:2 convergent:2 correspondence:1 quadratic:11 constraint:1 deficiency:1 bp:10 scene:1 speed:4 argument:1 min:5 formulating:1 kumar:3 structured:1 recycle:1 remain:1 describes:1 em:4 lp:17 labellings:1 making:12 iccv:1 pr:2 taken:3 equation:7 previously:1 turn:1 loose:1 bjm:1 merit:1 stitching:1 available:1 operation:1 neighbourhood:6 slower:4 denotes:1 running:1 include:2 ensure:1 mincut:14 dirty:1 society:3 move:28 question:3 added:4 said:1 capacity:23 reason:1 willsky:1 assuming:1 rother:2 code:1 length:1 relationship:6 providing:1 minimizing:2 setup:2 october:1 potentially:2 stoc:1 negative:3 design:5 implementation:1 upper:2 observation:2 markov:3 finite:5 acknowledge:1 teddy:2 truncated:41 extended:1 bk:4 pair:4 connection:1 pearl:1 nip:2 trans:1 able:4 below:3 kauffman:1 program:1 max:3 royal:3 video:1 belief:3 epipolar:1 wainwright:2 satisfaction:1 natural:2 rely:1 philiptorr:1 yanover:1 representing:2 scheme:7 improve:1 picture:1 axis:2 hoesel:1 review:1 literature:1 understanding:1 prior:3 interesting:1 foundation:1 eccv:1 supported:1 truncation:4 free:1 allow:1 szeliski:1 kibernetika:1 felzenszwalb:2 taking:3 van:1 curve:1 ending:1 computes:1 author:2 commonly:1 unprincipled:1 approximate:6 obtains:3 emphasize:1 scharstein:1 overcomes:1 uai:1 assumed:1 table:4 nature:1 obtaining:4 expansion:10 complex:2 constructing:1 main:2 allowed:1 fig:6 paragios:1 pereira:1 lie:3 answering:1 theorem:3 favourable:1 experimented:1 gupta:5 exists:1 essential:1 sequential:1 lifting:1 magnitude:1 labelling:35 easier:1 visual:1 applies:1 corresponds:3 satisfies:1 relies:1 conditional:3 towards:1 labelled:3 absence:1 hard:2 included:1 specifically:3 torr:4 typical:1 infinite:3 uniformly:2 denoising:2 called:2 total:1 experimental:3 exception:2 formally:1 college:1 cost3:1 support:1 incorporate:1 dept:2 d1:3 |
2,889 | 3,619 | A computational model of hippocampal function in
trace conditioning
Elliot A. Ludvig, Richard S. Sutton, Eric Verbeek
Department of Computing Science
University of Alberta
Edmonton, AB, Canada T6G 2E8
{elliot,sutton,everbeek}@cs.ualberta.ca
E. James Kehoe
School of Psychology
University of New South Wales
Sydney, NSW, Australia 2052
[email protected]
Abstract
We introduce a new reinforcement-learning model for the role of the hippocampus in classical conditioning, focusing on the differences between trace and delay conditioning. In the model, all stimuli are represented both as unindividuated wholes and as a series of temporal elements with varying delays. These two
stimulus representations interact, producing different patterns of learning in trace
and delay conditioning. The model proposes that hippocampal lesions eliminate
long-latency temporal elements, but preserve short-latency temporal elements. For
trace conditioning, with no contiguity between cue and reward, these long-latency
temporal elements are necessary for learning adaptively timed responses. For delay conditioning, the continued presence of the cue supports conditioned responding, and the short-latency elements suppress responding early in the cue. In accord
with the empirical data, simulated hippocampal damage impairs trace conditioning, but not delay conditioning, at medium-length intervals. With longer intervals,
learning is impaired in both procedures, and, with shorter intervals, in neither. In
addition, the model makes novel predictions about the response topography with
extended cues or post-training lesions. These results demonstrate how temporal
contiguity, as in delay conditioning, changes the timing problem faced by animals,
rendering it both easier and less susceptible to disruption by hippocampal lesions.
The hippocampus is an important structure in many types of learning and memory, with prominent
involvement in spatial navigation, episodic and working memories, stimulus configuration, and contextual conditioning. One empirical phenomenon that has eluded many theories of the hippocampus
is the dependence of aversive trace conditioning on an intact hippocampus (but see Rodriguez &
Levy, 2001; Schmajuk & DiCarlo, 1992; Yamazaki & Tanaka, 2005). For example, trace eyeblink
conditioning disappears following hippocampal lesions (Solomon et al., 1986; Moyer, Jr. et al.,
1990), induces hippocampal neurogenesis (Gould et al., 1999), and produces unique activity patterns in hippocampal neurons (McEchron & Disterhoft, 1997). In this paper, we present a new abstract computational model of hippocampal function during trace conditioning. We build on a recent
extension of the temporal-difference (TD) model of conditioning (Ludvig, Sutton & Kehoe, 2008;
Sutton & Barto, 1990) to demonstrate how the details of stimulus representation can qualitatively
alter learning during trace and delay conditioning. By gently tweaking this stimulus representation
and reducing long-latency temporal elements, trace conditioning is severely impaired, whereas delay conditioning is hardly affected. In the model, the hippocampus is responsible for maintaining
these long-latency elements, thus explaining the selective importance of this brain structure in trace
conditioning.
The difference between trace and delay conditioning is one of the most basic operational distinctions
in classical conditioning (e.g., Pavlov, 1927). Figure 1 is a schematic of the two training procedures.
In trace conditioning, a conditioned stimulus (CS) is followed some time later by a reward or uncon-
Figure 1: Event timelines in trace and delay conditioning. Time flows from left-to-right in the diagram. A vertical bar represents a punctate (short) event, and the extended box is a continuously
available stimulus. In delay conditioning, the stimulus and reward overlap, whereas, in trace conditioning, there is a stimulus-free gap between the two punctate events.
ditioned stimulus (US); the two stimuli are separated by a stimulus-free gap. In contrast, in delay
conditioning, the CS remains on until presentation of the US. Trace conditioning is learned more
slowly than delay conditioning, with poorer performance often observed even at asymptote.
In both eyeblink conditioning (Moyer, Jr. et al., 1990; Solomon et al., 1986; Tseng et al., 2004)
and fear conditioning (e.g., McEchron et al., 1998), hippocampal damage severely impairs the acquisition of conditioned responding during trace conditioning, but not delay conditioning. These
selective hippocampal deficits with trace conditioning are modulated by the inter-stimulus interval
(ISI) between CS onset and US onset. With very short ISIs (?300 ms in eyeblink conditioning in
rabbits), there is little deficit in the acquisition of responding during trace conditioning (Moyer, Jr.
et al., 1990). Furthermore, with very long ISIs (>1000 ms), delay conditioning is also impaired
by hippocampal lesions (Beylin et al., 2001). These interactions between ISI and the hippocampal-dependency of conditioning are the primary data that motivate the new model.
1 TD Model of Conditioning
Our full model of conditioning consists of three separate modules: the stimulus representation,
learning algorithm, and response rule. The explanation of hippocampal function relies mostly on
the details of the stimulus representation. To illustrate the implications of these representational
issues, we have chosen the temporal-difference (TD) learning algorithm from reinforcement learning
(Sutton & Barto, 1990, 1998) that has become the sine qua non for modeling reward learning in
dopamine neurons (e.g., Ludvig et al., 2008; Schultz, Dayan, & Montague, 1997), and a simple,
leaky-integrator response rule described below. We use these for simplicity and consistency with
prior work; other learning algorithms and response rules might also yield similar conclusions.
1.1 Stimulus Representation
In the model, stimuli are not coherent wholes, but are represented as a series of elements or internal
microstimuli. There are two types of elements in the stimulus representation: the first is the presence
microstimulus, which is exactly equivalent to the external stimulus (Sutton & Barto, 1990). This microstimulus is available whenever the corresponding stimulus is on (see Fig. 3). The second type
of elements are the temporal microstimuli or spectral traces, which are a series of successively later
and gradually broadening elements (see Grossberg & Schmajuk, 1989; Machado, 1997; Ludvig et
al., 2008). Below, we show how the interaction between these two types of representational elements produces different styles of learning in delay and trace conditioning, resulting in differential
sensitivity of these procedures to hippocampal manipulation.
The temporal microstimuli are created in the model through coarse coding of a decaying memory
trace triggered by stimulus onset. Figure 2 illustrates how this memory trace (left panel) is encoded
by a series of basis functions evenly spaced across the height of the trace (middle panel). Each basis
function effectively acts as a receptive field for trace height: As the memory trace fades, different
basis functions become more or less active, each with a particular temporal profile (right panel).
These activity profiles for the temporal microstimuli are then used to generate predictions of the US.
For the basis functions, we chose simple Gaussians:
    f(y, μ, σ) = (1/√(2π)) exp(−(y − μ)²/(2σ²)).                                  (1)
Figure 2: Creating Microstimuli. The memory traces for a stimulus (left) are coarsely coded by
a series of temporal basis functions (middle). The resultant time courses (right) of the temporal
microstimuli are used to predict future occurrence of the US. A single basis function (middle) and
approximately corresponding microstimulus (right) have been darkened. The inset in the right panel
shows the levels of several microstimuli at the time indicated by the dashed line.
Given these basis functions, the microstimulus levels xt (i) at time t are determined by the corresponding memory trace height:
    xt(i) = f(yt, i/m, σ) yt,                                                     (2)
where f is the basis function defined above and m is the number of temporal microstimuli per
stimulus. The trace level yt was set to 1 at stimulus onset and decreased exponentially, controlled
by a single decay parameter, which was allowed to vary to simulate the effects of hippocampal
lesions. Every stimulus, including the US, was represented by a single memory trace and resultant
microstimuli.
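Equations (1) and (2) can be written out directly (our sketch; the function name is ours, the width σ = 0.08 and m = 50 follow our reading of the parameter list at the end of section 1.3, and decay = .985 is the normal memory-trace setting described below, with .97 simulating hippocampal damage):

    import math

    def microstimuli(t, decay=0.985, m=50, sigma=0.08):
        # Temporal microstimulus levels t time steps after stimulus onset.
        y = decay ** t                              # memory trace: y = 1 at onset
        def basis(mu):                              # Gaussian basis, equation (1)
            return math.exp(-(y - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi)
        return [basis((i + 1) / m) * y for i in range(m)]   # equation (2)

Calling microstimuli(t, decay=0.97) reproduces the compressed, briefer microstimuli of the right panel of Fig. 3.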
1.2 Hippocampal Damage
We propose that hippocampal damage results in the selective loss of the long-latency temporal elements of the stimulus representation. This idea is implemented in the model through a decrease in
the memory decay constant from .985 to .97, approximately doubling the decay rate of the memory
trace that determines the microstimuli. In effect, we assume that hippocampal damage results in a
memory trace that decays more quickly, or, equivalently, is more susceptible to interference. Figure
3 shows the effects of this parameter manipulation on the time course of the elements in the stimulus
representation. The presence microstimulus is not affected by this manipulation, but the temporal
microstimuli are compressed for both the CS and the US. Each microstimulus has a briefer time
course, and, as a group, they cover a shorter time span. Other means for eliminating or reducing
the long-latency temporal microstimuli are certainly possible and would likely be compatible with
our theory. For example, if one assumes that the stimulus representation contains multiple memory
traces with different time constants, each with a separate set of microstimuli, then eliminating the
slower memory traces would also remove the long-latency elements, and many of the results below
hold (simulations not shown). The key point is that hippocampal damage reduces the number and
magnitude of long-latency microstimuli.
1.3 Learning and Responding
The model approaches conditioning as a reinforcement-learning prediction problem, wherein the
agent tries to predict the upcoming rewards or USs. The model learns through linear TD(λ) (Ludvig
et al., 2008; Schultz et al., 1997; Sutton, 1988; Sutton & Barto, 1990, 1998). At each time step, the
US prediction (Vt ) is determined by:
    Vt(x) = ⌊wtᵀx⌋₀ = ⌊∑i=1..n wt(i) x(i)⌋₀,                                      (3)
Figure 3: Hippocampal effects on the stimulus representation. The left panel presents the stimulus
representation in delay conditioning with the normal parameter settings, and the right panel presents
the altered stimulus representation following simulated hippocampal damage. In the hippocampal
representation, the temporal microstimuli for both CS (red, solid lines) and US (green, dashed lines)
are all briefer and shallower. The presence microstimuli (blue square wave and black spike) are not
affected by the hippocampal manipulation.
where x is a vector of the activation levels x(i) for the various microstimuli, wt is a corresponding
vector of adjustable weights wt (i) at time step t, and n is the total number of all microstimuli. The
US prediction is constrained to be non-negative, with negative values rectified to 0. As is standard
in TD models, this US prediction is compared to the reward received and the previous US prediction
to generate a TD error (δt):
    δt = rt + γVt(xt) − Vt(xt−1),                                                 (4)
where γ is a discount factor that determines the temporal horizon of the US prediction. This TD
error is then used to update the weight vector based on the following update rule:
    wt+1 = wt + αδt et,                                                           (5)
where α is a step-size parameter and et is a vector of eligibility trace levels (see Sutton & Barto,
1998), which together help determine the speed of learning. Each microstimulus has its own corresponding eligibility trace which continuously decays, but accumulates whenever that microstimulus
is present:
    et+1 = γλet + xt,                                                             (6)
where γ is the discount factor as above and λ is a decay parameter that determines the plasticity
window. These US predictions are translated into responses through a simple, thresholded leaky-integrator response rule:
    at+1 = νat + ⌊Vt+1(xt)⌋θ,                                                     (7)
where ν is a decay constant, and θ is a threshold on the value function V.
Our model is defined by Equations 1-7 and 7 additional parameters, which were fixed at the following values for the simulations below: γ = .95, α = .005, λ = .97, n = 50, σ = .08, ν = .93, θ = .25. In the simulated experiments, one time step was interpreted as 10 ms.
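For concreteness, one step of the learning and response rules (equations (3)-(7)) in code. This is our sketch: the function names are ours, the Greek-letter-to-value assignments repeat the reading given above, and the thresholding ⌊·⌋θ in equation (7) is interpreted as gating V at θ, which is an assumption.

    def td_step(w, e, x_prev, x, r, gamma=0.95, alpha=0.005, lam=0.97):
        # One linear TD(lambda) update, following equations (3)-(6) literally.
        V = lambda xs: max(0.0, sum(wi * xi for wi, xi in zip(w, xs)))   # eq. (3)
        delta = r + gamma * V(x) - V(x_prev)                             # eq. (4)
        w = [wi + alpha * delta * ei for wi, ei in zip(w, e)]            # eq. (5)
        e = [gamma * lam * ei + xi for ei, xi in zip(e, x)]              # eq. (6)
        return w, e, delta

    def respond(a, v_next, nu=0.93, theta=0.25):
        # Leaky-integrator response rule, equation (7); threshold gating assumed.
        return nu * a + (v_next if v_next >= theta else 0.0)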
Figure 4: Learning in the model for trace and delay conditioning with and without hippocampal
(HPC) damage. The three panels present training with different interstimulus intervals (ISI).
2 Results
We simulated 12 total conditions with the model: trace and delay conditioning, both with and without hippocampal damage, for short (250 ms), medium (500 ms), and long (1000 ms) ISIs. Each
simulated experiment was run for 500 trials, with every 5th trial an unreinforced probe trial, during
which no US was presented. For delay conditioning, the CS lasted the same duration as the ISI and
terminated with US presentation. For trace conditioning, the CS was present for 5 time steps (50
ms). The US always lasted for a single time step, and an inter-trial interval of 5000 ms separated
all trials (onset to onset). Conditioned responding (CR magnitude) was measured as the maximum
height of the response curve on a given trial.
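The trial scheduling just described can be made explicit (our sketch in 10-ms time steps; the function name is ours and the inter-trial interval is simplified to a fixed post-US tail rather than an exact onset-to-onset spacing):

    def trial_timeline(trial_index, isi=50, procedure="trace", tail=500):
        # CS lasts the full ISI in delay conditioning, 5 steps (50 ms) in trace
        # conditioning; every 5th trial is an unreinforced probe.
        cs_len = isi if procedure == "delay" else 5
        steps = isi + 1 + tail
        cs = [1 if t < cs_len else 0 for t in range(steps)]
        us = [0] * steps
        if trial_index % 5 != 0:
            us[isi] = 1          # US onset at CS onset + ISI; omitted on probes
        return cs, us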
Figure 4 summarizes our results. The figure depicts how the CR magnitude changed across the 500
trials of acquisition training. In general, trace conditioning produced lower levels of responding
than delay conditioning, but this effect was most pronounced with the longest ISI. The effects of
simulated hippocampal damage varied with the ISI. With the shortest ISI (250 ms; left panel), there
was little effect on responding in either trace or delay conditioning. There was a small deficit early in
training with trace conditioning, but this difference disappeared quickly with further training. With
the longest ISI (1000 ms; right panel), there was a profound effect on responding in both trace and
delay conditioning, with trace conditioning completely eliminated. The intermediate ISI (500 ms;
middle panel) produced the most complex and interesting results. With this interval, there was only
a minor deficit in delay conditioning, but a substantial drop in trace conditioning, especially early in
training. This pattern of results roughly matches the empirical data, capturing the selective deficit in
trace conditioning caused by hippocampal lesions (Solomon et al., 1986) as well as the modulation
of this deficit by ISI (Beylin et al., 2001; Moyer, Jr. et al., 1990).
Figure 5: Time course of US prediction and CR magnitude for both trace (red, dashed line) and
delay conditioning (blue, solid line) with a 500-ms ISI.
These differences in sensitivity to simulated hippocampal damage arose despite similar model performance during normal trace and delay conditioning. Figure 5 shows the time course of the US
prediction (left panel) and CR magnitude (right panel) after trace and delay conditioning on a probe
trial with a 500-ms ISI. In both instances, the US prediction grew throughout the trial as the usual
time of the US became imminent. Note the sharp drop off in US prediction for delay conditioning
exactly as the CS terminates. This change reflects the disappearance of the presence microstimulus,
which supports much of the responding in delay conditioning (see Fig. 6). In both procedures, even
after the usual time of the US (and CS termination in the case of delay conditioning), there was still
some residual US prediction. These US predictions were caused by the long-latency microstimuli,
which did not disappear exactly at CS offset, and were ordinarily (on non-probe trials) countered by
negative weights on the US microstimuli. The CR magnitude tracked the US prediction curve quite
closely, peaking around the time the US would have occurred for both trace and delay conditioning.
There was little difference in either curve between trace and delay conditioning, yet altering the
stimulus representation (see Fig. 3) had a more pronounced effect on trace conditioning.
An examination of the weight distribution for trace and delay conditioning explains why hippocampal damage had a more pronounced effect on trace than delay conditioning. Figure 6 depicts some
representative microstimuli (left column) as well as their corresponding weights (right columns) following trace or delay conditioning with or without simulated hippocampal damage. For clarity in
the figure, we have grouped the weights into four categories: positive (+), large positive (+++), negative (-), and large negative (--). The left column also depicts how the model poses the computational
problem faced by an animal during conditioning; the goal is to sum together weighted versions of the
available microstimuli to produce the ideal US prediction curve in the bottom row. In normal delay
conditioning, the model placed a high positive weight on the presence microstimulus, but balanced
that with large negative weights on the early CS microstimuli, producing a prediction topography
that roughly matched the ideal prediction (see Fig. 5, left panel). In normal trace conditioning, the
model only placed a small positive weight on the presence microstimulus, but supplemented that
with large positive weights on both the early and late CS microstimuli, also producing a prediction
topography that roughly matched the ideal prediction.
Weights                    Normal              HPC Lesion
                           Delay    Trace      Delay    Trace
CS Presence Stimulus       +++      +          +++      +
CS Early Microstimuli      --       +          --       +
CS Late Microstimuli       +        +++        N/A      N/A
US Early Microstimuli      -        --         -        -
Ideal Summed Prediction    (prediction curves shown in the figure)
Figure 6: Schematic of the weights (right columns) on various microstimuli following trace and
delay conditioning. The left column illustrates four representative microstimuli: the presence microstimulus, an early CS microstimulus, a late CS microstimulus, and a US microstimulus. The
ideal prediction is the expectation of the sum of future discounted rewards.
Following hippocampal lesions, the late CS microstimuli were no longer available (N/A), and the
system could only use the other microstimuli to generate the best possible prediction profile. In
delay conditioning, the loss of these long-latency microstimuli had a small effect, notable only with
the longest ISI (1000 ms) with these parameter settings. With trace conditioning, the loss of the
long-latency microstimuli was catastrophic, as these microstimuli were usually the major basis for
the prediction of the upcoming US. As a result, trace conditioning became much more difficult (or
impossible in the case of the 1000-ms ISI), even though delay conditioning was less affected.
The most notable (and defining) difference between trace and delay conditioning is that the CS and
US overlap in delay conditioning, but not trace conditioning. In our model, this overlap is necessary,
but not sufficient, for the the unique interaction between the presence microstimulus and temporal
microstimuli in delay conditioning. For example, if the CS were extended to stay on beyond the
time of US occurrence, this contiguity would be maintained, but negative weights on the early CS
microstimuli would not suffice to suppress responding throughout this extended CS. In this case, the
best solution to predicting the US for the model might be to put high weights on the long-latency
temporal microstimuli (as in trace conditioning; see Fig. 6), which would not persist as long as the
now extended presence microstimulus. Indeed, with a CS that was three times as long as the ISI, we
found that the US prediction, CR magnitude, and underlying weights were completely indistinguishable from trace conditioning (simulations not shown). Thus, the model predicts that this extended
delay conditioning should be equally sensitive to hippocampal damage as trace conditioning for
the same ISIs. This empirical prediction is a fundamental test of the representational assumptions
underlying the model.
The particular mechanism that we chose for simulating the loss of the long-latency microstimuli
(increasing the decay rate of the memory trace) also leads to a testable model prediction. If one
were to pre-train an animal with trace conditioning and then perform hippocampal lesions, there
should be some loss of responding, but, more importantly, those CRs that do occur should appear
earlier in the interval because the temporal microstimuli now follow a shorter time course (see Fig.
3). There is some evidence for additional short-latency CRs during trace conditioning in lesioned
animals (e.g., Port et al., 1986; Solomon et al., 1986), but, to our knowledge, this precise model
prediction has not been rigorously evaluated.
3 Discussion and Conclusion
We evaluated a novel computational model for the role of the hippocampus in trace conditioning,
based on a reinforcement-learning framework. We extended the microstimulus TD model presented
by Ludvig et al. (2008) by suggesting a role for the hippocampus in maintaining long-latency elements of the temporal stimulus representation. The current model also introduced an additional
element to the stimulus representation (the presence microstimulus) and a simple response rule for
translating prediction into actions; we showed how these subtle innovations yield interesting interactions when comparing trace and delay conditioning. In addition, we adduced a pair of testable
model predictions about the effects of extended stimuli and post-training lesions.
There are several existing theories for the role of the hippocampus in trace conditioning, including
the modulation of timing (Solomon et al., 1986), establishment of contiguity (e.g., Wallenstein et
al., 1998), and overcoming of task difficulty (Beylin et al., 2001). Our new model provides a computational mechanism that links these three proposed explanations. In our model, for similar ISIs,
delay conditioning requires learning to suppress responding early in the CS, whereas trace conditioning requires learning to create responding later in the trial, near the time of the US (see Fig. 6).
As a result, for the same ISI, delay conditioning requires changing weights associated with earlier
microstimuli than trace conditioning, though in opposite directions. These early microstimuli reach
higher activation levels (see Fig. 2), producing higher eligibility traces, and are therefore learned
about more quickly. This differential speed of learning for short-latency temporal microstimuli corresponds with much behavioural data that shorter ISIs tend to improve both the speed and asymptote
of learning in eyeblink conditioning (e.g., Schneiderman & Gormezano, 1964). Thus, the contiguity between the CS and US in delay conditioning alters the timing problem that the animal faces,
effectively making the time interval to be learned shorter, and rendering the task easier for most ISIs.
In future work, it will be important to characterize the exact mathematical properties that constrain
the temporal microstimuli. Our simple Gaussian basis function approach suffices for the datasets
examined here (cf. Ludvig et al., 2008), but other related mathematical functions are certainly
possible. For example, replacing the temporal microstimuli in our model with the spectral traces
of Grossberg & Schmajuk (1989) produces results that are similar to ours, but using sequences of
Gamma-shaped functions tends to fail, with longer intervals learned too slowly relative to shorter
intervals. One important characteristic of the microstimulus series seems to be that the heights of
individual elements should not decay too quickly. Another key challenge for future modeling is
reconciling this abstract account of hippocampal function in trace conditioning with approaches that
consider greater physiological detail (e.g., Rodriguez & Levy, 2001; Yamazaki & Tanaka, 2005).
The current model also contributes to our understanding of the TD models of dopamine (e.g., Schultz
et al., 1997) and classical conditioning (Sutton & Barto, 1990). These models have often given short
shrift to issues of stimulus representation, focusing more closely on the properties of the learning
algorithm (but see Ludvig et al., 2008). Here, we reveal how the interaction of various stimulus
representations in conjunction with the TD learning rule produces a viable model of some of the
differences between trace and delay conditioning.
References
Beylin, A. V., Gandhi, C. C, Wood, G. E., Talk, A. C., Matzel, L. D., & Shors, T. J. (2001). The role of the
hippocampus in trace conditioning: Temporal discontinuity or task difficulty? Neurobiology of Learning &
Memory, 76, 447-61.
Gould, E., Beylin, A., Tanapat, P., Reeves, A., & Shors, T. J. (1999). Learning enhances adult neurogenesis in
the hippocampal formation. Nature Neuroscience, 2, 260-5.
Grossberg, S., & Schmajuk, N. A. (1989). Neural dynamics of adaptive timing and temporal discrimination
during associative learning. Neural Networks, 2, 79-102.
Ludvig, E. A., Sutton, R. S., & Kehoe, E. J. (2008). Stimulus representation and the timing of reward-prediction
errors in models of the dopamine system. Neural Computation, 20, 3034-54.
Machado, A. (1997). Learning the temporal dynamics of behavior. Psychological Review, 104, 241-265.
McEchron, M. D., Bouwmeester, H., Tseng, W., Weiss, C., & Disterhoft, J. F. (1998). Hippocampectomy
disrupts auditory trace fear conditioning and contextual fear conditioning in the rat. Hippocampus, 8, 638-46.
McEchron, M. D., Disterhoft, J. F. (1997). Sequence of single neuron changes in CA1 hippocampus of rabbits
during acquisition of trace eyeblink conditioned responses. Journal of Neurophysiology, 78, 1030-44.
Moyer, J. R., Jr., Deyo, R. A., & Disterhoft, J. F. (1990). Hippocampectomy disrupts trace eye-blink conditioning in rabbits. Behavioral Neuroscience, 104, 243-52.
Pavlov, I. P. (1927). Conditioned Reflexes. London: Oxford University Press.
Port, R. L., Romano, A. G., Steinmetz, J. E., Mikhail, A. A., & Patterson, M. M. (1986). Retention and acquisition of classical trace conditioned responses by rabbits with hippocampal lesions. Behavioral Neuroscience,
100, 745-752.
Rodriguez, P., & Levy, W. B. (2001). A model of hippocampal activity in trace conditioning: Where?s the
trace? Behavioral Neuroscience, 115, 1224-1238.
Schmajuk, N. A., & DiCarlo, J. J. (1992). Stimulus configuration, classical conditioning, and hippocampal
function. Psychological Review, 99, 268-305.
Schneiderman, N., & Gormezano, I. (1964). Conditioning of the nictitating membrane of the rabbit as a function of CS-US interval. Journal of Comparative and Physiological Psychology, 57, 188-195.
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275,
1593-9.
Solomon, P. R., Vander Schaaf, E. R., Thompson, R. F., & Weisz, D. J. (1986). Hippocampus and trace conditioning of the rabbit?s classically conditioned nictitating membrane response. Behavioral Neuroscience,
100, 729-744.
Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3, 9-44.
Sutton, R. S., & Barto, A. G. (1990). Time-derivative models of Pavlovian reinforcement. In M. Gabriel
& J. Moore (Eds.), Learning and Computational Neuroscience: Foundations of Adaptive Networks (pp.
497-537). Cambridge, MA: MIT Press.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
Tseng, W., Guan, R., Disterhoft, J. F., & Weiss, C. (2004). Trace eyeblink conditioning is hippocampally
dependent in mice. Hippocampus, 14, 58-65.
Wallenstein, G., Eichenbaum, H., & Hasselmo, M. (1998). The hippocampus as an associator of discontiguous
events. Trends in Neuroscience, 21, 317-323.
Yamazaki, T., & Tanaka, S. (2005). A neural network model for trace conditioning. International Journal of
Neural Systems, 15, 23-30.
| 3619 |@word neurophysiology:1 trial:12 middle:4 eliminating:2 version:1 hippocampus:14 seems:1 termination:1 simulation:3 bvt:1 nsw:1 solid:2 configuration:2 series:6 contains:1 ours:1 existing:1 current:2 contextual:2 comparing:1 activation:2 yet:1 plasticity:1 asymptote:2 remove:1 drop:2 update:2 discrimination:1 cue:4 short:8 coarse:1 provides:1 height:6 mathematical:2 become:2 differential:2 profound:1 viable:1 consists:1 wale:1 behavioral:4 introduce:1 discontiguous:1 inter:2 indeed:1 behavior:1 isi:23 disrupts:2 roughly:3 brain:1 integrator:1 discounted:1 alberta:1 td:10 little:3 window:1 increasing:1 matched:2 suffice:1 panel:13 medium:2 underlying:2 interpreted:1 contiguity:5 ca1:1 temporal:32 every:2 act:1 exactly:3 appear:1 producing:4 positive:5 retention:1 timing:5 tends:1 nictitating:2 severely:2 sutton:14 punctate:2 accumulates:1 despite:1 oxford:1 modulation:2 approximately:2 might:2 chose:2 black:1 au:1 examined:1 pavlov:2 grossberg:3 unique:2 responsible:1 procedure:4 episodic:1 empirical:4 imminent:1 pre:1 tweaking:1 put:1 impossible:1 equivalent:1 yt:3 duration:1 rabbit:6 thompson:1 simplicity:1 fade:1 rule:7 continued:1 importantly:1 ualberta:1 exact:1 gandhi:1 substrate:1 us:1 element:18 trend:1 persist:1 predicts:1 observed:1 role:5 module:1 bottom:1 decrease:1 e8:1 substantial:1 balanced:1 reward:10 lesioned:1 rigorously:1 dynamic:2 motivate:1 patterson:1 eric:1 basis:11 completely:2 translated:1 montague:2 represented:3 various:3 talk:1 train:1 separated:2 london:1 xc0:1 formation:1 quite:1 encoded:1 compressed:1 associative:1 triggered:1 sequence:2 propose:1 interaction:5 representational:3 interstimulus:1 pronounced:3 impaired:3 produce:5 disappeared:1 comparative:1 help:1 illustrate:1 pose:1 measured:1 ludvig:9 minor:1 school:1 received:1 sydney:1 implemented:1 c:27 direction:1 closely:2 australia:1 translating:1 explains:1 suffices:1 extension:1 hold:1 around:1 normal:8 exp:1 predict:3 major:1 vary:1 early:11 neurogenesis:2 sensitive:1 grouped:1 hpc:4 hasselmo:1 create:1 reflects:1 weighted:1 mit:2 always:1 gaussian:1 establishment:1 arose:1 cr:10 varying:1 barto:8 conjunction:1 hippocampectomy:2 longest:3 lasted:2 contrast:1 dayan:2 dependent:1 eliminate:1 selective:4 issue:2 proposes:1 animal:5 spatial:1 constrained:1 summed:1 schaaf:1 field:1 shaped:1 eliminated:1 represents:1 alter:1 future:4 hippocampally:1 stimulus:41 richard:1 steinmetz:1 preserve:1 gamma:1 individual:1 disterhoft:5 ab:1 certainly:2 navigation:1 implication:1 poorer:1 necessary:2 shorter:6 timed:1 psychological:2 instance:1 column:5 modeling:2 earlier:2 cover:1 altering:1 delay:54 too:2 characterize:1 adaptively:1 fundamental:1 sensitivity:2 international:1 stay:1 off:1 together:2 continuously:2 quickly:4 mouse:1 solomon:6 successively:1 slowly:2 classically:1 external:1 creating:1 derivative:1 style:1 suggesting:1 account:1 coding:1 notable:2 caused:2 onset:6 later:3 sine:1 try:1 red:2 wave:1 decaying:1 square:1 became:2 matzel:1 characteristic:1 yield:2 spaced:1 blink:1 produced:2 rectified:1 reach:1 whenever:2 ed:1 acquisition:5 pp:1 james:1 resultant:2 associated:1 auditory:1 knowledge:1 subtle:1 focusing:2 higher:2 follow:1 response:12 wherein:1 wei:2 unreinforced:1 box:1 though:2 evaluated:2 furthermore:1 until:1 working:1 replacing:1 rodriguez:3 indicated:1 reveal:1 effect:12 moore:1 adduced:1 elliot:2 indistinguishable:1 during:10 eligibility:3 maintained:1 rat:1 m:18 hippocampal:38 prominent:1 demonstrate:2 disruption:1 novel:2 machado:2 tracked:1 conditioning:106 
exponentially:1 gently:1 occurred:1 cambridge:2 reef:1 consistency:1 had:3 longer:3 own:1 recent:1 showed:1 involvement:1 manipulation:4 vt:4 additional:3 greater:1 determine:1 shortest:1 dashed:3 full:1 multiple:1 reduces:1 match:1 long:18 post:2 equally:1 coded:1 controlled:1 schematic:2 verbeek:1 prediction:34 basic:1 expectation:1 dopamine:3 accord:1 addition:2 whereas:3 interval:12 decreased:1 diagram:1 wallenstein:2 south:1 tend:1 flow:1 near:1 presence:12 ideal:5 intermediate:1 rendering:2 psychology:2 opposite:1 idea:1 impairs:2 hardly:1 action:1 romano:1 gabriel:1 latency:18 discount:2 induces:1 category:1 generate:3 alters:1 neuroscience:7 per:1 blue:2 affected:4 coarsely:1 group:1 key:2 four:2 threshold:1 clarity:1 changing:1 neither:1 thresholded:1 sum:2 wood:1 run:1 schneiderman:2 throughout:2 summarizes:1 capturing:1 followed:1 activity:3 occur:1 constrain:1 simulate:1 speed:3 span:1 pavlovian:1 eichenbaum:1 gould:2 department:1 jr:5 across:2 terminates:1 membrane:2 making:1 peaking:1 gradually:1 interference:1 behavioural:1 equation:1 remains:1 mechanism:2 fail:1 available:4 gaussians:1 probe:3 spectral:2 simulating:1 occurrence:2 slower:1 responding:14 assumes:1 cf:1 reconciling:1 maintaining:2 testable:2 build:1 especially:1 disappear:1 classical:5 upcoming:2 spike:1 damage:14 primary:1 dependence:1 receptive:1 rt:1 usual:2 disappearance:1 countered:1 enhances:1 darkened:1 deficit:6 separate:2 simulated:8 link:1 evenly:1 tseng:3 kehoe:4 length:1 dicarlo:2 innovation:1 equivalently:1 difficult:1 susceptible:2 mostly:1 trace:93 negative:7 ordinarily:1 suppress:3 aversive:1 unsw:1 adjustable:1 perform:1 shallower:1 vertical:1 neuron:3 datasets:1 defining:1 extended:8 grew:1 precise:1 neurobiology:1 gormezano:1 varied:1 sharp:1 vander:1 canada:1 overcoming:1 introduced:1 pair:1 eluded:1 coherent:1 distinction:1 learned:4 tanaka:3 timeline:1 discontinuity:1 adult:1 beyond:1 bar:1 below:4 pattern:3 usually:1 challenge:1 including:2 memory:15 explanation:2 green:1 event:4 overlap:3 difficulty:2 examination:1 predicting:1 residual:1 altered:1 improve:1 eye:1 disappears:1 created:1 faced:2 prior:1 understanding:1 review:2 relative:1 loss:5 topography:3 interesting:2 foundation:1 agent:1 sufficient:1 t6g:1 port:2 beylin:5 row:1 course:6 compatible:1 changed:1 placed:2 free:2 explaining:1 face:1 mikhail:1 leaky:1 schmajuk:5 curve:4 qualitatively:1 reinforcement:6 adaptive:2 schultz:4 active:1 why:1 nature:1 ca:1 associator:1 operational:1 contributes:1 yamazaki:3 interact:1 broadening:1 complex:1 did:1 terminated:1 whole:2 profile:3 lesion:12 allowed:1 fig:8 representative:2 edmonton:1 depicts:3 eyeblink:6 guan:1 levy:3 late:4 learns:1 xt:6 qua:1 inset:1 supplemented:1 offset:1 decay:9 physiological:2 evidence:1 effectively:2 importance:1 magnitude:9 conditioned:8 illustrates:2 horizon:1 gap:2 easier:2 likely:1 doubling:1 fear:3 reflex:1 corresponds:1 determines:3 relies:1 ma:2 goal:1 presentation:2 briefer:2 change:3 determined:2 reducing:2 wt:5 total:2 catastrophic:1 intact:1 internal:1 support:2 modulated:1 phenomenon:1 |
2,890 | 362 | A Second-Order Translation, Rotation and
Scale Invariant Neural Network
Shelly D.D. Goggin
Kristina M. Johnson
Karl E. Gustafson?
Optoelectronic Computing Systems Center and
Department of Electrical and Computer Engineering
University of Colorado at Boulder
Boulder, CO 80309
[email protected]
ABSTRACT
A second-order architecture is presented here for translation, rotation and
scale invariant processing of 2-D images mapped to n input units. This
new architecture has a complexity of O(n) weights as opposed to the O(n^3)
weights usually required for a third-order, rotation invariant architecture.
The reduction in complexity is due to the use of discrete frequency information. Simulations show favorable comparisons to other neural network
architectures.
1
INTRODUCTION
Multiplicative interactions in neural networks have been proposed (Pitts and McCulloch, 1947; Giles and Maxwell, 1987; McClelland et al., 1988) both to explain biological neural functions and to provide invariances in pattern recognition. Higher-order neural networks are useful for invariant pattern recognition problems, but their complexity prohibits their use in many large image processing applications. The complexity of the third-order rotation invariant neural network of Reid et al., 1990 is O(n^3), which will clearly not scale. For example, when n is on the order of 10^6, as in high definition television (HDTV), O(10^18) weights would be required in a third-order neural network. Clearly, image processing applications are best approached with neural networks of lower complexity. We present a translation,
*Department of Mathematics
rotation and scale invariant architecture, which has weight complexity of O(n), and
requires only multiplicative and additive operations in the activation function.
2
HIGHER-ORDER NEURAL NETWORKS
Higher-order neural networks (HONN) have multiplicative terms in their activation
function, such that the output of a unit, o_k, has the form

    o_k = f\Big[ \sum_{i=0}^{n-1} \sum_{j=0}^{n-1} \cdots \sum_{l=0}^{n-1} w_{ij \ldots lk}\, x_i x_j \cdots x_l \Big]    (1)
where f is a thresholding function, w_{ij...lk} is the weight for each term, and x_i is one of n input values. Some of the x_i could be bias units to give lower-order terms. The order of the multiplications is O(n^m) for an m-order network, but the order of the number of weights can be lower. Since the multiplications of data can be done in a preprocessing stage, the major factor in the computational burden is the number of weights. The emphasis on the complexity of the weights is especially relevant for optical implementations of higher-order networks (Psaltis et al., 1988; Zhang et al., 1990), since the multiplications can usually be performed in parallel.
Invariances can be achieved with higher-order neural networks by using the spatial frequencies of the input as a priori information. Wechsler and Zimmerman, 1988, compute the Fourier transform of the data in polar coordinates and use these data as inputs to a neural network to achieve rotation, scale and translation invariance. The disadvantage with this approach is that the Fourier transform and the computation of polar coordinates require more complex operations than addition and multiplication of inputs. It has been shown that second-order networks can be constructed to provide either translation and scale invariance or rotation and scale invariance (Giles et al., 1988). However, their approach does not consider the difficulties in defining scale and rotation for images made up of pixels. Our architecture directly addresses the problem of rotation, translation and scale invariance in pattern recognition for 2-D arrays of binary pixels. Restrictions permit structure to be built into the weights, which reduces their complexity.
3
WEDGE-RING HONN
We present a new architecture for a second-order neural network based on the
concept of the wedge-ring detector (Casasent, 1985). When a wedge-ring detector
is used in the Fourier plane of an optical processor, a set of features are obtained
that are invariant to scale, rotation and translation. As shown in figure 1, the lens
performs a spatial Fourier transform on an image, which yields an intensity pattern
that is invariant to translations in the image plane. The ring detectors sum the
amplitudes of the spatial frequencies with the same radial distance from the zero
frequency, to give features that are invariant to rotation and shift changes. The
wedge detectors sum the amplitudes of frequencies within a range of angles with
respect to the zero frequency to produce features that are invariant to scale and
shift changes, assuming the images retain the same zero frequency power as they
are scaled.
[Figure 1: A wedge-ring detector optical processor (laser, image, Fourier-transform lens, wedge-ring detector, computer).]
In a multi-pixel, binary image, a second-order neural network can perform the same
function as the wedge-ring detector without the need for a Fourier transform. For
an image of dimensions \sqrt{n} x \sqrt{n}, let us define the pixel spatial frequency f_{k,l} as

    f_{k,l} = \sum_{i=0}^{\sqrt{n}-1-|k|} \sum_{j=0}^{\sqrt{n}-1-|l|} x_{i,j}\, x_{i+|k|,\, j+|l|}, \qquad -(\sqrt{n}-1) \le k, l \le \sqrt{n}-1,    (2)

where x_{i,j} is a binary-valued pixel at location (i, j). Note that the pixel frequencies have the symmetry f_{k,l} = f_{-k,-l}. The frequency terms can be arranged in a grid in a manner analogous to the Fourier transform image in the optical wedge-ring
detector. (See figure 2.)
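To make the definition concrete, here is a minimal Python/NumPy sketch of the pixel spatial frequencies; the function name and the toy image are ours, and we read the offset (k, l) with its signs (the printed limits use |k| and |l|), which preserves the symmetry f_{k,l} = f_{-k,-l} while keeping orientations distinguishable for the wedge terms defined below.

```python
import numpy as np

def pixel_spatial_frequencies(x):
    """Second-order pixel correlations f[(k, l)] of Eq. 2 for a binary image x.

    f[(k, l)] counts pixel pairs that are both 'on' and separated by the
    offset (k, l); by construction f[(k, l)] == f[(-k, -l)].
    """
    m = x.shape[0]                       # image is m x m, so n = m * m pixels
    f = {}
    for k in range(-(m - 1), m):
        for l in range(-(m - 1), m):
            total = 0
            for i in range(m):
                for j in range(m):
                    if 0 <= i + k < m and 0 <= j + l < m:
                        total += x[i, j] * x[i + k, j + l]
            f[(k, l)] = int(total)
    return f

# Toy 3x3 example (pixel values chosen for illustration only).
x = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 0, 0]])
f = pixel_spatial_frequencies(x)
assert f[(1, 1)] == f[(-1, -1)]          # the symmetry noted in the text
```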
[Figure 2: A simple 3x3 input image and its associated pixel spatial frequencies, pixel ring terms r_0 through r_4, and pixel wedge terms at eight orientations (180, 153, 135, 117, 90, 63, 45, and 27 degrees).]
For all integers p, 0 \le p \le 2(\sqrt{n}-1), the ring pixel terms r_p are given by

    r_p = 2 \sum_{|k|+|l| = p} f_{k,l}, \qquad 0 \le k \le \sqrt{n}-1;\ \ 0 \le l \le \sqrt{n}-1 \text{ if } k = 0;\ \ -(\sqrt{n}-1) \le l \le \sqrt{n}-1 \text{ if } k > 0,    (3)

as shown in Figure 2. This definition of the ring pixel terms works well for images with a small number of pixels. Larger pixel arrays can use the following definition. For 0 \le p \le 2(\sqrt{n}-1)^2,

    r_p = 2 \sum_{k^2 + l^2 = p} f_{k,l}, \qquad 0 \le k \le \sqrt{n}-1;\ \ 0 < l \le \sqrt{n}-1 \text{ if } k = 0;\ \ -(\sqrt{n}-1) \le l \le \sqrt{n}-1 \text{ if } k > 0.    (4)
Note that p will not take on all values less than 2n. The number of ring pixel terms generated by equation 4 is less than or equal to \lceil n/2 \rceil + \lfloor \sqrt{n}/2 \rfloor. The number of ring pixel terms can be reduced by making the rings a fixed width, \Delta r. Then, for all integers p, 0 \le p \le \lceil \sqrt{2}(\sqrt{n}-1)/\Delta r \rceil,

    r_p = 2 \sum_{(p-1)\Delta r < \sqrt{k^2+l^2} \le p\Delta r} f_{k,l}, \qquad 0 \le k \le \sqrt{n}-1;\ \ 0 < l \le \sqrt{n}-1 \text{ if } k = 0;\ \ -(\sqrt{n}-1) \le l \le \sqrt{n}-1 \text{ if } k > 0.    (5)
As the image size increases, the ring pixel terms will approximate continuous rings. For 0 < \theta \le 180 degrees, the wedge pixel terms v_\theta are

    v_\theta = 2 \sum_{\tan^{-1}(k/l) = \theta} f_{k,l}, \qquad -(\sqrt{n}-1) \le k \le 0;\ \ -(\sqrt{n}-1) \le l \le -1 \text{ if } k = 0;\ \ -(\sqrt{n}-1) \le l \le \sqrt{n}-1 \text{ if } k < 0,    (6)
as shown in Figure 2. The number of wedge pixel terms is less than or equal to 2n - 2\sqrt{n} + 1. The number of wedge pixel terms can be reduced by using a fixed wedge width, \Delta v. Then for all integers q, 1 \le q \le \lceil 180^\circ/\Delta v \rceil,

    v_q = 2 \sum_{(q-1)\Delta v < \tan^{-1}(k/l) \le q\Delta v} f_{k,l}, \qquad -(\sqrt{n}-1) \le k \le 0;\ \ -(\sqrt{n}-1) < l \le -1 \text{ if } k = 0;\ \ -(\sqrt{n}-1) \le l \le \sqrt{n}-1 \text{ if } k < 0.    (7)
For small pixel arrays, the pixel frequencies are not evenly distributed between the
wedges.
All of the operations from the second-order terms to the pixel frequencies and from
the pixel frequencies to the ring and wedge pixel terms are linear. Therefore, the
values of the wedge-ring features can be obtained by directly summing the second-order terms, without explicitly determining the individual spatial frequencies.
    r_p = 2 \sum_{|k|+|l| = p} \sum_{i=0}^{\sqrt{n}-1-|k|} \sum_{j=0}^{\sqrt{n}-1-|l|} x_{i,j}\, x_{i+|k|,\, j+|l|}, \qquad 0 \le k \le \sqrt{n}-1;\ \ 0 \le l \le \sqrt{n}-1 \text{ if } k = 0;\ \ -(\sqrt{n}-1) \le l \le \sqrt{n}-1 \text{ if } k > 0,    (8)

    v_\theta = 2 \sum_{\tan^{-1}(k/l) = \theta} \sum_{i=0}^{\sqrt{n}-1-|k|} \sum_{j=0}^{\sqrt{n}-1-|l|} x_{i,j}\, x_{i+|k|,\, j+|l|}, \qquad -(\sqrt{n}-1) \le k \le 0;\ \ -(\sqrt{n}-1) \le l \le -1 \text{ if } k = 0;\ \ -(\sqrt{n}-1) \le l \le \sqrt{n}-1 \text{ if } k < 0.    (9)
A mask can be used to sum the second-order terms directly. For an example of the
mask for the 3 x 3 image, see figure 3.
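The grouping of offsets into rings and wedges can likewise be sketched directly, using the f[(k, l)] dictionary from the earlier sketch; the atan2-based angle binning here is an illustrative choice rather than the paper's exact orientation set.

```python
import math

def ring_terms(f, m):
    """Ring features r_p of Eq. 3: twice the sum of f[(k, l)] over half-plane
    offsets with |k| + |l| = p (the 2 accounts for the mirrored offsets)."""
    r = [0] * (2 * (m - 1) + 1)
    for k in range(0, m):
        l_values = range(0, m) if k == 0 else range(-(m - 1), m)
        for l in l_values:
            r[abs(k) + abs(l)] += 2 * f[(k, l)]
    return r

def wedge_terms(f, m, n_wedges=8):
    """Wedge features in the spirit of Eqs. 6-7: f[(k, l)] binned by the
    orientation of the offset, over n_wedges equal sectors of (0, 180]."""
    width = 180.0 / n_wedges
    v = [0] * n_wedges
    for k in range(-(m - 1), 1):          # k <= 0: one offset per symmetric pair
        for l in range(-(m - 1), m):
            if k == 0 and l >= 0:         # for k = 0, Eq. 6 keeps only l <= -1
                continue
            theta = math.degrees(math.atan2(-k, l))
            if theta <= 0.0:
                theta += 180.0            # map the angle into (0, 180]
            idx = min(int(math.ceil(theta / width)) - 1, n_wedges - 1)
            v[idx] += 2 * f[(k, l)]
    return v
```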
[Figure 3: A mask for summing second-order terms for ring features and wedge features for the image in Figure 2.]
The ring and wedge pixel terms can be used as inputs for a multilayer neural network that can then perform pattern recognition with general combinations of these features. The output of the first (and possibly only) hidden layer units is, for unit j,

    o_j = f\Big[ \sum_p w_{j,p}\, r_p + \sum_\theta w_{j,\theta}\, v_\theta \Big],    (10)

where f here is the threshold function. The total number of ring and wedge terms, which corresponds to the number of weights, is less than or equal to (5/2)n.
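A single hidden unit of this layer can be sketched as below; the weight vectors and the optional bias are placeholders, since the TC network in the next section wires such units by inspection rather than by learning.

```python
import numpy as np

def hidden_unit_output(r, v, w_r, w_v, bias=0.0):
    """One hidden unit of Eq. 10: a hard threshold applied to a weighted sum
    of the ring terms r and the wedge terms v."""
    return 1 if np.dot(w_r, r) + np.dot(w_v, v) + bias > 0 else 0
```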
4
EXAMPLE RESULTS FOR THE TC PROBLEM
Results have been obtained for the 9 x 9 TC problem (McClelland et al., 1988) (see Figure 4). Since wedge and ring pixel terms are used, a solution to the problem is readily seen. Figure 5 shows the final neural network architecture. Equations 4 and 6 are used to calculate the ring and wedge pixel terms, respectively. With two additional layers, the network can distinguish between the T and the C at any of the three scales or four rotations. In the hidden layer, the 180-degree wedge pixel term is subtracted from the 90-degree wedge pixel term and vice-versa, with a bias unit weighted by 0.5 and a hard-limiting threshold function. This computation results in hidden units with values (0,1) or (1,0) for the C and (1,1) for the T. The next level then performs a binary AND, to get a 1 for T and a 0 for C. The wedge features are also used in a layer to determine whether the image was rotated by 90 degrees or not. The ring units are used as input to a layer with an output unit for each of the three scales. Due to the reduced complexity of the weights in this second-order neural network, a solution for the architecture and weights is obtained by inspection, whereas the
same problem required computer simulation when presented to a third-order neural network (Reid et al., 1990).

[Figure 4: Examples of rotated and scaled input images for the TC problem (scale = 2 and scale = 3).]

[Figure 5: Multilayer neural network for the wedge-ring features for the TC problem.]
5
CONCLUSIONS
In this paper, we show how the weight complexity in a higher-order neural network
is reduced from O(n^3) to O(n) by building into the architecture invariances in
rotation, translation and scale. These invariances were built into the neural network
architecture by analogy to the architecture for feature extraction in the optical
wedge-ring detector system. This neural network architecture has been shown to
greatly simplify the computations required to solve the classic TC problem.
Acknowledgements
We gratefully acknowledge fellowship support from GTE Research Labs and the
NSF Engineering Research Center for Optoelectronic Computing Systems grant
CDR8622236.
References
D. Casasent, "Coherent optical pattern recognition: A review," Optical Engineering,
vol. 24, no. 1, pp. 26-32 (1985).
C.L. Giles, R.D. Griffin, and T. Maxwell, "Encoding geometric invariances in higher-order networks," In: Neural Information Processing Systems, D. Z. Anderson (ed.),
(New York: American Institute of Physics, 1988) pp. 301-309.
C.L. Giles and T. Maxwell, "Learning, invariance and generalization in high-order
neural networks," Applied Optics, vol. 26, no. 23, pp. 4972-4978 (1987).
J.L. McClelland, D.E. Rumelhart, and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, (Cambridge, MA: The
MIT Press, 1988).
W. Pitts and W.S. McCulloch, "How we know universals: The perception of auditory and visual forms," Bulletin of Mathematical Biophysics, vol. 9, pp. 127-147 (1947).
D. Psaltis, C.H. Park and J. Hong, "Higher order associative memories and their
optical implementations," Neural Networks, vol. 1, pp. 149-163 (1988).
M.B. Reid, L. Spirkovska and E. Ochoa, "Simultaneous position, scale and rotation
invariant pattern classification using third-order neural networks," To appear in:
The International Journal of Neural Networks - Research and Applications.
H. Wechsler and G.L. Zimmerman, "Invariant object recognition using a distributed
associative memory," In: Neural Information Processing Systems, D. Z. Anderson
(ed.), (New York: American Institute of Physics, 1988) pp. 830-839.
L. Zhang, M.G. Robinson and K.M. Johnson, "Optical implementation of a second
order neural network," International Neural Network Conference, Paris, July, 1990.
2,891 | 3,620 | Learning Hybrid Models for Image Annotation with
Partially Labeled Data
Xuming He
Department of Statistics
UCLA
[email protected]
Richard S. Zemel
Department of Computer Science
University of Toronto
[email protected]
Abstract
Extensive labeled data for image annotation systems, which learn to assign class
labels to image regions, is difficult to obtain. We explore a hybrid model framework for utilizing partially labeled data that integrates a generative topic model
for image appearance with discriminative label prediction. We propose three alternative formulations for imposing a spatial smoothness prior on the image labels. Tests of the new models and some baseline approaches on three real image
datasets demonstrate the effectiveness of incorporating the latent structure.
1
Introduction
Image annotation, or image labeling, in which the task is to label each pixel or region of an image
with a class label, is becoming an increasingly popular problem in the machine learning and machine
vision communities [7, 14]. State-of-the-art methods formulate image annotation as a structured
prediction problem, and utilize methods such as Conditional Random Fields [8, 4], which output
multiple values for each input item. These methods typically rely on fully labeled data for optimizing model parameters. It is widely acknowledged that consistently-labeled images are tedious and
expensive to obtain, which limits the applicability of discriminative approaches. However, a large
number of partially-labeled images, with a subset of regions labeled in an image, or only captions
for images, are available (e.g., [12]). Learning labeling models with such data would help improve
segmentation performance and relax the constraint of discriminative labeling methods.
A wide range of learning methods have been developed for using partially-labeled image data. One
approach adopts a discriminative formulation, and treats the unlabeled regions as missing data [16].
Others take a semi-supervised learning approach by viewing unlabeled image regions as unlabeled
data. One class of these methods generalizes traditional semi-supervised learning to structured prediction tasks [1, 10]. However, the common assumption about the smoothness of the label distribution with respect to the input data may not be valid in image labeling, due to large intra-class
variation of object appearance. Other semi-supervised methods adopt a hybrid approach, combining
a generative model of the input data with a discriminative model for image labeling, in which the
unlabeled data are used to regularize the learning of a discriminative model [6, 9]. Only relatively
simple probabilistic models are considered in these approaches, without capturing the contextual
information in images.
Our approach described in this paper extends the hybrid modeling strategy by incorporating a more
flexible generative model for image data. In particular, we introduce a set of latent variables that
capture image feature patterns in a hidden feature space, which are used to facilitate the labeling
task. First, we extend the Latent Dirichlet Allocation model (LDA) [3] to include not only input
features but also label information, capturing co-occurrences within and between image feature
patterns and object classes in the data set. Unlike other topic models in image modeling [11, 18],
our model integrates a generative model of image appearance and a discriminative model of region
labels. Second, the original LDA structure does not impose any spatial smoothness constraint to
label prediction, yet incorporating such a spatial prior is important for scene segmentation. Previous
approaches have introduced lateral connections between latent topic variables [17, 15]. However,
this complicates the model learning, and as a latent representation of image data, the topic variables
can be non-smooth over the image plane in general. In this paper, we model the spatial dependency
of labels by two different structures: one introduces directed connections between each label variable
and its neighboring topic variables, and the other incorporates lateral connections between label
variables. We will investigate whether these structures effectively capture the spatial prior, and lead
to accurate label predictions.
The remainder of this paper is organized as follows. The next section presents the base model,
and two different extensions to handle label spatial dependencies. Section 3 and 4 define inference
and learning procedures for these models. Section 5 describes experimental results, and in the final
section we discuss the model limitations and future directions.
2
Model description
The structured prediction problem in image labeling can be formulated as follows. Let an image x be represented as a set of subregions {x_i}_{i=1}^{N_x}. The aim is to assign each x_i a label l_i from a categorical set L. For instance, the subregions x_i can be image patches or pixels, and L consists of object classes. Denote the set of labels for x as l = {l_i}_{i=1}^{N_x}. A key issue in structured prediction concerns how to capture the interactions between labels in l given the input image.
Model I. We first introduce our base model for capturing individual patterns in image appearance
and label space. Assume each subregion xi is represented by two features (ai , ti ), in which ai
describes its appearance (including color, texture, etc.) in some appearance feature space A and
ti is its position on the image plane T . Our method focuses on the joint distribution of labels
and subregion appearances given positions by modeling co-occurring patterns in the joint space of L x A. We achieve this by extending the latent Dirichlet allocation model to include both label and
appearance.
More specifically, we assume each observation pair (ai , li ) in image x is generated from a mixture
of K hidden "topic" components shared across the whole dataset, given the position information t_i. Following the LDA notation, the mixture proportion is denoted as θ, which is image-specific and shares a common Dirichlet prior parameterized by α. Also, z_i is used as an indicator variable to
specify from which hidden topic component the pair (ai, li ) is generated. In addition, we use a to
denote the appearance feature vector of each image, z for the indicator vector and t for the position
vector. Our model defines a joint distribution of label variables l and appearance feature variables a
given the position t as follows,
    P_b(l, a \mid t, \alpha) = \int_\theta \Big[ \prod_i \sum_{z_i} P(l_i \mid a_i, t_i, z_i)\, P(a_i \mid z_i)\, P(z_i \mid \theta) \Big] P(\theta \mid \alpha)\, d\theta    (1)
where P(θ | α) is the Dirichlet distribution. We specify the appearance model P(a_i | z_i) to be position invariant, but the label predictor P(l_i | a_i, t_i, z_i) depends on the position information. Those two components are formulated as follows, and the graphical representation of the model is shown in the left panel of Figure 1.
(a) Label prediction module P(l_i | a_i, t_i, z_i). The label predictor P(l_i | a_i, t_i, z_i) is modeled by a probabilistic classifier that takes (a_i, t_i, z_i) as its input and produces a properly normalized distribution for l_i. Note that we represent z_i in its "0-1" vector form when it is used as the classifier input.
So if the dimension of A is M , then the input dimension of the classifier is M + K + 2. We use a
MLP with one hidden layer in our experiments, although other strong classifiers are also feasible.
(b) Image appearance module P(a_i | z_i). We follow the convention of topic models and model the topic conditional distributions of the image appearance using a multinomial distribution with parameters φ_{z_i}. As the appearance features typically take on real values, we first apply k-means clustering to the image features {a_i} to build a visual vocabulary V. Thus a feature a_i in the appearance space A can be represented as a visual word v, and we have P(a_i = v | z_i = k) = φ_{k,v}.
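As a sanity check on the generative story behind Equation 1, the following sketch draws one image's worth of visual words and labels by ancestral sampling; `label_predictor` is a stand-in for the MLP label predictor, and all names and shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_image(phi, alpha, positions, label_predictor):
    """Ancestral sampling from the base model (Eq. 1).

    phi:             K x V matrix with phi[k, v] = P(a = v | z = k)
    alpha:           length-K Dirichlet hyperparameter
    positions:       iterable of super-pixel positions t_i
    label_predictor: maps (a_i, t_i, z_onehot) to a label distribution
    """
    K, V = phi.shape
    theta = rng.dirichlet(alpha)            # image-specific mixing proportions
    words, labels = [], []
    for t_i in positions:
        z_i = rng.choice(K, p=theta)        # topic indicator z_i
        a_i = rng.choice(V, p=phi[z_i])     # visual word from P(a_i | z_i)
        p_label = label_predictor(a_i, t_i, np.eye(K)[z_i])
        labels.append(rng.choice(len(p_label), p=p_label))
        words.append(a_i)
    return words, labels
```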
While the topic prediction model in Equation 1 is able to capture regularly co-occurring patterns in
the joint space of label and appearance, it ignores spatial priors on the label prediction.

[Figure 1: Left: A graphical representation of the base topic prediction model (Model I). Middle: Model II. Right: Model III. Circular nodes are random variables, and shaded nodes are observed. N is the number of image features in each image, and D denotes all the training data.]

However,
spatial priors, such as spatial smoothness, are crucial to labeling tasks, as neighboring labels are
usually strongly correlated. To incorporate spatial information, we extend our base model in two
different ways as follows.
Model II. We introduce a dependency between each label variable and its neighboring topic variables. In this model, each label value is predicted based on the summary information of topics within
a neighborhood. More specifically, we change the label prediction model into the following form:
    P(l_i \mid a_i, t_i, z_{N(i)}) = P\Big(l_i \,\Big|\, a_i, t_i, \sum_{j \in N(i)} w_j z_j\Big),    (2)

where N(i) is a predefined neighborhood for site i, and w_j is the weight for the topic variable z_j. We set w_j proportional to exp(-|t_i - t_j|/σ^2), normalized to 1, i.e., Σ_{j∈N(i)} w_j = 1. The graphical representation is shown in the middle panel of Figure 1. This model variant can be viewed as an extension to the supervised LDA [2]. Here, however, rather than a single label applying to each input example, there are multiple labels, one for each element of x.
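A small sketch of the classifier input used by Model II (the expectation of the neighborhood summary under the topic posteriors); array shapes and the 2σ neighborhood radius, which matches the experimental setting reported later, are illustrative assumptions.

```python
import numpy as np

def weighted_topic_summary(positions, q_z, i, sigma=10.0):
    """Model II classifier input: sum over j in N(i) of w_j q(z_j).

    positions: (num_sites x 2) array of t_i; q_z: (num_sites x K) array of
    topic posteriors. Neighbors lie within radius 2*sigma of site i, and
    w_j is proportional to exp(-|t_i - t_j| / sigma^2), normalized to one.
    """
    dists = np.linalg.norm(positions - positions[i], axis=1)
    nbrs = np.flatnonzero(dists <= 2.0 * sigma)
    w = np.exp(-dists[nbrs] / sigma ** 2)
    return (w / w.sum()) @ q_z[nbrs]
```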
Model III. We add lateral connections between label variables to build a Conditional Random Field of labels. The joint label distribution given the input image is defined as

    P(l \mid a, t, \alpha) = \frac{1}{Z} \exp\Big\{ \sum_{i,\, j \in N(i)} f(l_i, l_j) + \gamma \sum_i \log P_b(l_i \mid a, t, \alpha) \Big\},    (3)

where Z is the partition function. The pairwise potential f(l_i, l_j) = \sum_{a,b} u_{ab}\, \delta_{l_i, a}\, \delta_{l_j, b}, and the unary potential is defined as the log output of the base topic prediction model weighted by γ. Here δ is the Kronecker delta function. Note that P_b(l_i \mid a, t, \alpha) = \sum_{z_i} P(l_i \mid a_i, t_i, z_i)\, P(z_i \mid a, t). This model is shown in the right panel of Figure 1.
Note that the base model (Model I) obtains spatially smooth labels simply through the topics capturing location-dependent co-occurring appearance/label patterns, which tend to be nearby in image
space. Model II explicitly predicts a region's label from the topics in its local neighborhood, so that
neighboring labels share similar contexts defined by latent topics. In both of these models, the interaction between labels takes effect through the hidden input representation. The third model uses
a conventional form of spatial dependency by directly incorporating local smoothing in the label
field. While this structure may impose a stronger spatial prior than the other two, it also requires more complicated learning methods.
3
Inference and Label Prediction
Given a new image x = {a, t} and our topic models, we predict its labeling based on the Maximum Posterior Marginals (MPM) criterion:

    l_i^* = \arg\max_{l_i} P(l_i \mid a, t).    (4)
We consider the label inference procedure for three models separately as follows.
Models I&II: The marginal label distribution P(l_i | a, t) can be computed as:

    P(l_i \mid a, t) = \sum_{z_{N(i)}} P\Big(l_i \,\Big|\, a_i, t_i, \sum_{j \in N(i)} w_j z_j\Big)\, P(z_{N(i)} \mid a, t)    (5)
The summation here is difficult when N(i) is large. However, it can be approximated as follows. Denote v_i = Σ_{j∈N(i)} w_j z_j and v_{i,q} = Σ_{j∈N(i)} w_j q(z_j), where q(z_j) = {P(z_j | a, t)} is the vector form of the posterior distribution. Both v_i and v_{i,q} are in [0, 1]^K. The marginal label distribution can be written as P(l_i | a, t) = ⟨P(l_i | a_i, t_i, v_i)⟩_{P(z_{N(i)} | a, t)}. We take the first-order approximation of P(l_i | a_i, t_i, v_i) around v_{i,q} using a Taylor expansion:

    P(l_i \mid a_i, t_i, v_i) \approx P(l_i \mid a_i, t_i, v_{i,q}) + (v_i - v_{i,q})^T \cdot \nabla_{v_i} P(l_i \mid a_i, t_i, v_i)\big|_{v_{i,q}}.    (6)

Taking the expectation of both sides of Equation 6 w.r.t. P(z_{N(i)} | a, t) (notice that ⟨v_i⟩_{P(z_{N(i)} | a, t)} = v_{i,q}, so the first-order term vanishes), we have the following approximation: P(l_i | a, t) ≈ P(l_i | a_i, t_i, Σ_{j∈N(i)} w_j q(z_j)).
Model III: We first compute the unary potential of the CRF model from the base topic prediction model, i.e., P_b(l_i | a, t) = Σ_{z_i} P(l_i | a_i, t_i, z_i) P(z_i | a, t). Then the label marginals in Equation 4 are computed by applying loopy belief propagation to the conditional random field.
In both situations, we need the conditional distribution of the hidden topic variables z given the observed data components to compute the label prediction. We take a Gibbs sampling approach by integrating out the Dirichlet variable θ. From Equation 1, we can derive the posterior of each topic variable z_i given the other variables, which is required by Gibbs sampling:
    P(z_i = k \mid z_{-i}, a_i) \propto P(a_i \mid z_i)\Big(\alpha_k + \sum_{m \in S \setminus i} \delta_{z_m,\, k}\Big)    (7)
where z_{-i} denotes all the topic variables in z except z_i, and S is the set of all sites. Given the samples of the topic variables, we estimate their posterior marginal distribution P(z_i | a, t) by simply computing their normalized histograms.
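The Gibbs update of Equation 7 is short once topic counts are available; this sketch recomputes the counts naively for clarity, and the single symmetric alpha reflects the setting used in the experiments below.

```python
import numpy as np

def gibbs_resample_topic(i, z, a, phi, alpha, rng):
    """One collapsed Gibbs step for z_i (Eq. 7), all other z held fixed.

    z: integer topic assignments; a: visual-word indices;
    phi[k, v] = P(a = v | z = k); alpha: symmetric Dirichlet parameter."""
    K = phi.shape[0]
    counts = np.bincount(np.delete(z, i), minlength=K)  # sum_{m != i} delta(z_m, k)
    probs = phi[:, a[i]] * (alpha + counts)
    z[i] = rng.choice(K, p=probs / probs.sum())
    return z
```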
4
Learning with partially labeled data
Here we consider estimating the parameters of both extended models from a partially labeled image set D = {x^n, l^n}. For an image x^n, its label l^n = (l_o^n, l_h^n), in which l_o^n denotes the observed labels and l_h^n are missing. We also use o to denote the set of labeled regions. As the three models are built with different components, we treat them separately.
Models I&II. We use the Maximum Likelihood criterion to estimate the model parameters. Let Θ be the parameter set of the model,

    \Theta^* = \arg\max_{\Theta} \sum_n \log P(l_o^n, a^n \mid t^n; \Theta)    (8)
We maximize the log data likelihood by Monte Carlo EM. The lower bound of the likelihood can be written as

    Q = \sum_n \Big\langle \sum_{i \in o} \log P(l_i^n \mid a_i^n, t_i^n, z_{N(i)}^n) + \sum_i \log P(a_i^n \mid z_i^n) + \log P(z^n) \Big\rangle_{P(z^n \mid l_o^n, a^n)}    (9)
In the E step, the posterior distributions of the topic variables are estimated by a Gibbs sampling procedure similar to Equation 7. It uses the following conditional probability:

    P(z_i = k \mid z_{-i}, a_i, l, t) \propto \prod_{j \in N(i) \cap o} P(l_j \mid a_j, t_j, z_{N(j)}) \cdot P(a_i \mid z_i)\Big(\alpha_k + \sum_{m \in S \setminus i} \delta_{z_m,\, k}\Big)    (10)
Note that any label variable is marginalized out if it is missing. In the M step, we update the model parameters by maximizing the lower bound Q. Denote the posterior distribution of z as q(·); the updating equation for the parameters of the appearance module P(a|z) can be derived from the stationary point of Q:

    \phi_{k,v} \propto \sum_{n,i} q(z_i^n = k)\, \delta(a_i^n, v)    (11)
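The closed-form update of Equation 11 amounts to accumulating posterior topic mass per visual word; a minimal sketch, with array shapes assumed for illustration:

```python
import numpy as np

def update_appearance(q_z, a, vocab_size):
    """M-step re-estimate of phi (Eq. 11), pooled over all sites and images.

    q_z[i, k] = q(z_i = k) from the E step; a[i] is site i's visual word."""
    phi = np.zeros((q_z.shape[1], vocab_size))
    for i, word in enumerate(a):
        phi[:, word] += q_z[i]              # q(z_i = k) * delta(a_i, v)
    return phi / phi.sum(axis=1, keepdims=True)
```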
The classifier in the label prediction module is learned by maximizing the following log likelihood,

    L_c = \sum_{n,\, i \in o} \Big\langle \log P\Big(l_i^n \,\Big|\, a_i^n, t_i^n, \sum_{j \in N(i)} w_j z_j\Big) \Big\rangle_{q(z_{N(i)})} \approx \sum_{n,\, i \in o} \log P\Big(l_i^n \,\Big|\, a_i^n, t_i^n, \sum_{j \in N(i)} w_j q(z_j)\Big),    (12)
where the approximation takes the same form as in Equation 6. We use a gradient ascent algorithm
to update the classifier parameters. Note that we need to run only a few iterations at each M step,
which reduces training time.
Model III. We estimate the parameters of Model III in two stages: (1). The parameters of the base
topic prediction model are learned using the same procedure as in Models I&II. More specifically,
we set N (i) = i and estimate the parameters of the appearance module and label classifier based
on Maximum Likelihood. (2). Given the base topic prediction model, we compute the marginal
label probability Pb (li |a, t) and plug in the unary potential function in the CRF model (see Equation 3). We then estimate the parameters in the CRF by maximizing conditional pseudo-likelihood
as follows:
    L_p = \sum_n \sum_{i \in o} \Big[ \sum_{j \in N(i)} \sum_{a,b} u_{ab}\, \delta_{l_i^n, a}\, \delta_{l_j^n, b} + \gamma \log P_b(l_i^n \mid a^n, t^n) - \log Z_i^n \Big],    (13)

where Z_i^n = \sum_{l_i} \exp\big\{ \sum_{j \in N(i)} \sum_{a,b} u_{ab}\, \delta_{l_i, a}\, \delta_{l_j, b} + \gamma \log P_b(l_i \mid a, t) \big\} is the normalizing constant. As this cost function is convex, we use a simple gradient ascent method to optimize the conditional pseudo-likelihood.
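For concreteness, one site's contribution to the pseudo-log-likelihood of Equation 13 can be computed as below; the data layout (per-site neighbor lists, a pairwise potential matrix u) is an assumed encoding, not the paper's.

```python
import numpy as np

def site_pseudo_loglik(i, labels, nbrs, u, gamma, log_pb):
    """Site i's term in Eq. 13: the log-probability of its observed label
    given its neighbors' labels and the unary score from the base model.

    u[a, b]: pairwise potentials; log_pb[i, l] = log P_b(l_i = l | a, t)."""
    n_labels = log_pb.shape[1]
    scores = np.array([sum(u[l, labels[j]] for j in nbrs[i]) + gamma * log_pb[i, l]
                       for l in range(n_labels)])
    log_z = scores.max() + np.log(np.exp(scores - scores.max()).sum())
    return scores[labels[i]] - log_z
```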
5
Experimental evaluation
Data sets and representation. Our experiments are based on three image datasets. The first is a
subset of the Microsoft Research Cambridge (MSRC) Image Database [14] as in [16]. This subset
includes 240 images and 9 different label classes. The second set is the full MSRC image dataset,
including 591 images and 21 object classes. The third set is a labeled subset of the Corel database
as in [5] (referred therein as Corel-B). It includes 305 manually labeled images with 11 classes,
focusing on animals and natural scenes.
We use the normalized cut segmentation algorithm [13] to build a super-pixel representation of the
images, in which the segmentation algorithm is tuned to generate approximately 1000 segments for
each image on average. We extract a set of basic image features, including color, edge and texture
information, from each pixel site. For the color information, we transform the RGB values into CIE
Lab* color space. The edge and texture are extracted by a set of filter banks including a difference-of-Gaussian filter at 3 different scales, and quadrature pairs of oriented even- and odd-symmetric filters at 4 orientations and 3 scales. The color descriptor of a super-pixel is the average color over
the pixels in that super-pixel. For edge and texture descriptors, we first discretize the edge/texture
feature space by k-means, and use each cluster as a bin. Then we compute the normalized histograms
of the features within a super-pixel as the edge/texture descriptor. In the experiments reported here,
we used 20 bins for edge information and 50 bins for texture information. We also augment each
feature by a SIFT descriptor extracted from a 30 x 30 image patch centered at the super-pixel. The
image position of a super-pixel is the average position of its pixels. To compute the vocabulary of
visual words in the topic model, we apply k-means to group the super-pixel descriptors into clusters.
The cluster centers are used as visual words and each descriptor is encoded by its word index.
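The vocabulary step can be sketched with an off-the-shelf k-means; scikit-learn is our illustrative choice (not something the paper specifies), and vocab_size = 500 matches the MSRC-9 setting reported below.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_visual_vocabulary(descriptors, vocab_size=500, seed=0):
    """Cluster super-pixel descriptors into a visual-word codebook and
    return each descriptor's word index."""
    km = KMeans(n_clusters=vocab_size, n_init=10, random_state=seed)
    word_index = km.fit_predict(np.asarray(descriptors))
    return km.cluster_centers_, word_index
```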
Comparison methods. We compare our approach directly with two baseline systems: a superpixel-wise classifier and a basic CRF model. We also report the experimental results from [16],
although they adopt a different data representation in their experiments (patches rather than superpixels). The super-pixel-wise classifier is an MLP with one hidden layer, which predicts labels for
each super-pixel independently. The MLP has 30 hidden units, a number chosen based on validation
performance. In the basic CRF, the conditional distribution of the labels of an image is defined as:
    P(l \mid a, t) \propto \exp\Big\{ \sum_{i,j} \sum_{u,v} \lambda_{u,v}\, \delta_{l_i, u}\, \delta_{l_j, v} + \eta \sum_i h(l_i \mid a_i, t_i) \Big\}    (14)

where h(·) is the log output from the super-pixel classifier. We train the CRF model by maximizing
its conditional pseudo-likelihood, and label the image based on the marginal distribution of each
label variable, computed by the loopy belief propagation algorithm.
Performance on MSRC-9. Following the setting in [16], we randomly split the dataset into training
and testing sets with equal size, and use 10% training data as our validation set. In this experiment,
Table 1: A comparison of classification accuracy of the 3 variants of our model with other methods. The average classification accuracy is at the pixel level.

Label    | S Class | CRF  | Model I | Model II | Model III | [16]
---------|---------|------|---------|----------|-----------|-----
building | 61.2    | 69.8 | 64.8    | 79.2     | 78.1      | 73.6
grass    | 93.2    | 94.4 | 93.0    | 94.1     | 92.5      | 91.1
tree     | 71.3    | 82.1 | 76.6    | 81.4     | 85.4      | 82.1
cow      | 57.0    | 73.3 | 72.0    | 80.2     | 86.7      | 73.6
sky      | 92.9    | 94.2 | 93.5    | 93.5     | 94.6      | 95.7
plane    | 37.5    | 62.0 | 65.1    | 72.4     | 77.9      | 78.3
face     | 69.0    | 80.5 | 74.4    | 86.3     | 83.5      | 89.5
car      | 56.0    | 80.1 | 61.3    | 69.5     | 74.7      | 84.5
bike     | 54.1    | 78.6 | 77.7    | 86.2     | 88.3      | 81.4
Total    | 74.2    | 83.5 | 79.7    | 85.5     | 86.7      | 84.9
we set the vocabulary size to 500, the number of hidden topics to 50, and each symmetric Dirichlet parameter α_k = 0.5, based on validation performance. For Model II, we define the neighborhood of each site i as the subset of sites that fall into a circular region centered at i with radius 2σ, where σ is the fall-off rate of the weights. We set σ to 10 pixels, which is roughly 1/20 of the image size. The classifiers for label prediction have 15 hidden units. The appearance model for topics and the classifier are initialized randomly. In the learning procedure, the E step uses 500 samples to estimate the posterior distribution of topics. In the M step, we take 3 steps of gradient ascent learning of the classifiers per iteration.
The performance of our models is first evaluated on the dataset with all the labels available. We
compare the performance of the three model variants to the super-pixel classifier (S Class), and the
CRF model. Table 1 shows the average classification accuracy rates of our model and the baselines
for each class and in total, over 10 different random partitions of the dataset. We can see that Model
I, which uses latent feature representations as additional inputs, achieves much better performance
than the S Class. Also, Model II and III improve the accuracy further by incorporating the label
spatial priors. We notice that the lateral connections between label variables are more effective than
integrating information from neighboring latent topic variables. This is also demonstrated by the
good performance of the simple CRF.
Learning with different amounts of label data. In order to test the robustness of the latent feature
representation, we evaluate our models using data with different amounts of labeling information. We use an image dilation operator on the image regions labeled as "void", and control the proportion
of labeled data by varying the diameters of the dilation operator (see [16] for similar processing).
Specifically, we use diameter values of 5, 10, 15, 20, 25, 30 and 35 to change the proportion of the
labeled pixels to 62.9%, 52.1%, 44.1%, 36.4%, 30.5%, 24.9% and 20.3%, respectively. The original
proportion is 71.9%. We report the average accuracies of 5 runs of training and testing with random
equal partition of the dataset in Figure 2. The figure shows that the performance of all three models
degrades with fewer labeled data, but the degradation is relatively gradual. When the proportion of
labeled data decreases from 72% to 20%, the total loss in accuracy is less than 10%. This suggests
that incorporating latent features makes our models more robust against missing labels than the
previous work (cf. [16]). We also note that the performance of Model III is more robust than the
other two variants, which may derive from stronger smoothing.
Table 2: A comparison of classification accuracy of our three model variants with other methods on the full MSRC dataset and the Corel-B dataset.

        | S Class | Model I | Model II | Model III | [14] | [5]
--------|---------|---------|----------|-----------|------|-----
MSRC    | 60.0    | 65.9    | 72.3     | 74.0      | 72.2 | -
Corel-B | 68.2    | 69.2    | 73.4     | 75.5      | -    | 75.3
Performance on other sets. We further evaluate our models on two larger datasets to see whether
they can scale up. The first dataset is the full version of the MSRC dataset, and we use the same
training/testing partition as in [14]. The model setting is the same as in MSRC-9 except that we use
an MLP with 20 hidden units for label prediction. The second is the Corel-B dataset, which is divided
into 175 training images and 130 testing images randomly. We use the same setting of the models
as in the experiments on the full MSRC set. Table 2 summarizes the classification accuracies of our
models as well as some previous methods. For the full MSRC set, the two extended versions of our
model achieve similar performance as in [14], and we can see that the latent topic representation
[Figure 2: Left: Classification accuracy (S_Class, Model I, Model II, Model III) with gradually decreasing proportion of labeled pixels. Right top: Examples of an image and its super-pixelization. Right bottom: Examples of original labeling and labeling after dilation (the ratio is 36.4).]
provides useful cues. Also, our models have the same accuracy as reported in [5] on the Corel-B
dataset, while we have a simpler label random field and use a smaller training set. It is interesting to
note that the topics and spatial smoothness play a lesser role in the labeling performance on Corel-B. Figure 3 shows some examples of labeling results from both datasets. We can see that our models handle extended regions better than fine object structures, due to the tendency toward (over)smoothing caused by super-pixelization and the two spatial dependency structures.
6
Discussion
In this paper, we presented a hybrid framework for image labeling, which combines a generative topic model with discriminative label prediction models. The generative model extends latent
Dirichlet allocation to capture joint patterns in the label and appearance space of images. This latent representation of an image then provides an additional input to the label predictor. We also
incorporated the spatial dependency into the model structure in two different ways, both imposing a
prior of spatial smoothness for labeling on the image plane. The results of applying our methods to
three different image datasets suggest that this integrated approach may extend to a variety of image
databases with only partial labeling available. The labeling system consistently outperforms alternative approaches, such as a standard classifier and a standard CRF. Its performance also matches
that of the state-of-the-art approaches, and is robust against different amount of missing labels.
Several avenues exist for future work. First, we would like to understand when the simple first-order
approximation in inference for Model II holds, e.g., when the local curvature of the classifier with
respect to its input is large. In addition, it is important to address model selection issues, such as
the number of topics. We currently rely on the validation set, but more principled approaches are
possible. A final issue concerns the reliance on visual words formed by clustering features in a
complicated appearance space. Using a stronger appearance model may help us understand the role
of different visual cues, as well as construct a more powerful generative model.
References
[1] Yasemin Altun, David McAllester, and Mikhail Belkin. Maximum margin semi-supervised learning for
structured variables. In NIPS 18, 2006.
[2] David Blei and Jon McAuliffe. Supervised topic models. In NIPS 20, 2008.
[3] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res.,
3:993-1022, 2003.
[4] Xuming He, Richard Zemel, and Miguel Carreira-Perpinan. Multiscale conditional random fields for
image labelling. In CVPR, 2004.
[5] Xuming He, Richard S. Zemel, and Debajyoti Ray. Learning and incorporating top-down cues in image
segmentation. In ECCV, 2006.
[6] Michael Kelm, Chris Pal, and Andrew McCallum. Combining generative and discriminative methods for
pixel classification with multi-conditional learning. In ICPR, 2006.
[Figure 3: Some labeling results for the Corel-B (bottom panel) and MSRC-9 (top panel) datasets, based on the best performance of our models. The "Void" region is annotated in black.]
[7] Sanjiv Kumar and Martial Hebert. Discriminative random fields: A discriminative framework for contextual interaction in classification. In ICCV, 2003.
[8] John Lafferty, Andrew McCallum, and Fernando Pereira. Conditional random fields: Probabilistic models
for segmenting and labeling sequence data. In ICML, pages 282-289, 2001.
[9] Julia A. Lasserre, Christopher M. Bishop, and Thomas P. Minka. Principled hybrids of generative and
discriminative models. In CVPR, 2006.
[10] Chi-Hoon Lee, Shaojun Wang, Feng Jiao, Dale Schuurmans, and Russell Greiner. Learning to model
spatial dependency: Semi-supervised discriminative random fields. In NIPS 19, 2007.
[11] Nicolas Loeff, Himanshu Arora, Alexander Sorokin, and David Forsyth. Efficient unsupervised learning
for localization and detection in object categories. In NIPS, 2006.
[12] B. Russell, A. Torralba, K. Murphy, and W. Freeman. LabelMe: A database and web-based tool for image
annotation. Technical report, MIT AI Lab Memo AIM-2005-025, 2005.
[13] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. PAMI, 2000.
[14] Jamie Shotton, John M. Winn, Carsten Rother, and Antonio Criminisi. Textonboost: Joint appearance,
shape and context modeling for multi-class object recognition and segmentation. In ECCV, 2006.
[15] Jakob Verbeek and Bill Triggs. Region classification with markov field aspect models. In CVPR, 2007.
[16] Jakob Verbeek and Bill Triggs. Scene segmentation with CRFs learned from partially labeled images. In
NIPS 20, 2008.
[17] Gang Wang, Ye Zhang, and Li Fei-Fei. Using dependent regions for object categorization in a generative
framework. In CVPR, 2006.
[18] Xiaogang Wang and Eric Grimson. Spatial latent Dirichlet allocation. In NIPS, 2008.
2,892 | 3,621 | Analyzing human feature learning as
nonparametric Bayesian inference
Thomas L. Griffiths
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
Tom [email protected]
Joseph L. Austerweil
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Abstract
Almost all successful machine learning algorithms and cognitive models require
powerful representations capturing the features that are relevant to a particular
problem. We draw on recent work in nonparametric Bayesian statistics to define a
rational model of human feature learning that forms a featural representation from
raw sensory data without pre-specifying the number of features. By comparing
how the human perceptual system and our rational model use distributional and
category information to infer feature representations, we seek to identify some of
the forces that govern the process by which people separate and combine sensory
primitives to form features.
1
Introduction
Most accounts of the processes underlying human learning, decision-making, and perception assume
that stimuli have fixed sets of features. For example, traditional accounts of category learning start
with a set of features (e.g., is furry and barks), which are used to learn categories (e.g., dogs).
In a sense, features are the basic atoms for these processes. Although the model?s features may
be combined in particular ways to create new features, the basic primitives are assumed to be fixed.
While this assumption has been useful in investigating many cognitive functions, it has been attacked
on empirical [1] and theoretical [2] grounds. Experts identify parts of objects in their domain of
expertise vastly differently than novices (e.g., [3]), and evidence for flexible feature sets has been
found in many laboratory experiments (see [2] for a review). In this paper, we present an account of
how flexible features sets could be induced from raw sensory data without requiring the number of
features to be prespecified.
From early work demonstrating XOR is only learnable by a linear classifier with the right representation [4] to the so-called "kernel trick" popular in support vector machines [5], forming an appropriate representation is a fundamental issue for applying machine learning algorithms. We draw on the
convergence of interest from cognitive psychologists and machine learning researchers to provide a
rational analysis of feature learning in the spirit of [6], defining an "ideal" feature learner using ideas from nonparametric Bayesian statistics. Comparing the features identified by this ideal learner to those learned by people provides a way to understand how distributional and category information
contribute to feature learning.
We approach the problem of feature learning as one of inferring hidden structure from observed data, a problem that can be solved by applying Bayesian inference. By using methods from nonparametric Bayesian statistics, we can allow an unbounded amount of structure to be expressed in the
observed data. For example, nonparametric Bayesian clustering models allow observations to be
assigned to a potentially infinite number of clusters, of which only a finite number are represented
at any time. When such a model is presented with a new object that it cannot currently explain,
it increases the complexity of its representation to accommodate the object. This flexibility gives
nonparametric Bayesian models the potential to explain how people infer rich latent structure from
the world, and such models have recently been applied to a variety of aspects of human cognition
(e.g., [6, 7]). While nonparametric Bayesian models have traditionally been used to solve problems
related to clustering, recent work has resulted in new models that can infer a set of features to represent a set of objects without limiting the number of possible features [8]. These models are based on
the Indian Buffet Process (IBP), a stochastic process that can be used to define a prior on the features
of objects. We use the IBP as the basis for a rational model of human perceptual feature learning.
The plan of the paper is as follows. Section 2 summarizes previous empirical findings from the
human perceptual feature learning literature. Motivated by these results, Section 3 presents a rational
analysis of feature learning, focusing on the IBP as one component of a nonparametric Bayesian
solution to the problem of finding an optimal representation for some set of observed objects. Section
4 compares human learning and the predictions of the rational model. Section 5 concludes the paper.
2 Human perceptual feature learning
One main line of investigation of human feature learning concerns the perceptual learning phenomena of unitization and differentiation. Unitization occurs when two or more features that were
previously perceived as distinct features merge into one feature. In a visual search experiment by
Shiffrin and Lightfoot [9], after learning that the features that generated the observed objects co-vary
in particular ways, participants represented each object as its own feature instead of as three separate
features. In contrast, differentiation is when a fused feature splits into new features. For example,
color novices cannot distinguish between a color's saturation and brightness; however, people can
be trained to make these distinctions [10]. Although general conditions for when differentiation or
unitization occur have been outlined, there is no formal account for why and when these processes
take place.
In Shiffrin and Lightfoot's visual search experiment [9], participants were trained to find one of the
objects shown in Figure 1(a) in a scene where the other three objects were present as distractors.
Each object is composed of three features (single line segments) inside a rectangle. The objects can
thus be represented by the feature ownership matrix shown in Figure 1(a), with Zik = 1 if object i
has feature k. After prolonged practice, human performance drastically and suddenly improved, and
this advantage did not transfer to other objects created from the same feature set. They concluded
that the human perceptual system had come to represent each object holistically, rather than as being
composed of its more primitive features. In this case, the fact that the features tended to co-occur
only in the configurations corresponding to the four objects provides a strong cue that they may not
be the best way to represent these stimuli.
The distribution of potential features over objects provides one cue for inferring a feature representation; however, there can be cases where multiple feature representations are equally good. For example, Pevtzow and Goldstone [11] demonstrated that human perceptual feature learning is affected
by category information. In the first part of their experiment, they trained participants to categorize
eight "distorted" objects into one of three groups using one of two categorization schemes. The
objects were distorted by the addition of a random line segment. The category membership of four
of the objects, A-D, depended on the training condition, as shown in Figure 1 (b). Participants in the
horizontal categorization condition had objects A and B categorized into one group and objects C
and D into the other. Those in the vertical categorization condition learned objects A and C are categorized into one group and objects B and D in the other. The nature of this categorization affected
the features learned by participants, providing a basis for selecting one of the two featural representations for these stimuli that would otherwise be equally well-justified based on distributional
information.
Recent work has supplemented these empirical results with computational models of human feature
learning. One such model is a neural network that incorporates categorization information as it
learns to segment objects [2]. Although the inputs to the model are the raw pixel values of the
stimuli, the number of features must be specified in advance. This is a serious issue for an analysis of
human feature learning because it does not allow us to directly compare different feature set sizes,
a critical factor in capturing unitization and differentiation phenomena. Other work has investigated
how the human perceptual system learns to group objects that seem to arise from a common cause
[Figure 1: the stimuli and the binary feature ownership matrices for panels (a) and (b) are rendered in the original figure and are omitted in this text extraction.]
Figure 1: Inferring representations for objects. (a) Stimuli and feature ownership matrix from
Shiffrin and Lightfoot [9]. (b) Four objects (A-D) and inferred features depending on categorization
scheme from Pevtzow and Goldstone [11].
[12]. This work uses a Bayesian model that can vary the number of causes it identifies, but assumes
indifference to the spatial position of the objects and that the basic objects themselves are already
known, with a binary variable representing the presence of an object in each scene being given to
the model as the observed data. This model is thus given the basic primitives from raw sensory data
and does not provide an account of how the human perceptual system identifies these primitives. In
the remainder of the paper, we develop a rational model of human feature learning that applies to
raw sensory data and does not assume a fixed number of features in advance.
3 A Rational Analysis of Feature Learning
Rational analysis is a technique for understanding a cognitive process by comparing it to the optimal
solution to an underlying computational problem [6], with the goal of understanding how the structure of this problem influences human behavior. By formally analyzing the problem of inferring
featural representations from raw sensory data of objects, we can determine how distributional and
category information should influence the features used to represent a set of objects.
3.1 Inferring Features from Percepts
Our goal is to form the most probable feature representation for a set of objects given the set of
objects we see. Formally, we can represent the features of a set of objects with a feature ownership
matrix Z like that shown in Figure 1, where rows correspond to objects, columns correspond to
features, and Zik = 1 indicates that object i possesses feature k. We can then seek to identify the
most likely feature ownership matrix Z given the observed properties of a set of objects X by a
simple application of Bayes theorem:
Z* = argmax_Z P(Z|X) = argmax_Z [P(X|Z)P(Z) / ∑_{Z′} P(X|Z′)P(Z′)] = argmax_Z P(X|Z)P(Z)    (1)
This separates the problem of finding the best featural representation given a set of data into two
subproblems: finding a representation that is in general probable, as expressed by the prior P (Z),
and finding a representation that generates the observed properties of the objects with high probability, as captured by the likelihood P (X|Z). We consider how these distributions are defined in
turn.
3.2 A Prior on Feature Ownership Matrices
Although in principle any distribution on binary matrices P (Z) could be used as a prior, we use one
particular nonparametric Bayesian prior, the Indian Buffet Process (IBP) [8]. The IBP has several
nice properties: it allows for multiple features per object, possessing one feature does not make
possessing another feature less likely, and it generates binary matrices of unbounded dimensionality.
This allows the IBP to use an appropriate, possibly different, number of features for each object and
makes it possible for the size of the feature set to be learned from the objects.
The IBP defines a distribution over binary matrices with a fixed number of rows and an infinite
number of columns, of which only a finite number are expected to have non-zero elements. The
distribution thus permits tractable inference of feature ownership matrices without specifying the
number of features ahead of time. The probability of a feature ownership matrix under the IBP is
typically described via an elaborate metaphor in which objects are customers and features are dishes
in an Indian buffet, with the choice of dishes determining the features of the object, but reduces to
P(Z) = (α^{K₊} / ∏_{h=1}^{2^N − 1} K_h!) exp{−α H_N} ∏_{k=1}^{K₊} (N − m_k)!(m_k − 1)! / N!    (2)
where N is the number of objects, K_h is the number of features with history h (the history is the column of the feature interpreted as a binary number), K₊ is the number of columns with non-zero entries, H_N is the N-th harmonic number, α affects the number of features objects own, and m_k is the number of objects that have feature k.
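To make the role of each factor in Eq. (2) concrete, the following sketch (our illustration, not code from the paper) evaluates log P(Z) for a binary feature matrix; it assumes every column of Z is non-empty, and groups identical columns by their history h with a dictionary.

import math
from collections import Counter

def ibp_log_prob(Z, alpha):
    # Log P(Z) under the IBP (Eq. 2). Z: N rows, each a list of 0/1 over the
    # K+ non-empty columns; every column is assumed to contain at least one 1.
    N = len(Z)
    K_plus = len(Z[0]) if N > 0 else 0
    H_N = sum(1.0 / n for n in range(1, N + 1))          # N-th harmonic number
    # group identical columns by their "history" (the column read as a tuple)
    histories = Counter(tuple(Z[i][k] for i in range(N)) for k in range(K_plus))
    log_p = K_plus * math.log(alpha) - alpha * H_N
    log_p -= sum(math.lgamma(Kh + 1) for Kh in histories.values())  # prod of K_h!
    for k in range(K_plus):
        m_k = sum(Z[i][k] for i in range(N))
        log_p += (math.lgamma(N - m_k + 1) + math.lgamma(m_k)
                  - math.lgamma(N + 1))                  # (N - m_k)! (m_k - 1)! / N!
    return log_p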
3.3 Two Likelihood Functions for Perceptual Data
To define the likelihood, we assume N objects with d observed dimensions (e.g., pixels in an image)
are grouped in a matrix X (X = [x_1^T, ..., x_N^T], where x_i ∈ R^d). The feature ownership matrix Z marks the commonalities and contrasts between these objects, and the likelihood P(X|Z) expresses how these relationships influence their observed properties. Although in principle many forms are possible for the likelihood, two have been used successfully with the IBP in the past: the linear-Gaussian [8] and noisy-OR [13] models.
The linear-Gaussian model assumes that x_i is drawn from a Gaussian distribution with mean z_i A and covariance matrix Σ_X = σ_X² I, where z_i is the binary vector defining the features of object x_i and A is a matrix of the weights on each of the D elements of the raw data for each feature k:

p(X | Z, A, σ_X) = (1 / (2π σ_X²)^{ND/2}) exp{−(1 / 2σ_X²) tr((X − ZA)^T (X − ZA))}    (3)
Although A actually represents the weights of each feature (which combine with each other to determine raw pixel values of each object), it is integrated out, so that the conditional probability of X given Z only depends on Z and hyperparameters corresponding to the variance in X and A (see [8] for details). The result of using this model is a set of images representing the perceptual
features corresponding to the matrix Z, expressed in terms of the posterior distribution over the
weights A.
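To make the generative reading of Eq. (3) concrete, a minimal sketch of the forward model might look as follows (our illustration; the hyperparameter values are placeholders, and sigma_A governs the prior on the weights A that the paper integrates out):

import numpy as np

def sample_linear_gaussian(Z, D, sigma_A=0.5, sigma_X=0.3, rng=None):
    # Z: (N, K) binary feature matrix. Returns weights A (K, D) and data
    # X (N, D) with A ~ Normal(0, sigma_A^2 I) and X ~ Normal(Z A, sigma_X^2 I).
    rng = rng or np.random.default_rng(0)
    N, K = Z.shape
    A = sigma_A * rng.standard_normal((K, D))   # per-feature pixel weights
    X = Z @ A + sigma_X * rng.standard_normal((N, D))
    return A, X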
For the noisy-OR model [13], the raw visual data is reduced to binary pixel values. This model assumes that the pixel values X are generated from a noisy-OR distribution where Z defines the features that each object has and Y defines which pixels should be one for each feature:

p(x_{i,d} = 1 | Z, Y, λ, ε) = 1 − (1 − λ)^{z_{i,:} · y_{:,d}} (1 − ε)    (4)
where hyperparameters ε and λ represent the probability a pixel is turned on without a cause and the probability a feature fails to turn on a pixel, respectively. Additionally, Y is assumed to have a Bernoulli prior with hyperparameter p representing the probability that an entry of Y is one, with p(Y) = ∏_{k,d} p^{y_{k,d}} (1 − p)^{1 − y_{k,d}}. The result of using this model is a distribution over binary arrays indicating the pixels associated with the features identified by Z, expressed via the posterior distribution on Y.
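A direct transcription of Eq. (4) might look like the following sketch (ours, not the authors' implementation); it assumes 0 < λ, ε < 1 so that every probability is strictly between 0 and 1:

import numpy as np

def noisy_or_prob(Z, Y, lam, eps):
    # P(x_{i,d} = 1 | Z, Y) from Eq. (4). Z: (N, K) binary, Y: (K, D) binary.
    exponents = Z @ Y            # z_{i,:} . y_{:,d} = number of active causes
    return 1.0 - (1.0 - lam) ** exponents * (1.0 - eps)

def log_likelihood(X, Z, Y, lam, eps):
    # Log p(X | Z, Y) for a binary image matrix X, pixels treated independently.
    p = noisy_or_prob(Z, Y, lam, eps)
    return np.sum(X * np.log(p) + (1 - X) * np.log(1 - p))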
3.4 Summary
The prior and likelihood defined in the preceding sections provide the ingredients necessary to use
Bayesian inference to identify the features of a set of objects from raw sensory data. The result
Figure 2: Inferring feature representations using distributional information from Shiffrin and Lightfoot [9]. On the left, bias features and on the right, the four objects as learned features. The rational model justifies the human perceptual system's unitization of the objects as features.
is a posterior distribution on feature ownership matrices Z, indicating how a set of objects could
be represented, as well as an indication of how the features identified by this representation are
expressed in the sensory data. While computing this posterior distribution exactly is intractable, we
can use existing algorithms developed for probabilistic inference in these models. Although we used
Gibbs sampling (a form of Markov chain Monte Carlo that produces samples from the posterior distribution on Z) for all of our simulations, Reversible Jump MCMC and particle filtering inference
algorithms have also been derived for these models [8, 13, 14].
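As a rough illustration of such a Gibbs sampler, one sweep over the entries of Z for the currently active features could be sketched as below (our code; the birth/death moves that propose brand-new features in the full IBP sampler are omitted, and log_lik can be either of the likelihood models above):

import math, random

def gibbs_sweep_Z(Z, log_lik, rng=random):
    # One Gibbs sweep over existing features. log_lik(Z) must return
    # log p(X | Z) under the chosen likelihood model (weights marginalized).
    N, K = len(Z), len(Z[0])
    for i in range(N):
        for k in range(K):
            m_minus = sum(Z[j][k] for j in range(N) if j != i)
            if m_minus == 0:
                continue   # column owned only by i: handled by a new-feature move
            log_odds_prior = math.log(m_minus) - math.log(N - m_minus)
            Z[i][k] = 1; ll1 = log_lik(Z)
            Z[i][k] = 0; ll0 = log_lik(Z)
            p1 = 1.0 / (1.0 + math.exp(-(log_odds_prior + ll1 - ll0)))
            Z[i][k] = 1 if rng.random() < p1 else 0
    return Z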
4 Comparison with Human Feature Learning
The nonparametric Bayesian model outlined in the previous section provides an answer to the question of how an ideal learner should represent a set of objects in terms of features. In this section we
compare the representations discovered by this ideal model to human inferences. First, we demonstrate that the representation discovered by participants in Shiffrin and Lightfoot's experiment [9]
is optimal under this model. Second, we illustrate that both the IBP and the human perceptual
system incorporate category information appropriately. Finally, we present simulations that show
the flexibility of the IBP to learn different featural representations depending on the distributional
information of the actual features used to generate the objects, and discuss how this relates to the
phenomena of unitization and differentiation more generally.
4.1 Using Distributional Information
When should whole objects or line segments be learned as features? It is clear which features should
be learned when all of the line segments occur independently and when the line segments in each
object always occur together (the line segments and the objects respectively). However, in the intermediate cases of non-perfect co-occurrence, what should be learned? Without a formal account of
feature learning, there is no basis for determining when object "wholes" or "parts" should be learned as features. Our rational model provides an answer: when there is enough statistical evidence for
the individual line segments to be features, then each line segment should be differentiated into
features. Otherwise, the collection of line segments should be learned as one unitized feature.
The stimuli constructed by Shiffrin and Lightfoot [9] constitute one of the intermediate cases between the extremes of total independence and perfect correlation, and are thus a context in which
formal modeling can be informative. Figure 2 presents the features learned by applying the model
with a noisy-OR likelihood to this object set. The features on the left are the bias and the four features on
the right are the four objects from their study. The learned features match the representation formed
by people in the experiment. Although there is imperfect co-occurrence between the features in each
object, there is not enough statistical evidence to warrant representing the object as a combination of
features. These results were obtained with an object set consisting of five copies of each of the four
objects with added noise that flips a pixel's value with probability 1/75. The results were obtained by running the Gibbs sampler with initialization p = 0.2, α = 1.0, ε = 0.025, and λ = 0.975. Inference is robust to different initializations as long as they are near these values.
Figure 3: Inferring feature representations using category information from Pevtzow and Goldstone
[11]. (a) - (b) Features learned from using the rational model with the noisy-OR likelihood where
10 distorted copies of objects A-D comprise the object set with (a) horizontal and (b) vertical categorization schemes (c = 35) respectively. The features inferred by the model match those learned
by participants in the experiment. (c) - (d) Features learned from using the same model with the full
object set with 10 distorted copies of each object, the (c) horizontal and (d) vertical categorization
schemes (c = 75) respectively. The first two features learned by the model match those learned
by participants in the experiment. The third feature represents the intersection of the third category
(Pevtzow and Goldstone did not test if participants learned this feature).
4.2 Using Category Information
To model the results of Pevtzow and Goldstone [11], we applied the rational model with the noisy-OR likelihood to the stimuli used in their experiment. Although this model does not incorporate
category information directly, we included it indirectly by postpending c bits per category to the
end of each image. Figure 3 (a) and (b) show the features learned by the model when trained on
distorted objects A-D using both categorization schemes. The categorization information is used
appropriately by the model and mirrors the different feature representations inferred by the two
pariticipant groups. Figure 3 (c) and (d) show the features learned by the model when given ten
distorted copies of all eight objects. Like the human perceptual system, the model infers different,
otherwise undistinguishable, feature sets using categorization information appropriately. Although
the neural network model of feature learning presented in [2] also inferred correct representations
with the four object set, this model did not produce correct results for the eight object set. Inference
is susceptible to local minima given poor initializations of the hyperparameters. The features shown
in Figure 3 used the following initialization: p = 0.125, α = 1.5, λ = 0.99, and ε = 0.01.¹
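The category encoding described above might be implemented along the following lines (a sketch under our assumptions; the exact bit layout, contiguous one-hot blocks of width c, is not specified in the text):

import numpy as np

def append_category_bits(images, labels, n_categories, c):
    # Append c copies of a one-hot category code to each flattened binary image,
    # mirroring the trick of postpending c bits per category; c acts as the
    # weight given to category information relative to the pixels.
    images = np.asarray(images)                    # (N, D) binary
    codes = np.zeros((len(labels), n_categories * c), dtype=images.dtype)
    for i, lab in enumerate(labels):
        codes[i, lab * c:(lab + 1) * c] = 1        # c bits for the object's category
    return np.hstack([images, codes])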
4.3 Unitization and Differentiation
The results presented in this section show that our rational model reproduces human inferences for
particular datasets, suggesting that the model might be useful more generally in identifying conditions under which the human perceptual system should unitize or differentiate sensory primitives.
The Shiffrin and Lightfoot results demonstrated one case where whole objects should be learned as
features even though each object was created from features that did not perfectly co-occur. The IBP
confirms the intuitive explanation that there is not enough statistical evidence to break (differentiate)
the objects into individual features and thus the unitization behavior of the participants is justified.
However, there is no comparison, using the same underlying feature set, to a case where statistical evidence warrants differentiation, so that the individual features should be learned as features.
To illustrate the importance of distributional information on the inferred featural representation, we
designed a simulation to show cases where the objects and the actual features used to generate the
objects should be learned as the features. Figure 4 (a) shows the bias (on left) and the set of six
features used in the simulations. Figure 4 (b) is an artificially generated set of observed objects for
¹The features inferred by the model in each figure have the highest probability given the images it observed.
Figure 4: Inferring different feature representations depending on the distributional information.
(a) The bias (on left) and the six features used to generate both object sets. (b) - (c) The feature
membership matrices for (b) unitization and (c) differentiation sets respectively. (d) - (e) The feature
representations inferred by the model for (d) unitization and (e) differentiation sets respectively.
which there is not enough statistical evidence to warrant differentiation. This is the same underlying
feature membership matrix as the Shiffrin and Lightfoot result (unitization set). Figure 4 (c) is an
artificially generated object set in which the observed objects should be differentiated. Here, the features used to generate the objects occur independently of each other, and thus the underlying feature membership matrix used to generate the observed objects contains all possible 6-choose-3 = 20 objects (differentiation set).
Figure 4 (d) and (e) show the results of applying the rational model with a noisy-OR likelihood to
these two object sets. When the underlying features occur independently of each other, the model
represents the objects in terms of these features. When the features often co-occur, the model forms
a representation which consists simply of the objects themselves. For each simulation, 40 objects
from the appropriate set (repeating as necessary) were presented to the model. Each object was
perturbed by added noise that flipped a pixel's value with probability 1/75. The hyperparameters were inferred with Metropolis-Hastings steps during Gibbs sampling and were initialized to: α = 1, σ_X² = 2.25, and σ_A² = 0.5. These simulations demonstrate that even when the same underlying
features create two object sets, different representations should be inferred depending on the
distributional information, suggesting that this kind of information can be a powerful driving force
behind unitization and differentiation.
5 Discussion and Future Directions
The flexibility of human featural representations and the power of representation in machine learning make a formal account of how people derive representations from raw sensory information
tremendously important. We have outlined one approach to this problem, drawing on ideas from
nonparametric Bayesian statistics to provide a rational account of how the human perceptual system
uses distributional and category information to infer representations. First, we showed that in one
circumstance where it is ambiguous whether or not parts or objects should form the featural representation of the objects, that this model performs similarly to the human perceptual system (they
both learn the objects themselves as the basic features). Second, we demonstrated that the IBP and
the human perceptual systems both use categorization information to make the same inductions as
appropriate for the given categorization scheme. Third, we further investigated how distributional
information of the features that create the object set affects the inferred representation. These results
begin to sketch a picture of human feature learning as a rational combination of different sources of
information about the structure of a set of objects.
There are two main future directions for our work. First, we intend to perform further analysis of
how the human perceptual system uses statistical cues. Specifically, we plan to investigate whether
the feature sets identified by the perceptual system are affected by the distributional information it
is given (as our simulations would suggest). Second, we hope to use hierarchical nonparametric
Bayesian models to investigate the interplay between knowledge effects and perceptual input. Recent work has identified a connection between the IBP and the Beta process [15], making it possible
to define hierarchical Bayesian models in which the IBP appears as a component. Such models
would provide a more natural way to capture the influence of category information on feature learning, extending the analyses that we have performed here.
Acknowledgements We thank Rob Goldstone, Karen Schloss, Stephen Palmer, and the Computational Cognitive Science Lab at Berkeley for discussions and the Air Force Office of Scientific Research for support.
References
[1] P. G. Schyns, R. L. Goldstone, and J. Thibaut. Development of features in object concepts. Behavioral and Brain Sciences, 21:1-54, 1998.
[2] R. L. Goldstone. Learning to perceive while perceiving to learn. In Perceptual organization in vision: Behavioral and neural perspectives, pages 233-278. 2003.
[3] I. Biederman and M. M. Schiffrar. Sexing day-old chicks: A case study and expert systems analysis of a difficult perceptual-learning task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13:640-645, 1987.
[4] M. L. Minsky and S. A. Papert. Perceptrons. MIT Press, Cambridge, MA, 1969.
[5] B. Scholkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2001.
[6] J. R. Anderson. Is human cognition adaptive? Behavioral and Brain Sciences, 14:471-517, 1991.
[7] A. N. Sanborn, T. L. Griffiths, and D. J. Navarro. A more rational model of categorization. In Proceedings of the 28th Annual Conference of the Cognitive Science Society, 2006.
[8] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems 18, 2006.
[9] R. M. Shiffrin and N. Lightfoot. Perceptual learning of alphanumeric-like characters. In The psychology of learning and motivation, volume 36, pages 45-82. Academic Press, San Diego, 1997.
[10] R. L. Goldstone. Influences of categorization on perceptual discrimination. Journal of Experimental Psychology: General, 123:178-200, 1994.
[11] R. Pevtzow and R. L. Goldstone. Categorization and the parsing of objects. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, pages 712-722, Hillsdale, NJ, 1994. Lawrence Erlbaum Associates.
[12] G. Orban, J. Fiser, R. N. Aslin, and M. Lengyel. Bayesian model learning in human visual perception. In Advances in Neural Information Processing Systems 18, 2006.
[13] F. Wood, T. L. Griffiths, and Z. Ghahramani. A non-parametric Bayesian method for inferring hidden causes. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, 2006.
[14] F. Wood and T. L. Griffiths. Particle filtering for nonparametric Bayesian matrix factorization. In Advances in Neural Information Processing Systems 19, 2007.
[15] R. Thibaux and M. I. Jordan. Hierarchical Beta processes and the Indian buffet process. Technical Report 719, University of California, Berkeley, Department of Statistics, 2006.
The Mondrian Process
Daniel M. Roy
Massachusetts Institute of Technology
Yee Whye Teh
Gatsby Unit, University College London
[email protected]
[email protected]
Abstract
We describe a novel class of distributions, called Mondrian processes, which
can be interpreted as probability distributions over kd-tree data structures. Mondrian processes are multidimensional generalizations of Poisson processes and this
connection allows us to construct multidimensional generalizations of the stickbreaking process described by Sethuraman (1994), recovering the Dirichlet process in one dimension. After introducing the Aldous-Hoover representation for
jointly and separately exchangeable arrays, we show how the process can be used
as a nonparametric prior distribution in Bayesian models of relational data.
1 Introduction
Relational data are observations of relationships between sets of objects and it is therefore natural
to consider representing relations¹ as arrays of random variables, e.g., (R_{i,j}), where i and j index objects x_i ∈ X and y_j ∈ Y. Nonrelational data sets (e.g., observations about individual objects in X) are simply one-dimensional arrays (R_i) from this viewpoint.
A common Bayesian approach in the one-dimensional setting is to assume there is cluster structure
and use a mixture model with a prior distribution over partitions of the objects in X. A similar
approach for relational data would naïvely require a prior distribution on partitions of the product space X × Y = {(x, y) | x ∈ X, y ∈ Y}. One choice is to treat each pair (x, y) atomically, clustering the product space directly, e.g., by placing a Chinese restaurant process (CRP) prior on partitions of X × Y. An unsatisfactory implication of this choice is that the distribution on partitions of (R_{i,j}) is exchangeable, i.e., invariant to swapping any two entries; this implies that the identity
of objects is ignored when forming the partition, violating common sense.
Stochastic block models² place prior distributions on partitions of X and Y separately, which can be
interpreted as inducing a distribution on partitions of the product space by considering the product of
the partitions. By arranging the rows and columns of (R_{i,j}) so that clustered objects have adjacent
indices, such partitions look like regular grids (Figure 1.1). An unfortunate side effect of this form
of prior is that the "resolution" needed to model fine detail in one area of the array necessarily
causes other parts of the array to be dissected, even if the data suggest there is no such structure.
The annotated hierarchies described by Roy et al. (2007) generate random partitions which are not
constrained to be regular grids (Figure 1.2), but the prior is inconsistent in light of missing data.
Motivated by the need for a consistent distribution on partitions of product spaces with more structure than classic block models, we define a class of nonparametric distributions we have named
Mondrian processes after Piet Mondrian and his abstract grid-based paintings. Mondrian processes
are random partitions on product spaces not constrained to be regular grids. Much like kd-trees,
Mondrian processes partition a space with nested, axis-aligned cuts; see Figure 1.3 for examples.
We begin by introducing the notion of partially exchangeable arrays by Aldous (1981) and Hoover
(1979), a generalization of exchangeability on sequences appropriate for modeling relational data.
¹We consider binary relations here but the ideas generalize easily to multidimensional relations.
²Holland et al. (1983) introduced stochastic block models. Recent variations (Kemp et al., 2006; Xu et al., 2006; Roy et al., 2007) descend from Wasserman and Anderson (1987) and Nowicki and Snijders (2001).
We then define the Mondrian process, highlight a few of its elegant properties, and describe two
nonparametric models for relational data that use the Mondrian process as a prior on partitions.
2 Exchangeable Relational Data
The notion of exchangeability³, that the probability of a sequence of data items does not depend on
the ordering of the items, has played a central role in hierarchical Bayesian modeling (Bernardo and
Smith, 1994). A classic result by de Finetti (1931), later extended by Ryll-Nardzewski (1957), states
that if x_1, x_2, ... is an exchangeable sequence, then there exists a random parameter θ such that the sequence is conditionally iid given θ:

p(x_1, ..., x_n) = ∫ p_θ(θ) ∏_{i=1}^{n} p_x(x_i | θ) dθ    (1)
That is, exchangeable sequences arise as a mixture of iid sequences, where the mixing distribution
is p(θ). The notion of exchangeability has been generalized to a wide variety of settings. In this
section we describe notions of exchangeability for relational data originally proposed by Aldous
(1981) and Hoover (1979) in the context of exchangeable arrays. Kallenberg (2005) significantly
expanded on the concept, and Diaconis and Janson (2007) showed a strong correspondence between
such exchangeable relations and a notion of limits on graph structures (Lovász and Szegedy, 2006).
Here we shall only consider binary relations, i.e., those involving pairs of objects. Generalizations to relations with arbitrary arity can be gleaned from Kallenberg (2005). For i, j = 1, 2, ... let R_{i,j} denote a relation between two objects x_i ∈ X and y_j ∈ Y from possibly distinct sets X and Y. We say that R is separately exchangeable if its distribution is invariant to separate permutations on its rows and columns. That is, for each n, m ≥ 1 and each pair of permutations π ∈ S_n and σ ∈ S_m,

p(R_{1:n,1:m}) = p(R_{π(1:n),σ(1:m)})    (2)
in MATLAB notation. Aldous (1981) and Hoover (1979) showed that separately exchangeable
relations can always be represented in the following way: each object i (and j) has a latent representation ξ_i (η_j) drawn iid from some distribution p_ξ (p_η); independently let θ be an additional random parameter. Then,

p(R_{1:n,1:m}) = ∫ p_θ(θ) ∏_i p_ξ(ξ_i) ∏_j p_η(η_j) ∏_{i,j} p_R(R_{i,j} | θ, ξ_i, η_j) dθ dξ_{1:n} dη_{1:m}    (3)
As opposed to (1), the variables ξ_i and η_j capture additional dependencies specific to each row and column. If the two sets of objects are in fact the same, i.e. X = Y, then the relation R is a square array. We say R is jointly exchangeable if it is invariant to jointly permuting rows and columns; that is, for each n ≥ 1 and each permutation π ∈ S_n we have

p(R_{1:n,1:n}) = p(R_{π(1:n),π(1:n)})    (4)
Such jointly exchangeable relations also have a form similar to (3). The differences are that we have one latent variable ξ_i for each object x_i, and that R_{i,j}, R_{j,i} need not be independent anymore:

p(R_{1:n,1:n}) = ∫ p_θ(θ) ∏_i p_ξ(ξ_i) ∏_{i≤j} p_R(R_{i,j}, R_{j,i} | θ, ξ_i, ξ_j) dθ dξ_{1:n}    (5)

In (5) it is important that p_R(s, t | θ, ξ_i, ξ_j) = p_R(t, s | θ, ξ_j, ξ_i) to ensure joint exchangeability. The
first impression from (5) is that joint exchangeability implies a more restricted functional form than
separately exchangeable (3). In fact, the reverse holds: (5) means that the latent representations
of row i and column i need not be independent, and that Ri,j and Rj,i need not be conditionally
independent given the row and column representations, while (3) assumes independence of both.
For example, a symmetric relation, i.e. R_{i,j} = R_{j,i}, can only be represented using (5).
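As a toy illustration of the form (5), the following sketch (entirely ours; the particular choice of p_R is arbitrary) generates a symmetric, jointly exchangeable binary array from a single latent ξ_i per object:

import random

def sample_symmetric_array(n, rng=random):
    # Toy instance of Eq. (5): theta is a global scale, xi_i ~ Uniform(0,1),
    # and p_R makes R_{ij} = R_{ji} by construction (one draw per pair i <= j).
    theta = rng.uniform(0.5, 2.0)
    xi = [rng.random() for _ in range(n)]
    R = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            p = min(1.0, theta * xi[i] * xi[j])   # symmetric in (i, j)
            R[i][j] = R[j][i] = int(rng.random() < p)
    return R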
The above Aldous-Hoover representation serves as the theoretical foundation for hierarchical
Bayesian modeling of exchangeable relational data, just as de Finetti's representation serves as a
foundation for the modeling of exchangeable sequences. In Section 5, we cast the Infinite Relational
Model (Kemp et al., 2006) and a model based on the Mondrian process into this representation.
[Figure 1 panels: "Anowadya" and "Anowadya (IRM)", with a 0.0-1.0 color scale; the images themselves are omitted in this text extraction.]
Figure 1: (1) Stochastic block models like the Infinite Relational Model (Kemp et al., 2006) induce regular partitions on the product space, introducing structure where the data do not support it. (2) Axis-aligned partitions, like those produced by annotated hierarchies and the Mondrian process, provide (a posteriori) resolution only where it is needed. (3) Mondrian process on the unit square [0, 1]². (4) We can visualize the sequential hierarchical process by spreading the cuts out over time; the third dimension is λ. (5) Mondrian process with beta Lévy measure, μ(dx) = x⁻¹ dx on [0, 1]². (6) 10x zoom of (5) at the origin. (7) Mondrian on [ε, 1]³ with the beta measure.
3 The Mondrian Process
The Mondrian process can be expressed as a recursive generative process that randomly makes axis-aligned cuts, partitioning the underlying product space in a hierarchical fashion akin to decision trees or kd-trees. The distinguishing feature of this recursive stochastic process is that it assigns
probabilities to the various events in such a way that it is consistent (in a sense we make precise
later). The implication of consistency is that we can extend the Mondrian process to infinite spaces
and use it as a nonparametric prior for modeling exchangeable relational data.
3.1 The one-dimensional case
The simplest space to introduce the Mondrian process is the unit interval [0, 1]. Starting with an
initial "budget" λ, we make a sequence of cuts, splitting the interval into subintervals. Each cut costs a random amount, eventually exhausting the budget and resulting in a finite partition m of the unit interval. The cost, E_I, to cut an interval I is exponentially distributed with inverse mean given by the length of the interval. Therefore, the first cut costs E_{[0,1]} ∼ Exp(1). Let λ′ = λ − E_{[0,1]}. If λ′ < 0, we make no cuts and the process returns the trivial partition m = {[0, 1]}. Otherwise, we make a cut uniformly at random, splitting the unit interval into two subintervals A and B. The process recurses independently on A and B, with independent budgets λ′, producing partitions m_A and m_B, which are then combined into a partition m = m_A ∪ m_B of [0, 1].
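A direct transcription of this break-and-branch process might read as follows (our sketch, not the authors' implementation):

import random

def mondrian_1d(lam, a, b, rng=random):
    # Break-and-branch process on the interval (a, b): returns the list of
    # subintervals in the induced partition. The cut locations form a Poisson
    # point process with rate lam on (a, b).
    cost = rng.expovariate(b - a)          # E_I ~ Exp(length of the interval)
    lam2 = lam - cost
    if lam2 < 0:
        return [(a, b)]                    # budget exhausted: no cut
    x = rng.uniform(a, b)                  # cut position, uniform on the interval
    return mondrian_1d(lam2, a, x, rng) + mondrian_1d(lam2, x, b, rng)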
The resulting cuts can be shown to be a Poisson (point) process. Unlike the standard description of
the Poisson process, the cuts in this "break and branch" process are organized in a hierarchy. As the
Poisson process is a fundamental building block for random measures such as the Dirichlet process
(DP), we will later exploit this relationship to build various multidimensional generalizations.
3.2 Generalizations to higher dimensions and trees
We begin in two dimensions by describing the generative process for a Mondrian process m ∼ MP(λ, (a, A), (b, B)) on the rectangle (a, A) × (b, B). Again, let λ′ = λ − E, where E ∼ Exp((A − a) + (B − b)) is drawn from an exponential distribution with rate the sum of the interval lengths. If λ′ < 0, the process halts, and returns the trivial partition {(a, A) × (b, B)}. Otherwise, an axis-aligned cut is made uniformly at random along the combined lengths of (a, A) and (b, B); that is, the cut lies along a particular dimension with probability proportional to its length, and is drawn uniformly within that interval. W.l.o.g., a cut x ∈ (a, A) splits the interval into (a, x) and (x, A). The process then recurses, generating independent Mondrian processes with diminished rate parameter λ′ on both sides of the cut: m_< ∼ MP(λ′, (a, x), (b, B)) and m_> ∼ MP(λ′, (x, A), (b, B)). The partition on (a, A) × (b, B) is then m_< ∪ m_>. Like the one-dimensional special case, the λ parameter controls the number of cuts, with the process more likely to cut rectangles with large perimeters.
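The two-dimensional generative process just described admits an equally direct sketch (again our illustration, with uniform cut locations, i.e. Lebesgue rate measure):

import random

def mondrian_2d(lam, a, A, b, B, rng=random):
    # Returns the partition of the rectangle (a, A) x (b, B) as a list of
    # sub-rectangles ((lo1, hi1), (lo2, hi2)).
    half_perimeter = (A - a) + (B - b)
    lam2 = lam - rng.expovariate(half_perimeter)   # E ~ Exp((A-a) + (B-b))
    if lam2 < 0:
        return [((a, A), (b, B))]                  # trivial partition
    # choose the dimension to cut with probability proportional to its length
    if rng.random() < (A - a) / half_perimeter:
        x = rng.uniform(a, A)                      # cut along the first dimension
        return (mondrian_2d(lam2, a, x, b, B, rng) +
                mondrian_2d(lam2, x, A, b, B, rng))
    y = rng.uniform(b, B)                          # cut along the second dimension
    return (mondrian_2d(lam2, a, A, b, y, rng) +
            mondrian_2d(lam2, a, A, y, B, rng))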
The process can be generalized in several ways. In higher dimensions, the cost E to make an
additional cut is exponentially distributed with rate given by the sum over all dimensions of the
interval lengths. Similarly, the cut point is chosen uniformly at random from all intervals, splitting
only that interval in the recursion. Like non-homogeneous Poisson processes, the cut point need not
³In this paper we shall always mean infinite exchangeability when we state exchangeability.
be chosen uniformly at random, but can instead be chosen according to a non-atomic rate measure
μ_d associated with each dimension. In this case, lengths (A − a) become measures μ_1(a, A).
The process can also be generalized beyond products of intervals. The key property of intervals that the Mondrian process relies upon is that any point cuts the space into one-dimensional, simply-connected pieces. Trees also have this property: a cut along an edge splits a tree into two trees. We denote a Mondrian process m with rate λ on a product of one-dimensional, simply-connected domains Θ_1 × ⋯ × Θ_D by m ∼ MP(λ, Θ_1, ..., Θ_D), with the dependence on μ_1, ..., μ_D left implicit.
A description of the recursive generative model for the conditional Mondrian (see Section 4) is given
in Algorithm 1.
4 Properties of the Mondrian Process
This section describes a number of interesting properties of the Mondrian process. The most important property of the Mondrian is its self-consistency. Instead of representing a draw from a Mondrian as an unstructured partition of Θ_1 × ⋯ × Θ_D, we will represent the whole history of the generative process. Thus a draw from the Mondrian process is either a trivial partition or a tuple m = ⟨d, x, λ′, m_<, m_>⟩, representing a cut at x along the d'th dimension Θ_d, with nested Mondrians m_< and m_> on either side of the cut. Therefore, m is itself a tree of axis-aligned cuts (a kd-tree data structure), with the leaves of the tree forming the partition of the original product space.
Conditional Independencies: The generative process for the Mondrian produces a tree of cuts, where each subtree is itself a draw from a Mondrian. The tree structure precisely reflects the conditional independencies of the Mondrian; e.g., the two subtrees m_< and m_> are conditionally independent given λ′, d and x at the first cut.
Consistency: The Mondrian process satisfies an important self-consistency property: given a draw
from a Mondrian on some domain, the partition on any subdomain has the same distribution as if
we sampled a Mondrian process directly on that subdomain.
More precisely, let m ∼ MP(λ, Θ_1, ..., Θ_D) and, for each dimension d, let Φ_d be a connected subdomain of Θ_d. The restriction ρ(m, Φ_1, ..., Φ_D) of m to Φ_1 × ⋯ × Φ_D is the subtree of cuts within Φ_1 × ⋯ × Φ_D. We define restrictions inductively: if there are no cuts in m, i.e. m = Θ_1 × ⋯ × Θ_D, then ρ(m, Φ_1, ..., Φ_D) is simply Φ_1 × ⋯ × Φ_D. Otherwise m = ⟨d, x, λ′, m_<, m_>⟩ for some d, x, and λ′, where m_< and m_> are the two subtrees. Let Θ_d^{<x} and Θ_d^{>x} be the d'th domains of m_< and m_> respectively. If x ∉ Φ_d this implies that Φ_d must be on exactly one side of x (because Φ_d and Θ_d are connected). W.l.o.g., assume Φ_d ⊆ Θ_d^{<x}. In this case, ρ(m, Φ_1, ..., Φ_D) = ρ(m_<, Φ_1, ..., Φ_D). If x ∈ Φ_d then both Θ_d^{<x} and Θ_d^{>x} overlap Φ_d and

ρ(m, Φ_1, ..., Φ_D) = ⟨d, x, λ′, ρ(m_<, Φ_1, ..., Φ_d ∩ Θ_d^{<x}, ..., Φ_D), ρ(m_>, Φ_1, ..., Φ_d ∩ Θ_d^{>x}, ..., Φ_D)⟩.

By integrating out the variables on nodes not contained in the restriction, it can be shown that the restriction ρ(m, Φ_1, ..., Φ_D) is itself distributed according to a Mondrian MP(λ, Φ_1, ..., Φ_D).
So far the construction of the Mondrian process assumes that each domain Θ_d has finite measure. A consequence of this consistency property is that we can now use the Daniell-Kolmogorov extension theorem to extend the Mondrian process to σ-finite domains (those that can be written as a countable union of finite domains). For example, from a Mondrian process on products of intervals, we can construct a Mondrian process on all of R^D. Note that if the domains have infinite measure, the tree
of cuts will be infinitely deep with no root and infinitely many leaves (being the infinite partition of
the product space). However the restriction of the tree to any given finite subdomains will be finite
with a root (with probability one).
Mondrian Slices: One interesting specific case of consistency under restriction is worth mentioning.
Suppose that our subdomains are Φ_1 = {y} and Φ_d = Θ_d for d ≥ 2. That is, we consider the restriction of the Mondrian to a slice of the space where the first coordinate takes on value y. The consistency property shows that the restriction ρ = ρ(m, Φ_1, ..., Φ_D) onto these subdomains is distributed according to a Mondrian as well. But since μ_1 is non-atomic, μ_1({y}) = 0, thus ρ will not have any cuts in the first domain (with probability 1). That is, we can interpret ρ as a draw from a (D − 1)-dimensional Mondrian with domains Θ_2, ..., Θ_D. This is true of any lower dimensional
slice of the Mondrian. One particular extreme is that since a one dimensional Mondrian is simply the
Figure 2: Modeling a Mondrian with a Mondrian: A posterior sample given relational data created from an
actual Mondrian painting. (from left) (1) Composition with Large Blue Plane, Red, Black, Yellow, and Gray
(1921). (2) Raw relational data, randomly shuffled. These synthetic data were generated by fitting a regular
6 × 7 point array over the painting (6 row objects, 7 column objects), and using the blocks in the painting to determine the block structure of these 42 relations. We then sampled 18 relational arrays with this block structure. (3) Posterior sample of Mondrian process on unit square. The colors are for visual effect only as the partitions are contiguous rectangles. The small black dots are the embedding of the pairs (ξ_i, η_j) into the unit square. Each point represents a relation R_{i,j}; each row of points are the relations (R_{i,·}) for an object ξ_i, and
similarly for columns. Relations in the same block are clustered together. (4) Induced partition on the (discrete)
relational array, matching the painting. (5) Partitioned and permuted relational data showing block structure.
break-and-branch generative process for a Poisson process, any one dimensional slice of a Mondrian
gives a Poisson point process.
Conditional Mondrians: Using the consistency property, we can derive the conditional distribution of a Mondrian m with rate λ on Θ_1 × ⋯ × Θ_D given its restriction ρ = ρ(m, Φ_1, ..., Φ_D). To do so, we have to consider three possibilities: when m contains no cuts, when the first cut of m is in ρ, and when the first cut of m is above ρ. Fortunately the probabilities of each of these events can be computed easily, and amount to drawing an exponential sample E ∼ Exp(∑_d μ_d(Θ_d \ Φ_d)) and comparing it against the diminished rate after the first cut in ρ. Pseudocode for generating from a conditional Mondrian is given in Algorithm 1. When every domain of ρ has zero measure, i.e., μ_d(Φ_d) = 0 for all d, the conditional Mondrian reduces to an unconditional Mondrian.
Algorithm 1 Conditional Mondrian m ∼ MP(λ, Θ_1, ..., Θ_D | ρ)    (ρ = ∅, i.e. Φ_d = ∅ for all d, gives the unconditioned Mondrian)
1. let λ′ ← λ − E where E ∼ Exp(∑_{d=1}^{D} μ_d(Θ_d \ Φ_d)).
2. if ρ has no cuts then λ″ ← 0 else ⟨d′, x′, λ″, ρ_<, ρ_>⟩ ← ρ.
3. if λ′ < λ″ then take the root form of ρ:
4.   if ρ has no cut then
5.     return m ← Θ_1 × ⋯ × Θ_D.
6.   else (d′, x′) is the first cut in m:
7.     return m ← ⟨d′, x′, λ″, MP(λ″, Θ_1, ..., Θ_{d′}^{<x′}, ..., Θ_D | ρ_<), MP(λ″, Θ_1, ..., Θ_{d′}^{>x′}, ..., Θ_D | ρ_>)⟩.
8. else λ″ < λ′ and there is a cut in m above ρ:
9.   draw a cut (d, x) outside ρ, i.e., p(d) ∝ μ_d(Θ_d \ Φ_d) and x | d ∼ μ_d restricted (and normalized) to Θ_d \ Φ_d; without loss of generality suppose Φ_d ⊆ Θ_d^{<x}.
10.  return m ← ⟨d, x, λ′, MP(λ′, Θ_1, ..., Θ_d^{<x}, ..., Θ_D | ρ), MP(λ′, Θ_1, ..., Θ_d^{>x}, ..., Θ_D)⟩.
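One possible translation of Algorithm 1 into code is sketched below, under strong simplifying assumptions of ours: the domains Θ_d and subdomains Φ_d are intervals with Lebesgue measure, and trees are tuples (d, x, λ′, left, right) with None denoting a trivial partition. This is a sketch of the control flow, not the authors' implementation.

import random

def exp_or_inf(rate, rng):
    # Exp(rate) sample; rate 0 means the event never happens
    return float('inf') if rate <= 0 else rng.expovariate(rate)

def split_box(box, d, x):
    lo, hi = box[d]
    return (box[:d] + [(lo, x)] + box[d+1:], box[:d] + [(x, hi)] + box[d+1:])

def cond_mondrian(lam, theta, phi, rho, rng=random):
    # theta: list of (lo, hi) per dimension; phi: sub-box of theta, or None
    # for the unconditioned case (then rho is None too); rho: tree of cuts
    # already fixed on phi, None when it has no cuts.
    if phi is None:
        mass = [hi - lo for lo, hi in theta]
    else:
        mass = [(T[1]-T[0]) - (P[1]-P[0]) for T, P in zip(theta, phi)]
    lam1 = lam - exp_or_inf(sum(mass), rng)                 # step 1
    lam2 = rho[2] if rho is not None else 0.0               # step 2
    if lam1 < lam2:                                         # steps 3-7
        if rho is None:
            return None                                     # trivial partition
        d, x, _, rho_lt, rho_gt = rho
        th_lt, th_gt = split_box(theta, d, x)
        ph_lt, ph_gt = split_box(phi, d, x)
        return (d, x, lam2,
                cond_mondrian(lam2, th_lt, ph_lt, rho_lt, rng),
                cond_mondrian(lam2, th_gt, ph_gt, rho_gt, rng))
    # steps 8-10: a fresh cut above rho, drawn outside phi
    d = rng.choices(range(len(theta)), weights=mass)[0]     # p(d) prop. to mass
    if phi is None:
        x = rng.uniform(*theta[d])
    elif rng.random() < (phi[d][0] - theta[d][0]) / mass[d]:
        x = rng.uniform(theta[d][0], phi[d][0])             # phi lies above the cut
    else:
        x = rng.uniform(phi[d][1], theta[d][1])             # phi lies below the cut
    th_lt, th_gt = split_box(theta, d, x)
    if phi is not None and x <= phi[d][0]:                  # phi on the right side
        return (d, x, lam1, cond_mondrian(lam1, th_lt, None, None, rng),
                            cond_mondrian(lam1, th_gt, phi, rho, rng))
    # phi on the left side (or phi is None, in which case rho is None too)
    return (d, x, lam1, cond_mondrian(lam1, th_lt, phi, rho, rng),
                        cond_mondrian(lam1, th_gt, None, None, rng))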
Partition Structure: The Mondrian is simple enough that we can characterize a number of its other
properties. As an example, the expected number of slices along each dimension of (0, A) × (0, B) is λA and λB, while the expected total number of partitions is (1 + λA)(1 + λB); for instance, λ = 1 on a 2 × 3 rectangle gives 3 · 4 = 12 expected blocks. Interestingly, this is also the expected number of partitions in a biclustering model where we first have two independent Poisson processes with rate λ partition (0, A) and (0, B), and then form the product partition of (0, A) × (0, B).
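These expectations are easy to check empirically with the mondrian_2d sketch given earlier (our code, not the authors'); for λ = 1, A = 2, B = 3 the average number of leaves should approach (1 + 2)(1 + 3) = 12:

import random
rng = random.Random(0)
lam, A, B = 1.0, 2.0, 3.0
runs = 2000
avg = sum(len(mondrian_2d(lam, 0.0, A, 0.0, B, rng)) for _ in range(runs)) / runs
print(avg)   # should be close to (1 + lam*A) * (1 + lam*B) = 12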
[Figure 3 panels: a posterior sample of the Mondrian on [0,1]², with rows and columns labeled by 24 countries and the relations food/animals, crude materials, minerals/fuels, basic goods, and diplomats; the images themselves are omitted in this text extraction.]
Figure 3: Trade and Diplomacy relations between 24 countries in 1984. R_{ij} = 1 (black squares) implies that country i imports R from country j. The colors are for visual effect only as the partitions are contiguous
rectangles.
[Figure 4 panels: samples of Mondrian partitions over Kingman coalescent trees; a schematic after Kingman (1982) relating random trees cut by a Poisson process to the DP; and posterior trees and partitions for the synthetic "Friends?", "Works With?", and "Gives orders to?" relations among professors, students, and janitors ("Learning the latent tree"); the images themselves are omitted in this text extraction.]
Figure 4: (clockwise from bottom left) (1) Nine samples from the Mondrian process on Kingman coalescents with rate λ = 0.25, 0.5, and 1, respectively. As the rate increases, partitions become finer. Note that partitions are not necessarily contiguous; we use color to identify partitions. The partition structure is related to the annotated hierarchies model (Roy et al., 2007). (2) Kingman (1982a,b) describes the relationship between random trees and the DP, which we exploit to define a nonparametric, hierarchical block model. (3) A sequence of cuts; each cut separates a subtree. (4) Posterior trees and Mondrian processes on a synthetic social network.
5 Relational Modeling
To illustrate how the Mondrian process can be used to model relational data, we describe two nonparametric block models for exchangeable relations. While we will only consider binary data and
assume that each block is conditionally iid, the ideas can be extended to many likelihood models.
Recall the Aldous-Hoover representation (θ, ξ_i, η_j, p_R) for exchangeable arrays. Using a Mondrian
process with beta Lévy measure μ(dx) = αx⁻¹ dx, we first sample a random partition of the unit square into blocks and assign each block a probability:

M ∼ MP(λ, [0, 1], [0, 1])    (slices up unit square into blocks)    (6)
φ_S | M ∼ Beta(a_0, a_1), ∀S ∈ M.    (each block S gets a probability φ_S)    (7)

The pair (M, φ) plays the role of θ in the Aldous-Hoover representation. We next sample row and column representations (ξ_i and η_j, respectively), which have a geometrical interpretation as x,y-coordinates (ξ_i, η_j) in the unit square:

ξ_i ∼ U[0, 1], i ∈ {1, ..., n}    (shared x coordinate for each row)    (8)
η_j ∼ U[0, 1], j ∈ {1, ..., n}.    (shared y coordinate for each column)    (9)

Let S_{ij} be the block S ∈ M such that (ξ_i, η_j) ∈ S. We finally sample the array R of relations:

R_{ij} | ξ, η, φ, M ∼ Bernoulli(φ_{S_{ij}}), i, j ∈ {1, ..., n}.    (R_{ij} is true w.p. φ_{S_{ij}})    (10)
This model clusters relations together whose (ξ_i, η_j) pairs fall in the same blocks in the Mondrian partition and models each cluster with a beta-binomial likelihood model. By mirroring the Aldous-Hoover representation, we guarantee that R is exchangeable and that there is no order dependence.
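For concreteness, the sketch below (our own illustration, not the authors' code; all function and variable names are hypothetical) samples a relation array from the generative process (6)-(10), assuming NumPy. It uses the standard recursive description of the Mondrian process: the time to the next cut is exponential with rate equal to the sum of the rectangle's side lengths, the cut dimension is chosen with probability proportional to side length, and the cut position is uniform along that side.

```python
import numpy as np

def sample_mondrian(budget, x0, x1, y0, y1, rng):
    """Draw M ~ MP(budget, [x0,x1], [y0,y1]) as a list of rectangular blocks."""
    extent = (x1 - x0) + (y1 - y0)
    cost = rng.exponential(1.0 / extent)       # time to the next cut ~ Exp(rate=extent)
    if cost > budget:
        return [(x0, x1, y0, y1)]              # budget exhausted: this rectangle is a block
    if rng.random() < (x1 - x0) / extent:      # cut dimension chosen by side length
        cut = rng.uniform(x0, x1)
        return (sample_mondrian(budget - cost, x0, cut, y0, y1, rng)
                + sample_mondrian(budget - cost, cut, x1, y0, y1, rng))
    cut = rng.uniform(y0, y1)
    return (sample_mondrian(budget - cost, x0, x1, y0, cut, rng)
            + sample_mondrian(budget - cost, x0, x1, cut, y1, rng))

def sample_relations(n, lam, a0, a1, seed=0):
    """Sample an n-by-n binary relation R following Eqs. (6)-(10)."""
    rng = np.random.default_rng(seed)
    blocks = sample_mondrian(lam, 0.0, 1.0, 0.0, 1.0, rng)    # Eq. (6)
    phi = rng.beta(a0, a1, size=len(blocks))                  # Eq. (7)
    xi, eta = rng.uniform(size=n), rng.uniform(size=n)        # Eqs. (8)-(9)
    R = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            k = next(b for b, (u0, u1, v0, v1) in enumerate(blocks)
                     if u0 <= xi[i] < u1 and v0 <= eta[j] < v1)
            R[i, j] = int(rng.random() < phi[k])              # Eq. (10)
    return R
```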
This model is closely related to the IRM (Kemp et al., 2006) and IHRM (Xu et al., 2006), where
rows and columns are first clustered using a CRP prior, then each relation Rij is conditionally
independent from others given the clusters that row i and column j belong to. In particular, if we
replace Eq. (6) with
    M ∼ MP(λ, [0, 1]) × MP(λ, [0, 1]),        (product of partitions of unit intervals)    (11)
then we recover the same marginal distribution over relations as the IRM/IHRM. To see this, recall
that a Mondrian process in one-dimension produces a partition whose cut points follow a Poisson
point process. Teh et al. (2007) show that the stick lengths (i.e., partitions) induced by a Poisson
point process on [0, 1] with the beta Lévy measure have the same distribution as those in the stick-breaking construction of the DP. Therefore, (11) is the product of two stick-breaking priors. In comparison, any one-dimensional slice of (6), e.g., each column or row of the relation, is marginally
distributed as a DP, but is more flexible than the product of one-dimensional Mondrian processes.
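As a quick illustration of the connection just mentioned, the sketch below (our own, for intuition only) draws stick lengths via the standard DP stick-breaking construction of Sethuraman (1994); per Teh et al. (2007), these have the same distribution as the interval lengths cut from [0, 1] by a Poisson process with the beta Lévy measure.

```python
import numpy as np

def stick_breaking(alpha, num_sticks, seed=0):
    """First num_sticks DP stick lengths: pi_k = v_k * prod_{j<k} (1 - v_j)."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=num_sticks)
    pi = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return pi  # lengths of the first num_sticks sub-intervals of [0, 1]
```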
We can also construct an exchangeable variant of the Annotated Hierarchies model (a hierarchical block model) by moving from the unit square to a product of random trees drawn from Kingman's coalescent prior (Kingman, 1982a). Let λ_d be Lebesgue measure.

    T_d ∼ KC(θ),  ∀d ∈ {1, . . . , D}         (for each dimension, sample a tree)       (12)
    M | T ∼ MP(2λ, T₁, . . . , T_D)           (partition the cross product of trees)    (13)
    φ_S | M ∼ Beta(a₀, a₁),  ∀S ∈ M           (each block S gets a probability φ_S)     (14)

Let S_ij be the subset S ∈ M where leaves (i, j) fall in S. Then

    R_ij | φ, M ∼ Bernoulli(φ_{S_ij}),  i, j ∈ {1, . . . , n}.    (R_ij is true w.p. φ_{S_ij})    (15)
Figure 4 shows some samples from this prior. Again, this model is related to the DP. Kingman shows
that the partition on the leaves of a coalescent tree when its edges are cut by a Poisson point process
is the same as that of a DP (Figure 4). Therefore, the partition structure along every row and column
is marginally the same as a DP. Both the unit square and product of random trees models give DP
distributed partitions on each row and column, but they have different inductive biases.
6 Experiments
The first data set was synthetically created using an actual painting by Piet Mondrian, whose grid-based paintings were the inspiration for the name of this process. Using the model defined by (10) and a uniform rate measure, we performed a Markov chain Monte Carlo (MCMC) simulation of the posterior distribution over the Mondrian, ξ's, η's, and hyperparameters. We employed a number of Metropolis-Hastings (MH) proposals that rotated, scaled, flipped, and resampled portions of the Mondrian. It can be shown that the conditional distribution of each ξ_i and η_j is piecewise constant; given the conjugacy of the beta-binomial, we can Gibbs sample the ξ's and η's. Figure 2 shows a sample after 1500 iterations (starting from a random initialization) where the partition on the array is exactly recovered. This was a typical attractor state for random initializations. While the data are sufficient to recover the partition on the array, they are not sufficient to recover the underlying Mondrian process. It is an open question as to its identifiability in the limit of infinite data.
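The Gibbs updates above exploit beta-binomial conjugacy: with φ_S integrated out, a block containing n₁ ones and n₀ zeros contributes a closed-form marginal likelihood. A minimal sketch of that computation (our own illustration; the names are hypothetical):

```python
from math import lgamma

def log_block_marginal(n1, n0, a0, a1):
    """Log marginal likelihood of a block with n1 ones and n0 zeros,
    with phi_S ~ Beta(a0, a1) integrated out (beta-binomial)."""
    return (lgamma(a0 + a1) - lgamma(a0) - lgamma(a1)
            + lgamma(a0 + n1) + lgamma(a1 + n0)
            - lgamma(a0 + a1 + n1 + n0))
```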
We next analyzed the classic Countries data set from the network analysis literature (Wasserman
and Faust, 1994), which reports trade in 1984 between 24 countries in food and live animals; crude
materials; minerals and fuels; basic manufactured goods; and exchange of diplomats. We applied
the model defined by (10). Figure 3 illustrates the type of structure the model uncovers during
MCMC simulation; it has recognized several salient groups of countries acting in blocs; e.g., Japan,
the UK, Switzerland, Spain and China export to nearly all countries, although China behaves more
like the other Pacific Rim countries as an importer. The diplomats relation is nearly symmetric, but
the model does not represent symmetry explicitly and must redundantly learn the entire relation.
Reflecting the Mondrian about the line y = x is one way to enforce symmetry in the partition.
In our final experiment, we analyzed a synthetic social network consisting of nine university employees: 3 janitors, 3 professors and 3 students. Given three relations (friends, works-with, and
gives-orders-to), the maximum a posteriori Mondrian process partitions the relations into homogeneous blocks. Tree structures around the MAP clustered the janitors, professors and students into
three close-knit groups, and preferred to put the janitors and students more closely together in the
tree. Inference in this model is particularly challenging given the large space of trees and partitions.
7 Discussion
While the Mondrian process has many elegant properties, much more work is required to determine
its usefulness for relational modeling. Just as effective inference procedures preceded the popularity
of the Dirichlet process, a similar leap in inference sophistication will be necessary to assess the
Mondrian process on large data sets. We are currently investigating improved MCMC sampling
schemes for the Mondrian process, as well as working to develop a combinatorial representation of
the distribution on partitions induced by the Mondrian process. Such a representation is of practical interest (possibly leading to improved inference schemes) and of theoretical interest, being a
multidimensional generalization of Chinese restaurant processes.
The axis-aligned partitions of [0, 1]^n produced by the Mondrian process have been studied extensively in combinatorics and computational geometry, where they are known as guillotine partitions.
Guillotine partitions have wide-ranging applications including circuit design, approximation algorithms, and computer graphics. However, the question of consistent stochastic processes over guillotine partitions, i.e., the question addressed here, has not, to our knowledge, been studied before.
At a high level, we believe that developing nonparametric priors on complex data structures from
computer science may successfully bridge the gap between old-fashioned Artificial Intelligence and
modern statistical approaches. Developing representations for these typically recursive structures
will require us to go beyond graphical models; stochastic lambda calculus is an appealing option.
References
D. J. Aldous. Representations for partially exchangeable arrays of random variables. Journal of Multivariate Analysis, 11:581–598, 1981.
J. M. Bernardo and A. F. M. Smith. Bayesian Theory. John Wiley & Sons, 1994.
B. de Finetti. Funzione caratteristica di un fenomeno aleatorio. Atti della R. Accademia Nazionale dei Lincei, Serie 6. Memorie, Classe di Scienze Fisiche, Matematiche e Naturali, 4:251–299, 1931.
P. Diaconis and S. Janson. Graph limits and exchangeable random graphs. arXiv:0712.2749v1, 2007.
P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
D. Hoover. Relations on probability spaces and arrays of random variables. Technical report, Preprint, Institute for Advanced Study, Princeton, NJ, 1979.
O. Kallenberg. Probabilistic Symmetries and Invariance Principles. Springer, 2005.
C. Kemp, J. Tenenbaum, T. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of the 21st National Conference on Artificial Intelligence, 2006.
J. F. C. Kingman. On the genealogy of large populations. Journal of Applied Probability, 19:27–43, 1982a.
J. F. C. Kingman. The coalescent. Stochastic Processes and their Applications, 13:235–248, 1982b.
L. Lovász and B. Szegedy. Limits of dense graph sequences. J. Comb. Theory B, 96:933–957, 2006.
K. Nowicki and T. A. B. Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96:1077–1087, 2001.
D. M. Roy, C. Kemp, V. Mansinghka, and J. B. Tenenbaum. Learning annotated hierarchies from relational data. In Advances in Neural Information Processing Systems 19, 2007.
C. Ryll-Nardzewski. On stationary sequences of random variables and the de Finetti's equivalence. Colloq. Math., 4:149–156, 1957.
J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
Y. W. Teh, D. Görür, and Z. Ghahramani. Stick-breaking construction for the Indian buffet process. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 11, 2007.
S. Wasserman and C. Anderson. Stochastic a posteriori blockmodels: Construction and assessment. Social Networks, 9(1):1–36, 1987.
S. Wasserman and K. Faust. Social Network Analysis: Methods and Applications, pages 64–65. Cambridge University Press, 1994.
Z. Xu, V. Tresp, K. Yu, and H.-P. Kriegel. Infinite hidden relational models. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, 2006.
2,894 | 3,623 | Adaptive Martingale Boosting
Philip M. Long
Google
[email protected]
Rocco A. Servedio
Columbia University
[email protected]
Abstract
In recent work Long and Servedio [LS05] presented a ?martingale boosting? algorithm that works by constructing a branching program over weak classifiers and
has a simple analysis based on elementary properties of random walks. [LS05]
showed that this martingale booster can tolerate random classification noise when
it is run with a noise-tolerant weak learner; however, a drawback of the algorithm
is that it is not adaptive, i.e. it cannot effectively take advantage of variation in the
quality of the weak classifiers it receives.
We present an adaptive variant of the martingale boosting algorithm. This adaptiveness is achieved by modifying the original algorithm so that the random walks
that arise in its analysis have different step size depending on the quality of the
weak learner at each stage. The new algorithm inherits the desirable properties of
the original [LS05] algorithm, such as random classification noise tolerance, and
has other advantages besides adaptiveness: it requires polynomially fewer calls to
the weak learner than the original algorithm, and it can be used with confidencerated weak hypotheses that output real values rather than Boolean predictions.
1 Introduction
Boosting algorithms are efficient procedures that can be used to convert a weak learning algorithm
(one which outputs a weak hypothesis that performs only slightly better than random guessing for
a binary classification task) into a strong learning algorithm (one which outputs a high-accuracy
classifier). A rich theory of boosting has been developed over the past two decades; see [Sch03,
MR03] for some overviews. Two important issues for boosting algorithms which are relevant to the
current work are adaptiveness and noise-tolerance; we briefly discuss each of these issues before
describing the contributions of this paper.
Adaptiveness. "Adaptiveness" refers to the ability of boosting algorithms to adjust to different accuracy levels in the sequence of weak hypotheses that they are given. The first generation of boosting algorithms [Sch90, Fre95] required the user to input an "advantage" parameter γ such that the weak learner was guaranteed to always output a weak hypothesis with accuracy at least 1/2 + γ. Given an initial setting of γ, even if the sequence of weak classifiers generated by the runs of the weak learner included some hypotheses with accuracy (perhaps significantly) better than 1/2 + γ, the early boosting algorithms were unable to capitalize on this extra accuracy; thus, these early boosters were not adaptive. Adaptiveness is an important property since it is often the case that the advantage of successive weak classifiers grows smaller and smaller as boosting proceeds.
A major step forward was the development of the AdaBoost algorithm [FS97]. AdaBoost does not require a lower bound γ on the minimum advantage, and the error rate of its final hypothesis depends favorably on the different advantages of the different weak classifiers in the sequence. More precisely, if the accuracy of the t-th weak classifier is 1/2 + γ_t, then the AdaBoost final hypothesis has error at most ∏_{t=0}^{T−1} √(1 − 4γ_t²). This error rate is usually upper bounded (see [FS97]) by

    exp( −2 ∑_{t=0}^{T−1} γ_t² )    (1)

and indeed (1) is a good approximation if no γ_t is too large.
Noise tolerance. One drawback of many standard boosting techniques, including AdaBoost, is that they can perform poorly when run on noisy data [FS96, MO97, Die00, LS08]. Motivated in part by this observation, in recent years boosting algorithms that work by constructing branching programs over the weak classifiers (note that this is in contrast with AdaBoost, which constructs a single weighted sum of weak classifiers) have been developed and shown to enjoy some provable noise tolerance. In particular, the algorithms of [KS05, LS05] have been shown to boost to optimally high accuracy in the presence of random classification noise when run with a random classification noise tolerant weak learner. (Recall that "random classification noise at rate η" means that the true binary label of each example is independently flipped with probability η. This is a very well studied noise model, see e.g. [AL88, Kea98, AD98, BKW03, KS05, RDM06] and many other references.)

While the noise tolerance of the boosters [KS05, LS05] is an attractive feature, a drawback of these algorithms is that they do not enjoy the adaptiveness of algorithms like AdaBoost. The MMM booster of [KS05] is not known to have any adaptiveness at all, and the "martingale boosting" algorithm of [LS05] only has the following limited type of adaptiveness. The algorithm works in stages t = 0, 1, . . . where in the t-th stage a collection of t + 1 weak hypotheses is obtained; let γ_t denote the minimum advantage of these t + 1 hypotheses obtained in stage t. [LS05] shows that the final hypothesis constructed by martingale boosting has error at most

    exp( −(∑_{t=0}^{T−1} γ_t)² / (2T) ).    (2)

(2) is easily seen to always be a worse bound than (1), and the difference can be substantial. Consider, for example, a sequence of weak classifiers in which the advantages decrease as γ_t = 1/√(t + 1) (this is in line with the oft-occurring situation, mentioned above, that advantages grow smaller and smaller as boosting progresses). For any ε > 0 we can bound (1) from above by ε by taking T ≥ 1/√ε, whereas for this sequence of advantages the error bound (2) is never less than 0.5 (which is trivial), and in fact (2) approaches 1 as t → ∞.
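For intuition, the two bounds can be compared numerically for any advantage sequence; the helpers below (our own illustration, not part of the paper) simply evaluate (1) and (2):

```python
import math

def adaboost_bound(gammas):
    """Bound (1): exp(-2 * sum_t gamma_t^2)."""
    return math.exp(-2.0 * sum(g * g for g in gammas))

def martingale_bound(gammas):
    """Bound (2): exp(-(sum_t gamma_t)^2 / (2T))."""
    T = len(gammas)
    return math.exp(-sum(gammas) ** 2 / (2.0 * T))

gammas = [1.0 / math.sqrt(t + 1) for t in range(100)]
print(adaboost_bound(gammas), martingale_bound(gammas))
```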
Our contributions: adaptive noise-tolerant boosting. We give the first boosting algorithm that is both adaptive enough to satisfy a bound of exp( −Ω(∑_{t=0}^{T−1} γ_t²) ) and is provably tolerant to
random classification noise. We do this by modifying the martingale boosting algorithm of [LS05]
to make it adaptive; the modification inherits the noise-tolerance of the original [LS05] algorithm. In
addition to its adaptiveness, the new algorithm also improves on [LS05] by constructing a branching
program with polynomially fewer nodes than the original martingale boosting algorithm (thus it
requires fewer calls to the weak learner), and it can be used directly with weak learners that generate
confidence-rated weak hypotheses (the original martingale boosting algorithm required the weak
hypotheses to be Boolean-valued).
Our approach. We briefly sketch the new idea that lets us achieve adaptiveness. Recall that the
original martingale booster of Long and Servedio formulates the boosting process as a random walk;
intuitively, as a random example progresses down through the levels of the branching program constructed by the [LS05] booster, it can be viewed as performing a simple random walk with step size 1
on the real line, where the walk is biased in the direction (positive or negative) corresponding to the
correct classification of the example. (The quantity tracked during the random walk is the difference
between the number of positive predictions and the number of negative predictions made by base
classifiers encountered in the braching program up to a given point in time.) This means that after
enough stages, a random positive example will end up to the right of the origin with high probability,
and contrariwise for a random negative example. Thus a high-accuracy classifier is obtained simply
by labelling each example according to the sign (+ or ?) of its final location on the real line.
The new algorithm extends this approach in a simple and intuitive way, by having examples perform
a random walk with variable step size: if the weak classifier at a given internal node has large
advantage, then the new algorithm makes the examples that reach that node take a large step in
the random walk. This is a natural way to exploit the fact that examples reaching such a largeadvantage node usually tend to walk in the right direction. The idea extends straightforwardly to
let us handle confidence-rated weak hypotheses (see [SS99]) whose predictions are real values in [−1, 1] as opposed to Boolean values from {−1, 1}. This is done simply by scaling the step size for a
given example x from a given node according to the numerical value h(x) that the confidence-rated
weak hypothesis h at that node assigns to example x.
While using different step sizes at different levels is a natural idea, it introduces some complications.
In particular, if a branching program is constructed naively based on this approach, it is possible for
the number of nodes to increase exponentially with the depth. To avoid this, we use a randomized
rounding scheme together with the variable-step random walk to ensure that the number of nodes
in the branching program grows polynomially rather than exponentially in the number of stages
in the random walk (i.e. the depth of the branching program). In fact, we actually improve on
the efficiency of the original martingale boosting algorithm of [LS05] by a polynomial factor, by
truncating ?extreme? nodes in the branching program that are ?far? from the origin. Our analysis
shows that this truncation has only a small effect on the accuracy of the final classifier, while giving
a significant asymptotic savings in the size of the final branching program (roughly 1/γ³ nodes as opposed to the 1/γ⁴ nodes of [KS05, LS05]).
2 Preliminaries
We make the following assumptions and notational conventions throughout the paper. There is an
initial distribution D over a domain of examples X. There is a target function c : X → {−1, 1} that we are trying to learn. Given the target function c and the distribution D, we write D+ to denote the distribution D restricted to the positive examples {x ∈ X : c(x) = 1}. Thus, for any event S ⊆ {x ∈ X : c(x) = 1} we have Pr_{D+}[x ∈ S] = Pr_D[x ∈ S] / Pr_D[c(x) = 1]. Similarly, we write D− to denote D restricted to the negative examples {x ∈ X : c(x) = −1}.

As usual, our boosting algorithms work by repeatedly passing a distribution D′ derived from D to a weak learner, which outputs a classifier h. The future behavior will be affected by how well h performs on data distributed according to D′. To keep the analysis clean, we will abstract away issues of sampling from D′ and estimating the accuracy of the resulting h. These issues are trivial if D is uniform over a moderate-sized domain (since all probabilities can be computed exactly), and otherwise they can be handled via the same standard estimation techniques used in [LS05].
Martingale boosting. We briefly recall some key aspects of the martingale boosting algorithm of [LS05] which are shared by our algorithm (and note some differences). Both boosters work by constructing a leveled branching program. Each node in the branching program has a location; this is a pair (v, t) where v is a real value (a location on the line) and t ≥ 0 is an integer (the level of the node; each level corresponds to a distinct stage of boosting). The initial node, where all examples start, is at (0, 0). In successive stages t = 0, 1, 2, . . . the booster constructs nodes in the branching program at levels 0, 1, 2, . . . . For a location (v, t) where the branching program has a node, let D_{v,t} be the distribution D conditioned on reaching the node at (v, t). We sometimes refer to this distribution D_{v,t} as the distribution induced by node (v, t).

As boosting proceeds, in stage t, each node (v, t) at level t is assigned a hypothesis which we call h_{v,t}. Unlike [LS05] we shall allow confidence-rated hypotheses, so each weak hypothesis is a mapping from X to [−1, 1]. Once the hypothesis h_{v,t} has been obtained, out-edges are constructed from (v, t) to its child nodes at level t + 1. While the original martingale boosting algorithm of [LS05] had two child nodes at (v − 1, t + 1) and (v + 1, t + 1) from each internal node, as we describe in Section 3 our new algorithm will typically have four child nodes for each node (but may, for a confidence-rated base classifier, have as many as eight).
Our algorithm. To fully specify our new boosting algorithm we must describe:
(1) How the weak learner is run at each node (v, t) to obtain a weak classifier. This is straightforward for the basic case of "two-sided" weak learners that we describe in Section 3 and
somewhat less straightforward in the usual (non-two-sided) weak learner setting. In Section 5.1 we describe how to use a standard weak learner, and how to handle noise; both
extensions borrow heavily from earlier work [LS05, KS05].
(2) What function is used to label the node (v, t), i.e. how to route subsequent examples that
reach (?, t) to one of the child nodes. It turns out that this function is a randomized version
of the weak classifier mentioned in point (1) above.
(3) Where to place the child nodes at level t + 1; this is closely connected with (2) above.
As in [LS05], once the branching program has been fully constructed down through some level T
the final hypothesis it computes is very simple. Given an input example x, the output of the final
hypothesis on x is sgn(v) where (v, T) is the location in level T to which x is ultimately routed as
it passes through the branching program.
3 Boosting a two-sided weak learner
In this section we assume that we have a two-sided weak learner. This is an algorithm which, given
a distribution D, can always obtain hypotheses that have two-sided advantage as defined below:
Definition 1 A hypothesis h : X → [−1, 1] has two-sided advantage γ with respect to D if it satisfies both E_{x∼D+}[h(x)] ≥ γ and E_{x∼D−}[h(x)] ≤ −γ.
As we explain in Section 5.1 we may apply methods of [LS05] to reduce the typical case, in which we only receive "normal" weak hypotheses rather than two-sided weak hypotheses, to this case.
The branching program starts off with a single node at location (0, 0). Assuming the branching
program has been constructed up through level t, we now explain how it is extended in the t-th stage
up through level t + 1. There are two basic steps in each stage: weak training and branching.
Weak training. Consider a given node at location (v, t) in the branching program. As in [LS05] we construct a weak hypothesis h_{v,t} simply by running the two-sided weak learner on examples drawn from D_{v,t} and letting h_{v,t} be the hypothesis it generates. Let us write γ_{v,t} to denote

    γ_{v,t} := min{ E_{x∼(D_{v,t})+}[h_{v,t}(x)], E_{x∼(D_{v,t})−}[−h_{v,t}(x)] }.

We call γ_{v,t} the advantage at node (v, t). We do this for all nodes at level t. Now we define the advantage at level t to be

    γ_t := min_v γ_{v,t}.    (3)
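In practice the two expectations above would be estimated from labeled samples drawn from D_{v,t}; a minimal sketch (our own, with hypothetical names) is:

```python
def estimate_advantage(h, positives, negatives):
    """Empirical two-sided advantage of h: min of E_{D+}[h] and E_{D-}[-h]."""
    adv_pos = sum(h(x) for x in positives) / len(positives)
    adv_neg = sum(-h(x) for x in negatives) / len(negatives)
    return min(adv_pos, adv_neg)
```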
Branching. Intuitively, we would like to use γ_t as a scaling factor for the "step size" of the random walk at level t. Since we are using confidence-rated weak hypotheses, it is also natural to have the step that example x takes at a given node be proportional to the value of the confidence-rated hypothesis at that node on x. The most direct way to do this would be to label the node (v, t) with the weak classifier h_{v,t} and to route each example x to a node at location (v + γ_t h_{v,t}(x), t + 1). However, there are obvious difficulties with this approach; for one thing a single node at (v, t) could give rise to arbitrarily many (infinitely many, if |X| = ∞) nodes at level t + 1. Even if the hypotheses h_{v,t} were all guaranteed to be {−1, 1}-valued, if we were to construct a branching program in this way then it could be the case that by the T-th stage there are 2^{T−1} distinct nodes at level T.

We get around this problem by creating nodes at level t + 1 only at integer multiples of γ_t/2. Note that this "granularity" that is used is different at each level, depending on the advantage at each level (we shall see in the next section that this is crucial for the analysis). This keeps us from having too many nodes in the branching program at level t + 1. Of course, we only actually create those nodes in the branching program that have an incoming edge as described below (later we will give an analysis to bound the number of such nodes).

We simulate the effect of having an edge from (v, t) to (v + γ_t h_{v,t}(x), t + 1) by using two edges from (v, t) to (i · γ_t/2, t + 1) and to ((i + 1) · γ_t/2, t + 1), where i is the unique integer such that i · γ_t/2 ≤ v + γ_t h_{v,t}(x) < (i + 1) · γ_t/2. To simulate routing an example x to (v + γ_t h_{v,t}(x), t + 1), the branching program routes x randomly along one of these two edges so that the expected location at which x ends up is (v + γ_t h_{v,t}(x), t + 1). More precisely, if v + γ_t h_{v,t}(x) = (i + ρ) · γ_t/2 where 0 ≤ ρ < 1, then the rule used at node (v, t) to route an example x is "with probability ρ send x to ((i + 1) · γ_t/2, t + 1) and with probability (1 − ρ) send x to (i · γ_t/2, t + 1)."

Since |h_{v,t}(x)| ≤ 1 for all x by assumption, it is easy to see that at most eight outgoing edges are required from each node (v, t). Thus the branching program that the booster constructs uses a randomized variant of each weak hypothesis h_{v,t} to route examples along one of (at most) eight outgoing edges.
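The randomized rounding rule above is simple to implement; the sketch below (our own illustration, with hypothetical names) returns the child location for one example, so that the expected child location equals v + γ_t · h(x):

```python
import math
import random

def route(v, gamma_t, h_value, rng=random):
    """Route an example at node location v taking step gamma_t * h_value,
    rounded randomly to a neighboring integer multiple of gamma_t / 2."""
    target = v + gamma_t * h_value
    grid = gamma_t / 2.0
    i = math.floor(target / grid)      # i * grid <= target < (i + 1) * grid
    rho = target / grid - i            # fractional part in [0, 1)
    return (i + 1) * grid if rng.random() < rho else i * grid
```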
4 Proof of correctness for boosting a two-sided weak learner
The following theorem shows that the algorithm described above is an effective adaptive booster for
two-sided weak learners:
Theorem 2 Consider running the above booster for T stages. For t = 0, . . . , T − 1 let the values γ_0, . . . , γ_{T−1} > 0 be defined as described above, so each invocation of the two-sided weak learner on distribution D_{v,t} yields a hypothesis h_{v,t} that has γ_{v,t} ≥ γ_t. Then the final hypothesis h constructed by the booster satisfies

    Pr_{x∼D}[h(x) ≠ c(x)] ≤ exp( −(1/8) ∑_{t=0}^{T−1} γ_t² ).    (4)

The algorithm makes at most M = O(1) · ∑_{t=0}^{T−1} (1/γ_t) ∑_{j=0}^{t−1} γ_j calls to the weak learner (i.e. constructs a branching program with at most M nodes).
Proof: We will show that Pr_{x∼D+}[h(x) ≠ 1] ≤ exp( −(1/8) ∑_{t=0}^{T−1} γ_t² ); a completely symmetric argument shows a similar bound for negative examples, which gives (4).

For t = 1, . . . , T we define the random variable A_t as follows: given a draw of x from D+ (the original distribution D restricted to positive examples), the value of A_t is γ_{t−1} h_{v,t−1}(x), where (v, t − 1) is the location of the node that x reaches at level t − 1 of the branching program. Intuitively A_t captures the direction and size of the move that we would like x to make during the branching step that brings it to level t.

We define B_t to be the random variable that captures the direction and size of the move that x actually makes during the branching step that brings it to level t. More precisely, let i be the integer such that i · (γ_{t−1}/2) ≤ v + γ_{t−1} h_{v,t−1}(x) < (i + 1) · (γ_{t−1}/2), and let ρ ∈ [0, 1) be such that v + γ_{t−1} h_{v,t−1}(x) = (i + ρ) · (γ_{t−1}/2). Then

    B_t = ((i + 1) · (γ_{t−1}/2) − v)  with probability ρ, and
    B_t = (i · (γ_{t−1}/2) − v)        with probability 1 − ρ.

We have that E[B_t] (where the expectation is taken only over the ρ-probability in the definition of B_t) equals ((i + ρ) · (γ_{t−1}/2) − v) = γ_{t−1} h_{v,t−1}(x) = A_t. Let X_t denote ∑_{i=1}^{t} B_i, so the value of X_t is the actual location on the real line where x ends up at level t.

Fix 1 ≤ t ≤ T and let us consider the conditional random variable (X_t | X_{t−1}). Conditioned on X_{t−1} taking any particular value (i.e. on x reaching any particular location (v, t − 1)), we have that x is distributed according to (D_{v,t−1})+, and thus we have

    E[X_t | X_{t−1}] = X_{t−1} + E_{x∼(D_{v,t−1})+}[γ_{t−1} h_{v,t−1}(x)] ≥ X_{t−1} + γ_{t−1} γ_{v,t−1} ≥ X_{t−1} + γ_{t−1}²,    (5)

where the first inequality follows from the two-sided advantage of h_{v,t−1}.

For t = 0, . . . , T, define the random variable Y_t as Y_t = X_t − ∑_{i=0}^{t−1} γ_i² (so Y_0 = X_0 = 0). Since conditioning on the value of Y_{t−1} is equivalent to conditioning on the value of X_{t−1}, using (5) we get

    E[Y_t | Y_{t−1}] = E[ X_t − ∑_{i=0}^{t−1} γ_i² | Y_{t−1} ] = E[X_t | Y_{t−1}] − ∑_{i=0}^{t−1} γ_i² ≥ X_{t−1} − ∑_{i=0}^{t−2} γ_i² = Y_{t−1},

so the sequence of random variables Y_0, . . . , Y_T is a sub-martingale.¹ To see that this sub-martingale has bounded differences, note that we have

    |Y_t − Y_{t−1}| = |X_t − X_{t−1} − γ_{t−1}²| = |B_t − γ_{t−1}²|.

The value of B_t is obtained by first moving by γ_{t−1} h_{v,t−1}(x), and then rounding to a neighboring multiple of γ_{t−1}/2, so |B_t| ≤ (3/2)γ_{t−1}, which implies |Y_t − Y_{t−1}| ≤ (3/2)γ_{t−1} + γ_{t−1}² ≤ 2γ_{t−1}.

Now recall Azuma's inequality for sub-martingales:

    Let 0 = Y_0, . . . , Y_T be a sub-martingale which has |Y_i − Y_{i−1}| ≤ c_i for each i = 1, . . . , T. Then for any λ > 0 we have Pr[Y_T ≤ −λ] ≤ exp( −λ² / (2 ∑_{i=1}^{T} c_i²) ).

We apply this with each c_i = 2γ_{i−1} and λ = ∑_{t=0}^{T−1} γ_t². This gives us that the error rate of h on positive examples, Pr_{x∼D+}[h(x) = −1], equals

    Pr[X_T < 0] = Pr[Y_T < −λ] ≤ exp( −λ² / (8 ∑_{t=0}^{T−1} γ_t²) ) = exp( −(1/8) ∑_{t=0}^{T−1} γ_t² ).    (6)

So we have established (4); it remains to bound the number of nodes constructed in the branching program. Let us write M_t to denote the number of nodes at level t, so M = ∑_{t=0}^{T−1} M_t.

The t-th level of boosting can cause the rightmost (leftmost) node to be at most 2γ_{t−1} distance farther away from the origin than the rightmost (leftmost) node at the (t − 1)-st level. This means that at level t, every node is at a position (v, t) with |v| ≤ 2 ∑_{j=0}^{t−1} γ_j. Since nodes are placed at integer multiples of γ_t/2, we have that M = ∑_{t=0}^{T−1} M_t ≤ O(1) · ∑_{t=0}^{T−1} (1/γ_t) ∑_{j=0}^{t−1} γ_j.

¹ The more common definition of a sub-martingale requires that E[Y_t | Y_0, . . . , Y_{t−1}] ≥ Y_{t−1}, but the weaker assumption that E[Y_t | Y_{t−1}] ≥ Y_{t−1} suffices for the concentration bounds that we need (see [ASE92, Hay05]).
Remark. Consider the case in which each advantage γ_t is just γ and we are boosting to accuracy ε. As usual, taking T = O(log(1/ε)/γ²) gives an error bound of ε. With these parameters we have that M ≤ O(log²(1/ε)/γ⁴), the same asymptotic bound achieved in [LS05]. In the next section we describe a modification of the algorithm that improves this bound by essentially a factor of 1/γ.
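The heart of the argument is that the biased, variable-step walk rarely ends on the wrong side of the origin. A small simulation (our own sanity check, using an idealized walk whose step at stage t is ±γ_t with upward bias γ_t, so that the expected step is γ_t² as in (5)) compares the empirical failure rate to the bound in (4):

```python
import math
import random

def walk_error_rate(gammas, trials=100_000, seed=0):
    """Empirical Pr[X_T < 0] for an idealized walk with E[step_t] = gamma_t^2."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        x = 0.0
        for g in gammas:
            x += g if rng.random() < (1 + g) / 2 else -g
        errors += x < 0
    return errors / trials

gammas = [1.0 / math.sqrt(t + 1) for t in range(50)]
print(walk_error_rate(gammas), "<=", math.exp(-sum(g * g for g in gammas) / 8))
```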
4.1 Improving efficiency by freezing extreme nodes
Here we describe a variant of the algorithm from the previous section that constructs a branching
program with fewer nodes.
The algorithm requires an input parameter ε which is an upper bound on the desired final error of the aggregate classifier. For t ≥ 1, after the execution of step t − 1 of boosting, when all nodes at level t have been created, each node (v, t) with

    |v| > √( (8 ∑_{s=0}^{t−1} γ_s²)(2 ln t + ln(4/ε)) )

is "frozen." The algorithm commits to classifying any test examples routed to any such nodes according to sgn(v), and these nodes are not used to generate weak hypotheses during the next round of training.
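A direct transcription of the freezing rule (our own sketch; names are hypothetical):

```python
import math

def freeze_threshold(gammas, t, eps):
    """Distance A_t beyond which nodes at level t >= 1 are frozen."""
    s = 8 * sum(g * g for g in gammas[:t])      # 8 * sum_{s=0}^{t-1} gamma_s^2
    return math.sqrt(s * (2 * math.log(t) + math.log(4 / eps)))

def is_frozen(v, gammas, t, eps):
    return abs(v) > freeze_threshold(gammas, t, eps)
```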
We have the following theorem about the performance of this algorithm:
Theorem 3 Consider running the modified booster for T stages. For t = 0, . . . , T − 1 let the values γ_0, . . . , γ_{T−1} > 0 be defined as described above, so each invocation of the weak learner on distribution D_{v,t} yields a hypothesis h_{v,t} that has γ_{v,t} ≥ γ_t. Then the final output hypothesis h of the booster satisfies

    Pr_{x∼D}[h(x) ≠ c(x)] ≤ ε/2 + exp( −(1/8) ∑_{t=0}^{T−1} γ_t² ).    (7)

The algorithm makes O( (∑_{t=0}^{T−1} 1/γ_t) · √( (ln T + ln(1/ε)) ∑_{t=0}^{T−1} γ_t² ) ) calls to the weak learner.

Proof: As in the previous proof it suffices to bound Pr_{x∼D+}[h(x) ≠ 1]. The proof of Theorem 2 gives us that if we never did any freezing, then Pr_{x∼D+}[h(x) ≠ 1] ≤ exp( −(1/8) ∑_{t=0}^{T−1} γ_t² ). Now let us analyze the effect of freezing in a given stage t < T. Let A_t be the distance from the origin past which examples are frozen in round t, i.e.

    A_t = √( (8 ∑_{s=0}^{t−1} γ_s²)(2 ln t + ln(4/ε)) ).

Nearly exactly the same analysis as proves (6) can be used here: for a positive example x to be incorrectly frozen in round t, it must be the case that X_t < −A_t, or equivalently Y_t < −A_t − ∑_{i=0}^{t−1} γ_i². Thus our choice of A_t gives us that Pr_{x∼D+}[x incorrectly frozen in round t] is at most

    Pr[ Y_t ≤ −A_t − ∑_{i=0}^{t−1} γ_i² ] ≤ Pr[ Y_t ≤ −A_t ] ≤ ε/(4t²),

so consequently we have Pr_{x∼D+}[x ever incorrectly frozen] ≤ ε/2. From here we may argue as in [LS05]: we have that Pr_{x∼D+}[h(x) = −1] equals

    Pr_{x∼D+}[h(x) = −1 and x is frozen] + Pr_{x∼D+}[h(x) = −1 and x is not frozen] ≤ ε/2 + exp( −(1/8) ∑_{t=0}^{T−1} γ_t² ),

which gives (7). The bound on the number of calls to the weak learner follows from the fact that there are O(A_t/γ_t) such calls in each stage of boosting, and the fact that A_t ≤ √( (8 ∑_{s=0}^{T−1} γ_s²)(2 ln T + ln(4/ε)) ) for all t.
It is easy to check that if γ_t = γ for all t, taking T = O(log(1/ε)/γ²) the algorithm in this section will construct an ε-accurate hypothesis that is an O(log²(1/ε)/γ³)-node branching program.
5 Extensions
5.1 Standard weak learners
In Sections 3 and 4, we assumed that the boosting algorithm had access to a two-sided weak learner,
which is more accurate than random guessing on both the positive and the negative examples separately. To make use of a standard weak learner, which is merely more accurate than random guessing
on average, we can borrow ideas from [LS05].
The idea is to force a standard weak learner to provide a hypothesis with two-sided accuracy by (a)
balancing the distribution so that positive and negative examples are accorded equal importance, (b)
balancing the predictions of the output of the weak learner so that it doesn?t specialize on one kind
of example.
Definition 4 Given a probability distribution D over examples, let D̂ be the distribution obtained by rescaling the positive and negative examples so that they have equal weight: i.e., let D̂[S] = (1/2) D+[S] + (1/2) D−[S].
Definition 5 Given a confidence-rated classifier h : X → [−1, 1] and a probability distribution D over X, let the balanced variant of h with respect to D be the function h̃ : X → [−1, 1] defined as follows: (a) if E_{x∼D}[h(x)] ≥ 0, then, for all x ∈ X,

    h̃(x) = (h(x) + 1) / (E_{x∼D}[h(x)] + 1) − 1;

(b) if E_{x∼D}[h(x)] ≤ 0, then, for all x ∈ X,

    h̃(x) = (h(x) − 1) / (−E_{x∼D}[h(x)] + 1) + 1.
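The balancing map is easy to implement and check; the sketch below (our own, with hypothetical names) constructs h̃ from an empirical estimate of E_{x∼D}[h(x)], and the returned classifier has empirical mean 0 and range within [−1, 1] by construction:

```python
def balanced_variant(h, sample):
    """Return the balanced variant of h with respect to the empirical
    distribution over `sample` (Definition 5)."""
    mean = sum(h(x) for x in sample) / len(sample)
    if mean >= 0:
        return lambda x: (h(x) + 1.0) / (mean + 1.0) - 1.0
    return lambda x: (h(x) - 1.0) / (-mean + 1.0) + 1.0
```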
The analysis is the natural generalization of Section 5 of [LS05] to confidence-rated classifiers.
Lemma 6 If D is balanced with respect to c, and h is a confidence-rated classifier such that E_{x∼D}[h(x)c(x)] ≥ γ, then E_{x∼D}[h̃(x)c(x)] ≥ γ/2.

Proof. Assume without loss of generality that E_{x∼D}[h(x)] ≥ 0 (the other case can be handled symmetrically). By linearity of expectation,

    E_{x∼D}[h̃(x)c(x)] = E_{x∼D}[h(x)c(x)] / (E_{x∼D}[h(x)] + 1) + E_{x∼D}[c(x)] · ( 1/(E_{x∼D}[h(x)] + 1) − 1 ).

Since D is balanced we have E_{x∼D}[c(x)] = 0, and hence E_{x∼D}[h̃(x)c(x)] = E_{x∼D}[h(x)c(x)] / (E_{x∼D}[h(x)] + 1), so the lemma follows from the fact that E_{x∼D}[h(x)] ≤ 1.
We will use a standard weak learner to simulate a two-sided weak learner as follows. Given a distribution D, the two-sided weak learner will pass D̂ to the standard weak learner, take its output g, and return h = g̃ (the balanced variant of g with respect to D̂). Our next lemma analyzes this transformation.

Lemma 7 If E_{x∼D̂}[g(x)c(x)] ≥ γ, then E_{x∼D+}[h(x)] ≥ γ/2 and E_{x∼D−}[−h(x)] ≥ γ/2.
Proof: Lemma 6 implies that E_{x∼D̂}[h(x)c(x)] ≥ γ/2. Expanding the definition of D̂, we have

    E_{x∼D+}[h(x)] − E_{x∼D−}[h(x)] ≥ γ.    (8)

Since h is the balanced variant of g with respect to D̂, we have E_{x∼D̂}[h(x)] = 0. Once again expanding the definition of D̂, we get that E_{x∼D+}[h(x)] + E_{x∼D−}[h(x)] = 0, which implies E_{x∼D−}[h(x)] = −E_{x∼D+}[h(x)] and E_{x∼D+}[h(x)] = −E_{x∼D−}[h(x)]. Substituting each of these right-hand sides for its respective left-hand side in (8) completes the proof.
Lemma 7 is easily seen to imply counterparts of Theorems 2 and 3 in which the requirement of a two-sided weak learner is weakened to require only standard weak learning, but each γ_t is replaced with γ_t/2.
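Putting Definition 4 and Lemma 7 together, a standard weak learner can be wrapped into a two-sided one roughly as follows (our own sketch; the `weak_learner` weighted-sample interface is hypothetical). It balances its output the same way as the `balanced_variant` sketch above, but under the reweighted distribution D̂:

```python
def two_sided_from_standard(weak_learner, positives, negatives):
    """Simulate a two-sided weak learner: train on the rebalanced
    distribution D-hat, then balance the returned classifier."""
    # D-hat: give the positive and negative halves equal total weight.
    w_pos = 0.5 / len(positives)
    w_neg = 0.5 / len(negatives)
    sample = ([(x, +1, w_pos) for x in positives]
              + [(x, -1, w_neg) for x in negatives])
    g = weak_learner(sample)                    # standard weak learner on D-hat
    mean = sum(w * g(x) for x, _, w in sample)  # E_{D-hat}[g]
    if mean >= 0:
        return lambda x: (g(x) + 1.0) / (mean + 1.0) - 1.0
    return lambda x: (g(x) - 1.0) / (-mean + 1.0) + 1.0
```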
5.2 Tolerating random classification noise
As in [LS05], noise tolerance is facilitated by the fact that the path through the network is not
affected by altering the label of an example. On the other hand, balancing the distribution before
passing it to the weak learner, which was needed to use a standard weak learner, may disturb the
independence between the event that an example is noisy, and the random draw of x. This can be
repaired exactly as in [KS05, LS05]; because of space constraints we omit the details.
References
[AD98] J. Aslam and S. Decatur. Specification and simulation of statistical query algorithms for efficiency and noise tolerance. J. Comput. Syst. Sci., 56:191–208, 1998.
[AL88] Dana Angluin and Philip Laird. Learning from noisy examples. Machine Learning, 2(4):343–370, 1988.
[ASE92] N. Alon, J. Spencer, and P. Erdős. The Probabilistic Method (1st ed.). Wiley-Interscience, New York, 1992.
[BKW03] A. Blum, A. Kalai, and H. Wasserman. Noise-tolerant learning, the parity problem, and the statistical query model. J. ACM, 50(4):506–519, 2003.
[Die00] T. G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. Machine Learning, 40(2):139–158, 2000.
[Fre95] Y. Freund. Boosting a weak learning algorithm by majority. Information and Computation, 121(2):256–285, 1995.
[FS96] Y. Freund and R. Schapire. Experiments with a new boosting algorithm. In ICML, pages 148–156, 1996.
[FS97] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. JCSS, 55(1):119–139, 1997.
[Hay05] T. P. Hayes. A large-deviation inequality for vector-valued martingales. 2005.
[Kea98] M. Kearns. Efficient noise-tolerant learning from statistical queries. JACM, 45(6):983–1006, 1998.
[KS05] A. Kalai and R. Servedio. Boosting in the presence of noise. JCSS, 71(3):266–290, 2005.
[LS05] P. Long and R. Servedio. Martingale boosting. In Proc. 18th Annual COLT, pages 79–94, 2005.
[LS08] P. Long and R. Servedio. Random classification noise defeats all convex potential boosters. In ICML, 2008.
[MO97] R. Maclin and D. Opitz. An empirical evaluation of bagging and boosting. In AAAI/IAAI, pages 546–551, 1997.
[MR03] R. Meir and G. Rätsch. An introduction to boosting and leveraging. In LNAI Advanced Lectures on Machine Learning, pages 118–183, 2003.
[RDM06] L. Ralaivola, F. Denis, and C. Magnan. CN = CPCN. In ICML, pages 265–272, 2006.
[Sch90] R. Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, 1990.
[Sch03] R. Schapire. The boosting approach to machine learning: An overview. Springer, 2003.
[SS99] R. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37:297–336, 1999.
2,895 | 3,624 | Generative and Discriminative Learning with
Unknown Labeling Bias
Miroslav Dud??k
Carnegie Mellon University
5000 Forbes Ave, Pittsburgh, PA 15213
Steven J. Phillips
AT&T Labs ? Research
180 Park Ave, Florham Park, NJ 07932
[email protected]
[email protected]
Abstract
We apply robust Bayesian decision theory to improve both generative and discriminative learners under bias in class proportions in labeled training data, when the
true class proportions are unknown. For the generative case, we derive an entropybased weighting that maximizes expected log likelihood under the worst-case true
class proportions. For the discriminative case, we derive a multinomial logistic
model that minimizes worst-case conditional log loss. We apply our theory to the
modeling of species geographic distributions from presence data, an extreme case
of labeling bias since there is no absence data. On a benchmark dataset, we find
that entropy-based weighting offers an improvement over constant estimates of
class proportions, consistently reducing log loss on unbiased test data.
1 Introduction
In many real-world classification problems, it is not equally easy or affordable to verify membership
in different classes. Thus, class proportions in labeled data may significantly differ from true class
proportions. In an extreme case, labeled data for an entire class might be missing (for example,
negative experimental results are typically not published). A naively trained learner may perform
poorly on test data that is not similarly afflicted by labeling bias. Several techniques address labeling
bias in the context of cost-sensitive learning and learning from imbalanced data [5, 11, 2]. If the
labeling bias is known or can be estimated, and all classes appear in the training set, a model trained
on biased data can be corrected by reweighting [5]. When the labeling bias is unknown, a model is
often selected using threshold-independent analysis such as ROC curves [11]. A good ROC curve,
however, does not guarantee a low loss on test data. Here, we are concerned with situations when
the labeling bias is unknown and some classes may be missing, but we have access to unlabeled
data. We want to construct models that in addition to good ROC-based performance, also yield
low test loss. We will be concerned with minimizing joint and conditional log loss, or equivalently,
maximizing joint and conditional log likelihood.
Our work is motivated by the application of modeling species? geographic distributions from occurrence data. The data consists of a set of locations within some region (for example, the Australian
wet tropics) where a species (such as the golden bowerbird) was observed, and a set of features such
as precipitation and temperature, describing environmental conditions at each location. Species distribution modeling suffers from extreme imbalance in training data: we often only have information
about species presence (positive examples), but no information about species absence (negative examples). We do, however, have unlabeled data, obtained either by randomly sampling locations
from the region [4], or pooling presence data for several species collected with similar methods to
yield a representative sample of locations which biologists have surveyed [13].
Previous statistical methods for species distribution modeling can be divided into three main approaches. The first interprets all unlabeled data as examples of species absence and learns a rule
to discriminate them from presences [19, 4]. The second embeds a discriminative learner in the
EM algorithm in order to infer presences and absences in unlabeled data; this explicitly requires
knowledge of true class probabilities [17]. The third models the presences alone, which is known in
machine learning as one-class estimation [14, 7]. When using the first approach, the training data is
commonly reweighted so that positive and negative examples have the same weight [4]; this models
a quantity monotonically related to conditional probability of presence [13], with the relationship
depending on true class probabilities. If we use y to denote the binary variable indicating presence
and x to denote a location on the map, then the first two approaches yield models of conditional
probability p(y = 1|x), given estimates of true class probabilities. On the other hand, the main instantiation of the third approach, maximum entropy density estimation (maxent) [14] yields a model
of the distribution p(x|y = 1). To convert this to an estimate of p(y = 1|x) (as is usually required,
and necessary for measuring conditional log loss on which we focus here) again requires knowledge
of the class probabilities p(y = 1) and p(y = 0). Thus, existing discriminative approaches (the first
and second) as well as generative approaches (the third) require estimates of true class probabilities.
We apply robust Bayesian decision theory, which is closely related to the maximum entropy principle [6], to derive conditional probability estimates p(y | x) that perform well under a wide range
of test distributions. Our approach can be used to derive robust estimates of class probabilities p(y)
which are then used to reweight discriminative models or to convert generative models into discriminative ones. We present a treatment for the general multiclass problem, but our experiments focus on
one-class estimation and species distribution modeling in particular. Using an extensive evaluation
on real-world data, we show improvement in both generative and discriminative techniques.
Throughout this paper we assume that the difficulty of uncovering the true class label depends on the
class label y alone, but is independent of the example x. Even though this assumption is simplistic,
we will see that our approach yields significant improvements. A related set of techniques estimates
and corrects for the bias in sample selection, also known as covariate shift [9, 16, 18, 1, 13]. When
the bias can be decomposed into an estimable and inestimable part, the right approach might be to
use a combination of techniques presented in this paper and those for sample-selection bias.
2 Robust Bayesian Estimation with Unknown Class Probabilities
Our goal is to estimate an unknown conditional distribution π(y | x), where x ∈ X is an example and y ∈ Y is a label. The input consists of labeled examples (x₁, y₁), …, (x_m, y_m) and unlabeled examples x_{m+1}, …, x_M. Each example x is described by a set of features f_j : X → R, indexed by j ∈ J. For simplicity, we assume that the sets X, Y, and J are finite, but we would like to allow the space X and the set of features J to be very large.
In species distribution modeling from occurrence data, the space X corresponds to locations on the
map, features are various functions derived from the environmental variables, and the set Y contains
two classes: presence (y = 1) and absence (y = 0) for a particular species. Labeled examples are
presences of the species, e.g., recorded presence locations of the golden bowerbird, while unlabeled
examples are locations that have been surveyed by biologists, but neither presence nor absence was
recorded. The unlabeled examples can be obtained as presence locations of species observed by a
similar protocol, for example other birds [13].
We posit a joint density π(x, y) and assume that examples are generated by the following process. First, a pair (x, y) is chosen according to π. We always get to see the example x, but the label y is revealed with an unknown probability that depends on y and is independent of x. This means that we have access to independent samples from π(x) and from π(x | y), but no information about π(y). In our example, species presence is revealed with an unknown fixed probability whereas absence is revealed with probability zero (i.e., never revealed).
2.1 Robust Bayesian Estimation, Maximum Entropy, and Logistic Regression
Robust Bayesian decision theory formulates an estimation problem as a zero-sum game between a decision maker and nature [6]. In our case, the decision maker chooses an estimate p(x, y) while nature selects a joint density π(x, y). Using the available data, the decision maker forms a set P in which he believes nature's choice lies, and tries to minimize worst-case loss under nature's choice. In this paper we are interested in minimizing the worst-case log loss relative to a fixed default estimate ν (equivalently, maximizing the worst-case log likelihood ratio)

    min_{p∈Δ} max_{π∈P} E_π[ −ln( p(X,Y) / ν(X,Y) ) ] .    (1)

Here, Δ is the simplex of joint densities and E_π is a shorthand for E_{X,Y∼π}. The default density ν represents any prior information we have about π; if we have no prior information, ν is typically the uniform density.
Grünwald and Dawid [6] show that the robust Bayesian problem (Eq. 1) is often equivalent to the minimum relative entropy problem

    min_{p∈P} RE(p ‖ ν) ,    (2)

where RE(p ‖ q) = E_p[ln(p(X,Y)/q(X,Y))] is relative entropy or Kullback–Leibler divergence and measures discrepancy between distributions p and q. The formulation intuitively says that we should choose the density p which is closest to ν while respecting the constraints P. When ν is uniform, minimizing relative entropy is equivalent to maximizing entropy H(p) = E_p[−ln p(X,Y)]. Hence, the approach is mainly referred to as maximum entropy [10] or maxent for short. The next theorem outlines the equivalence of robust Bayes and maxent for the case considered in this paper. It is a special case of Theorem 6.4 of [6].
Theorem 1 (Equivalence of maxent and robust Bayes). Let X × Y be a finite sample space, ν a density on X × Y, and P ⊆ Δ a closed convex set containing at least one density absolutely continuous w.r.t. ν. Then Eqs. (1) and (2) have the same optimizers.
For the case without labeling bias, the set P is usually described in terms of equality constraints
on moments of the joint distribution (feature expectations). Specifically, feature expectations with
respect to p are required to equal their empirical averages. When features are functions of x, but the
goal is to discriminate among classes y, it is natural to consider a derived set of features which are
versions of fj (x) active solely in individual classes y (see for instance [8]). If we were to estimate the
distribution of the golden bowerbird from presence-absence data then moment equality constraints
require that the joint model p(x, y) match the average altitude of presence locations as well as the
average altitude of absence locations (both weighted by their respective training proportions).
When the number of samples is too small or the number of features too large then equality constraints lead to overfitting because the true distribution does not match empirical averages exactly.
Overfitting is alleviated by relaxing the constraints so that feature expectations are only required to
lie within a certain distance of sample averages [3].
The solution of Eq. (2) with equality or relaxed constraints can be shown to lie in an exponential family parameterized by λ = ⟨λ^y⟩_{y∈Y}, λ^y ∈ R^J, and containing densities

    q_λ(x, y) ∝ ν(x, y) e^{λ^y · f(x)} .

The optimizer of Eq. (2) is the unique density which minimizes the empirical log loss

    (1/m) Σ_{i≤m} −ln q_λ(x_i, y_i) ,    (3)

possibly with an additional ℓ1-regularization term accounting for slacks in equality constraints. (See [3] for a proof.)
In addition to constraints on moments of the joint distribution, it is possible to introduce constraints on marginals of p. The most common implementations of maxent impose marginal constraints p(x) = π̃_lab(x), where π̃_lab is the empirical distribution over labeled examples. The solution then takes the form q_λ(x, y) = π̃_lab(x) q_λ(y | x), where q_λ(y | x) is the multinomial logistic model

    q_λ(y | x) ∝ ν(y | x) e^{λ^y · f(x)} .
As before, the maxent solution is the unique density of this form which minimizes the empirical log loss (Eq. 3). The minimization of Eq. (3) is equivalent to the minimization of the conditional log loss

    (1/m) Σ_{i≤m} −ln q_λ(y_i | x_i) .

Hence, this approach corresponds to logistic regression. Since it only models the labeling process π(y | x), but not the sample generation π(x), it is known as discriminative training.
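To make the correspondence concrete, here is a minimal sketch (our illustration, not code from the paper; all array names are hypothetical) that evaluates the multinomial logistic model q_λ(y | x) ∝ ν(y | x) e^{λ^y · f(x)} and its average conditional log loss on a toy labeled sample, assuming a default ν(y | x) that is constant in x.

```python
import numpy as np

def conditional_log_loss(lam, feats, labels, nu_y):
    """Average conditional log loss (1/m) sum_i -ln q_lam(y_i | x_i) for the
    multinomial logistic model q_lam(y|x) prop. to nu(y) * exp(lam^y . f(x)).
    lam: (K, J); feats: (m, J); labels: (m,) ints; nu_y: (K,) default nu(y|x)."""
    scores = np.log(nu_y)[None, :] + feats @ lam.T
    scores -= scores.max(axis=1, keepdims=True)            # stabilize the softmax
    log_q = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -log_q[np.arange(len(labels)), labels].mean()

# toy usage: zero weights and a uniform default give log loss ln 2
rng = np.random.default_rng(0)
print(conditional_log_loss(np.zeros((2, 5)), rng.normal(size=(100, 5)),
                           rng.integers(0, 2, size=100), np.array([0.5, 0.5])))
```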
The case with equality constraints p(y) = π̃_lab(y) has been analyzed, for example, by [8]. The solution has the form q_λ(x, y) = π̃_lab(y) q_λ(x | y) with

    q_λ(x | y) ∝ ν(x | y) e^{λ^y · f(x)} .

Log loss can be minimized for each class separately, i.e., each λ^y is the maximum likelihood estimate (possibly with regularization) of π(x | y). The joint estimate q_λ(x, y) can be used to derive the conditional distribution q_λ(y | x). Since this approach estimates the sample-generating distributions of individual classes, it is known as generative training. Naive Bayes is a special case of generative training when ν(x | y) = Π_j ν_j(f_j(x) | y).
The two approaches presented in this paper can be viewed as generalizations of generative and
discriminative training with two additional components: availability of unlabeled examples and lack
of information about class probabilities. The former will influence the choice of the default ν, the latter the form of the constraints P.
2.2 Generative Training: Entropy-weighted Maxent
When the number of labeled and unlabeled examples is sufficiently large, it is reasonable to assume that the empirical distribution π̃(x) over all examples (labeled and unlabeled) is a faithful representation of π(x). Thus, we consider defaults with ν(x) = π̃(x), shown to work well in species distribution modeling [13]. For simplicity, we assume that ν(y | x) does not depend on x and focus on ν(x, y) = π̃(x)ν(y). Other options are possible. For example, when the number of examples is small, π̃(x) might be replaced by an estimate of π(x). The distribution ν(y) can be chosen uniform across y, but if some classes are known to be rarer than others then a non-uniform estimate will perform better. In Section 3, we analyze the impact of this choice.
Constraints on moments of the joint distribution, such as those in the previous section, will misspecify true moments in the presence of labeling bias. However, as discussed earlier, labeled examples from each class y approximate the conditional distributions π(x | y). Thus, instead of constraining joint expectations, we constrain conditional expectations E_p[f_j(X) | y]. In general, we consider robust Bayes and maxent problems with the set P of the form P = {p ∈ Δ : p_X^y ∈ P_X^y}, where p_X^y denotes the |X|-dimensional vector of conditional probabilities p(x | y) and P_X^y expresses the constraints on p_X^y. For example, relaxed constraints for class y are expressed as

    ∀j : | E_p[f_j(X) | y] − π̂_j^y | ≤ β_j^y    (4)

where π̂_j^y is the empirical average of f_j among labeled examples in class y, and β_j^y are estimates of deviations of averages from true expectations. Similar to [14], we use standard-error-like deviation estimates β_j^y = β σ̂_j^y / √m_y, where β is a single tuning constant, σ̂_j^y is the empirical standard deviation of f_j among labeled examples in class y, and m_y is the number of labeled examples in class y. When m_y equals 0, we choose β_j^y = ∞ and thus leave feature expectations unconstrained.
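The deviation bounds are cheap to compute from the labeled sample of each class. The sketch below is our illustration under the naming assumptions above; it returns ∞ for an empty class so that the corresponding expectations stay unconstrained.

```python
import numpy as np

def deviation_bounds(class_feats, beta=1.0):
    """beta_j^y = beta * sigma_hat_j^y / sqrt(m_y); infinite for an empty class.
    class_feats: (m_y, J) feature values of the labeled examples in class y."""
    m_y, J = class_feats.shape
    if m_y == 0:
        return np.full(J, np.inf)   # leave the expectations unconstrained
    return beta * class_feats.std(axis=0) / np.sqrt(m_y)
```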
The next theorem and the following corollary show that robust Bayes (and also maxent) with the constraint set P of the form above yield estimators similar to generative training. In addition to the notation p_X^y for conditional densities, we use the notation p_Y and p_X to denote vectors of marginal probabilities p(y) and p(x), respectively. For example, the empirical distribution over examples is denoted π̃_X.

Theorem 2. Let P_X^y, y ∈ Y, be closed convex sets of densities over X and let P = {p ∈ Δ : p_X^y ∈ P_X^y}. If P contains at least one density absolutely continuous w.r.t. ν then robust Bayes and maxent over P are equivalent. The solution p̂ has the form p̂(y) p̂(x | y), where the class-conditional densities p̂_X^y minimize RE(p_X^y ‖ π̃_X) among p_X^y ∈ P_X^y, and

    p̂(y) ∝ ν(y) e^{−RE(p̂_X^y ‖ π̃_X)} .    (5)
Proof. It is not too difficult to verify that the set P is a closed convex set of joint densities, so the equivalence of robust Bayes and maxent follows from Theorem 1. To prove the remainder, we rewrite the maxent objective as

    RE(p ‖ ν) = RE(p_Y ‖ ν_Y) + Σ_y p(y) RE(p_X^y ‖ π̃_X) .

The maxent problem is then equivalent to

    min_{p_Y} [ RE(p_Y ‖ ν_Y) + Σ_y p(y) min_{p_X^y ∈ P_X^y} RE(p_X^y ‖ π̃_X) ]
    = min_{p_Y} [ Σ_y p(y) ln( p(y)/ν(y) ) + Σ_y p(y) RE(p̂_X^y ‖ π̃_X) ]
    = min_{p_Y} [ Σ_y p(y) ln ( p(y) / ( ν(y) e^{−RE(p̂_X^y ‖ π̃_X)} ) ) ]
    = const. + min_{p_Y} RE(p_Y ‖ p̂_Y) .

Since RE(p ‖ q) is minimized for p = q, we indeed obtain that for the minimizing p, p_Y = p̂_Y.
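In code, once each class-conditional density p̂_X^y has been fitted, the entropy-based class weights of Eq. (5) follow by exponentiating the negative relative entropies. The sketch below is a minimal illustration for densities represented as vectors over a finite X (hypothetical names).

```python
import numpy as np

def entropy_weighted_class_probs(p_cond, pi_tilde, nu_y):
    """Eq. (5): p_hat(y) proportional to nu(y) * exp(-RE(p_hat_X^y || pi_tilde)).
    p_cond: (K, N) rows are class-conditional densities over a finite X;
    pi_tilde: (N,) empirical distribution; nu_y: (K,) default class probabilities."""
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p_cond > 0, p_cond * np.log(p_cond / pi_tilde[None, :]), 0.0)
    w = nu_y * np.exp(-terms.sum(axis=1))
    return w / w.sum()
```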
Theorem 2 generalizes to the case when, in addition to constraining p_X^y to lie in P_X^y, we also constrain p_Y to lie in a closed convex set P_Y. The solution then takes the form p(y) p̂(x | y) with p̂(x | y) as in the theorem, but with p(y) minimizing RE(p_Y ‖ p̂_Y) subject to p_Y ∈ P_Y. Unlike generative training without labeling bias, the class-conditional densities in the theorem above influence the class probabilities. When the sets P_X^y are specified using constraints of Eq. (4), then p̂ has a form derived from regularized maximum likelihood estimates in an exponential family (see, e.g., [3]):
Corollary 3. If the sets P_X^y are specified by the inequality constraints of Eq. (4), then robust Bayes and maxent are equivalent. The class-conditional densities p̂(x | y) of the solution take the form

    q_λ(x | y) ∝ π̃(x) e^{λ^y · f(x)}    (6)

and solve the single-class regularized maximum likelihood problems

    min_{λ^y} { Σ_{i: y_i = y} −ln q_λ(x_i | y) + m_y Σ_{j∈J} β_j^y |λ_j^y| } .    (7)
One-class Estimation. In one-class estimation problems, there are two classes (0 and 1), but we only have access to labeled examples from one class (e.g., class 1). In species distribution modeling, we only have access to presence records of the species. Based on labeled examples, we derive a set of constraints on p(x | y = 1), but leave p(x | y = 0) unconstrained. By Theorem 2, p̂(x | y = 1) then solves the single-class maximum entropy problem; we write p̂(x | y = 1) = p̂_ME(x), and p̂(x | y = 0) = π̃(x). Assume without loss of generality that the examples x₁, …, x_M are distinct (but allow them to have identical feature vectors). Thus, π̃(x) = 1/M on examples and zero elsewhere, and RE(p̂_ME ‖ π̃_X) = −H(p̂_ME) + ln M. Plugging these into Theorem 2, we can derive the conditional estimate p̂(y = 1 | x) across all unlabeled examples x:

    p̂(y = 1 | x) = ν(y=1) p̂_ME(x) e^{H(p̂_ME)} / [ ν(y=0) + ν(y=1) p̂_ME(x) e^{H(p̂_ME)} ] .    (8)

If the constraints on p(x | y = 1) are chosen as in Corollary 3, then p̂_ME is exponential and Eq. (8) thus describes a logistic model. This model has the same coefficients as p̂_ME, with the intercept chosen so that "typical" examples x under p̂_ME (examples with log probability close to the expected log probability) yield predictions close to the default.
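The conversion of Eq. (8) is a one-liner once the entropy term is available. The following sketch is our illustration; it assumes p_me is the fitted maxent density, normalized over the M distinct examples.

```python
import numpy as np

def presence_probability(p_me, nu1):
    """Eq. (8): convert the one-class maxent density into p_hat(y=1 | x).
    p_me: (M,) maxent density over the M distinct examples; nu1: default nu(y=1)."""
    h = -(p_me[p_me > 0] * np.log(p_me[p_me > 0])).sum()   # entropy H(p_hat_ME)
    scaled = nu1 * p_me * np.exp(h)
    return scaled / ((1.0 - nu1) + scaled)
```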
2.3 Discriminative Training: Class-robust Logistic Regression
Similar to the previous section, we consider ν(x, y) = π̃(x)ν(y). The set of constraints P will now also include equality constraints on p(x). Since π̃_lab(x) misspecifies the marginal, we use p(x) = π̃(x). The next theorem is an analog of Corollary 3 for discriminative training. It follows from a combination of Theorem 1 and the duality of maxent with maximum likelihood [3]. A complete proof will appear in the extended version of this paper.

Theorem 4. Assume that the sets P_X^y are specified by the inequality constraints of Eq. (4). Let P = {p ∈ Δ : p_X^y ∈ P_X^y and p_X = π̃_X}. If the set P is non-empty, then robust Bayes and maxent over P are equivalent. For the solution p̂, p̂(x) = π̃(x), and p̂(y | x) takes the form

    q_λ(y | x) ∝ ν(y) e^{ λ^y · f(x) − Σ_{j} β_j^y |λ_j^y| }    (9)

and solves the regularized "logistic regression" problem

    min_λ { (1/M) Σ_{i≤M} Σ_{y∈Y} [ −π̄(y | x_i) ln q_λ(y | x_i) ] + Σ_{y∈Y} π̄(y) Σ_{j∈J} [ β_j^y |λ_j^y| + (π̂_j^y − π̄_j^y) λ_j^y ] } ,    (10)

where π̄ is an arbitrary feasible point, π̄ ∈ P, and π̄_j^y are its class-conditional feature expectations.
We put logistic regression in quotes because the model described by Eq. (9) is not the usual logistic model; however, once the parameters λ^y are fixed, Eq. (9) simply determines a logistic model with a special form of the intercept. Note that the second term of Eq. (10) is indeed a regularization, albeit possibly an asymmetric one, since any feasible π̄ will have |π̂_j^y − π̄_j^y| ≤ β_j^y. Since π̄(x) = π̃(x), π̄ is specified solely by π̄(y | x) and thus can be viewed as a tentative imputation of labels across all examples. We remark that the value of the objective of Eq. (10) does not depend on the choice of π̄, because a different choice of π̄ (influencing the first term) yields a different set of means π̄_j^y (influencing the second term), and these differences cancel out. To provide a more concrete example and some intuition about Eq. (10), we now consider one-class estimation.
One-class estimation. A natural choice of π̄ is the "pseudo-empirical" distribution which views all unlabeled examples as negatives. Pseudo-empirical means of class 1 match empirical averages of class 1 exactly, whereas pseudo-empirical means of class 0 can be arbitrary because they are unconstrained. The lack of constraints on class 0 forces the corresponding λ^y to equal zero. The objective can thus be formulated solely using λ^y for class 1; therefore, we omit the superscript y. Eq. (10), after multiplying by M, then becomes

    min_λ { Σ_{i≤m} −ln q_λ(y=1 | x_i) + Σ_{m<i≤M} −ln q_λ(y=0 | x_i) + m Σ_{j∈J} β_j |λ_j| } .

Thus the objective of class-robust logistic regression is the same as that of regularized logistic regression discriminating positives from unlabeled examples.
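The reduced objective can be handed to any convex optimizer. The sketch below (our illustration; variable names are hypothetical, and we only evaluate the objective, leaving minimization to a generic solver) treats the first m rows of feats as positives and the remaining rows as unlabeled, with the intercept of Eq. (9) folded into the log-odds.

```python
import numpy as np

def one_class_objective(lam, feats, m, beta, nu1=0.5):
    """Objective of class-robust logistic regression in the one-class case:
    sum_{i<=m} -ln q(y=1|x_i) + sum_{m<i<=M} -ln q(y=0|x_i) + m * sum_j beta_j*|lam_j|.
    Per Eq. (9), the log-odds include the -sum_j beta_j*|lam_j| intercept term."""
    z = feats @ lam - (beta * np.abs(lam)).sum() + np.log(nu1 / (1.0 - nu1))
    log_q1 = -np.logaddexp(0.0, -z)   # ln sigmoid(z)
    log_q0 = -np.logaddexp(0.0, z)    # ln (1 - sigmoid(z))
    return -(log_q1[:m].sum() + log_q0[m:].sum()) + m * (beta * np.abs(lam)).sum()
```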
3 Experiments
We evaluate our techniques using a large real-world dataset containing 226 species from 6 regions of the world, produced by the "Testing alternative methodologies for modeling species' ecological niches and predicting geographic distributions" Working Group at the National Center for Ecological Analysis and Synthesis (NCEAS). The training set contains presence-only data from unplanned surveys or incidental records, including those from museums and herbariums. The test set contains presence-absence data from rigorously planned independent surveys (i.e., without labeling bias). The regions are described by 11–13 environmental variables, with 20–54 species per region, 2–5822 training presences per species (median of 57), and 102–19120 test points (presences and absences); for details see [4]. As unlabeled examples we use presences of species captured by similar methods, known as the "target group", with the groups as in [13].
We evaluate both entropy-weighted maxent and class-robust logistic regression while varying the default estimate ν(y = 1), referred to as the default species prevalence by analogy with p(y = 1), which is called the species prevalence. Entropy-weighted maxent solutions for different default prevalences are derived by Eq. (8) from the same one-class estimate p̂_ME. Class-robust logistic regression requires a separate optimization for each default prevalence.

We calculate p̂_ME using the Maxent package [15] with features spanning the space of piecewise linear splines (of each environmental variable separately) and a tuned value of β (see [12] for the details on features and tuning). Class-robust logistic models are calculated by a boosting-like algorithm, SUMMET [3], with the same set of features and the same value of β as the maxent runs.
For comparison, we also evaluate default-weighted maxent, using class probabilities p(y) = ν(y) instead of Eq. (5), and two "oracle" methods based on class probabilities in the test data: the constant Bernoulli prediction p(y | x) = π(y), and oracle-weighted maxent, using p(y) = π(y) instead of Eq. (5). Note that the constant Bernoulli prediction has no discrimination power (its AUC is 0.5) even though it matches class probabilities perfectly.
[Figure 1: Comparison of reweighting schemes. Top: test log loss averaged over species with given values of test prevalence (panels: 0.00–0.04, 0.04–0.15, 0.15–0.70, and all species), for varying default prevalence; curves compare maxent weighted by default prevalence, maxent weighted by default × exp{−RE}, and the oracle baselines (Bernoulli according to test prevalence, maxent weighted by test prevalence). Bottom: for each value of test log loss, the range of default prevalence values that achieve it, for the same species groups; additional curves show BRT reweighted by default prevalence, BRT reweighted by default × exp{−RE}, and class-robust logistic regression.]
To test entropy-weighting as a general method for estimating class probabilities, we also evaluate boosted regression trees (BRT), which have the highest predictive accuracy along with maxent among species distribution modeling techniques [4]. In this application, BRT is used to construct a logistic model discriminating positive examples from unlabeled examples. Recent work [17] uses a more principled approach where unknown labels are fitted by an EM algorithm, but our preliminary runs had too low AUC values, so they are excluded from our comparison. We train BRT using the R package gbm on datasets weighted so that the total weight of positives is equal to the total weight of unlabeled examples, and then apply Elkan's reweighting scheme [5]. Specifically, the BRT result p̂_BRT(y | x) is transformed to

    p(y = 1 | x) = p(y=1) p̂_BRT(y=1 | x) / [ p(y=1) p̂_BRT(y=1 | x) + p(y=0) p̂_BRT(y=0 | x) ]

for two choices of p(y): default, p(y) = ν(y), and entropy-based (using p̂_ME).
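The reweighting transform itself is one line; the sketch below (hypothetical names, our illustration) applies it to an array of BRT class-1 outputs for a given choice of p(y = 1).

```python
import numpy as np

def elkan_reweight(p_brt1, p1):
    """Rescale class-1 probabilities p_hat_BRT(y=1|x), trained with balanced
    weights, to a new class prior p(y=1) = p1, following Elkan [5]."""
    num = p1 * np.asarray(p_brt1)
    return num / (num + (1.0 - p1) * (1.0 - np.asarray(p_brt1)))
```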
All three techniques yield state-of-the-art discrimination (see [13]) measured by the average AUC: maxent achieves an AUC of 0.7583; class-robust logistic regression 0.7451–0.7568; BRT 0.7545. Unlike maxent and BRT estimates, class-robust logistic estimates are not monotonically related, so they yield a different AUC for each default prevalence. However, log loss performance varies broadly according to the reweighting scheme. In the top portion of Fig. 1, we focus on maxent. Naive weighting by the default prevalence yields sharp peaks in performance around the best default prevalence. Entropy-based weighting yields broader peaks, so it is less sensitive to the default prevalence. The improvement diminishes as the true prevalence increases, but entropy-based weighting is never more sensitive. Thanks to its smaller sensitivity, entropy-based weighting outperforms naive weighting when a single default needs to be chosen for all species (the rightmost plot). Note that the optimal default values are higher for entropy-based weighting, because in one-class estimation the entropy-based prevalence is always smaller than the default (unless the estimate p̂_ME is uniform).

The improved sensitivity is demonstrated more clearly in the bottom portion of Fig. 1, now also including BRT and class-robust logistic regression. We see that BRT and maxent results are fairly similar, with BRT performing overall slightly better than maxent. Note that entropy-reweighted BRT relies both on BRT and maxent for its performance. A striking observation is the poor performance of class-robust logistic regression for species with larger prevalence values; it merits further investigation.
4 Conclusion and Discussion
To correct for unknown labeling bias in training data, we used robust Bayesian decision theory and
developed generative and discriminative approaches that optimize log loss under worst-case true
class proportions. We found that our approaches improve test performance on a benchmark dataset
for species distribution modeling, a one-class application with extreme labeling bias.
Acknowledgments. We would like to thank all of those who provided data used here: A. Ford,
CSIRO Atherton, Australia; M. Peck and G. Peck, Royal Ontario Museum; M. Cadman, Bird Studies Canada, Canadian Wildlife Service of Environment Canada; the National Vegetation Survey
Databank and the Allan Herbarium, New Zealand; Missouri Botanical Garden, especially R. Magill
and T. Consiglio; and T. Wohlgemuth and U. Braendi, WSL Switzerland.
References

[1] Bickel, S., M. Brückner, and T. Scheffer (2007). Discriminative learning for differing training and test distributions. In Proc. 24th Int. Conf. Machine Learning, pp. 161–168.
[2] Chawla, N. V., N. Japkowicz, and A. Kołcz (2004). Editorial: special issue on learning from imbalanced data sets. SIGKDD Explorations 6(1), 1–6.
[3] Dudík, M., S. J. Phillips, and R. E. Schapire (2007). Maximum entropy density estimation with generalized regularization and an application to species distribution modeling. J. Machine Learning Res. 8, 1217–1260.
[4] Elith, J., C. H. Graham, et al. (2006). Novel methods improve prediction of species' distributions from occurrence data. Ecography 29(2), 129–151.
[5] Elkan, C. (2001). The foundations of cost-sensitive learning. In Proc. 17th Int. Joint Conf. on Artificial Intelligence, pp. 973–978.
[6] Grünwald, P. D. and A. P. Dawid (2004). Game theory, maximum entropy, minimum discrepancy, and robust Bayesian decision theory. Ann. Stat. 32(4), 1367–1433.
[7] Guo, Q., M. Kelly, and C. H. Graham (2005). Support vector machines for predicting distribution of Sudden Oak Death in California. Ecol. Model. 182, 75–90.
[8] Haffner, P., S. Phillips, and R. Schapire (2005). Efficient multiclass implementations of L1-regularized maximum entropy. E-print arXiv:cs/0506101.
[9] Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica 47(1), 153–161.
[10] Jaynes, E. T. (1957). Information theory and statistical mechanics. Phys. Rev. 106(4), 620–630.
[11] Maloof, M. (2003). Learning when data sets are imbalanced and costs are unequal and unknown. In Proc. ICML'03 Workshop on Learning from Imbalanced Data Sets.
[12] Phillips, S. J. and M. Dudík (2008). Modeling of species distributions with Maxent: new extensions and a comprehensive evaluation. Ecography 31(2), 161–175.
[13] Phillips, S. J., M. Dudík, J. Elith, C. H. Graham, A. Lehmann, J. Leathwick, and S. Ferrier. Sample selection bias and presence-only models of species distributions: implications for selection of background and pseudo-absences. Ecol. Appl. To appear.
[14] Phillips, S. J., M. Dudík, and R. E. Schapire (2004). A maximum entropy approach to species distribution modeling. In Proc. 21st Int. Conf. Machine Learning, pp. 655–662. ACM Press.
[15] Phillips, S. J., M. Dudík, and R. E. Schapire (2007). Maxent software for species habitat modeling. http://www.cs.princeton.edu/~schapire/maxent.
[16] Shimodaira, H. (2000). Improving predictive inference under covariate shift by weighting the log-likelihood function. J. Stat. Plan. Infer. 90(2), 227–244.
[17] Ward, G., T. Hastie, S. Barry, J. Elith, and J. Leathwick (2008). Presence-only data and the EM algorithm. Biometrics. In press.
[18] Zadrozny, B. (2004). Learning and evaluating classifiers under sample selection bias. In Proc. 21st Int. Conf. Machine Learning, pp. 903–910. ACM Press.
[19] Zaniewski, A. E., A. Lehmann, and J. M. Overton (2002). Predicting species spatial distributions using presence-only data: A case study of native New Zealand ferns. Ecol. Model. 157, 261–280.
Stochastic Relational Models for
Large-scale Dyadic Data using MCMC
Shenghuo Zhu
Kai Yu
Yihong Gong
NEC Laboratories America, Cupertino, CA 95014, USA
{zsh, kyu, ygong}@sv.nec-labs.com
Abstract
Stochastic relational models (SRMs) [15] provide a rich family of choices for
learning and predicting dyadic data between two sets of entities. The models generalize matrix factorization to a supervised learning problem that utilizes attributes
of entities in a hierarchical Bayesian framework. Previously variational Bayes inference was applied for SRMs, which is, however, not scalable when the size of
either entity set grows to tens of thousands. In this paper, we introduce a Markov
chain Monte Carlo (MCMC) algorithm for equivalent models of SRMs in order to
scale the computation to very large dyadic data sets. Both superior scalability and
predictive accuracy are demonstrated on a collaborative filtering problem, which involves tens of thousands of users and half a million items.
1 Stochastic Relational Models
Stochastic relational models (SRMs) [15] are generalizations of Gaussian process (GP) models [11] to the relational domain, where each observation is a dyadic datum, indexed by a pair of entities. They model dyadic data by a multiplicative interaction of two Gaussian process priors.

Let U be the feature representation (or index) space of a set of entities. A pair-wise similarity in U is given by a kernel (covariance) function Σ : U × U → R. A Gaussian process (GP) defines a random function f : U → R, whose distribution is characterized by a mean function and the covariance function Σ, denoted by f ∼ N_∞(0, Σ)¹, where, for simplicity, we assume the mean to be the constant zero. A GP complies with the intuition regarding smoothness: if two entities u_i and u_j are similar according to Σ, then f(u_i) and f(u_j) are similar with high probability.
A domain of dyadic data must involve another set of entities; let it be represented (or indexed) by V. In a similar way, this entity set is associated with another kernel function Ω. For example, in a typical collaborative filtering domain, U represents users while V represents items; then Σ measures the similarity between users and Ω measures the similarity between items.
Being the relation between a pair of entities from different sets, a dyadic variable y is indexed by the product space U × V. Then an SRM aims to model y(u, v) by the following generative process.

Model 1. The generative model of an SRM:

1. Draw kernel functions Σ ∼ IW_∞(ν, Σ°) and Ω ∼ IW_∞(ν, Ω°);
2. For k = 1, …, d: draw random functions f_k ∼ N_∞(0, Σ) and g_k ∼ N_∞(0, Ω);
3. For each pair (u, v): draw y(u, v) ∼ p(y(u, v) | z(u, v), s²), where

    z(u, v) = (1/√d) Σ_{k=1}^{d} f_k(u) g_k(v) + b(u, v) .

¹ We denote an n-dimensional Gaussian distribution with a covariance matrix Σ by N_n(0, Σ). Then N_∞(0, Σ) explicitly indicates that a GP follows an "infinite-dimensional" Gaussian distribution.
In this model, IW_∞(ν, Σ°) and IW_∞(ν, Ω°) are hyper priors, whose details will be introduced later. p(y | z, s²) is the problem-specific noise model. For example, it can follow a Gaussian noise distribution, y ∼ N₁(z, s²), if y is numerical, or a Bernoulli distribution if y is binary. The function b(u, v) is the bias function over U × V. For simplicity, we assume b(u, v) = 0.
In the limit d → ∞, the model converges to a special case where f_k and g_k can be analytically marginalized out and z becomes a Gaussian process, z ∼ N_∞(0, Σ ⊗ Ω) [15], with the covariance between pairs being a tensor kernel

    K((u_i, v_s), (u_j, v_t)) = Σ(u_i, u_j) Ω(v_s, v_t) .

In another special case, if Σ and Ω are both fixed to be Dirac delta functions and U, V are finite sets, it is easy to see that the model reduces to probabilistic matrix factorization.
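In the d → ∞ limit, predictions could in principle be made with an ordinary GP whose covariance is the tensor kernel above. The sketch below is our illustration with arbitrary RBF choices for Σ and Ω (the paper does not fix specific kernels); it builds the covariance matrix restricted to a set of observed (u, v) index pairs.

```python
import numpy as np

def rbf(a, b, scale=1.0):
    """An illustrative RBF kernel between the row vectors of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * scale ** 2))

def tensor_kernel(U, V, pairs):
    """K((u_i, v_s), (u_j, v_t)) = Sigma(u_i, u_j) * Omega(v_s, v_t), restricted
    to the given list of (entity-in-U, entity-in-V) index pairs."""
    sigma, omega = rbf(U, U), rbf(V, V)
    iu, iv = zip(*pairs)
    return sigma[np.ix_(iu, iu)] * omega[np.ix_(iv, iv)]

# toy usage: covariance of z over three observed (user, item) pairs
rng = np.random.default_rng(0)
print(tensor_kernel(rng.normal(size=(4, 3)), rng.normal(size=(5, 3)),
                    [(0, 1), (2, 2), (3, 0)]))
```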
The hyper prior IW_∞(ν, Σ°) is called an inverted Wishart process; it generalizes the finite n-dimensional inverted Wishart distribution [2],

    IW_n(Σ | ν, Σ°) ∝ |Σ|^{−(ν+2n)/2} etr(−½ Σ⁻¹ Σ°) ,

where ν is the degree-of-freedom parameter and Σ° is a positive definite kernel matrix. We note that the above definition is different from the popular formulation [3] or [4] in the machine learning community. The advantage of this new notation is demonstrated by the following theorem [2].
Theorem 1. Let A ∼ IW_m(ν, K), with A and K positive definite, and let A and K be partitioned as

    A = [ A₁₁, A₁₂ ; A₂₁, A₂₂ ],    K = [ K₁₁, K₁₂ ; K₂₁, K₂₂ ],

where A₁₁ and K₁₁ are two n × n submatrices, n < m. Then A₁₁ ∼ IW_n(ν, K₁₁).
The new formulation of the inverted Wishart is consistent under marginalization. Therefore, similar to the way of deriving GPs from Gaussian distributions, we define a distribution of infinite-dimensional kernel functions, denoted by Σ ∼ IW_∞(ν, Σ°), such that any sub kernel matrix of size m × m follows Σ ∼ IW_m(ν, Σ°), where both Σ and Σ° are positive definite kernel functions. In the case when U and V are sets of entity indices, SRMs let Σ° and Ω° both be Dirac delta functions, i.e., any of their sub kernel matrices is an identity matrix.
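The marginalization property is easy to check numerically. In the sketch below (our check, not from the paper) we map the paper's IW_n(ν, Σ°), whose density exponent is −(ν + 2n)/2, onto scipy's inverse-Wishart, whose exponent is −(df + n + 1)/2, giving df = ν + n − 1; Theorem 1 then predicts the sub-block law.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)
nu, m, n = 5.0, 4, 2
K = np.eye(m) + 0.3 * np.ones((m, m))

# A ~ IW_m(nu, K) in the paper's convention corresponds to scipy df = nu + m - 1
draws = invwishart.rvs(df=nu + m - 1, scale=K, size=20000, random_state=rng)
sub_mean = draws[:, :n, :n].mean(axis=0)

# Theorem 1: A11 ~ IW_n(nu, K11), i.e. scipy df = nu + n - 1;
# the inverse-Wishart mean is scale / (df - dim - 1)
predicted = K[:n, :n] / ((nu + n - 1) - n - 1)
print(np.allclose(sub_mean, predicted, atol=0.05))  # expect True
```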
Similar to GP regression/classification, the major application of SRMs is supervised prediction based on observed relational values and input features of entities. Formally, let Y_I = {y(u, v) | (u, v) ∈ I} be the set of noisy observations, where I ⊂ U × V; the model aims to predict the noise-free values Z_O = {z(u, v) | (u, v) ∈ O} on O ⊂ U × V. As our computation is always on a finite set containing both I and O, from now on we only consider the finite subset U₀ × V₀, a finite support subset of U × V that contains I ∪ O. Accordingly we let Σ be the covariance matrix of Σ on U₀, and Ω be the covariance matrix of Ω on V₀.

Previously, a variational Bayesian method was applied to SRMs [15], which computes the maximum a posteriori estimates of Σ and Ω, given Y_I, and then predicts Z_O based on the estimated Σ and Ω.
There are two limitations of this empirical Bayesian approach: (1) the variational method is not a fully Bayesian treatment; ideally we wish to integrate out Σ and Ω. (2) The more critical issue is that the algorithm has complexity O(m³ + n³), with m = |U₀| and n = |V₀|, and is not scalable to a large relational domain where m or n exceeds several thousands. In this paper we introduce a fully Bayesian inference algorithm using Markov chain Monte Carlo sampling. By deriving equivalent sampling processes, we show the algorithms can be applied to a dataset 10³ times larger than in the previous work [15], and produce excellent accuracy.
In the rest of this paper, we present our algorithms for Bayesian inference of SRMs in Section 2.
Some related work is discussed in Section 3, followed by experiment results of SRMs in Section 4.
Section 5 concludes.
2 Bayesian Models and MCMC Inference
Bayesian Models and MCMC Inference
In this paper, we tackle the scalability issue with a fully Bayesian paradigm. We estimate the expectation of ZO directly from YI using Markov-chain Monte Carlo (MCMC) algorithm (specifically,
Gibbs sampling), instead of evaluating that from estimated ? or ?. Our contribution is in how to
make the MCMC inference more efficient for large scale data.
We first introduce some necessary notation here. Bold capital letters, e.g. X, indicate matrices. I(m)
is an identity matrix of size m ? m. Nd , Nm,d , IW m , ??2 are the multivariate normal distribution,
the matrix-variate normal distribution, the inverse-Wishart distribution, and the inverse chi-square
distribution, respectively.
2.1 Models with Non-informative Priors
Let r = |I|, m = |U₀| and n = |V₀|. It is assumed that d ≪ min(m, n) and that the observed set I is sparse, i.e., r ≪ mn. First, we consider the case of Σ° = αI_(m) and Ω° = βI_(n). Let {f_k} on U₀ be denoted by the matrix variate F of size m × d, and {g_k} on V₀ by the matrix variate G of size n × d. Then the generative model is written as Model 2 and depicted in Figure 1.
Model 2. The generative model of a matrix-variate SRM:

1. Draw Σ ∼ IW_m(ν, αI_(m)) and Ω ∼ IW_n(ν, βI_(n));
2. Draw F | Σ ∼ N_{m,d}(0, Σ ⊗ I_(d)) and G | Ω ∼ N_{n,d}(0, Ω ⊗ I_(d));
3. Draw s² ∼ χ⁻²(ρ, σ²);
4. Draw Y | F, G, s² ∼ N_{m,n}(Z, s²I_(m) ⊗ I_(n)), where Z = FGᵀ.

[Figure 1: graphical model of Model 2.]

Here N_{m,d} is the matrix-variate normal distribution of size m × d; ν, α, β, ρ, and σ² are scalar parameters of the model. A slight difference between this finite model and Model 1 is that the coefficient 1/√d is ignored for simplicity, because this coefficient can be absorbed by Σ or Ω.
As we can explicitly compute Pr(Σ | F), Pr(Ω | G), Pr(F | Y_I, G, Σ, s²), Pr(G | Y_I, F, Ω, s²), and Pr(s² | Y_I, F, G), we can apply a Gibbs sampling algorithm to compute Z_O. However, the computational time complexity is at least O(m³ + n³), which is not practical for large-scale data.
2.2 Gibbs Sampling Method
To overcome the inefficiency in sampling large covariance matrices, we rewrite the sampling process using the property of Theorem 2, taking advantage of d ≪ min(m, n).

Theorem 2. If
1. Σ ∼ IW_m(ν, αI_(m)) and F | Σ ∼ N_{m,d}(0, Σ ⊗ I_(d)),
2. K ∼ IW_d(ν, αI_(d)) and H | K ∼ N_{m,d}(0, I_(m) ⊗ K),
then the matrix variates F and H have the same distribution.

[Figure 2: illustration of Theorem 2.]

Proof sketch. The matrix variate F follows a matrix-variate t distribution, t(ν, 0, αI_(m), I_(d)), which is written as

    p(F) ∝ |I_(m) + (αI_(m))⁻¹ F (I_(d))⁻¹ Fᵀ|^{−(ν+m+d−1)/2} = |I_(m) + α⁻¹FFᵀ|^{−(ν+m+d−1)/2} .

The matrix variate H follows a matrix-variate t distribution, t(ν, 0, I_(m), αI_(d)), which can be written as

    p(H) ∝ |I_(m) + (I_(m))⁻¹ H (αI_(d))⁻¹ Hᵀ|^{−(ν+m+d−1)/2} = |I_(m) + α⁻¹HHᵀ|^{−(ν+m+d−1)/2} .

Thus, the matrix variates F and H have the same distribution.
This theorem allows us to sample a smaller covariance matrix K of size d × d on the column side instead of sampling a large covariance matrix Σ of size m × m on the row side. The translation is depicted in Figure 2. This theorem applies to G as well; thus we rewrite the model as Model 3 (Figure 3). A similar idea was used in our previous work [16].
Model 3. The alternative generative model of a matrix-variate SRM:

1. Draw K ∼ IW_d(ν, αI_(d)) and R ∼ IW_d(ν, βI_(d));
2. Draw F | K ∼ N_{m,d}(0, I_(m) ⊗ K) and G | R ∼ N_{n,d}(0, I_(n) ⊗ R);
3. Draw s² ∼ χ⁻²(ρ, σ²);
4. Draw Y | F, G, s² ∼ N_{m,n}(Z, s²I_(m) ⊗ I_(n)), where Z = FGᵀ.

[Figure 3: graphical model of Model 3.]

Let the column vector f_i be the i-th row of matrix F, and the column vector g_j be the j-th row of matrix G. In Model 3, the {f_i} are independent given K, G, and s². Similar independence applies to the {g_j} as well. The conditional posterior distributions of K, R, {f_i}, {g_j}, and s² can be easily computed; thus we can apply Gibbs sampling, and we call the resulting sampler for SRMs BSRM (for Bayesian SRM).

We use Gibbs sampling to compute the mean of Z_O, which is derived from the samples of FGᵀ. Because of the sparsity of I, each iteration of this sampling algorithm can be computed in O(d²r + d³(m + n)) time², which is a dramatic reduction from the previous time complexity O(m³ + n³).
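For concreteness, the following sketch (our simplified illustration, not the paper's implementation) performs one blocked Gibbs sweep of Model 3 for a small, dense, fully observed Y; the paper's algorithm instead exploits the sparsity of I, and we map the paper's inverse-Wishart convention onto scipy's via df = ν + d − 1 (an assumption consistent with the density in Section 1). The s² update is omitted for brevity.

```python
import numpy as np
from scipy.stats import invwishart

def gibbs_sweep(F, G, Y, s2, nu=3.0, alpha=1.0, beta=1.0, rng=None):
    """One blocked Gibbs sweep for Model 3 with a dense, fully observed Y.
    F: (m, d), G: (n, d), Y: (m, n). The s2 update is omitted for brevity."""
    rng = rng or np.random.default_rng()
    m, d = F.shape
    n = G.shape[0]
    # K | F ~ IW_d(nu + m, alpha*I_d + F'F); assumed scipy mapping df = (nu + m) + d - 1
    K = invwishart.rvs(df=nu + m + d - 1, scale=alpha * np.eye(d) + F.T @ F, random_state=rng)
    R = invwishart.rvs(df=nu + n + d - 1, scale=beta * np.eye(d) + G.T @ G, random_state=rng)
    # rows of F are conditionally independent given G, K, s2
    cov_f = np.linalg.inv(G.T @ G / s2 + np.linalg.inv(K))
    for i in range(m):
        F[i] = rng.multivariate_normal(cov_f @ (G.T @ Y[i] / s2), cov_f)
    cov_g = np.linalg.inv(F.T @ F / s2 + np.linalg.inv(R))
    for j in range(n):
        G[j] = rng.multivariate_normal(cov_g @ (F.T @ Y[:, j] / s2), cov_g)
    return F, G
```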
2.3 Models with Informative Priors
An important characteristic of SRMs is that they allow the inclusion of certain prior knowledge about entities into the model. Specifically, the prior information is encoded as the prior covariance parameters, i.e., Σ° and Ω°. In the general case, it is difficult to run the sampling process due to the size of Σ° and Ω°. We assume that Σ° and Ω° have a special form: Σ° = F°(F°)ᵀ + αI_(m), where F° is an m × p matrix, and Ω° = G°(G°)ᵀ + βI_(n), where G° is an n × q matrix, with the magnitude of p and q about the same as or less than that of d. This prior knowledge can be obtained from additional features of entities.

Although such an informative Σ° prevents us from directly sampling each row of F independently, as we do in Model 3, we can expand the matrix F of size m × d to (F, F°) of size m × (d + p) and derive an equivalent model in which the rows of F are conditionally independent given F°. Figure 4 illustrates this transformation.
Theorem 3. Let ν > p and Σ° = F°(F°)ᵀ + αI_(m), where F° is an m × p matrix. If
1. Σ ∼ IW_m(ν, Σ°) and F | Σ ∼ N_{m,d}(0, Σ ⊗ I_(d)),
2. K = [ K₁₁, K₁₂ ; K₂₁, K₂₂ ] ∼ IW_{d+p}(ν − p, αI_(d+p)) and H | K ∼ N_{m,d}(F°K₂₂⁻¹K₂₁, I_(m) ⊗ K₁₁·₂),
where K₁₁·₂ = K₁₁ − K₁₂K₂₂⁻¹K₂₁, then F and H have the same distribution.

[Figure 4: illustration of Theorem 3.]

Proof sketch. Consider the distribution

    (H₁, H₂) | K ∼ N_{m,d+p}(0, I_(m) ⊗ K).    (1)

Because H₁ | H₂ ∼ N_{m,d}(H₂K₂₂⁻¹K₂₁, I_(m) ⊗ K₁₁·₂), we have p(H) = p(H₁ | H₂ = F°). On the other hand, (H₁, H₂) follows a matrix-variate t distribution, (H₁, H₂) ∼ t_{m,d+p}(ν − p, 0, αI_(m), I_(d+p)). By Theorem 4.3.9 in [4], we have H₁ | H₂ ∼ t_{m,d}(ν, 0, αI_(m) + H₂H₂ᵀ, I_(d)) = t_{m,d}(ν, 0, Σ°, I_(d)), which implies p(F) = p(H₁ | H₂ = F°) = p(H).

²|Y − FGᵀ|²_I can be efficiently computed in O(dr) time.
The following corollary allows us to compute the posterior distribution of K efficiently.

Corollary 4. K | H ∼ IW_{d+p}(ν + m, αI_(d+p) + (H, F°)ᵀ(H, F°)).

Proof sketch. Because the normal distribution and the inverse Wishart distribution are conjugate, we can derive the posterior distribution of K from Eq. (1).

Thus, we can explicitly sample from the conditional posterior distributions, as listed in Algorithm 1 (BSRM/F, for BSRM with features) in the Appendix. We note that when p = q = 0, Algorithm 1 (BSRM/F) reduces to the exact algorithm for BSRM. Each iteration of this sampling algorithm can be computed in O(d²r + d³(m + n) + dpm + dqn) time.
2.4 Unblocking for Sampling Implementation
The blocking Gibbs sampling technique is commonly used to improve sampling efficiency by reducing the sample variance, per the Rao-Blackwell theorem (cf. [9]). However, blocking Gibbs sampling is not necessarily the most computationally efficient choice. To improve the computational efficiency of Algorithm 1, we use unblocked sampling to reduce the major computational cost, which lies in Step 2 and Step 4. We consider sampling each element of F conditionally. The sampling process is written as Step 4 and Step 9 of Algorithm 2, which is called BSRM/F with conditional Gibbs sampling. We can reduce the computational cost of each iteration to O(dr + d²(m + n) + dpm + dqn), which is comparable to other low-rank matrix factorization approaches. Though such a conditional sampling process increases the sample variance compared to Algorithm 1, we can afford more samples within a given amount of time due to its faster speed. Our experiments show that the overall computational cost of Algorithm 2 is usually less than that of Algorithm 1 when achieving the same accuracy. Additionally, since the {f_i} are independent, we can parallelize the for loops in Step 4 and Step 9 of Algorithm 2; a sketch of the element-wise update is shown below.
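The following sketch illustrates the element-wise update for a single entry F[i, k] (our illustration under the same naming assumptions; Algorithm 2 maintains the residuals ξ = Y − FGᵀ on the observed entries, and the Gaussian conditional follows by completing the square under the N_d(0, K) row prior).

```python
import numpy as np

def update_entry(F, G, xi, K_inv, s2, i, k, obs_j, rng):
    """Resample the single entry F[i, k]; xi holds Y - F G^T on observed entries.
    obs_j: array of observed column indices I_i for row i."""
    g = G[obs_j, k]
    r = xi[i, obs_j] + F[i, k] * g          # residuals with F[i, k]'s effect removed
    # Gaussian conditional from the N_d(0, K) row prior plus the likelihood
    prior_nat = -(K_inv[k] @ F[i] - K_inv[k, k] * F[i, k])   # precision * prior mean
    prec = (g @ g) / s2 + K_inv[k, k]
    mean = ((g @ r) / s2 + prior_nat) / prec
    new = rng.normal(mean, 1.0 / np.sqrt(prec))
    xi[i, obs_j] = r - new * g              # restore residuals with the new value
    F[i, k] = new
```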
3 Related Work
SRMs fall into a class of statistical latent-variable relational models that explain relations by latent
factors of entities. Recently a number of such models were proposed that can be roughly put into two
groups, depending on whether the latent factors are continuous or discrete: (1) Discrete latent-state
relational models: a large body of research infers latent classes of entities and explains the entity
relationship by the probability conditioned on the joint state of participating entities, e.g., [6, 14, 7,
1]. In another work [10], binary latent factors are modeled; (2) Continuous latent-variable relational
models: many such models assume relational data underlain by multiplicative effects between latent
variables of entities, e.g. [5]. A simple example is matrix factorization, which recently has become
very popular in collaborative filtering applications, e.g., [12, 8, 13].
The latest Bayesian probabilistic matrix factorization [13] reported the state-of-the-art accuracy of
matrix factorization on Netflix data. Interestingly, the model turns out to be similar to our Model 3
under the non-informative prior. This paper reveals the equivalence between different models and
offers a more general Bayesian framework that allows informative priors from entity features to
play a role. The framework also generalizes Gaussian processes [11] to a relational domain, where
a nonparametric prior for stochastic relational processes is described.
4 Experiments
Synthetic data: We compare BSRM under noninformative priors against two other algorithms: the fast max-margin matrix factorization (fMMMF) of [12] with a square loss, and SRM using the variational Bayesian approach (SRM-VB) of [15]. We generate a 30 × 20 random matrix (Figure 5(a)), then add Gaussian noise with σ² = 0.1 (Figure 5(b)). The root mean squared noise is 0.32. We select 70% of the elements as observed data and use the rest of the elements for testing. The reconstruction matrices and root mean squared errors (RMSEs) of predictions on the test elements are shown in Figures 5(c)-5(e). BSRM outperforms the variational approach of SRMs and fMMMF. Note that because of the log-determinant penalty of the inverse Wishart prior, SRM-VB enforces a smaller rank; thus the result of SRM-VB looks smoother than that of BSRM.
[Figure 5: Experiments on synthetic data; RMSEs are shown in parentheses. Panels: (a) original matrix, (b) with noise (0.32), (c) fMMMF (0.27), (d) SRM-VB (0.22), (e) BSRM (0.19).]
          User Mean   Movie Mean   fMMMF [12]   VB [8]
RMSE      1.425       1.387        1.186        1.165
MAE       1.141       1.103        0.943        0.915

Table 1: RMSE (root mean squared error) and MAE (mean absolute error) of the experiments on EachMovie data. All standard errors are 0.001 or less.
EachMovie data: We test the accuracy and the efficiency of our algorithms on EachMovie. The dataset contains 74,424 users' 2,811,718 ratings on 1,648 movies, i.e., about 2.29% of the entries are rated, on a zero-to-five-star scale. We put all the ratings into a matrix and randomly select 80% as observed data to predict the remaining ratings. The random selection was carried out 10 times independently. We compare our approach against several competing methods: 1) User Mean, predicting ratings by the sample mean of the same user's ratings; 2) Movie Mean, predicting ratings by the sample mean of ratings on the same movie; 3) fMMMF [12]; 4) VB introduced in [8], which is a probabilistic low-rank matrix factorization using variational approximation. Because of the data size, we cannot run the SRM-VB of [15]. We test the algorithms BSRM and BSRM/F, both following Algorithm 2, which run without and with features, respectively. The features used in BSRM/F are generated from the PCA result of the binary indicator matrix that indicates whether the user rated the movie. The 10 top factors of both the user side and the movie side are used as features, i.e., p = 10, q = 10. We run the experiments with different d = 16, 32, 100, 200, 300. The hyperparameters are set to some trivial values: ν = p + 1 = 11, α = β = 1, σ² = 1, and ρ = 1. The results are shown in Tables 1 and 2. We find that the accuracy improves as d is increased. Once d is greater than 100, the further improvement is not very significant within a reasonable amount of running time.
rank (d)        16       32       100      200      300
BSRM RMSE       1.0983   1.0924   1.0905   1.0903   1.0902
BSRM MAE        0.8411   0.8321   0.8335   0.8340   0.8393
BSRM/F RMSE     1.0952   1.0872   1.0848   1.0846   1.0852
BSRM/F MAE      0.8311   0.8280   0.8289   0.8293   0.8292

Table 2: RMSE (root mean squared error) and MAE (mean absolute error) of experiments on EachMovie data. All standard errors are 0.001 or less.
To compare the overall computational efficiency of the two Gibbs sampling procedures, Algorithm 1 and Algorithm 2, we run both algorithms and record the running time and the accuracy in RMSE. The dimensionality d is set to 100. We compute the average Z_O and evaluate it after a certain number of iterations. The evaluation results are shown in Figure 6. We run both algorithms for 100 iterations as the burn-in period, so that we have an independent starting sample. After the burn-in period, we restart the computation of the averaged Z_O and evaluate it; therefore there are abrupt points at 100 iterations in both cases. The results show that the overall accuracy of Algorithm 2 is better at any given time.

[Figure 6: Time-accuracy comparison of Algorithm 1 and Algorithm 2 (RMSE vs. running time in seconds).]
Netflix data: We also test the algorithms on the large collection of user ratings from netflix.com. The dataset consists of 100,480,507 ratings from 480,189 users on 17,770 movies. In addition, Netflix also provides a set of validation data with 1,408,395 ratings. In order to evaluate the prediction accuracy, there is a test set containing 2,817,131 ratings whose values are withheld and unknown to all the participants.

The features used in BSRM/F are generated from the PCA result of a binary matrix that indicates whether or not the user rated the movie. The top 30 user-side factors are used as features, and no movie-side factors are used, i.e., p = 30, q = 0. The hyperparameters are set to some trivial values: ν = p + 1 = 31, α = β = 1, σ² = 1, and ρ = 1. The results on the validation data are shown in Table 3. The submitted result of BSRM/F(400) achieves an RMSE of 0.8881 on the test set. The running time is around 21 minutes per iteration for 400 latent dimensions on an Intel Xeon 2GHz PC.
        VB [8]   BPMF [13]   BSRM (d=100/200/400)       BSRM/F (d=100/200/400)
RMSE    0.9141   0.8920      0.8930 / 0.8910 / 0.8895   0.8926 / 0.8880 / 0.8874

Table 3: RMSE (root mean squared error) of experiments on Netflix data.
5 Conclusions
In this paper, we study the fully Bayesian inference for stochastic relational models (SRMs), for
learning the real-valued relation between entities of two sets. We overcome the scalability issue
by transforming SRMs into equivalent models, which can be efficiently sampled. The experiments
show that the fully Bayesian inference outperforms the previously used variational Bayesian inference on SRMs. In addition, some techniques for efficient computation in this paper can be applied to
other large-scale Bayesian inferences, especially for models involving inverse-Wishart distributions.
Acknowledgment: We thank the reviewers and Sarah Tyler for constructive comments.
References
[1] E. Airoldi, D. Blei, S. Fienberg, and E. P. Xing. Mixed membership stochastic blockmodels.
Journal of Machine Learning Research, 2008.
[2] A. P. Dawid. Some matrix-variate distribution theory: notational considerations and a Bayesian
application. Biometrika, 68:265-274, 1981.
[3] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman &
Hall, New York, 1995.
[4] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman & Hall/CRC, 2000.
[5] P. Hoff. Multiplicative latent factor models for description and prediction of social networks.
Computational and Mathematical Organization Theory, 2007.
[6] T. Hofmann. Latent semantic models for collaborative filtering. ACM Trans. Inf. Syst.,
22(1):89-115, 2004.
[7] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of
concepts with an infinite relational model. In Proceedings of the 21st National Conference on
Artificial Intelligence (AAAI), 2006.
[8] Y. J. Lim and Y. W. Teh. Variational Bayesian approach to movie rating prediction. In Proceedings of KDD Cup and Workshop, 2007.
[9] J. S. Liu. Monte Carlo Strategies in Scientific Computing. Springer, 2001.
[10] E. Meeds, Z. Ghahramani, R. Neal, and S. T. Roweis. Modeling dyadic data with binary latent
factors. In Advances in Neural Information Processing Systems 19, 2007.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press,
2006.
[12] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative
prediction. In ICML, 2005.
[13] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain
Monte Carlo. In The 25th International Conference on Machine Learning, 2008.
[14] Z. Xu, V. Tresp, K. Yu, and H.-P. Kriegel. Infinite hidden relational models. In Proceedings of
the 22nd International Conference on Uncertainty in Artificial Intelligence (UAI), 2006.
[15] K. Yu, W. Chu, S. Yu, V. Tresp, and Z. Xu. Stochastic relational models for discriminative link
prediction. In Advances in Neural Information Processing Systems 19 (NIPS), 2006.
[16] S. Zhu, K. Yu, and Y. Gong. Predictive matrix-variate t models. In J. Platt, D. Koller, Y. Singer,
and S. Roweis, editors, NIPS '07: Advances in Neural Information Processing Systems 20,
pages 1721-1728. MIT Press, Cambridge, MA, 2008.
Appendix
Before presenting the algorithms, we introduce the necessary notation. Let I_i = {j | (i, j) ∈ I} and
I_j = {i | (i, j) ∈ I}. A matrix with subscripts indicates its submatrix, which consists of its entries at the
given indices in the subscripts; for example, X_{I_j, j} is a subvector of the j-th column of X whose row
indices are in the set I_j, and X_{·,j} is the j-th column of X (· indicates the full set). X_{i,j} denotes the (i, j)-th
entry of X. |X|_I^2 is the squared sum of the elements in set I, i.e. |X|_I^2 = Σ_{(i,j)∈I} X_{i,j}^2. We fill the unobserved
elements in Y with 0 for simplicity of notation.
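To make the index-set notation concrete, here is a minimal sketch (ours, not the authors' code; all names are hypothetical) that computes the masked residual Y − FGᵀ and the squared norm |·|_I², with the observed set I represented as a boolean mask:

```python
import numpy as np

def masked_sq_norm(X, M):
    """|X|_I^2: squared sum of the entries of X whose indices lie in I (mask M)."""
    return float(np.sum(X[M] ** 2))

def residual(Y, F, G, M):
    """Y - F G^T on the observed entries of I, and 0 elsewhere."""
    return np.where(M, Y - F @ G.T, 0.0)

# Tiny example: m=4 users, n=3 movies, d=2 latent dimensions.
rng = np.random.default_rng(0)
Y = rng.normal(size=(4, 3))
M = rng.random((4, 3)) < 0.5          # boolean mask for the observed set I
F = rng.normal(size=(4, 2))
G = rng.normal(size=(3, 2))
eps = residual(Y, F, G, M)
print(masked_sq_norm(eps, M))         # |Y - F G^T|_I^2
```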
Algorithm 1 BSRM/F: Gibbs sampling for SRM with features
1: Draw K ∼ IW_{d+p}(ν + m, γ I_{(d+p)} + (F, F*)^T (F, F*));
2: For each i ∈ U_0, draw f_i ∼ N_d(K_(i) (s^{-2} G^T Y_{i,·}^T + K_{11·2}^{-1} K_{12} K_{22}^{-1} f*_i), K_(i)),
   where K_(i) = (s^{-2} (G_{I_i,·})^T G_{I_i,·} + K_{11·2}^{-1})^{-1};
3: Draw R ∼ IW_{d+q}(ν + n, γ I_{(d+q)} + (G, G*)^T (G, G*));
4: For each j ∈ V_0, draw g_j ∼ N_d(R_(j) (s^{-2} F^T Y_{·,j} + R_{11·2}^{-1} R_{12} R_{22}^{-1} g*_j), R_(j)),
   where R_(j) = (s^{-2} (F_{I_j,·})^T F_{I_j,·} + R_{11·2}^{-1})^{-1};
5: Draw s^2 ∼ χ^{-2}(ν_0 + r, σ^2 + |Y − F G^T|_I^2).
Algorithm 2 BSRM/F: Conditional Gibbs sampling for SRM with features
1: ε_{i,j} ← Y_{i,j} − Σ_k F_{i,k} G_{j,k}, for (i, j) ∈ I;
2: Draw Λ ∼ W_{d+p}(ν + m + d + p − 1, (γ I_{(d+p)} + (F, F*)^T (F, F*))^{-1});
3: for each (i, k) ∈ U_0 × {1, ..., d} do
4:   Draw f ∼ N_1(λ^{-1}(s^{-2} ε_{i,I_i} G_{I_i,k} − F_{i,·} Λ_{·,k}), λ^{-1}), where λ = s^{-2} (G_{I_i,k})^T G_{I_i,k} + Λ_{k,k};
5:   Update F_{i,k} ← F_{i,k} + f, and ε_{i,j} ← ε_{i,j} − f G_{j,k}, for j ∈ I_i;
6: end for
7: Draw Ω ∼ W_{d+q}(ν + n + d + q − 1, (γ I_{(d+q)} + (G, G*)^T (G, G*))^{-1});
8: for each (j, k) ∈ V_0 × {1, ..., d} do
9:   Draw g ∼ N_1(ω^{-1}(s^{-2} ε_{I_j,j}^T F_{I_j,k} − G_{j,·} Ω_{·,k}), ω^{-1}), where ω = s^{-2} (F_{I_j,k})^T F_{I_j,k} + Ω_{k,k};
10:  Update G_{j,k} ← G_{j,k} + g, and ε_{i,j} ← ε_{i,j} − g F_{i,k}, for i ∈ I_j;
11: end for
12: Draw s^2 ∼ χ^{-2}(ν_0 + r, σ^2 + |ε|_I^2).
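To illustrate the conditional scheme, here is a minimal sketch (ours, not the authors' code; variable names are hypothetical) of the per-entry update in steps 4-5 of Algorithm 2. The residual ε is kept consistent with Y − FGᵀ by a rank-1 correction, so each entry update costs only O(|I_i|):

```python
import numpy as np

def update_F_entry(i, k, F, G, eps, obs, s2, Lam, rng):
    """One draw of the increment f for F[i, k] (steps 4-5 of Algorithm 2).

    F holds the concatenated row (F, F*)_{i,.}; obs = I_i lists the columns
    observed for row i; Lam is the Wishart draw from step 2.
    """
    Gk = G[obs, k]                                   # G_{I_i, k}
    prec = (Gk @ Gk) / s2 + Lam[k, k]                # conditional precision
    mean = ((eps[i, obs] @ Gk) / s2 - F[i, :] @ Lam[:, k]) / prec
    f = rng.normal(mean, np.sqrt(1.0 / prec))
    F[i, k] += f                                     # step 5: update the entry
    eps[i, obs] -= f * Gk                            # rank-1 residual correction
```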
| 3625 |@word determinant:1 nd:4 d2:3 covariance:10 dramatic:1 reduction:1 inefficiency:1 contains:2 liu:1 interestingly:1 outperforms:2 com:2 comparing:1 wd:2 chu:1 must:1 written:4 numerical:1 informative:5 kdd:1 hofmann:1 noninformative:1 update:2 v:2 half:1 generative:5 intelligence:2 item:3 accordingly:1 yamada:1 record:1 blei:1 provides:1 five:1 mathematical:1 become:1 consists:2 introduce:4 roughly:1 chi:1 salakhutdinov:1 becomes:1 notation:4 unobserved:1 transformation:1 tackle:1 biometrika:1 platt:1 positive:2 before:1 limit:1 parallelize:1 subscript:2 burn:4 shenghuo:1 equivalence:1 factorization:10 averaged:1 practical:1 acknowledgment:1 enforces:1 testing:1 definite:2 procedure:1 empirical:1 griffith:1 cannot:1 selection:1 gelman:1 put:2 equivalent:4 demonstrated:2 reviewer:1 latest:1 williams:1 independently:2 simplicity:4 abrupt:1 deriving:2 fill:1 bpmf:1 play:1 user:13 exact:1 gps:1 element:6 dawid:1 predicts:1 blocking:2 observed:4 role:1 thousand:3 intuition:1 transforming:1 complexity:4 ui:4 ideally:1 rewrite:2 predictive:2 efficiency:4 meed:1 easily:1 joint:1 represented:1 america:1 zo:7 fast:2 monte:5 artificial:2 hyper:4 whose:4 encoded:1 kai:1 larger:1 valued:1 rennie:1 gp:5 noisy:1 advantage:2 reconstruction:1 interaction:1 product:1 loop:1 roweis:2 description:1 participating:1 dirac:2 scalability:3 produce:1 a11:3 converges:1 tions:1 derive:2 depending:1 sarah:1 gong:2 ij:4 lowrank:1 a22:1 eq:1 involves:1 indicate:1 implies:1 gib:1 fij:5 attribute:1 stochastic:8 a12:1 explains:1 crc:1 generalization:1 around:1 hall:2 normal:4 tyler:1 predict:2 major:2 achieves:1 iw:22 mit:2 gaussian:11 always:1 aim:2 corollary:2 derived:1 improvement:1 notational:1 bernoulli:1 indicates:5 rank:3 inference:9 membership:1 nn:3 hidden:1 relation:3 koller:1 expand:1 issue:3 classification:1 overall:3 denoted:4 art:1 special:3 hoff:1 once:1 sampling:26 chapman:2 represents:2 yu:5 look:1 icml:1 randomly:1 r11:1 national:1 n1:3 freedom:1 organization:1 mnih:1 evaluation:1 pc:1 chain:4 necessary:3 indexed:3 increased:1 column:5 xeon:1 modeling:1 rao:1 cost:3 subset:2 entry:2 srm:14 reported:1 sv:1 synthetic:2 st:1 international:2 probabilistic:4 squared:6 aaai:1 nm:12 containing:2 wishart:7 dr:2 syst:1 star:1 bold:1 sec:1 coefficient:2 explicitly:3 multiplicative:3 later:1 h1:6 lab:1 root:5 netflix:5 bayes:1 start:1 participant:1 xing:1 rmse:10 collaborative:5 contribution:1 square:2 accuracy:10 variance:2 characteristic:1 efficiently:3 generalize:1 bayesian:19 none:1 carlo:5 submitted:1 unblocking:2 explain:1 definition:1 against:2 associated:1 proof:3 sampled:1 dataset:3 treatment:1 popular:2 knowledge:2 lim:1 infers:1 improves:1 dimensionality:1 supervised:2 follow:1 formulation:2 though:1 sketch:3 hand:1 defines:1 scientific:1 grows:1 dqn:2 usa:1 effect:1 k22:3 concept:1 analytically:1 laboratory:1 semantic:1 neal:1 conditionally:2 presenting:1 variational:8 wise:1 consideration:1 recently:2 fi:4 superior:1 srms:15 million:1 cupertino:1 discussed:1 slight:1 mae:5 significant:1 cup:1 gibbs:10 cambridge:1 smoothness:1 fk:4 inclusion:1 f0:1 similarity:3 v0:7 gj:10 add:1 posterior:5 multivariate:1 nagar:1 inf:1 certain:2 binary:5 vt:2 yi:7 inverted:3 additional:1 greater:1 paradigm:1 period:2 u0:7 smoother:1 ii:3 full:1 reduces:2 eachmovie:4 exceeds:1 faster:1 characterized:1 offer:1 parenthesis:1 prediction:7 scalable:2 regression:1 involving:1 expectation:1 iteration:6 kernel:9 addition:2 rest:2 comment:1 gii:5 dpm:2 easy:1 marginalization:1 variate:16 independence:1 carlin:1 competing:1 
reduce:2 regarding:1 idea:1 tm:3 yihong:1 whether:3 pca:2 penalty:1 york:1 afford:1 ignored:1 involve:1 listed:1 amount:2 nonparametric:1 gfi:1 ten:2 tenenbaum:1 generate:1 xij:1 r12:1 delta:2 estimated:2 per:1 r22:1 discrete:2 group:1 achieving:1 capital:1 d3:2 sum:1 run:6 inverse:5 letter:1 uncertainty:1 named:1 family:1 reasonable:1 ueda:1 utilizes:1 draw:21 appendix:2 vb:8 comparable:1 submatrix:1 followed:1 datum:1 n3:3 speed:1 min:2 according:2 conjugate:1 smaller:2 partitioned:1 pr:5 fienberg:1 computationally:1 previously:3 ygong:1 turn:1 hh:1 singer:1 complies:1 end:4 generalizes:2 apply:1 hierarchical:1 alternative:1 original:1 top:2 remaining:1 running:4 denotes:1 marginalized:1 kyu:1 ghahramani:1 uj:4 especially:1 tensor:1 move:1 strategy:1 thank:1 link:1 entity:20 restart:1 etr:1 kemp:1 trivial:2 index:4 relationship:1 modeled:1 difficult:1 gk:4 implementation:1 stern:1 unknown:1 teh:1 observation:2 markov:4 finite:6 withheld:1 relational:17 community:1 rating:12 introduced:2 pair:5 blackwell:1 subvector:1 nip:2 trans:1 kriegel:1 usually:1 sparsity:1 max:1 critical:1 predicting:3 indicator:1 ndimensional:1 zhu:2 mn:1 improve:2 movie:9 rated:2 concludes:1 carried:1 tresp:2 prior:14 fully:5 loss:1 mixed:1 limitation:1 filtering:4 srebro:1 rmses:2 h2:8 integrate:1 validation:2 degree:1 consistent:1 rubin:1 editor:1 translation:1 row:6 free:1 rasmussen:1 bias:1 side:6 fall:1 absolute:2 sparse:1 k12:4 fg:5 ghz:1 overcome:2 dimension:1 evaluating:1 rich:1 computes:1 commonly:1 collection:1 social:1 reveals:1 uai:1 assumed:1 discriminative:1 xi:2 continuous:2 latent:12 table:5 additionally:1 ca:1 excellent:1 domain:5 blockmodels:1 s2:14 noise:6 dyadic:8 body:1 zsh:1 xu:2 intel:1 ff:1 sub:3 wish:1 a21:1 complexity2:1 theorem:11 minute:1 specific:1 k21:5 gupta:1 k11:9 workshop:1 nec:2 magnitude:1 illustrates:1 conditioned:1 margin:2 depicted:2 absorbed:1 prevents:1 scalar:1 applies:2 springer:1 acm:1 ma:1 conditional:5 identity:2 typical:1 infinite:4 specifically:2 reducing:1 called:2 m3:3 formally:1 select:2 support:1 constructive:1 evaluate:3 mcmc:5 |
2,897 | 3,626 | A Convergent O(n) Algorithm
for Off-policy Temporal-difference Learning
with Linear Function Approximation
Richard S. Sutton, Csaba Szepesv?ari?, Hamid Reza Maei
Reinforcement Learning and Artificial Intelligence Laboratory
Department of Computing Science
University of Alberta
Edmonton, Alberta, Canada T6G 2E8
Abstract
We introduce the first temporal-difference learning algorithm that is stable with
linear function approximation and off-policy training, for any finite Markov decision process, behavior policy, and target policy, and whose complexity scales
linearly in the number of parameters. We consider an i.i.d. policy-evaluation setting in which the data need not come from on-policy experience. The gradient
temporal-difference (GTD) algorithm estimates the expected update vector of the
TD(0) algorithm and performs stochastic gradient descent on its L2 norm. We
prove that this algorithm is stable and convergent under the usual stochastic approximation conditions to the same least-squares solution as found by the LSTD,
but without LSTD?s quadratic computational complexity. GTD is online and incremental, and does not involve multiplying by products of likelihood ratios as in
importance-sampling methods.
1
Off-policy learning methods
Off-policy methods have an important role to play in the larger ambitions of modern reinforcement
learning. In general, updates to a statistic of a dynamical process are said to be ?off-policy? if
their distribution does not match the dynamics of the process, particularly if the mismatch is due
to the way actions are chosen. The prototypical example in reinforcement learning is the learning
of the value function for one policy, the target policy, using data obtained while following another
policy, the behavior policy. For example, the popular Q-learning algorithm (Watkins 1989) is an offpolicy temporal-difference algorithm in which the target policy is greedy with respect to estimated
action values, and the behavior policy is something more exploratory, such as a corresponding greedy policy. Off-policy methods are also critical to reinforcement-learning-based efforts to model
human-level world knowledge and state representations as predictions of option outcomes (e.g.,
Sutton, Precup & Singh 1999; Sutton, Rafols & Koop 2006).
Unfortunately, off-policy methods such as Q-learning are not sound when used with approximations
that are linear in the learned parameters?the most popular form of function approximation in reinforcement learning. Counterexamples have been known for many years (e.g., Baird 1995) in which
Q-learning?s parameters diverge to infinity for any positive step size. This is a severe problem in
so far as function approximation is widely viewed as necessary for large-scale applications of reinforcement learning. The need is so great that practitioners have often simply ignored the problem
and continued to use Q-learning with linear function approximation anyway. Although no instances
?
Csaba Szepesv?ari is on leave from MTA SZTAKI.
1
of absolute divergence in applications have been reported in the literature, the potential for instability
is disturbing and probably belies real but less obvious problems.
The stability problem is not specific to reinforcement learning. Classical dynamic programming
methods such as value and policy iteration are also off-policy methods and also diverge on some
problems when used with linear function approximation. Reinforcement learning methods are actually an improvement over conventional dynamic programming methods in that at least they can
be used stably with linear function approximation in their on-policy form. The stability problem is
also not due to the interaction of control and prediction, or to stochastic approximation effects; the
simplest counterexamples are for deterministic, expected-value-style, synchronous policy evaluation
(see Baird 1995; Sutton & Barto 1998).
Prior to the current work, the possibility of instability could not be avoided whenever four individually desirable algorithmic features were combined: 1) off-policy updates, 2) temporal-difference
learning, 3) linear function approximation, and 4) linear complexity in memory and per-time-step
computation. If any one of these four is abandoned, then stable methods can be obtained relatively
easily. But each feature brings value and practitioners are loath to give any of them up, as we discuss
later in a penultimate related-work section. In this paper we present the first algorithm to achieve
all four desirable features and be stable and convergent for all finite Markov decision processes, all
target and behavior policies, and all feature representations for the linear approximator. Moreover,
our algorithm does not use importance sampling and can be expected to be much better conditioned
and of lower variance than importance sampling methods. Our algorithm can be viewed as performing stochastic gradient-descent in a novel objective function whose optimum is the least-squares TD
solution. Our algorithm is also incremental and suitable for online use just as are simple temporaldifference learning algorithms such as Q-learning and TD(?) (Sutton 1988). Our algorithm can be
broadly characterized as a gradient-descent version of TD(0), and accordingly we call it GTD(0).
2
Sub-sampling and i.i.d. formulations of temporal-difference learning
In this section we formulate the off-policy policy-evaluation problem for one-step temporaldifference learning such that the data consists of independent, identically-distributed (i.i.d.) samples. We start by considering the standard reinforcement learning framework, in which a learning
agent interacts with an environment consisting of a finite Markov decision process (MDP). At each
of a sequence of discrete time steps, t = 1, 2, . . ., the environment is in a state st ? S, the agent
chooses an action at ? A, and then the environment emits a reward rt ? R, and transitions to its
next state st+1 ? S. The state and action sets are finite. State transitions are stochastic and dependent on the immediately preceding state and action. Rewards are stochastic and dependent on the
preceding state and action, and on the next state. The agent process generating the actions is termed
the behavior policy. To start, we assume a deterministic target policy ? : S ? A. The objective is
to learn an approximation to its state-value function:
"?
#
X
V ? (s) = E?
(1)
? t?1 rt |s1 = s ,
t=1
where ? ? [0, 1) is the discount rate. The learning is to be done without knowledge of the process
dynamics and from observations of a single continuous trajectory with no resets.
In many problems of interest the state set is too large for it to be practical to approximate the value of
each state individually. Here we consider linear function approximation, in which states are mapped
to feature vectors with fewer components than the number of states. That is, for each state s ? S
there is a corresponding feature vector ?(s) ? Rn , with n |S|. The approximation to the value
function is then required to be linear in the feature vectors and a corresponding parameter vector
? ? Rn :
V ? (s) ? ?>?(s).
(2)
Further, we assume that the states st are not visible to the learning agent in any way other than
through the feature vectors. Thus this function approximation formulation can include partialobservability formulations such as POMDPs as a special case.
The environment and the behavior policy together generate a stream of states, actions and rewards, s1 , a1 , r1 , s2 , a2 , r2 , . . ., which we can break into causally related 4-tuples, (s1 , a1 , r1 , s01 ),
2
(s2 , a2 , r2 , s02 ), . . . , where s0t = st+1 . For some tuples, the action will match what the target policy
would do in that state, and for others it will not. We can discard all of the latter as not relevant to
the target policy. For the former, we can discard the action because it can be determined from the
state via the target policy. With a slight abuse of notation, let sk denote the kth state in which an
on-policy action was taken, and let rk and s0k denote the associated reward and next state. The kth
on-policy transition, denoted (sk , rk , s0k ), is a triple consisting of the starting state of the transition,
the reward on the transition, and the ending state of the transition. The corresponding data available
to the learning algorithm is the triple (?(sk ), rk , ?(s0k )).
The MDP under the behavior policy is assumed to be ergodic, so that it determines a stationary
state-occupancy distribution ?(s) = limk?? P r{sk = s}. For any state s, the MDP and target
policy together determine an N ? N state-transition-probability matrix P , where pss0 = P r{s0k =
s0 |sk = s}, and an N ? 1 expected-reward vector R, where Rs = E[rk |sk = s]. These two
together completely characterize the statistics of on-policy transitions, and all the samples in the
sequence of (?(sk ), rk , ?(s0k )) respect these statistics. The problem still has a Markov structure
in that there are temporal dependencies between the sample transitions. In our analysis we first
consider a formulation without such dependencies, the i.i.d. case, and then prove that our results
extend to the original case.
In the i.i.d. formulation, the states sk are generated independently and identically distributed according to an arbitrary probability distribution ?. From each sk , a corresponding s0k is generated
according to the on-policy state-transition matrix, P , and a corresponding rk is generated according
to an arbitrary bounded distribution with expected value Rsk . The final i.i.d. data sequence, from
which an approximate value function is to be learned, is then the sequence (?(sk ), rk , ?(s0k )), for
k = 1, 2, . . . Further, because each sample is i.i.d., we can remove the indices and talk about a single
tuple of random variables (?, r, ?0 ) drawn from ?.
It remains to define the objective of learning. The TD error for the linear setting is
? = r + ??>?0 ? ?>?.
(3)
Given this, we define the one-step linear TD solution as any value of ? at which
0 = E[??] = ?A? + b,
(4)
where A = E ?(? ? ??0 )> and b = E[r?]. This is the parameter value to which the linear TD(0)
algorithm (Sutton 1988) converges under on-policy training, as well as the value found by LSTD(0)
(Bradtke & Barto 1996) under both on-policy and off-policy training. The TD solution is always a
fixed-point of the linear TD(0) algorithm, but under off-policy training it may not be stable; if ? does
not exactly satisfy (4), then the TD(0) algorithm may cause it to move away in expected value and
eventually diverge to infinity.
3
The GTD(0) algorithm
We next present the idea and gradient-descent derivation leading to the GTD(0) algorithm. As discussed above, the vector E[??] can be viewed as an error in the current solution ?. The vector should
be zero, so its norm is a measure of how far we are away from the TD solution. A distinctive feature of our gradient-descent analysis of temporal-difference learning is that we use as our objective
function the L2 norm of this vector:
>
J(?) = E[??] E[??] .
(5)
This objective function is quadratic and unimodal; it?s minimum value of 0 is achieved when
E[??] = 0, which can always be achieved. The gradient of this objective function is
?? J(?)
=
2(?? E[??])E[??]
>
= 2E ?(?? ?)> E[??]
>
= ?2E ?(? ? ??0 )> E[??] .
(6)
This last equation is key to our analysis. We would like to take a stochastic gradient-descent approach, in which a small change is made on each sample in such a way that the expected update
3
is the direction opposite to the gradient. This is straightforward if the gradient can be written as a
single expected value, but here we have a product of two expected values. One cannot sample both
of them because the sample product will be biased by their correlation. However, one could store
a long-term, quasi-stationary estimate of either of the expectations and then sample the other. The
question is, which expectation should be estimated and stored, and which should be sampled? Both
ways seem to lead to interesting learning algorithms.
First let us consider the algorithm obtained
by forming and storing a separate estimate of the first
expectation, that is, of the matrix A = E ?(? ? ??0 )> . This matrix is straightforward to estimate
from experience as a simple arithmetic average of all previously observed sample outer products
?(? ? ??0 )> . Note that A is a stationary statistic in any fixed-policy policy-evaluation problem; it
does not depend on ? and would not need to be re-estimated if ? were to change. Let Ak be the
estimate of A after observing the first k samples, (?1 , r1 , ?01 ), . . . , (?k , rk , ?0k ). Then this algorithm
is defined by
k
1X
Ak =
?i (?i ? ??0i )>
(7)
k i=1
along with the gradient descent rule:
?k+1 = ?k + ?k A>
k ? k ?k ,
k ? 1,
(8)
where ?1 is arbitrary, ?k = rk + ??k>?0k ? ?k>?k , and ?k > 0 is a series of step-size parameters,
possibly decreasing over time. We call this algorithm A> TD(0) because it is essentially conventional
TD(0) prefixed by an estimate of the matrix A> . Although we find this algorithm interesting, we do
not consider it further here because it requires O(n2 ) memory and computation per time step.
The second path to a stochastic-approximation algorithm for estimating the gradient (6) is to form
and
of the second expectation, the vector E[??], and to sample the first expectation,
store an estimate
E ?(? ? ??0 )> . Let uk denote the estimate of E[??] after observing the first k ? 1 samples, with
u1 = 0. The GTD(0) algorithm is defined by
uk+1 = uk + ?k (?k ?k ? uk )
(9)
and
?k+1 = ?k + ?k (?k ? ??0k )?>
(10)
k uk ,
where ?1 is arbitrary, ?k is as in (3) using ?k , and ?k > 0 and ?k > 0 are step-size parameters,
possibly decreasing over time. Notice that if the product is formed right-to-left, then the entire
computation is O(n) per time step.
4
Convergence
The purpose of this section is to establish that GTD(0) converges with probability one to the TD
solution in the i.i.d. problem formulation under standard assumptions. In particular, we have the
following result:
Theorem 4.1 (Convergence of GTD(0)). Consider the GTD(0) iteration
(9,10) with
P?
P?step-size sequences ?k and ?k satisfying ?k = ??k , ? > 0, ?k , ?k ? (0, 1], k=0 ?k = ?, k=0 ?k2 < ?.
Further assume that (?k, rk , ?0k ) is an i.i.d. sequence with uniformly bounded second moments. Let
A = E ?k (?k ? ??0k )> and b = E[rk ?k ] (note that A and b are well-defined because the distribution of (?k , rk , ?0k ) does not depend on the sequence index k). Assume that A is non-singular. Then
the parameter vector ?k converges with probability one to the TD solution (4).
Proof. We use the ordinary-differential-equation (ODE) approach (Borkar & Meyn 2000). First,
we rewrite the algorithm?s two iterations as a single?iteration in a combined parameter vector with
> >
2n components ?>
k = (vk , ?k ), where vk = uk / ?, and a new reward-related vector with 2n
>
>
components gk+1
= (rk ?>
k , 0 ):
?
?k+1 = ?k + ?k ? (Gk+1 ?k + gk+1 ) ,
where
Gk+1 =
?
? ?I
(?k ? ??0k )?>
k
4
?k (??0k ? ?k )>
0
.
Let G = E[Gk ] and g = E[gk ]. Note that G and g are well-defined as by the assumption the process
{?k , rk , ?0k }k is i.i.d. In particular,
?
? ? I ?A
b
G=
, g=
.
0
A>
0
Further, note that (4) follows from
G? + g = 0,
(11)
where ?> = (v >, ?>).
Now
? = ?k +
? we apply Theorem 2.2 of Borkar & Meyn (2000). For this purpose we write ?k+1
?k ?(G?k +g+(Gk+1 ?G)?k +(gk+1 ?g)) = ?k +?k0 (h(?k )+Mk+1 ), where ?k0 = ?k ?, h(?) =
g + G? and Mk+1 = (Gk+1 ? G)?k + gk+1 ? g. Let Fk = ?(?1 , M1 , . . . , ?k?1 , Mk ). Theorem 2.2
requires the verification of the following conditions: (i) The function h is Lipschitz and h? (?) =
limr?? h(r?)/r is well-defined for every ? ? R2n ; (ii-a) The sequence
(Mk , Fk ) is a martingale
difference sequence, and (ii-b) for some C0 > 0, E kMk+1 k2 | Fk ? C0 (1 + k?k k2 ) holds for
P?
any
parameter vector ?1 ; (iii) The sequence ?k0 satisfies 0 < ?k0 ? 1, k=1 ?k0 = ?,
P?initial
0 2
k=1 (?k ) < +?; and (iv) The ODE ?? = h(?) has a globally asymptotically stable equilibrium.
Clearly, h(?) is Lipschitz with coefficient kGk and h? (?) = G?. By construction, (Mk , Fk )
satisfies E[Mk+1 |Fk ] = 0 and Mk ? Fk , i.e., it is a martingale difference sequence. Condition
(ii-b) can be shown to hold by a simple application of the triangle inequality and the boundedness of
the the second moments of (?k , rk , ?0k ). Condition (iii) is satisfied by our conditions on the step-size
sequences ?k , ?k . Finally, the last condition (iv) will follow from the elementary theory of linear
differential equations if we can show that the real parts of all the eigenvalues of G are negative.
First, let us show that G is non-singular. Using the determinant rule for partitioned matrices1 we
get det(G) = det(A> A) 6= 0. This indicates that all the eigenvalues of G are non-zero. Now,
let ? ? C, ? 6= 0 be an eigenvalue of G with corresponding normalized eigenvector x ? C2n ;
2
that is, kxk = x? x = 1, where x? is the complex conjugate of x. Hence x? Gx = ?. Let
?
2
>
> >
x = (x1 , x2 ), where x1 , x2 ? Cn . Using the definition of G, ? = x? Gx = ? ?kx1 k +
x?1 Ax2 ? x?2 A> x1 . Because A is real, A? = A> , and it follows that (x?1 Ax2 )? = x?2 A> x1 . Thus,
?
2
Re(?) = Re(x? Gx) = ? ?kx1 k ? 0. We are now done if we show that x1 cannot be zero. If
?
x1 = 0, then from ? = x Gx we get that ? = 0, which contradicts with ? 6= 0.
The next result concerns the convergence of GTD(0) when (φ_k, r_k, φ'_k) is obtained by the off-policy
sub-sampling process described originally in Section 2. We make the following assumption:

Assumption A1. The behavior policy π_b (the generator of the actions a_t) selects all actions of the target
policy π with positive probability in every state, and the target policy is deterministic.

This assumption is needed to ensure that the sub-sampled process s_k is well-defined and that the
obtained sample is of "high quality". Under this assumption it holds that s_k is again a Markov chain,
by the strong Markov property of Markov processes (as the times selected when actions correspond
to those of the behavior policy form Markov times with respect to the filtration defined by the original
process s_t). The following theorem shows that the conclusion of the previous result continues to hold
in this case:
Theorem 4.2 (Convergence of GTD(0) with a sub-sampled process). Assume A1. Let the parameters θ_k, u_k be updated by (9,10). Further assume that (φ_k, r_k, φ'_k) is such that E[||φ_k||^2 | s_{k−1}],
E[r_k^2 | s_{k−1}], E[||φ'_k||^2 | s_{k−1}] are uniformly bounded. Assume that the Markov chain (s_k) is aperiodic and irreducible, so that lim_{k→∞} P(s_k = s' | s_0 = s) = μ(s') exists and is unique. Let s be a
state randomly drawn from μ, and let s' be a state obtained by following π for one time step in the
MDP from s. Further, let r(s, s') be the reward incurred. Let A = E[φ(s)(φ(s) − γφ(s'))^T] and
b = E[r(s, s')φ(s)]. Assume that A is non-singular. Then the parameter vector θ_k converges with
probability one to the TD solution (4), provided that s_1 ∼ μ.
Proof. The proof of Theorem 4.1 goes through without any changes once we observe that G =
E[G_{k+1} | F_k] and g = E[g_{k+1} | F_k].
¹ According to this rule, if A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{m×n}, D ∈ R^{m×m}, then for F = [A B; C D] ∈ R^{(n+m)×(n+m)}, det(F) = det(A) det(D − C A^{−1} B).
The condition that (s_k) is aperiodic and irreducible guarantees the existence of the steady state
distribution μ. Further, the aperiodicity and irreducibility of (s_k) follows from the same property of
the original process (s_t). For further discussion of these conditions cf. Section 6.3 of Bertsekas and
Tsitsiklis (1996).
With considerably more work the result can be extended to the case when s_1 follows an arbitrary
distribution. This requires an extension of Theorem 2.2 of Borkar and Meyn (2000) to processes of
the form ρ_{k+1} = ρ_k + α_k(h(ρ_k) + M_{k+1} + e_{k+1}), where e_{k+1} is a fast decaying perturbation (see, e.g.,
the proof of Proposition 4.8 of Bertsekas and Tsitsiklis (1996)).
5 Extensions to action values, stochastic target policies, and other sample weightings
The GTD algorithm extends immediately to the case of off-policy learning of action-value functions.
For this assume that a behavior policy π_b is followed that samples all actions in every state with
positive probability. Let the target policy to be evaluated be π. In this case the basis functions are
dependent on both the states and actions: φ : S × A → R^n. The learning equations are unchanged,
except that φ_t and φ'_t are redefined as follows:

    φ_t = φ(s_t, a_t),    (12)
    φ'_t = Σ_a π(s_{t+1}, a) φ(s_{t+1}, a).    (13)

(We use time indices t denoting physical time.) Here π(s, a) is the probability of selecting action
a in state s under the target policy π. Let us call the resulting algorithm "one-step gradient-based
Q-evaluation," or GQE(0).
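Here is a small sketch (ours, not reference code) of how (12)-(13) can be computed; the resulting vectors feed directly into the unchanged GTD(0) updates (9)-(10):

```python
import numpy as np

def gqe_features(phi_sa, s_t, a_t, s_next, pi, n_actions):
    """phi_sa(s, a) returns a feature vector; pi[s, a] is the target policy."""
    phi_t = phi_sa(s_t, a_t)                               # equation (12)
    phi_next = sum(pi[s_next, a] * phi_sa(s_next, a)
                   for a in range(n_actions))              # equation (13)
    return phi_t, phi_next
```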
Theorem 5.1 (Convergence of GQE(0)). Assume that s_t is a state sequence generated by following some stationary policy π_b in a finite MDP. Let r_t be the corresponding sequence of rewards,
and let φ_t, φ'_t be given by the respective equations (12) and (13), and assume that E[||φ_t||^2 | s_{t−1}],
E[r_t^2 | s_{t−1}], E[||φ'_t||^2 | s_{t−1}] are uniformly bounded. Let the parameters θ_t, u_t be updated by Equations (9) and (10). Assume that the Markov chain (s_t) is aperiodic and irreducible, so that
lim_{t→∞} P(s_t = s' | s_0 = s) = μ(s') exists and is unique. Let s be a state randomly drawn from
μ, a be an action chosen by π_b in s, let s' be the next state obtained and let a' = π(s') be the
action chosen by the target policy in state s'. Further, let r(s, a, s') be the reward incurred in this
transition. Let A = E[φ(s, a)(φ(s, a) − γφ(s', a'))^T] and b = E[r(s, a, s')φ(s, a)]. Assume that
A is non-singular. Then the parameter vector θ_t converges with probability one to a TD solution
(4), provided that s_1 is selected from the steady-state distribution μ.
The proof is almost identical to that of Theorem 4.2, and hence it is omitted.
Our main convergence results are also readily generalized to stochastic target policies by replacing
the sub-sampling process described in Section 2 with a sample-weighting process. That is, instead of
including or excluding transitions depending upon whether the action taken matches a deterministic
policy, we include all transitions but give each a weight. For example, we might let the weight w_t for
time step t be equal to the probability π(s_t, a_t) of taking the action actually taken under the target
policy. We can consider the i.i.d. samples now to have four components (φ_k, r_k, φ'_k, w_k), with the
update rules (9) and (10) replaced by

    u_{k+1} = u_k + β_k (δ_k φ_k − u_k) w_k,    (14)

and

    θ_{k+1} = θ_k + α_k (φ_k − γφ'_k) φ_k^T u_k w_k.    (15)

Each sample is also weighted by w_k in the expected values, such as that defining the TD solution
(4). With these changes, Theorems 4.1 and 4.2 go through immediately for stochastic policies. The
reweighting is, in effect, an adjustment to the i.i.d. sampling distribution, μ, and thus our results
hold because they hold for all μ. The choice w_t = π(s_t, a_t) is only one possibility, notable for its
equivalence to our original case if the target policy is deterministic. Another natural weighting is
w_t = π(s_t, a_t)/π_b(s_t, a_t), where π_b is the behavior policy. This weighting may result in the TD
solution (4) better matching the target policy's value function (1).
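For completeness, a sketch (ours) of the weighted updates (14)-(15); the weight w_k simply scales both increments (inputs are numpy arrays):

```python
def gtd0_weighted_step(theta, u, phi, r, phi_next, w, alpha, beta, gamma):
    """One weighted GTD(0) update, equations (14)-(15)."""
    delta = r + gamma * (theta @ phi_next) - theta @ phi
    u += beta * (delta * phi - u) * w                          # update (14)
    theta += alpha * (phi - gamma * phi_next) * (phi @ u) * w  # update (15)
    return theta, u
```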
6 Related work
There have been several prior attempts to attain the four desirable algorithmic features mentioned at
the beginning of this paper (off-policy stability, temporal-difference learning, linear function approximation, and O(n) complexity), but none has been completely successful.
One idea for retaining all four desirable features is to use importance sampling techniques to reweight off-policy updates so that they are in the same direction as on-policy updates in expected
value (Precup, Sutton & Dasgupta 2001; Precup, Sutton & Singh 2000). Convergence can sometimes then be assured by existing results on the convergence of on-policy methods (Tsitsiklis &
Van Roy 1997; Tadic 2001). However, the importance sampling weights are cumulative products
of (possibly many) target-to-behavior-policy likelihood ratios, and consequently they and the corresponding updates may be of very high variance. The use of "recognizers" to construct the target
policy directly from the behavior policy (Precup, Sutton, Paduraru, Koop & Singh 2006) is one strategy for limiting the variance; another is careful choice of the target policies (see Precup, Sutton &
Dasgupta 2001). However, it remains the case that for all of such methods to date there are always
choices of problem, behavior policy, and target policy for which the variance is infinite, and thus for
which there is no guarantee of convergence.
Residual gradient algorithms (Baird 1995) have also been proposed as a way of obtaining all four
desirable features. These methods can be viewed as gradient descent in the expected squared TD
error, E[δ^2]; thus they converge stably to the solution that minimizes this objective for arbitrary
differentiable function approximators. However, this solution has always been found to be much
inferior to the TD solution (exemplified by (4) for the one-step linear case). In the literature (Baird
1995; Sutton & Barto 1998), it is often claimed that residual-gradient methods are guaranteed to
find the TD solution in two special cases: 1) systems with deterministic transitions and 2) systems in
which two samples can be drawn for each next state (e.g., for which a simulation model is available).
Our own analysis indicates that even these two special requirements are insufficient to guarantee
convergence to the TD solution.2
Gordon (1995) and others have questioned the need for linear function approximation. He has
proposed replacing linear function approximation with a more restricted class of approximators,
known as averagers, that never extrapolate outside the range of the observed data and thus cannot
diverge. Rightly or wrongly, averagers have been seen as being too constraining and have not been
used on large applications involving online learning. Linear methods, on the other hand, have been
widely used (e.g., Baxter, Tridgell & Weaver 1998; Sturtevant & White 2006; Schaeffer, Hlynka &
Jussila 2001).
The need for linear complexity has also been questioned. Second-order methods for linear approximators, such as LSTD (Bradtke & Barto 1996; Boyan 2002) and LSPI (Lagoudakis & Parr 2003;
see also Peters, Vijayakumar & Schaal 2005), can be effective on moderately sized problems. If the
number of features in the linear approximator is n, then these methods require memory and per-time-step computation that is O(n^2). Newer incremental methods such as iLSTD (Geramifard, Bowling
& Sutton 2006) have reduced the per-time-step computation to O(n), but are still O(n^2) in memory. Sparsification methods may reduce the complexity further, but they do not help in the general case, and may
apply to O(n) methods as well to further reduce their complexity. Linear function approximation is
most powerful when very large numbers of features are used, perhaps millions of features (e.g., as
in Silver, Sutton & Müller 2007). In such cases, O(n^2) methods are not feasible.
7 Conclusion
GTD(0) is the first off-policy TD algorithm to converge under general conditions with linear function approximation and linear complexity. As such, it breaks new ground in terms of important,
² For a counterexample, consider that given in Dayan's (1992) Figure 2, except now consider that state A is
actually two states, A and A', which share the same feature vector. The two states occur with 50-50 probability,
and when one occurs the transition is always deterministically to B followed by the outcome 1, whereas when
the other occurs the transition is always deterministically to the outcome 0. In this case V(A) and V(B) will
converge under the residual-gradient algorithm to the wrong answers, 1/3 and 2/3, even though the system is
deterministic, and even if multiple samples are drawn from each state (they will all be the same).
absolute abilities not previously available in existing algorithms. We have conducted empirical studies with the GTD(0) algorithm and have confirmed that it converges reliably on standard off-policy
counterexamples such as Baird's (1995) "star" problem. On on-policy problems such as the n-state
random walk (Sutton 1988; Sutton & Barto 1998), GTD(0) does not seem to learn as efficiently as
classic TD(0), although we are still exploring different ways of setting the step-size parameters, and
other variations on the algorithm. It is not clear that the GTD(0) algorithm in its current form will
be a fully satisfactory solution to the off-policy learning problem, but it is clear that it breaks new
ground and achieves important abilities that were previously unattainable.
Acknowledgments
The authors gratefully acknowledge insights and assistance they have received from David Silver,
Eric Wiewiora, Mark Ring, Michael Bowling, and Alborz Geramifard. This research was supported
by iCORE, NSERC and the Alberta Ingenuity Fund.
References
Baird, L. C. (1995). Residual algorithms: Reinforcement learning with function approximation. In Proceedings
of the Twelfth International Conference on Machine Learning, pp. 30-37. Morgan Kaufmann.
Baxter, J., Tridgell, A., Weaver, L. (1998). Experiments in parameter learning using temporal differences.
International Computer Chess Association Journal, 21, 84-99.
Bertsekas, D. P., Tsitsiklis, J. (1996). Neuro-Dynamic Programming. Athena Scientific.
Borkar, V. S. and Meyn, S. P. (2000). The ODE method for convergence of stochastic approximation and
reinforcement learning. SIAM Journal on Control and Optimization, 38(2):447-469.
Boyan, J. (2002). Technical update: Least-squares temporal difference learning. Machine Learning, 49:233-246.
Bradtke, S., Barto, A. G. (1996). Linear least-squares algorithms for temporal difference learning. Machine
Learning, 22:33-57.
Dayan, P. (1992). The convergence of TD(λ) for general λ. Machine Learning, 8:341-362.
Geramifard, A., Bowling, M., Sutton, R. S. (2006). Incremental least-squares temporal difference learning.
Proceedings of the National Conference on Artificial Intelligence, pp. 356-361.
Gordon, G. J. (1995). Stable function approximation in dynamic programming. Proceedings of the Twelfth
International Conference on Machine Learning, pp. 261-268. Morgan Kaufmann, San Francisco.
Lagoudakis, M., Parr, R. (2003). Least squares policy iteration. Journal of Machine Learning Research,
4:1107-1149.
Peters, J., Vijayakumar, S. and Schaal, S. (2005). Natural Actor-Critic. Proceedings of the 16th European
Conference on Machine Learning, pp. 280-291.
Precup, D., Sutton, R. S. and Dasgupta, S. (2001). Off-policy temporal-difference learning with function
approximation. Proceedings of the 18th International Conference on Machine Learning, pp. 417-424.
Precup, D., Sutton, R. S., Paduraru, C., Koop, A., Singh, S. (2006). Off-policy learning with recognizers.
Advances in Neural Information Processing Systems 18.
Precup, D., Sutton, R. S., Singh, S. (2000). Eligibility traces for off-policy policy evaluation. Proceedings of
the 17th International Conference on Machine Learning, pp. 759-766. Morgan Kaufmann.
Schaeffer, J., Hlynka, M., Jussila, V. (2001). Temporal difference learning applied to a high-performance game-playing program. Proceedings of the International Joint Conference on Artificial Intelligence, pp. 529-534.
Silver, D., Sutton, R. S., Müller, M. (2007). Reinforcement learning of local shape in the game of Go.
Proceedings of the 20th International Joint Conference on Artificial Intelligence, pp. 1053-1058.
Sturtevant, N. R., White, A. M. (2006). Feature construction for reinforcement learning in hearts. In Proceedings of the 5th International Conference on Computers and Games.
Sutton, R. S. (1988). Learning to predict by the method of temporal differences. Machine Learning, 3:9-44.
Sutton, R. S., Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.
Sutton, R. S., Precup, D. and Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal
abstraction in reinforcement learning. Artificial Intelligence, 112:181-211.
Sutton, R. S., Rafols, E. J., and Koop, A. (2006). Temporal abstraction in temporal-difference networks. Advances
in Neural Information Processing Systems 18.
Tadic, V. (2001). On the convergence of temporal-difference learning with linear function approximation.
Machine Learning, 42:241-267.
Tsitsiklis, J. N., and Van Roy, B. (1997). An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42:674-690.
Watkins, C. J. C. H. (1989). Learning from Delayed Rewards. Ph.D. thesis, Cambridge University.
| 3626 |@word kgk:1 determinant:1 version:1 norm:3 c0:2 twelfth:2 r:1 simulation:1 boundedness:1 moment:2 initial:1 series:1 selecting:1 denoting:1 existing:2 kmk:1 current:3 written:1 readily:1 visible:1 wiewiora:1 shape:1 remove:1 update:9 fund:1 stationary:4 intelligence:5 greedy:2 fewer:1 selected:2 offpolicy:1 accordingly:1 rsk:1 beginning:1 gx:4 along:1 differential:2 prove:2 consists:1 introduce:1 expected:12 ingenuity:1 behavior:14 globally:1 decreasing:2 alberta:3 td:26 param:1 considering:1 provided:2 estimating:1 moreover:1 notation:1 bounded:4 what:1 rafols:2 minimizes:1 eigenvector:1 averagers:2 csaba:2 sparsification:1 guarantee:3 temporal:21 every:3 exactly:1 k2:4 rm:2 uk:10 control:3 wrong:1 causally:1 bertsekas:3 positive:3 local:1 sutton:24 ak:2 path:1 abuse:1 might:1 equivalence:1 range:1 practical:1 unique:2 acknowledgment:1 empirical:1 attain:1 matching:1 get:2 cannot:3 wrongly:1 instability:2 conventional:2 deterministic:7 straightforward:2 go:3 starting:1 independently:1 limr:1 ergodic:1 formulate:1 immediately:3 rule:4 continued:1 insight:1 meyn:4 stability:3 classic:1 exploratory:1 anyway:1 variation:1 updated:2 limiting:1 target:23 play:1 construction:2 programming:4 roy:2 satisfying:1 particularly:1 continues:1 observed:2 role:1 e8:1 mentioned:1 environment:4 complexity:9 moderately:1 reward:11 dynamic:6 singh:6 depend:2 rewrite:1 distinctive:1 upon:1 eric:1 completely:2 triangle:1 basis:1 ilstd:1 easily:1 joint:2 k0:5 talk:1 derivation:1 fast:1 effective:1 artificial:5 outcome:3 outside:1 whose:2 larger:1 widely:2 ability:2 statistic:4 final:1 online:3 sequence:14 eigenvalue:3 differentiable:1 interaction:1 product:6 reset:1 relevant:1 date:1 achieve:1 kx1:2 convergence:13 optimum:1 r1:3 requirement:1 generating:1 incremental:4 leave:1 converges:6 silver:3 help:1 depending:1 ring:1 received:1 strong:1 matrices1:1 come:1 direction:2 aperiodic:3 stochastic:12 human:1 tridgell:2 require:1 hamid:1 proposition:1 elementary:1 extension:2 exploring:1 hold:6 ground:2 great:1 equilibrium:1 algorithmic:2 predict:1 parr:2 achieves:1 a2:2 omitted:1 purpose:2 individually:2 weighted:1 uller:2 mit:1 clearly:1 always:6 barto:7 schaal:2 improvement:1 vk:2 likelihood:2 indicates:2 dependent:3 dayan:2 abstraction:2 entire:1 a0:2 quasi:1 selects:1 denoted:1 retaining:1 geramifard:3 special:3 equal:1 once:1 construct:1 never:1 sampling:9 identical:1 others:2 gordon:2 richard:1 irreducible:3 modern:1 randomly:2 divergence:1 national:1 delayed:1 replaced:1 consisting:2 attempt:1 interest:1 possibility:2 evaluation:6 severe:1 chain:3 jussila:2 tuple:1 necessary:1 experience:2 respective:1 iv:2 walk:1 re:3 mk:8 instance:1 ordinary:1 successful:1 conducted:1 too:2 characterize:1 reported:1 stored:1 unattainable:1 dependency:2 answer:1 combined:2 chooses:1 st:19 hlynka:2 international:8 siam:1 vijayakumar:2 off:22 diverge:4 michael:1 together:3 precup:9 again:1 squared:1 satisfied:1 thesis:1 possibly:3 ek:2 style:1 leading:1 sztaki:1 potential:1 paduraru:2 star:1 wk:4 coefficient:1 baird:6 satisfy:1 notable:1 stream:1 ax2:2 later:1 break:3 observing:2 start:2 decaying:1 option:1 square:6 formed:1 aperiodicity:1 kaufmann:3 variance:4 efficiently:1 correspond:1 eters:1 none:1 multiplying:1 trajectory:1 pomdps:1 confirmed:1 whenever:1 definition:1 pp:8 obvious:1 associated:1 proof:5 schaeffer:2 emits:1 sampled:3 popular:2 knowledge:2 ut:1 actually:3 originally:1 follow:1 alborz:1 formulation:6 done:2 evaluated:1 though:1 just:1 correlation:1 hand:1 replacing:2 reweighting:1 brings:1 
stably:2 quality:1 perhaps:1 scientific:1 mdp:5 effect:2 normalized:1 former:1 hence:2 laboratory:1 satisfactory:1 white:2 assistance:1 bowling:3 game:2 eligibility:1 inferior:1 steady:2 generalized:1 performs:1 bradtke:3 novel:1 ari:2 lagoudakis:2 physical:1 reza:1 million:1 extend:1 slight:1 discussed:1 m1:1 he:1 association:1 cambridge:1 counterexample:4 automatic:1 fk:8 gratefully:1 stable:7 actor:1 recognizers:2 something:1 own:1 discard:2 termed:1 store:2 claimed:1 inequality:1 icore:1 approximators:3 seen:1 minimum:1 morgan:3 preceding:2 determine:1 converge:3 semi:1 arithmetic:1 ii:3 sound:1 desirable:5 unimodal:1 multiple:1 technical:1 match:3 characterized:1 long:1 a1:4 ambition:1 prediction:2 koop:4 involving:1 neuro:1 essentially:1 expectation:5 iteration:5 sometimes:1 limt:1 achieved:2 szepesv:2 whereas:1 ode:3 singular:4 biased:1 limk:2 probably:1 seem:2 practitioner:2 call:3 constraining:1 iii:2 identically:2 baxter:2 irreducibility:1 opposite:1 reduce:2 idea:2 cn:1 det:5 synchronous:1 whether:1 effort:1 peter:2 questioned:2 cause:1 action:23 ignored:1 clear:2 involve:1 discount:1 ph:1 simplest:1 reduced:1 generate:1 notice:1 estimated:3 per:5 broadly:1 discrete:1 write:1 dasgupta:3 key:1 four:7 drawn:5 timestep:1 asymptotically:1 year:1 powerful:1 extends:1 almost:1 decision:3 followed:2 guaranteed:1 convergent:3 quadratic:2 occur:1 infinity:2 x2:2 u1:1 performing:1 relatively:1 department:1 mta:1 according:4 conjugate:1 contradicts:1 newer:1 partitioned:1 s1:6 chess:1 restricted:1 taken:3 heart:1 equation:6 remains:2 previously:2 discus:1 eventually:1 needed:1 prefixed:1 available:3 apply:2 observe:1 away:2 existence:1 s01:1 abandoned:1 original:4 include:2 ensure:1 cf:1 establish:1 classical:1 unchanged:1 lspi:1 objective:7 move:1 question:1 occurs:2 strategy:1 rt:4 usual:1 interacts:1 said:1 gradient:17 kth:2 separate:1 mapped:1 penultimate:1 c2n:1 outer:1 athena:1 index:3 insufficient:1 ratio:2 tadic:2 unfortunately:1 gk:12 reweight:1 negative:1 filtration:1 trace:1 reliably:1 policy:82 redefined:1 rightly:1 observation:1 markov:10 finite:5 acknowledge:1 descent:8 defining:1 extended:1 excluding:1 rn:5 perturbation:1 arbitrary:6 canada:1 maei:1 david:1 required:1 learned:2 dynamical:1 exemplified:1 mismatch:1 program:1 including:1 memory:4 s0k:7 critical:1 suitable:1 natural:2 boyan:2 residual:4 weaver:2 occupancy:1 mdps:2 prior:2 literature:2 l2:2 fully:1 sturtevant:2 prototypical:1 interesting:2 approximator:2 triple:2 generator:1 incurred:2 agent:4 verification:1 t6g:1 temporaldifference:2 s0:17 storing:1 share:1 r2n:1 critic:1 supported:1 last:2 tsitsiklis:5 taking:1 absolute:2 distributed:2 van:2 world:1 transition:16 ending:1 gqe:2 cumulative:1 author:1 made:1 reinforcement:15 disturbing:1 avoided:1 san:1 pss0:1 far:2 transaction:1 approximate:2 assumed:1 francisco:1 tuples:2 continuous:1 sk:18 learn:2 ca:1 obtaining:1 complex:1 european:1 assured:1 the2:1 main:1 linearly:1 s2:2 gtd:16 n2:4 x1:6 edmonton:1 martingale:2 s0t:1 sub:5 deterministically:2 watkins:2 weighting:4 rk:18 theorem:10 specific:1 r2:2 concern:1 exists:2 importance:5 conditioned:1 borkar:4 simply:1 forming:1 kxk:1 adjustment:1 nserc:1 lstd:4 determines:1 satisfies:2 viewed:4 sized:1 consequently:1 careful:1 lipschitz:2 considerable:1 change:4 feasible:1 determined:1 except:2 uniformly:3 infinite:1 wt:3 s02:1 mark:1 latter:1 extrapolate:1 |
2,898 | 3,627 | The Infinite Hierarchical Factor Regression Model
Piyush Rai and Hal Daumé III
School of Computing, University of Utah
{piyush,hal}@cs.utah.edu
Abstract
We propose a nonparametric Bayesian factor regression model that accounts for
uncertainty in the number of factors, and the relationship between factors. To
accomplish this, we propose a sparse variant of the Indian Buffet Process and
couple this with a hierarchical model over factors, based on Kingman's coalescent.
We apply this model to two problems (factor analysis and factor regression) in
gene-expression data analysis.
1 Introduction
Factor analysis is the task of explaining data by means of a set of latent factors. Factor regression
couples this analysis with a prediction task, where the predictions are made solely on the basis of the
factor representation. The latent factor representation achieves two-fold benefits: (1) discovering the
latent process underlying the data; (2) simpler predictive modeling through a compact data representation. In particular, (2) is motivated by the problem of prediction in the "large P, small N" paradigm
[1], where the number of features P greatly exceeds the number of examples N, potentially resulting
in overfitting.
We address three fundamental shortcomings of standard factor analysis approaches [2, 3, 4, 1]: (1)
we do not assume a known number of factors; (2) we do not assume factors are independent; (3)
we do not assume all features are relevant to the factor analysis. Our motivation for this work stems
from the task of reconstructing regulatory structure from gene-expression data. In this context, factors correspond to regulatory pathways. Our contributions thus parallel the needs of gene pathway
modeling. In addition, we couple predictive modeling (for factor regression) within the factor analysis framework itself, instead of having to model it separately.
Our factor regression model is fundamentally nonparametric. In particular, we treat the gene-to-factor relationship nonparametrically by proposing a sparse variant of the Indian Buffet Process
(IBP) [5], designed to account for the sparsity of relevant genes (features). We couple this IBP with
a hierarchical prior over the factors. This prior explains the fact that pathways are fundamentally
related: some are involved in transcription, some in signaling, some in synthesis. The nonparametric
nature of our sparse IBP requires that the hierarchical prior also be nonparametric. A natural choice
is Kingman's coalescent [6], a popular distribution over infinite binary trees.
Since our motivation is an application in bioinformatics, our notation and terminology will be drawn
from that area. In particular, genes are features, samples are examples, and pathways are factors.
However, our model is more general. An alternative application might be to a collaborative filtering
problem, in which case our genes might correspond to movies, our samples might correspond to
users and our pathways might correspond to genres. In this context, all three contributions of our
model still make sense: we do not know how many movie genres there are; some genres are closely
related (romance to comedy versus to action); many movies may be spurious.
2 Background
Our model uses a variant of the Indian Buffet Process to model the feature-factor (i.e., gene-pathway)
relationships. We further use Kingman's coalescent to model latent pathway hierarchies.
2.1 Indian Buffet Process
The Indian Buffet Process [7] defines a distribution over infinite binary matrices, originally motivated by the need to model the latent factor structure of a given set of observations. In the standard
form it is parameterized by a scale value, α. The distribution can be explained by means of a simple
culinary analogy. Customers (in our context, genes) enter an Indian restaurant and select dishes
(in our context, pathways) from an infinite array of dishes. The first customer selects Poisson(α)
dishes. Thereafter, each incoming customer i selects a previously-selected dish k with probability
m_k / i, where m_k is the number of previous customers who have selected dish k. Customer i
then selects an additional Poisson(α/i) new dishes. We can easily define a binary matrix Z with
value Z_ik = 1 precisely when customer i selects dish k. This stochastic process thus defines a
distribution over infinite binary matrices.

It turns out [7] that the stochastic process defined above corresponds to an infinite limit of an
exchangeable process over finite matrices with K columns. This distribution takes the form

    p(Z | α) = Π_{k=1}^K [ (α/K) Γ(m_k + α/K) Γ(P − m_k + 1) / Γ(P + 1 + α/K) ],

where m_k = Σ_i Z_ik and P is the total number of customers. Taking K → ∞ yields the IBP. The IBP has several nice properties, the most important
of which is exchangeability. It is the exchangeability (over samples) that makes efficient sampling algorithms possible. There also exists a two-parameter generalization of the IBP where the second
parameter β controls the sharability of dishes.
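For intuition, a short sketch (ours) of drawing Z from the one-parameter IBP via the culinary process just described; the paper's sparse two-parameter variant is not shown:

```python
import numpy as np

def sample_ibp(P, alpha, rng):
    """Draw a binary matrix Z ~ IBP(alpha) over P customers (rows)."""
    counts = []                        # m_k for each instantiated dish
    rows = []
    for i in range(1, P + 1):
        chosen = [k for k, m in enumerate(counts) if rng.random() < m / i]
        for k in chosen:
            counts[k] += 1
        for _ in range(rng.poisson(alpha / i)):   # Poisson(alpha/i) new dishes
            counts.append(1)
            chosen.append(len(counts) - 1)
        rows.append(chosen)
    Z = np.zeros((P, len(counts)), dtype=int)
    for i, chosen in enumerate(rows):
        Z[i, chosen] = 1
    return Z

Z = sample_ibp(P=10, alpha=2.0, rng=np.random.default_rng(0))
```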
2.2 Kingman's Coalescent
Our model makes use of a latent hierarchical structure over factors; we use Kingman's coalescent [6]
as a convenient prior distribution over hierarchies. Kingman's coalescent originated in the study of
population genetics for a set of single-parent organisms. The coalescent is a nonparametric model
over a countable set of organisms. It is most easily understood in terms of its finite dimensional
marginal distributions over n individuals, in which case it is called an n-coalescent. We then take
the limit n → ∞. In our case, the individuals are factors.

The n-coalescent considers a population of n organisms at time t = 0. We follow the ancestry of
these individuals backward in time, where each organism has exactly one parent at time t < 0. The
n-coalescent is a continuous-time, partition-valued Markov process which starts with n singleton
clusters at time t = 0 and evolves backward, coalescing lineages until there is only one left. We
denote by t_i the time at which the ith coalescent event occurs (note t_i ≤ 0), and δ_i = t_{i−1} − t_i
the time between events (note δ_i > 0). Under the n-coalescent, each pair of lineages merges
independently with exponential rate 1; so δ_i ∼ Exp(\binom{n−i+1}{2}). With probability one, a random draw
from the n-coalescent is a binary tree with a single root at t = −∞ and n individuals at time t = 0.
We denote the tree structure by π. The marginal distribution over tree topologies is uniform and
independent of coalescent times; and the model is infinitely exchangeable. We therefore consider
the limit as n → ∞, called the coalescent.
Once the tree structure is obtained, one can define an additional Markov process to evolve over the
tree. One common choice is a Brownian diffusion process. In Brownian diffusion in D dimensions,
we assume an underlying diffusion covariance of ? ? RD?D p.s.d. The root is a D-dimensional
vector drawn z. Each non-root node in the tree is drawn Gaussian with mean equal to the value of
the parent, and variance ?i ?, where ?i is the time that has passed.
Recently, Teh et al. [8] proposed efficient bottom-up agglomerative inference algorithms for the
coalescent. These (approximately) maximize the probability of the tree structure π and the times δ, marginalizing out internal
nodes by Belief Propagation. If we associate with each node in the tree a mean y and variance v
message, we update messages as Eq (1), where i is the current node and l_i and r_i are its children.
v_i = [ (v_{l_i} + (t_{l_i} − t_i) Λ)^{−1} + (v_{r_i} + (t_{r_i} − t_i) Λ)^{−1} ]^{−1}        (1)
y_i = [ y_{l_i} (v_{l_i} + (t_{l_i} − t_i) Λ)^{−1} + y_{r_i} (v_{r_i} + (t_{r_i} − t_i) Λ)^{−1} ] v_i
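In code, the message update of Eq (1) is direct; a one-dimensional sketch with scalar Λ:

```python
def upward_message(y_l, v_l, t_l, y_r, v_r, t_r, t_i, lam):
    """Belief-propagation message at node i from its children (Eq 1)."""
    prec_l = 1.0 / (v_l + (t_l - t_i) * lam)   # precision through the left branch
    prec_r = 1.0 / (v_r + (t_r - t_i) * lam)
    v_i = 1.0 / (prec_l + prec_r)
    y_i = (y_l * prec_l + y_r * prec_r) * v_i
    return y_i, v_i
```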
3 Nonparametric Bayesian Factor Regression
Recall the standard factor analysis problem: X = AF + E, for standardized data X. X is a P × N
matrix consisting of N samples [x_1, ..., x_N] of P features each. A is the factor loading matrix of
size P × K and F = [f_1, ..., f_N] is the factor matrix of size K × N. E = [e_1, ..., e_N] is the matrix
of idiosyncratic variations. K, the number of factors, is known.
Recall that our goal is to treat the factor analysis problem nonparametrically, to model feature relevance, and to model hierarchical factors. For expository purposes, it is simplest to deal with each of
these issues in turn. In our context, we begin by modeling the gene-factor relationship nonparametrically (using the IBP). Next, we propose a variant of IBP to model gene relevance. We then present
the hierarchical model for inferring factor hierarchies. We conclude with a presentation of the full
model and our mechanism for modifying the factor analysis problem to factor regression.
3.1 Nonparametric Gene-Factor Model
We begin by directly using the IBP to infer the number of factors. Although IBP has been applied
to nonparametric factor analysis in the past [5], the standard IBP formulation places IBP prior on
the factor matrix (F) associating samples (i.e. a set of features) with factors. Such a model assumes
that the sample-factor relationship is sparse. However, this assumption is inappropriate in the gene-expression context where it is not the factors themselves but the associations among genes and
factors (i.e., the factor loading matrix A) that are sparse. In such a context, each sample depends on
all the factors but each gene within a sample usually depends only on a small number of factors.
Thus, it is more appropriate to model the factor loading matrix (A) with the IBP prior. Note that
since A and F are related with each other via the number of factors K, modeling A nonparametrically
allows our model to also have an unbounded number of factors.
For most gene-expression problems [1], a binary factor loadings matrix (A) is inappropriate. Therefore, we instead use the Hadamard (element-wise) product of a binary matrix Z and a matrix V
of reals. Z and V are of the same size as A. The factor analysis model, for each sample i, thus
becomes: x_i = (Z ⊙ V) f_i + e_i. We have Z ∼ IBP(α, β). α and β are IBP hyperparameters
and have vague gamma priors on them. Our initial model assumes no factor hierarchies and hence
the prior over V would simply be a Gaussian: V ∼ Nor(0, σ_v² I) with an inverse-gamma prior on
σ_v. F has a zero mean, unit variance Gaussian prior, as used in standard factor analysis. Finally,
e_i ∼ Nor(0, Ψ) models the idiosyncratic variations of genes, where Ψ is a P × P diagonal matrix
diag(Ψ_1, ..., Ψ_P). Each entry Ψ_p has an inverse-gamma prior on it.
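For intuition, a minimal generative sketch of this model for a single sample (hyperparameters fixed rather than given priors):

```python
import numpy as np

def generate_sample(Z, V, psi_diag, rng):
    """Draw one observation x = (Z * V) f + e from the model above."""
    A = Z * V                                   # Hadamard product masks the loadings
    f = rng.normal(size=A.shape[1])             # factors: zero mean, unit variance
    e = rng.normal(scale=np.sqrt(psi_diag))     # idiosyncratic gene-specific noise
    return A @ f + e
```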
3.2 Feature Selection Prior
Typical gene-expression datasets are of the order of several thousands of genes, most of which
are not associated with any pathway (factor). In the above, these are accounted for only by the
idiosyncratic noise term. A more realistic model is that certain genes simply do not participate in
the factor analysis: for a culinary analogy, the genes enter the restaurant and leave before selecting
any dishes. Those genes that "leave," we term "spurious." We add an additional prior term to account
for such spurious genes; effectively leading to a sparse solution (over the rows of the IBP matrix).
It is important to note that this notion of sparsity is fundamentally different from the conventional
notion of sparsity in the IBP. The sparsity in IBP is over columns, not rows. To see the difference,
recall that the IBP contains a "rich get richer" phenomenon: frequently selected factors are more
likely to get reselected. Consider a truly spurious gene and ask whether it is likely to select any
factors. If some factor k is already frequently used, then a priori this gene is more likely to select it.
The only downside to selecting it is the data likelihood. By setting the corresponding value in V to
zero, there is no penalty.
Our sparse-IBP prior is identical to the standard IBP prior with one exception. Each customer (gene)
p is associated with a Bernoulli random variable T_p that indicates whether it samples any dishes. The
T vector is given a parameter ρ, which, in turn, is given a Beta prior with parameters a, b.
3.3 Hierarchical Factor Model
In our basic model, each column of the matrix Z (and the corresponding column in V ) is associated
with a factor. These factors are considered unrelated. To model the fact that factors are, in fact, related, we introduce a factor hierarchy. Kingman's coalescent [6] is an attractive prior for integration
with IBP for several reasons. It is nonparametric and describes exchangeable distributions. This
means that it can model a varying number of factors. Moreover, efficient inference algorithms exist
[8].
Figure 1: The graphical model for nonparametric Bayesian Factor Regression. X consists of response variables as well.
Figure 2: Training and test data are combined together and test responses are treated as missing values to be imputed.
3.4 Full Model and Extension to Factor Regression
Our proposed graphical model is depicted in Figure 1. The key aspects of this model are: the IBP
prior over Z, the sparse binary vector T, and the Coalescent prior over V.
In standard Bayesian factor regression [1], factor analysis is followed by the regression task. The
regression is performed only on the basis of F, rather than the full data X. For example, a simple
linear regression problem would involve estimating a K-dimensional parameter vector θ with regression value θ⊤F. Our model, on the other hand, integrates the factor regression component into the
nonparametric factor analysis framework itself. We do so by prepending the responses y_i to the
expression vector x_i and joining the training and test data (see figure 2). The unknown responses
in the test data are treated as missing variables to be iteratively imputed in our MCMC inference
procedure. It is straightforward to see that it is equivalent to fitting another sparse model relating
factors to responses. Our model thus allows the factor analysis to take into account the regression
task as well. In case of binary responses, we add an extra probit regression step to predict binary
outcomes from real-valued responses.
4 Inference
We use Gibbs sampling with a few M-H steps. The Gibbs distributions are summarized here.
Sampling the IBP matrix Z: Sampling Z consists of sampling existing dishes, proposing new
dishes and accepting or rejecting them based on the acceptance ratio in the associated M-H step. For
sampling existing dishes, an entry in Z is set to 1 according to
p(Z_ik = 1 | X, Z_{−ik}, V, F, Ψ) ∝ [ m_{−i,k} / (P + β − 1) ] p(X | Z, V, F, Ψ),
whereas it is set to 0 according to
p(Z_ik = 0 | X, Z_{−ik}, V, F, Ψ) ∝ [ (P + β − 1 − m_{−i,k}) / (P + β − 1) ] p(X | Z, V, F, Ψ),
where m_{−i,k} = Σ_{j≠i} Z_jk is how many other customers chose dish k.
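A sketch of the resulting update for one existing entry; log_lik stands in for log p(X | Z, V, F, Ψ) and is an assumed callback, not a function from the paper:

```python
import numpy as np

def gibbs_update_zik(Z, i, k, beta, log_lik, rng):
    """Resample an existing entry Z[i, k] from the conditional above."""
    P = Z.shape[0]
    m = Z[:, k].sum() - Z[i, k]                  # m_{-i,k}
    Z[i, k] = 1
    log_w1 = np.log(m + 1e-300) + log_lik(Z)     # prior weight m_{-i,k}
    Z[i, k] = 0
    log_w0 = np.log(P + beta - 1 - m) + log_lik(Z)
    p_one = 1.0 / (1.0 + np.exp(log_w0 - log_w1))
    Z[i, k] = int(rng.random() < p_one)          # shared (P + beta - 1) cancels
```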
For sampling new dishes, we use an M-H step where we simultaneously propose ξ =
(K^new, V^new, F^new), where K^new ∼ Poisson(αβ/(β + P − 1)). We accept the proposal with
an acceptance probability (following [9]) given by a = min{1, p(rest | ξ*) / p(rest | ξ)}. Here, p(rest | ξ) is the
likelihood of the data given parameters ξ. We propose V^new from its prior (either Gaussian or
Coalescent) but, for faster mixing, we propose F^new from its posterior.
Sampling V new from the coalescent is slightly involved. As shown pictorially in figure 3, proposing
a new column of V corresponds to adding a new leaf node to the existing coalescent tree. In
particular, we need to find a sibling (s) to the new node y* and need to find an insertion point on the
branch joining the sibling s to its parent p (the grandparent of y*). Since the marginal distribution
over trees under the coalescent is uniform, the sibling s is chosen uniformly over nodes in the tree.
We then use importance sampling to select an insertion time for the new node y* between t_s and
t_p, according to the exponential distribution given by the coalescent prior (our proposal distribution
is uniform). This gives an insertion point in the tree, which corresponds to the new parent of y*.
We denote this new parent by p* and the time of insertion as t. The predictive density of the newly
inserted node y* can be obtained by marginalizing the parent p*. This yields Nor(y₀, v₀), given by:
v₀ = [ (v_s + (t_s − t) Λ)^{−1} + (v_p + (t − t_p) Λ)^{−1} ]^{−1}
y₀ = [ y_s/(v_s + (t_s − t) Λ) + y_p/(v_p + (t − t_p) Λ) ] v₀
Here, y_s and v_s are the messages passed up through the tree, while y_p and v_p are the messages
passed down through the tree (compare to Eq (1)).
Sampling the sparse IBP vector T: In the sparse IBP prior, recall that we
have an additional P-many variables T_p, indicating whether gene p "eats"
any dishes. T_p is drawn from a Bernoulli with parameter ρ, which, in turn, is
given a Bet(a, b) prior. For inference, we collapse ρ and get a Gibbs
posterior over T_p of the form
p(T_p = 1 | ·) ∝ (a + Σ_{q≠p} T_q) Stu(x_p | (Z_p ⊙ V_p)F, g/h, g) and
p(T_p = 0 | ·) ∝ (b + P − Σ_{q≠p} T_q) Stu(x_p | 0, g/h, g),
where Stu is the non-standard Student's t-distribution. g, h are hyperparameters of the inverse-gamma prior on the entries of Ψ.
Sampling the real valued matrix V: For the case when V has a Gaussian prior on it, we sample V
from its posterior p(V_{g,j} | X, Z, F, Ψ) ∝ Nor(V_{g,j} | μ_{g,j}, Σ_{g,j}), where
Σ_{g,j} = ( Σ_{i=1}^N F_{j,i}²/Ψ_g + 1/σ_v² )^{−1} and μ_{g,j} = Σ_{g,j} ( Σ_{i=1}^N F_{j,i} X*_{g,i} ) Ψ_g^{−1}.
We define X*_{g,i} = X_{g,i} − Σ_{l=1, l≠j}^K (A_{g,l} V_{g,l}) F_{l,i}, and A = Z ⊙ V. The hyperparameter σ_v on V has an inverse-gamma
prior and the posterior also has the same form. For the case with the coalescent prior on V, we have
Σ_{g,j} = ( Σ_{i=1}^N F_{j,i}²/Ψ_g + 1/v₀,j )^{−1} and μ_{g,j} = Σ_{g,j} ( Σ_{i=1}^N F_{j,i} X*_{g,i}/Ψ_g + y₀,j/v₀,j ), where y₀ and
v₀ are the Gaussian posteriors of the leaf node added in the coalescent tree (see Eq (1)), which
corresponds to the column of V being sampled.
Figure 3: Adding a new node to the tree.
Sampling the factor matrix F: We sample F from its posterior p(F | X, Z, V, Ψ) ∝ Nor(F | μ, Σ),
where μ = A⊤(AA⊤ + Ψ)^{−1} X and Σ = I − A⊤(AA⊤ + Ψ)^{−1} A, with A = Z ⊙ V.
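This Gaussian conditional can be sampled in a few lines (a sketch; Ψ here is the P × P diagonal noise covariance as a matrix):

```python
import numpy as np

def sample_F(X, Z, V, Psi, rng):
    """Draw F | X, Z, V, Psi from the Gaussian conditional above."""
    A = Z * V
    P, K, N = A.shape[0], A.shape[1], X.shape[1]
    G = A.T @ np.linalg.solve(A @ A.T + Psi, np.eye(P))   # A^T (A A^T + Psi)^{-1}
    mu = G @ X
    Sigma = np.eye(K) - G @ A
    L = np.linalg.cholesky(Sigma + 1e-9 * np.eye(K))      # jitter for stability
    return mu + L @ rng.normal(size=(K, N))
```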
Sampling the idiosyncratic noise term: We place an inverse-gamma prior on the diagonal entries
of Ψ, and the posterior too is inverse-gamma: p(Ψ_p | ·) ∝ IG( g + N/2, h/(1 + (h/2) tr(E⊤E)) ), where
E = X − (Z ⊙ V)F.
Sampling IBP parameters: We sample the IBP parameter α from its posterior: p(α | ·) ∝
Gam( K₊ + a, b/(1 + b·H_P(β)) ), where K₊ is the number of active features at any moment and
H_P(β) = Σ_{i=1}^P 1/(β + i − 1). β is sampled from a prior proposal using an M-H step.
Sampling the Factor Tree: Use the Greedy-Rate1 algorithm [8].
5 Related Work
A number of probabilistic approaches have been proposed in the past for the problem of gene-regulatory network reconstruction [2, 3, 4, 1]. Some take into account the information on the prior
network topology [2], which is not always available. Most assume the number of factors is known.
To get around this, one can perform model selection via Reversible Jump MCMC [10] or evolutionary stochastic model search [11]. Unfortunately, these methods are often difficult to design and
may take quite long to converge. Moreover, they are difficult to integrate with other forms of prior
knowledge (e.g., factor hierarchies). A somewhat similar approach to ours is the infinite independent component analysis (iICA) model of [12], which treats factor analysis as a special case of ICA.
However, their model is limited to factor analysis and does not take into account feature selection,
factor hierarchy and factor regression. As a generalization to the standard ICA model, [13] proposed
a model in which the components can be related via a tree-structured graphical model. It, however,
assumes a fixed number of components.
Structurally, our model with Gaussian-V (i.e. no hierarchy over factors) is most similar to the
Bayesian Factor Regression Model (BFRM) of [1]. BFRM assumes a sparsity inducing mixture
prior on the factor loading matrix A. Specifically, Apk ? (1 ? ?pk )?0 (Apk ) + ?pk Nor(Apk |0, ?k )
5
where ?0 () is a point mass centered at zero. To complete the model specification, they define ?pk ?
(1 ? ?k )?0 (?pk ) + ?k Bet(?pk |sr, s(1 ? r)) and ?k ? Bet(?k |av, a(1 ? v)). Now, integrating out ?pk
gives: Apk ? (1?v?k )?0 (Apk )+v?k Nor(Apk |0, ?k ). It is interesting to note that the nonparametric
prior of our model (factor loading matrix defined as A = Z ? V) is actually equivalent to the
(parametric) sparse mixture prior of the BFRM as K ? ?. To see this, note that our prior on the
factor loading matrix A (composed of Z having an IBP prior, and V having a Gaussian prior), can be
written as Apk ? (1 ? ?k )?0 (Apk ) + ?k Nor(Apk |0, ?v2 ), if we define ?k ? Bet(1, ??/K). It is easy
to see that, for BFRM where ?k ? Bet(av, a(1 ? v)), setting a = 1 + ??/K and v = 1 ? ??/(aK)
recovers our model in the limiting case when K ? ?.
6 Experiments
In this section, we report our results on synthetic and real datasets. We compare our nonparametric
approach with the evolutionary search based approach proposed in [11], which is the nonparametric
extension to BFRM.
We used the gene-factor connectivity matrix of E-coli network (described in [14]) to generate a
synthetic dataset having 100 samples of 50 genes and 8 underlying factors. Since we knew the
ground truth for factor loadings in this case, this dataset was ideal to test for efficacy in recovering
the factor loadings (binding sites and number of factors). We also experimented with real gene-expression data: a breast cancer dataset having 251 samples of 226 genes and 5 prominent
underlying factors (we know this from domain knowledge).
6.1 Nonparametric Gene-Factor Modeling and Variable Selection
For the synthetic dataset generated by the E-coli network, the results are shown in figure 4 comparing
the actual network used to generate the data and the inferred factor loading matrix. As shown in
figure 4, we recovered exactly the same number (8) of factors, and almost exactly the same factor
loadings (binding sites and number of factors) as the ground truth. In comparison, the evolutionary
search based approach overestimated the number of factors and the inferred loadings clearly seem
to be off from the actual loadings (even modulo column permutations).
Figure 4: (Left and middle) True and inferred factor loadings (with our approach) for the synthetic data
with P=50, K=8 generated using connectivity matrix of E-coli data. (Right) Inferred factor loadings with the
evolutionary search based approach. White rectangles represent active sites. The data also has added noise with
signal-to-noise ratio of 10.
Our results on real data are shown in figure 5. To see the effect of variable selection for this data,
we also introduced spurious genes by adding 50 random features in each sample. We observe the
following: (1) Without variable selection being on, spurious genes result in an overestimated number
of factors and falsely discovered factor loadings for spurious genes (see figure 5(a)), (2) Variable
selection, when on, effectively filters out spurious genes, without overestimating the number of
factors (see figure 5(b)). We also investigated the effect of noise on the evolutionary search based
approach, and it resulted in an overestimated number of factors, plus falsely discovered factor loadings
for spurious genes (see figure 5(c)). To conserve space, we do not show here the cases when there
are no spurious genes in the data but it turns out that variable selection does not filter out any of 226
relevant genes in such a case.
6.2 Hierarchical Factor Modeling
Our results with hierarchical factor modeling are shown in figure 6 for synthetic and real data. As
shown, the model correctly infers the gene-factor associations, the number of factors, and the factor
hierarchy.
Figure 5: Effect of spurious genes (heat-plots of the factor loading matrix shown): (a) standard IBP; (b) our model with variable selection; (c) the evolutionary search based approach.
There are several ways to interpret the hierarchy. From the factor hierarchy for E-coli data
(figure 6), we see that column-2 (corresponding to factor-2) of the V matrix is the most prominent
one (it regulates the highest number of genes) and is closest to the tree root, followed by the column it looks most similar to. Columns corresponding to less prominent factors are located
further down in the hierarchy (with appropriate relatedness). Figure 6 (d) can be interpreted in a
similar manner for the breast-cancer data. The hierarchy can be used to find factors in order of their
prominence. The higher we chop off the tree along the hierarchy, the more prominent the discovered
factors are. For instance, if we are only interested in the top 2 factors in the E-coli data, we can
chop off the tree above the sixth coalescent point. This is akin to agglomerative clustering,
which is usually done post hoc. In contrast, our model discovers the factor hierarchies as part of the
inference procedure itself. At the same time, there is no degradation of data reconstruction (in mean
squared error sense) and the log-likelihood, when compared to the case with Gaussian prior on V
(see figure 7 - they actually improve). We also show in section 6.3 that hierarchical modeling results
in better predictive performance for the factor regression task. Empirical evidence also suggests that
the factor hierarchy leads to faster convergence since most of the unlikely configurations will never
be visited as they are constrained by the hierarchy.
Figure 6: Hierarchical factor modeling results. (a) Factor loadings for E-coli data. (b) Inferred hierarchy for E-coli data. (c) Factor loadings for breast-cancer data. (d) Inferred hierarchy for breast-cancer data.
6.3 Factor Regression
We report factor regression results for binary and real-valued responses and compare both variants
of our model (Gaussian V and Coalescent V) against 3 different approaches: logistic regression,
BFRM, and fitting a separate predictive model on the discovered factors (see figure 7 (c)). The
breast-cancer dataset had two binary response variables (phenotypes) associated with each sample.
For this binary prediction task, we split the data into training-set of 151 samples and test-set of 100
samples. This is essentially a transduction setting as described in section 3.4 and shown in figure 2.
For real-valued prediction task, we treated a 30x20 block of the data matrix as our held-out data and
predicted it based on the rest of the entries in the matrix. This method of evaluation is akin to the
task of image reconstruction [15]. The results are averaged over 20 random initializations and the
low error variances suggest that our method is fairly robust w.r.t. initializations.
Figure 7 (c) table, factor regression results:
Model       Binary (% error, std dev)   Real (MSE)
LogReg      17.5 (1.6)                  -
BFRM        19.8 (1.4)                  0.48
Nor-V       15.8 (0.56)                 0.45
Coal-V      14.6 (0.48)                 0.43
PredModel   18.1 (2.1)                  -
Figure 7: (a) MSE on the breast-cancer data for BFRM (horizontal line), our model with Gaussian (top red
curved line) and Coalescent (bottom blue curved line) priors. This MSE is the reconstruction error for the data
(different from the MSE for the held-out real-valued responses, Fig. 7(c)). (b) Log-likelihoods for our model with
Gaussian (bottom red curved line) and Coalescent (top blue curved line) priors. (c) Factor regression results
7 Conclusions and Discussion
We have presented a fully nonparametric Bayesian approach to sparse factor regression, modeling
the gene-factor relationship using a sparse variant of the IBP. However, the true power of nonparametric priors is evidenced by the ease of integration of task-specific models into the framework.
Both gene selection and hierarchical factor modeling are straightforward extensions in our model
that do not significantly complicate the inference procedure, but lead to improved model performance and more understandable outputs. We applied Kingman's coalescent as a hierarchical model
on V, the matrix modulating the expression levels of genes in factors. An interesting open question
is whether the IBP can, itself, be modeled hierarchically.
References
[1] M. West. Bayesian Factor Regression Models in the "Large p, Small n" Paradigm. In Bayesian Statistics
7, 2003.
[2] C. Sabatti and G. James. Bayesian Sparse Hidden Components Analysis for Transcription Regulation
Networks,. Bioinformatics 22, 2005.
[3] G. Sanguinetti, N. D. Lawrence, and M. Rattray. Probabilistic Inference of Transcription Factor Concentrations and Gene-specific Regulatory Activities. Bioinformatics, 22(22), 2006.
[4] M. J. Beal, F. Falciani, Z. Ghahramani, C. Rangel, and D. L. Wild. A Bayesian Approach to Reconstructing Genetic Regulatory Networks with Hidden Factors. Bioinformatics, 21(3), 2005.
[5] Z. Ghahramani, T.L. Griffiths, and P. Sollich. Bayesian Nonparametric Latent Feature Models. In
Bayesian Statistics 8. Oxford University Press, 2007.
[6] J. F. C. Kingman. The coalescent. Stochastic Processes and their Applications, 1982.
[7] T. Griffiths and Z. Ghahramani. Infinite Latent Feature Models and the Indian Buffet Process. In Advances
in Neural Information Processing Systems 18, 2006.
[8] Y. W. Teh, H. Daumé III, and D. M. Roy. Bayesian Agglomerative Clustering with Coalescents. In
Advances in Neural Information Processing Systems, volume 20, 2008.
[9] E. Meeds, Z. Ghahramani, R. M. Neal, and S. T. Roweis. Modeling Dyadic Data with Binary Latent
Factors. In Advances in Neural Information Processing Systems 19. 2007.
[10] P. Green. Reversible jump markov chain monte carlo computation and bayesian model determination.
Biometrica 82, 1995.
[11] C. Carvalho, J. Lucas, Q. Wang, J. Chang, J. Nevins, and M. West. High-Dimensional Sparse Factor
Modelling - Applications in Gene Expression Genomics. In JASA, 2008.
[12] D. Knowles and Z. Ghahramani. Infinite Sparse Factor Analysis and Infinite Independent Components
Analysis. In ICA 2007, 2007.
[13] Francis R. Bach and Michael I. Jordan. Beyond independent components: trees and clusters. Journal of
Machine Learning Research, pages 1205?1233, 2003.
[14] I. Pournara and L. Wernisch. Factor Analysis for Gene Regulatory Networks and Transcription Factor
Activity Profiles. BMC Bioinformatics, 2007.
[15] J. J. Verbeek, S. T. Roweis, and N. Vlassis. Non-linear CCA and PCA by Alignment of Local Models. In
Advances in Neural Information Processing Systems 16. 2004.
2,899 | 3,628 | Kernel Methods for Deep Learning
Youngmin Cho and Lawrence K. Saul
Department of Computer Science and Engineering
University of California, San Diego
9500 Gilman Drive, Mail Code 0404
La Jolla, CA 92093-0404
{yoc002,saul}@cs.ucsd.edu
Abstract
We introduce a new family of positive-definite kernel functions that mimic the
computation in large, multilayer neural nets. These kernel functions can be used
in shallow architectures, such as support vector machines (SVMs), or in deep
kernel-based architectures that we call multilayer kernel machines (MKMs). We
evaluate SVMs and MKMs with these kernel functions on problems designed to
illustrate the advantages of deep architectures. On several problems, we obtain
better results than previous, leading benchmarks from both SVMs with Gaussian
kernels as well as deep belief nets.
1 Introduction
Recent work in machine learning has highlighted the circumstances that appear to favor deep architectures, such as multilayer neural nets, over shallow architectures, such as support vector machines
(SVMs) [1]. Deep architectures learn complex mappings by transforming their inputs through multiple layers of nonlinear processing [2]. Researchers have advanced several motivations for deep
architectures: the wide range of functions that can be parameterized by composing weakly nonlinear transformations, the appeal of hierarchical distributed representations, and the potential for
combining unsupervised and supervised methods. Experiments have also shown the benefits of
deep learning in several interesting applications [3, 4, 5].
Many issues surround the ongoing debate over deep versus shallow architectures [1, 6]. Deep architectures are generally more difficult to train than shallow ones. They involve difficult nonlinear
optimizations and many heuristics. The challenges of deep learning explain the early and continued
appeal of SVMs, which learn nonlinear classifiers via the "kernel trick". Unlike deep architectures,
SVMs are trained by solving a simple problem in quadratic programming. However, SVMs seemingly
cannot benefit from the advantages of deep learning.
Like many, we are intrigued by the successes of deep architectures yet drawn to the elegance of kernel methods. In this paper, we explore the possibility of deep learning in kernel machines. Though
we share a similar motivation as previous authors [7], our approach is very different. Our paper
makes two main contributions. First, we develop a new family of kernel functions that mimic the
computation in large neural nets. Second, using these kernel functions, we show how to train multilayer kernel machines (MKMs) that benefit from many advantages of deep learning.
The organization of this paper is as follows. In section 2, we describe a new family of kernel
functions and experiment with their use in SVMs. Our results on SVMs are interesting in their own
right; they also foreshadow certain trends that we observe (and certain choices that we make) for the
MKMs introduced in section 3. In this section, we describe a kernel-based architecture with multiple
layers of nonlinear transformation. The different layers are trained using a simple combination of
supervised and unsupervised methods. Finally, we conclude in section 4 by evaluating the strengths
and weaknesses of our approach.
2 Arc-cosine kernels
In this section, we develop a new family of kernel functions for computing the similarity of vector
inputs x, y ∈ R^d. As shorthand, let Θ(z) = ½(1 + sign(z)) denote the Heaviside step function. We
define the nth order arc-cosine kernel function via the integral representation:
k_n(x, y) = 2 ∫ dw [ e^{−‖w‖²/2} / (2π)^{d/2} ] Θ(w·x) Θ(w·y) (w·x)ⁿ (w·y)ⁿ        (1)
The integral representation makes it straightforward to show that these kernel functions are positive-semidefinite. The kernel function in eq. (1) has interesting connections to neural computation [8]
that we explore further in sections 2.2–2.3. However, we begin by elucidating its basic properties.
2.1 Basic properties
We show how to evaluate the integral in eq. (1) analytically in the appendix. The final result is most
easily expressed in terms of the angle θ between the inputs:
θ = cos⁻¹( x·y / (‖x‖ ‖y‖) ).        (2)
The integral in eq. (1) has a simple, trivial dependence on the magnitudes of the inputs x and y, but
a complex, interesting dependence on the angle between them. In particular, we can write:
k_n(x, y) = (1/π) ‖x‖ⁿ ‖y‖ⁿ J_n(θ),        (3)
where all the angular dependence is captured by the family of functions J_n(θ). Evaluating the
integral in the appendix, we show that this angular dependence is given by:
J_n(θ) = (−1)ⁿ (sin θ)^{2n+1} ( (1/sin θ) ∂/∂θ )ⁿ ( (π − θ)/sin θ ).        (4)
For n = 0, this expression reduces to the supplement of the angle between the inputs. However, for
n > 0, the angular dependence is more complicated. The first few expressions are:
J₀(θ) = π − θ        (5)
J₁(θ) = sin θ + (π − θ) cos θ        (6)
J₂(θ) = 3 sin θ cos θ + (π − θ)(1 + 2 cos²θ)        (7)
We describe eq. (3) as an arc-cosine kernel because for n = 0, it takes the simple form
k₀(x, y) = 1 − (1/π) cos⁻¹( x·y / (‖x‖ ‖y‖) ). In fact, the zeroth and first order kernels in this family are strongly
motivated by previous work in neural computation. We explore these connections in the next section.
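These closed forms make the kernel trivial to evaluate; a sketch for n = 0, 1, 2 (our code, following eqs. (2)-(7)):

```python
import numpy as np

def arc_cosine_kernel(x, y, n=1):
    """Arc-cosine kernel k_n(x, y) for degrees n = 0, 1, 2."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    theta = np.arccos(np.clip(x @ y / (nx * ny), -1.0, 1.0))    # eq. (2)
    if n == 0:
        J = np.pi - theta                                       # eq. (5)
    elif n == 1:
        J = np.sin(theta) + (np.pi - theta) * np.cos(theta)     # eq. (6)
    elif n == 2:
        J = (3 * np.sin(theta) * np.cos(theta)
             + (np.pi - theta) * (1 + 2 * np.cos(theta) ** 2))  # eq. (7)
    else:
        raise NotImplementedError("closed forms above cover n <= 2")
    return (nx ** n) * (ny ** n) * J / np.pi                    # eq. (3)
```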
Arc-cosine kernels have other intriguing properties. From the magnitude dependence in eq. (3),
we observe the following: (i) the n = 0 arc-cosine kernel maps inputs x to the unit hypersphere
in feature space, with k₀(x, x) = 1; (ii) the n = 1 arc-cosine kernel preserves the norm of inputs,
with k₁(x, x) = ‖x‖²; (iii) higher order (n > 1) arc-cosine kernels expand the dynamic range of the
inputs, with k_n(x, x) ∝ ‖x‖^{2n}. Properties (i)–(iii) are shared respectively by radial basis function
(RBF), linear, and polynomial kernels. Interestingly, though, the n = 1 arc-cosine kernel is highly
nonlinear, also satisfying k₁(x, −x) = 0 for all inputs x. As a practical matter, we note that arc-cosine
kernels do not have any continuous tuning parameters (such as the kernel width in RBF
kernels), which can be laborious to set by cross-validation.
2.2 Computation in single-layer threshold networks
Consider the single-layer network shown in Fig. 1 (left) whose weights W_ij connect the jth input
unit to the ith output unit. The network maps inputs x to outputs f(x) by applying an elementwise
nonlinearity to the matrix-vector product of the inputs and the weight matrix: f(x) = g(Wx). The
nonlinearity is described by the network's so-called activation function. Here we consider the family
of one-sided polynomial activation functions g_n(z) = Θ(z) zⁿ illustrated in the right panel of Fig. 1.
Figure 1: Single-layer network (left) and the step (n = 0), ramp (n = 1), and quarter-pipe (n = 2) activation functions (right).
For n = 0, the activation function is a step function, and the network is an array of perceptrons. For
n = 1, the activation function is a ramp function (or rectification nonlinearity [9]), and the mapping
f (x) is piecewise linear. More generally, the nonlinear (non-polynomial) behavior of these networks
is induced by thresholding on weighted sums. We refer to networks with these activation functions
as single-layer threshold networks of degree n.
Computation in these networks is closely connected to computation with the arc-cosine kernel function in eq. (1). To see the connection, consider how inner products are transformed by the mapping
in single-layer threshold networks. As notation, let the vector w_i denote the ith row of the weight
matrix W. Then we can express the inner product between different outputs of the network as:
f(x) · f(y) = Σ_{i=1}^{m} Θ(w_i·x) Θ(w_i·y) (w_i·x)ⁿ (w_i·y)ⁿ,        (8)
where m is the number of output units. The connection with the arc-cosine kernel function emerges
in the limit of very large networks [10, 8]. Imagine that the network has an infinite number of
output units, and that the weights W_ij are Gaussian distributed with zero mean and unit variance. In this limit, we see that eq. (8) reduces to eq. (1) up to a trivial multiplicative factor:
lim_{m→∞} (2/m) f(x) · f(y) = k_n(x, y). Thus the arc-cosine kernel function in eq. (1) can be viewed
as the inner product between feature vectors derived from the mapping of an infinite single-layer
threshold network [8].
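The limit is easy to verify numerically: draw a wide random threshold network and compare the scaled inner product against the closed form (a sanity-check sketch, not from the paper):

```python
import numpy as np

def mc_kernel_estimate(x, y, n=1, m=200_000, seed=0):
    """Monte Carlo estimate of (2/m) f(x).f(y); approaches k_n(x, y) as m grows."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(m, len(x)))     # Gaussian weights, zero mean, unit variance
    wx, wy = W @ x, W @ y
    fx = (wx > 0) * wx ** n              # Theta(w.x) (w.x)^n
    fy = (wy > 0) * wy ** n
    return 2.0 * (fx @ fy) / m
```

For moderate m this agrees with arc_cosine_kernel(x, y, n) to a few decimal places.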
Many researchers have noted the general connection between kernel machines and neural networks
with one layer of hidden units [1]. The n = 0 arc-cosine kernel in eq. (1) can also be derived from
an earlier result obtained in the context of Gaussian processes [8]. However, we are unaware of any
previous theoretical or empirical work on the general family of these kernels for degrees n ≥ 0.
Arc-cosine kernels differ from polynomial and RBF kernels in one especially interesting respect.
As highlighted by the integral representation in eq. (1), arc-cosine kernels induce feature spaces
that mimic the sparse, nonnegative, distributed representations of single-layer threshold networks.
Polynomial and RBF kernels do not encode their inputs in this way. In particular, the feature vector
induced by polynomial kernels is neither sparse nor nonnegative, while the feature vector induced
by RBF kernels resembles the localized output of a soft vector quantizer. Further implications of
this difference are explored in the next section.
2.3 Computation in multilayer threshold networks
A kernel function can be viewed as inducing a nonlinear mapping from inputs x to feature vectors Φ(x). The kernel computes the inner product in the induced feature space:
k(x, y) = Φ(x) · Φ(y). In this section, we consider how to compose the nonlinear mappings induced by kernel functions. Specifically, we show how to derive new kernel functions
k^{(ℓ)}(x, y) = Φ(Φ(...Φ(x))) · Φ(Φ(...Φ(y))), with Φ applied ℓ times to each argument,        (9)
which compute the inner product after ℓ successive applications of the nonlinear mapping Φ(·). Our
motivation is the following: intuitively, if the base kernel function k(x, y) = Φ(x) · Φ(y) mimics
the computation in a single-layer network, then the iterated mapping in eq. (9) should mimic the
computation in a multilayer network.
Figure 2: Left: examples from the rectangles-image data set. Right: classification error rates on the test set. SVMs with arc-cosine kernels have error rates from 22.36–25.64%. Results are shown for kernels of varying degree (n) and levels of recursion (ℓ). The best previous results are 24.04% for SVMs with RBF kernels and 22.50% for deep belief nets [11]. See text for details.
We first examine the results of this procedure for widely used kernels. Here we find that the iterated
mapping in eq. (9) does not yield particularly interesting results. Consider the two-fold composition
that maps x to Φ(Φ(x)). For linear kernels k(x, y) = x · y, the composition is trivial: we obtain
the identity map Φ(Φ(x)) = Φ(x) = x. For homogeneous polynomial kernels k(x, y) = (x · y)^d,
the composition yields:
Φ(Φ(x)) · Φ(Φ(y)) = (Φ(x) · Φ(y))^d = ((x · y)^d)^d = (x · y)^{d²}.        (10)
The above result is not especially interesting: the kernel implied by this composition is also polynomial, just of higher degree (d² versus d) than the one from which it was constructed. Likewise, for
RBF kernels k(x, y) = e^{−λ‖x−y‖²}, the composition yields:
Φ(Φ(x)) · Φ(Φ(y)) = e^{−λ‖Φ(x)−Φ(y)‖²} = e^{−2λ(1−k(x,y))}.        (11)
Though non-trivial, eq. (11) does not represent a particularly interesting computation. Recall that
RBF kernels mimic the computation of soft vector quantizers, with k(x, y) ≪ 1 when ‖x − y‖ is
large compared to the kernel width. It is hard to see how the iterated mapping Φ(Φ(x)) would
generate a qualitatively different representation than the original mapping Φ(x).
Next we consider the ℓ-fold composition in eq. (9) for arc-cosine kernel functions. We state the
result in the form of a recursion. The base case is given by eq. (3) for kernels of depth ℓ = 1 and
degree n. The inductive step is given by:
k_n^{(ℓ+1)}(x, y) = (1/π) [ k_n^{(ℓ)}(x, x) k_n^{(ℓ)}(y, y) ]^{n/2} J_n( θ_n^{(ℓ)} ),        (12)
where θ_n^{(ℓ)} is the angle between the images of x and y in the feature space induced by the ℓ-fold
composition. In particular, we can write:
θ_n^{(ℓ)} = cos⁻¹( k_n^{(ℓ)}(x, y) [ k_n^{(ℓ)}(x, x) k_n^{(ℓ)}(y, y) ]^{−1/2} ).        (13)
The recursion in eq. (12) is simple to compute in practice. The resulting kernels mimic the computations in large multilayer threshold networks. Above, for simplicity, we have assumed that the
arc-cosine kernels have the same degree n at every level (or layer) ℓ of the recursion. We can also
use kernels of different degrees at different layers. In the next section, we experiment with SVMs
whose kernel functions are constructed in this way.
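The recursion translates directly into code; the sketch below composes levels on top of the arc_cosine_kernel helper sketched in section 2.1, using the n = 1 kernel at the higher levels (the choice the experiments below recommend):

```python
import numpy as np

def multilayer_arc_cosine(x, y, n=1, levels=3):
    """l-fold composition (eqs. 12-13): degree n at layer one, degree 1 above it."""
    kxy = arc_cosine_kernel(x, y, n)
    kxx = arc_cosine_kernel(x, x, n)
    kyy = arc_cosine_kernel(y, y, n)
    for _ in range(levels - 1):
        theta = np.arccos(np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0))  # eq. (13)
        J = np.sin(theta) + (np.pi - theta) * np.cos(theta)              # J_1
        kxy = np.sqrt(kxx * kyy) * J / np.pi                             # eq. (12), n = 1
        # with n = 1 the diagonal terms k(x, x) and k(y, y) are preserved exactly
    return kxy
```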
2.4 Experiments on binary classification
We evaluated SVMs with arc-cosine kernels on two challenging data sets of 28 × 28 grayscale pixel
images. These data sets were specifically constructed to compare deep architectures and kernel
machines [11]. In the first data set, known as rectangles-image, each image contains an occluding
rectangle, and the task is to determine whether the width of the rectangle exceeds its height; examples are shown in Fig. 2 (left). In the second data set, known as convex, each image contains a
white region, and the task is to determine whether the white region is convex; examples are shown
Figure 3: Left: examples from the convex data set. Right: classification error rates on the test set. SVMs with arc-cosine kernels have error rates from 17.15–20.51%. Results are shown for kernels of varying degree (n) and levels of recursion (ℓ). The best previous results are 19.13% for SVMs with RBF kernels and 18.63% for deep belief nets [11]. See text for details.
in Fig. 3 (left). The rectangles-image data set has 12000 training examples, while the convex data
set has 8000 training examples; both data sets have 50000 test examples. These data sets have
been extensively benchmarked by previous authors [11]. Our experiments in binary classification
focused on these data sets because in previously reported benchmarks, they exhibited the biggest
performance gap between deep architectures (e.g., deep belief nets) and traditional SVMs.
We followed the same experimental methodology as previous authors [11]. SVMs were trained using
libSVM (version 2.88) [12], a publicly available software package. For each SVM, we used the last
2000 training examples as a validation set to choose the margin penalty parameter; after choosing
this parameter by cross-validation, we then retrained each SVM using all the training examples.
For reference, we also report the best results obtained previously from three-layer deep belief nets
(DBN-3) and SVMs with RBF kernels (SVM-RBF). These references appear to be representative of
the current state-of-the-art for deep and shallow architectures on these data sets.
Figures 2 and 3 show the test set error rates from arc-cosine kernels of varying degree (n) and levels
of recursion (ℓ). We experimented with kernels of degree n = 0, 1 and 2, corresponding to single-layer
threshold networks with "step", "ramp", and "quarter-pipe" activation functions. We also
experimented with the multilayer kernels described in section 2.3, composed from one to six levels
of recursion.
Overall, the figures show that many SVMs with arc-cosine kernels outperform traditional SVMs,
and a certain number also outperform deep belief nets. In addition to their solid performance, we
note that SVMs with arc-cosine kernels are very straightforward to train; unlike SVMs with RBF
kernels, they do not require tuning a kernel width parameter, and unlike deep belief nets, they do not
require solving a difficult nonlinear optimization or searching over possible architectures.
architectures.
Our experiments with multilayer kernels revealed that these SVMs only performed well when arc5
cosine kernels of degree n = 1 were used at higher (` > 1) levels in the recursion. Figs.
2 and
3 therefore show only these sets of results; in particular, each group of bars shows the test error
rates when a particular kernel (of degree n = 0, 1, 2) was used at the first layer of nonlinearity,
while the n = 1 kernel was used at successive layers. We hypothesize that only n = 1 arc-cosine
kernels preserve sufficient information about the magnitude of their inputs to work effectively in
composition with other kernels. Recall that only the n = 1 arc-cosine kernel preserves the norm of
its inputs: the n = 0 kernel maps all inputs onto a unit hypersphere in feature space, while higherorder (n > 1) kernels induce feature spaces with different dynamic ranges.
Finally, the results on both data sets reveal an interesting trend: the multilayer arc-cosine kernels
often perform better than their single-layer counterparts. Though SVMs are (inherently) shallow
architectures, this trend suggests that for these problems in binary classification, arc-cosine kernels
may be yielding some of the advantages typically associated with deep architectures.
3
Deep learning
In this section, we explore how to use kernel methods in deep architectures [7]. We show how to train
deep kernel-based architectures by a simple combination of supervised and unsupervised methods.
Using the arc-cosine kernels in the previous section, these multilayer kernel machines (MKMs)
perform very competitively on multiclass data sets designed to foil shallow architectures [11].
5
3.1
Multilayer kernel machines
We explored how to train MKMs in stages that involve kernel PCA [13] and feature selection [14] at
intermediate hidden layers and large-margin nearest neighbor classification [15] at the final output
layer. Specifically, for `-layer MKMs, we considered the following training procedure:
1. Prune uninformative features from the input space.
2. Repeat ` times:
(a) Compute principal components in the feature space induced by a nonlinear kernel.
(b) Prune uninformative components from the feature space.
3. Learn a Mahalanobis distance metric for nearest neighbor classification.
The individual steps in this procedure are well-established methods; only their combination is new.
While many other approaches are worth investigating, our positive results from the above procedure
provide a first proof-of-concept. We discuss each of these steps in greater detail below.
Kernel PCA. Deep learning in MKMs is achieved by iterative applications of kernel PCA [13]. This
use of kernel PCA was suggested over a decade ago [16] and more recently inspired by the pretraining of deep belief nets by unsupervised methods. In MKMs, the outputs (or features) from
kernel PCA at one layer are the inputs to kernel PCA at the next layer. However, we do not strictly
transmit each layer?s top principal components to the next layer; some components are discarded if
they are deemed uninformative. While any nonlinear kernel can be used for the layerwise PCA in
MKMs, arc-cosine kernels are natural choices to mimic the computations in large neural nets.
Feature selection. The layers in MKMs are trained by interleaving a supervised method for feature
selection with the unsupervised method of kernel PCA. The feature selection is used to prune away
uninformative features at each layer in the MKM (including the zeroth layer which stores the raw
inputs). Intuitively, this feature selection helps to focus the unsupervised learning in MKMs on
statistics of the inputs that actually contain information about the class labels. We prune features
at each layer by a simple two-step procedure that first ranks them by estimates of their mutual
information, then truncates them using cross-validation. More specifically, in the first step, we
discretize each real-valued feature and construct class-conditional and marginal histograms of its
discretized values; then, using these histograms, we estimate each feature?s mutual information with
the class label and sort the features in order of these estimates [14]. In the second step, considering
only the first w features in this ordering, we compute the error rates of a basic kNN classifier using
Euclidean distances in feature space. We compute these error rates on a held-out set of validation
examples for many values of k and w and record the optimal values for each layer. The optimal w
determines the number of informative features passed onto the next layer; this is essentially the
width of the layer. In practice, we varied k from 1 to 15 and w from 10 to 300; though exhaustive,
this cross-validation can be done quickly and efficiently by careful bookkeeping. Note that this
procedure determines the architecture of the network in a greedy, layer-by-layer fashion.
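A minimal sketch of this two-step pruning follows; the helper names, the number of histogram bins, and the assumption of integer class labels 0..C−1 are our own choices, not details from the original experiments.

import numpy as np

def mutual_information(feature, labels, bins=10):
    """Histogram estimate of I(feature; label); bins=10 is an assumption."""
    edges = np.histogram_bin_edges(feature, bins=bins)
    f = np.clip(np.digitize(feature, edges[1:-1]), 0, bins - 1)
    mi = 0.0
    for b in range(bins):
        pb = np.mean(f == b)
        for c in np.unique(labels):
            pc = np.mean(labels == c)
            pbc = np.mean((f == b) & (labels == c))
            if pbc > 0.0:  # pb, pc > 0 whenever pbc > 0
                mi += pbc * np.log(pbc / (pb * pc))
    return mi

def rank_features(X, y):
    """Step one: sort features by estimated mutual information with the label."""
    scores = [mutual_information(X[:, j], y) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1]

def choose_width(X, y, X_val, y_val, order, widths=range(10, 301, 10), ks=range(1, 16)):
    """Step two: keep the first w ranked features, picking the (w, k) pair
    that minimizes validation error of a plain Euclidean kNN classifier."""
    best_err, best_w, best_k = np.inf, None, None
    for w in widths:
        cols = order[:w]
        Zt, Zv = X[:, cols], X_val[:, cols]
        # distances are computed once per width w; only k varies inside
        d = ((Zv[:, None, :] - Zt[None, :, :]) ** 2).sum(-1)
        nn = np.argsort(d, axis=1)
        for k in ks:
            pred = np.array([np.bincount(y[row[:k]]).argmax() for row in nn])
            err = np.mean(pred != y_val)
            if err < best_err:
                best_err, best_w, best_k = err, w, k
    return best_w, best_k

The returned width w fixes how many features survive to the next layer, i.e., the width of that layer in the greedy construction just described.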
Distance metric learning. Test examples in MKMs are classified by a variant of kNN classification
on the outputs of the final layer. Specifically, we use large margin nearest neighbor (LMNN) classification [15] to learn a Mahalanobis distance metric for these outputs, though other methods are
equally viable [17]. The use of LMNN is inspired by the supervised fine-tuning of weights in the
training of deep architectures [18]. In MKMs, however, this supervised training only occurs at the
final layer (which underscores the importance of feature selection in earlier layers). LMNN learns a
distance metric by solving a problem in semidefinite programming; one advantage of LMNN is that
the required optimization is convex. Test examples are classified by the energy-based decision rule
for LMNN [15], which was itself inspired by earlier work on multilayer neural nets [19].
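Given a metric already learned by LMNN, the classification step reduces to kNN under a linear transformation of the feature space. A minimal sketch of our own follows; it uses the plain majority-vote rule rather than the energy-based rule of [15], assumes integer class labels, and does not reproduce the LMNN semidefinite program itself.

import numpy as np

def mahalanobis_knn_predict(L, X_train, y_train, X_test, k=3):
    """kNN classification under the learned metric d(a, b) = ||L(a - b)||^2,
    where L is the linear map produced by an LMNN solver [15]."""
    Zt, Zq = X_train @ L.T, X_test @ L.T          # transform all points once
    d = ((Zq[:, None, :] - Zt[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(y_train[idx]).argmax() for idx in nn])

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 5)), rng.integers(0, 3, size=100)
L = np.eye(5)                                     # identity = plain Euclidean kNN
print(mahalanobis_knn_predict(L, X, y, X[:10], k=5))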
3.2
Experiments on multiway classification
We evaluated MKMs on the two multiclass data sets from previous benchmarks [11] that exhibited
the largest performance gap between deep and shallow architectures. The data sets were created from
the MNIST data set [20] of 28 × 28 grayscale handwritten digits. The mnist-back-rand data set was
generated by filling the image background with random pixel values, while the mnist-back-image data
set was generated by filling the image background with random image patches; examples are shown
in Figs. 4 and 5. Each data set contains 12000 and 50000 training and test examples, respectively.
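As an illustration of how such a variation might be constructed (a hypothetical re-creation, not the original benchmark generator), filling the background of a digit with random pixel values could look like:

import numpy as np

def add_random_background(digit, rng, threshold=0.1):
    """mnist-back-rand style corruption: fill background pixels of a
    28 x 28 digit (values in [0, 1]) with uniform random noise.
    The threshold separating ink from background is our assumption."""
    noisy = digit.copy()
    background = digit < threshold
    noisy[background] = rng.uniform(0.0, 1.0, size=background.sum())
    return noisy

rng = np.random.default_rng(0)
digit = np.zeros((28, 28))
digit[8:20, 12:16] = 1.0          # toy stand-in for a real MNIST digit
print(add_random_background(digit, rng).mean())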
Figure 2: Left: examples from the rectangles-image data set. Right: classification error rates on the test set. SVMs with arc-cosine kernels have error rates from 22.36–25.64%. Results are shown for kernels of varying degree (n) and levels of recursion (ℓ). The best previous results are [...] for SVMs with RBF kernels and 22.50% for deep belief nets [2]. See text for details.

Figure 3: Left: examples from the convex data set. Right: classification error rates on the test set. SVMs with arc-cosine kernels have error rates from 17.15–20.51%. Results are shown for kernels of varying degree (n) and levels of recursion (ℓ). The best previous results are 19.13% for SVMs with RBF kernels and 18.63% for deep belief nets [2]. See text for details.

Figure 4: Left: examples from the mnist-back-rand data set. Right: classification error rates on the test set for MKMs with different kernels and numbers of layers ℓ. MKMs with arc-cosine kernels have error rates from 6.36–7.52%. The best previous results are 14.58% for SVMs with RBF kernels and 6.73% for deep belief nets [11].

Figure 5: Left: examples from the mnist-back-image data set. Right: classification error rates on the test set for MKMs with different kernels and numbers of layers ℓ. MKMs with arc-cosine kernels have error rates from 18.43–29.79%. The best previous results are 22.61% for SVMs with RBF kernels and 16.31% for deep belief nets [11].

We trained MKMs with arc-cosine kernels and RBF kernels. For each data set, we initially withheld the last 2000 training examples as a validation set. Performance on this validation set was used to determine each MKM's architecture, as described in the previous section, and also to set the kernel width in RBF kernels, following the same methodology as earlier studies [11]. Once these parameters were set by cross-validation, we re-inserted the validation examples into the training set and used all 12000 training examples for feature selection and distance metric learning.

For kernel PCA, we were limited by memory requirements to processing only 6000 out of 12000 training examples. We chose these 6000 examples randomly, but repeated each experiment five times to obtain a measure of average performance. The results we report for each MKM are the average performance over these five runs.

The right panels of Figs. 4 and 5 show the test set error rates of MKMs with different kernels and numbers of layers ℓ. For reference, we also show the best previously reported results [11] using traditional SVMs (with RBF kernels) and deep belief nets (with three layers). MKMs perform significantly better than shallow architectures such as SVMs with RBF kernels or LMNN with feature selection (reported as the case ℓ = 0). Compared to deep belief nets, the leading MKMs obtain slightly lower error rates on one data set and slightly higher error rates on another.

We can describe the architecture of an MKM by the number of selected features at each layer (including the input layer). The number of features essentially corresponds to the number of units in each layer of a neural net. For the mnist-back-rand data set, the best MKM used an n = 1 arc-cosine kernel and 300-90-105-136-126-240 features at each layer. For the mnist-back-image data set, the
best MKM used an n = 0 arc-cosine kernel and 300-50-130-240-160-150 features at each layer.
MKMs worked best with arc-cosine kernels of degree n = 0 and n = 1. The kernel of degree n = 2
performed less well in MKMs, perhaps because multiple iterations of kernel PCA distorted the
dynamic range of the inputs (which in turn seemed to complicate the training for LMNN). MKMs
with RBF kernels were difficult to train due to the sensitive dependence on kernel width parameters.
It was extremely time-consuming to cross-validate the kernel width at each layer of the MKM. We
only obtained meaningful results for one and two-layer MKMs with RBF kernels.
We briefly summarize many results that we lack space to report in full. We also experimented
on multiclass data sets using SVMs with single and multi-layer arc-cosine kernels, as described in
section 2. For multiclass problems, these SVMs compared poorly to deep architectures (both DBNs
and MKMs), presumably because they had no unsupervised training that shared information across
examples from all different classes. In further experiments on MKMs, we attempted to evaluate the
individual contributions to performance from feature selection and LMNN classification. Feature
selection helped significantly on the mnist-back-image data set, but only slightly on the mnist-back-rand data set. Finally, LMNN classification in the output layer yielded consistent improvements
over basic kNN classification provided that we used the energy-based decision rule [15].
4
Discussion
In this paper, we have developed a new family of kernel functions that mimic the computation in
large, multilayer neural nets. On challenging data sets, we have obtained results that outperform previous SVMs and compare favorably to deep belief nets. More significantly, our experiments validate
the basic intuitions behind deep learning in the altogether different context of kernel-based architectures. A similar validation was provided by recent work on kernel methods for semi-supervised
embedding [7]. We hope that our results inspire more work on kernel methods for deep learning.
There are many possible directions for future work. For SVMs, we are currently experimenting with
arc-cosine kernel functions of fractional (and even negative) degree n. For MKMs, we are hoping
to explore better schemes for feature selection [21, 22] and kernel selection [23]. Also, it would be
desirable to incorporate prior knowledge, such as the invariances modeled by convolutional neural
nets [24, 4], though it is not obvious how to do so. These issues and others are left for future work.
A
Derivation of kernel function
In this appendix, we show how to evaluate the multidimensional integral in eq. (1) for the arc-cosine kernel. Let \theta denote the angle between the inputs x and y. Without loss of generality, we can take x to lie along the w_1 axis and y to lie in the w_1 w_2-plane. Integrating out the orthogonal coordinates of the weight vector w, we obtain the result in eq. (3), where J_n(\theta) is the remaining integral:

J_n(\theta) = \int dw_1\, dw_2\; e^{-\frac{1}{2}(w_1^2 + w_2^2)}\, \Theta(w_1)\, \Theta(w_1\cos\theta + w_2\sin\theta)\, w_1^n (w_1\cos\theta + w_2\sin\theta)^n.   (14)
Changing variables to u = w_1 and v = w_1\cos\theta + w_2\sin\theta, we simplify the domain of integration to the first quadrant of the uv-plane:

J_n(\theta) = \frac{1}{\sin\theta} \int_0^\infty du \int_0^\infty dv\; e^{-(u^2 + v^2 - 2uv\cos\theta)/(2\sin^2\theta)}\, u^n v^n.   (15)
The prefactor of (\sin\theta)^{-1} in eq. (15) is due to the Jacobian. To simplify the integral further, we adopt polar coordinates u = r\cos(\psi/2 + \pi/4) and v = r\sin(\psi/2 + \pi/4). Then, integrating out the radius coordinate r, we obtain:

J_n(\theta) = n!\,(\sin\theta)^{2n+1} \int_0^{\pi/2} d\psi\, \frac{\cos^n\psi}{(1 - \cos\theta \cos\psi)^{n+1}}.   (16)
To evaluate eq. (16), we first consider the special case n = 0. The following result can be derived by contour integration in the complex plane [25]:

\int_0^{\pi/2} \frac{d\psi}{1 - \cos\theta \cos\psi} = \frac{\pi - \theta}{\sin\theta}.   (17)
Substituting eq. (17) into our expression for the angular part of the kernel function in eq. (16), we recover our earlier claim that J_0(\theta) = \pi - \theta. Related integrals for the special case n = 0 can also be found in earlier work [8]. For the case n > 0, the integral in eq. (16) can be performed by the method of differentiating under the integral sign. In particular, we note that:
\int_0^{\pi/2} d\psi\, \frac{\cos^n\psi}{(1 - \cos\theta \cos\psi)^{n+1}} = \frac{1}{n!}\, \frac{\partial^n}{\partial(\cos\theta)^n} \int_0^{\pi/2} \frac{d\psi}{1 - \cos\theta \cos\psi}.   (18)
Substituting eq. (18) into eq. (16), then appealing to the previous result in eq. (17), we recover the
expression for J_n(\theta) in eq. (4).
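As a sanity check on this derivation (our own verification script, not part of the original paper), one can compare the closed forms J_0(\theta) = \pi - \theta and J_1(\theta) = \sin\theta + (\pi - \theta)\cos\theta against direct numerical quadrature of eq. (16):

import numpy as np
from math import factorial, pi
from scipy.integrate import quad

def J_numeric(n, theta):
    """Evaluate eq. (16) by numerical quadrature."""
    integrand = lambda psi: np.cos(psi) ** n / (1.0 - np.cos(theta) * np.cos(psi)) ** (n + 1)
    val, _ = quad(integrand, 0.0, pi / 2)
    return factorial(n) * np.sin(theta) ** (2 * n + 1) * val

for theta in (0.3, 1.0, 2.5):
    assert abs(J_numeric(0, theta) - (pi - theta)) < 1e-6
    closed_J1 = np.sin(theta) + (pi - theta) * np.cos(theta)
    assert abs(J_numeric(1, theta) - closed_J1) < 1e-6
print("closed forms for J_0 and J_1 match eq. (16)")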
References
[1] Y. Bengio and Y. LeCun. Scaling learning algorithms towards AI. MIT Press, 2007.
[2] G.E. Hinton, S. Osindero, and Y.W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[3] G.E. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, July 2006.
[4] M.A. Ranzato, F.J. Huang, Y.L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR-07), pages 1–8, 2007.
[5] R. Collobert and J. Weston. A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning (ICML-08), pages 160–167, 2008.
[6] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, to appear, 2009.
[7] J. Weston, F. Ratle, and R. Collobert. Deep learning via semi-supervised embedding. In Proceedings of the 25th International Conference on Machine Learning (ICML-08), pages 1168–1175, 2008.
[8] C.K.I. Williams. Computation with infinite neural networks. Neural Computation, 10(5):1203–1216, 1998.
[9] R.H.R. Hahnloser, H.S. Seung, and J.J. Slotine. Permitted and forbidden sets in symmetric threshold-linear networks. Neural Computation, 15(3):621–638, 2003.
[10] R.M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag New York, Inc., 1996.
[11] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In Proceedings of the 24th International Conference on Machine Learning (ICML-07), pages 473–480, 2007.
[12] C.C. Chang and C.J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[13] B. Schölkopf, A. Smola, and K. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299–1319, 1998.
[14] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. Journal of Machine Learning Research, 3:1157–1182, 2003.
[15] K.Q. Weinberger and L.K. Saul. Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research, 10:207–244, 2009.
[16] B. Schölkopf, A.J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Technical Report 44, Max-Planck-Institut für biologische Kybernetik, 1996.
[17] J. Goldberger, S. Roweis, G.E. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In L.K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 513–520. MIT Press, 2005.
[18] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 153–160. MIT Press, 2007.
[19] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In Proceedings of the 2005 IEEE Conference on Computer Vision and Pattern Recognition (CVPR-05), pages 539–546, 2005.
[20] Y. LeCun and C. Cortes. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
[21] M. Tipping. Sparse kernel principal component analysis. In Advances in Neural Information Processing Systems 13. MIT Press, 2001.
[22] A.J. Smola, O.L. Mangasarian, and B. Schölkopf. Sparse kernel feature analysis. Technical Report 99-04, University of Wisconsin, Data Mining Institute, Madison, 1999.
[23] G. Lanckriet, N. Cristianini, P. Bartlett, L.E. Ghaoui, and M.I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004.
[24] Y. LeCun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, and L.D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.
[25] G.F. Carrier, M. Krook, and C.E. Pearson. Functions of a Complex Variable: Theory and Technique. Society for Industrial and Applied Mathematics, 2005.